Higher-Order Web Link Analysis Using Multilinear Algebra
 IEEE INTERNATIONAL CONFERENCE ON DATA MINING
, 2005
Abstract

Cited by 45 (16 self)
Linear algebra is a powerful and proven tool in web search. Techniques such as the PageRank algorithm of Brin and Page and the HITS algorithm of Kleinberg score web pages based on the principal eigenvector (or singular vector) of a particular nonnegative matrix that captures the hyperlink structure of the web graph. We propose and test a new methodology that uses multilinear algebra to elicit more information from a higher-order representation of the hyperlink graph. We start by labeling the edges in our graph with the anchor text of the hyperlinks, so that the associated linear algebra representation is a sparse, three-way tensor. The first two dimensions of the tensor represent the web pages, while the third dimension adds the anchor text. We then use the rank-1 factors of a multilinear PARAFAC tensor decomposition, which are akin to singular vectors of the SVD, to automatically identify topics in the collection along with the associated authoritative web pages.
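The construction described above, a sparse page-by-page-by-term tensor built from anchor text, can be sketched in a few lines. This is a minimal illustration under assumed conventions, not the authors' code; `build_link_tensor` is a hypothetical helper and the whitespace tokenization is an assumption:

```python
from collections import defaultdict

def build_link_tensor(links):
    """Build a sparse three-way tensor T[source, target, term] in
    coordinate format from hyperlinks labeled with anchor text.

    `links` is an iterable of (source_page, target_page, anchor_text)
    triples; each anchor-text word contributes one nonzero entry."""
    pages, terms = {}, {}
    tensor = defaultdict(float)          # (i, j, k) -> count
    for src, dst, anchor in links:
        i = pages.setdefault(src, len(pages))
        j = pages.setdefault(dst, len(pages))
        for word in anchor.lower().split():
            k = terms.setdefault(word, len(terms))
            tensor[(i, j, k)] += 1.0     # accumulate repeated labels
    return dict(tensor), pages, terms

links = [
    ("a.com", "b.com", "tensor toolbox"),
    ("c.com", "b.com", "tensor software"),
]
T, pages, terms = build_link_tensor(links)
```

A PARAFAC decomposition of `T` would then yield, per rank-1 factor, a hub score vector, an authority score vector, and a topic (term) vector.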
On the Uniqueness of Multilinear Decomposition of N-way Arrays
, 2000
Abstract

Cited by 45 (9 self)
INTRODUCTION. Consider an $I \times J$ matrix $X$ and suppose that $\mathrm{rank}(X) = 3$. Let $x_{i,j}$ denote the $(i, j)$-th entry of $X$. Then $x_{i,j}$ admits a three-component bilinear decomposition $x_{i,j} = \sum_{f=1}^{3} a_{i,f}\, b_{j,f}$ (1) for all $i = 1, \dots, I$ and $j = 1, \dots, J$. Equivalently, letting $a_f := [a_{1,f}, \dots, a_{I,f}]^{\mathsf{T}}$ and similarly for $b_f$, $X = a_1 b_1^{\mathsf{T}} + a_2 b_2^{\mathsf{T}} + a_3 b_3^{\mathsf{T}}$ (2).
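The three-component bilinear decomposition in this introduction can be checked numerically: building the matrix entrywise from the sum over components must agree with building it as a sum of three outer products. A small self-contained sketch with made-up factor values:

```python
# Verify that summing three outer products a_f b_f^T reproduces
# each entry x[i][j] = sum over f of a[f][i] * b[f][j].
I, J = 4, 5
a = [[(f + 1) * (i + 1) for i in range(I)] for f in range(3)]
b = [[(f + 2) * (j + 1) for j in range(J)] for f in range(3)]

# Entrywise formula: x_{i,j} = sum_{f=1}^{3} a_{i,f} b_{j,f}
X = [[sum(a[f][i] * b[f][j] for f in range(3)) for j in range(J)]
     for i in range(I)]

# Matrix formula: X = a_1 b_1^T + a_2 b_2^T + a_3 b_3^T
X2 = [[0] * J for _ in range(I)]
for f in range(3):
    for i in range(I):
        for j in range(J):
            X2[i][j] += a[f][i] * b[f][j]

assert X == X2
```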
Efficient MATLAB computations with sparse and factored tensors
 SIAM JOURNAL ON SCIENTIFIC COMPUTING
, 2007
Abstract

Cited by 45 (13 self)
In this paper, the term tensor refers simply to a multidimensional or $N$-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical of tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
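Coordinate format stores one index tuple and one value per nonzero. A typical kernel in decomposition algorithms is tensor-times-vector along one mode, which this format supports with a single pass over the nonzeros. The sketch below is a pure-Python illustration of the idea, not the Tensor Toolbox's MATLAB implementation; `ttv` is a hypothetical helper:

```python
from collections import defaultdict

def ttv(coords, vals, vec, mode):
    """Multiply a sparse tensor (coordinate format) by a vector along
    `mode`: the result is a sparse tensor with that mode summed out."""
    out = defaultdict(float)
    for idx, v in zip(coords, vals):
        rest = idx[:mode] + idx[mode + 1:]
        out[rest] += v * vec[idx[mode]]
    return dict(out)

# A 2 x 2 x 2 tensor with three nonzeros, stored as parallel lists.
coords = [(0, 0, 0), (1, 0, 1), (1, 1, 1)]
vals   = [2.0, 3.0, 5.0]
y = ttv(coords, vals, [1.0, 10.0], mode=2)   # contract the third mode
```

The cost is linear in the number of nonzeros, independent of the dense size of the tensor, which is the point of the coordinate scheme.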
Minimizing GCV/GML Scores with Multiple Smoothing Parameters via the Newton Method
 SIAM J. Sci. Statist. Comput.
, 1991
Abstract

Cited by 43 (8 self)
The (modified) Newton method is adapted to optimize generalized cross-validation (GCV) and generalized maximum likelihood (GML) scores with multiple smoothing parameters. The main concerns in solving the optimization problem are the speed and the reliability of the algorithm, as well as the invariance of the algorithm under transformations under which the problem itself is invariant. The proposed algorithm is believed to be highly efficient for the problem, though it is still rather expensive for large data sets, since its operation counts are (2/3)kn³ + O(n²), with k the number of smoothing parameters and n the number of observations. Sensible procedures for computing good starting values are also proposed, which should help keep the execution load to the minimum possible. The algorithm is implemented in RKPACK [RKPACK and its applications: Fitting smoothing spline models, Tech. Report 857, Department of Statistics, University of Wisconsin, Madison, WI, 1989] and illustrated by examples of fitting additive and interaction spline models. It is noted that the algorithm can also be applied to the maximum likelihood (ML) and restricted maximum likelihood (REML) estimation of variance component models.
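Once the smoothing problem is diagonalized, the GCV score itself is cheap to evaluate. The sketch below handles only a single smoothing parameter and minimizes the score by a crude grid search rather than the paper's Newton iteration over multiple parameters; the eigenvalues `d` and rotated data `z` are made-up illustrative values:

```python
def gcv(lam, d, z):
    """GCV score for ridge-type smoothing in the eigenbasis:
    d = eigenvalues of the penalized problem, z = rotated data.
    The residual operator has eigenvalues lam / (d_i + lam), so
    GCV(lam) = n * ||residual||^2 / trace(residual operator)^2."""
    n = len(d)
    r = [lam / (di + lam) for di in d]
    rss = sum((ri * zi) ** 2 for ri, zi in zip(r, z))
    return n * rss / sum(r) ** 2

d = [10.0, 5.0, 1.0, 0.5, 0.1]
z = [3.0, 2.0, 1.0, 0.7, 0.6]
# Crude minimization over a log-spaced grid of lambda values; a Newton
# step on log(lambda) would replace this in a serious implementation.
grid = [10 ** (e / 4.0) for e in range(-20, 21)]
best = min(grid, key=lambda lam: gcv(lam, d, z))
```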
Decomposing EEG data into spacetimefrequency components using parallel factor analysis
 Neuroimage
Abstract

Cited by 42 (0 self)
Finding the means to efficiently summarize electroencephalographic data has been a long-standing problem in electrophysiology. A popular approach is identification of component modes on the basis of the time-varying spectrum of multichannel EEG recordings; in other words, a space/frequency/time atomic decomposition of the time-varying EEG spectrum. Previous work has been limited to only two of these dimensions. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) have been used to create space/time decompositions; these suffer an inherent lack of uniqueness that is overcome only by imposing constraints of orthogonality or independence on the atoms. Conventional frequency/time decompositions ignore the spatial aspects of the EEG. Framing the data as a three-way array indexed by channel, frequency, and time allows the application of a unique decomposition known as Parallel Factor Analysis (PARAFAC). Each atom is the trilinear decomposition into a spatial, spectral, and temporal signature.
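The trilinear atoms PARAFAC extracts can be fitted by alternating least squares. The sketch below fits a single atom to a small three-way array in pure Python; the all-ones initialization and the normalization of two modes are illustrative choices, not the fitting procedure used in the paper:

```python
def rank1_parafac(T, iters=30):
    """Fit a single trilinear atom a (x) b (x) c to a 3-way array T
    by alternating least squares; T is a nested list T[i][j][k]."""
    I, J, K = len(T), len(T[0]), len(T[0][0])
    a, b, c = [1.0] * I, [1.0] * J, [1.0] * K
    for _ in range(iters):
        a = [sum(T[i][j][k] * b[j] * c[k] for j in range(J) for k in range(K))
             for i in range(I)]
        b = [sum(T[i][j][k] * a[i] * c[k] for i in range(I) for k in range(K))
             for j in range(J)]
        c = [sum(T[i][j][k] * a[i] * b[j] for i in range(I) for j in range(J))
             for k in range(K)]
        nb = sum(x * x for x in b) ** 0.5   # normalize two modes so the
        nc = sum(x * x for x in c) ** 0.5   # overall scale collects in a
        b = [x / nb for x in b]
        c = [x / nc for x in c]
    return a, b, c

# An exactly rank-1 "space x frequency x time" toy array.
u, v, w = [1.0, 2.0], [1.0, 3.0], [2.0, 1.0]
T = [[[u[i] * v[j] * w[k] for k in range(2)]
      for j in range(2)] for i in range(2)]
a, b, c = rank1_parafac(T)
```

For an exactly rank-1 array, the alternating updates recover the atom up to scaling, so the reconstruction `a[i]*b[j]*c[k]` matches `T` to floating-point accuracy.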
Unsupervised multiway data analysis: A literature survey
 IEEE Transactions on Knowledge and Data Engineering
, 2008
Abstract

Cited by 42 (8 self)
Multiway data analysis captures multilinear structures in higher-order datasets, where data have more than two modes. Standard two-way methods commonly applied to matrices often fail to find the underlying structures in multiway arrays. With an increasing number of application areas, multiway data analysis has become popular as an exploratory analysis tool. We provide a review of significant contributions in the literature on multiway models and algorithms, as well as their applications in diverse disciplines including chemometrics, neuroscience, computer vision, and social network analysis.
Beyond pairwise clustering
 in IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Abstract

Cited by 41 (2 self)
We consider the problem of clustering in domains where the affinity relations are not dyadic (pairwise), but rather triadic, tetradic, or higher. The problem is an instance of the hypergraph partitioning problem. We propose a two-step algorithm for solving this problem. In the first step we use a novel scheme to approximate the hypergraph by a weighted graph. In the second step a spectral partitioning algorithm is used to partition the vertices of this graph. The algorithm is capable of handling hyperedges of all orders, including order two, thus incorporating information of all orders simultaneously. We present a theoretical analysis that relates our algorithm to an existing hypergraph partitioning algorithm and explains the reasons for its superior performance. We report the performance of our algorithm on a variety of computer vision problems and compare it to several existing hypergraph partitioning algorithms.
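The first step, approximating a hypergraph by a weighted graph, can be illustrated with a simple clique expansion. The `w/(s-1)` weighting below is one common choice, not necessarily the paper's novel scheme, and `clique_expand` is a hypothetical helper:

```python
from itertools import combinations
from collections import defaultdict

def clique_expand(hyperedges):
    """Approximate a weighted hypergraph by a graph: each hyperedge of
    weight w over s vertices spreads w/(s - 1) over every vertex pair
    it contains, so each vertex keeps total incident weight w."""
    adj = defaultdict(float)
    for vertices, w in hyperedges:
        s = len(vertices)
        for u, v in combinations(sorted(vertices), 2):
            adj[(u, v)] += w / (s - 1)
    return dict(adj)

# A triadic edge {0, 1, 2} and an ordinary pairwise edge {1, 2}.
graph = clique_expand([((0, 1, 2), 1.0), ((1, 2), 2.0)])
```

Note that order-two hyperedges pass through unchanged (w/(2-1) = w), which is why pairwise and higher-order information can be combined in the same graph before spectral partitioning.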
Symmetric tensors and symmetric tensor rank
 Scientific Computing and Computational Mathematics (SCCM)
, 2006
Abstract

Cited by 40 (18 self)
A symmetric tensor is a higher-order generalization of a symmetric matrix. In this paper, we study various properties of symmetric tensors in relation to a decomposition into a symmetric sum of outer products of vectors. A rank-1 order-k tensor is the outer product of k nonzero vectors. Any symmetric tensor can be decomposed into a linear combination of rank-1 tensors, each of them being symmetric or not. The rank of a symmetric tensor is the minimal number of rank-1 tensors needed to reconstruct it. The symmetric rank is obtained when the constituting rank-1 tensors are required to be themselves symmetric. It is shown that rank and symmetric rank are equal in a number of cases, and that they always exist over an algebraically closed field. We discuss the notion of the generic symmetric rank, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order. We also show that the set of symmetric tensors of symmetric rank at most r is not closed, unless r = 1. Key words. Tensors, multiway arrays, outer product decomposition, symmetric outer product decomposition, candecomp, parafac, tensor rank, symmetric rank, symmetric tensor rank, generic symmetric rank, maximal symmetric rank, quantics. AMS subject classifications. 15A03, 15A21, 15A72, 15A69, 15A18.
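The objects in this abstract are concrete enough to construct directly: a symmetric rank-1 order-3 tensor is v (x) v (x) v, and symmetry means invariance under every permutation of the indices. A small sketch (the helper names are illustrative):

```python
from itertools import permutations, product

def sym_rank1(v):
    """Order-3 symmetric rank-1 tensor v (x) v (x) v as a nested list."""
    n = len(v)
    return [[[v[i] * v[j] * v[k] for k in range(n)]
             for j in range(n)] for i in range(n)]

def is_symmetric(T):
    """True if T[i][j][k] is invariant under all index permutations."""
    n = len(T)
    return all(T[i][j][k] == T[p][q][r]
               for i, j, k in product(range(n), repeat=3)
               for p, q, r in permutations((i, j, k)))

T = sym_rank1([1.0, 2.0])
```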
Computation of the canonical decomposition by means of a simultaneous generalized Schur decomposition
 SIAM J. Matrix Anal. Appl
, 2004
Abstract

Cited by 35 (6 self)
The canonical decomposition of higher-order tensors is a key tool in multilinear algebra. First we review the state of the art. Then we show that, under certain conditions, the problem can be rephrased as the simultaneous diagonalization, by equivalence or congruence, of a set of matrices. Necessary and sufficient conditions for the uniqueness of these simultaneous matrix decompositions are derived. Next, the problem can be translated into a simultaneous generalized Schur decomposition, with orthogonal unknowns [A. J. van der Veen and A. Paulraj, IEEE Trans. Signal Process., 44 (1996), pp. 1136–1155]. A first-order perturbation analysis of the simultaneous generalized Schur decomposition is carried out. We discuss some computational techniques (including a new Jacobi algorithm) and illustrate their behavior by means of a number of numerical experiments.
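The link between the canonical decomposition and simultaneous matrix decompositions can be seen through the frontal slices of the tensor. If a third-order tensor with frontal slices $T_1, \dots, T_K$ admits a rank-$R$ canonical decomposition with factor matrices $A$, $B$, $C$, then each slice satisfies the standard identity

```latex
T_k = A \,\operatorname{diag}(c_{k1}, \dots, c_{kR})\, B^{\mathsf{T}},
\qquad k = 1, \dots, K ,
```

so all slices share the same pair of transformations $A$ and $B$. In the special case where $A$ and $B$ are square and invertible and the $c_{2r}$ are nonzero, $T_1 T_2^{-1} = A \operatorname{diag}(c_{1r}/c_{2r})\, A^{-1}$ reduces to an ordinary eigenvalue problem; relaxing these assumptions leads to the simultaneous generalized Schur formulation studied in the paper.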
Multilinear operators for higher-order decompositions
, 2006
Abstract

Cited by 31 (9 self)
We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
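The Kruskal operator has a direct elementwise reading: the (i, j, k) entry of [[A, B, C]] is the sum over r of the products of the r-th columns' entries. A minimal third-order sketch (pure Python, with matrices as nested lists; not the authors' notation or code):

```python
def kruskal(A, B, C):
    """Kruskal operator [[A, B, C]] for three matrices with the same
    number of columns R: the sum over r of the outer products of the
    r-th columns, returned as a nested-list 3-way array."""
    I, J, K = len(A), len(B), len(C)
    R = len(A[0])
    return [[[sum(A[i][r] * B[j][r] * C[k][r] for r in range(R))
              for k in range(K)] for j in range(J)] for i in range(I)]

# Two rank-1 terms built from the columns of 2x2 matrices.
T = kruskal([[1, 0], [0, 1]], [[1, 0], [0, 1]], [[1, 1], [1, 1]])
```

Each column triple contributes one rank-1 term, which is exactly the PARAFAC model the operator abbreviates.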