Tensor Decompositions and Applications
SIAM Review, 2009
"... This survey provides an overview of higherorder tensor decompositions, their applications, and available software. A tensor is a multidimensional or N way array. Decompositions of higherorder tensors (i.e., N way arrays with N â¥ 3) have applications in psychometrics, chemometrics, signal proce ..."
Cited by 705 (17 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
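To make the CP construction concrete, here is a minimal NumPy sketch (an illustration only; the survey's software examples are MATLAB toolboxes, and the function name cp_to_full is hypothetical) that assembles a 3-way tensor from factor matrices as a sum of R rank-one outer products:

```python
# Minimal sketch: assemble a 3-way tensor from a rank-R CP model,
# X_{ijk} = sum_r A_{ir} B_{jr} C_{kr}, i.e. a sum of R rank-one tensors.
import numpy as np

def cp_to_full(A, B, C):
    """Return the I x J x K tensor built from CP factor matrices."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 3
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
X = cp_to_full(A, B, C)   # rank at most R by construction
print(X.shape)            # (4, 5, 6)
```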
Efficient MATLAB computations with sparse and factored tensors
SIAM Journal on Scientific Computing, 2007
"... In this paper, the term tensor refers simply to a multidimensional or $N$way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose stori ..."
Cited by 80 (15 self)
In this paper, the term tensor refers simply to a multidimensional or $N$-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
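As a rough illustration of the coordinate storage scheme described above (not the Tensor Toolbox API; the class and method names here are hypothetical), a sparse 3-way tensor can be kept as (subscript, value) pairs, and an operation such as tensor-times-vector then touches only the nonzeros:

```python
# Coordinate (COO) storage for a sparse 3-way tensor: keep only the
# nonzero entries as integer subscripts plus their values.
import numpy as np

class SparseCoo3:
    def __init__(self, subs, vals, shape):
        self.subs = np.asarray(subs, dtype=int)    # nnz x 3 subscripts
        self.vals = np.asarray(vals, dtype=float)  # nnz nonzero values
        self.shape = shape

    def ttv(self, v, mode):
        """Tensor-times-vector along `mode`, visiting only nonzeros;
        the 2-way result is accumulated densely for simplicity."""
        keep = [m for m in range(3) if m != mode]
        out = np.zeros((self.shape[keep[0]], self.shape[keep[1]]))
        w = self.vals * v[self.subs[:, mode]]      # scale each nonzero
        np.add.at(out, (self.subs[:, keep[0]], self.subs[:, keep[1]]), w)
        return out

# toy example: three nonzeros in a 4 x 4 x 4 tensor
X = SparseCoo3(subs=[[0, 1, 2], [1, 1, 0], [3, 2, 2]],
               vals=[1.0, 2.0, -1.0], shape=(4, 4, 4))
print(X.ttv(np.ones(4), mode=2))  # sums each nonzero over the third index
```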
Unsupervised multiway data analysis: A literature survey
IEEE Transactions on Knowledge and Data Engineering, 2008
"... Multiway data analysis captures multilinear structures in higherorder datasets, where data have more than two modes. Standard twoway methods commonly applied on matrices often fail to find the underlying structures in multiway arrays. With increasing number of application areas, multiway data anal ..."
Cited by 80 (10 self)
Multiway data analysis captures multilinear structures in higher-order datasets, where data have more than two modes. Standard two-way methods commonly applied to matrices often fail to find the underlying structures in multiway arrays. With an increasing number of application areas, multiway data analysis has become popular as an exploratory analysis tool. We provide a review of significant contributions in the literature on multiway models and algorithms, as well as their applications in diverse disciplines including chemometrics, neuroscience, computer vision, and social network analysis.
Tensor decompositions for learning latent variable models
2014
"... This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models—including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation—which exploits a certain tensor structure in their loworder observable mo ..."
Cited by 72 (5 self)
This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models (including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation) which exploits a certain tensor structure in their low-order observable moments (typically, of second and third order). Specifically, parameter estimation is reduced to the problem of extracting a certain (orthogonal) decomposition of a symmetric tensor derived from the moments; this decomposition can be viewed as a natural generalization of the singular value decomposition for matrices. Although tensor decompositions are generally intractable to compute, the decomposition of these specially structured tensors can be efficiently obtained by a variety of approaches, including power iterations and maximization approaches (similar to the case of matrices). A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices. This implies a robust and computationally tractable estimation approach for several popular latent variable models.
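A bare-bones sketch of the tensor power iteration at the heart of this approach, under the assumption of an exactly orthogonally decomposable symmetric 3-tensor (the function names are illustrative, not the paper's code):

```python
# Symmetric tensor power iteration: repeat x <- T(I, x, x) / ||T(I, x, x)||.
# For T = sum_i lambda_i v_i (x) v_i (x) v_i with orthonormal v_i, the
# iterates converge to one of the v_i for almost all starting points.
import numpy as np

def tensor_apply(T, x):
    """T(I, x, x): contract a symmetric 3-tensor against x twice."""
    return np.einsum('ijk,j,k->i', T, x, x)

def power_iteration(T, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        y = tensor_apply(T, x)
        x = y / np.linalg.norm(y)
    return tensor_apply(T, x) @ x, x   # eigenvalue T(x, x, x), vector x

# build an orthogonally decomposable test tensor, then recover an eigenpair
V, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((5, 5)))
lams = np.array([3.0, 2.0, 1.0])
T = np.einsum('r,ir,jr,kr->ijk', lams, V[:, :3], V[:, :3], V[:, :3])
print(power_iteration(T))  # one of 3.0, 2.0, 1.0 and its v_i
```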
Eigenvalues and invariants of tensors
2007
"... A tensor is represented by a supermatrix under a coordinate system. In this paper, we define Eeigenvalues and Eeigenvectors for tensors and supermatrices. By the resultant theory, we define the Echaracteristic polynomial of a tensor. An Eeigenvalue of a tensor is a root of the Echaracteristic p ..."
Cited by 53 (22 self)
A tensor is represented by a supermatrix under a coordinate system. In this paper, we define E-eigenvalues and E-eigenvectors for tensors and supermatrices. By the resultant theory, we define the E-characteristic polynomial of a tensor. An E-eigenvalue of a tensor is a root of the E-characteristic polynomial. In the regular case, a complex number is an E-eigenvalue if and only if it is a root of the E-characteristic polynomial. We convert the E-characteristic polynomial of a tensor to a monic polynomial and show that the coefficients of that monic polynomial are invariants of that tensor, i.e., they are invariant under coordinate system changes. We call them principal invariants of that tensor. The maximum number of principal invariants of mth-order n-dimensional tensors is a function of m and n. We denote it by d(m, n) and show that d(1, n) = 1, d(2, n) = n, d(m, 2) = m for m ≥ 3, and d(m, n) ≤ m^{n−1} + · · · + m for m, n ≥ 3. We also define the rank of a tensor. All real eigenvectors associated with nonzero E-eigenvalues are in a subspace with dimension equal to its rank.
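For reference, the E-eigenpair system defined here can be written out explicitly, in notation consistent with the SSHOPM entry below (a transcription of the standard definition, not quoted verbatim from the paper):

```latex
% E-eigenpair (\lambda, x) of an order-m, n-dimensional tensor A:
\bigl(\mathcal{A}x^{m-1}\bigr)_i
  \;=\; \sum_{i_2,\dots,i_m=1}^{n} a_{i\,i_2\cdots i_m}\, x_{i_2}\cdots x_{i_m}
  \;=\; \lambda\, x_i , \qquad i = 1,\dots,n,
\qquad \text{subject to } x^{\mathsf{T}} x = 1 .
```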
Most tensor problems are NP-hard
CoRR, 2009
"... The idea that one might extend numerical linear algebra, the collection of matrix computational methods that form the workhorse of scientific and engineering computing, to numerical multilinear algebra, an analogous collection of tools involving hypermatrices/tensors, appears very promising and has ..."
Cited by 44 (6 self)
The idea that one might extend numerical linear algebra, the collection of matrix computational methods that form the workhorse of scientific and engineering computing, to numerical multilinear algebra, an analogous collection of tools involving hypermatrices/tensors, appears very promising and has attracted a lot of attention recently. We examine here the computational tractability of some core problems in numerical multilinear algebra. We show that tensor analogues of several standard problems that are readily computable in the matrix (i.e., 2-tensor) case are NP-hard. Our list here includes: determining the feasibility of a system of bilinear equations; determining an eigenvalue, a singular value, or the spectral norm of a 3-tensor; determining a best rank-1 approximation to a 3-tensor; and determining the rank of a 3-tensor over R or C. Hence making tensor computations feasible is likely to be a challenge.
Finding the largest eigenvalue of a nonnegative tensor
SIAM Journal on Matrix Analysis and Applications, 2009
"... In this paper we propose an iterative method for calculating the largest eigenvalue of an irreducible nonnegative tensor. This method is an extension of a method of Collatz (1942) for calculating the spectral radius of an irreducible nonnegative matrix. Numerical results show that our proposed meth ..."
Cited by 43 (24 self)
In this paper we propose an iterative method for calculating the largest eigenvalue of an irreducible nonnegative tensor. This method is an extension of a method of Collatz (1942) for calculating the spectral radius of an irreducible nonnegative matrix. Numerical results show that our proposed method is promising. We also apply the method to studying higher-order Markov chains.
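A compact sketch of this Collatz-type iteration in the order-3 case; the eigenvalue is bracketed by the Collatz–Wielandt ratios at each step (the function name nqz_largest_eigenvalue is my label, not the paper's):

```python
# Collatz-type iteration for the largest H-eigenvalue of a nonnegative
# order-3 tensor A (eigenvalue problem A x^{m-1} = lambda x^{[m-1]}):
# y = A x^2, bracket lambda by min_i y_i/x_i^2 and max_i y_i/x_i^2,
# then update x <- sqrt(y) / ||sqrt(y)||.
import numpy as np

def nqz_largest_eigenvalue(A, tol=1e-10, max_iter=500):
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)               # positive starting vector
    lo, hi = 0.0, np.inf
    for _ in range(max_iter):
        y = np.einsum('ijk,j,k->i', A, x, x)  # (A x^{m-1})_i for m = 3
        ratios = y / x**2
        lo, hi = ratios.min(), ratios.max()   # lo <= lambda_max <= hi
        if hi - lo < tol:
            break
        x = np.sqrt(y)
        x /= np.linalg.norm(x)
    return 0.5 * (lo + hi), x

A = np.random.default_rng(2).random((4, 4, 4))  # entrywise positive tensor
print(nqz_largest_eigenvalue(A)[0])
```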
Shifted power method for computing tensor eigenpairs
2011
"... Abstract. Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetrictensor eigenpairs of the form ..."
Cited by 38 (3 self)
Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^{m−1} = λx subject to ‖x‖ = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SSHOPM), which we show is guaranteed to converge to a tensor eigenpair. SSHOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed-point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs. Key words: tensor eigenvalues, E-eigenpairs, Z-eigenpairs, ℓ²-eigenpairs, rank-1 approximation, symmetric higher-order power method (SHOPM), shifted symmetric higher-order power method.
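A hedged sketch of the SSHOPM update for an order-3 symmetric tensor; a fixed shift is used here for simplicity, whereas the paper's convergence guarantee requires the shift to be large enough relative to the tensor (the helper names are mine):

```python
# Shifted symmetric higher-order power method (SSHOPM), order m = 3:
# x <- (A x^2 + alpha x) / ||A x^2 + alpha x||. For a sufficiently large
# positive shift alpha the iterates converge to an eigenpair
# A x^{m-1} = lambda x with ||x|| = 1.
import numpy as np
from itertools import permutations

def sshopm(A, alpha=2.0, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        y = np.einsum('ijk,j,k->i', A, x, x) + alpha * x
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', A, x, x, x)  # lambda = A x^m
    return lam, x

# symmetrize a random tensor so the method's assumptions hold
T = np.random.default_rng(3).standard_normal((4, 4, 4))
A = sum(np.transpose(T, p) for p in permutations(range(3))) / 6
lam, x = sshopm(A)
resid = np.einsum('ijk,j,k->i', A, x, x) - lam * x
print(lam, np.linalg.norm(resid))  # residual near zero at convergence
```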
Spectra of uniform hypergraphs
"... Abstract. We present a spectral theory of uniform hypergraphs that closely parallels Spectral Graph Theory. A number of recent developments building upon classical work has led to a rich understanding of “symmetric hyperdeterminants ” of hypermatrices, a.k.a. multidimensional arrays. Symmetric hyper ..."
Cited by 21 (2 self)
We present a spectral theory of uniform hypergraphs that closely parallels Spectral Graph Theory. A number of recent developments building upon classical work have led to a rich understanding of “symmetric hyperdeterminants” of hypermatrices, a.k.a. multidimensional arrays. Symmetric hyperdeterminants share many properties with determinants, but the context of multilinear algebra is substantially more complicated than the linear algebra required to address Spectral Graph Theory (i.e., ordinary matrices). Nonetheless, it is possible to define eigenvalues of a hypermatrix via its characteristic polynomial as well as variationally. We apply this notion to the “adjacency hypermatrix” of a uniform hypergraph, and prove a number of natural analogues of basic results in Spectral Graph Theory. Open problems abound.
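As a small illustration of the adjacency hypermatrix this abstract refers to, here is how one might build it for a 3-uniform hypergraph (the 1/(k−1)! normalization follows a common convention in this line of work; treat the exact scaling as an assumption):

```python
# Adjacency hypermatrix of a k-uniform hypergraph: the entry a_{i1...ik}
# is 1/(k-1)! when {i1, ..., ik} is an edge, and 0 otherwise; setting all
# orderings of each edge makes the hypermatrix symmetric.
from itertools import permutations
from math import factorial
import numpy as np

def adjacency_hypermatrix(n, edges, k=3):
    A = np.zeros((n,) * k)
    for edge in edges:                   # each edge is a k-set of vertices
        for perm in permutations(edge):  # symmetric: fill all orderings
            A[perm] = 1.0 / factorial(k - 1)
    return A

# 3-uniform hypergraph on 4 vertices with two edges
A = adjacency_hypermatrix(4, edges=[(0, 1, 2), (1, 2, 3)], k=3)
print(A[0, 1, 2], A[2, 1, 0], A[0, 1, 3])  # 0.5 0.5 0.0
```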