Tensor Decompositions and Applications
 SIAM REVIEW
, 2009
Abstract

Cited by 237 (14 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
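As a minimal illustration of the CP structure the survey describes (a tensor written as a sum of rank-one outer products), the following NumPy sketch builds a small rank-2 three-way tensor; the names and sizes are illustrative assumptions, not code from the survey.

```python
import numpy as np

# Illustrative sketch (not from the survey): a CP model writes a tensor as
# T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r], a sum of R rank-one tensors.
rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 2                  # arbitrary small dimensions, rank 2
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# Assemble the sum of rank-one outer products in one einsum call.
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Elementwise check of one entry against the defining formula.
entry = sum(A[1, r] * B[2, r] * C[3, r] for r in range(R))
```

The Tucker decomposition mentioned alongside CP replaces the single shared index r with a small core tensor, which is what makes it a higher-order analogue of principal components analysis.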
Graph Kernels
, 2007
Abstract

Cited by 40 (4 self)
We present a unified framework to study graph kernels, special cases of which include the random walk (Gärtner et al., 2003; Borgwardt et al., 2005) and marginalized (Kashima et al., 2003, 2004; Mahé et al., 2004) graph kernels. Through reduction to a Sylvester equation we improve the time complexity of kernel computation between unlabeled graphs with n vertices from O(n^6) to O(n^3). We find a spectral decomposition approach even more efficient when computing entire kernel matrices. For labeled graphs we develop conjugate gradient and fixed-point methods that take O(dn^3) time per iteration, where d is the size of the label set. By extending the necessary linear algebra to Reproducing Kernel Hilbert Spaces (RKHS) we obtain the same result for d-dimensional edge kernels, and O(n^4) in the infinite-dimensional case; on sparse graphs these algorithms only take O(n^2) time per iteration in all cases. Experiments on graphs from bioinformatics and other application domains show that these techniques can speed up computation of the kernel by an order of magnitude or more. We also show that certain rational kernels (Cortes et al., 2002, 2003, 2004), when specialized to graphs, reduce to our random walk graph kernel. Finally, we relate our framework to R-convolution kernels (Haussler, 1999) and provide a kernel that is close to the optimal assignment kernel of Fröhlich et al. (2006) yet provably positive semidefinite.
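To make the O(n^6) baseline concrete, here is a hedged sketch of the unlabeled random walk kernel in its direct product-graph form, k(G1, G2) = q^T (I - λ W×)^{-1} p; the function name, start/stop distributions, and λ value are illustrative assumptions. Forming and solving the n² × n² system explicitly is the direct route whose cost the paper's Sylvester-equation reduction lowers to O(n^3).

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1):
    """Direct evaluation of q^T (I - lam*Wx)^{-1} p, where Wx = kron(A1, A2)
    is the adjacency matrix of the direct product graph. Illustrative sketch
    only; lam must be small enough that lam * rho(Wx) < 1."""
    n1, n2 = A1.shape[0], A2.shape[0]
    Wx = np.kron(A1, A2)               # product-graph adjacency, n1*n2 square
    p = np.ones(n1 * n2) / (n1 * n2)   # uniform starting distribution
    q = np.ones(n1 * n2)               # uniform stopping weights
    return q @ np.linalg.solve(np.eye(n1 * n2) - lam * Wx, p)

# Toy example: two identical 2-node graphs with a single edge.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
k = random_walk_kernel(A, A)
```

The geometric series q^T Σ_k λ^k Wx^k p that this closed form sums is exactly the λ-discounted count of simultaneous walks on the two graphs.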
Enhanced line search: A novel method to accelerate Parafac
 in Eusipco’05
, 2005
Abstract

Cited by 30 (8 self)
Abstract. Several modifications have been proposed to speed up the alternating least squares (ALS) method of fitting the PARAFAC model. The most widely used is line search, which extrapolates from linear trends in the parameter changes over prior iterations to estimate the parameter values that would be obtained after many additional ALS iterations. We propose some extensions of this approach that incorporate a more sophisticated extrapolation, using information on nonlinear trends in the parameters and changing all the parameter sets simultaneously. The new method, called “enhanced line search” (ELS), can be implemented at different levels of complexity, depending on how many different extrapolation parameters (for different modes) are jointly optimized during each iteration. We report some tests of the simplest version, using simulated data. The performance of this lowest level of ELS depends on the nature of the convergence difficulty. It significantly outperforms standard line search when there is a “convergence bottleneck,” a situation where some modes have almost collinear factors but others do not, but is somewhat less effective in classic “swamp” situations where factors are highly collinear in all modes. This is illustrated by examples. To demonstrate how ELS can be adapted to different N-way decompositions, we also apply it to a four-way array to perform a blind identification of an underdetermined mixture (UDM). Since analysis of this dataset happens to involve a serious convergence “bottleneck” (collinear factors in two of the four modes), it provides another example of a situation in which ELS dramatically outperforms standard line search. Key words. PARAFAC, alternating least squares (ALS), line search, enhanced line search (ELS), acceleration, swamps, bottlenecks, collinear factors, degeneracy
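The baseline that ELS extends can be sketched in a few lines: extrapolate every factor matrix along its most recent change and keep the extrapolated point only if the fit improves. The fixed step size and acceptance rule below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def line_search_step(factors_prev, factors_curr, loss, step=2.0):
    """One standard line-search acceleration for ALS-type fitting:
    extrapolate A_new = A_prev + step * (A_curr - A_prev) for every
    factor matrix, accepting the move only if it lowers the loss.
    (Step schedule and acceptance rule here are illustrative.)"""
    extrapolated = [Ap + step * (Ac - Ap)
                    for Ap, Ac in zip(factors_prev, factors_curr)]
    if loss(extrapolated) < loss(factors_curr):
        return extrapolated        # accelerated point
    return factors_curr            # fall back to the plain ALS update
```

ELS generalizes this by jointly optimizing separate extrapolation parameters for the different modes instead of using one fixed step for all parameter sets.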
Dimensionality reduction in higher-order signal processing and rank-(R_1, R_2, ..., R_N) reduction in multilinear algebra
, 2004
Blind identification of underdetermined mixtures by simultaneous matrix diagonalization
 IEEE TRANSACTIONS ON SIGNAL PROCESSING
, 2008
Abstract

Cited by 12 (2 self)
In this paper, we study simultaneous matrix diagonalization-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis (ICA). This includes a generalization to underdetermined mixtures of the well-known SOBI algorithm. The problem is reformulated in terms of the parallel factor decomposition (PARAFAC) of a higher-order tensor. We present conditions under which the mixing matrix is unique and discuss several algorithms for its computation.
An Optimization Approach for Fitting Canonical Tensor Decompositions
, 2009
Abstract

Cited by 9 (4 self)
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
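As a hedged sketch of the kind of derivative involved (our notation and unfolding convention, not the authors' code): for f(A, B, C) = ½‖T − Σ_r a_r ∘ b_r ∘ c_r‖², the gradient with respect to A needs only two small Gram matrices and one matricized-tensor-times-Khatri-Rao product, the same ingredients as an ALS update, which is why the per-iteration costs match.

```python
import numpy as np

def cp_grad_A(T, A, B, C):
    """Gradient of 0.5 * ||T - CP(A, B, C)||^2 with respect to A:
    A((B^T B) * (C^T C)) - T_(1) @ KR, where KR is the column-wise
    Kronecker (Khatri-Rao-type) product of B and C, written to match
    NumPy's row-major mode-1 unfolding. Illustrative sketch."""
    R = A.shape[1]
    KR = np.einsum('jr,kr->jkr', B, C).reshape(-1, R)  # column-wise Kronecker
    T1 = T.reshape(T.shape[0], -1)                     # mode-1 unfolding
    return A @ ((B.T @ B) * (C.T @ C)) - T1 @ KR
```

The Gram-matrix form avoids materializing the full residual tensor; the gradients with respect to B and C follow by symmetry of the CP model.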
Fourth-order cumulant-based blind identification of underdetermined mixtures
 IEEE Transactions on Signal Processing
Abstract

Cited by 8 (0 self)
Abstract—In this paper we study two fourth-order cumulant-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis. The first method is based on a simultaneous matrix diagonalization. The second is based on a simultaneous off-diagonalization. The number of sources that can be allowed is roughly quadratic in the number of observations. For both methods, explicit expressions for the maximum number of sources are given. Simulations illustrate the performance of the techniques. Index Terms—Cumulant, higher-order statistics, higher-order tensor, independent component analysis (ICA), parallel factor analysis, simultaneous diagonalization, underdetermined mixture.
A Jacobi-type method for computing orthogonal tensor decompositions
 SIAM J. Matrix Anal. Appl
, 2006
Abstract

Cited by 4 (1 self)
Abstract. Suppose A = (a_{ijk}) ∈ R^{n×n×n} is a three-way array or third-order tensor. Many of the powerful tools of linear algebra such as the singular value decomposition (SVD) do not, unfortunately, extend in a straightforward way to tensors of order three or higher. In the two-dimensional case, the SVD is particularly illuminating, since it reduces a matrix to diagonal form. Although it is not possible in general to diagonalize a tensor (i.e., to have a_{ijk} = 0 unless i = j = k), our goal is to “condense” a tensor into fewer nonzero entries using orthogonal transformations. We propose an algorithm for tensors of the form A ∈ R^{n×n×n} that is an extension of the Jacobi SVD algorithm for matrices. The resulting tensor decomposition reduces A to a form such that the quantity ∑_{i=1}^n a_{iii}^2 or ∑_{i=1}^n a_{iii} is maximized. Key words. multilinear algebra, tensor decomposition, singular value decomposition, multidimensional arrays
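The objective being maximized can be stated compactly: apply an orthogonal matrix in each mode and measure the mass left on the superdiagonal. The sketch below shows only the multilinear transformation and the ∑ a_iii² objective; the Jacobi rotation updates themselves are omitted, and the function name is an illustrative assumption.

```python
import numpy as np

def diagonal_mass(A, Q1, Q2, Q3):
    """Apply orthogonal Q1, Q2, Q3 in modes 1, 2, 3 of the cubical tensor A
    (B = A x1 Q1 x2 Q2 x3 Q3) and return the superdiagonal energy
    sum_i b_iii^2 of the transformed tensor."""
    B = np.einsum('ip,jq,kr,pqr->ijk', Q1, Q2, Q3, A)
    return float(sum(B[i, i, i] ** 2 for i in range(A.shape[0])))
```

A Jacobi-type method would sweep over pairs of indices, choosing plane rotations for the Qs that increase this quantity, in analogy with the two-sided Jacobi SVD for matrices.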
A third-order generalization of the matrix SVD as a product of third-order tensors
, 2008
Abstract

Cited by 3 (3 self)
Abstract. Traditionally, extending the Singular Value Decomposition (SVD) to third-order tensors (multi-way arrays) has involved a representation using the outer product of vectors. These outer products can be written in terms of the n-mode product, which can also be used to describe a type of multiplication between two tensors. In this paper, we present a different type of third-order generalization of the SVD where an order-3 tensor is instead decomposed as a product of order-3 tensors. In order to define this new notion, we define tensor-tensor multiplication in such a way that the set of order-3 tensors is closed under this operation. This results in new definitions for tensors such as the tensor transpose, inverse, and identity. These definitions have the advantage that they can be extended, though in a nontrivial way, to the order-p (p > 3) case [31]. A major motivation for considering this new type of tensor multiplication is to devise new types of factorizations for tensors, which could then be used in applications such as data compression. We therefore present two strategies for compressing third-order tensors that make use of our new SVD generalization and give some numerical comparisons to existing algorithms on synthetic data.
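A closed tensor-tensor multiplication of the kind described here (the t-product of Kilmer and Martin) is commonly computed slice-wise in the Fourier domain along the third mode, since it amounts to circular convolution of the tube fibers; the sketch below assumes that formulation rather than reproducing the paper's own code.

```python
import numpy as np

def t_product(A, B):
    """Tensor-tensor product of an l x p x n tensor with a p x m x n tensor,
    yielding an l x m x n tensor: matrix products of frontal slices in the
    FFT domain along the third mode (circular convolution of tube fibers)."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jmk->imk', Af, Bf)  # slice-wise matrix products
    return np.real(np.fft.ifft(Cf, axis=2))
```

Under this product the identity tensor has an identity matrix as its first frontal slice and zeros elsewhere, which is what makes the tensor transpose, inverse, and identity of the abstract well-defined.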
Report on “Geometry and representation theory of tensors for computer science, statistics and other areas”
, 2008
Abstract

Cited by 3 (0 self)
This workshop was sponsored by AIM and the NSF and it brought in participants from the US, Canada and the European Union to Palo Alto, CA to work to translate questions from quantum computing, complexity theory, statistical learning theory, signal processing, and data analysis to problems in geometry and representation theory.