Results 1-10 of 35
Tensor Decompositions and Applications
 SIAM Review
, 2009
Abstract

Cited by 228 (14 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
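The CP decomposition described above is usually fitted by alternating least squares, cycling through the modes and solving a linear least-squares problem for one factor matrix at a time. A minimal sketch for a third-order tensor, assuming only numpy; the function and variable names are illustrative and not taken from any of the toolboxes cited:

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker (Khatri-Rao) product: row (j, k) is U[j] * V[k]."""
    J, R = U.shape
    K = V.shape[0]
    return (U[:, None, :] * V[None, :, :]).reshape(J * K, R)

def cp_als(T, rank, n_iter=200, seed=0):
    """Fit T ~= sum_r A[:,r] o B[:,r] o C[:,r] by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Row-major unfoldings of T along each mode.
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        # Each update solves a linear least-squares problem in closed form.
        A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

The elementwise products such as `(B.T @ B) * (C.T @ C)` are the small R × R Gram matrices that make each ALS step cheap.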
Graph Kernels
, 2007
Abstract

Cited by 37 (4 self)
We present a unified framework to study graph kernels, special cases of which include the random walk (Gärtner et al., 2003; Borgwardt et al., 2005) and marginalized (Kashima et al., 2003, 2004; Mahé et al., 2004) graph kernels. Through reduction to a Sylvester equation we improve the time complexity of kernel computation between unlabeled graphs with n vertices from O(n^6) to O(n^3). We find a spectral decomposition approach even more efficient when computing entire kernel matrices. For labeled graphs we develop conjugate gradient and fixed-point methods that take O(dn^3) time per iteration, where d is the size of the label set. By extending the necessary linear algebra to Reproducing Kernel Hilbert Spaces (RKHS) we obtain the same result for d-dimensional edge kernels, and O(n^4) in the infinite-dimensional case; on sparse graphs these algorithms take only O(n^2) time per iteration in all cases. Experiments on graphs from bioinformatics and other application domains show that these techniques can speed up computation of the kernel by an order of magnitude or more. We also show that certain rational kernels (Cortes et al., 2002, 2003, 2004), when specialized to graphs, reduce to our random walk graph kernel. Finally, we relate our framework to R-convolution kernels (Haussler, 1999) and provide a kernel that is close to the optimal assignment kernel of Fröhlich et al. (2006) yet provably positive semidefinite.
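The key trick behind the iterative methods mentioned in this abstract is that the Kronecker-product system defining the random walk kernel can be solved without ever forming the n1·n2 × n1·n2 matrix, using the identity (W1 ⊗ W2) vec(X) = vec(W1 X W2ᵀ) inside a matrix-free solver. A rough sketch, assuming numpy/scipy and undirected (symmetric) adjacency matrices with a small decay λ so conjugate gradient applies; the uniform start/stop distributions and the function name are illustrative:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def random_walk_kernel(W1, W2, lam=0.05):
    """k(G1, G2) = q^T (I - lam * W1 kron W2)^{-1} p, evaluated with the
    vec-trick so the Kronecker product is never formed explicitly."""
    n1, n2 = W1.shape[0], W2.shape[0]
    N = n1 * n2
    p = np.full(N, 1.0 / N)   # uniform starting distribution (illustrative)
    q = np.ones(N)            # uniform stopping weights (illustrative)

    def matvec(x):
        X = x.reshape(n1, n2)                    # undo the row-major vec
        return x - lam * (W1 @ X @ W2.T).ravel()  # (I - lam * W1 kron W2) x

    A = LinearOperator((N, N), matvec=matvec)
    x, info = cg(A, p)        # SPD when W1, W2 symmetric and lam small
    assert info == 0, "CG did not converge"
    return float(q @ x)
```

Each CG iteration costs two sparse matrix products instead of an O(n^6)-sized solve, which is the source of the speedups the abstract reports.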
Enhanced line search: A novel method to accelerate Parafac
 in EUSIPCO’05
, 2005
Abstract

Cited by 30 (8 self)
Abstract. Several modifications have been proposed to speed up the alternating least squares (ALS) method of fitting the PARAFAC model. The most widely used is line search, which extrapolates from linear trends in the parameter changes over prior iterations to estimate the parameter values that would be obtained after many additional ALS iterations. We propose some extensions of this approach that incorporate a more sophisticated extrapolation, using information on nonlinear trends in the parameters and changing all the parameter sets simultaneously. The new method, called “enhanced line search” (ELS), can be implemented at different levels of complexity, depending on how many different extrapolation parameters (for different modes) are jointly optimized during each iteration. We report some tests of the simplest version, using simulated data. The performance of this lowest level of ELS depends on the nature of the convergence difficulty. It significantly outperforms standard line search when there is a “convergence bottleneck,” a situation where some modes have almost collinear factors but others do not, but is somewhat less effective in classic “swamp” situations where factors are highly collinear in all modes. This is illustrated by examples. To demonstrate how ELS can be adapted to different N-way decompositions, we also apply it to a four-way array to perform a blind identification of an underdetermined mixture (UDM). Since analysis of this dataset happens to involve a serious convergence “bottleneck” (collinear factors in two of the four modes), it provides another example of a situation in which ELS dramatically outperforms standard line search.

Key words. PARAFAC, alternating least squares (ALS), line search, enhanced line search (ELS), acceleration, swamps, bottlenecks, collinear factors, degeneracy
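The baseline that ELS refines can be stated in a few lines: after an ALS sweep, every factor matrix is pushed further along the direction of its most recent change, using one shared step length; ELS instead jointly optimizes a separate step per mode by minimizing the (polynomial) PARAFAC cost along those directions. A hedged sketch of the baseline, assuming numpy; the function name and the fixed step value are illustrative:

```python
import numpy as np

def line_search_extrapolate(factors_prev, factors_curr, step=1.25):
    """Standard line search for ALS: extrapolate each factor matrix along
    the direction of its last change, with one common step length.
    (ELS would instead optimize per-mode step lengths jointly.)"""
    return [Fp + step * (Fc - Fp)
            for Fp, Fc in zip(factors_prev, factors_curr)]
```

In a fitting loop, the extrapolated factors are accepted only if they lower the fit error; otherwise the plain ALS update is kept.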
Dimensionality reduction in higher-order signal processing and rank-(R_1, R_2, ..., R_N) reduction in multilinear algebra
, 2004
Decompositions of a higher-order tensor in block terms—Part III: Alternating Least Squares algorithms
 SIAM J. Matrix Anal. Appl
Abstract

Cited by 21 (3 self)
Abstract. In this paper we introduce a new class of tensor decompositions. Intuitively, we decompose a given tensor block into blocks of smaller size, where the size is characterized by a set of mode-n ranks. We study different types of such decompositions. For each type we derive conditions under which essential uniqueness is guaranteed. The parallel factor decomposition and Tucker’s decomposition can be considered as special cases in the new framework. The paper sheds new light on fundamental aspects of tensor algebra.
Blind identification of underdetermined mixtures by simultaneous matrix diagonalization
 IEEE Transactions on Signal Processing
Abstract

Cited by 12 (2 self)
Abstract—In this paper, we study simultaneous matrix diagonalization-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis (ICA). This includes a generalization to underdetermined mixtures of the well-known SOBI algorithm. The problem is reformulated in terms of the parallel factor decomposition (PARAFAC) of a higher-order tensor. We present conditions under which the mixing matrix is unique and discuss several algorithms for its computation.

Index Terms—Canonical decomposition, higher-order tensor, independent component analysis (ICA), parallel factor (PARAFAC) analysis, simultaneous diagonalization, underdetermined mixture.
An Optimization Approach for Fitting Canonical Tensor Decompositions
, 2009
Abstract

Cited by 9 (4 self)
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
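The derivatives this abstract refers to have a closed form: for f(A, B, C) = ½‖T − [[A, B, C]]‖², the mode-1 gradient is ∂f/∂A = A((BᵀB) ∗ (CᵀC)) − T₍₁₎ M, where ∗ is the elementwise product and M is the Khatri-Rao product of the other two factors, i.e. exactly the quantities an ALS step already computes. A sketch assuming numpy, with the Khatri-Rao row ordering chosen to match numpy's row-major mode-1 unfolding:

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product; row (j, k) of the result is U[j] * V[k]."""
    J, R = U.shape
    K = V.shape[0]
    return (U[:, None, :] * V[None, :, :]).reshape(J * K, R)

def cp_gradient_mode1(T, A, B, C):
    """Gradient of 0.5 * ||T - [[A, B, C]]||^2 with respect to A."""
    I = T.shape[0]
    T1 = T.reshape(I, -1)  # row-major mode-1 unfolding
    return A @ ((B.T @ B) * (C.T @ C)) - T1 @ khatri_rao(B, C)
```

The gradients for B and C have the same structure with the modes permuted, which is why the full gradient costs about the same as one ALS sweep.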
Fourth-order cumulant-based blind identification of underdetermined mixtures
 IEEE Transactions on Signal Processing
Abstract

Cited by 8 (0 self)
Abstract—In this paper we study two fourth-order cumulant-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis. The first method is based on a simultaneous matrix diagonalization. The second is based on a simultaneous off-diagonalization. The number of sources that can be allowed is roughly quadratic in the number of observations. For both methods, explicit expressions for the maximum number of sources are given. Simulations illustrate the performance of the techniques.

Index Terms—Cumulant, higher-order statistics, higher-order tensor, independent component analysis (ICA), parallel factor analysis, simultaneous diagonalization, underdetermined mixture.
A Jacobitype method for computing orthogonal tensor decompositions
 SIAM J. Matrix Anal. Appl
, 2006
Abstract

Cited by 5 (1 self)
Abstract. Suppose A = (a_{ijk}) ∈ R^{n×n×n} is a three-way array or third-order tensor. Many of the powerful tools of linear algebra, such as the singular value decomposition (SVD), do not, unfortunately, extend in a straightforward way to tensors of order three or higher. In the two-dimensional case, the SVD is particularly illuminating, since it reduces a matrix to diagonal form. Although it is not possible in general to diagonalize a tensor (i.e., to achieve a_{ijk} = 0 unless i = j = k), our goal is to “condense” a tensor into fewer nonzero entries using orthogonal transformations. We propose an algorithm for tensors of the form A ∈ R^{n×n×n} that is an extension of the Jacobi SVD algorithm for matrices. The resulting tensor decomposition reduces A to a form such that the quantity ∑_{i=1}^n a_{iii}^2 or ∑_{i=1}^n a_{iii} is maximized.

Key words. multilinear algebra, tensor decomposition, singular value decomposition, multidimensional arrays
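The objective in the last sentence of this abstract is easy to state concretely: apply an orthogonal change of basis in every mode and track the diagonal energy ∑ a_{iii}². A small sketch assuming numpy; the helper names are illustrative, and this only evaluates the objective rather than implementing the Jacobi sweeps themselves:

```python
import numpy as np

def diagonal_energy(A):
    """sum_i a_iii^2 for a cubic third-order tensor A."""
    idx = np.arange(A.shape[0])
    return float(np.sum(A[idx, idx, idx] ** 2))

def rotate(A, Q):
    """Multilinear transform: apply the orthogonal matrix Q in all three modes."""
    return np.einsum('ia,jb,kc,abc->ijk', Q, Q, Q, A)
```

Because orthogonal transforms preserve the Frobenius norm, maximizing the diagonal energy is the natural tensor analogue of the SVD's reduction of a matrix to diagonal form.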
Report on “Geometry and representation theory of tensors for computer science, statistics and other areas”
, 2008
Abstract

Cited by 4 (0 self)
This workshop was sponsored by AIM and the NSF; it brought participants from the US, Canada, and the European Union to Palo Alto, CA, to translate questions from quantum computing, complexity theory, statistical learning theory, signal processing, and data analysis into problems in geometry and representation theory.