Results 1–10 of 17
Tensor decompositions for signal processing applications: From Two-way to Multiway Component Analysis
 ESAT-STADIUS Internal Report
, 2014
Abstract

Cited by 10 (1 self)
The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real-world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization.
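The Canonical Polyadic model mentioned in this abstract writes a tensor as a sum of rank-1 terms. As an illustration only (not the paper's own code; function and variable names are ours), a minimal alternating-least-squares CP fit for a third-order tensor in NumPy might look like:

```python
import numpy as np

def cp_als(T, R, n_iter=100, seed=0):
    """Rank-R canonical polyadic decomposition of a 3rd-order tensor by
    alternating least squares: T ~ sum_r a_r (outer) b_r (outer) c_r."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    # Mode-n unfoldings of T (rows indexed by mode n, last index fastest).
    T1 = T.reshape(I, J * K)                     # mode-1
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # mode-2
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # mode-3

    def khatri_rao(X, Y):
        # Column-wise Kronecker product, rows ordered to match the unfoldings.
        return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

    for _ in range(n_iter):
        # Each step is a linear least-squares solve for one factor.
        A = T1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = T2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = T3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C
```

The reconstruction is `np.einsum('ir,jr,kr->ijk', A, B, C)`; on a tensor of exact low rank the residual typically drops to near zero.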
MACH: Fast Randomized Tensor Decompositions
, 2010
Abstract

Cited by 6 (2 self)
Tensors naturally model many real-world processes which generate multi-aspect data. Such processes appear in many different research disciplines, e.g., chemometrics, computer vision, psychometrics and neuroimaging analysis. Tensor decompositions such as the Tucker decomposition are used to analyze multi-aspect data and extract latent factors, which capture the multilinear data structure. Such decompositions are powerful mining tools for extracting patterns from large data volumes. However, the most frequently used algorithms for such decompositions involve the computationally expensive Singular Value Decomposition. In this paper we propose MACH, a new sampling algorithm to compute such decompositions. Our method is of significant practical value for tensor streams, such as environmental monitoring systems and IP traffic matrices over time, where large amounts of data are accumulated and the analysis is computationally intensive, but also in “post-mortem” data analysis cases where the tensor does not fit in the available memory. We provide the theoretical analysis of our proposed method and verify its efficacy on synthetic data and two real-world monitoring system applications.
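For context, the SVD-heavy baseline that MACH's sampling is designed to avoid is the truncated higher-order SVD (HOSVD), a standard way to obtain a Tucker decomposition. A minimal sketch of that baseline (an illustration under our own naming, not the MACH algorithm itself):

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated higher-order SVD of a 3rd-order tensor: one SVD per mode,
    yielding a Tucker decomposition T ~ G x1 U1 x2 U2 x3 U3."""
    Us = []
    for n, r in enumerate(ranks):
        # Mode-n unfolding: mode n indexes the rows.
        Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
        U, _, _ = np.linalg.svd(Tn, full_matrices=False)
        Us.append(U[:, :r])            # leading r left singular vectors
    # Core tensor: project T onto the factor subspaces.
    G = np.einsum('ijk,ia,jb,kc->abc', T, *Us)
    return G, Us
```

If the tensor has exact multilinear rank equal to `ranks`, the reconstruction `np.einsum('abc,ia,jb,kc->ijk', G, *Us)` is exact up to rounding; the per-mode SVDs are precisely the cost a randomized/sampling method targets.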
CANONICAL POLYADIC DECOMPOSITION WITH A COLUMN-WISE ORTHONORMAL FACTOR MATRIX
Abstract

Cited by 6 (0 self)
Abstract. Canonical Polyadic Decomposition (CPD) of a higher-order tensor is an important tool in mathematical engineering. In many applications at least one of the matrix factors is constrained to be column-wise orthonormal. We first derive a relaxed condition that guarantees uniqueness of the CPD under this constraint. Second, we give a simple proof of the existence of the optimal low-rank approximation of a tensor in the case that a factor matrix is column-wise orthonormal. Third, we derive numerical algorithms for the computation of the constrained CPD. In particular, orthogonality-constrained versions of the CPD methods based on simultaneous matrix diagonalization and alternating least squares are presented. Numerical experiments are reported. Key words. higher-order tensor, polyadic decomposition, canonical decomposition (CANDECOMP), parallel factor (PARAFAC), simultaneous matrix diagonalization, alternating least squares.
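One simple way to impose a column-wise orthonormal factor inside an alternating-least-squares CPD loop is to replace that factor's least-squares update with an orthogonal Procrustes (polar/SVD) step. The sketch below illustrates this idea only; it is not the paper's simultaneous-diagonalization method, and all names are ours:

```python
import numpy as np

def khatri_rao(X, Y):
    # Column-wise Kronecker product.
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

def cp_als_orth(T, R, n_iter=200, seed=0):
    """Rank-R CPD of a 3rd-order tensor where the third factor C is
    constrained to have orthonormal columns (requires T.shape[2] >= R)."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = np.linalg.qr(rng.standard_normal((K, R)))[0]
    T1 = T.reshape(I, -1)
    T2 = np.moveaxis(T, 1, 0).reshape(J, -1)
    T3 = np.moveaxis(T, 2, 0).reshape(K, -1)
    for _ in range(n_iter):
        # Unconstrained least-squares updates for A and B.
        A = T1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = T2 @ np.linalg.pinv(khatri_rao(A, C)).T
        # Orthonormal update for C: argmin ||T3 - C M^T|| s.t. C^T C = I,
        # solved by the orthogonal Procrustes problem (SVD of T3 @ M).
        M = khatri_rao(A, B)
        U, _, Vt = np.linalg.svd(T3 @ M, full_matrices=False)
        C = U @ Vt
    return A, B, C
```

The Procrustes step guarantees `C.T @ C == I` at every iteration, so the constraint is exactly preserved rather than approximately penalized.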
Jacobi algorithm for the best low multilinear rank approximation of symmetric tensors
 SIAM JOURNAL ON MATRIX ANALYSIS AND APPLICATIONS
, 2013
Abstract

Cited by 5 (1 self)
The problem discussed in this paper is the symmetric best low multilinear rank approximation of third-order symmetric tensors. We propose an algorithm based on Jacobi rotations, for which symmetry is preserved at each iteration. Two numerical examples are provided indicating the need for such algorithms. An important part of the paper consists of proving that our algorithm converges to stationary points of the objective function. This can be considered an advantage of the proposed algorithm over existing symmetry-preserving algorithms in the literature.
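For the simplest special case, rank-(1,1,1), a symmetric approximation can be sought with a symmetric higher-order power iteration rather than Jacobi rotations. The sketch below is plain S-HOPM, shown only as a point of comparison (it is not the paper's algorithm, and unshifted S-HOPM is not guaranteed to converge on every tensor):

```python
import numpy as np

def sym_rank1(T, n_iter=200, seed=0):
    """Symmetric higher-order power iteration for an approximate best
    symmetric rank-1 term lam * x (outer) x (outer) x of a symmetric
    3rd-order tensor T."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        # Contract T with x in two modes, then renormalize.
        y = np.einsum('ijk,j,k->i', T, x, x)
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)
    return lam, x
```

Because the iterate stays a single unit vector, symmetry of the approximation is preserved by construction, which is the same structural point the paper makes for its rank-(r,r,r) Jacobi scheme.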
Y.H.: A framework of constraint preserving update schemes for optimization on Stiefel manifold
 Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Sciences, Chinese Academy of Sciences (2012)
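The framework referenced here concerns updates that keep every iterate feasible on the Stiefel manifold St(n, p) = {X : X^T X = I}. One common constraint-preserving scheme, a projected-gradient step followed by a QR retraction, is sketched below as an illustration (it is one member of the general family, not the paper's specific update, and the function name is ours):

```python
import numpy as np

def stiefel_step(X, G, tau):
    """One constraint-preserving update on the Stiefel manifold:
    project the Euclidean gradient G to the tangent space at X,
    take a step of size tau, and retract back by thin QR."""
    # Tangent-space projection: xi = G - X * sym(X^T G).
    XtG = X.T @ G
    xi = G - X @ (XtG + XtG.T) / 2
    # QR retraction: the Q factor has orthonormal columns by construction.
    Q, R = np.linalg.qr(X - tau * xi)
    # Fix column signs (diag(R) > 0) so the retraction varies continuously.
    Q = Q * np.sign(np.diag(R))
    return Q
```

The key property is that the output satisfies the orthonormality constraint exactly at every iteration, regardless of the step size.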
Semidefinite relaxations for best rank-1 tensor approximations
 SIAM Journal on Matrix Analysis and Applications
A Broyden class of quasi-Newton methods for Riemannian optimization
 Accepted in SIAM Journal on Optimization
, 2014
Abstract

Cited by 2 (2 self)
Abstract. This paper develops and analyzes a generalization of the Broyden class of quasi-Newton methods to the problem of minimizing a smooth objective function f on a Riemannian manifold. A condition on vector transport and retraction that guarantees convergence and facilitates efficient computation is derived. Experimental evidence is presented demonstrating the value of the extension to the Riemannian Broyden class through superior performance for some problems compared to existing Riemannian BFGS methods, in particular those that depend on differentiated retraction. Key words. Riemannian optimization; manifold optimization; quasi-Newton methods; Broyden methods; Stiefel manifold. AMS subject classifications. 65K05, 90C48, 90C53
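The two geometric ingredients a Riemannian quasi-Newton method needs, a retraction and a vector transport, can be illustrated on the unit sphere. The sketch below runs plain Riemannian gradient descent (deliberately simpler than the paper's Broyden class) on the Rayleigh quotient f(x) = x^T A x, using normalization as the retraction and tangent-space projection as the vector transport; all names are ours:

```python
import numpy as np

def rgd_sphere(A, n_iter=500, step=0.1, seed=0):
    """Riemannian gradient descent for f(x) = x^T A x on the unit sphere.
    The minimizer is an eigenvector of A for its smallest eigenvalue."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        egrad = 2 * A @ x
        # Riemannian gradient: project the Euclidean gradient to the
        # tangent space {v : v^T x = 0} at x.
        rgrad = egrad - (x @ egrad) * x
        x_new = x - step * rgrad
        x_new /= np.linalg.norm(x_new)  # retraction back to the sphere
        # Vector transport by projection: how a quasi-Newton method would
        # move rgrad into the tangent space at the new point before
        # forming its secant pair (illustration only, unused here).
        transported = rgrad - (x_new @ rgrad) * x_new
        x = x_new
    return x
```

A Riemannian Broyden/BFGS method replaces the fixed step with a Hessian-approximation update built from such transported gradient differences; the retraction and transport shown here are among the cheap choices the paper's convergence condition is designed to cover.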