Results 1–10 of 33
Tensor Decompositions and Applications
SIAM Review, 2009
Abstract

Cited by 705 (17 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
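As a concrete illustration of the CP model described above, here is a minimal NumPy sketch of rebuilding a third-order tensor from its CP factor matrices (a stand-in for the MATLAB toolboxes named above; `cp_reconstruct` is an illustrative name, not a toolbox function):

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Rebuild a third-order tensor from CP factor matrices.

    A, B, C have shapes (I, R), (J, R), (K, R); the tensor is the
    sum over r of the rank-one outer products a_r o b_r o c_r.
    """
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
R = 2
A = rng.standard_normal((4, R))
B = rng.standard_normal((5, R))
C = rng.standard_normal((3, R))
T = cp_reconstruct(A, B, C)

# Equivalent explicit sum of R rank-one tensors
T2 = sum(np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
         for r in range(R))
assert np.allclose(T, T2)
```

The `einsum` form and the explicit sum of outer products are the same model; the former is simply vectorized.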
Stable recovery of sparse overcomplete representations in the presence of noise
IEEE Trans. Inform. Theory, 2006
Abstract

Cited by 462 (20 self)
Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis pursuit and matching pursuit algorithms. Furthermore, it is shown that these methods result in sparse approximations of the noisy data that contain only terms also appearing in the unique sparsest representation of the ideal noiseless signal.
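The greedy-pursuit route described above can be sketched with orthogonal matching pursuit (OMP), a standard greedy variant of matching pursuit; the dictionary size, sparsity level, and noise scale below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms,
    re-fitting all selected coefficients by least squares at each step."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)                   # unit-norm atoms (incoherent in expectation)
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]                    # a 2-sparse ideal representation
y = A @ x_true + 0.01 * rng.standard_normal(20)  # observed: noisy version
x_hat = omp(A, y, k=2)
```

For incoherent dictionaries and sufficiently sparse `x_true`, the recovered support typically matches the ideal one, in the spirit of the stability result above.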
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
2007
Abstract

Cited by 423 (37 self)
A full-rank matrix A ∈ ℝ^{n×m} with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena, in particular the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems, to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical …
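One of the "concrete, effective computational methods" alluded to above is convex relaxation: replace the combinatorial sparsity objective with the l1 norm. A plain NumPy sketch of iterative soft-thresholding (ISTA) for this surrogate (an illustrative generic solver, not an algorithm from the paper; the problem sizes below are arbitrary):

```python
import numpy as np

def ista(A, b, lam=0.01, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    a convex (l1) surrogate for the combinatorial sparsest-solution problem."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - b)) / L        # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft-threshold
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 30))              # underdetermined: n < m
x_true = np.zeros(30)
x_true[[4, 21]] = [1.0, -1.0]                  # a sparse solution of Ax = b
b = A @ x_true
x_hat = ista(A, b)
```

Because the objective decreases monotonically, `x_hat` fits `b` while staying sparse, illustrating how the l1 relaxation recovers sparse solutions of underdetermined systems under favorable conditions.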
Kruskal’s permutation lemma and the identification of Candecomp/Parafac and bilinear models with constant modulus constraints
 IEEE Trans. Signal Process
Abstract

Cited by 51 (6 self)
Abstract—CANDECOMP/PARAFAC (CP) analysis is an extension of low-rank matrix decomposition to higher-way arrays, which are also referred to as tensors. CP extends and unifies several array signal processing tools and has found applications ranging from multidimensional harmonic retrieval and angle-carrier estimation to blind multiuser detection. The uniqueness of the CP decomposition is not yet fully understood, despite its theoretical and practical significance. Toward this end, we first revisit Kruskal’s Permutation Lemma, a cornerstone result in the area, using an accessible basic linear algebra and induction approach. The new proof highlights the nature and limits of the identification process. We then derive two equivalent necessary and sufficient uniqueness conditions for the case where one of the component matrices involved in the decomposition is full column rank. These new conditions explain a curious example provided recently in a previous paper by Sidiropoulos, who showed that Kruskal’s condition is in general sufficient but not necessary for uniqueness, and that uniqueness depends on the particular joint pattern of zeros in the (possibly pretransformed) component matrices. As another interesting application of the Permutation Lemma, we derive a similar necessary and sufficient condition for unique bilinear factorization under constant modulus (CM) constraints, thus providing an interesting link to (and unification with) CP. Index Terms—CANDECOMP, constant modulus, identifiability, PARAFAC, SVD, three-way array analysis, uniqueness.
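Kruskal’s condition referenced above is stated in terms of the k-rank (Kruskal rank) of the factor matrices: for a third-order CP decomposition with R components, k_A + k_B + k_C ≥ 2R + 2 suffices for uniqueness. A brute-force NumPy sketch of the k-rank (exponential in the number of columns, illustrative only):

```python
import numpy as np
from itertools import combinations

def k_rank(A, tol=1e-10):
    """Kruskal rank (k-rank): the largest k such that EVERY set of k
    columns of A is linearly independent. Always k_rank(A) <= rank(A)."""
    n_cols = A.shape[1]
    for k in range(1, n_cols + 1):
        for cols in combinations(range(n_cols), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k - 1
    return n_cols

# A repeated column drops the k-rank below the ordinary rank:
A = np.array([[1., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
print(k_rank(A))                 # 1, although rank(A) = 2
print(k_rank(np.eye(3)))         # 3
```

The gap between rank and k-rank in the first example is exactly why k-rank, not rank, is the right currency for CP uniqueness arguments.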
Robust iterative fitting of multilinear models
IEEE Transactions on Signal Processing, 2005
Abstract

Cited by 29 (3 self)
Abstract—Parallel factor (PARAFAC) analysis is an extension of low-rank matrix decomposition to higher-way arrays, also referred to as tensors. It decomposes a given array into a sum of multilinear terms, analogous to the familiar bilinear vector outer products that appear in matrix decomposition. PARAFAC analysis generalizes and unifies common array processing models, like joint diagonalization and ESPRIT; it has found numerous applications from blind multiuser detection and multidimensional harmonic retrieval to clustering and nuclear magnetic resonance. The prevailing fitting algorithm in all these applications is based on (alternating) least squares, which is optimal for Gaussian noise. In many cases, however, measurement errors are far from Gaussian. In this paper, we develop two iterative algorithms for the least absolute error fitting of general multilinear models. The first is based on efficient interior point methods for linear programming, employed in an alternating fashion. The second is based on a weighted median filtering iteration, which is particularly appealing from a simplicity viewpoint. Both are guaranteed to converge in terms of absolute error. Performance is illustrated by means of simulations and compared to the pertinent Cramér–Rao bounds (CRBs). Index Terms—Array signal processing, non-Gaussian noise, parallel factor analysis, robust model fitting.
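The weighted-median iteration mentioned above rests on a classical scalar fact: for the one-parameter model y_i ≈ a·x_i, the least-absolute-error estimate of a is the weighted median of the ratios y_i/x_i with weights |x_i|. A minimal sketch of that scalar building block (illustrative only, not the paper’s full multilinear algorithm):

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: smallest value whose cumulative weight
    reaches half of the total weight."""
    values = np.asarray(values)
    weights = np.asarray(weights)
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def lad_scale(x, y):
    """argmin_a sum_i |y_i - a*x_i|  =  weighted median of y_i/x_i
    with weights |x_i| (least absolute deviations for a scale model)."""
    mask = x != 0
    return weighted_median(y[mask] / x[mask], np.abs(x[mask]))

x = np.array([1., 2., 3., 4.])
y = 2.0 * x
y[3] = 100.0                      # one gross (non-Gaussian) outlier
print(lad_scale(x, y))            # 2.0 (robust); least squares gives ~14.3
```

The contrast with the least-squares estimate ((x·y)/(x·x) ≈ 14.27 here) is the point of the paper’s robustness argument: absolute-error fitting shrugs off the outlier.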
Low complexity Damped Gauss–Newton algorithms for CANDECOMP/PARAFAC
SIAM Journal on Matrix Analysis and Applications (SIMAX), 2013
Low-rank decomposition of multi-way arrays: A signal processing perspective
In IEEE SAM, 2004
Abstract

Cited by 12 (0 self)
In many signal processing applications of linear algebra tools, the signal part of a postulated model lies in a so-called signal subspace, while the parameters of interest are in one-to-one correspondence with a certain basis of this subspace. The signal subspace can often be reliably estimated from measured data, but the particular basis of interest cannot be identified without additional problem-specific structure. This is a manifestation of rotational indeterminacy, i.e., non-uniqueness of low-rank matrix decomposition. The situation is very different for three- or higher-way arrays, i.e., arrays indexed by three or more independent variables, for which low-rank decomposition is unique under mild conditions. This has fundamental implications for DSP problems which deal with such data. This paper provides a brief tour of the basic elements of this theory, along with many examples of application in problems of current interest in the signal processing community. Keywords: Three-way analysis, low-rank decomposition, parallel factor analysis (PARAFAC), canonical decomposition (CANDECOMP)
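The rotational indeterminacy described above is easy to demonstrate numerically: mixing the factors of a low-rank matrix by any invertible transform (here a rotation) leaves the data matrix unchanged, so the factors cannot be identified from the matrix alone. A small NumPy check:

```python
import numpy as np

rng = np.random.default_rng(3)
I, J, R = 6, 5, 2
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
X = A @ B.T                               # rank-R matrix "data"

# Any orthogonal R x R matrix gives different factors, same product:
theta = 0.7
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
A2, B2 = A @ Rot, B @ Rot                 # valid because Rot @ Rot.T = I
assert np.allclose(A2 @ B2.T, X)          # same data, different "factors"
```

For three-way arrays no such continuum of equivalent factorizations exists under the mild conditions the paper tours, which is exactly the uniqueness advantage being advertised.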
Lathauwer: Block component model-based blind DS-CDMA receiver
IEEE Trans. Signal Process., 2008
Abstract

Cited by 12 (0 self)
Abstract—In this paper, we consider the problem of blind multiuser separation-equalization in the uplink of a wideband DS-CDMA system, in a multipath propagation environment with intersymbol interference (ISI). To solve this problem, we propose a multilinear algebraic receiver that relies on a new third-order tensor decomposition which generalizes the parallel factor (PARAFAC) model. Our method is deterministic and exploits the temporal, spatial, and spectral diversities to collect the received data in a third-order tensor. The specific algebraic structure of this tensor is then used to decompose it into a sum of the users' contributions. The so-called Block Component Model (BCM) receiver does not require knowledge of the spreading codes, the propagation parameters, or statistical independence of the sources, but relies instead on a fundamental uniqueness condition of the decomposition that guarantees identifiability of every user's contribution. The development of fast and reliable techniques to compute this decomposition is important. We propose a blind receiver based either on an alternating least squares (ALS) algorithm or on a Levenberg–Marquardt (LM) algorithm. Simulations illustrate the performance of the algorithms. Index Terms—Blind signal extraction, block component model
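For reference, the alternating least squares (ALS) idea mentioned above can be sketched for the plain third-order CP/PARAFAC model (the generic textbook algorithm, not the paper’s block component model receiver; all sizes below are arbitrary):

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product of U (I x R) and V (J x R): an (I*J) x R matrix."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def cp_als(T, R, n_iter=200, seed=0):
    """Plain alternating least squares for a third-order CP model: each
    step solves a linear least squares problem for one factor matrix,
    with the other two held fixed, via mode-n unfoldings."""
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    T1 = T.reshape(I, -1)                        # mode-1 unfolding, columns indexed by (j, k)
    T2 = np.moveaxis(T, 1, 0).reshape(J, -1)     # mode-2 unfolding, columns indexed by (i, k)
    T3 = np.moveaxis(T, 2, 0).reshape(K, -1)     # mode-3 unfolding, columns indexed by (i, j)
    for _ in range(n_iter):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Fit an exactly rank-2 synthetic tensor and check the reconstruction.
rng = np.random.default_rng(4)
A0 = rng.standard_normal((5, 2))
B0 = rng.standard_normal((6, 2))
C0 = rng.standard_normal((4, 2))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, R=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
```

Each update is an exact least squares solve, so the fit error is monotonically non-increasing; the convergence speed (and the motivation for Levenberg–Marquardt alternatives) depends on the conditioning of the factors.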
On the Uniqueness of the Canonical Polyadic Decomposition of Third-Order Tensors — Part I: Basic Results and Uniqueness of One Factor Matrix
 SIAM J. Matrix Anal. Appl
Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis
ESAT-STADIUS Internal Report, 2014
Abstract

Cited by 10 (1 self)
The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike that of matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction, and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real-world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis, and machine learning applications; these benefits also extend to vector/matrix data through tensorization.
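The tensorization mentioned in the closing sentence can be as simple as folding a vector into a multiway array; a separable signal (e.g., an exponential) then folds into a rank-one tensor. A small NumPy illustration (the 2×3×4 shape is an arbitrary choice):

```python
import numpy as np

# Tensorize a length-24 signal by folding it into a 2 x 3 x 4 array.
x = np.arange(24.0)
T = x.reshape(2, 3, 4)

# A separable signal folds into a rank-one tensor:
a = 2.0 ** np.arange(24)                  # a[n] = 2^n
Ta = a.reshape(2, 3, 4)
# Row-major folding means Ta[i, j, k] = 2^(12i + 4j + k)
#                                     = (2^12)^i * (2^4)^j * 2^k,
# i.e., an outer product of three vectors:
u = 2.0 ** (12 * np.arange(2))
v = 2.0 ** (4 * np.arange(3))
w = 2.0 ** np.arange(4)
assert np.allclose(Ta, np.einsum('i,j,k->ijk', u, v, w))
```

This is the mechanism by which tensor tools become available for plain vector or matrix data: structure hidden in a long signal becomes low rank after folding.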