Results 1–10 of 27
Indexing by latent semantic analysis
Journal of the American Society for Information Science, 1990
Cited by 2703 (32 self)
A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents ("semantic structure") in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term-by-document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100-item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with suprathreshold cosine values are returned. Initial tests find this completely automatic method for retrieval to be promising.
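A minimal numpy sketch of the indexing scheme this abstract describes, with toy term-by-document counts and a rank-2 truncation standing in for the ca. 100 factors (all data and dimensions here are illustrative):

```python
import numpy as np

# Toy term-by-document matrix (rows = terms, columns = documents);
# the counts are made up for illustration.
A = np.array([
    [3.0, 1.0, 0.0, 0.0],   # term "tensor"
    [2.0, 0.0, 1.0, 0.0],   # term "matrix"
    [0.0, 2.0, 2.0, 0.0],   # term "rank"
    [0.0, 0.0, 0.0, 2.0],   # term "retrieval"
])

k = 2                                            # stands in for ca. 100 factors
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T        # rank-k factors

doc_vecs = Vk * sk                               # documents in factor space

# Fold a query in as a pseudo-document: q_hat = q^T U_k diag(1/s_k).
q = np.array([1.0, 0.0, 1.0, 0.0])               # query: "tensor", "rank"
q_vec = (q @ Uk) / sk

# Rank documents by cosine similarity against the query vector.
cos = (doc_vecs @ q_vec) / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-12)
print(np.argsort(-cos))                          # documents, best match first
```

Documents above the chosen cosine threshold would then be returned, as in the paper.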
A multilinear singular value decomposition
SIAM J. Matrix Anal. Appl., 2000
Cited by 234 (14 self)
We discuss a multilinear generalization of the singular value decomposition. There is a strong analogy between several properties of the matrix and the higher-order tensor decomposition; uniqueness, the link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed. We investigate how tensor symmetries affect the decomposition and propose a multilinear generalization of the symmetric eigenvalue decomposition for pairwise symmetric tensors.
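The decomposition this abstract discusses (commonly called the higher-order SVD) can be sketched in a few lines of numpy; `unfold` and `hosvd` are illustrative helper names, not from the paper:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mul(T, M, mode):
    """Mode-n product T x_n M, i.e. M applied along axis `mode` of T."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    """Higher-order SVD: one orthogonal factor per mode plus a core tensor."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    S = T
    for n, Un in enumerate(U):
        S = mode_mul(S, Un.T, n)          # core = T x_1 U1^T x_2 U2^T x_3 U3^T
    return S, U

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4, 5))
S, U = hosvd(T)

# The decomposition is exact: T = S x_1 U1 x_2 U2 x_3 U3.
R = S
for n, Un in enumerate(U):
    R = mode_mul(R, Un, n)
print(np.allclose(R, T))                  # reconstruction recovers T
```

Truncating the columns of each factor matrix gives the multilinear rank reduction analogous to a truncated matrix SVD.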
Blind PARAFAC receivers for DS-CDMA systems
IEEE Trans. Signal Processing, 2000
Cited by 71 (14 self)
This paper links the direct-sequence code-division multiple access (DS-CDMA) multiuser separation-equalization-detection problem to the parallel factor (PARAFAC) model, which is an analysis tool rooted in psychometrics and chemometrics. Exploiting this link, it derives a deterministic blind PARAFAC DS-CDMA receiver with performance close to non-blind minimum mean-squared error (MMSE). The proposed PARAFAC receiver capitalizes on code, spatial, and temporal diversity combining, thereby supporting small sample sizes, more users than sensors, and/or less spreading than users. Interestingly, PARAFAC does not require knowledge of spreading codes, the specifics of multipath (inter-chip interference), DOA calibration information, finite alphabet/constant modulus, or statistical independence/whiteness to recover the information-bearing signals. Instead, PARAFAC relies on a fundamental result regarding the uniqueness of low-rank three-way array decomposition due to Kruskal (and generalized herein to the complex-valued case) that guarantees identifiability of all relevant signals and propagation parameters. These and other issues are also demonstrated in pertinent simulation experiments.
Parallel Factor Analysis in Sensor Array Processing
IEEE Trans. Signal Processing, 2000
Cited by 69 (15 self)
This paper links multiple invariance sensor array processing (MISAP) to parallel factor (PARAFAC) analysis, which is a tool rooted in psychometrics and chemometrics. PARAFAC is a common name for low-rank decomposition of three- and higher-way arrays. This link facilitates the derivation of powerful identifiability results for MISAP, shows that the uniqueness of single- and multiple-invariance ESPRIT stems from the uniqueness of low-rank decomposition of three-way arrays, and allows tapping into the available expertise for fitting the PARAFAC model. The results are applicable to both data-domain and subspace MISAP formulations. The paper also includes a constructive uniqueness proof for a special PARAFAC model.
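Fitting the PARAFAC model, which this and the previous abstract both rely on, is typically done by alternating least squares; a minimal numpy sketch on a synthetic exact-rank tensor (helper names and unfolding conventions are illustrative, one of several in use):

```python
import numpy as np

def unfold(T, mode):
    """Matricize T so that the chosen mode indexes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of two factor matrices."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=500, seed=0):
    """Fit the three-way PARAFAC model T ~ sum_r a_r (x) b_r (x) c_r by ALS."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(iters):
        # Each step is a linear least-squares solve for one factor matrix.
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Recover an exact rank-2 synthetic tensor.
rng = np.random.default_rng(1)
At, Bt, Ct = (rng.standard_normal((s, 2)) for s in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', At, Bt, Ct)
A, B, C = cp_als(T, rank=2)
rel_err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T)
```

Kruskal's uniqueness condition is what guarantees that, up to permutation and scaling, the recovered factors match the generating ones.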
Computation of the canonical decomposition by means of a simultaneous generalized Schur decomposition
SIAM J. Matrix Anal. Appl., 2004
Cited by 37 (7 self)
The canonical decomposition of higher-order tensors is a key tool in multilinear algebra. First we review the state of the art. Then we show that, under certain conditions, the problem can be rephrased as the simultaneous diagonalization, by equivalence or congruence, of a set of matrices. Necessary and sufficient conditions for the uniqueness of these simultaneous matrix decompositions are derived. In a next step, the problem can be translated into a simultaneous generalized Schur decomposition, with orthogonal unknowns [A. J. van der Veen and A. Paulraj, IEEE Trans. Signal Process., 44 (1996), pp. 1136–1155]. A first-order perturbation analysis of the simultaneous generalized Schur decomposition is carried out. We discuss some computational techniques (including a new Jacobi algorithm) and illustrate their behavior by means of a number of numerical experiments.
Enhanced line search: A novel method to accelerate Parafac
In Eusipco’05, 2005
Cited by 30 (8 self)
Several modifications have been proposed to speed up the alternating least squares (ALS) method of fitting the PARAFAC model. The most widely used is line search, which extrapolates from linear trends in the parameter changes over prior iterations to estimate the parameter values that would be obtained after many additional ALS iterations. We propose some extensions of this approach that incorporate a more sophisticated extrapolation, using information on nonlinear trends in the parameters and changing all the parameter sets simultaneously. The new method, called "enhanced line search" (ELS), can be implemented at different levels of complexity, depending on how many different extrapolation parameters (for different modes) are jointly optimized during each iteration. We report some tests of the simplest version, using simulated data. The performance of this lowest level of ELS depends on the nature of the convergence difficulty. It significantly outperforms standard line search when there is a "convergence bottleneck," a situation where some modes have almost collinear factors but others do not, but is somewhat less effective in classic "swamp" situations where factors are highly collinear in all modes. This is illustrated by examples. To demonstrate how ELS can be adapted to different N-way decompositions, we also apply it to a four-way array to perform a blind identification of an underdetermined mixture (UDM). Since analysis of this dataset happens to involve a serious convergence "bottleneck" (collinear factors in two of the four modes), it provides another example of a situation in which ELS dramatically outperforms standard line search. Key words: PARAFAC, alternating least squares (ALS), line search, enhanced line search (ELS), acceleration, swamps, bottlenecks, collinear factors, degeneracy.
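For intuition, the standard line-search step that ELS refines can be sketched on the simpler matrix ALS problem min ||M - A B^T||: extrapolate along the most recent parameter change and keep the jump only if the residual drops. The step length 2.0 and the helper names here are illustrative, not from the paper:

```python
import numpy as np

def als_sweep(M, A, B):
    """One alternating-least-squares sweep for M ~ A @ B.T."""
    A = M @ B @ np.linalg.pinv(B.T @ B)
    B = M.T @ A @ np.linalg.pinv(A.T @ A)
    return A, B

def err(M, A, B):
    return np.linalg.norm(M - A @ B.T)

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))  # exact rank 3
A, B = rng.standard_normal((20, 3)), rng.standard_normal((15, 3))

prev = (A.copy(), B.copy())
for _ in range(50):
    A, B = als_sweep(M, A, B)
    # Line search: jump along the change from the previous sweep and keep
    # the extrapolated point only if it lowers the residual.
    Ax = prev[0] + 2.0 * (A - prev[0])
    Bx = prev[1] + 2.0 * (B - prev[1])
    if err(M, Ax, Bx) < err(M, A, B):
        A, B = Ax, Bx
    prev = (A.copy(), B.copy())
```

ELS goes further by optimizing separate step lengths for the different modes jointly at each iteration, which is what helps in the "bottleneck" cases the abstract describes.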
Tensor-CUR Decompositions for Tensor-Based Data
SIAM J. Matrix Anal. Appl., 2008
Cited by 21 (6 self)
Motivated by numerous applications in which the data may be modeled by a variable subscripted by three or more indices, we develop a tensor-based extension of the matrix CUR decomposition. The tensor-CUR decomposition is most relevant as a data analysis tool when the data consist of one mode that is qualitatively different from the others. In this case, the tensor-CUR decomposition approximately expresses the original data tensor in terms of a basis consisting of underlying subtensors that are actual data elements and thus have a natural interpretation in terms of the processes generating the data. Assume the data may be modeled as a (2+1)-tensor, i.e., an m×n×p tensor A in which the first two modes are similar and the third is qualitatively different. We refer to each of the p different m×n matrices as "slabs" and each of the mn different p-vectors as "fibers." In this case, the tensor-CUR algorithm computes an approximation to the data tensor A that is of the form CUR, where C is an m×n×c tensor consisting of a small number c of the slabs, R is an r×p matrix consisting of a small number r of the fibers, and U is an appropriately defined and easily computed c×r encoding matrix. Both C and R may be chosen by randomly sampling either slabs or fibers according to a judiciously chosen and data-dependent probability distribution, and both c and r depend on a rank parameter k, an error parameter ɛ, and a failure probability δ.
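The roles of C, U, and R are easiest to see in the matrix analogue of the algorithm. A hedged numpy sketch, using squared-norm importance sampling and a pseudoinverse-based encoding matrix as one standard choice (the tensor version samples slabs and fibers in the same spirit; all dimensions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 25))  # low-rank data

def sample_indices(sq_norms, count, rng):
    """Sample indices with probability proportional to squared norms."""
    p = sq_norms / sq_norms.sum()
    return rng.choice(len(p), size=count, replace=False, p=p)

c, r = 8, 8
cols = sample_indices((A**2).sum(axis=0), c, rng)   # data-dependent sampling
rows = sample_indices((A**2).sum(axis=1), r, rng)

C = A[:, cols]          # actual columns of the data (the "slabs" analogue)
R = A[rows, :]          # actual rows of the data (the "fibers" analogue)
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)       # small encoding matrix

rel_err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

Because C and R are actual data elements, the resulting basis stays interpretable in the units of the original measurements, which is the selling point of CUR over an SVD basis.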
Cramér-Rao Lower Bounds for Low-Rank Decomposition of Multidimensional Arrays
IEEE Trans. Signal Processing, 2001
Cited by 16 (5 self)
Unlike low-rank matrix decomposition, which is generically nonunique for rank greater than one, low-rank three- and higher-dimensional array decomposition is unique, provided that the array rank is lower than a certain bound and the correct number of components (equal to the array rank) is sought in the decomposition. Parallel factor (PARAFAC) analysis is a common name for low-rank decomposition of higher-dimensional arrays. This paper develops Cramér-Rao bound (CRB) results for low-rank decomposition of three- and four-dimensional (3-D and 4-D) arrays, illustrates the behavior of the resulting bounds, and compares alternating least squares algorithms that are commonly used to compute such decompositions with the respective CRBs. Simple-to-check necessary conditions for a unique low-rank decomposition are also provided. Index terms: Cramér-Rao bound, least squares method, matrix decomposition, multidimensional signal processing.
Nonnegative matrix approximation: algorithms and applications
2006
Cited by 14 (3 self)
Low-dimensional data representations are crucial to numerous applications in machine learning, statistics, and signal processing. Nonnegative matrix approximation (NNMA) is a method for dimensionality reduction that respects the nonnegativity of the input data while constructing a low-dimensional approximation. NNMA has been used in a multitude of applications, though without commensurate theoretical development. In this report we describe generic methods for minimizing generalized divergences between the input and its low-rank approximant. Some of our general methods are even extensible to arbitrary convex penalties. Our methods yield efficient multiplicative iterative schemes for solving the proposed problems. We also consider interesting extensions such as the use of penalty functions, nonlinear relationships via "link" functions, weighted errors, and multifactor approximations. We present some experiments as an illustration of our algorithms. For completeness, the report also includes a brief literature survey of the various algorithms and the applications of NNMA. Keywords: nonnegative matrix factorization, weighted approximation, Bregman divergence, multiplicative …
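For the Frobenius-norm objective, the multiplicative iterative schemes the report refers to specialize to the well-known Lee-Seung updates; a minimal numpy sketch on random nonnegative data (dimensions, rank, and the `nnma` helper name are illustrative):

```python
import numpy as np

def nnma(A, rank, iters=500, seed=0, eps=1e-9):
    """Multiplicative updates for min ||A - W H||_F with W, H >= 0 (Lee-Seung)."""
    rng = np.random.default_rng(seed)
    W = rng.random((A.shape[0], rank)) + eps
    H = rng.random((rank, A.shape[1])) + eps
    for _ in range(iters):
        # Each update multiplies by a nonnegative ratio, so W and H stay >= 0.
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
A = rng.random((15, 12))          # nonnegative input data
W, H = nnma(A, rank=4)
```

The same multiplicative pattern carries over to other Bregman divergences (e.g. the KL-type objective) by changing the update ratios, which is the generalization the report develops.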
Polar polytopes and recovery of sparse representations
2005
Cited by 13 (6 self)
Suppose we have a signal y which we wish to represent using a linear combination of a number of basis atoms a_i, y = ∑_i x_i a_i = Ax. The problem of finding the minimum ℓ0-norm representation for y is a hard problem. The Basis Pursuit (BP) approach proposes to find the minimum ℓ1-norm representation instead, which corresponds to a linear program (LP) that can be solved using modern LP techniques, and several recent authors have given conditions for the BP (minimum ℓ1-norm) and sparse (minimum ℓ0-norm) representations to be identical. In this paper, we explore this sparse representation problem using the geometry of convex polytopes, as recently introduced into the field by Donoho. By considering the dual LP we find that the so-called polar polytope P∗ of the centrally symmetric polytope P whose vertices are the atom pairs ±a_i is particularly helpful in providing geometrical insight into the optimality conditions given by Fuchs and Tropp for non-unit-norm atom sets. In exploring this geometry we are able to tighten some of these earlier results, showing for example that the Fuchs condition is both necessary and sufficient for ℓ1-unique-optimality, and that there are situations where Orthogonal Matching Pursuit (OMP) can eventually find all ℓ1-unique-optimal solutions with m nonzeros even if ERC fails for m, if allowed to run for more than m steps.
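Of the two solvers the abstract compares, OMP is the easier one to sketch; a minimal numpy version on a synthetic 3-sparse problem (the dimensions, seed, and `omp` helper are illustrative):

```python
import numpy as np

def omp(A, y, m):
    """Orthogonal Matching Pursuit: greedily select m atoms of A to represent y."""
    support, residual, coef = [], y.copy(), np.zeros(0)
    for _ in range(m):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Re-fit y on the selected atoms and update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, axis=0)          # unit-norm atoms
x_true = np.zeros(40)
x_true[[3, 17, 25]] = [1.5, -2.0, 1.0]  # a 3-sparse ground truth
y = A @ x_true
x_hat = omp(A, y, m=3)
```

BP would instead solve the ℓ1-minimization LP (e.g. by splitting x into positive and negative parts); the abstract's point is precisely about when the two procedures reach the same sparse solution.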