Results 1–10 of 458
Tensor Decompositions and Applications
SIAM Review, 2009
Cited by 722 (18 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
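The CP model named in this abstract writes a tensor as a sum of rank-one terms; a minimal NumPy sketch (the sizes and variable names below are my own illustrative choices, not from the survey):

```python
import numpy as np

# Build a rank-2 third-order tensor as a sum of two rank-one terms,
# the form that CANDECOMP/PARAFAC (CP) recovers.
rng = np.random.default_rng(0)
a1, b1, c1 = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(3)
a2, b2, c2 = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(3)

# Each term a ∘ b ∘ c is rank-one: T[i, j, k] = a[i] * b[j] * c[k].
T = np.einsum('i,j,k->ijk', a1, b1, c1) + np.einsum('i,j,k->ijk', a2, b2, c2)
print(T.shape)  # (4, 5, 3)
```

Fitting CP to data reverses this construction: given T, find factor vectors whose rank-one terms sum back to (approximately) T.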
The geometry of algorithms with orthogonality constraints
SIAM J. Matrix Anal. Appl., 1998
Cited by 638 (1 self)
In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structure computations, and signal processing. In addition to the new algorithms, we show how the geometrical framework yields penetrating new insights, allowing us to create, understand, and compare algorithms. The theory proposed here provides a taxonomy for numerical linear algebra algorithms that offers a top-level mathematical view of previously unrelated algorithms. It is our hope that developers of new algorithms and perturbation theories will benefit from the theory, methods, and examples in this paper.
TENSOR RANK AND THE ILL-POSEDNESS OF THE BEST LOW-RANK APPROXIMATION PROBLEM
Cited by 192 (13 self)
There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart–Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders, and ranks, regardless of the choice of norm (or even Brègman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e., matrices). In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem, by using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step towards more general results. To this end, we present a detailed analysis of equivalence classes of 2 × 2 × 2 tensors, and we develop methods for extending results upwards to higher orders and dimensions. Finally, we link our work to existing studies of tensors from an algebraic geometric point of view. The rank of a tensor can in theory be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular we make extensive use of the hyperdeterminant Δ on R^{2×2×2}.
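The failure mode described in this abstract can be checked numerically. The sketch below is my own construction, following the standard 2 × 2 × 2 border-rank example: a rank-3 tensor T is approximated arbitrarily well by a family of rank-2 tensors, so the infimum of the rank-2 approximation error is 0 and is never attained.

```python
import numpy as np

def outer3(x, y, z):
    # Rank-one order-3 tensor: (x ∘ y ∘ z)[i, j, k] = x[i] * y[j] * z[k].
    return np.einsum('i,j,k->ijk', x, y, z)

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

# T = a∘a∘b + a∘b∘a + b∘a∘a has rank 3.
T = outer3(a, a, b) + outer3(a, b, a) + outer3(b, a, a)

# Each T_eps below is a difference of two rank-one tensors, hence rank ≤ 2,
# and expanding the product shows ||T - T_eps|| = O(eps).
errs = []
for eps in (1e-1, 1e-3, 1e-5):
    T_eps = (outer3(a + eps*b, a + eps*b, a + eps*b) - outer3(a, a, a)) / eps
    errs.append(np.linalg.norm(T - T_eps))
print(errs)  # errors shrink roughly proportionally to eps
```

Since rank-2 tensors come arbitrarily close to T but never reach it, no best rank-2 approximation of T exists.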
Hierarchical singular value decomposition of tensors
 SIAM Journal on Matrix Analysis and Applications
Cited by 178 (11 self)
Abstract. We define the hierarchical singular value decomposition (SVD) for tensors of order d ≥ 2. This hierarchical SVD has properties like the matrix SVD (and collapses to the SVD in d = 2), and we prove these. In particular, one can find low-rank (almost) best approximations in a hierarchical format (H-Tucker) which requires only O((d−1)k^3 + dnk) parameters, where d is the order of the tensor, n the size of the modes, and k the (hierarchical) rank. The H-Tucker format is a specialization of the Tucker format and it contains as a special case all (canonical) rank-k tensors. Based on this new concept of a hierarchical SVD we present algorithms for hierarchical tensor calculations allowing for a rigorous error analysis. The complexity of the truncation (finding lower-rank approximations to hierarchical rank-k tensors) is in O((d−1)k^4 + dnk^2), and the attainable accuracy is just 2–3 digits less than machine precision.
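The parameter count quoted in this abstract can be made concrete with a quick back-of-envelope computation (the values of d, n, and k below are my own illustrative choices):

```python
# Storage of a dense order-d tensor versus the (d-1)k^3 + d*n*k parameter
# count of the hierarchical (H-Tucker) format quoted in the abstract.
d, n, k = 10, 100, 5
full = n ** d                           # dense storage: n^d entries
htucker = (d - 1) * k**3 + d * n * k    # hierarchical format parameters
print(full, htucker)  # 100000000000000000000 vs 6125
```

For these sizes the hierarchical format needs about 6 thousand parameters where dense storage would need 10^20, which is the point of the format: the cost grows linearly in d rather than exponentially.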
Context-aware recommender systems.
In Proceedings of the 2008 ACM Conference on Recommender Systems, RecSys ’08, 2008
Cited by 161 (28 self)
Abstract This chapter aims to provide an overview of the class of multi-criteria recommender systems. First, it defines the recommendation problem as a multi-criteria decision making (MCDM) problem, and reviews MCDM methods and techniques that can support the implementation of multi-criteria recommenders. Then, it focuses on the category of multi-criteria rating recommenders, techniques that provide recommendations by modelling a user's utility for an item as a vector of ratings along several criteria. A review of current algorithms that use multi-criteria ratings for calculating predictions and generating recommendations is provided. Finally, the chapter concludes with a discussion of open issues and future challenges for the class of multi-criteria rating recommenders.
Singular values and eigenvalues of tensors: a variational approach.
Proceedings of the IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2005
On the best rank-1 and rank-(R1, R2, ..., RN) approximation of higher-order tensors
 SIAM Journal on Matrix Analysis and Applications
Cited by 108 (3 self)
Abstract. In this paper we discuss a multilinear generalization of the best rank-R approximation problem for matrices, namely, the approximation of a given higher-order tensor, in an optimal least-squares sense, by a tensor that has prespecified column rank value, row rank value, etc. For matrices, the solution is conceptually obtained by truncation of the singular value decomposition (SVD); however, this approach does not have a straightforward multilinear counterpart. We discuss higher-order generalizations of the power method and the orthogonal iteration method.
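For the rank-1 case, the higher-order power method mentioned in this abstract alternately contracts the tensor against two of the three factor vectors and renormalizes. The NumPy sketch below is my own minimal rendering of that idea, not the paper's exact algorithm statement:

```python
import numpy as np

def rank1_power_method(T, iters=100, seed=0):
    # Alternating updates for the best rank-1 approximation lam * u∘v∘w
    # of a third-order tensor T (a sketch of the higher-order power method).
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T.shape[0]); u /= np.linalg.norm(u)
    v = rng.standard_normal(T.shape[1]); v /= np.linalg.norm(v)
    w = rng.standard_normal(T.shape[2]); w /= np.linalg.norm(w)
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)
    return lam, u, v, w

# Sanity check: on an exactly rank-1 tensor 2 * x∘x∘x with ||x|| = 1,
# the method recovers the factor (up to sign) and the weight 2.
x = np.array([3.0, 4.0]) / 5.0
T = 2.0 * np.einsum('i,j,k->ijk', x, x, x)
lam, u, v, w = rank1_power_method(T)
print(lam)  # ≈ 2.0 (up to floating-point error)
```

For the rank-(R1, R2, ..., RN) problem the analogous scheme updates whole orthonormal factor matrices per mode (an orthogonal-iteration-style generalization) rather than single vectors.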
Symmetric tensors and symmetric tensor rank
Scientific Computing and Computational Mathematics (SCCM), 2006
Cited by 99 (20 self)
Abstract. A symmetric tensor is a higher-order generalization of a symmetric matrix. In this paper, we study various properties of symmetric tensors in relation to a decomposition into a symmetric sum of outer products of vectors. A rank-1 order-k tensor is the outer product of k nonzero vectors. Any symmetric tensor can be decomposed into a linear combination of rank-1 tensors, each of them being symmetric or not. The rank of a symmetric tensor is the minimal number of rank-1 tensors that is necessary to reconstruct it. The symmetric rank is obtained when the constituting rank-1 tensors are imposed to be themselves symmetric. It is shown that rank and symmetric rank are equal in a number of cases, and that they always exist in an algebraically closed field. We discuss the notion of the generic symmetric rank, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order. We also show that the set of symmetric tensors of symmetric rank at most r is not closed, unless r = 1. Key words. Tensors, multiway arrays, outer product decomposition, symmetric outer product decomposition, CANDECOMP, PARAFAC, tensor rank, symmetric rank, symmetric tensor rank, generic symmetric rank, maximal symmetric rank, quantics. AMS subject classifications. 15A03, 15A21, 15A72, 15A69, 15A18.
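A small sanity check (my own illustration, not from the paper) of the objects defined in this abstract: a linear combination of symmetric rank-1 terms v∘v∘v is itself symmetric, i.e., invariant under every permutation of its indices.

```python
import numpy as np

# Build an order-3 symmetric tensor as a combination of symmetric
# rank-1 terms v∘v∘v and w∘w∘w.
rng = np.random.default_rng(0)
v, w = rng.standard_normal(3), rng.standard_normal(3)
S = np.einsum('i,j,k->ijk', v, v, v) + 2.0 * np.einsum('i,j,k->ijk', w, w, w)

# S[i,j,k] is unchanged by every permutation of (i, j, k).
for perm in ('ikj', 'jik', 'jki', 'kij', 'kji'):
    assert np.allclose(S, np.einsum('ijk->' + perm, S))
print('symmetric under all index permutations')
```

The paper's symmetric rank asks for the fewest such symmetric rank-1 terms needed to express a given symmetric tensor exactly.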
Convolutive speech bases and their application to supervised speech separation
IEEE Transactions on Audio, Speech and Language Processing, 2007
Cited by 93 (7 self)
In this paper we present a convolutive basis decomposition method and its application to simultaneous speaker separation from monophonic recordings. The model we propose is a convolutive version of the nonnegative matrix factorization algorithm. Due to the nonnegativity constraint, this type of coding is very well suited for intuitively and efficiently representing magnitude spectra. We present results that reveal the nature of these basis functions and we demonstrate their utility in separating monophonic mixtures of known speakers.
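For context, here is a hedged sketch of the non-convolutive building block this abstract extends: the standard Lee–Seung multiplicative updates for nonnegative matrix factorization V ≈ WH. The convolutive model of the paper, which time-shifts the bases, is not reproduced here, and the function and variable names are my own.

```python
import numpy as np

def nmf(V, r, iters=200, seed=0, eps=1e-9):
    # Lee-Seung multiplicative updates for the Frobenius-norm NMF objective.
    # The small eps keeps denominators positive; updates preserve nonnegativity.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy usage on a nonnegative matrix standing in for a magnitude spectrogram.
V = np.abs(np.random.default_rng(1).standard_normal((20, 30)))
W, H = nmf(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)  # relative reconstruction error, well below 1 for this toy V
```

In the convolutive variant each basis is a short sequence of spectral slices rather than a single column, so one factor captures time-varying speech events instead of static spectra.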