Results 1–10 of 18
Tensor Decompositions and Applications
SIAM Review, 2009
Cited by 265 (15 self)
Abstract:
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
Secant varieties of Segre-Veronese varieties P^m × P^n embedded by O(1,2)
2008
Cited by 6 (1 self)
Abstract:
Let X_{m,n} be the Segre-Veronese variety P^m × P^n embedded by the morphism given by O(1,2). In this paper, we provide two functions s(m, n) ≤ s̄(m, n) such that the s-th secant variety of X_{m,n} has the expected dimension if s ≤ s(m, n) or s̄(m, n) ≤ s. We also present a conjecturally complete list of defective secant varieties of such Segre-Veronese varieties.
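For context (a standard fact of the field, not a claim taken from this paper): for an irreducible variety X ⊂ P^N of dimension n, the s-th secant variety σ_s(X) always satisfies

\[
\dim \sigma_s(X) \;\le\; \min\{\, s(n+1) - 1,\; N \,\},
\]

and the right-hand side is its expected dimension. A secant variety is called defective precisely when this inequality is strict, which is the situation the conjectural list above concerns.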
SYMMETRIC TENSOR DECOMPOSITION
2009
Cited by 5 (0 self)
Abstract:
We present an algorithm for decomposing a symmetric tensor of dimension n and order d as a sum of rank-one symmetric tensors, extending the algorithm of Sylvester devised in 1886 for symmetric tensors of dimension 2. We exploit the known fact that every symmetric tensor is equivalently represented by a homogeneous polynomial in n variables of total degree d. Thus the decomposition corresponds to a sum of powers of linear forms. The impact of this contribution is twofold. First, it permits an efficient computation of the decomposition of any tensor of subgeneric rank, as opposed to widely used iterative algorithms with unproved convergence (e.g., Alternating Least Squares or gradient descent). Second, it gives tools for understanding uniqueness conditions and for detecting the tensor rank.
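The order-2 case is the familiar one: a real symmetric matrix is a symmetric tensor of order d = 2, and its eigendecomposition writes it as a sum of rank-one symmetric terms. A toy sketch of that base case (the higher-order algorithm of the abstract needs algebraic machinery numpy does not provide):

```python
import numpy as np

# Order-2 analogue: decompose a symmetric matrix S as
#   S = sum_i lambda_i * v_i v_i^T,
# a sum of rank-one symmetric tensors.
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])
eigvals, eigvecs = np.linalg.eigh(S)

# Rebuild S from its rank-one symmetric pieces.
S_rebuilt = sum(lam * np.outer(v, v) for lam, v in zip(eigvals, eigvecs.T))
assert np.allclose(S, S_rebuilt)
```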
A note on compressed sensing and the complexity of matrix multiplication
Inf. Process. Lett. 109, 2009
Cited by 3 (0 self)
Abstract:
We consider the conjectured O(N^(2+ε)) time complexity of multiplying any two N × N matrices A and B. Our main result is a deterministic Compressed Sensing (CS) algorithm that both rapidly and accurately computes A · B provided that the resulting matrix product is sparse/compressible. As a consequence of our main result, we enlarge the class of matrices A for which, given any N × N matrix B, the exact computation of A · B can be carried out using the conjectured O(N^(2+ε)) operations. Additionally, in the process of developing our matrix multiplication procedure, we present a modified version of Indyk’s recently proposed extractor-based CS algorithm [12] which is resilient to noise. Key words: algorithms, analysis of algorithms, approximation algorithms, computational complexity
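A toy illustration of the regime the abstract targets (this is not the paper's CS algorithm, just the observation that a product A · B can be far sparser than A and B themselves):

```python
import numpy as np

# Rank-one factors with dense patterns whose product collapses to a
# single nonzero entry: A = e_2 x^T, B = y e_7^T, so A B = (x . y) e_2 e_7^T.
N = 8
x = np.arange(1.0, N + 1)          # dense vector, all entries nonzero
y = np.ones(N)                     # dense vector with x @ y != 0
A = np.outer(np.eye(N)[2], x)      # only row 2 of A is nonzero (N nonzeros)
B = np.outer(y, np.eye(N)[7])      # only column 7 of B is nonzero (N nonzeros)
C = A @ B                          # the product has exactly one nonzero entry

assert np.count_nonzero(A) == N and np.count_nonzero(B) == N
assert np.count_nonzero(C) == 1 and np.isclose(C[2, 7], x @ y)
```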
The I/O complexity of sparse matrix dense matrix multiplication
In Proceedings of LATIN ’10, 2010
Cited by 2 (1 self)
Abstract:
We consider the multiplication of a sparse N × N matrix A with a dense N × N matrix B in the I/O model. We determine the worst-case non-uniform complexity of this task up to a constant factor for all meaningful choices of the parameters N (dimension of the matrices), k (average number of nonzero entries per column or row of A, i.e., there are in total kN nonzero entries), M (main memory size), and B (block size), as long as M ≥ B^2 (tall-cache assumption). For large and small k, the structure of the algorithm does not need to depend on the structure of the sparse matrix A, whereas for intermediate densities it is possible and necessary to find submatrices that fit in memory and are slightly denser than average. The focus of this work is asymptotic worst-case complexity, i.e., the existence of matrices that require a certain number of I/Os and the existence of algorithms (sometimes depending on the shape of the sparse matrix) that use only a constant factor more I/Os.
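An in-memory (not I/O-model) sketch of the operation being analyzed: every nonzero A[i, j] = v contributes v times row j of the dense matrix to row i of the product, so the arithmetic work scales with kN, the total number of nonzeros in A. The triple-list representation below is a simplification chosen for illustration:

```python
import numpy as np

def sparse_dense_matmul(nonzeros, D, N):
    """nonzeros: list of (i, j, v) triples for the sparse N x N matrix A;
    D: the dense matrix. Returns A @ D."""
    C = np.zeros((N, D.shape[1]))
    for i, j, v in nonzeros:
        C[i] += v * D[j]          # one row-of-D update per nonzero of A
    return C

N = 4
D = np.arange(16.0).reshape(N, N)
A_nonzeros = [(0, 1, 2.0), (3, 2, -1.0)]   # kN = 2 nonzero entries in A
A_dense = np.zeros((N, N))
for i, j, v in A_nonzeros:
    A_dense[i, j] = v
assert np.allclose(sparse_dense_matmul(A_nonzeros, D, N), A_dense @ D)
```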
GENERATION AND SYZYGIES OF THE FIRST SECANT VARIETY
2009
Cited by 1 (1 self)
Abstract:
Under certain effective positivity conditions, we show that the secant variety to a smooth variety satisfies N_{3,p}. For smooth curves, we provide the best possible effective bound on the degree d of the embedding, d ≥ 2g + 3 + p.
RANKS OF TENSORS AND A GENERALIZATION OF SECANT VARIETIES
Cited by 1 (0 self)
Abstract:
We investigate differences between X-rank and X-border rank, focusing on the cases of tensors and partially symmetric tensors. As an aid to our study, and as an object of interest in its own right, we define notions of X-rank and border rank for a linear subspace. Results include determining and bounding the maximum X-rank of points in several cases of interest.
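A textbook example of the rank/border-rank gap the abstract refers to (a standard fact, not drawn from the paper itself): the tensor T = e_1 ⊗ e_1 ⊗ e_2 + e_1 ⊗ e_2 ⊗ e_1 + e_2 ⊗ e_1 ⊗ e_1 has rank 3 but border rank 2, because

\[
T \;=\; \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\Bigl( (e_1 + \varepsilon e_2)^{\otimes 3} - e_1^{\otimes 3} \Bigr),
\]

which exhibits T as a limit of tensors of rank 2 even though T itself cannot be written with fewer than three rank-one terms.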