Results 1–9 of 9
Tensor Decompositions and Applications
SIAM Review, 2009
Abstract

Cited by 237 (14 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
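As a minimal illustration of the CP form described in this abstract (a sketch in NumPy rather than the MATLAB toolboxes the survey lists; the shapes and the rank R are arbitrary illustrative choices), the following builds a tensor from factor matrices and checks that it equals the explicit sum of rank-one outer products:

```python
import numpy as np

# A CP model expresses a tensor as a sum of R rank-one tensors:
#   T[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
rng = np.random.default_rng(0)
R = 2                                # illustrative CP rank
A, B, C = (rng.standard_normal((n, R)) for n in (4, 5, 6))

# Assemble the tensor directly from its factor matrices
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Equivalent explicit sum of rank-one (outer-product) terms
T_sum = sum(np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
            for r in range(R))
print(np.allclose(T, T_sum))  # the two constructions agree
```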
TENSOR RANK AND THE ILL-POSEDNESS OF THE BEST LOW-RANK APPROXIMATION PROBLEM
Abstract

Cited by 71 (10 self)
There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart–Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders, and ranks, regardless of the choice of norm (or even Brègman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e., matrices). In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem, by using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step towards more general results. To this end, we present a detailed analysis of equivalence classes of 2 × 2 × 2 tensors, and we develop methods for extending results upwards to higher orders and dimensions. Finally, we link our work to existing studies of tensors from an algebraic geometric point of view. The rank of a tensor can in theory be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular we make extensive use of the hyperdeterminant ∆ on R^{2×2×2}.
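The nonexistence of best low-rank approximations can be seen numerically. The sketch below (a standard example of the kind this abstract describes, implemented in NumPy) builds a rank-3 tensor of border rank 2 and shows rank-2 tensors approaching it arbitrarily closely, so no best rank-2 approximation exists:

```python
import numpy as np

e1, e2 = np.eye(2)

def outer3(a, b, c):
    # order-3 outer product a ⊗ b ⊗ c
    return np.einsum('i,j,k->ijk', a, b, c)

# T = e1⊗e1⊗e2 + e1⊗e2⊗e1 + e2⊗e1⊗e1 has rank 3,
# yet rank-2 tensors come arbitrarily close to it.
T = outer3(e1, e1, e2) + outer3(e1, e2, e1) + outer3(e2, e1, e1)

errs = []
for n in [1, 10, 100, 1000]:
    a = e1 + e2 / n
    # A is a sum of two rank-one terms, hence rank ≤ 2
    A = n * outer3(a, a, a) - n * outer3(e1, e1, e1)
    errs.append(np.linalg.norm(A - T))
print(errs)  # shrinks toward 0 as n grows
```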
Multiplying matrices faster than Coppersmith–Winograd
In Proc. 44th ACM Symposium on Theory of Computing (STOC), 2012
Abstract

Cited by 41 (4 self)
We develop new tools for analyzing matrix multiplication constructions similar to the Coppersmith–Winograd construction, and obtain a new improved bound of ω < 2.3727.
Symmetric tensors and symmetric tensor rank
Scientific Computing and Computational Mathematics (SCCM), 2006
Abstract

Cited by 40 (18 self)
A symmetric tensor is a higher-order generalization of a symmetric matrix. In this paper, we study various properties of symmetric tensors in relation to a decomposition into a symmetric sum of outer products of vectors. A rank-1 order-k tensor is the outer product of k nonzero vectors. Any symmetric tensor can be decomposed into a linear combination of rank-1 tensors, each of them symmetric or not. The rank of a symmetric tensor is the minimal number of rank-1 tensors needed to reconstruct it. The symmetric rank is obtained when the constituting rank-1 tensors are required to be themselves symmetric. It is shown that rank and symmetric rank are equal in a number of cases, and that they always exist over an algebraically closed field. We discuss the notion of the generic symmetric rank, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order. We also show that the set of symmetric tensors of symmetric rank at most r is not closed unless r = 1.
Key words: tensors, multiway arrays, outer product decomposition, symmetric outer product decomposition, CANDECOMP, PARAFAC, tensor rank, symmetric rank, symmetric tensor rank, generic symmetric rank, maximal symmetric rank, quantics.
AMS subject classifications: 15A03, 15A21, 15A72, 15A69, 15A18.
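A quick NumPy sketch of the symmetric outer product decomposition this abstract discusses: a linear combination of symmetric rank-1 terms λ_i · v_i ⊗ v_i ⊗ v_i is itself fully symmetric. The vectors and weights below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# A symmetric order-3 tensor built as a linear combination of
# symmetric rank-one terms λ_i · v_i ⊗ v_i ⊗ v_i
vs = rng.standard_normal((3, 4))      # three vectors in R^4 (illustrative)
lams = np.array([1.0, -2.0, 0.5])
T = sum(l * np.einsum('i,j,k->ijk', v, v, v) for l, v in zip(lams, vs))

# Full symmetry: T is invariant under every permutation of its indices
for perm in [(0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]:
    assert np.allclose(T, np.transpose(T, perm))
print("fully symmetric")
```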
Geometry and the complexity of matrix multiplication
2007
Abstract

Cited by 14 (2 self)
We survey results in algebraic complexity theory, focusing on matrix multiplication. Our goals are (i) to show how open questions in algebraic complexity theory are naturally posed as questions in geometry and representation theory, (ii) to motivate researchers to work on these questions, and (iii) to point out relations with more general problems in geometry. The key geometric objects for our study are the secant varieties of Segre varieties. We explain how these varieties are also useful for algebraic statistics, the study of phylogenetic invariants, and quantum computing.
The border rank of the multiplication of 2 × 2 matrices is seven
J. Amer. Math. Soc.
Abstract

Cited by 9 (3 self)
One of the leading problems of algebraic complexity theory is matrix multiplication. The naïve multiplication of two n × n matrices uses n^3 multiplications. In 1969, Strassen [20] presented an explicit algorithm for multiplying 2 × 2 matrices using seven multiplications. In the opposite direction, Hopcroft and Kerr [12] and …
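Strassen's seven-multiplication scheme for 2 × 2 matrices can be written out directly; this is a standard rendering of his 1969 identities, checked here against ordinary matrix multiplication:

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2×2 matrices with 7 scalar multiplications (Strassen, 1969)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(np.allclose(strassen_2x2(A, B), A @ B))  # prints True
```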
On the Arithmetic Complexity of Strassen-Like Matrix Multiplications
Abstract
The Strassen algorithm for multiplying 2 × 2 matrices requires seven multiplications and 18 additions. The recursive use of this algorithm for matrices of dimension n yields a total arithmetic complexity of (7n^2.81 − 6n^2) for n = 2^k. Winograd showed that seven multiplications is optimal for this kind of multiplication, so any algorithm for multiplying 2 × 2 matrices with seven multiplications is called a Strassen-like algorithm. Winograd also discovered an additively optimal Strassen-like algorithm with 15 additions. This algorithm, known as Winograd's variant, has arithmetic complexity (6n^2.81 − 5n^2) for n = 2^k and (3.73n^2.81 − 5n^2) for n = 8 · 2^k, which is the best-known bound for Strassen-like multiplications. This paper proposes a method that reduces the complexity of Winograd's variant to (5n^2.81 + 0.5n^2.59 + 2n^2.32 − 6.5n^2) for n = 2^k. It is also shown that the total arithmetic complexity can be improved to (3.55n^2.81 + 0.148n^2.59 + 1.02n^2.32 − 6.5n^2) for n = 8 · 2^k, which, to the best of our knowledge, improves the best-known bound for a Strassen-like matrix multiplication algorithm.
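The closed-form operation counts quoted above can be checked against their defining recurrences. Assuming the usual normalization M(1) = 1 (one scalar multiplication at the base case), M(n) = 7·M(n/2) + a·(n/2)^2 with a = 18 (Strassen) or a = 15 (Winograd's variant) gives exactly 7n^lg7 − 6n^2 and 6n^lg7 − 5n^2 for n = 2^k, since n^lg7 = 7^k:

```python
# Arithmetic-operation recurrences for n = 2^k, with M(1) = 1 assumed:
#   Strassen:          M(n) = 7*M(n/2) + 18*(n/2)^2  ->  7*n^lg7 - 6*n^2
#   Winograd variant:  M(n) = 7*M(n/2) + 15*(n/2)^2  ->  6*n^lg7 - 5*n^2
def ops(n, adds):
    # total scalar operations when recursing down to 1x1 blocks
    if n == 1:
        return 1
    return 7 * ops(n // 2, adds) + adds * (n // 2) ** 2

for k in range(1, 8):
    n = 2 ** k                                     # n^lg7 == 7^k exactly
    assert ops(n, 18) == 7 * 7 ** k - 6 * n ** 2   # Strassen closed form
    assert ops(n, 15) == 6 * 7 ** k - 5 * n ** 2   # Winograd's variant
print("recurrences match closed forms")
```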
Cryptography from tensor problems
2012
Abstract
This manuscript describes a proposal for a new trapdoor one-way function of the multivariate-quadratic type. It was first posted to the IACR preprint server in May 2012. Subsequently, Enrico Thomae and Christopher Wolf were able to determine that a small-minors MinRank attack works against this scheme. I would like to thank them for their close study of the proposal. The manuscript follows as originally posted, with the addition of a few references and a brief description of the successful attack (end of Section 4.1). Keywords: cryptography, multivariate quadratic cryptosystems, MinRank, tensor rank, post-quantum.
Exact and Approximation Algorithms for the Maximum Constraint Satisfaction Problem over the Point Algebra
Abstract
We study the constraint satisfaction problem over the point algebra. In this problem, an instance consists of a set of variables and a set of binary constraints of the forms (x < y), (x ≤ y), (x = y), or (x ≠ y). The objective is then to assign integers to the variables so as to satisfy as many constraints as possible. This problem contains many important problems such as Correlation Clustering, Maximum Acyclic Subgraph, and Feedback Arc Set. We first give an exact algorithm that runs in O*(3^((log 5 / log 6)·n)) time, which improves the previous best O*(3^n) obtained by standard dynamic programming. Our algorithm combines dynamic programming with the split-and-list technique. The split-and-list technique involves matrix products, and we make use of the sparsity of matrices to speed up the computation. As for approximation, we give a 0.4586-approximation algorithm when the objective is maximizing the number of satisfied constraints, and an O(log n log log n)-approximation algorithm when the objective is minimizing the number of unsatisfied constraints.
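To make the maximization objective concrete (the toy instance and helper below are illustrative, not the paper's algorithm), the following counts how many point-algebra constraints an integer assignment satisfies:

```python
import operator

# Constraint types of the point algebra: <, <=, ==, !=
OPS = {'<': operator.lt, '<=': operator.le, '==': operator.eq, '!=': operator.ne}

# Toy instance over variables x0..x3: each constraint is (i, op, j)
constraints = [(0, '<', 1), (1, '<=', 2), (2, '==', 3), (3, '!=', 0), (2, '<', 0)]

def satisfied(assign, cons):
    # count constraints met by an integer assignment to the variables
    return sum(OPS[op](assign[i], assign[j]) for i, op, j in cons)

assign = [0, 1, 1, 1]              # x0=0, x1=1, x2=1, x3=1
print(satisfied(assign, constraints))  # prints 4 (of 5 constraints)
```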