Results 1–10 of 52
Tensor Decompositions and Applications
 SIAM Review
, 2009
"... This survey provides an overview of higherorder tensor decompositions, their applications, and available software. A tensor is a multidimensional or N way array. Decompositions of higherorder tensors (i.e., N way arrays with N â¥ 3) have applications in psychometrics, chemometrics, signal proce ..."
Abstract

Cited by 714 (17 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
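The CP model described in the abstract can be made concrete with a small sketch. This example is not taken from the survey or its toolboxes; it only illustrates, in NumPy, that a rank-R CP decomposition represents a 3-way tensor as a sum of R rank-one tensors built from three factor matrices (the shapes and random factors here are arbitrary choices for demonstration):

```python
import numpy as np

# Rank-R CP model of a 3-way tensor:
#   X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 3
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# Reconstruct the full tensor from the factor matrices in one einsum call.
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Equivalent explicit sum of R rank-one tensors (outer products).
X_check = sum(
    np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
    for r in range(R)
)
assert np.allclose(X, X_check)
print(X.shape)  # (4, 5, 6)
```

The einsum form and the explicit outer-product sum agree, which is exactly the "sum of rank-one tensors" structure the abstract attributes to CP.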
Multiplying matrices faster than Coppersmith-Winograd
 In Proc. 44th ACM Symposium on Theory of Computing (STOC)
, 2012
"... We develop new tools for analyzing matrix multiplication constructions similar to the CoppersmithWinograd construction, and obtain a new improved bound on ω < 2.3727. 1 ..."
Abstract

Cited by 148 (8 self)
We develop new tools for analyzing matrix multiplication constructions similar to the Coppersmith-Winograd construction, and obtain a new improved bound ω < 2.3727.
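To see what the exponent ω in this abstract measures: a bilinear algorithm that multiplies b×b block matrices using m block multiplications, applied recursively, runs in O(n^{log_b m}) time, so it certifies ω ≤ log_b m. A tiny sketch (my own illustration, not from the paper) shows why the classical method gives ω ≤ 3, Strassen's 7-product scheme gives ω ≤ 2.8074, and the paper's refined Coppersmith-Winograd analysis pushes well below both:

```python
import math

def exponent_bound(block_size, multiplications):
    """omega <= log_{block_size}(multiplications) for a recursive
    bilinear algorithm with that base case."""
    return math.log(multiplications, block_size)

naive = exponent_bound(2, 8)      # classical 2x2: 8 products -> omega <= 3
strassen = exponent_bound(2, 7)   # Strassen: 7 products -> omega <= 2.8074
print(round(naive, 4), round(strassen, 4))

# No 2x2 base case can reach the paper's bound of omega < 2.3727;
# the Coppersmith-Winograd construction analyzed there uses much
# larger (asymptotic) base cases.
```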
Geometry and the complexity of matrix multiplication
, 2007
"... Abstract. We survey results in algebraic complexity theory, focusing on matrix multiplication. Our goals are (i) to show how open questions in algebraic complexity theory are naturally posed as questions in geometry and representation theory, (ii) to motivate researchers to work on these questions, ..."
Abstract

Cited by 35 (5 self)
We survey results in algebraic complexity theory, focusing on matrix multiplication. Our goals are (i) to show how open questions in algebraic complexity theory are naturally posed as questions in geometry and representation theory, (ii) to motivate researchers to work on these questions, and (iii) to point out relations with more general problems in geometry. The key geometric objects for our study are the secant varieties of Segre varieties. We explain how these varieties are also useful for algebraic statistics, the study of phylogenetic invariants, and quantum computing.
Breaking the Coppersmith-Winograd barrier
, 2011
"... We develop new tools for analyzing matrix multiplication constructions similar to the CoppersmithWinograd construction, and obtain a new improved bound on ω < 2.3727. ..."
Abstract

Cited by 33 (0 self)
We develop new tools for analyzing matrix multiplication constructions similar to the Coppersmith-Winograd construction, and obtain a new improved bound ω < 2.3727.
Communication-optimal parallel algorithm for Strassen’s matrix multiplication
 In Proceedings of the 24th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA ’12
, 2012
"... Parallel matrix multiplication is one of the most studied fundamental problems in distributed and high performance computing. We obtain a new parallel algorithm that is based on Strassen’s fast matrix multiplication and minimizes communication. The algorithm outperforms all known parallel matrix mul ..."
Abstract

Cited by 32 (21 self)
Parallel matrix multiplication is one of the most studied fundamental problems in distributed and high performance computing. We obtain a new parallel algorithm that is based on Strassen’s fast matrix multiplication and minimizes communication. The algorithm outperforms all known parallel matrix multiplication algorithms, classical and Strassen-based, both asymptotically and in practice. A critical bottleneck in parallelizing Strassen’s algorithm is the communication between the processors. Ballard, Demmel, Holtz, and Schwartz (SPAA’11) prove lower bounds on these communication costs, using expansion properties of the underlying computation graph. Our algorithm matches these lower bounds, and so is communication-optimal. It exhibits perfect strong scaling within the maximum possible range.
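For readers unfamiliar with the serial kernel that the paper parallelizes, here is a minimal sequential sketch of Strassen's recursion in NumPy. This is my own illustration, not the paper's algorithm: it shows only the 7-multiplication recursion, not the communication-avoiding data layout or processor scheduling that the paper contributes. It assumes square matrices whose dimension is a power of two:

```python
import numpy as np

def strassen(A, B, cutoff=32):
    """Strassen's recursive matrix multiplication: 7 recursive block
    products instead of the classical 8. Assumes square power-of-two
    dimensions; falls back to BLAS below the cutoff."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven Strassen products.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # Reassemble the result blocks.
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((128, 128))
B = rng.standard_normal((128, 128))
assert np.allclose(strassen(A, B), A @ B)
```

In the parallel setting, the block sums and the distribution of the seven subproblems across processors are exactly where the inter-processor communication that the paper minimizes arises.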
Graph Expansion and Communication Costs of Fast Matrix Multiplication
"... The communication cost of algorithms (also known as I/Ocomplexity) is shown to be closely related to the expansion properties of the corresponding computation graphs. We demonstrate this on Strassen’s and other fast matrix multiplication algorithms, and obtain the first lower bounds on their communi ..."
Abstract

Cited by 32 (18 self)
The communication cost of algorithms (also known as I/O-complexity) is shown to be closely related to the expansion properties of the corresponding computation graphs. We demonstrate this on Strassen’s and other fast matrix multiplication algorithms, and obtain the first lower bounds on their communication costs. For sequential algorithms these bounds are attainable and so optimal.
Determinantal equations for secant varieties and the Eisenbud-Koh-Stillman conjecture. arXiv:1007.0192v3
, 2010
"... We sketch how to construct an example of a smoothable scheme R ⊂ PV and a smooth variety X ⊂ PV, such that R ∩X is locally Gorenstein, but not smoothable. Such an example illustrates that in the course of the proof of Theorem 1.1.1 in [BGL10] one really needs to treat this special case. We wrote dow ..."
Abstract

Cited by 28 (6 self)
We sketch how to construct an example of a smoothable scheme R ⊂ PV and a smooth variety X ⊂ PV such that R ∩ X is locally Gorenstein, but not smoothable. Such an example illustrates that in the course of the proof of Theorem 1.1.1 in [BGL10] one really needs to treat this special case. We wrote down this example at the request of an anonymous referee of this paper, and also motivated by questions from audiences during the author’s presentations in Grenoble and Berlin. To begin with, note that unless R ∩ X = R, or R ∩ X is “small enough” (so that all schemes of the given degree and embedding dimension are smoothable), there is no obvious reason why R ∩ X should be smoothable. In general, smoothability issues are very delicate and often rely on a case-by-case study rather than general statements; see for instance the proofs in [CEVV09] or [CN09]. Thus even if X ∩ R were always smoothable, for some weird reason, the proof would be much more complicated than the proof of Theorem 1.1.1 in [BGL10]. Below we present a series of steps by which one can construct R and X for which R ∩ X is non-smoothable, but without giving all the details. By the very nature of the smoothability issue, both R and X will be quite large. We keep in mind that it is desirable to construct R and X such that R ∩ X is locally ...