Results 1–10 of 28
Geometry and the complexity of matrix multiplication
, 2007
Abstract

Cited by 35 (5 self)
Abstract. We survey results in algebraic complexity theory, focusing on matrix multiplication. Our goals are (i) to show how open questions in algebraic complexity theory are naturally posed as questions in geometry and representation theory, (ii) to motivate researchers to work on these questions, and (iii) to point out relations with more general problems in geometry. The key geometric objects for our study are the secant varieties of Segre varieties. We explain how these varieties are also useful for algebraic statistics, the study of phylogenetic invariants, and quantum computing.
Fast Modular Transforms
, 1974
Abstract

Cited by 34 (0 self)
It is shown that if division and multiplication in a Euclidean domain can be performed in O(N log^a N) steps, then the residues of an N-precision element in the domain can be computed in O(N log^(a+1) N) steps. A special case of this result is that the residues of an N-precision integer can be computed in O(N log^2 N log log N) total operations. Using a polynomial division algorithm due to Strassen [24], it is shown that a polynomial of degree N − 1 can be evaluated at N points in O(N log^2 N) total operations or O(N log N) multiplications. Using the methods of Horowitz [10] and Heindel [9], it is shown that if division and multiplication in a Euclidean domain can be performed in O(N log^a N) steps, then the Chinese Remainder Algorithm (CRA) can be performed in O(N log^(a+1) N) steps. Special cases are: (a) the integer CRA can be performed in O(N log^2 N log log N) total operations, and (b) a polynomial of degree N − 1 can be interpolated in O(N log^2 N) total operations or O(N log N) multiplications. Using these results, it is shown that a polynomial of degree N and all its derivatives can be evaluated at a point in O(N log^2 N) total operations.
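A minimal Python sketch of the remainder-tree (subproduct-tree) technique behind these evaluation results; the function names are illustrative, and naive O(N^2) polynomial arithmetic stands in for the fast division and multiplication routines, so only the structure of the algorithm, not the stated O(N log^2 N) bounds, is reproduced:

```python
def polymul(a, b):
    # Naive product of two polynomials given as coefficient lists
    # (lowest degree first); a fast multiplication would go here.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polyrem(a, b):
    # Naive remainder of a modulo b; a fast division would go here.
    a = list(a)
    while len(a) >= len(b):
        c, s = a[-1] / b[-1], len(a) - len(b)
        for i in range(len(b)):
            a[s + i] -= c * b[i]
        a.pop()
    return a or [0]

def subproduct(points):
    # Product of the linear factors (x - p) over the given points.
    prod = [1]
    for p in points:
        prod = polymul(prod, [-p, 1])
    return prod

def eval_at_points(f, points):
    # Evaluate f at all points by recursive remaindering: the residue
    # of f modulo (x - p) is exactly f(p).
    if len(points) == 1:
        return [polyrem(f, [-points[0], 1])[0]]
    mid = len(points) // 2
    left, right = points[:mid], points[mid:]
    return (eval_at_points(polyrem(f, subproduct(left)), left) +
            eval_at_points(polyrem(f, subproduct(right)), right))
```

For example, `eval_at_points([1, 0, 1], [1, 2, 3])` evaluates x^2 + 1 at the points 1, 2, 3 and returns [2, 5, 10] (as floats here, since the naive remainder uses true division).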
Graph Expansion and Communication Costs of Fast Matrix Multiplication
Abstract

Cited by 32 (18 self)
The communication cost of algorithms (also known as I/O-complexity) is shown to be closely related to the expansion properties of the corresponding computation graphs. We demonstrate this on Strassen’s and other fast matrix multiplication algorithms, and obtain the first lower bounds on their communication costs. For sequential algorithms these bounds are attainable and so optimal.
A Lower Bound for Matrix Multiplication
 SIAM J. Comput
, 1988
Abstract

Cited by 19 (2 self)
We prove that computing the product of two n × n matrices over the binary field requires at least 2.5n^2 − o(n^2) multiplications. Key words: matrix multiplication, arithmetic complexity, lower bounds, linear codes. 1. INTRODUCTION. Let x = (x_1, ..., x_n)^T and y = (y_1, ..., y_m)^T be column vectors of indeterminates. A straight-line algorithm for computing a set of bilinear forms in x and y is called quadratic (respectively bilinear) if all its nonscalar multiplications are of the shape l(x, y) · l′(x, y) (respectively l(x) · l′(y)), where l and l′ are linear forms in the indeterminates. In this paper we establish the new 2.5n^2 − o(n^2) lower bound on the multiplicative complexity of quadratic algorithms for multiplying n × n matrices over the binary field Z_2. Let M_F(n, m, k) and M̂_F(n, m, k) denote the number of multiplications required to compute the product of n × m and m × k matrices by means of quadratic ...
Duality applied to the complexity of matrix multiplication
 SIAM J. Comput
, 1973
Abstract

Cited by 19 (0 self)
The paper considers the complexity of bilinear forms in a noncommutative ring. The dual of a computation is defined and applied to matrix multiplication and other bilinear forms. It is shown that the dual of an optimal computation gives an optimal computation for a dual problem. An n×m by m×p matrix product is shown to be the dual of an n×p by p×m or an m×n by n×p matrix product, implying that each of these matrix products requires the same number of multiplications to compute. Finally, an algorithm for computing a single bilinear form over a noncommutative ring with a minimum number of multiplications is derived by considering a dual problem.
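The transpose identity (A·B)^T = B^T·A^T gives one concrete instance of this duality: any algorithm for a product of the dual shape also computes the original product with the same number of multiplications. A small Python sketch on plain list-of-lists matrices (the paper's full cyclic symmetry of dual shapes is not shown, only the transpose case):

```python
def matmul(A, B):
    # Naive n×m by m×p product on list-of-lists matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(col) for col in zip(*M)]

# C = A·B (a 2×3 by 3×2 product) can also be obtained from the
# dual-shaped 2×3 by 3×2 product of the transposes: C = (Bᵀ·Aᵀ)ᵀ.
A = [[1, 2, 3], [4, 5, 6]]        # 2×3
B = [[7, 8], [9, 10], [11, 12]]   # 3×2
direct = matmul(A, B)
dual = transpose(matmul(transpose(B), transpose(A)))
```

Both `direct` and `dual` evaluate to the same 2×2 product matrix.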
The Design and Analysis of Bulk-Synchronous Parallel Algorithms
, 1998
Abstract

Cited by 19 (1 self)
The model of bulk-synchronous parallel (BSP) computation is an emerging paradigm of general-purpose parallel computing. This thesis presents a systematic approach to the design and analysis of BSP algorithms. We introduce an extension of the BSP model, called BSPRAM, which reconciles shared-memory-style programming with efficient exploitation of data locality. The BSPRAM model can be optimally simulated by a BSP computer for a broad range of algorithms possessing certain characteristic properties: obliviousness, slackness, granularity. We use BSPRAM to design BSP algorithms for problems from three large, partially overlapping domains: combinatorial computation, dense matrix computation, graph computation. Some of the presented algorithms are adapted from known BSP algorithms (butterfly dag computation, cube dag computation, matrix multiplication). Other algorithms are obtained by application of established non-BSP techniques (sorting, randomised list contraction, Gaussian elimination without pivoting and with column pivoting, algebraic path computation), or use original techniques specific to the BSP model (deterministic list contraction, Gaussian elimination with nested block pivoting, communication-efficient multiplication of Boolean matrices, synchronisation-efficient shortest paths computation). The asymptotic BSP cost of each algorithm is established, along with its BSPRAM characteristics. We conclude by outlining some directions for future research.
The border rank of the multiplication of 2 × 2 matrices is seven
 J. Amer. Math. Soc
Abstract

Cited by 16 (5 self)
One of the leading problems of algebraic complexity theory is matrix multiplication. The naïve multiplication of two n × n matrices uses n 3 multiplications. In 1969, Strassen [20] presented an explicit algorithm for multiplying 2 × 2 matrices using seven multiplications. In the opposite direction, Hopcroft and Kerr [12] and
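Strassen's seven products for the 2×2 case, referenced above, can be written down directly; a straightforward Python transcription of the standard published formulas (entries may come from any ring):

```python
def strassen_2x2(A, B):
    # Strassen's algorithm: seven multiplications instead of the
    # naive eight for a 2×2 matrix product.
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

For example, `strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]])` returns [[19, 22], [43, 50]], matching the ordinary product. Applied recursively to block matrices, the same identities give the O(n^2.81) bound.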
Communication-Optimal Parallel Recursive Rectangular Matrix Multiplication
, 2012
"... ..."
On the algebraic structure of certain partially observable finite-state Markov processes
 Inform
, 1978
Abstract

Cited by 11 (0 self)
We consider a class of nonlinear estimation problems possessing certain algebraic properties, and we exploit these properties in order to study the computational complexity of nonlinear estimation algorithms. Specifically, we define a class of finite-state Markov processes evolving on finite groups and consider noisy observations of these processes. By introducing some concepts from the theory of representations of finite groups, we are able to define a pair of "dual" filtering algorithms. We then study several specific classes of groups in detail, and, by developing a generalization of the fast Fourier transform algorithm, we derive an efficient nonlinear filtering algorithm. A continuous-time version of these ideas is developed for cyclic groups.
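For the cyclic-group case mentioned at the end, the generalized Fourier transform specializes to the ordinary DFT, under which convolution over Z_n becomes pointwise multiplication. A small Python sketch of that specialization (a naive O(n^2) DFT stands in for an FFT, and the function names are illustrative):

```python
import cmath

def dft(x, inverse=False):
    # Naive DFT over the cyclic group Z_n; an FFT computes the same
    # transform in O(n log n).
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def circular_convolve(x, h):
    # Convolution over Z_n diagonalizes under the DFT:
    # DFT(x * h) = DFT(x) . DFT(h) pointwise.
    X, H = dft(x), dft(h)
    y = dft([a * b for a, b in zip(X, H)], inverse=True)
    return [round(v.real, 9) for v in y]
```

Convolving with the delta at position 1, `circular_convolve([1, 2, 3, 4], [0, 1, 0, 0])`, cyclically shifts the signal to [4, 1, 2, 3], as expected over Z_4.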