Results 1–10 of 22
On The Complexity Of Computing Determinants
 COMPUTATIONAL COMPLEXITY
, 2001
Abstract

Cited by 47 (17 self)
We present new baby steps/giant steps algorithms of asymptotically fast running time for dense matrix problems. Our algorithms compute the determinant, characteristic polynomial, Frobenius normal form and Smith normal form of a dense n × n matrix A with integer entries in (n^3.5 log‖A‖)^(1+o(1)) and (n^2.697263 log‖A‖)^(1+o(1)) bit operations; here ‖A‖ denotes the largest entry in absolute value and the exponent adjustment by "+o(1)" captures additional factors C1 (log n)^C2 (log log‖A‖)^C3 for positive real constants C1, C2, C3. The bit complexity (n^3.5 log‖A‖)^(1+o(1)) results from using the classical cubic matrix multiplication algorithm. Our algorithms are randomized, and we can certify that the output is the determinant of A in a Las Vegas fashion. The second category of problems deals with the setting where the matrix A has elements from an abstract commutative ring, that is, when no divisions in the domain of entries are possible. We present algorithms that deterministically compute the determinant, characteristic polynomial and adjoint of A with n^(3.2+o(1)) and O(n^2.697263) ring additions, subtractions and multiplications.
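For intuition about the division-free setting in that last sentence: the determinant can always be computed with ring operations only, just not efficiently. A minimal sketch (mine, not the paper's baby steps/giant steps method) using cofactor expansion:

```python
def det_division_free(A):
    """Determinant by cofactor expansion along the first row.

    Uses only additions, subtractions and multiplications, so it works
    over any commutative ring -- at the cost of O(n!) operations, far
    from the polynomial operation counts cited in the abstract."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        term = A[0][j] * det_division_free(minor)
        total = total + term if j % 2 == 0 else total - term
    return total
```

Replacing this O(n!) expansion by an efficient division-free scheme is exactly what the cited ring-operation counts achieve.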
Descriptive Complexity Theory over the Real Numbers
 LECTURES IN APPLIED MATHEMATICS
, 1996
Abstract

Cited by 24 (8 self)
We present a logical approach to complexity over the real numbers with respect to the model of Blum, Shub and Smale. The logics under consideration are interpreted over a special class of two-sorted structures, called R-structures: They consist of a finite structure together with the ordered field of reals and a finite set of functions from the finite structure into R. They are a special case of the metafinite structures introduced recently by Grädel and Gurevich. We argue that R-structures provide the right class of structures to develop a descriptive complexity theory over R. We substantiate this claim by a number of results that relate logical definability on R-structures with complexity of computations of BSS-machines.
On the Time-Space Complexity of Geometric Elimination Procedures
, 1999
Abstract

Cited by 23 (16 self)
In [25] and [22] a new algorithmic concept was introduced for the symbolic solution of a zero-dimensional complete intersection polynomial equation system satisfying a certain generic smoothness condition. The main innovative point of this algorithmic concept is the introduction of a new geometric invariant, called the degree of the input system, and the proof that the most common elimination problems have time complexity polynomial in this degree and in the length of the input.
Fast Context-Free Grammar Parsing Requires Fast Boolean Matrix Multiplication
, 2002
Abstract

Cited by 23 (0 self)
In 1975, Valiant showed that Boolean matrix multiplication can be used for parsing context-free grammars (CFGs), yielding the asymptotically fastest (although not practical) CFG parsing algorithm known. We prove a dual result: any CFG parser with time complexity $O(g n^{3 - \epsilon})$, where $g$ is the size of the grammar and $n$ is the length of the input string, can be efficiently converted into an algorithm to multiply $m \times m$ Boolean matrices in time $O(m^{3 - \epsilon/3})$. Given that practical, substantially subcubic Boolean matrix multiplication algorithms have been quite difficult to find, we thus explain why there has been little progress in developing practical, substantially subcubic general CFG parsers. In proving this result, we also develop a formalization of the notion of parsing.
On Arithmetic Branching Programs
 IN PROC. OF THE 13TH ANNUAL IEEE CONFERENCE ON COMPUTATIONAL COMPLEXITY
, 1998
Abstract

Cited by 13 (0 self)
The model of arithmetic branching programs is an algebraic model of computation generalizing the model of modular branching programs. We show that, up to a polynomial factor in size, arithmetic branching programs are equivalent to complements of dependency programs, a model introduced by Pudlák and Sgall [20]. Using this equivalence we prove that dependency programs are closed under conjunction over every field, answering an open problem of [20]. Furthermore, we show that span programs, an algebraic model of computation introduced by Karchmer and Wigderson [16], are at least as strong as arithmetic programs; every arithmetic program can be simulated by a span program of size not more than twice the size of the arithmetic program. Using the above results we give a new proof that NL/poly ⊆ ⊕L/poly, first proved by Wigderson [25]. Our simulation of NL/poly is more efficient, and it holds for logspace counting classes over every field.
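A layered arithmetic branching program computes the sum, over all source-to-sink paths, of the products of the edge labels, which is the same as an iterated matrix product. A toy evaluator under that (assumed) layered-matrix encoding, not taken from the paper:

```python
def eval_abp(layers):
    """Evaluate a layered arithmetic branching program.

    layers: list of matrices (lists of lists) whose entries are the edge
    labels between consecutive layers; the first matrix has one row (the
    source) and the last has one column (the sink).  The output is the
    single entry of the product, i.e. the sum over all source-to-sink
    paths of the product of edge labels."""
    vec = layers[0]
    for M in layers[1:]:
        # plain matrix product, kept explicit to stay dependency-free
        vec = [[sum(vec[i][t] * M[t][j] for t in range(len(M)))
                for j in range(len(M[0]))] for i in range(len(vec))]
    return vec[0][0]
```

For example, the two layers [[x, z]] and [[y], [w]] compute x*y + z*w.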
Arithmetic Circuits: a survey of recent results and open questions
Abstract

Cited by 11 (3 self)
A large class of problems in symbolic computation can be expressed as the task of computing some polynomials; and arithmetic circuits form the most standard model for studying the complexity of such computations. This algebraic model of computation attracted a large amount of research in the last five decades, partially due to its simplicity and elegance. Being a more structured model than Boolean circuits, one could hope that the fundamental problems of theoretical computer science, such as separating P from NP, will be easier to solve for arithmetic circuits. However, in spite of the apparent simplicity and the vast amount of mathematical tools available, no major breakthrough has been seen. In fact, all the fundamental questions are still open for this model as well. Nevertheless, there has been a lot of progress in the area and beautiful results have been found, some in the last few years. As examples we mention the connection between polynomial identity testing and lower bounds of Kabanets and Impagliazzo, the lower bounds of Raz for multilinear formulas, and two new approaches for proving lower bounds: Geometric Complexity Theory and Elusive Functions. The goal of this monograph is to survey the field of arithmetic circuit complexity, focusing mainly on what we find to be the most interesting and accessible research directions. We aim to cover the main results and techniques, with an emphasis on works from the last two decades. In particular, we ...
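The polynomial identity testing problem mentioned in that abstract asks whether a circuit computes the identically-zero polynomial. The standard randomized solution, the Schwartz-Zippel test, is short enough to sketch (the function and parameter names are my own):

```python
import random

def probably_zero(poly, num_vars, field_size=2**61 - 1, trials=20):
    """Schwartz-Zippel identity test.

    A nonzero polynomial of total degree d vanishes on a uniformly
    random point of S^n with probability at most d/|S|, so repeated
    random evaluation detects nonzero polynomials with high probability.
    `poly` is any black box mapping a tuple of ints to an int; values
    are compared modulo `field_size` (assumed prime)."""
    for _ in range(trials):
        point = tuple(random.randrange(field_size) for _ in range(num_vars))
        if poly(point) % field_size != 0:
            return False   # witness found: certainly not identically zero
    return True            # identically zero with high probability
```

For instance, `probably_zero(lambda p: (p[0] + p[1])**2 - (p[0]**2 + 2*p[0]*p[1] + p[1]**2), 2)` accepts the identity, while `x*y` is rejected almost surely.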
On defining integers and proving arithmetic circuit lower bounds
 Computational Complexity
Abstract

Cited by 8 (0 self)
Let τ(n) denote the minimum number of arithmetic operations sufficient to build the integer n from the constant 1. We prove that if there are arithmetic circuits of size polynomial in n for computing the permanent of n by n matrices, then τ(n!) is polynomially bounded in log n. Under the same assumption on the permanent, we conclude that the Pochhammer-Wilkinson polynomials ∏_{k=1}^{n} (X − k) and the Taylor approximations ∑_{k=0}^{n} (1/k!) X^k and ∑_{k=1}^{n} (1/k) X^k of exp and log, respectively, can be computed by arithmetic circuits of size polynomial in log n (allowing divisions). This connects several so far unrelated conjectures in algebraic complexity.
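The quantity τ(n) can be computed by brute force for tiny n, which may help make the definition concrete. This exhaustive search over straight-line programs is my own illustration, not a method from the paper:

```python
from itertools import product

def tau(n, cap=10**4):
    """Minimum number of +, -, * operations needed to build n from the
    constant 1, where every intermediate result may be reused (i.e. the
    shortest straight-line program).  Exponential-time search, feasible
    only for very small n; `cap` prunes runaway intermediate values and
    must exceed every value a shortest program needs."""
    frontier = {frozenset([1])}   # sets of values computable so far
    steps = 0
    while True:
        if any(n in s for s in frontier):
            return steps
        nxt = set()
        for s in frontier:
            for a, b in product(s, repeat=2):
                for v in (a + b, a - b, a * b):
                    if 0 < v <= cap and v not in s:
                        nxt.add(s | {v})
        frontier = nxt
        steps += 1
```

For example, τ(6) = 3 via the program 2 = 1 + 1, 3 = 2 + 1, 6 = 2 * 3, and the search confirms no two-operation program reaches 6.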
On the complexity of Halfspace Area Queries
 Proc. 17th Annu ACM Sympos. Comput. Geom
, 2001
Abstract

Cited by 7 (2 self)
Given a non-convex simple polygon P, is it possible to construct a data structure which, after preprocessing, can answer halfspace area queries (i.e. ...
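Without any preprocessing, a single halfplane-area query can be answered in O(n) time by clipping and the shoelace formula; the data-structure question above is about beating this naive baseline. A rough sketch (my code; Sutherland-Hodgman clipping is only guaranteed for convex input, as noted in the comments):

```python
def clip_area(poly, a, b, c):
    """Area of the part of polygon `poly` (list of (x, y) vertices in
    counter-clockwise order) lying in the halfplane a*x + b*y <= c,
    via Sutherland-Hodgman clipping plus the shoelace formula.
    Caveat: Sutherland-Hodgman is only guaranteed correct for convex
    `poly`; for non-convex input it can emit degenerate bridge edges."""
    side = lambda p: a * p[0] + b * p[1] - c
    out = []
    for i, p in enumerate(poly):
        q = poly[(i + 1) % len(poly)]
        sp, sq = side(p), side(q)
        if sp <= 0:
            out.append(p)                  # p is inside the halfplane
        if (sp < 0) != (sq < 0) and sp != sq:
            t = sp / (sp - sq)             # edge crosses the boundary
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    if not out:
        return 0.0
    # shoelace formula on the clipped vertex list
    return abs(sum(out[i][0] * out[(i + 1) % len(out)][1]
                   - out[(i + 1) % len(out)][0] * out[i][1]
                   for i in range(len(out)))) / 2
```

For example, clipping the unit square against x <= 0.5 returns area 0.5.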
On defining integers in the counting hierarchy and proving lower bounds in algebraic complexity
 In Proc. STACS 2007
, 2007
A New Method to Obtain Lower Bounds for Polynomial Evaluation
, 1999
Abstract

Cited by 2 (2 self)
We present a new method for obtaining lower bounds on the time complexity of polynomial evaluation procedures. Time, denoted by L, is measured in terms of nonscalar arithmetic operations. In contrast with known methods for proving lower complexity bounds, our method is purely combinatorial and does not require powerful tools from algebraic or Diophantine geometry.
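For context on this complexity measure: Horner's rule needs n nonscalar multiplications for a degree-n polynomial, whereas Paterson and Stockmeyer showed that roughly 2√n suffice when multiplications by scalars are free. A sketch of that classical upper-bound construction, with the nonscalar products counted (the instrumentation is mine):

```python
import math

def ps_eval(coeffs, x):
    """Paterson-Stockmeyer evaluation of p(x) = sum(coeffs[i] * x**i).

    Counts only nonscalar multiplications (both factors depend on x);
    multiplications by scalar coefficients are free, matching the
    measure L above.  Uses about 2*sqrt(n) nonscalar products, versus
    n for Horner's rule."""
    n = len(coeffs) - 1
    k = max(1, math.isqrt(n) + 1)      # block size ~ sqrt(n)
    nonscalar = 0
    powers = [1, x]                    # x^0, x^1 (the input itself is free)
    while len(powers) <= k:
        powers.append(powers[-1] * x)  # one nonscalar product each
        nonscalar += 1
    xk = powers[k]
    # write p as a polynomial in x^k whose "digits" are degree-<k blocks,
    # then run Horner's rule in x^k
    blocks = [coeffs[i:i + k] for i in range(0, len(coeffs), k)]
    result, first = 0, True
    for block in reversed(blocks):
        digit = sum(c * powers[i] for i, c in enumerate(block))  # scalar ops only
        if first:
            result, first = digit, False
        else:
            result = result * xk + digit  # one nonscalar product per block
            nonscalar += 1
    return result, nonscalar
```

For a degree-9 polynomial this uses 5 nonscalar products instead of Horner's 9.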