Results 1–10 of 18
On the complexity of numerical analysis
 In Proc. 21st Ann. IEEE Conf. on Computational Complexity (CCC '06)
, 2006
Abstract

Cited by 45 (5 self)
We study two quite different approaches to understanding the complexity of fundamental problems in numerical analysis:
• The Blum-Shub-Smale model of computation over the reals.
• A problem we call the "Generic Task of Numerical Computation," which captures an aspect of doing numerical computation in floating point, similar to the "long exponent model" that has been studied in the numerical computing community.
We show that both of these approaches hinge on the question of understanding the complexity of the following problem, which we call PosSLP: Given a division-free straight-line program producing an integer N, decide whether N > 0.
• In the Blum-Shub-Smale model, polynomial-time computation over the reals (on discrete inputs) is polynomial-time equivalent to PosSLP when only algebraic constants are used. We conjecture that using transcendental constants provides no additional power beyond nonuniform reductions to PosSLP, and we present some preliminary results supporting this conjecture.
• The Generic Task of Numerical Computation is also polynomial-time equivalent to PosSLP.
We prove that PosSLP lies in the counting hierarchy. Combining this with work of Tiwari, we obtain that the Euclidean Traveling Salesman Problem lies in the counting hierarchy; the previous best upper bound for this important problem (in terms of classical complexity classes) was PSPACE. In the course of developing the context for our results on arithmetic circuits, we present some new observations on the complexity of ACIT, the Arithmetic Circuit Identity Testing problem. In particular, we show that if n! is not ultimately easy, then ACIT has subexponential complexity.
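The PosSLP problem above is concrete enough to illustrate with a toy sketch. The instruction-list encoding of a straight-line program below is an assumption for illustration, not the paper's formalism; note that direct evaluation takes exponential time in general, since an n-instruction SLP can produce an integer with 2^n bits — that blow-up is exactly why PosSLP's complexity is subtle.

```python
# Hypothetical encoding (for illustration only): a division-free straight-line
# program is a list of instructions (op, i, j), each combining two earlier
# values by index; value 0 is the constant 1.
def eval_slp(program):
    """Evaluate a division-free SLP over the integers, starting from 1."""
    vals = [1]
    for op, i, j in program:
        if op == '+':
            vals.append(vals[i] + vals[j])
        elif op == '-':
            vals.append(vals[i] - vals[j])
        elif op == '*':
            vals.append(vals[i] * vals[j])
        else:
            raise ValueError(f"unknown op {op!r}")
    return vals[-1]

def pos_slp(program):
    """PosSLP by brute-force evaluation: exponential time in the worst case,
    because the final value can have exponentially many bits."""
    return eval_slp(program) > 0

# Builds 1 -> 2 -> 4 -> 16, then 16 - 2 = 14, so PosSLP answers yes.
prog = [('+', 0, 0), ('+', 1, 1), ('*', 2, 2), ('-', 3, 1)]
```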
The complexity of constructing pseudorandom generators from hard functions
 Computational Complexity
, 2004
Abstract

Cited by 37 (9 self)
We study the complexity of constructing pseudorandom generators (PRGs) from hard functions, focussing on constant-depth circuits. We show that, starting from a function f: {0,1}^l → {0,1} computable in alternating time O(l) with O(1) alternations that is hard on average (i.e. there is a constant ε > 0 such that every circuit of size 2^{εl} fails to compute f on at least a 1/poly(l) fraction of inputs), we can construct a PRG: {0,1}^{O(log n)} → {0,1}^n computable by DLOGTIME-uniform constant-depth circuits of size polynomial in n. Such a PRG implies BP·AC^0 = AC^0 under DLOGTIME-uniformity. On the negative side, we prove that starting from a worst-case hard function f: {0,1}^l → {0,1} (i.e. there is a constant ε > 0 such that every circuit of size 2^{εl} fails to compute f on some input), for every positive constant δ < 1 there is no black-box construction of a PRG: {0,1}^{δn} → {0,1}^n computable by constant-depth circuits of size polynomial in n. We also study worst-case hardness amplification, the related problem of producing an average-case hard function starting from a worst-case hard one. In particular, we deduce that there is no black-box worst-case hardness amplification within the polynomial time hierarchy. These negative results are obtained by showing that polynomial-size constant-depth circuits cannot compute good extractors and list-decodable codes.
Depth Reduction for Circuits of Unbounded FanIn
, 1991
Abstract

Cited by 14 (5 self)
We prove that constant-depth circuits of size n over the basis {AND, OR, PARITY} are no more powerful than circuits of this size with depth four. Similar techniques are used to obtain several other depth reduction theorems; in particular, we show that every set in AC can be recognized by a family of depth-three circuits. The size bound n is optimal when considering depth reduction over AND, OR, and PARITY. Most of our results hold both in the uniform and the nonuniform case.
Oracles versus Proof Techniques that Do Not Relativize
 In Proc. 1st Annual International Symposium on Algorithms and Computation
, 1990
Abstract

Cited by 10 (1 self)
Oracle constructions have long been used to provide evidence that certain questions in complexity theory cannot be resolved using the usual techniques of simulation and diagonalization. However, the existence of nonrelativizing proof techniques seems to call this practice into question. This paper reviews the status of nonrelativizing proof techniques, and argues that many oracle constructions still yield valuable information about problems in complexity theory.
Hardness vs. Randomness within Alternating Time
, 2003
Abstract

Cited by 10 (0 self)
We study the complexity of building pseudorandom generators (PRGs) with logarithmic seed length from hard functions. We show that, starting from a function f: {0,1}^l → {0,1} that is mildly hard on average, i.e. every circuit of size 2^{Ω(l)} fails to compute f on at least a 1/poly(l) fraction of inputs, we can build a PRG: {0,1}^{O(log n)} → {0,1}^n computable in ATIME(O(1), log n), i.e. alternating time O(log n) with O(1) alternations. Such a PRG implies BP·AC^0 = AC^0 under DLOGTIME-uniformity. On the negative side, we prove a tight lower bound on black-box PRG constructions that are based on worst-case hard functions. We also prove a tight lower bound on black-box worst-case hardness amplification, which is the problem of producing an average-case hard function starting from a worst-case hard one. These lower bounds are obtained by showing that constant-depth circuits cannot compute extractors and list-decodable codes.
On defining integers and proving arithmetic circuit lower bounds
 Computational Complexity
Abstract

Cited by 8 (0 self)
Let τ(n) denote the minimum number of arithmetic operations sufficient to build the integer n from the constant 1. We prove that if there are arithmetic circuits of size polynomial in n for computing the permanent of n-by-n matrices, then τ(n!) is polynomially bounded in log n. Under the same assumption on the permanent, we conclude that the Pochhammer-Wilkinson polynomials ∏_{k=1}^n (X − k) and the Taylor approximations ∑_{k=0}^n X^k/k! and ∑_{k=1}^n X^k/k of exp and log, respectively, can be computed by arithmetic circuits of size polynomial in log n (allowing divisions). This connects several so far unrelated conjectures in algebraic complexity.
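The measure τ(n) can be made concrete with a toy brute-force search, assuming the basis {+, −, ×} and starting constant 1 described in the abstract. This is only feasible for tiny values; the abstract's point is precisely that upper-bounding τ(n!) for large n is a deep open question tied to the permanent.

```python
from itertools import product

def tau(n, max_ops=6):
    """Minimum number of +, -, * operations needed to build the integer n
    from the constant 1, found by iterative-deepening search.
    Exponential time: only usable for very small op counts."""
    def reachable(seq, ops_left):
        if n in seq:
            return True
        if ops_left == 0:
            return False
        # Extend the straight-line sequence by one operation on any pair.
        for a, b in product(seq, repeat=2):
            for v in (a + b, a - b, a * b):
                if reachable(seq + (v,), ops_left - 1):
                    return True
        return False
    for k in range(max_ops + 1):
        if reachable((1,), k):
            return k
    return None  # not found within the budget

# e.g. tau(6) == 3 via the sequence 1, 1+1=2, 2+1=3, 2*3=6
```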
On defining integers in the counting hierarchy and proving lower bounds in algebraic complexity
 In Proc. STACS 2007
, 2007
Rational Proofs
Abstract

Cited by 3 (2 self)
We study a new type of proof system, in which an unbounded prover and a polynomial-time verifier interact on inputs a string x and a function f, so that the Verifier may learn f(x). The novelty of our setting is that there no longer are "good" or "malicious" provers, but only rational ones. In essence, the Verifier has a budget c and gives the Prover a reward r ∈ [0, c] determined by the transcript of their interaction; the Prover wishes to maximize his expected reward; and his reward is maximized only if the Verifier correctly learns f(x). Rational proof systems are as powerful as their classical counterparts for polynomially many rounds of interaction, but are much more powerful when we only allow a constant number of rounds. Indeed, we prove that if f ∈ #P, then f is computable by a one-round rational Merlin-Arthur game, where, on input x, Merlin's single message actually consists of sending just the value f(x). Further, we prove that CH, the counting hierarchy, coincides with the class of languages computable by a constant-round rational Merlin-Arthur game. Our results rely on a basic and crucial connection between rational proof systems and proper scoring rules, a tool developed to elicit truthful information from experts.
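The proper-scoring-rule connection can be illustrated with the Brier rule, a standard example (not specific to this paper): a prover paid by the Brier reward maximizes expected payoff exactly by reporting the true probability, which is the truthfulness property rational proofs exploit.

```python
def expected_brier_reward(p, q):
    """Expected Brier reward for reporting probability q when the true
    probability of the binary outcome is p.
    Reward for report q and outcome o in {0, 1}: 1 - (q - o)**2."""
    return p * (1 - (q - 1) ** 2) + (1 - p) * (1 - q ** 2)

# With true probability p = 0.7, sweep all reports on a grid: the expected
# reward 1 - p*(q-1)**2 - (1-p)*q**2 is uniquely maximized at q = p.
p = 0.7
grid = [i / 100 for i in range(101)]
best = max(grid, key=lambda q: expected_brier_reward(p, q))
```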
Monomials in arithmetic circuits: Complete problems in the counting hierarchy
, 2012
Abstract

Cited by 2 (0 self)
We consider the complexity of two questions on polynomials given by arithmetic circuits: testing whether a monomial is present and counting the number of monomials. We show that these problems are complete for subclasses of the counting hierarchy which had few or no known natural complete problems before. We also study these questions for circuits computing multilinear polynomials.
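The naive approach to monomial testing — expand the circuit into its monomials and look up the target — can be sketched as below (a toy sparse representation assumed for illustration). Expansion can produce exponentially many monomials in the circuit size, which is why the problem lands in counting classes rather than P.

```python
# Toy sparse representation: a polynomial in k variables is a dict mapping
# exponent tuples (e_0, ..., e_{k-1}) to nonzero integer coefficients.
def padd(f, g):
    """Sum of two polynomials."""
    h = dict(f)
    for e, c in g.items():
        h[e] = h.get(e, 0) + c
        if h[e] == 0:
            del h[e]
    return h

def pmul(f, g):
    """Product of two polynomials (exponent vectors add)."""
    h = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            h[e] = h.get(e, 0) + c1 * c2
    return {e: c for e, c in h.items() if c != 0}

def monomial_present(f, exps):
    """Is the monomial with the given exponent tuple present (coeff != 0)?"""
    return f.get(exps, 0) != 0

# Example: (x0 + x1)^2 = x0^2 + 2*x0*x1 + x1^2
x0, x1 = {(1, 0): 1}, {(0, 1): 1}
f = pmul(padd(x0, x1), padd(x0, x1))
```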
The Chain Method to Separate Counting Classes
, 1998
Abstract

Cited by 2 (2 self)
We introduce a new method to separate counting classes of a special type by oracles. Among the classes for which this method is applicable are NP, coNP, US (also called 1-NP), ⊕P and all other MOD-classes, PP and C=P, classes of Boolean hierarchies over the named classes, classes of finite acceptance type, and many more. As an important special case, we completely characterize all relativizable inclusions between the classes NP(k) from the Boolean hierarchy over NP and other classes defined by what we will call bounded counting.