Results 1 - 10 of 46
Non-Deterministic Exponential Time Has Two-Prover Interactive Protocols
"... We determine the exact power of twoprover interactive proof systems introduced by BenOr, Goldwasser, Kilian, and Wigderson (1988). In this system, two allpowerful noncommunicating provers convince a randomizing polynomial time verifier in polynomial time that the input z belongs to the language ..."
Abstract

Cited by 402 (40 self)
We determine the exact power of two-prover interactive proof systems introduced by Ben-Or, Goldwasser, Kilian, and Wigderson (1988). In this system, two all-powerful non-communicating provers convince a randomizing polynomial-time verifier in polynomial time that the input z belongs to the language L. It was previously suspected (and proved in a relativized sense) that coNP-complete languages do not admit such proof systems. In sharp contrast, we show that the class of languages having two-prover interactive proof systems is nondeterministic exponential time. After the recent results that all languages in PSPACE have single-prover interactive proofs (Lund, Fortnow, Karloff, Nisan, and Shamir), this represents a further step demonstrating the unexpectedly immense power of randomization and interaction in efficient provability. Indeed, it follows that multiple provers with coins are strictly stronger than without, since NEXP ≠ NP. In particular, for the first time, provably polynomial-time intractable languages turn out to admit "efficient proof systems" since NEXP ≠ P. We show that to prove membership in languages in EXP, the honest provers need the power of EXP only. A consequence, linking more standard concepts of structural complexity, states that if EXP has polynomial-size circuits then EXP = Σ_2 = MA. The first part of the proof of the main result extends recent techniques of polynomial extrapolation of truth values used in the single-prover case. The second part is a verification scheme for multilinearity of an n-variable function held by an oracle and can be viewed as an independent result on program verification. Its proof rests on combinatorial techniques including the estimation of the expansion rate of a graph.
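The "polynomial extrapolation of truth values" the abstract mentions is usually carried out via the multilinear extension; the formula below is the standard textbook construction, not quoted from the paper itself:

```latex
% Multilinear extension of f : {0,1}^n -> F over a field F.
% It agrees with f on Boolean inputs and has degree <= 1 in each variable,
% which is the property the paper's multilinearity test must verify.
\hat{f}(x_1,\dots,x_n)
  = \sum_{w \in \{0,1\}^n} f(w)\,
    \prod_{i=1}^{n} \bigl( x_i w_i + (1 - x_i)(1 - w_i) \bigr)
```

The second part of the abstract can then be read as: the verifier checks that the n-variable function held by the oracle is (close to) a function of this degree-1-per-variable form.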
An Improved Lower Bound for the Elementary Theories of Trees
, 1996
"... . The firstorder theories of finite and rational, constructor and feature trees possess complete axiomatizations and are decidable by quantifier elimination [15, 13, 14, 5, 10, 3, 20, 4, 2]. By using the uniform inseparability lower bounds techniques due to Compton and Henson [6], based on repr ..."
Abstract

Cited by 29 (3 self)
The first-order theories of finite and rational, constructor and feature trees possess complete axiomatizations and are decidable by quantifier elimination [15, 13, 14, 5, 10, 3, 20, 4, 2]. By using the uniform inseparability lower-bound techniques due to Compton and Henson [6], based on representing large binary relations by means of short formulas manipulating high trees, we prove that all the above theories, as well as all their subtheories, are nonelementary in the sense of Kalmar, i.e., cannot be decided within time bounded by a k-story exponential function exp_k(n) for any fixed k. Moreover, for some constant d > 0 these decision problems require nondeterministic time exceeding exp_1(⌊dn⌋) infinitely often.

1 Introduction

Trees are fundamental in Computer Science. Different tree structures are used as underlying domains in automated theorem proving, term rewriting, functional and logic programming, constraint solving, symbolic computation, knowledge re...
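Kalmar's notion of a "k-story exponential" can be pinned down with a short sketch (function name ours):

```python
def exp_k(k: int, n: int) -> int:
    """k-story exponential tower: exp_0(n) = n, exp_{k+1}(n) = 2 ** exp_k(n)."""
    return n if k == 0 else 2 ** exp_k(k - 1, n)

# "Nonelementary in the sense of Kalmar" means: for every fixed number of
# stories k, the decision procedure's running time exceeds exp_k(n) on
# infinitely many inputs of size n.
```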
The Permanent Requires Large Uniform Threshold Circuits
, 1999
"... We show that the permanent cannot be computed by uniform constantdepth threshold circuits of size T (n) for any function T such that for all k, T (k) (n) = o(2 n ). More generally, we show that any problem that is hard for the complexity class C=P requires circuits of this size (on the unif ..."
Abstract

Cited by 27 (8 self)
We show that the permanent cannot be computed by uniform constant-depth threshold circuits of size T(n) for any function T such that, for all k, T^(k)(n) = o(2^n). More generally, we show that any problem that is hard for the complexity class C=P requires circuits of this size (on the uniform constant-depth threshold circuit model). In particular, this lower bound applies to any problem that is hard for the complexity classes PP or #P.
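Here T^(k) denotes the k-fold composition of T with itself; a small helper (names ours) makes the quantification concrete:

```python
def compose_k(T, k: int, n: int) -> int:
    """k-fold composition: T^{(0)}(n) = n, T^{(k)}(n) = T(T^{(k-1)}(n))."""
    for _ in range(k):
        n = T(n)
    return n

# The theorem covers any size bound T growing so slowly that even its
# k-fold composition stays in o(2^n) for every fixed k; for instance
# T(n) = n ** 2 gives T^{(k)}(n) = n ** (2 ** k), polynomial for each fixed k.
```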
Time-Space Tradeoffs for Nondeterministic Computation
 In Proceedings of the 15th IEEE Conference on Computational Complexity
, 2000
"... We show new tradeoffs for satisfiability and nondeterministic linear time. Satisfiability cannot be solved on general purpose randomaccess Turing machines in time n 1.618 and space n o(1) . This improves recent results of Fortnow and of Lipton and Viglas. In general, for any constant a less tha ..."
Abstract

Cited by 24 (2 self)
We show new tradeoffs for satisfiability and nondeterministic linear time. Satisfiability cannot be solved on general-purpose random-access Turing machines in time n^1.618 and space n^o(1). This improves recent results of Fortnow and of Lipton and Viglas. In general, for any constant a less than the golden ratio, we prove that satisfiability cannot be solved in time n^a and space n^b for some positive constant b. Our techniques allow us to establish this result for b < (1/2)((a+2)/a^2 - a). We can do better for a close to the golden ratio; for example, satisfiability cannot be solved by a random-access Turing machine using n^1.46 time and n^.11 space. We also show tradeoffs for nondeterministic linear-time computations using sublinear space. For example, there exists a language computable in nondeterministic linear time and n^.619 space that cannot be computed in deterministic n^1.618 time and n^o(1) space. Higher up the polynomial-time hierarchy we can get be...
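The exponent 1.618 in the statement is the golden ratio mentioned later in the abstract, truncated to three decimals; a quick check:

```python
import math

# Golden ratio: the supremum of time exponents a covered by the result
# (hence the n^1.618 in the headline statement).
phi = (1 + math.sqrt(5)) / 2

assert round(phi, 3) == 1.618
```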
Complexity-Theoretic Aspects of Interactive Proof Systems
, 1989
"... In 1985, Goldwasser, Micali and Rackoff formulated interactive proof systems as a tool for developing cryptographic protocols. Indeed, many exciting cryptographic results followed from studying interactive proof systems and the related concept of zeroknowledge. Interactive proof systems also have a ..."
Abstract

Cited by 19 (3 self)
In 1985, Goldwasser, Micali and Rackoff formulated interactive proof systems as a tool for developing cryptographic protocols. Indeed, many exciting cryptographic results followed from studying interactive proof systems and the related concept of zero-knowledge. Interactive proof systems also play an important part in complexity theory, merging the well-established concepts of probabilistic and nondeterministic computation. This thesis will study the complexity of various models of interactive proof systems. A perfect zero-knowledge interactive protocol convinces a verifier that a string is in a language without revealing any additional knowledge in an information-theoretic sense. This thesis will show that for any language that has a perfect zero-knowledge proof system, its complement has a short interactive protocol. This result implies that there are no perfect zero-knowledge protocols for NP-complete languages unless the polynomial-time hierarchy collapses. Thus knowledge comp...
E-mail and the unexpected power of interaction
 Structure in Complexity theory
, 1988
"... This is a true fable about Merlin, the infinitely intelligent but never trusted magician; and Arthur, the reasonable but impatient sovereign with an occasional unorthodox request; about the concept of an efficient proof; about polynomials and interpolation, electronic mail, coin flipping, and the in ..."
Abstract

Cited by 18 (3 self)
This is a true fable about Merlin, the infinitely intelligent but never trusted magician; and Arthur, the reasonable but impatient sovereign with an occasional unorthodox request; about the concept of an efficient proof; about polynomials and interpolation, electronic mail, coin flipping, and the incredible power of interaction. About MIP, IP, #P, PSPACE, NEXPTIME, and new techniques that do not relativize. About fast progress, fierce competition, and e-mail ethics.

1 How did Merlin end up in the cave?

In the court of King Arthur there lived 150 knights and 150 ladies. "Why not 150 married couples," the King contemplated one rainy afternoon, and action followed the thought. He asked the Royal Secret Agent (RSA) to draw up a diagram with all the 300 names, indicating bonds of mutual interest between lady and knight by a red line; and the lack thereof, by a blue line. The diagram, with its 150^2 = 22,500 colored lines, looked somewhat confusing, yet it should not confuse Merlin, the court magician, to whom it was subsequently presented by Arthur with the express order to find a perfect matching consisting exclusively of red lines. Merlin walked away, looked at the diagram, and, with his unlimited intellectual ability, immediately recognized that none of the 150! possibilities gave an all-red perfect matching. He quickly completed the 150! diagrams, highlighting the wrong blue line in ...
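The fable's arithmetic checks out; a quick sanity computation (variable names ours):

```python
import math

ladies = knights = 150

# One colored line per lady-knight pair in the RSA's diagram:
lines = ladies * knights              # 150 * 150 = 22,500 colored lines

# A perfect matching pairs each lady with a distinct knight, so Merlin
# must rule out 150! candidate matchings:
matchings = math.factorial(knights)

assert lines == 22_500
assert matchings > 10 ** 262          # far beyond any feasible enumeration
```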
Improving Exhaustive Search Implies Superpolynomial Lower Bounds
, 2009
"... The P vs NP problem arose from the question of whether exhaustive search is necessary for problems with short verifiable solutions. We do not know if even a slight algorithmic improvement over exhaustive search is universally possible for all NP problems, and to date no major consequences have been ..."
Abstract

Cited by 16 (4 self)
The P vs NP problem arose from the question of whether exhaustive search is necessary for problems with short verifiable solutions. We do not know if even a slight algorithmic improvement over exhaustive search is universally possible for all NP problems, and to date no major consequences have been derived from the assumption that an improvement exists. We show that there are natural NP and BPP problems for which minor algorithmic improvements over the trivial deterministic simulation already entail lower bounds such as NEXP ⊄ P/poly and LOGSPACE ≠ NP. These results are especially interesting given that similar improvements have been found for many other hard problems. Optimistically, one might hope our results suggest a new path to lower bounds; pessimistically, they show that carrying out the seemingly modest program of finding slightly better algorithms for all search problems may be extremely difficult (if not impossible). We also prove unconditional superpolynomial time-space lower bounds for improving on exhaustive search: there is a problem verifiable with k(n)-length witnesses in O(n^a) time (for some a and some function k(n) ≤ n) that cannot be solved in k(n)^c · n^{a+o(1)} time and k(n)^c · n^{o(1)} space, for every c ≥ 1. While such problems can always be solved by exhaustive search in O(2^{k(n)} · n^a) time and O(k(n) + n^a) space, we can prove a superpolynomial lower bound in the parameter k(n) when space usage is restricted.
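The exhaustive-search baseline the abstract quantifies, O(2^{k(n)} · n^a) time and O(k(n) + n^a) space, is plain enumeration of all k-bit witnesses. A generic sketch, with subset-sum as a stand-in verifier of our own choosing (the paper does not fix one):

```python
from itertools import product

def exhaustive_search(verify, k: int):
    """Try all 2^k candidate witnesses.

    Time: 2^k calls to verify; space: O(k) for the current candidate,
    plus whatever workspace the verifier itself needs.
    """
    for bits in product((0, 1), repeat=k):
        if verify(bits):
            return bits
    return None

# Stand-in verifier (our example): does some subset of nums sum to target?
nums, target = [3, 5, 8, 13], 16
witness = exhaustive_search(
    lambda bits: sum(x for x, b in zip(nums, bits) if b) == target,
    k=len(nums),
)
assert witness is not None  # 3 + 13 = 16
```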
Nonuniform ACC circuit lower bounds
, 2010
"... The class ACC consists of circuit families with constant depth over unbounded fanin AND, OR, NOT, and MODm gates, where m> 1 is an arbitrary constant. We prove: • NTIME[2 n] does not have nonuniform ACC circuits of polynomial size. The size lower bound can be slightly strengthened to quasipolynom ..."
Abstract

Cited by 16 (4 self)
The class ACC consists of circuit families with constant depth over unbounded fan-in AND, OR, NOT, and MODm gates, where m > 1 is an arbitrary constant. We prove:

• NTIME[2^n] does not have nonuniform ACC circuits of polynomial size. The size lower bound can be slightly strengthened to quasipolynomials and other less natural functions.

• E^NP, the class of languages recognized in 2^{O(n)} time with an NP oracle, doesn't have nonuniform ACC circuits of 2^{n^{o(1)}} size. The lower bound gives an exponential size-depth tradeoff: for every d there is a δ > 0 such that E^NP doesn't have depth-d ACC circuits of size 2^{n^δ}.

Previously, it was not known whether EXP^NP had depth-3 polynomial-size circuits made out of only MOD6 gates. The high-level strategy is to design faster algorithms for the circuit satisfiability problem over ACC circuits, then prove that such algorithms entail the above lower bounds. The algorithm combines known properties of ACC with fast rectangular matrix multiplication and dynamic programming, while the second step requires a subtle strengthening of the author's prior work [STOC'10]. Supported by the Josef Raviv Memorial Fellowship.
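For readers unfamiliar with the model, a MODm gate fires according to the count of 1s among its inputs modulo m (conventions differ on whether the accepting condition is "divisible" or "not divisible"; the toy sketch below, names ours, uses the former):

```python
def mod_gate(m: int, bits) -> int:
    """MODm gate: outputs 1 iff the number of 1-inputs is divisible by m."""
    return int(sum(bits) % m == 0)

def and_gate(bits) -> int:
    """Unbounded fan-in AND gate."""
    return int(all(bits))

# Toy depth-2 ACC-style circuit: an unbounded fan-in AND of MOD6 gates,
# each reading its own block of input bits.
def toy_circuit(blocks) -> int:
    return and_gate(mod_gate(6, block) for block in blocks)
```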
On Gödel's theorems on lengths of proofs II: Lower bounds for recognizing k symbol provability
 in Feasible Mathematics II, P. Clote and
, 1995
"... ..."
A Survey of Lower Bounds for Satisfiability and Related Problems
 Foundations and Trends in Theoretical Computer Science
, 2007
"... Ever since the fundamental work of Cook from 1971, satisfiability has been recognized as a central problem in computational complexity. It is widely believed to be intractable, and yet till recently even a lineartime, logarithmicspace algorithm for satisfiability was not ruled out. In 1997 Fortnow ..."
Abstract

Cited by 12 (1 self)
Ever since the fundamental work of Cook from 1971, satisfiability has been recognized as a central problem in computational complexity. It is widely believed to be intractable, and yet until recently even a linear-time, logarithmic-space algorithm for satisfiability was not ruled out. In 1997 Fortnow, building on earlier work by Kannan, ruled out such an algorithm. Since then there has been a significant amount of progress giving nontrivial lower bounds on the computational complexity of satisfiability. In this article we survey the known lower bounds for the time and space complexity of satisfiability and closely related problems on deterministic, randomized, and quantum models with random access. We discuss the state-of-the-art results and present the underlying arguments in a unified framework.