Results 1 – 10 of 35
Proving lower bounds via pseudorandom generators
 FSTTCS 2005: Foundations of Software Technology and Theoretical Computer Science, 25th International Conference, Hyderabad, India, December 15–18, 2005, Proceedings, volume 3821 of Lecture Notes in Computer Science
, 2005
Abstract

Cited by 31 (1 self)
Abstract. In this paper, we formalize two stepwise approaches, based on pseudorandom generators, for proving P ≠ NP and its arithmetic analog: the Permanent requires superpolynomial-size arithmetic circuits.
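The hardness-versus-randomness connection underlying such approaches can be sketched in its generic form (a standard fact about PRGs and circuits, not necessarily this paper's exact formulation):

```latex
\[
\Bigl|\;\Pr_{x \in \{0,1\}^{c\log n}}\bigl[C(G_n(x))=1\bigr]
  \;-\;\Pr_{y \in \{0,1\}^{n}}\bigl[C(y)=1\bigr]\;\Bigr| \;<\; \tfrac{1}{4}
\qquad \text{for every circuit } C \text{ of size } n.
\]
```

If an explicit family $G_n:\{0,1\}^{c\log n}\to\{0,1\}^n$ achieves this, then no size-$n$ circuit decides range membership $R = \{\,y : \exists x,\ G_{|y|}(x)=y\,\}$: such a circuit would accept every $G_n(x)$ with probability $1$ yet accept a uniform $y$ with probability at most $2^{c\log n}/2^{n}$, distinguishing with advantage near $1$. An explicit generator thus yields an explicit hard language, and conversely hardness yields generators.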
Algebrization: A new barrier in complexity theory
 MIT Theory of Computing Colloquium
, 2007
Abstract

Cited by 30 (2 self)
Any proof of P ≠ NP will have to overcome two barriers: relativization and natural proofs. Yet over the last decade, we have seen circuit lower bounds (for example, that PP does not have linear-size circuits) that overcome both barriers simultaneously. So the question arises of whether there is a third barrier to progress on the central questions in complexity theory. In this paper we present such a barrier, which we call algebraic relativization or algebrization. The idea is that, when we relativize some complexity class inclusion, we should give the simulating machine access not only to an oracle A, but also to a low-degree extension of A over a finite field or ring. We systematically go through basic results and open problems in complexity theory to delineate the power of the new algebrization barrier. First, we show that all known non-relativizing results based on arithmetization (both inclusions such as IP = PSPACE and MIP = NEXP, and separations such as MA_EXP ⊄ P/poly) do indeed algebrize. Second, we show that almost all of the major open problems, including P versus NP, P versus RP, and NEXP versus P/poly, will require non-algebrizing techniques. In some cases algebrization seems to explain exactly why progress stopped where it did: for example, why we have superlinear circuit lower bounds for PromiseMA but not for NP. Our second set of results follows from lower bounds in a new model of algebraic query complexity, which we introduce in this paper and which is interesting in its own right. Some of our lower bounds use direct combinatorial and algebraic arguments, while others stem from a surprising connection between our model and communication complexity. Using this connection, we are also able to give an MA protocol for the Inner Product function with O(√n log n) communication (essentially matching a lower bound of Klauck), as well as a communication complexity conjecture whose truth would imply NL ≠ NP.
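The central definition sketched in the abstract can be stated compactly (following the paper's framework; the asymmetry between the two sides is the point):

```latex
\[
\mathcal{C} \subseteq \mathcal{D} \ \text{algebrizes} \iff
\mathcal{C}^{A} \subseteq \mathcal{D}^{\tilde{A}}
\quad \text{for every oracle } A \text{ and every low-degree extension } \tilde{A} \text{ of } A,
\]
\[
\mathcal{C} \not\subseteq \mathcal{D} \ \text{algebrizes} \iff
\mathcal{C}^{\tilde{A}} \not\subseteq \mathcal{D}^{A}
\quad \text{for every such } A \text{ and } \tilde{A}.
\]
```

Because only the simulating (or separated-against) machine receives the algebraic oracle $\tilde{A}$, arithmetization-based results such as IP = PSPACE pass the test, while the major open separations provably do not.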
Time-Space Tradeoffs for Satisfiability
 Journal of Computer and System Sciences
, 1997
Abstract

Cited by 29 (1 self)
We give the first nontrivial model-independent time-space tradeoffs for satisfiability. Namely, we show that SAT cannot be solved simultaneously in n^{1+o(1)} time and n^{1−ε} space for any ε > 0 on general random-access nondeterministic Turing machines. In particular, SAT cannot be solved deterministically by a Turing machine using quasilinear time and √n space. We also give lower bounds for logspace-uniform NC^1 circuits and branching programs. Our proof uses two basic ideas. First we show that if SAT can be solved nondeterministically with a small amount of time then we can collapse a nonconstant number of levels of the polynomial-time hierarchy. We combine this work with a result of Nepomnjascii that shows that a nondeterministic computation of superlinear time and sublinear space can be simulated in alternating linear time. A simple diagonalization yields our main result. We discuss how these bounds lead to a new approach to separating the complexity classes NL a...
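In symbols, writing TISP(t, s) for the class solvable simultaneously in time t and space s on the machines considered, the main tradeoff reads:

```latex
\[
\mathrm{SAT} \;\notin\; \mathrm{TISP}\bigl(n^{1+o(1)},\, n^{1-\varepsilon}\bigr)
\qquad \text{for every fixed } \varepsilon > 0,
\]
```

with the stated deterministic special case (quasilinear time together with $\sqrt{n}$ space) following as a corollary.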
NP Might Not Be As Easy As Detecting Unique Solutions
, 1998
Abstract

Cited by 23 (6 self)
We construct an oracle A such that P^A = ⊕P^A and NP^A = EXP^A. This relativized world has several amazing properties:
• The oracle A gives the first relativized world where one can solve satisfiability on formulae with at most one assignment yet P ≠ NP.
• The oracle A is the first where P^A = UP^A ≠ NP^A = coNP^A.
• The construction gives a much simpler proof than Fenner, Fortnow and Kurtz of a relativized world where all NP-complete sets are polynomial-time isomorphic. It is the first such computable oracle.
• Relative to A we have a collapse of ⊕EXP^A ⊆ ZPP^A ⊆ P^A/poly.
We also create a different relativized world where there exists a set L in NP that is NP-complete under reductions that make one query to L but not under traditional many-one reductions. This contrasts with the result of Buhrman, Spaan and Torenvliet showing that these two completeness notions for NEXP coincide.

1 Introduction

Valiant and Vazirani [VV86] show the sur...
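The truncated final sentence refers to the Valiant–Vazirani theorem, which this oracle result plays against. In one standard form (the exact success probability varies by presentation):

```latex
\[
\exists\ \text{randomized poly-time } f:\quad
\varphi \in \mathrm{SAT} \;\Longrightarrow\;
\Pr\bigl[f(\varphi)\ \text{has exactly one satisfying assignment}\bigr] \;\ge\; \tfrac{1}{4n},
\]
\[
\varphi \notin \mathrm{SAT} \;\Longrightarrow\;
f(\varphi)\ \text{is unsatisfiable with probability } 1.
\]
```

So detecting unique solutions suffices for all of NP under randomized reductions; the oracle A above shows that no relativizing deterministic analogue can exist.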
One Complexity Theorist's View of Quantum Computing
 THEORETICAL COMPUTER SCIENCE
, 2000
Abstract

Cited by 22 (0 self)
The complexity of quantum computation remains poorly understood. While physicists attempt to find ways to create quantum computers, we still do not have much evidence one way or the other as to how useful these machines will be. The tools of computational complexity theory should come to bear on these important questions.
Nondeterministic Polynomial Time versus Nondeterministic Logarithmic Space
 In Proceedings, Twelfth Annual IEEE Conference on Computational Complexity
, 1996
Abstract

Cited by 22 (1 self)
We discuss the possibility of using the relatively old technique of diagonalization to separate complexity classes, in particular NL from NP. We show several results in this direction.
• Any nonconstant level of the polynomial-time hierarchy strictly contains NL.
• SAT is not simultaneously in NL and deterministic n log^j n time for any j.
• On the negative side, we present a relativized world where P = NP but any nonconstant level of the polynomial-time hierarchy differs from P.

1 Introduction

Separating complexity classes remains the most important and difficult of problems in theoretical computer science. Circuit complexity and other techniques on finite functions have seen some exciting early successes (see the survey of Boppana and Sipser [BS90]) but have yet to achieve their promise of separating complexity classes above logarithmic space. Other techniques based on logic and geometry also have given us separations only on very restricted models. We should turn back to...
Easy sets and hard certificate schemes
 Acta Informatica
, 1997
Abstract

Cited by 16 (4 self)
Can easy sets only have easy certificate schemes? In this paper, we study the class of sets that, for all NP certificate schemes (i.e., NP machines), always have easy acceptance certificates (i.e., accepting paths) that can be computed in polynomial time. We also study the class of sets that, for all NP certificate schemes, infinitely often have easy acceptance certificates. In particular, we provide equivalent characterizations of these classes in terms of relative generalized Kolmogorov complexity, showing that they are robust. We also provide structural conditions, regarding immunity and class collapses, that put upper and lower bounds on the sizes of these two classes. Finally, we provide negative results showing that some of our positive claims are optimal with regard to being relativizable. Our negative results are proven using a novel observation: we show that the classical “wide spacing” oracle construction technique yields instant non-bi-immunity results. Furthermore, we establish a result that improves upon Baker, Gill, and Solovay’s classical result that NP ≠ P = NP ∩ coNP holds in some relativized world.
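One natural way to state the first of the two classes (the notation here is illustrative, not necessarily the paper's):

```latex
\[
A \ \text{has always-easy certificates} \iff
\forall\ \text{NP machines } N \text{ with } L(N)=A\ \
\exists\ \text{poly-time } g\ \
\forall x \in A:\ g(x) \text{ is an accepting path of } N(x),
\]
```

with the second class obtained by weakening "for all $x \in A$" to "for infinitely many $x \in A$".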
Relativizing versus nonrelativizing techniques: The role of local checkability
 UNIVERSITY OF CALIFORNIA, BERKELEY
, 1992
Abstract

Cited by 11 (1 self)
Contradictory oracle results have traditionally been interpreted as giving some evidence that resolving a complexity issue is difficult. However, for quite a while, there have been a few known complexity results that do not hold for every oracle, at least in the most obvious way of relativizing the results. In the early 1990s, a sequence of important non-relativizing results concerning “noncontroversially relativizable” complexity classes was proved, mainly using algebraic techniques. Although the techniques used to obtain these results seem similar in flavor, it is not clear what common features of complexity they are exploiting. It is also not clear to what extent oracle results should be trusted as a guide to estimating the difficulty of proving complexity statements, in light of these non-relativizing techniques. The results in this paper are intended to shed some light on these issues. First, we give a list of simple axioms based on Cobham’s machineless characterization of P [Cob64]. We show that a complexity statement (provably) holds relative to all oracles if and only if it is a consequence of these axioms. Thus, these axioms in some sense capture the set of techniques that relativize. Oracle results, while not necessarily showing that resolving a complexity conjecture is “beyond current technology”, at least show that the result is...
Proof-Carrying Data and Hearsay Arguments from Signature Cards
Abstract

Cited by 9 (6 self)
Design of secure systems can often be expressed as ensuring that some property is maintained at every step of a distributed computation among mutually untrusting parties. Special cases include integrity of programs running on untrusted platforms, various forms of confidentiality and side-channel resilience, and domain-specific invariants. We propose a new approach, proof-carrying data (PCD), which circumnavigates the threat of faults and leakage by reasoning about properties of the output data, independently of the preceding computation. In PCD, the system designer prescribes the desired properties of the computation’s outputs. Corresponding proofs are attached to every message flowing through the system, and are mutually verified by the system’s components. Each such proof attests that the message’s data and all of its history comply with the specified properties. We construct a general protocol compiler that generates, propagates and verifies such proofs of compliance, while preserving the dynamics and efficiency of the original computation. Our main technical tool is the cryptographic construction of short non-interactive arguments (computationally sound proofs) for statements whose truth depends on “hearsay evidence”: previous arguments about other statements. To this end, we attain a particularly strong proof of knowledge. We realize the above, under standard cryptographic assumptions, in a model where the prover has black-box access to some simple functionality: essentially, a signature card.
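The PCD dataflow described above can be sketched with a toy stand-in. Everything here is illustrative: the compliance predicate is invented, and a plain hash chain replaces the succinct non-interactive arguments the paper actually constructs; the point is only the shape of the protocol (every message carries a proof covering its payload and its entire history).

```python
import hashlib

def compliant(payload):
    # Hypothetical compliance predicate, chosen only for illustration:
    # payloads must be non-negative integers.
    return isinstance(payload, int) and payload >= 0

def attach_proof(payload, prior_proofs):
    # A node may emit a message only if its own payload is compliant;
    # the emitted "proof" binds the payload to the proofs of all
    # incoming messages, so it transitively covers the full history.
    assert compliant(payload)
    h = hashlib.sha256(str(payload).encode())
    for p in sorted(prior_proofs):
        h.update(bytes.fromhex(p))
    return h.hexdigest()

def verify(payload, proof, prior_proofs):
    # A receiver recomputes the proof before using the message.
    try:
        return attach_proof(payload, prior_proofs) == proof
    except AssertionError:
        return False
```

A source node calls `attach_proof(x, [])`; each downstream node verifies every incoming proof, computes, and attaches a fresh proof over its output and the incoming proofs. In the real construction the hash chain is replaced by an argument of knowledge, so verification stays short even as the history grows.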
Separating Complexity Classes using Autoreducibility
, 1998
Abstract

Cited by 8 (1 self)
A set is autoreducible if it can be reduced to itself by a Turing machine that does not ask its own input to the oracle. We use autoreducibility to separate the polynomial-time hierarchy from exponential space by showing that all Turing-complete sets for certain levels of the exponential-time hierarchy are autoreducible but there exists some Turing-complete set for doubly exponential space that is not. Although we already knew how to separate these classes using diagonalization, our proofs separate classes solely by showing they have different structural properties, thus applying Post's Program to complexity theory. We feel such techniques may prove unknown separations in the future. In particular, if we could settle the question as to whether all Turing-complete sets for doubly exponential time are autoreducible, we would separate either polynomial time from polynomial space and nondeterministic logarithmic space from nondeterministic polynomial time, or else the polynomial...
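The definition in the first sentence is easy to make concrete. A toy example (mine, not from the paper): the set of bit strings with an even number of ones is autoreducible, since membership of x can be decided by querying the oracle only on a different string, namely x with its first bit flipped.

```python
def parity_oracle(x):
    # The set being reduced to itself: strings with an even number of 1s.
    return x.count("1") % 2 == 0

def autoreduce(x, oracle):
    # Decide membership of x while never querying the oracle on x itself:
    # flip the first bit (changing parity) and negate the oracle's answer.
    assert len(x) >= 1
    flipped = ("0" if x[0] == "1" else "1") + x[1:]
    assert flipped != x  # the defining restriction of autoreducibility
    return not oracle(flipped)
```

The separations in the paper rest on showing that complete sets for some classes always admit such self-reductions while a complete set for a larger class does not.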