Results 1–10 of 22
COMPUTATIONALLY SOUND PROOFS
, 2000
Abstract

Cited by 92 (3 self)
This paper puts forward a new notion of a proof based on computational complexity and explores its implications for computation at large. Computationally sound proofs provide, in a novel and meaningful framework, answers to old and new questions in complexity theory. In particular, given a random oracle or a new complexity assumption, they enable us to: (1) prove that verifying is easier than deciding for all theorems; (2) provide a quite effective way to prove membership in computationally hard languages (such as coNP-complete ones); and (3) show that every computation possesses a short certificate vouching for its correctness. Finally, if a special type of computationally sound proof exists, we show that Blum's notion of program checking can be meaningfully broadened so as to prove that NP-complete languages are checkable.
The quantitative structure of exponential time
 Complexity theory retrospective II
, 1997
Abstract

Cited by 90 (13 self)
Recent results on the internal, measure-theoretic structure of the exponential time complexity classes E and EXP are surveyed. The measure structure of these classes is seen to interact in informative ways with bi-immunity, complexity cores, polynomial-time reductions, completeness, circuit-size complexity, Kolmogorov complexity, natural proofs, pseudorandom generators, the density of hard languages, randomized complexity, and lowness. Possible implications for the structure of NP are also discussed.
Cook versus Karp-Levin: Separating Completeness Notions If NP Is Not Small
 Theoretical Computer Science
, 1992
Abstract

Cited by 56 (12 self)
Under the hypothesis that NP does not have p-measure 0 (roughly, that NP contains more than a negligible subset of exponential time), it is shown that there is a language that is polynomial-time Turing complete ("Cook complete"), but not polynomial-time many-one complete ("Karp-Levin complete"), for NP. This conclusion, widely believed to be true, is not known to follow from P ≠ NP or other traditional complexity-theoretic hypotheses. Evidence is presented that "NP does not have p-measure 0" is a reasonable hypothesis with many credible consequences. Additional such consequences proven here include the separation of many truth-table reducibilities in NP (e.g., k queries versus k+1 queries), the class separation E ≠ NE, and the existence of NP search problems that are not reducible to the corresponding decision problems.
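As a compact restatement of this abstract's main implication (notation chosen here, not taken from the listing: μ_p denotes p-measure, ≤^P_T and ≤^P_m the polynomial-time Turing and many-one reducibilities):

```latex
% Main implication, as paraphrased from the abstract:
\mu_p(\mathrm{NP}) \neq 0 \;\Longrightarrow\;
\exists L \in \mathrm{NP}:\;
L \text{ is } \le^{P}_{T}\text{-complete but not } \le^{P}_{m}\text{-complete for } \mathrm{NP}.
```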
On Pseudorandomness and Resource-Bounded Measure
 Theoretical Computer Science
, 1997
Abstract

Cited by 42 (3 self)
In this paper we extend a key result of Nisan and Wigderson [17] to the nondeterministic setting: for all α > 0 we show that if there is a language in E = DTIME(2^{O(n)}) that is hard to approximate by nondeterministic circuits of size 2^{αn}, then there is a pseudorandom generator that can be used to derandomize BP·NP (in symbols, BP·NP = NP). By applying this extension we are able to answer some open questions in [14] regarding the derandomization of the classes BP·Σ^p_k and BP·Θ^p_k under plausible measure-theoretic assumptions. As a consequence, if Θ^p_2 does not have p-measure 0, then AM ∩ coAM is low for Θ^p_2. Thus, in this case, the graph isomorphism problem is low for Θ^p_2. By using the Nisan-Wigderson design of a pseudorandom generator we unconditionally show the inclusion MA ⊆ ZPP^NP and that MA ∩ coMA is low for ZPP^NP.
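In symbols, the extended Nisan-Wigderson implication described above can be sketched as follows (a paraphrase of the abstract, for every α > 0, with E = DTIME(2^{O(n)})):

```latex
% Hardness against nondeterministic circuits yields derandomization of BP.NP:
\exists L \in \mathrm{E} \text{ hard to approximate by nondeterministic circuits of size } 2^{\alpha n}
\;\Longrightarrow\; \mathrm{BP}\!\cdot\!\mathrm{NP} = \mathrm{NP}.
```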
P-Selective Sets, and Reducing Search to Decision vs. Self-Reducibility
, 1993
Abstract

Cited by 39 (9 self)
We obtain several results that distinguish self-reducibility of a language L from the question of whether search reduces to decision for L. These include: (i) If NE ≠ E, then there exists a set L in NP − P such that search reduces to decision for L, search does not nonadaptively reduce to decision for L, and L is not self-reducible.
Statistical zero-knowledge proofs with efficient provers: Lattice problems and more
 In CRYPTO
, 2003
Abstract

Cited by 39 (9 self)
We construct several new statistical zero-knowledge proofs with efficient provers, i.e. ones where the prover strategy runs in probabilistic polynomial time given an NP witness for the input string. Our first proof systems are for approximate versions of the Shortest Vector Problem (SVP) and Closest Vector Problem (CVP), where the witness is simply a short vector in the lattice or a lattice vector close to the target, respectively. Our proof systems are in fact proofs of knowledge, and as a result, we immediately obtain efficient lattice-based identification schemes which can be implemented with arbitrary families of lattices in which the approximate SVP or CVP are hard. We then turn to the general question of whether all problems in SZK ∩ NP admit statistical zero-knowledge proofs with efficient provers. Towards this end, we give a statistical zero-knowledge proof system with an efficient prover for a natural restriction of Statistical Difference, a complete problem for SZK. We also suggest a plausible approach to resolving the general question in the positive.
Subexponential Parameterized Algorithms Collapse the W-hierarchy (Extended Abstract)
, 2001
Abstract

Cited by 36 (2 self)
It is shown that for essentially all MAX SNP-hard optimization problems, finding exact solutions in subexponential time is not possible unless W[1] = FPT. In particular, we show that O(2^{o(k)} p(n)) parameterized algorithms do not exist for Vertex Cover, Max Cut, Max c-Sat, and a number of problems on bounded-degree graphs such as Dominating Set and Independent Set, unless W[1] = FPT. Our results are derived via an approach that uses an extended parameterization of optimization problems and associated techniques to relate the parameterized complexity of problems in FPT to the parameterized complexity of extended versions that are W[1]-hard.
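The claim above can be read in contrapositive form, sketched here for one of the listed problems (notation assumed for illustration: k the parameter, p a polynomial):

```latex
% A subexponential-in-k exact parameterized algorithm for any of the listed
% MAX SNP-hard problems would collapse W[1] to FPT:
\mathrm{VertexCover} \in \mathrm{TIME}\!\left(2^{o(k)} \, p(n)\right)
\;\Longrightarrow\; W[1] = \mathrm{FPT}.
```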
Relative to a random oracle, NP is not small
 In Proc. 9th Structures
, 1994
Abstract

Cited by 18 (1 self)
Resource-bounded measure, as originated by Lutz, is an extension of classical measure theory which provides a probabilistic means of describing the relative sizes of complexity classes. Lutz has proposed the hypothesis that NP does not have p-measure zero, meaning loosely that NP contains a non-negligible subset of exponential time. This hypothesis implies a strong separation of P from NP and is supported by a growing body of plausible consequences which are not known to follow from the weaker assertion P ≠ NP. It is shown in this paper that relative to a random oracle, NP does not have p-measure zero. The proof exploits the following independence property of algorithmically random sequences: if A is an algorithmically random sequence and a subsequence A0 is chosen by means of a bounded Kolmogorov-Loveland ...
Hardness hypotheses, derandomization, and circuit complexity
 In Proceedings of the 24th Conference on Foundations of Software Technology and Theoretical Computer Science
, 2004
Abstract

Cited by 18 (5 self)
We consider hypotheses about nondeterministic computation that have been studied in different contexts and shown to have interesting consequences:
* The measure hypothesis: NP does not have p-measure 0.
* The pseudo-NP hypothesis: there is an NP language that can be distinguished from any DTIME(2^{n^ε}) language by an NP refuter.
* The NP-machine hypothesis: there is an NP machine accepting 0* for which no 2^{n^ε}-time machine can find infinitely many accepting computations.
We show that the NP-machine hypothesis is implied by each of the first two. Previously, no relationships were known among these three hypotheses. Moreover, we unify previous work by showing that several derandomizations and circuit-size lower bounds that are known to follow from the first two hypotheses also follow from the NP-machine hypothesis. In particular, the NP-machine hypothesis becomes the weakest known uniform hardness hypothesis that derandomizes AM. We also consider UP versions of the above hypotheses as well as related immunity and scaled dimension hypotheses.
A Taxonomy of Proof Systems
 BASIC RESEARCH IN COMPUTER SCIENCE, CENTER OF THE DANISH NATIONAL RESEARCH FOUNDATION
, 1997
Abstract

Cited by 14 (2 self)
Several alternative formulations of the concept of an efficient proof system are nowadays coexisting in our field. These systems include the classical formulation of NP, interactive proof systems (giving rise to the class IP), computationally-sound proof systems, and probabilistically checkable proofs (PCP), which are closely related to multi-prover interactive proofs (MIP). Although these notions are sometimes introduced using the same generic phrases, they are actually very different in motivation, applications and expressive power. The main objective of this essay is to try to clarify these differences.