Results 1–10 of 27
Nonuniform ACC circuit lower bounds
, 2010
Abstract

Cited by 19 (4 self)
The class ACC consists of circuit families with constant depth over unbounded fan-in AND, OR, NOT, and MOD_m gates, where m > 1 is an arbitrary constant. We prove: • NTIME[2^n] does not have nonuniform ACC circuits of polynomial size. The size lower bound can be slightly strengthened to quasi-polynomials and other less natural functions. • E^NP, the class of languages recognized in 2^{O(n)} time with an NP oracle, does not have nonuniform ACC circuits of 2^{n^{o(1)}} size. The lower bound gives an exponential size-depth tradeoff: for every d there is a δ > 0 such that E^NP does not have depth-d ACC circuits of size 2^{n^δ}. Previously, it was not known whether EXP^NP had depth-3 polynomial-size circuits made out of only MOD_6 gates. The high-level strategy is to design faster algorithms for the circuit satisfiability problem over ACC circuits, then prove that such algorithms entail the above lower bounds. The algorithm combines known properties of ACC with fast rectangular matrix multiplication and dynamic programming, while the second step requires a subtle strengthening of the author's prior work [STOC'10]. Supported by the Josef Raviv Memorial Fellowship.
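For orientation only (an illustration, not from the paper), the gate set of an ACC circuit is easy to state concretely. A minimal sketch, assuming one common convention in which a MOD_m gate outputs 1 iff the number of 1s among its inputs is divisible by m:

```python
# Hedged sketch (not the paper's construction): evaluating the gate
# types that make up an ACC circuit: AND, OR, NOT, and MOD_m.
# Convention assumed here: MOD_m outputs 1 iff the count of 1-inputs
# is divisible by m.

def eval_gate(kind, m, inputs):
    """Evaluate one unbounded fan-in gate on a list of 0/1 inputs."""
    s = sum(inputs)
    if kind == "AND":
        return int(s == len(inputs))
    if kind == "OR":
        return int(s > 0)
    if kind == "NOT":
        return 1 - inputs[0]  # fan-in 1
    if kind == "MOD":
        return int(s % m == 0)
    raise ValueError(f"unknown gate kind: {kind}")

# A MOD_6 gate on six 1s fires; on four 1s it does not.
print(eval_gate("MOD", 6, [1, 1, 1, 1, 1, 1]))  # 1
print(eval_gate("MOD", 6, [1, 1, 1, 1, 0, 0]))  # 0
```

Conventions for MOD gates differ across papers (some negate the divisibility test), which affects constants but not the class ACC itself.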
Hardness hypotheses, derandomization, and circuit complexity
Abstract

Cited by 18 (5 self)
We consider hypotheses about nondeterministic computation that have been studied in different contexts and shown to have interesting consequences: • The measure hypothesis: NP does not have p-measure 0. • The pseudo-NP hypothesis: there is an NP language that can be distinguished from any DTIME(2^{n^ε}) language by an NP refuter. • The NP-machine hypothesis: there is an NP machine accepting 0* for which no 2^{n^ε}-time machine can find infinitely many accepting computations. We show that the NP-machine hypothesis is implied by each of the first two. Previously, no relationships were known among these three hypotheses. Moreover, we unify previous work by showing that several derandomizations and circuit-size lower bounds that are known to follow from the first two hypotheses also follow from the NP-machine hypothesis. In particular, the NP-machine hypothesis becomes the weakest known uniform hardness hypothesis that derandomizes AM. We also consider UP versions of the above hypotheses as well as related immunity and scaled dimension hypotheses.
Improving Exhaustive Search Implies Superpolynomial Lower Bounds
, 2009
Abstract

Cited by 18 (4 self)
The P vs NP problem arose from the question of whether exhaustive search is necessary for problems with short verifiable solutions. We do not know if even a slight algorithmic improvement over exhaustive search is universally possible for all NP problems, and to date no major consequences have been derived from the assumption that an improvement exists. We show that there are natural NP and BPP problems for which minor algorithmic improvements over the trivial deterministic simulation already entail lower bounds such as NEXP ⊈ P/poly and LOGSPACE ≠ NP. These results are especially interesting given that similar improvements have been found for many other hard problems. Optimistically, one might hope our results suggest a new path to lower bounds; pessimistically, they show that carrying out the seemingly modest program of finding slightly better algorithms for all search problems may be extremely difficult (if not impossible). We also prove unconditional superpolynomial time-space lower bounds for improving on exhaustive search: there is a problem verifiable with k(n)-length witnesses in O(n^a) time (for some a and some function k(n) ≤ n) that cannot be solved in k(n)^c · n^{a+o(1)} time and k(n)^c · n^{o(1)} space, for every c ≥ 1. While such problems can always be solved by exhaustive search in O(2^{k(n)} · n^a) time and O(k(n) + n^a) space, we can prove a superpolynomial lower bound in the parameter k(n) when space usage is restricted.
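As a concrete baseline (an illustration, not the paper's algorithm), the "trivial" exhaustive search the abstract measures against simply runs the verifier on every k-bit witness, giving the O(2^k · poly) time and O(k) extra space the bounds refer to. A minimal sketch with a toy, hypothetical verifier:

```python
# Hedged illustration (not from the paper): brute-force search over
# all k-bit witnesses against a given verifier. Time O(2^k * cost of
# verify); extra space O(k) beyond the verifier's own usage.
from itertools import product

def exhaustive_search(verify, k):
    """Return the first k-bit witness accepted by `verify`, or None."""
    for bits in product((0, 1), repeat=k):  # lexicographic order
        if verify(bits):
            return bits
    return None

# Toy verifier (hypothetical): accepts witnesses whose first bit is 1
# and whose parity is odd.
witness = exhaustive_search(lambda w: w[0] == 1 and sum(w) % 2 == 1, k=4)
print(witness)  # (1, 0, 0, 0)
```

The paper's point is that beating this baseline even slightly, for natural problems, would already imply superpolynomial circuit lower bounds.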
If NP languages are hard on the worstcase then it is easy to find their hard instances
 PROCEEDINGS OF THE 20TH ANNUAL CONFERENCE ON COMPUTATIONAL COMPLEXITY (CCC)
, 2005
Abstract

Cited by 16 (5 self)
We prove that if NP ⊈ BPP, i.e., if some NP-complete language is worst-case hard, then for every probabilistic algorithm trying to decide the language, there exists some polynomially samplable distribution that is hard for it. That is, the algorithm often errs on inputs from this distribution. This is the first worst-case to average-case reduction for NP of any kind. We stress, however, that this does not mean that there exists one fixed samplable distribution that is hard for all probabilistic polynomial-time algorithms, which is a prerequisite assumption needed for OWF and cryptography (even if not a sufficient assumption). Nevertheless, we do show that there is a fixed distribution on instances of NP-complete languages that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial-time algorithms (unless NP is easy in the worst case). Our results are based on the following lemma that may be of independent interest: Given the description of an efficient (probabilistic) algorithm that fails to solve SAT in the worst case, we can efficiently generate at most three Boolean formulas (of increasing …
Verifying and decoding in constant depth
 In Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing
, 2007
Abstract

Cited by 13 (3 self)
We develop a general approach for improving the efficiency of a computationally bounded receiver interacting with a powerful and possibly malicious sender. The key idea we use is that of delegating some of the receiver's computation to the (potentially malicious) sender. This idea was recently introduced by Goldwasser et al. [14] in the area of program checking. A classic example of such a sender-receiver setting is interactive proof systems. By taking the sender to be a (potentially malicious) prover and the receiver to be a verifier, we show that (p-prover) interactive proofs with k rounds of interaction are equivalent to (p-prover) interactive proofs with k + O(1) rounds, where the verifier is in NC^0. That is, each round of the verifier's computation can be implemented in constant parallel time. As a corollary, we obtain interactive proof systems, with (optimally) constant soundness, for languages in AM and NEXP, where the verifier runs in constant parallel time. Another, less immediate sender-receiver setting arises in considering error-correcting codes. By taking the sender to be a (potentially corrupted) codeword and the receiver to be a decoder, we obtain explicit families of codes that are locally (list-)decodable by constant-depth circuits of size polylogarithmic in the length of the codeword. Using the tight connection between locally list-decodable codes and average-case complexity, we obtain a new, more efficient, worst-case to average-case reduction for languages in EXP.
A Short History of Computational Complexity
 IEEE CONFERENCE ON COMPUTATIONAL COMPLEXITY
, 2002
Abstract

Cited by 11 (1 self)
… this article mention all of the amazing research in computational complexity theory. We survey various areas in complexity, choosing papers more for their historical value than necessarily the importance of the results. We hope that this gives an insight into the richness and depth of this still quite young field.
Graph Isomorphism is Low for ZPP(NP) and other Lowness results
, 2000
Abstract

Cited by 7 (0 self)
We show the following new lowness results for the probabilistic class ZPP^NP. • The class AM ∩ coAM is low for ZPP^NP. As a consequence it follows that Graph Isomorphism and several group-theoretic problems known to be in AM ∩ coAM are low for ZPP^NP. • The class IP[P/poly], consisting of sets that have interactive proof systems with honest provers in P/poly, is also low for ZPP^NP. We consider lowness properties of nonuniform function classes, namely, NPMV/poly, NPSV/poly, NPMV_t/poly, and NPSV_t/poly. Specifically, we show that • Sets whose characteristic functions are in NPSV/poly and that have program checkers (in the sense of Blum and Kannan [8]) are low for AM and ZPP^NP. • Sets whose characteristic functions are in NPMV_t/poly are low for Σ_2^p.
WorstCase to AverageCase Reductions Revisited
Abstract

Cited by 5 (0 self)
A fundamental goal of computational complexity (and foundations of cryptography) is to find a polynomial-time samplable distribution (e.g., the uniform distribution) and a language in NTIME(f(n)) for some polynomial function f, such that the language is hard on the average with respect to this distribution, given that NP is worst-case hard (i.e., NP ≠ P, or NP ⊈ BPP). Currently, no such result is known even if we relax the language to be in nondeterministic subexponential time. There has been a long line of research trying to explain our failure in proving such worst-case/average-case connections [FF93,Vio03,BT03,AGGM06]. The bottom line of this research is essentially that (under plausible assumptions) nonadaptive Turing reductions cannot prove such results. In this paper we revisit the problem. Our first observation is that the above-mentioned negative arguments extend to a nonstandard notion of average-case complexity, in which the distribution on the inputs, with respect to which we measure the average-case complexity of the language, is only samplable in superpolynomial time. The significance of this result stems from the fact that in this nonstandard setting, [GSTS05] did show a worst-case/average-case connection. In other words, their techniques give a way to bypass the impossibility arguments. By taking a closer look at the proof of [GSTS05], we discover that the worst-case/average-case connection is proven by a reduction that "almost" falls under the category ruled out by the negative result. This gives rise to an intriguing new notion of (almost black-box) reductions. After extending the negative results to the nonstandard average-case setting of [GSTS05], we ask whether their positive result can be extended to the standard setting, to prove some new worst-case/average-case connections. While we cannot do that unconditionally, we are able to show that under a mild derandomization assumption, the worst-case hardness of NP implies the average-case hardness of NTIME(f(n)) (under the uniform distribution) where f is computable in quasi-polynomial time.
New Lowness Results for ZPP^NP and other Complexity Classes
, 2000
Abstract

Cited by 5 (1 self)
We show that the class AM ∩ coAM is low for ZPP^NP. As a consequence, it follows that Graph Isomorphism and several group-theoretic problems are low for ZPP^NP. We also …
Results on ResourceBounded Measure
, 1997
Abstract

Cited by 3 (0 self)
We construct an oracle relative to which NP has p-measure 0 but Δ^p_2 has measure 1 in EXP. This gives a strong relativized negative answer to a question posed by Lutz [Lut96]. Secondly, we give strong evidence that BPP is small. We show that BPP has p-measure 0 unless EXP = MA and thus the polynomial-time hierarchy collapses. This contrasts with the work of Regan et al. [RSC95], where it is shown that P/poly does not have p-measure 0 if exponentially strong pseudorandom generators exist.

1 Introduction

Since the introduction of resource-bounded measure by Lutz [Lut92], many researchers have investigated the size (measure) of complexity classes in exponential time (EXP). A particular point of interest is the hypothesis that NP does not have p-measure 0. Recent results have shown that many reasonable conjectures in computational complexity theory follow from the hypothesis that NP is not small (i.e., μ_p(NP) ≠ 0), and hence it seems to be a plausible scientific hypothesis [LM96, Lut96 …