Results 1–10 of 16
Pseudorandomness and average-case complexity via uniform reductions
 In Proceedings of the 17th Annual IEEE Conference on Computational Complexity
, 2002
Cited by 55 (8 self)
Abstract. Impagliazzo and Wigderson (36th FOCS, 1998) gave the first construction of pseudorandom generators from a uniform complexity assumption on EXP (namely EXP ≠ BPP). Unlike results in the non-uniform setting, their result does not provide a continuous trade-off between worst-case hardness and pseudorandomness, nor does it explicitly establish an average-case hardness result. In this paper:
◦ We obtain an optimal worst-case to average-case connection for EXP: if EXP ⊄ BPTIME(t(n)), then EXP has problems that cannot be solved on a fraction 1/2 + 1/t′(n) of the inputs by BPTIME(t′(n)) algorithms, for t′ = t^Ω(1).
◦ We exhibit a PSPACE-complete self-correctible and downward self-reducible problem. This slightly simplifies and strengthens the proof of Impagliazzo and Wigderson, which used a #P-complete problem with these properties.
◦ We argue that the results of Impagliazzo and Wigderson, and the ones in this paper, cannot be proved via “black-box” uniform reductions.
Non-uniform ACC circuit lower bounds
, 2010
Cited by 36 (4 self)
The class ACC consists of circuit families with constant depth over unbounded fan-in AND, OR, NOT, and MODm gates, where m > 1 is an arbitrary constant. We prove:
• NTIME[2^n] does not have non-uniform ACC circuits of polynomial size. The size lower bound can be slightly strengthened to quasi-polynomials and other less natural functions.
• E^NP, the class of languages recognized in 2^O(n) time with an NP oracle, doesn’t have non-uniform ACC circuits of 2^{n^o(1)} size. The lower bound gives an exponential size-depth trade-off: for every d there is a δ > 0 such that E^NP doesn’t have depth-d ACC circuits of size 2^{n^δ}.
Previously, it was not known whether EXP^NP had depth-3 polynomial-size circuits made out of only MOD6 gates. The high-level strategy is to design faster algorithms for the circuit satisfiability problem over ACC circuits, then prove that such algorithms entail the above lower bounds. The algorithm combines known properties of ACC with fast rectangular matrix multiplication and dynamic programming, while the second step requires a subtle strengthening of the author’s prior work [STOC’10]. Supported by the Josef Raviv Memorial Fellowship.
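The abstract's high-level strategy turns satisfiability algorithms into lower bounds. As a point of reference, here is a minimal brute-force circuit-SAT sketch (the `circuit` callable is a hypothetical black box for illustration, not anything from the paper); the paper needs an ACC-SAT algorithm that beats this 2^n baseline by even a modest amount.

```python
from itertools import product

def circuit_sat(circuit, n):
    """Brute-force satisfiability check: evaluate the circuit on all 2^n inputs.

    The paper's strategy requires an algorithm for ACC circuits that runs
    noticeably faster than this exhaustive baseline; such a speedup is then
    converted into the NTIME[2^n] lower bound stated above.
    """
    return any(circuit(x) for x in product((0, 1), repeat=n))

# Toy circuits given as Python callables on 0/1 tuples (illustrative only).
or3 = lambda x: int(x[0] or x[1] or x[2])   # satisfiable
zero = lambda x: 0                          # unsatisfiable
```

Even shaving a sub-exponential factor off this 2^n loop for ACC circuits is enough to trigger the lower-bound machinery.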
Circuit Minimization Problem
 In ACM Symposium on Theory of Computing (STOC)
, 1999
Cited by 32 (4 self)
We study the complexity of the circuit minimization problem: given the truth table of a Boolean function f and a parameter s, decide whether f can be realized by a Boolean circuit of size at most s. We argue why this problem is unlikely to be in P (or even in P/poly) by giving a number of surprising consequences of such an assumption. We also argue that proving this problem to be NP-complete (if it is indeed true) would imply proving strong circuit lower bounds for the class E, which appears beyond the currently known techniques. Keywords: hard Boolean functions, derandomization, natural properties, NP-completeness. 1 Introduction. An n-variable Boolean function f_n : {0,1}^n → {0,1} can be given by either its truth table of size 2^n, or a Boolean circuit whose size may be significantly smaller than 2^n. It is well known that most Boolean functions on n variables have circuit complexity at least 2^n/n [Sha49], but so far no family of sufficiently hard functions has ...
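The 2^n/n bound quoted from [Sha49] comes from a counting argument: there are 2^(2^n) Boolean functions on n variables but far fewer small circuits. A rough numerical sketch of that argument; the circuit-counting bound (16(n+s)^2)^s is a deliberately crude overestimate chosen for simplicity, not the tight count from the literature:

```python
def counting_lower_bound(n):
    """Largest s for which circuits with s fan-in-2 gates provably cannot
    realize all 2^(2^n) Boolean functions on n variables.

    Crude overcount: each of the s gates chooses one of 16 binary operations
    and two predecessors among the n inputs and s gates, giving at most
    (16 * (n + s)^2)^s distinct circuits.
    """
    num_functions = 2 ** (2 ** n)
    s = 1
    while (16 * (n + s) ** 2) ** s < num_functions:
        s += 1
    return s - 1
```

Even this loose count already forces the minimum circuit size of some function to grow with n, in the spirit of (though weaker than) the 2^n/n bound.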
On Proving Circuit Lower Bounds Against the Polynomial-time Hierarchy: Positive and Negative Results
, 2008
Cited by 9 (4 self)
We consider the problem of proving circuit lower bounds against the polynomial-time hierarchy. We give both positive and negative results. On the positive side, for any fixed integer k > 0, we give an explicit Σ^p_2 language, acceptable by a Σ^p_2-machine with running time O(n^{k²+k}), that requires circuit size > n^k. This provides a constructive version of an existence theorem of Kannan [Kan82]. Our main theorem is on the negative side. We give evidence that it is infeasible to give relativizable proofs that any single language in the polynomial-time hierarchy requires superpolynomial circuit size. Our proof techniques are based on the decision-tree version of the Switching Lemma for constant-depth circuits and the Nisan–Wigderson pseudorandom generator.
Is P versus NP formally independent?
 Bulletin of the European Association for Theoretical Computer Science
, 2003
Cited by 8 (0 self)
I have moved back to the University of Chicago and so has the web page for this column. See above for the new URL and contact information. In this issue Scott Aaronson writes quite an interesting (and opinionated) column on whether the P = NP question is independent of the usual axiom systems. Enjoy!
Oracles are subtle but not malicious
 In Proc. IEEE Conference on Computational Complexity
, 2006
Cited by 6 (5 self)
Theoretical computer scientists have been debating the role of oracles since the 1970s. This paper illustrates both that oracles can give us nontrivial insights about the barrier problems in circuit complexity, and that they need not prevent us from trying to solve those problems. First, we give an oracle relative to which PP has linear-sized circuits, by proving a new lower bound for perceptrons and low-degree threshold polynomials. This oracle settles a long-standing open question, and generalizes earlier results due to Beigel and to Buhrman, Fortnow, and Thierauf. More importantly, it implies the first non-relativizing separation of “traditional” complexity classes, as opposed to interactive proof classes such as MIP and MA_EXP. For Vinodchandran showed, by a non-relativizing argument, that PP does not have circuits of size n^k for any fixed k. We present an alternative proof of this fact, which shows that PP does not even have quantum circuits of size n^k with quantum advice. To our knowledge, this is the first nontrivial lower bound on quantum circuit size. Second, we study a beautiful algorithm of Bshouty et al. for learning Boolean circuits in ZPP^NP. We show that the NP queries in this algorithm cannot be parallelized by any relativizing technique, by giving an oracle relative to which ZPP^NP — and even BPP^NP — have linear-size circuits. On the other hand, we also show that the NP queries could be parallelized if P = NP. Thus, classes such as ZPP^NP inhabit a “twilight zone,” where we need to distinguish between relativizing and black-box techniques. Our results on this subject have implications for computational learning theory as well as for the circuit minimization problem.
Geometric Complexity Theory V: Equivalence between black-box derandomization of polynomial identity testing and derandomization of Noether’s Normalization Lemma
Cited by 6 (0 self)
It is shown that black-box derandomization of polynomial identity testing (PIT) is essentially equivalent to derandomization of Noether’s Normalization Lemma for explicit algebraic varieties, the problem that lies at the heart of the foundational classification problem of algebraic geometry. Specifically: (1) It is shown that in characteristic zero, black-box derandomization of the symbolic trace identity testing (STIT) brings the problem of derandomizing Noether’s Normalization Lemma for the ring of invariants of the adjoint action of the general linear group on a tuple of matrices from EXPSPACE (where it is currently) to P. Next it is shown that assuming the Generalized Riemann Hypothesis (GRH), instead of the black-box derandomization hypothesis, brings the problem from EXPSPACE to quasi-PH, instead of P. Thus black-box derandomization ...
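For context on what “black-box derandomization of PIT” asks for: the standard randomized black-box test evaluates the unknown polynomial at random field points (Schwartz–Zippel); derandomizing means replacing those random points with a fixed, explicitly computable hitting set. A minimal randomized sketch, where the polynomial callable and the field choice are illustrative assumptions, not from the paper:

```python
import random

PRIME = (1 << 61) - 1  # a Mersenne prime, used here as the field size

def probably_zero(poly, nvars, trials=20):
    """Schwartz-Zippel black-box PIT: is `poly` identically zero mod PRIME?

    `poly` is queried only as a black box at points of F_PRIME^nvars.  If a
    nonzero poly has degree d, each random evaluation vanishes with
    probability at most d / PRIME, so the error shrinks rapidly with trials.
    Black-box derandomization, the subject of the abstract above, would
    replace these random points by a fixed explicit hitting set.
    """
    for _ in range(trials):
        point = [random.randrange(PRIME) for _ in range(nvars)]
        if poly(point) % PRIME != 0:
            return False
    return True
```

For example, (x+y)^2 - (x^2 + 2xy + y^2) passes the test, while (x+y)^2 - (x^2 + y^2) is caught at almost every random point.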
Derandomizing Arthur-Merlin games and approximate counting implies exponential-size lower bounds
 In Proceedings of the IEEE Conference on Computational Complexity
, 2010
Cited by 4 (0 self)
Abstract. We show that if Arthur-Merlin protocols can be derandomized, then there is a language computable in deterministic exponential time with access to an NP oracle that requires circuits of exponential size. More formally, if every promise problem in prAM, the class of promise problems that have Arthur-Merlin protocols, can be computed by a deterministic polynomial-time algorithm with access to an NP oracle, then there is a language in E^NP that requires circuits of size Ω(2^n/n). The lower bound in the conclusion of our theorem suffices to construct pseudorandom generators with exponential stretch. We also show that the same conclusion holds if the following two related problems can be computed in polynomial time with access to an NP oracle: (i) approximately counting the number of accepted inputs of a circuit, up to multiplicative factors; and (ii) recognizing an approximate lower bound on the number of accepted inputs of a circuit, up to multiplicative factors.
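Problem (i), approximately counting a circuit's accepted inputs, is straightforward with randomness; the paper asks what follows if a deterministic NP-oracle algorithm matches this. A hedged Monte Carlo sketch, with the `circuit` callable as an illustrative stand-in:

```python
import random

def approx_count(circuit, n, samples=20000):
    """Estimate |{x in {0,1}^n : circuit(x) = 1}| by random sampling.

    Standard Chernoff bounds make this a good multiplicative estimate
    whenever the acceptance probability is not too small.  The abstract
    above asks what follows if a deterministic poly-time algorithm with an
    NP oracle achieves a comparable multiplicative approximation.
    """
    hits = sum(circuit([random.randrange(2) for _ in range(n)])
               for _ in range(samples))
    return hits / samples * 2 ** n
```

For instance, the majority function on 3 bits accepts 4 of its 8 inputs, so the estimate concentrates near 4.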
Natural Proofs Versus Derandomization
Cited by 2 (0 self)
We study connections between Natural Proofs, derandomization, and the problem of proving “weak” circuit lower bounds such as NEXP ⊄ TC^0, which are still wide open. Natural Proofs have three properties: they are constructive (an efficient algorithm A is embedded in them), have largeness (A accepts a large fraction of strings), and are useful (A rejects all strings which are truth tables of small circuits). Strong circuit lower bounds that are “naturalizing” would contradict present cryptographic understanding, yet the vast majority of known circuit lower bound proofs are naturalizing. So it is imperative to understand how to pursue un-Natural Proofs. Some heuristic arguments say constructivity should be circumventable. Largeness is inherent in many proof techniques, and it is probably our presently weak techniques that yield constructivity. We prove:
• Constructivity is unavoidable, even for NEXP lower bounds. Informally, we prove for all “typical” non-uniform circuit classes C, NEXP ⊄ C if and only if there is a polynomial-time algorithm distinguishing some function from all functions computable by C-circuits. Hence NEXP ⊄ C is equivalent to exhibiting a constructive property useful against C.
• There are no P-natural properties useful against C if and only if randomized exponential time can be “derandomized” using truth tables of circuits from C as random seeds. Therefore the task of proving there are no P-natural properties is inherently a derandomization problem, weaker than but implied by the existence of strong pseudorandom functions.
These characterizations are applied to yield several new results. The two main applications are that NEXP ∩ coNEXP does not have n^{log n}-size ACC circuits, and a mild derandomization result for RP.
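To make the three properties concrete, here is a toy predicate on truth tables that is constructive and large but, deliberately, not useful against any interesting circuit class; the balance test is an illustrative choice of ours, not anything from the paper:

```python
def toy_property(tt):
    """A toy 'natural-style' property of a truth table tt (a list of 0/1 values).

    - Constructive: runs in time polynomial in the truth-table length.
    - Largeness: a uniformly random truth table is nearly balanced, so it
      passes this test with probability close to 1.
    - Usefulness would additionally require rejecting the truth table of
      every small circuit; this balance check makes no such guarantee,
      which is exactly why it proves no lower bound.
    """
    ones = sum(tt)
    return abs(ones - len(tt) // 2) <= len(tt) // 4

# The parity function is perfectly balanced, so it passes; the constant-0
# function is maximally unbalanced, so it fails.
```

A natural property in the paper's sense must satisfy all three conditions at once; the results above show the constructivity condition, at least, cannot be avoided for NEXP lower bounds.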
Some results on average-case hardness within the polynomial hierarchy
 In Proceedings of the 26th Conference on Foundations of Software Technology and Theoretical Computer Science
, 2006
Cited by 2 (0 self)
Abstract. We prove several results about the average-case complexity of problems in the Polynomial Hierarchy (PH). We give a connection among average-case, worst-case, and non-uniform complexity of optimization problems. Specifically, we show that if P^NP is hard in the worst case then it is either hard on average (in the sense of Levin) or it is non-uniformly hard (i.e. it does not have small circuits). Recently, Gutfreund, Shaltiel and Ta-Shma (IEEE Conference on Computational Complexity, 2005) showed an interesting worst-case to average-case connection for languages in NP, under a notion of average-case hardness defined using uniform adversaries. We show that extending their connection to hardness against quasi-polynomial time would imply that NEXP doesn’t have polynomial-size circuits. Finally we prove an unconditional average-case hardness result. We show that for each k, there is an explicit language in P^{Σ_2} which is hard on average for circuits of size n^k.