Results 1–10 of 37
Nonuniform ACC circuit lower bounds
, 2010
Abstract

Cited by 51 (8 self)
The class ACC consists of circuit families with constant depth over unbounded fan-in AND, OR, NOT, and MOD_m gates, where m > 1 is an arbitrary constant. We prove: • NTIME[2^n] does not have nonuniform ACC circuits of polynomial size. The size lower bound can be slightly strengthened to quasi-polynomials and other less natural functions. • E^NP, the class of languages recognized in 2^{O(n)} time with an NP oracle, doesn’t have nonuniform ACC circuits of 2^{n^{o(1)}} size. The lower bound gives an exponential size-depth tradeoff: for every d there is a δ > 0 such that E^NP doesn’t have depth-d ACC circuits of size 2^{n^δ}. Previously, it was not known whether EXP^NP had depth-3 polynomial-size circuits made out of only MOD6 gates. The high-level strategy is to design faster algorithms for the circuit satisfiability problem over ACC circuits, then prove that such algorithms entail the above lower bounds. The algorithm combines known properties of ACC with fast rectangular matrix multiplication and dynamic programming, while the second step requires a subtle strengthening of the author’s prior work [STOC’10]. Supported by the Josef Raviv Memorial Fellowship.
Pseudorandomness from shrinkage
 In Proceedings of the Fifty-Third Annual IEEE Symposium on Foundations of Computer Science
, 2012
Abstract

Cited by 15 (1 self)
One powerful theme in complexity theory and pseudorandomness in the past few decades has been the use of lower bounds to give pseudorandom generators (PRGs). However, the general results using this hardness-vs.-randomness paradigm suffer a quantitative loss in parameters, and hence do not give nontrivial implications for models where we don’t know superpolynomial lower bounds but do know lower bounds of a fixed polynomial. We show that when such lower bounds are proved using random restrictions, we can construct PRGs which are essentially best possible without in turn improving the lower bounds. More specifically, say that a circuit family has shrinkage exponent Γ if a random restriction leaving a p fraction of variables unset shrinks the size of any circuit in the family by a factor of p^{Γ+o(1)}. Our PRG uses a seed of length s^{1/(Γ+1)+o(1)} to fool circuits in the family of size s. By using this generic construction, we get PRGs with polynomially small error for the following classes of circuits of size s and with the following seed lengths: 1. For de Morgan formulas, seed length s^{1/3+o(1)}; 2. For formulas over an arbitrary basis, seed length s^{1/2+o(1)}; 3. For read-once de Morgan formulas, seed length s^{0.234...}; 4. For branching programs of size s, seed length s^{1/2+o(1)}. The previous best PRGs known for these classes used seeds of length bigger than n/2 to output n bits, and worked only when the size s = O(n) [BPW11].
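The shrinkage phenomenon the abstract relies on is easy to observe empirically. The sketch below is a toy illustration (not the paper's construction): it applies random restrictions, with each variable left unset with probability p, to a small AND/OR tree represented as nested tuples (a representation chosen here for convenience), and measures how the leaf count drops after simplification.

```python
# Toy demo of shrinkage under random restrictions (illustrative only; the
# tuple-based formula representation and parameters are assumptions, not
# taken from the paper).
import random

def simplify(f, rho):
    """Simplify a formula under a partial assignment rho: var index -> bool."""
    op = f[0]
    if op == 'var':
        return rho.get(f[1], f)            # fixed bit (bool) or still a leaf
    if op == 'not':
        s = simplify(f[1], rho)
        return (not s) if isinstance(s, bool) else ('not', s)
    a, b = simplify(f[1], rho), simplify(f[2], rho)
    if op == 'and':
        if a is False or b is False: return False
        if a is True: return b
        if b is True: return a
        return ('and', a, b)
    # op == 'or'
    if a is True or b is True: return True
    if a is False: return b
    if b is False: return a
    return ('or', a, b)

def leaves(f):
    if isinstance(f, bool): return 0
    if f[0] == 'var': return 1
    return sum(leaves(g) for g in f[1:])

def random_restriction(n, p, rng):
    # Each variable stays unset with probability p, else gets a random bit.
    return {i: (rng.random() < 0.5) for i in range(n) if rng.random() >= p}

def tree(lo, hi, level=0):
    # Balanced alternating AND/OR tree over variables lo..hi-1.
    if hi - lo == 1: return ('var', lo)
    mid = (lo + hi) // 2
    op = 'and' if level % 2 == 0 else 'or'
    return (op, tree(lo, mid, level + 1), tree(mid, hi, level + 1))

rng = random.Random(0)
f = tree(0, 16)
sizes = [leaves(simplify(f, random_restriction(16, 0.25, rng)))
         for _ in range(200)]
print(leaves(f), sum(sizes) / len(sizes))  # restricted formulas are far smaller
```

For de Morgan formulas the abstract's Γ is 2, so one would expect restricted size around p²·s on average, which this kind of experiment roughly reflects.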
A Satisfiability Algorithm for AC^0
, 2011
Abstract

Cited by 15 (2 self)
We consider the problem of efficiently enumerating the satisfying assignments to AC^0 circuits. We give a zero-error randomized algorithm which takes an AC^0 circuit as input and constructs a set of restrictions which partitions {0,1}^n so that under each restriction the value of the circuit is constant. Let d denote the depth of the circuit and cn denote the number of gates. This algorithm runs in time |C| · 2^{n(1−µ_{c,d})}, where |C| is the size of the circuit, for µ_{c,d} ≥ 1/O(lg c + d lg d)^{d−1}, with probability at least 1 − 2^{−n}. As a result, we get improved exponential-time algorithms for AC^0 circuit satisfiability and for counting solutions. In addition, we get an improved bound on the correlation of AC^0 circuits with parity. As an important component of our analysis, we extend the Håstad Switching Lemma to handle multiple k-CNFs and k-DNFs.
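The core idea of partitioning {0,1}^n into restrictions under which the circuit is constant can be sketched on a depth-2 special case. The toy below (not the paper's algorithm, which handles general AC^0 circuits with much better bounds) branches on variables of a small DNF until every branch is constant; each leaf is a restriction, and counting solutions is then a sum of 2^{n−|ρ|} over the 1-leaves.

```python
# Toy sketch: partition {0,1}^n into restrictions making a DNF constant,
# then count satisfying assignments. Terms are dicts: var index -> required bit.

def restrict_term(term, rho):
    """Return the term's unset literals under rho, or None if falsified."""
    lits = {}
    for v, b in term.items():
        if v in rho:
            if rho[v] != b:
                return None          # a literal is falsified: term is killed
        else:
            lits[v] = b
    return lits                      # {} means the term is fully satisfied

def count_sat(dnf, n, rho=None):
    rho = rho or {}
    live = [t for t in (restrict_term(t, rho) for t in dnf) if t is not None]
    if any(len(t) == 0 for t in live):
        return 2 ** (n - len(rho))   # formula is constant 1 under rho
    if not live:
        return 0                     # formula is constant 0 under rho
    v = next(iter(live[0]))          # branch on some unset variable
    return (count_sat(dnf, n, {**rho, v: False})
            + count_sat(dnf, n, {**rho, v: True}))

# (x0 AND x1) OR (NOT x2) over 3 variables has 5 satisfying assignments
dnf = [{0: True, 1: True}, {2: False}]
print(count_sat(dnf, 3))  # 5
```

In the worst case this brute-force branching takes 2^n steps; the abstract's point is that for AC^0 circuits the partition can be built with a nontrivial savings factor 2^{−µ_{c,d} n}.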
Fighting perebor: new and improved algorithms for formula and QBF satisfiability
, 2010
Mining circuit lower bound proofs for meta-algorithms
, 2013
Abstract

Cited by 6 (0 self)
We show that circuit lower bound proofs based on the method of random restrictions yield nontrivial compression algorithms for “easy” Boolean functions from the corresponding circuit classes. The compression problem is defined as follows: given the truth table of an n-variate Boolean function f computable by some unknown small circuit from a known class of circuits, find in deterministic time poly(2^n) a circuit C (no restriction on the type of C) computing f so that the size of C is less than the trivial circuit size 2^n/n. We get nontrivial compression for functions computable by AC^0 circuits, (de Morgan) formulas, and (read-once) branching programs of the size for which the lower bounds for the corresponding circuit class are known. These compression algorithms rely on the structural characterizations of “easy” functions, which are useful both for proving circuit lower bounds and for designing “meta-algorithms” (such as Circuit-SAT). For (de Morgan) formulas, such structural characterization is provided by the “shrinkage under random restrictions” results [Sub61, Hås98], strengthened to the “high-probability” version by [San10, IMZ12, KR13]. We give a new, simple proof of the “high-probability” version of the shrinkage result for (de Morgan) formulas, with improved parameters. We use this shrinkage result to get both compression and #SAT algorithms for (de Morgan) formulas of size about n^2. We also use this shrinkage result to get an alternative proof of the recent result by Komargodski and Raz [KR13] of the average-case lower bound against small (de Morgan) formulas. Finally, we show that the existence of any nontrivial compression algorithm for a circuit class C ⊆ P/poly would imply the circuit lower bound NEXP ⊄ C; a similar implication is independently proved also by Williams [Wil13].
This complements Williams’s result [Wil10] that any nontrivial Circuit-SAT algorithm for a circuit class C would imply a superpolynomial lower bound against C for a language in NEXP.
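A minimal sketch of the compression task, under a strong simplifying assumption that is not from the paper: if f happens to depend on only a few of its n variables (a junta), then detecting the relevant variables from the truth table and emitting the sub-table on those variables already beats the trivial 2^n-bit representation. The paper's algorithms handle far richer classes via structural characterizations, not this junta special case.

```python
# Hypothetical toy compressor (assumption: f is a junta). Input is the full
# truth table tt of f on n variables, indexed by the integer assignment.

def relevant_vars(tt, n):
    """A variable i is relevant if flipping it changes f on some input."""
    return [i for i in range(n)
            if any(tt[x] != tt[x ^ (1 << i)] for x in range(2 ** n))]

def compress(tt, n):
    rel = relevant_vars(tt, n)
    sub = []
    for y in range(2 ** len(rel)):       # truth table on relevant vars only
        x = 0
        for j, i in enumerate(rel):
            if (y >> j) & 1:
                x |= 1 << i              # irrelevant variables pinned to 0
        sub.append(tt[x])
    return rel, sub                      # 2^|rel| bits instead of 2^n

# f(x) = x0 XOR x2 on 3 variables
tt = [((x >> 0) & 1) ^ ((x >> 2) & 1) for x in range(8)]
print(compress(tt, 3))  # ([0, 2], [0, 1, 1, 0])
```

The interesting regime in the abstract is exactly the opposite one: compressing functions that depend on all n variables but are computed by small circuits from a restricted class.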
Sub-Linear Root Detection, and New Hardness Results, for Sparse Polynomials Over Finite Fields
, 2013
Abstract

Cited by 6 (2 self)
We present a deterministic 2^{O(t)} · q^{(t−2)/(t−1)+o(1)} algorithm to decide whether a univariate polynomial f, with exactly t monomial terms and degree < q, has a root in F_q. Our method is the first with complexity sublinear in q when t is fixed. We also prove a structural property for the nonzero roots in F_q of any t-nomial: the nonzero roots always admit a partition into no more than 2√(t−1) · (q−1)^{(t−2)/(t−1)} cosets of two subgroups S_1 ⊆ S_2 of F_q^*. This can be thought of as a finite field analogue of Descartes’ Rule. A corollary of our results is the first deterministic sublinear algorithm for detecting common degree-one factors of k-tuples of t-nomials in F_q[x] when k and t are fixed. When t is not fixed we show that, for p prime, detecting roots in F_p for f is NP-hard with respect to BPP-reductions. Finally, we prove that if the complexity of root detection is sublinear (in a refined sense), relative to the straight-line program encoding, then NEXP ⊆ P/poly.
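For contrast with the sublinear bound above, the obvious baseline simply tries every field element, costing Θ(q) evaluations regardless of t. A minimal sketch (the (coefficient, exponent) representation is our own convention, not the paper's):

```python
# Brute-force root detection for a sparse polynomial over F_p: the trivial
# O(p)-time baseline that the paper's 2^{O(t)} q^{(t-2)/(t-1)+o(1)} algorithm
# beats for fixed t.

def has_root_mod_p(terms, p):
    """terms: list of (coeff, exponent) pairs defining f(x) = sum c * x^e."""
    return any(sum(c * pow(x, e, p) for c, e in terms) % p == 0
               for x in range(p))

# f(x) = x^3 - 2 over F_5: 3^3 = 27 ≡ 2 (mod 5), so x = 3 is a root
print(has_root_mod_p([(1, 3), (-2, 0)], 5))  # True
```

Each evaluation uses modular exponentiation, so the baseline runs in roughly q · t · log(deg f) field operations; the paper's contribution is removing the linear dependence on q for fixed t.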
Approximating AC^0 by small-height decision trees and a deterministic algorithm for #AC^0-SAT
 In Proceedings of the Twenty-Seventh Annual IEEE Conference on Computational Complexity
, 2012
Abstract

Cited by 5 (0 self)
We show how to approximate any function in AC^0 by decision trees of much smaller height than its number of variables. More precisely, we show that any function in n variables computable by an unbounded fan-in circuit of AND, OR, and NOT gates that has size S and depth d can be approximated by a decision tree of height n − βn to within error exp(−βn), where β = β(S, d) = 2^{−O(d log^{4/5} S)}. Our proof is constructive, and we use its constructivity to derive a deterministic algorithm for #AC^0-SAT with multiplicative-factor savings of 2^{−Ω(βn)} over the naive 2^n · S algorithm, when applied to any n-input AC^0 circuit of size S and depth d. Indeed, in the same running time we can deterministically construct a decision tree of size at most 2^{n−βn} that exactly computes the function given by such a circuit. Recently, Impagliazzo, Matthews, and Paturi derived an algorithm for #AC^0-SAT with greater savings over the naive algorithm, but their algorithm is only randomized rather than deterministic. The main technical result we prove to show the above is that for every family F of k-DNF formulas in n variables and every 1 < C = C(n) ≤ log^{poly(k)} |F|, one can construct a distribution on restrictions that each set at most n/C variables such that, except with probability at most 2^{−n/(2^{O(k)} C log |F|)}, after application of the restriction all formulas in F simultaneously reduce to log^{poly(k)} |F|-juntas, where an s-junta is a function whose value depends on only s of its inputs. Previously, Ajtai showed simultaneous approximations for k-DNF formulas by juntas related to the one we show, but with a dependence on exp(k) rather than poly(k), resulting in a weaker height-approximation tradeoff than ours.
Robust simulations and significant separations
, 2010
Abstract

Cited by 5 (0 self)
We define and study a new notion of “robust simulations” between complexity classes, which is intermediate between the traditional notions of infinitely-often and almost-everywhere, as well as a corresponding notion of “significant separations”. A language L has a robust simulation in a complexity class C if there is a language in C which agrees with L on arbitrarily large polynomial stretches of input lengths. There is a significant separation of L from C if there is no robust simulation of L in C. The new notion of simulation is a cleaner and more natural notion of simulation than the infinitely-often notion. We show that various implications in complexity theory, such as the collapse of PH if NP = P and the Karp-Lipton theorem, have analogues for robust simulations. We then use these results to prove that most known separations in complexity theory, such as hierarchy theorems, fixed-polynomial circuit lower bounds, time-space tradeoffs, and the recent theorem of Williams, can be strengthened to significant separations, though in each case an almost-everywhere separation is unknown. Proving our results requires several new ideas, including a completely different proof of the ...
A satisfiability algorithm for sparse depth-two threshold circuits
 In Proceedings of the 54th Annual Symposium on the Foundations of Computer Science (FOCS 2013)
, 2013
Natural Proofs Versus Derandomization
Abstract

Cited by 4 (1 self)
We study connections between Natural Proofs, derandomization, and the problem of proving “weak” circuit lower bounds such as NEXP ⊄ TC^0, which are still wide open. Natural Proofs have three properties: they are constructive (an efficient algorithm A is embedded in them), have largeness (A accepts a large fraction of strings), and are useful (A rejects all strings which are truth tables of small circuits). Strong circuit lower bounds that are “naturalizing” would contradict present cryptographic understanding, yet the vast majority of known circuit lower bound proofs are naturalizing. So it is imperative to understand how to pursue un-Natural Proofs. Some heuristic arguments say constructivity should be circumventable. Largeness is inherent in many proof techniques, and it is probably our presently weak techniques that yield constructivity. We prove: • Constructivity is unavoidable, even for NEXP lower bounds. Informally, we prove for all “typical” nonuniform circuit classes C, NEXP ⊄ C if and only if there is a polynomial-time algorithm distinguishing some function from all functions computable by C-circuits. Hence NEXP ⊄ C is equivalent to exhibiting a constructive property useful against C. • There are no P-natural properties useful against C if and only if randomized exponential time can be “derandomized” using truth tables of circuits from C as random seeds. Therefore the task of proving there are no P-natural properties is inherently a derandomization problem, weaker than but implied by the existence of strong pseudorandom functions. These characterizations are applied to yield several new results. The two main applications are that NEXP ∩ coNEXP does not have n^{log n}-size ACC circuits, and a mild derandomization result for RP.