Results 1–10 of 12
Cracks in the Defenses: Scouting Out Approaches on Circuit Lower Bounds
Abstract

Cited by 4 (1 self)
Razborov and Rudich identified an imposing barrier that stands in the way of progress toward the goal of proving superpolynomial lower bounds on circuit size. Their work on "natural proofs" applies to a large class of arguments that have been used in complexity theory, and shows that no such argument can prove that a problem requires circuits of superpolynomial size, even for some very restricted classes of circuits (under reasonable cryptographic assumptions). This barrier is so daunting that some researchers have decided to focus their attention elsewhere. Yet the goal of proving circuit lower bounds is of such importance that some in the community have proposed concrete strategies for surmounting the obstacle. This lecture will discuss some of these strategies, and will dwell at length on a recent approach proposed by Michal Koucký and the author.
A new characterization of ACC^0 and probabilistic CC^0
Abstract

Cited by 2 (0 self)
It has been conjectured that the Boolean AND function cannot be computed by polynomial-size constant-depth circuits built from modular counting gates, i.e., by CC^0 circuits. In this work we show that the AND function can be computed by uniform probabilistic CC^0 circuits that use only O(log n) random bits. This may be viewed as evidence contrary to the conjecture. As a consequence of our construction we get that all of ACC^0 can be computed by probabilistic CC^0 circuits that use only O(log n) random bits. Thus, if one were able to derandomize such circuits, we would obtain a collapse of circuit classes giving ACC^0 = CC^0. We present a derandomization of probabilistic CC^0 circuits using AND and OR gates to obtain ACC^0 = AND ◦ OR ◦ CC^0 = OR ◦ AND ◦ CC^0. AND and OR gates of sublinear fan-in suffice. Both these results hold for uniform as well as nonuniform circuit classes. For nonuniform circuits we obtain the stronger conclusion that ACC^0 = rand-ACC^0 = rand-CC^0 = rand(log n)-CC^0, i.e., probabilistic ACC^0 circuits can be simulated by probabilistic CC^0 circuits using only O(log n) random bits. As an application of our results we obtain a characterization of ACC^0 by constant-width planar nondeterministic branching programs, improving a previous characterization for the quasipolynomial-size setting.
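For readers unfamiliar with the model: CC^0 circuits are built entirely from MOD_m counting gates. A minimal sketch of the gate itself in Python (conventions vary; this sketch uses the "count of ones divisible by m" convention, and the function name is ours, not the paper's):

```python
def mod_gate(m, inputs):
    """MOD_m gate: outputs 1 iff the number of 1-inputs is divisible by m."""
    return int(sum(inputs) % m == 0)

# With this convention, a single MOD_2 gate computes the complement of parity:
print(mod_gate(2, [1, 1, 0]))  # 1 (two ones, divisible by 2)
print(mod_gate(2, [1, 0, 0]))  # 0 (one one)
```

The conjecture discussed in the abstract is that no polynomial-size, constant-depth composition of such gates computes the n-bit AND.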
Uniform Derandomization from Pathetic Lower Bounds
, 2009
Abstract

Cited by 2 (2 self)
A recurring theme in the literature on derandomization is that probabilistic algorithms can be simulated quickly by deterministic algorithms, if one can obtain impressive (i.e., superpolynomial, or even nearly exponential) circuit size lower bounds for certain problems. In contrast to what is needed for derandomization, existing lower bounds seem rather pathetic (linear-size lower bounds for general circuits [IM02], nearly cubic lower bounds for formula size [Hås98], nearly n log log n size lower bounds for branching programs [BSSV03], n^{1+c_d} for depth-d threshold circuits [IPS97]). Here, we present two instances where "pathetic" lower bounds of the form n^{1+ε} would suffice to derandomize interesting classes of probabilistic algorithms. We show:
• If the word problem over S5 requires constant-depth threshold circuits of size n^{1+ε} for some ε > 0, then any language accepted by uniform polynomial-size probabilistic threshold circuits is accepted by a uniform family of deterministic constant-depth threshold circuits of subexponential size.
• If there are no constant-depth arithmetic circuits of size n^{1+ε} for the problem of multiplying a sequence of n 3-by-3 matrices, then for every constant d, black-box identity testing for depth-d arithmetic circuits with bounded individual degree can be performed by a uniform family of deterministic constant-depth AC^0 circuits of subexponential size.
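The word problem over S5 in the first bullet asks whether a given sequence of permutations of a five-element set multiplies out to the identity. A minimal Python sketch of the problem itself (not of any lower-bound argument; the helper names are ours):

```python
def compose(p, q):
    """Compose permutations given as tuples: (p ∘ q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def word_problem_s5(word):
    """Return True iff the product of the given permutations of {0,...,4}
    is the identity permutation."""
    acc = tuple(range(5))
    for p in word:
        acc = compose(acc, p)
    return acc == tuple(range(5))

swap01 = (1, 0, 2, 3, 4)  # the transposition exchanging 0 and 1
print(word_problem_s5([swap01, swap01]))  # True: a transposition squared is the identity
print(word_problem_s5([swap01]))          # False
```

By Barrington's theorem this problem is complete for NC^1 under suitable reductions, which is what makes a lower bound for it so consequential.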
A Status Report on the P versus NP Question
Abstract

Cited by 1 (1 self)
We survey some of the history of the most famous open question in computing: the P versus NP question. We summarize some of the progress that has been made to date, and assess the current situation.
Amplifying Circuit Lower Bounds Against Polynomial Time With Applications
 In IEEE Conference on Computational Complexity
Abstract

Cited by 1 (0 self)
We give a self-reduction for the Circuit Evaluation problem (CircEval), and prove the following consequences.
• Amplifying Size-Depth Lower Bounds. If CircEval has Boolean circuits of n^k size and n^{1-δ} depth for some k and δ, then for every ε > 0, there is a δ′ > 0 such that CircEval has circuits of n^{1+ε} size and n^{1-δ′} depth. Moreover, the resulting circuits require only Õ(n^ε) bits of nonuniformity to construct. As a consequence, strong enough depth lower bounds for Circuit Evaluation imply a full separation of P and NC (even with a weak size lower bound).
• Lower Bounds for Quantified Boolean Formulas. Let c, d > 1 and e < 1 satisfy c < (1 − e + d)/d. Either the problem of recognizing valid quantified Boolean formulas (QBF) is not solvable in TIME[n^c], or the Circuit Evaluation problem cannot be solved with circuits of n^d size and n^e depth. This implies unconditional polynomial-time uniform circuit lower bounds for solving QBF. We also prove that QBF does not have n^c-time uniform NC circuits, for all c < 2.
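The Circuit Evaluation problem at the center of this self-reduction takes a description of a Boolean circuit together with an input assignment and outputs the circuit's value. A minimal Python sketch under an invented encoding (the real problem fixes a particular encoding; ours is chosen only for illustration):

```python
def circ_eval(gates, inputs):
    """Evaluate a Boolean circuit given in topological order.
    gates: list of (op, operand indices). Indices 0..len(inputs)-1 name
    the input wires; later indices name earlier gates in order.
    The last gate is the circuit's output."""
    vals = list(inputs)
    for op, args in gates:
        if op == "AND":
            vals.append(vals[args[0]] & vals[args[1]])
        elif op == "OR":
            vals.append(vals[args[0]] | vals[args[1]])
        elif op == "NOT":
            vals.append(1 - vals[args[0]])
    return vals[-1]

# The circuit (x0 AND x1) OR (NOT x2): gate 3 = AND, gate 4 = NOT, gate 5 = OR.
gates = [("AND", (0, 1)), ("NOT", (2,)), ("OR", (3, 4))]
print(circ_eval(gates, [0, 1, 1]))  # 0
print(circ_eval(gates, [1, 1, 1]))  # 1
```

CircEval is P-complete, which is why size-depth lower bounds for it bear directly on the P versus NC question mentioned in the first bullet.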
New Surprises from Self-Reducibility
Abstract
Abstract. Self-reducibility continues to give us new angles on attacking some of the fundamental questions about computation and complexity.
unknown title
, 2012
Abstract
The notion of probabilistic computation dates back at least to Turing, who also wrestled with the practical problems of how to implement probabilistic algorithms on machines with, at best, very limited access to randomness. A more recent line of research, known as derandomization, studies the extent to which randomness is superfluous. A recurring theme in the literature on derandomization is that probabilistic algorithms can be simulated quickly by deterministic algorithms, if one can obtain impressive (i.e., superpolynomial, or even nearly exponential) circuit size lower bounds for certain problems. In contrast to what is needed for derandomization, existing lower bounds seem rather pathetic (linear-size lower bounds for general circuits [IM02], nearly cubic lower bounds for formula size [Hås98], nearly quadratic size lower bounds for branching programs [Nec66], n^{1+c_d} for depth-d threshold circuits [IPS97]). Here, we present two instances where "pathetic" lower bounds of the form n^{1+ε} would suffice to derandomize interesting classes of probabilistic algorithms. We show:
• If the word problem over S5 requires constant-depth threshold circuits of size n^{1+ε} for some ε > 0, then any language accepted by uniform polynomial-size probabilistic threshold circuits can be solved in subexponential time (and more strongly, can be accepted by a uniform family of deterministic constant-depth ...
Local reductions
, 2013
Abstract
We reduce nondeterministic time T ≥ 2^n to a 3-SAT instance φ of size |φ| = T · log^{O(1)} T such that there is an explicit circuit C that, on input an index i of log |φ| bits, outputs the i-th clause, and each output bit of C depends on O(1) input bits. The previous best result was C in NC^1. Even in the simpler setting of |φ| = poly(T) the previous best result was C in AC^0. More generally, for any time T ≥ n and parameter r ≤ n we obtain log_2 |φ| = max(log T, n/r) + O(log n) + O(log log T), and each output bit of C is a decision tree of depth O(log r). As an application, we simplify the proof of Williams' ACC^0 lower bound, and tighten his connection between satisfiability algorithms and lower bounds.
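To make "each output bit of C depends on O(1) input bits" concrete, here is a toy clause-indexer in Python for an invented, trivially structured 3-SAT family, where every bit of the i-th clause's description is a copy of a single bit of the index i (the paper's actual construction encodes nondeterministic computations and is far more involved):

```python
def clause(i, n_bits):
    """Toy 'local' clause indexer for the trivial formula
    AND over i of (x_i OR NOT x_i OR x_i).
    Each clause is described as three (negation flag, variable-index bits)
    literals; every output bit below copies exactly one bit of i, the
    extreme case of O(1) locality."""
    bits = [(i >> b) & 1 for b in range(n_bits)]  # little-endian bits of i
    return [(0, bits), (1, bits), (0, bits)]

print(clause(5, 4))  # [(0, [1, 0, 1, 0]), (1, [1, 0, 1, 0]), (0, [1, 0, 1, 0])]
```

The point of the result is that even for a 3-SAT instance encoding an arbitrary nondeterministic computation, clause descriptions can be made almost this local.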
Parallel computation using active self-assembly
Abstract
Abstract. We study the computational complexity of the recently proposed nubots model of molecular-scale self-assembly. The model generalizes the asynchronous cellular automaton to have non-local movement, where large assemblies of molecules can be moved around, analogous to millions of molecular motors in animal muscle effecting the rapid movement of large arms and legs. We show that nubots is capable of simulating Boolean circuits of polylogarithmic depth and polynomial size in only polylogarithmic expected time. In computational complexity terms, any problem from the complexity class NC is solved in polylogarithmic expected time on nubots that use a polynomial amount of workspace. Along the way, we give fast parallel algorithms for a number of problems including line growth, sorting, Boolean matrix multiplication, and space-bounded Turing machine simulation, all using a constant number of nubot states (monomer types). Circuit depth is a well-studied notion of parallel time, and our result implies that nubots is a highly parallel model of computation in a formal sense. Thus, adding a movement primitive to an asynchronous nondeterministic cellular automaton, as in nubots, drastically increases its parallel processing abilities.