Results 1–10 of 20
Pseudorandomness from shrinkage
 In Proceedings of the Fifty-Third Annual IEEE Symposium on Foundations of Computer Science
, 2012
Abstract

Cited by 15 (1 self)
One powerful theme in complexity theory and pseudorandomness in the past few decades has been the use of lower bounds to give pseudorandom generators (PRGs). However, the general results using this hardness vs. randomness paradigm suffer a quantitative loss in parameters, and hence do not give nontrivial implications for models where we don’t know superpolynomial lower bounds but do know lower bounds of a fixed polynomial. We show that when such lower bounds are proved using random restrictions, we can construct PRGs which are essentially best possible without in turn improving the lower bounds. More specifically, say that a circuit family has shrinkage exponent Γ if a random restriction leaving a p fraction of variables unset shrinks the size of any circuit in the family by a factor of p^{Γ+o(1)}. Our PRG uses a seed of length s^{1/(Γ+1)+o(1)} to fool circuits in the family of size s. By using this generic construction, we get PRGs with polynomially small error for the following classes of circuits of size s, with the following seed lengths: (1) for de Morgan formulas, seed length s^{1/3+o(1)}; (2) for formulas over an arbitrary basis, seed length s^{1/2+o(1)}; (3) for read-once de Morgan formulas, seed length s^{0.234...}; (4) for branching programs of size s, seed length s^{1/2+o(1)}. The previous best PRGs known for these classes used seeds of length bigger than n/2 to output n bits, and worked only when the size s = O(n) [BPW11].
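As a quick arithmetic sanity check (not taken from the paper itself), the generic seed-length exponent 1/(Γ+1) can be evaluated for the shrinkage exponents quoted above, ignoring the o(1) terms; a minimal Python sketch:

```python
# Illustrative arithmetic only: evaluate the generic seed-length exponent
# 1/(Gamma + 1) for the shrinkage exponents quoted in the abstract
# (the o(1) terms are ignored).

shrinkage_exponent = {
    "de Morgan formulas": 2,                # Gamma = 2 -> seed length s^{1/3}
    "formulas over an arbitrary basis": 1,  # Gamma = 1 -> seed length s^{1/2}
    "branching programs": 1,                # Gamma = 1 -> seed length s^{1/2}
}

seed_exponent = {cls: 1 / (g + 1) for cls, g in shrinkage_exponent.items()}

for cls, e in seed_exponent.items():
    print(f"{cls}: seed length s^{e:.3f}")
```

This reproduces the exponents 1/3 and 1/2 listed for items (1), (2), and (4); the read-once case (3) uses a separate, sharper shrinkage exponent and is not covered by this arithmetic.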
The communication complexity of addition
, 2011
Abstract

Cited by 8 (3 self)
Suppose each of k ≤ n^{o(1)} players holds an n-bit number x_i in its hand. The players wish to determine if ∑_{i≤k} x_i = s. We give a public-coin protocol with error 1% and communication O(k lg k). The communication bound is independent of n, and for k ≥ 3 improves on the O(k lg n) bound by Nisan (Bolyai Soc. Math. Studies, 1993). Our protocol also applies to addition modulo m. In this case we give a matching (public-coin) Ω(k lg k) lower bound for various m. We also obtain some lower bounds over the integers, including Ω(k lg lg k) for protocols that are one-way, like ours. We give a protocol to determine if ∑ x_i > s with error 1% and communication O(k lg k) lg n. For k ≥ 3 this improves on Nisan’s O(k lg² n) bound. A similar improvement holds for computing degree-(k−1) polynomial-threshold functions in the number-on-forehead model. We give a (public-coin, 2-player, tight) Ω(lg n) lower bound to determine if x_1 > x_2. This improves on the Ω(√(lg n)) bound by Smirnov (1988).
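The paper's O(k lg k) protocol is more intricate; as a rough, hypothetical illustration of the public-coin fingerprinting idea such protocols build on, one can test ∑ x_i = s by having each player announce its input modulo a shared random prime. Each player then sends O(lg p) = O(lg n) bits, so this sketch only matches the older O(k lg n) regime, not the paper's bound (helper names are invented):

```python
import random

def _is_prime(m: int) -> bool:
    """Deterministic Miller-Rabin (valid for m < 3.3e24 with these bases)."""
    if m < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if m % p == 0:
            return m == p
    d, r = m - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in small:
        x = pow(a, d, m)
        if x in (1, m - 1):
            continue
        for _ in range(r - 1):
            x = x * x % m
            if x == m - 1:
                break
        else:
            return False
    return True

def _random_prime(bits: int) -> int:
    """Sample a random prime of the given bit length (the shared public coins)."""
    while True:
        m = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if _is_prime(m):
            return m

def sum_equals(xs, s, n, trials=5):
    """Toy public-coin test of sum(xs) == s for n-bit inputs: each player
    announces x_i mod p for a shared random prime p.  One-sided error:
    'False' is always correct; 'True' may err with tiny probability."""
    bits = 2 * n.bit_length() + 16  # prime pool far larger than the number
                                    # of prime factors of |sum(xs) - s|
    for _ in range(trials):
        p = _random_prime(bits)
        if (sum(x % p for x in xs) - s) % p != 0:
            return False  # a nonzero difference survived mod p
    return True
```

If the sum really differs from s, the nonzero difference has few prime factors, so a random prime from a much larger pool detects it with high probability per trial.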
Delegating Computation Reliably: Paradigms and Constructions
, 2009
Abstract

Cited by 7 (1 self)
In an emerging computing paradigm, computational capabilities, from processing power to storage capacities, are offered to users over communication networks as a service. This new paradigm holds enormous promise for increasing the utility of computationally weak devices. A natural approach is for weak devices to delegate expensive tasks, such as storing a large file or running a complex computation, to more powerful entities (say servers) connected to the same network. While the delegation approach seems promising, it raises an immediate concern: when and how can a weak device verify that a computational task was completed correctly? This practically motivated question touches on foundational questions in cryptography and complexity theory. The focus of this thesis is verifying the correctness of delegated computations. We construct efficient protocols (interactive proofs) for delegating computational tasks. In particular, we present: • A protocol for delegating any computation, where the work needed to verify the correctness of the output is linear in the input length, polynomial in the computation's
Natural Proofs Versus Derandomization
Abstract

Cited by 4 (1 self)
We study connections between Natural Proofs, derandomization, and the problem of proving “weak” circuit lower bounds such as NEXP ⊄ TC⁰, which are still wide open. Natural Proofs have three properties: they are constructive (an efficient algorithm A is embedded in them), have largeness (A accepts a large fraction of strings), and are useful (A rejects all strings which are truth tables of small circuits). Strong circuit lower bounds that are “naturalizing” would contradict present cryptographic understanding, yet the vast majority of known circuit lower bound proofs are naturalizing. So it is imperative to understand how to pursue un-Natural Proofs. Some heuristic arguments say constructivity should be circumventable. Largeness is inherent in many proof techniques, and it is probably our presently weak techniques that yield constructivity. We prove: • Constructivity is unavoidable, even for NEXP lower bounds. Informally, we prove that for all “typical” nonuniform circuit classes C, NEXP ⊄ C if and only if there is a polynomial-time algorithm distinguishing some function from all functions computable by C-circuits. Hence NEXP ⊄ C is equivalent to exhibiting a constructive property useful against C. • There are no P-natural properties useful against C if and only if randomized exponential time can be “derandomized” using truth tables of circuits from C as random seeds. Therefore the task of proving there are no P-natural properties is inherently a derandomization problem, weaker than but implied by the existence of strong pseudorandom functions. These characterizations are applied to yield several new results. The two main applications are that NEXP ∩ coNEXP does not have n^{log n}-size ACC circuits, and a mild derandomization result for RP.
The Complexity of Local List Decoding
Abstract

Cited by 4 (1 self)
We study the complexity of locally list-decoding binary error-correcting codes with good parameters (that are polynomially related to information-theoretic bounds). We show that computing majority over Θ(1/ǫ) bits is essentially equivalent to locally list-decoding binary codes from relative distance 1/2 − ǫ with list size at most poly(1/ǫ). That is, a local decoder for such a code can be used to construct a circuit of roughly the same size and depth that computes majority on Θ(1/ǫ) bits. On the other hand, there is an explicit locally list-decodable code with these parameters that has a very efficient (in terms of circuit size and depth) local decoder that uses majority gates of fan-in Θ(1/ǫ). Using known lower bounds for computing majority by constant-depth circuits, our results imply that every constant-depth decoder for such a code must have size almost exponential in 1/ǫ (this extends even to subexponential list sizes). This shows that the list-decoding radius of the constant-depth local list-decoders of Goldwasser et al. [STOC07] is essentially optimal. Using the tight connection between locally list-decodable codes and hardness amplification, we obtain similar limitations on the complexity of uniform (and even somewhat nonuniform) fully black-box worst-case to average-case reductions. Very recently, Shaltiel and Viola [SV08] independently obtained similar limitations for completely nonuniform fully black-box worst-case to average-case reductions, but only for the special case that the reduction is nonadaptive. Our results apply also to adaptive reductions.
On the size of depth-three Boolean circuits for computing multilinear functions
 Electronic Colloquium on Computational Complexity (ECCC)
Abstract

Cited by 3 (0 self)
We propose that multilinear functions of relatively low degree over GF(2) may be good candidates for obtaining exponential lower bounds on the size of constant-depth Boolean circuits (computing explicit functions). Specifically, we propose to move gradually from linear functions to multilinear ones, and conjecture that, for any t ≥ 2, some explicit t-linear functions F: ({0,1}^n)^t → {0,1} require depth-three circuits of size exp(Ω(t·n^{t/(t+1)})). Towards studying this conjecture, we suggest studying two frameworks for the design of depth-three Boolean circuits computing multilinear functions, yielding restricted models for which lower bounds may be easier to prove. Both correspond to constructing a circuit by expressing the target polynomial as a composition of simpler polynomials. The first framework corresponds to a direct composition, whereas the second (and stronger) framework corresponds to nested composition and yields depth-three Boolean circuits via a “guess-and-verify” paradigm. The corresponding restricted models of circuits are called D-canonical and ND-canonical, respectively. Our main results are (1) a generic upper bound on the size of depth-three D-canonical
Lower bounds on the query complexity of nonuniform and adaptive reductions showing hardness amplification
, 2012
Abstract

Cited by 3 (1 self)
Hardness amplification results show that for every Boolean function f there exists a Boolean function Amp(f) such that the following holds: if every circuit of size s computes f correctly on at most a 1 − δ fraction of inputs, then every circuit of size s′ computes Amp(f) correctly on at most a 1/2 + ϵ fraction of inputs. All hardness amplification results in the literature suffer from “size loss”, meaning that s′ ≤ ϵ · s. In this paper we show that proofs using “nonuniform reductions” must suffer from such size loss. To the best of our knowledge, all proofs in the literature are by nonuniform reductions. Our result is the first lower bound that applies to nonuniform reductions that are adaptive. A reduction is an oracle circuit R^{(·)} such that when given oracle access to any function D that computes Amp(f) correctly on a 1/2 + ϵ fraction of inputs, R^D computes f correctly on a 1 − δ fraction of inputs. A nonuniform reduction is allowed to also receive a short advice string that may depend on both f and D in an arbitrary way. The well-known connection between hardness amplification and list-decodable error-correcting codes implies that reductions showing hardness amplification cannot be uniform for δ, ϵ < 1/4. A reduction is nonadaptive if it makes nonadaptive queries to its oracle. Shaltiel and Viola (SICOMP 2010) showed lower bounds on the number of queries made by nonuniform
Cell-probe lower bounds for prefix sums
, 2009
Abstract

Cited by 2 (2 self)
We prove that to store n bits x ∈ {0,1}^n so that each prefix-sum (a.k.a. rank) query Sum(i) := ∑_{k≤i} x_k can be answered by nonadaptively probing q cells of lg n bits, one needs memory n + n/log^{O(q)} n. This matches a recent upper bound of n + n/log^{Ω(q)} n by Pătrașcu (FOCS 2008), which is also nonadaptive. We also obtain an n + n/log^{2^{O(q)}} n lower bound for storing a string of balanced brackets so that each Match(i) query can be answered by nonadaptively probing q cells. To obtain these bounds we show that a too-efficient data structure would allow us to break the correlations between query answers.
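For contrast with these lower bounds, the classical upper-bound idea behind such rank structures fits in a few lines: store the raw bits plus one precomputed running sum per block, so each Sum(i) query reads O(1) cells. This toy Python version is not Pătrașcu's structure; its redundancy is a whole word per block, far above the succinct regime the abstract discusses:

```python
class PrefixSum:
    """Rank/prefix-sum queries over a bit string with O(1) probes per query:
    the raw bits, plus a precomputed running sum per 64-bit block.
    (Illustrative sketch only; redundancy is one word per block, far above
    the n + small-redundancy bounds discussed in the abstract.)"""

    BLOCK = 64  # bits per block (one machine word in the cell-probe model)

    def __init__(self, bits):
        self.bits = list(bits)
        # block_sums[j] = number of 1-bits strictly before block j
        self.block_sums = [0]
        for j in range(0, len(self.bits), self.BLOCK):
            self.block_sums.append(
                self.block_sums[-1] + sum(self.bits[j:j + self.BLOCK]))

    def sum(self, i):
        """Return Sum(i) = bits[0] + ... + bits[i]."""
        j = (i + 1) // self.BLOCK                         # block containing the cut
        partial = sum(self.bits[j * self.BLOCK:i + 1])    # probe: one word of bits
        return self.block_sums[j] + partial               # probe: one block sum
```

Each query touches one stored block sum and one word of raw bits, illustrating why the interesting question, addressed by the paper, is how little extra memory beyond the n raw bits suffices for q probes.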
Query Complexity in Errorless Hardness Amplification
, 2010
Abstract

Cited by 2 (1 self)
An errorless circuit for a Boolean function is one that outputs the correct answer or “don’t know” on each input (and never outputs the wrong answer). The goal of errorless hardness amplification is to show that if f has no size-s errorless circuit that outputs “don’t know” on at most a δ fraction of inputs, then some f′ related to f has no size-s′ errorless circuit that outputs “don’t know” on at most a 1 − ǫ fraction of inputs. Thus the hardness is “amplified” from δ to 1 − ǫ. Unfortunately, this amplification comes at the cost of a loss in circuit size. This is because such results are proven by reductions which show that any size-s′ errorless circuit for f′ that outputs “don’t know” on at most a 1 − ǫ fraction of inputs could be used to construct a size-s errorless circuit for f that outputs “don’t know” on at most a δ fraction of inputs. If the reduction makes q queries to the hypothesized errorless circuit for f′, then plugging in a size-s′ circuit yields a circuit of size ≥ q·s′, and thus we must have s′ ≤ s/q. Hence it is desirable to keep the query complexity to a minimum. The first results on errorless hardness amplification were obtained by Bogdanov and Safra. They achieved query complexity O((1/δ · log(1/ǫ))² · 1