Results 1–10 of 74
Pseudorandom generators without the XOR Lemma
, 1998
Abstract

Cited by 126 (21 self)
Madhu Sudan, Luca Trevisan, Salil Vadhan. Impagliazzo and Wigderson [IW97] have recently shown that if there exists a decision problem solvable in time 2^{O(n)} and having circuit complexity 2^{Ω(n)} (for all but finitely many n) then P = BPP. This result is a culmination of a series of works showing connections between the existence of hard predicates and the existence of good pseudorandom generators. The construction of Impagliazzo and Wigderson goes through three phases of "hardness amplification" (a multivariate polynomial encoding, a first derandomized XOR Lemma, and a second derandomized XOR Lemma) that are composed with the Nisan–Wigderson [NW94] generator. In this paper we present two different approaches to proving the main result of Impagliazzo and Wigderson. In developing each approach, we introduce new techniques and prove new results that could be useful in future improvements and/or applications of hardness–randomness tradeoffs. Our first result is that when (a modified version of) the Nisan–Wigderson generator construction is applied with a "mildly" hard predicate, the result is a generator that produces a distribution indistinguishable from having large min-entropy. An extractor can then be used to produce a distribution computationally indistinguishable from uniform. This is the first construction of a pseudorandom generator that works with a mildly hard predicate without doing hardness amplification. We then show that in the Impagliazzo–Wigderson construction only the first hardness-amplification phase (encoding with a multivariate polynomial) is necessary, since it already gives the required average-case hardness. We prove this result by (i) establishing a connection between the hardness-amplification problem and a list-decoding...
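The Nisan–Wigderson paradigm that this abstract builds on (each output bit is one predicate evaluated on an overlapping subset of the seed) can be sketched as a toy. Everything here is illustrative: `xor` is a stand-in predicate, not a hard function, and the three hand-picked subsets are only a hypothetical miniature of a real combinatorial design with small pairwise intersections.

```python
def nw_generator(seed_bits, subsets, f):
    """Evaluate predicate f on each designated subset of seed positions."""
    return [f([seed_bits[i] for i in s]) for s in subsets]

def xor(bits):
    """Stand-in predicate; real NW needs f hard for small circuits."""
    return sum(bits) % 2

seed = [1, 0, 1, 1]
# Toy "design": 3-element subsets of {0..3}; real designs keep pairwise
# intersections small relative to the subset size.
design = [(0, 1, 2), (0, 1, 3), (0, 2, 3)]
out = nw_generator(seed, design, xor)
print(out)  # [0, 0, 1]
```

The point of the construction is that a 4-bit seed already yields more output bits than it contains; hardness of f is what makes the stretched output look random to bounded distinguishers.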
Simple Extractors for All Min-Entropies and a New Pseudo-Random Generator
 Journal of the ACM
, 2001
Abstract

Cited by 107 (30 self)
A “randomness extractor” is an algorithm that, given a sample from a distribution with sufficiently high min-entropy and a short random seed, produces an output that is statistically indistinguishable from uniform. (Min-entropy is a measure of the amount of randomness in a distribution.) We present a simple, self-contained extractor construction that produces good extractors for all min-entropies. Our construction is algebraic and builds on a new polynomial-based approach introduced by Ta-Shma, Zuckerman, and Safra [TSZS01]. Using our improvements, we obtain, for example, an extractor with output length m = k/(log n)^{O(1/α)} and seed length (1 + α) log n for an arbitrary 0 < α ≤ 1, where n is the input length and k is the min-entropy of the input distribution. A “pseudo-random generator” is an algorithm that, given a short random seed, produces a long output that is computationally indistinguishable from uniform. Our technique also gives a new way to construct pseudo-random generators from functions that require large circuits. Our pseudo-random generator construction is not based on the Nisan–Wigderson generator [NW94], and turns worst-case hardness directly into pseudorandomness. The parameters of our generator match those in [IW97, STV01] and in particular are strong enough to obtain a new proof that P = BPP if E requires exponential-size circuits.
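Min-entropy, the measure invoked in this abstract, has a one-line definition: H_min(X) = −log₂(max_x Pr[X = x]). A minimal sketch, with illustrative example distributions (not from the paper):

```python
import math

def min_entropy(dist):
    """H_min(X) = -log2(max_x Pr[X = x]): the worst-case randomness of X."""
    return -math.log2(max(dist.values()))

uniform = {x: 1 / 4 for x in range(4)}         # 4 equally likely outcomes
skewed = {"a": 1 / 2, "b": 1 / 4, "c": 1 / 4}  # one outcome has mass 1/2

print(min_entropy(uniform))  # 2.0 bits
print(min_entropy(skewed))   # 1.0 bit: set by the heaviest outcome alone
```

Note that min-entropy, unlike Shannon entropy, is determined entirely by the single most likely outcome, which is why it is the right guarantee for extraction: an extractor must work even against the worst-case concentration of probability mass.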
Extractors and Pseudorandom Generators
 Journal of the ACM
, 1999
Abstract

Cited by 87 (5 self)
We introduce a new approach to constructing extractors. Extractors are algorithms that transform a "weakly random" distribution into an almost uniform distribution. Explicit constructions of extractors have a variety of important applications, and tend to be very difficult to obtain.
Extracting randomness from samplable distributions
 In Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science
, 2000
Abstract

Cited by 55 (8 self)
The standard notion of a randomness extractor is a procedure which converts any weak source of randomness into an almost uniform distribution. The conversion necessarily uses a small amount of pure randomness, which can be eliminated by complete enumeration in some, but not all, applications. Here, we consider the problem of deterministically converting a weak source of randomness into an almost uniform distribution. Previously, deterministic extraction procedures were known only for sources satisfying strong independence requirements. In this paper, we look at sources which are samplable, i.e. can be generated by an efficient sampling algorithm. We seek an efficient deterministic procedure that, given a sample from any samplable distribution of sufficiently large min-entropy, gives an almost uniformly distributed output. We explore the conditions under which such deterministic extractors exist. We observe that no deterministic extractor exists if the sampler is allowed to use more computational resources than the extractor. On the other hand, if the extractor is allowed (polynomially) more resources than the sampler, we show that deterministic extraction becomes possible. This is true unconditionally in the non-uniform setting (i.e., when the extractor can be computed by a small circuit), and (necessarily) relies on complexity assumptions in the uniform setting. One of our uniform constructions is as follows: assuming that there are problems in E = TIME(2^{O(n)}) that are not solvable by subexponential-size circuits with Σ₅ gates, there is an efficient extractor that transforms any samplable distribution of length n and min-entropy (1 − γ)n into an output distribution of length (1 − O(γ))n, where γ is any sufficiently small constant. The running time of the extractor is polynomial in n and the circuit complexity of the sampler. These extractors are based on a connection be...
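The observation that no deterministic extractor survives a sampler with more resources than the extractor can be made concrete in a toy sketch. The `parity` "extractor" and the rejection sampler below are hypothetical illustrations, not the paper's constructions: the sampler simply runs the extractor and keeps only inputs that extract to 0.

```python
import random

def adversarial_sampler(extractor, n_bits, draws):
    """Rejection-sample uniform n-bit strings until extractor(x) == 0.

    The sampler can afford to run the extractor, so it concentrates all
    probability mass on the extractor's preimage of 0.
    """
    out = []
    while len(out) < draws:
        x = tuple(random.randint(0, 1) for _ in range(n_bits))
        if extractor(x) == 0:
            out.append(x)
    return out

def parity(x):
    """Stand-in 1-bit deterministic 'extractor'."""
    return sum(x) % 2

samples = adversarial_sampler(parity, 8, draws=5)
# Every sample extracts to 0, even though the source is uniform over half
# of {0,1}^8 and so has min-entropy n - 1 = 7 bits.
```

The same argument works against any fixed deterministic extractor the sampler can evaluate, which is why the paper's positive results require the extractor to have polynomially more resources than the sampler.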
In Search of an Easy Witness: Exponential Time vs. Probabilistic Polynomial Time
Abstract

Cited by 55 (5 self)
Restricting the search space {0, 1}^n to the set of truth tables of "easy" Boolean functions on log n variables, as well as using some known hardness–randomness tradeoffs, we establish a number of results relating the complexity of exponential-time and probabilistic polynomial-time complexity classes. In particular, we show that NEXP ⊂ P/poly ⇔ NEXP = MA; this can be interpreted as saying that no derandomization of MA (and, hence, of promise-BPP) is possible unless NEXP contains a hard Boolean function. We also prove several downward closure results for ZPP, RP, BPP, and MA; e.g., we show EXP = BPP ⇔ EE = BPE, where EE is the double-exponential time class and BPE is the exponential-time analogue of BPP.
Easiness Assumptions and Hardness Tests: Trading Time for Zero Error
 Journal of Computer and System Sciences
, 2000
Abstract

Cited by 42 (2 self)
We propose a new approach towards derandomization in the uniform setting, where it is computationally hard to find possible mistakes in the simulation of a given probabilistic algorithm. The approach consists in combining both easiness and hardness complexity assumptions: if a derandomization method based on an easiness assumption fails, then we obtain a certain hardness test that can be used to remove error in BPP algorithms. As an application, we prove that every RP algorithm can be simulated by a zero-error probabilistic algorithm, running in expected subexponential time, that appears correct infinitely often (i.o.) to every efficient adversary. A similar result by Impagliazzo and Wigderson (FOCS'98) states that BPP allows deterministic subexponential-time simulations that appear correct with respect to any efficiently samplable distribution i.o., under the assumption that EXP ≠ BPP; in contrast, our result does not rely on any unproven assumptions. As another application of our...
Extractors and Pseudo-Random Generators with Optimal Seed Length
, 1999
Abstract

Cited by 39 (11 self)
We give the first construction of a pseudo-random generator with optimal seed length that uses (essentially) arbitrary hardness. It builds on the novel recursive use of the NW-generator in [ISW99], which produced many optimal generators, one of which was pseudo-random. This is achieved in two stages: first significantly reducing the number of candidate generators, and then efficiently combining them into one. We also give the first construction of an extractor with optimal seed length that can handle sub-polynomial entropy levels. It builds on the fundamental connection between extractors and pseudo-random generators discovered by Trevisan [Tre99], combined with the construction above. Moreover, using Kolmogorov complexity rather than circuit size in the analysis gives super-polynomial savings for our construction, and renders our extractors better than previously known for all entropy levels. Research supported by NSF Award CCR-9734911, Sloan Research Fellowship BR-3311, grant #93025 of the j...
Statistical zero-knowledge proofs with efficient provers: Lattice problems and more
 In CRYPTO
, 2003
Abstract

Cited by 39 (9 self)
Abstract. We construct several new statistical zero-knowledge proofs with efficient provers, i.e., ones where the prover strategy runs in probabilistic polynomial time given an NP witness for the input string. Our first proof systems are for approximate versions of the Shortest Vector Problem (SVP) and Closest Vector Problem (CVP), where the witness is simply a short vector in the lattice or a lattice vector close to the target, respectively. Our proof systems are in fact proofs of knowledge, and as a result, we immediately obtain efficient lattice-based identification schemes which can be implemented with arbitrary families of lattices in which the approximate SVP or CVP are hard. We then turn to the general question of whether all problems in SZK ∩ NP admit statistical zero-knowledge proofs with efficient provers. Towards this end, we give a statistical zero-knowledge proof system with an efficient prover for a natural restriction of Statistical Difference, a complete problem for SZK. We also suggest a plausible approach to resolving the general question in the positive.
Power from Random Strings
 In Proceedings of the 43rd IEEE Symposium on Foundations of Computer Science
, 2002
Abstract

Cited by 36 (15 self)
We show that sets consisting of strings of high Kolmogorov complexity provide examples of sets that are complete for several complexity classes under probabilistic and non-uniform reductions. These sets are provably not complete under the usual many-one reductions. Let ...
The complexity of constructing pseudorandom generators from hard functions
 Computational Complexity
, 2004
Abstract

Cited by 35 (8 self)
Abstract. We study the complexity of constructing pseudorandom generators (PRGs) from hard functions, focusing on constant-depth circuits. We show that, starting from a function f: {0, 1}^l → {0, 1} computable in alternating time O(l) with O(1) alternations that is hard on average (i.e., there is a constant ε > 0 such that every circuit of size 2^{εl} fails to compute f on at least a 1/poly(l) fraction of inputs), we can construct a PRG: {0, 1}^{O(log n)} → {0, 1}^n computable by DLOGTIME-uniform constant-depth circuits of size polynomial in n. Such a PRG implies BP·AC^0 = AC^0 under DLOGTIME-uniformity. On the negative side, we prove that, starting from a worst-case hard function f: {0, 1}^l → {0, 1} (i.e., there is a constant ε > 0 such that every circuit of size 2^{εl} fails to compute f on some input), for every positive constant δ < 1 there is no black-box construction of a PRG: {0, 1}^{δn} → {0, 1}^n computable by constant-depth circuits of size polynomial in n. We also study worst-case hardness amplification, which is the related problem of producing an average-case hard function starting from a worst-case hard one. In particular, we deduce that there is no black-box worst-case hardness amplification within the polynomial-time hierarchy. These negative results are obtained by showing that polynomial-size constant-depth circuits cannot compute good extractors and list-decodable codes.