Results 1–10 of 16
Pseudorandom generators without the XOR Lemma
, 1998
Abstract

Cited by 130 (21 self)
Madhu Sudan, Luca Trevisan, Salil Vadhan. Abstract: Impagliazzo and Wigderson [IW97] have recently shown that if there exists a decision problem solvable in time 2^O(n) and having circuit complexity 2^Ω(n) (for all but finitely many n) then P = BPP. This result is a culmination of a series of works showing connections between the existence of hard predicates and the existence of good pseudorandom generators. The construction of Impagliazzo and Wigderson goes through three phases of "hardness amplification" (a multivariate polynomial encoding, a first derandomized XOR Lemma, and a second derandomized XOR Lemma) that are composed with the Nisan-Wigderson [NW94] generator. In this paper we present two different approaches to proving the main result of Impagliazzo and Wigderson. In developing each approach, we introduce new techniques and prove new results that could be useful in future improvements and/or applications of hardness-randomness tradeoffs. Our first result is that when (a modified version of) the Nisan-Wigderson generator construction is applied with a "mildly" hard predicate, the result is a generator that produces a distribution indistinguishable from having large min-entropy. An extractor can then be used to produce a distribution computationally indistinguishable from uniform. This is the first construction of a pseudorandom generator that works with a mildly hard predicate without doing hardness amplification. We then show that in the Impagliazzo-Wigderson construction only the first hardness-amplification phase (encoding with a multivariate polynomial) is necessary, since it already gives the required average-case hardness. We prove this result by (i) establishing a connection between the hardness-amplification problem and a list-decoding...
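The Nisan-Wigderson generator underlying these constructions has a simple shape: fix a combinatorial design (a family of equal-size subsets of the seed positions with small pairwise intersections), then let the i-th output bit be the hard predicate evaluated on the seed restricted to the i-th set. A minimal toy sketch follows; the greedy design routine and the choice of parity as the predicate are illustrative only (parity is of course easy, while a real instantiation needs a genuinely hard predicate):

```python
from itertools import combinations

def greedy_design(d, l, a, m):
    """Greedily pick m subsets of {0..d-1}, each of size l, with pairwise
    intersections of size at most a (a toy stand-in for an NW design)."""
    design = []
    for S in combinations(range(d), l):
        if all(len(set(S) & set(T)) <= a for T in design):
            design.append(S)
            if len(design) == m:
                return design
    raise ValueError("greedy search found no such design")

def nw_generator(predicate, design, seed_bits):
    """NW generator: output bit i is the predicate applied to the seed
    restricted to design set i."""
    return [predicate(tuple(seed_bits[j] for j in S)) for S in design]

parity = lambda bits: sum(bits) % 2  # deliberately easy toy predicate
D = greedy_design(d=7, l=3, a=1, m=7)
print(nw_generator(parity, D, [1, 0, 1, 1, 0, 0, 1]))  # → [0, 0, 0, 1, 1, 1, 1]
```

The small intersections are what drive the hybrid argument: each output bit depends on mostly fresh seed positions, so a distinguisher for the generator can be converted into a predictor for the predicate on one set while the overlapping bits are enumerated.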
Graph Nonisomorphism Has Subexponential Size Proofs Unless The Polynomial-Time Hierarchy Collapses
 SIAM Journal on Computing
, 1998
Abstract

Cited by 110 (6 self)
We establish hardness versus randomness tradeoffs for a broad class of randomized procedures. In particular, we create efficient nondeterministic simulations of bounded-round Arthur-Merlin games using a language in exponential time that cannot be decided by polynomial-size oracle circuits with access to satisfiability. We show that every language with a bounded-round Arthur-Merlin game has subexponential-size membership proofs for infinitely many input lengths unless exponential time coincides with the third level of the polynomial-time hierarchy (and hence the polynomial-time hierarchy collapses). This provides the first strong evidence that graph nonisomorphism has subexponential-size proofs. We set up a general framework for derandomization which encompasses more than the traditional model of randomized computation. For a randomized procedure to fit within this framework, we only require that for any fixed input the complexity of checking whether the procedure succeeds on a given ...
Extracting all the Randomness and Reducing the Error in Trevisan's Extractors
 In Proceedings of the 31st Annual ACM Symposium on Theory of Computing
, 1999
Abstract

Cited by 80 (17 self)
We give explicit constructions of extractors which work for a source of any min-entropy on strings of length n. These extractors can extract any constant fraction of the min-entropy using O(log² n) additional random bits, and can extract all the min-entropy using O(log³ n) additional random bits. Both of these constructions use fewer truly random bits than any previous construction which works for all min-entropies and extracts a constant fraction of the min-entropy. We then improve our second construction and show that we can reduce the entropy loss to 2 log(1/ε) + O(1) bits, while still using O(log³ n) truly random bits (where entropy loss is defined as [(source min-entropy) + (# truly random bits used) − (# output bits)], and ε is the statistical difference from uniform achieved). This entropy loss is optimal up to a constant additive term. Our ...
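The entropy-loss bookkeeping in this abstract is simple arithmetic and can be made concrete. The sketch below just computes loss = k + d − m and the 2·log₂(1/ε) lower bound it is measured against; the function names and the sample parameters are ours, not the paper's:

```python
import math

def entropy_loss(k, d, m):
    """Entropy loss of an extractor: (source min-entropy k) plus
    (# truly random seed bits d) minus (# output bits m)."""
    return k + d - m

def optimal_loss(eps):
    """The 2*log2(1/eps) lower bound on entropy loss (optimal up to an
    additive constant), where eps is the statistical distance from uniform."""
    return 2 * math.log2(1 / eps)

# Hypothetical parameters: 500 bits of min-entropy, a 100-bit seed,
# and 580 output bits give up 20 bits of entropy.
print(entropy_loss(500, 100, 580))  # → 20
print(optimal_loss(2**-8))          # → 16.0
```

So an extractor meeting the paper's bound with ε = 2⁻⁸ could in principle output all but roughly 16 of the k + d available bits.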
Extracting randomness from samplable distributions
 In Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science
, 2000
Abstract

Cited by 58 (8 self)
The standard notion of a randomness extractor is a procedure which converts any weak source of randomness into an almost uniform distribution. The conversion necessarily uses a small amount of pure randomness, which can be eliminated by complete enumeration in some, but not all, applications. Here, we consider the problem of deterministically converting a weak source of randomness into an almost uniform distribution. Previously, deterministic extraction procedures were known only for sources satisfying strong independence requirements. In this paper, we look at sources which are samplable, i.e. can be generated by an efficient sampling algorithm. We seek an efficient deterministic procedure that, given a sample from any samplable distribution of sufficiently large min-entropy, gives an almost uniformly distributed output. We explore the conditions under which such deterministic extractors exist. We observe that no deterministic extractor exists if the sampler is allowed to use more computational resources than the extractor. On the other hand, if the extractor is allowed (polynomially) more resources than the sampler, we show that deterministic extraction becomes possible. This is true unconditionally in the nonuniform setting (i.e., when the extractor can be computed by a small circuit), and (necessarily) relies on complexity assumptions in the uniform setting. One of our uniform constructions is as follows: assuming that there are problems in E = DTIME(2^O(n)) that are not solvable by subexponential-size circuits with Σ₅ gates, there is an efficient extractor that transforms any samplable distribution of length n and min-entropy (1−δ)n into an output distribution of length (1−O(δ))n, where δ is any sufficiently small constant. The running time of the extractor is polynomial in n and the circuit complexity of the sampler. These extractors are based on a connection be...
Pseudorandomness and averagecase complexity via uniform reductions
 In Proceedings of the 17th Annual IEEE Conference on Computational Complexity
, 2002
Abstract

Cited by 57 (9 self)
Abstract. Impagliazzo and Wigderson (39th FOCS, 1998) gave the first construction of pseudorandom generators from a uniform complexity assumption on EXP (namely EXP ≠ BPP). Unlike results in the nonuniform setting, their result does not provide a continuous tradeoff between worst-case hardness and pseudorandomness, nor does it explicitly establish an average-case hardness result. In this paper:
◦ We obtain an optimal worst-case to average-case connection for EXP: if EXP ⊄ BPTIME(t(n)), then EXP has problems that cannot be solved on a fraction 1/2 + 1/t′(n) of the inputs by BPTIME(t′(n)) algorithms, for t′ = t^Ω(1).
◦ We exhibit a PSPACE-complete self-correctible and downward self-reducible problem. This slightly simplifies and strengthens the proof of Impagliazzo and Wigderson, which used a #P-complete problem with these properties.
◦ We argue that the results of Impagliazzo and Wigderson, and the ones in this paper, cannot be proved via "black-box" uniform reductions.
Extractors and PseudoRandom Generators with Optimal Seed Length
, 1999
Abstract

Cited by 41 (11 self)
We give the first construction of a pseudorandom generator with optimal seed length that uses (essentially) arbitrary hardness. It builds on the novel recursive use of the NW-generator in [ISW99], which produced many optimal generators, one of which was pseudorandom. This is achieved in two stages: first significantly reducing the number of candidate generators, and then efficiently combining them into one. We also give the first construction of an extractor with optimal seed length that can handle subpolynomial entropy levels. It builds on the fundamental connection between extractors and pseudorandom generators discovered by Trevisan [Tre99], combined with the construction above. Moreover, using Kolmogorov complexity rather than circuit size in the analysis gives superpolynomial savings for our construction, and renders our extractors better than known for all entropy levels. Research supported by NSF Award CCR-9734911, Sloan Research Fellowship BR-3311, grant #93025 of the ...
Coding Constructions for Blacklisting Problems Without Computational Assumptions
, 1999
Abstract

Cited by 36 (0 self)
We consider the broadcast exclusion problem: how to transmit a message over a broadcast channel shared by N = 2^n users so that all but some specified coalition of k excluded users can understand the contents of the message. Using error-correcting codes, and avoiding any computational assumptions in our constructions, we construct natural schemes that completely avoid any dependence on n in the transmission overhead. Specifically, we construct: (i) (for illustrative purposes) a randomized scheme where the server's storage is exponential (in n), but the transmission overhead is O(k), and each user's storage is O(kn); (ii) a scheme based on polynomials where the transmission overhead is O(kn) and each user's storage is O(kn); and (iii) a scheme using algebraic-geometric codes where the transmission overhead is O(k²) and each user is required to store O(kn) keys. In the process of proving these results, we show how to construct very good cover-free set systems and co...
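A scheme in the spirit of the polynomial-based construction (ii) can be sketched with Shamir-style interpolation; what follows is an illustrative reconstruction under our own naming, not the paper's exact scheme. The server fixes a random degree-k polynomial P over a prime field with P(0) equal to the session key; user u stores the single share (u, P(u)); to exclude a coalition X of k users, the server broadcasts exactly the k shares belonging to X. A non-excluded user then holds k+1 distinct points and interpolates P(0), while an excluded user learns nothing new (its own share is already in the broadcast) and so has only k points, too few for a degree-k polynomial:

```python
import random

P_MOD = 2**61 - 1  # a Mersenne prime, standing in for a field of ~n-bit elements

def make_scheme(num_users, k, session_key):
    """Server side: random degree-k polynomial with P(0) = session_key;
    user u (u >= 1) receives the share (u, P(u))."""
    coeffs = [session_key] + [random.randrange(P_MOD) for _ in range(k)]
    eval_at = lambda x: sum(c * pow(x, i, P_MOD) for i, c in enumerate(coeffs)) % P_MOD
    return {u: eval_at(u) for u in range(1, num_users + 1)}

def broadcast_excluding(shares, excluded):
    """Broadcast exactly the k shares of the excluded coalition."""
    return [(u, shares[u]) for u in excluded]

def recover(own_point, broadcast):
    """Non-excluded user: Lagrange-interpolate P at x = 0 from k+1 points."""
    pts = [own_point] + broadcast
    key = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (-xj) % P_MOD
                den = den * (xi - xj) % P_MOD
        key = (key + yi * num * pow(den, P_MOD - 2, P_MOD)) % P_MOD
    return key

shares = make_scheme(num_users=8, k=2, session_key=12345)
msg = broadcast_excluding(shares, excluded=[3, 7])
print(recover((5, shares[5]), msg))  # user 5 is not excluded → 12345
```

With n-bit field elements this matches the quoted cost profile: the broadcast is k field elements, i.e. O(kn) bits of transmission overhead, against one stored share per user.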
On Recycling the Randomness of States in Space Bounded Computation
 In Proceedings of the ThirtyFirst Annual ACM Symposium on the Theory of Computing
, 1999
Abstract

Cited by 34 (14 self)
Let M be a logarithmic space Turing machine (or a polynomial-width branching program) that uses up to k ≤ 2^√(log n) (read-once) random bits. For a fixed input, let P_i(S) be the probability (over the random string) that at time i the machine M is in state S, and assume that some weak estimation of the probabilities P_i(S) is known or given or can be easily computed. We construct a logarithmic space pseudorandom generator that uses only a logarithmic number of truly random bits and outputs a sequence of k bits that looks random to M. This means that a very weak estimation of the state probabilities of M is sufficient for a full derandomization of M and for constructing pseudorandom sequences for M. We have several applications of the main theorem, as stated within. To prove our theorem, we introduce the idea of recycling the state S of the machine M at time i as part of the random string for the same machine at a later time. That is, we use the entropy of the random variable S in o...
Derandomizing ArthurMerlin Games under Uniform Assumptions
 Computational Complexity
, 2000
Abstract

Cited by 15 (0 self)
We study how the nondeterminism versus determinism problem and the time versus space problem are related to the problem of derandomization. In particular, we show two ways of derandomizing the complexity class AM under uniform assumptions, which was only known previously under nonuniform assumptions [13, 14]. First, we prove that either AM = NP or it appears to any nondeterministic polynomial-time adversary that NP is contained in deterministic subexponential time infinitely often. This implies that to any nondeterministic polynomial-time adversary, the graph nonisomorphism problem appears to have subexponential-size proofs infinitely often, the first nontrivial derandomization of this problem without any assumption. Next, we show that either all of BPP = P, AM = NP, and PH ⊆ ⊕P hold, or for any t(n) = 2^Ω(n), DTIME(t(n)) ⊆ DSPACE(t^ε(n)) infinitely often for any constant ε > 0. Similar tradeoffs also hold for a whole range of parameters. This improves previous results [17, 5] ...
Derandomization That is Rarely Wrong From Short Advice That is Typically Good
, 2002
Abstract

Cited by 15 (0 self)
For every ε > 0, we present a deterministic log-space algorithm that correctly decides undirected graph connectivity on all but at most 2^… of the n-vertex graphs. The same holds for every problem in Symmetric Logspace (i.e., SL).