Results 1-10 of 55
Simple Extractors for All Min-Entropies and a New Pseudo-Random Generator
 Journal of the ACM
, 2001
Cited by 107 (30 self)
A "randomness extractor" is an algorithm that, given a sample from a distribution with sufficiently high min-entropy and a short random seed, produces an output that is statistically indistinguishable from uniform. (Min-entropy is a measure of the amount of randomness in a distribution.) We present a simple, self-contained extractor construction that produces good extractors for all min-entropies. Our construction is algebraic and builds on a new polynomial-based approach introduced by Ta-Shma, Zuckerman, and Safra [TSZS01]. Using our improvements, we obtain, for example, an extractor with output length m = k/(log n)^(O(1/α)) and seed length (1 + α) log n for an arbitrary 0 < α ≤ 1, where n is the input length and k is the min-entropy of the input distribution. A "pseudorandom generator" is an algorithm that, given a short random seed, produces a long output that is computationally indistinguishable from uniform. Our technique also gives a new way to construct pseudorandom generators from functions that require large circuits. Our pseudorandom generator construction is not based on the Nisan-Wigderson generator [NW94], and turns worst-case hardness directly into pseudorandomness. The parameters of our generator match those in [IW97, STV01] and in particular are strong enough to obtain a new proof that P = BPP if E requires exponential size circuits.
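The two notions this abstract defines can be made concrete with a toy sketch: min-entropy computed straight from its definition, and a seeded extractor stood in for by a 2-universal hash family (a Leftover Hash Lemma-style stand-in, NOT the algebraic construction of the paper; the prime `P` and parameters below are illustrative assumptions).

```python
import math

def min_entropy(dist):
    """H_inf(X) = -log2(max_x Pr[X = x])."""
    return -math.log2(max(dist.values()))

# A 4-bit source far from uniform, yet with min-entropy 2:
# four outcomes, each with probability 1/4.
src = {0b0000: 0.25, 0b0011: 0.25, 0b1100: 0.25, 0b1111: 0.25}

# Toy seeded "extractor": the seed (a, b) names one affine hash h_{a,b}
# over GF(P), truncated to m output bits.
P = 257  # prime larger than the 8-bit input domain

def ext(x, seed, m=2):
    a, b = seed  # the short random seed selects one hash function
    return ((a * x + b) % P) % (2 ** m)
```

For a generic hash family the seed must name (a, b), which is far longer than the input; the paper's point is achieving seed length (1 + α) log n while keeping a long output.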
Lossless condensers, unbalanced expanders, and extractors
 In Proceedings of the 33rd Annual ACM Symposium on Theory of Computing
, 2001
Cited by 89 (20 self)
Trevisan showed that many pseudorandom generator constructions give rise to constructions of explicit extractors. We show how to use such constructions to obtain explicit lossless condensers. A lossless condenser is a probabilistic map, using only O(log n) additional random bits, that maps n-bit strings to poly(log K)-bit strings, such that any source with support size K is mapped almost injectively to the smaller domain. Our construction remains the best lossless condenser to date. By composing our condenser with previous extractors, we obtain new, improved extractors. For small enough min-entropies our extractors can output all of the randomness with a seed of only O(log n) bits. We also obtain a new disperser that works for every entropy loss, uses an O(log n)-bit seed, and has only O(log n) entropy loss. This is the best disperser construction to date, and yields other applications. Finally, our lossless condenser can be viewed as an unbalanced expander.
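The "almost injective" requirement can be illustrated numerically (this is a hedged stand-in, not the explicit construction above): map a support of size K through one random affine hash truncated to about 3 log K output bits and measure how far from injective the map is; a birthday-bound argument says collisions should be rare.

```python
import random

def collision_fraction(support, out_bits, seed=0):
    """Map each element of the support through one seeded random affine
    hash mod a Mersenne prime, truncated to out_bits bits, and report
    the fraction of elements whose image was already taken."""
    rng = random.Random(seed)
    P = (1 << 61) - 1  # 2^61 - 1, a Mersenne prime
    a, b = rng.randrange(1, P), rng.randrange(P)
    seen, collisions = set(), 0
    for x in support:
        y = ((a * x + b) % P) % (1 << out_bits)
        if y in seen:
            collisions += 1
        seen.add(y)
    return collisions / len(support)
```

With K = 1000 and 30 output bits the map is injective or nearly so; squeezing the same support into 5 bits forces almost everything to collide.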
Extractors and Pseudorandom Generators
 Journal of the ACM
, 1999
Cited by 87 (5 self)
We introduce a new approach to constructing extractors. Extractors are algorithms that transform a "weakly random" distribution into an almost uniform distribution. Explicit constructions of extractors have a variety of important applications, and tend to be very difficult to obtain.
Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes
 In Proceedings of the 22nd Annual IEEE Conference on Computational Complexity
, 2007
Cited by 77 (7 self)
We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous constructions of Ta-Shma, Umans, and Zuckerman (STOC '01) required at least one of these to be quasi-polynomial in the optimal. Our expanders have a short and self-contained description and analysis, based on the ideas underlying the recent list-decodable error-correcting codes of Parvaresh and Vardy (FOCS '05). Our expanders can be interpreted as near-optimal "randomness condensers," which reduce the task of extracting randomness from sources of arbitrary min-entropy rate to extracting randomness from sources of min-entropy rate arbitrarily close to 1, which is a much easier task. Using this connection, we obtain a new construction of randomness extractors that is optimal up to constant factors, while being much simpler than the previous construction of Lu et al. (STOC '03) and improving upon it when the error parameter is small (e.g. 1/poly(n)).
Extractors: Optimal up to Constant Factors
 STOC'03
, 2003
Cited by 51 (12 self)
This paper provides the first explicit construction of extractors which are simultaneously optimal up to constant factors in both seed length and output length. More precisely, for every n, k, our extractor uses a random seed of length O(log n) to transform any random source on n bits with (min-)entropy k into a distribution on (1 − α)k bits that is ɛ-close to uniform. Here α and ɛ can be taken to be any positive constants. (In fact, ɛ can be almost polynomially small.) Our improvements are obtained via three new techniques, each of which may be of independent interest. The first is a general construction of mergers [22] from locally decodable error-correcting codes. The second introduces new condensers that have constant seed length (and retain a constant fraction of the min-entropy in the random source). The third is a way to augment the "win-win repeated condensing" paradigm of [17] with error reduction techniques like those of [15], so that our constant seed-length condensers can be used without error accumulation.
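The "ɛ-close to uniform" guarantee used here (and throughout these results) means closeness in statistical (total variation) distance. A minimal sketch of that measure, with an illustrative biased distribution that is not taken from the paper:

```python
def statistical_distance(p, q):
    """Total variation distance between two distributions given as
    dicts: max_T |P(T) - Q(T)| = (1/2) * sum_x |p(x) - q(x)|."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

uniform2 = {b: 0.25 for b in range(4)}       # uniform on 2 bits
skewed = {0: 0.4, 1: 0.2, 2: 0.2, 3: 0.2}    # a slightly biased output
```

An extractor output that is ɛ-close to uniform can replace true randomness in any application at a cost of at most ɛ in failure probability.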
Extracting Randomness via Repeated Condensing
 In Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science
, 2000
Cited by 43 (16 self)
On an input probability distribution with some (min-)entropy, an extractor outputs a distribution with a (near-)maximum entropy rate (namely the uniform distribution). A natural weakening of this concept is a condenser, whose output distribution has a higher entropy rate than the input distribution (without losing much of the initial entropy). In this paper we construct efficient explicit condensers. The condenser constructions combine (variants or more efficient versions of) ideas from several works, including the block extraction scheme of [NZ96], the observation made in [SZ94, NT99] that a failure of the block extraction scheme is also useful, the recursive "win-win" case analysis of [ISW99, ISW00], and the error correction of random sources used in [Tre99]. As a natural byproduct (via repeated iteration of condensers), we obtain new extractor constructions. The new extractors give significant qualitative improvements over previous ones for sources of arbitrary min-entropy; they...
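The arithmetic behind repeated condensing can be sketched numerically. Assume a hypothetical condenser that halves the string length while retaining a 0.9 fraction of the min-entropy: the entropy rate k/n then grows by a factor of 1.8 per round, so a constant rate is reached after O(log(n/k)) rounds. The parameters below are illustrative assumptions, not the paper's.

```python
def rounds_to_constant_rate(n, k, target_rate=0.5, loss=0.9):
    """Count rounds of a hypothetical condenser that halves the output
    length while keeping a `loss` fraction of the min-entropy, until
    the entropy rate k/n reaches target_rate."""
    assert 2 * loss > 1, "rate must grow each round, else this never ends"
    rounds = 0
    while k / n < target_rate:
        n, k = n / 2, k * loss
        rounds += 1
    return rounds
```

Starting from a megabit source with a kilobit of min-entropy (rate 2^-10), eleven such rounds suffice, at a total entropy loss of only 0.9^11 ≈ 31%.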
Extractor Codes
, 2001
Cited by 42 (6 self)
We define new error-correcting codes based on extractors. We show that for certain choices of parameters these codes have better list decoding properties than are known for other codes, and are provably better than Reed-Solomon codes. We further show that codes with strong list decoding properties are equivalent to slice extractors, a variant of extractors. We give an application of extractor codes to extracting many hard-core bits from a one-way function, using few auxiliary random bits. Finally, we show that explicit slice extractors for certain other parameters would yield optimal bipartite Ramsey graphs.
Extractors and Pseudo-Random Generators with Optimal Seed Length
, 1999
Cited by 39 (11 self)
We give the first construction of a pseudorandom generator with optimal seed length that uses (essentially) arbitrary hardness. It builds on the novel recursive use of the NW-generator in [ISW99], which produced many optimal generators, one of which was pseudorandom. This is achieved in two stages: first significantly reducing the number of candidate generators, and then efficiently combining them into one. We also give the first construction of an extractor with optimal seed length that can handle subpolynomial entropy levels. It builds on the fundamental connection between extractors and pseudorandom generators discovered by Trevisan [Tre99], combined with the construction above. Moreover, using Kolmogorov complexity rather than circuit size in the analysis gives superpolynomial savings for our construction, and renders our extractors better than previously known for all entropy levels. Research supported by NSF Award CCR-9734911, Sloan Research Fellowship BR-3311, grant #93025 of the j...
Extractors from Reed-Muller Codes
 In Proceedings of the 42nd Annual IEEE Symposium on Foundations of Computer Science
, 2001
Cited by 39 (5 self)
Finding explicit extractors is an important derandomization goal that has received a lot of attention in the past decade. This research has focused on two approaches, one related to hashing and the other to pseudorandom generators. A third view, regarding extractors as good error-correcting codes, was noticed before. Yet, researchers had failed to build extractors directly from a good code, without using other tools from pseudorandomness. We succeed in constructing an extractor directly from a Reed-Muller code. To do this, we develop a novel proof technique. Furthermore, our construction is the first and only construction with degree close to linear. In contrast, the best previous constructions had brought the log of the degree within a constant of optimal, which gives polynomial degree. This improvement is important for certain applications. For example, it follows that approximating the VC dimension to within a factor of N
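A Reed-Muller codeword is simply the evaluation table of a low-degree multivariate polynomial over a finite field. The toy encoder below sketches the code itself (not the extractor built from it); the field size Q and the parameters are illustrative assumptions.

```python
import itertools
import math

Q = 5  # field size (prime, so arithmetic mod Q forms a field GF(Q))

def rm_encode(coeffs, m, d):
    """Evaluate the m-variate polynomial of total degree <= d with the
    given monomial coefficients at every point of GF(Q)^m; the list of
    evaluations is the Reed-Muller codeword."""
    monomials = [e for e in itertools.product(range(d + 1), repeat=m)
                 if sum(e) <= d]
    assert len(coeffs) == len(monomials)
    word = []
    for point in itertools.product(range(Q), repeat=m):
        val = sum(c * math.prod(x ** e for x, e in zip(point, exp))
                  for c, exp in zip(coeffs, monomials)) % Q
        word.append(val)
    return word
```

The code's distance follows from the Schwartz-Zippel lemma: two distinct polynomials of total degree d agree on at most a d/Q fraction of the Q^m evaluation points.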