Results 1 - 10 of 49
Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes
- In Proceedings of the 22nd Annual IEEE Conference on Computational Complexity, 2007
"... We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous ..."
Cited by 120 (7 self)
We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous constructions of Ta-Shma, Umans, and Zuckerman (STOC ’01) required at least one of these to be quasipolynomially far from optimal. Our expanders have a short and self-contained description and analysis, based on the ideas underlying the recent list-decodable error-correcting codes of Parvaresh and Vardy (FOCS ’05). Our expanders can be interpreted as near-optimal “randomness condensers,” which reduce the task of extracting randomness from sources of arbitrary min-entropy rate to extracting randomness from sources of min-entropy rate arbitrarily close to 1, a much easier task. Using this connection, we obtain a new construction of randomness extractors that is optimal up to constant factors, while being much simpler than the previous construction of Lu et al. (STOC ’03) and improving upon it when the error parameter is small (e.g., 1/poly(n)).
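To make the expansion property concrete: a bipartite graph with left degree D is a (K, A) expander if every set S of at most K left vertices has at least A·|S| distinct right-hand neighbors, and "expansion close to the degree" means A can be taken close to D. The following Python sketch is a brute-force checker on toy sizes, not the paper's construction; the graph G and all parameters are illustrative.

```python
from itertools import combinations
import random

def is_expander(neighbors, K, A):
    """Brute-force check of (K, A) vertex expansion for a bipartite graph.

    neighbors[v] is the list of right-hand neighbors of left vertex v.
    Returns True iff every left set S with |S| <= K satisfies
    |Gamma(S)| >= A * |S|.  Exponential in K; only for toy sizes.
    """
    left = range(len(neighbors))
    for s in range(1, K + 1):
        for S in combinations(left, s):
            image = set()
            for v in S:
                image.update(neighbors[v])
            if len(image) < A * s:
                return False
    return True

# Toy example: N = 32 left vertices, M = 64 right vertices, degree D = 8.
random.seed(0)
N, M, D = 32, 64, 8
G = [random.sample(range(M), D) for _ in range(N)]
# Expansion 0.75 * D, i.e. "close to the degree", for sets of size <= 4.
print(is_expander(G, K=4, A=0.75 * D))
```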
Simple Extractors for All Min-Entropies and a New Pseudo-Random Generator
"... We present a simple, self-contained extractor construction that produces good extractors for all min-entropies (min-entropy measures the amount of randomness contained in a weak random source). Our construction is algebraic and builds on a new polynomial-based approach introduced by Ta-Shma, Zuckerm ..."
Cited by 111 (27 self)
We present a simple, self-contained extractor construction that produces good extractors for all min-entropies (min-entropy measures the amount of randomness contained in a weak random source). Our construction is algebraic and builds on a new polynomial-based approach introduced by Ta-Shma, Zuckerman, and Safra [37]. Using our improvements, we obtain, for example, an extractor with output length m = k^{1−δ} and seed length O(log n). This matches the parameters of Trevisan's breakthrough result [38] and additionally achieves those parameters for small min-entropies k. Extending [38] to small k has been the focus of a sequence of recent works [15, 26, 35]. Our construction gives a much simpler and more direct solution to this problem. Applying similar ideas to the problem of building pseudo-random generators, we obtain a new pseudo-random generator construction that is not based on the NW generator [21], and turns worst-case hardness directly into pseudorandomness. The parameters of this generator match those in [16, 33] and in particular are strong enough to obtain a new proof that P = BPP if E requires exponential-size circuits. Essentially the same construction yields a hitting-set generator with optimal seed length that outputs s^{Ω(1)} bits when given a function that requires circuits of size s (for any s). This implies a hardness versus randomness tradeoff for RP and BPP that is optimal (up to polynomial factors), solving an open problem raised by [14]. Our generators can also be used to derandomize AM in a way that improves and extends the results of [4, 18, 20].
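Since the abstracts above lean on min-entropy and on closeness to the uniform distribution, here is a minimal Python sketch of those two standard definitions; the example distributions are made up for illustration.

```python
import math

def min_entropy(dist):
    """H_inf(X) = -log2 max_x Pr[X = x], for dist a dict mapping
    outcomes to probabilities.  A source over n-bit strings with
    H_inf(X) >= k is a 'k-source' in the extractor literature."""
    return -math.log2(max(dist.values()))

def statistical_distance(p, q):
    """Half the L1 distance; an extractor's output must be eps-close
    to uniform in this metric."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# A flat source on 3-bit strings with min-entropy 2: uniform on 4 strings.
flat = {f"{x:03b}": 0.25 for x in (0, 3, 5, 6)}
uniform = {f"{x:03b}": 0.125 for x in range(8)}
print(min_entropy(flat))                    # 2.0
print(statistical_distance(flat, uniform))  # 0.5
```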
Loss-less condensers, unbalanced expanders, and extractors
- In Proceedings of the 33rd Annual ACM Symposium on Theory of Computing, 2001
"... Abstract Trevisan showed that many pseudorandom generator constructions give rise to constructionsof explicit extractors. We show how to use such constructions to obtain explicit lossless condensers. A lossless condenser is a probabilistic map using only O(log n) additional random bitsthat maps n bi ..."
Cited by 98 (17 self)
Trevisan showed that many pseudorandom generator constructions give rise to constructions of explicit extractors. We show how to use such constructions to obtain explicit lossless condensers. A lossless condenser is a probabilistic map, using only O(log n) additional random bits, that maps n-bit strings to poly(log K)-bit strings, such that any source with support size K is mapped almost injectively to the smaller domain. Our construction remains the best lossless condenser to date. By composing our condenser with previous extractors, we obtain new, improved extractors. For small enough min-entropies our extractors can output all of the randomness using a seed of only O(log n) bits. We also obtain a new disperser that works for every entropy loss, uses an O(log n)-bit seed, and has only O(log n) entropy loss. This is the best disperser construction to date, and yields other applications. Finally, our lossless condenser can be viewed as an unbalanced expander.
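A hedged sketch of what "mapped almost injectively" means in practice: for a source of support size K, count how much of the support is lost to collisions under a seeded map. The toy_condense function below is a stand-in hash, not the paper's lossless condenser.

```python
import random

def collision_fraction(support, condense, seeds):
    """Smallest fraction of the support lost to collisions over the
    given seeds.  A lossless condenser keeps this fraction tiny for
    most seeds, so the source is mapped almost injectively."""
    best = 1.0
    for seed in seeds:
        images = {condense(x, seed) for x in support}
        best = min(best, 1.0 - len(images) / len(support))
    return best

# Toy seeded map (NOT the paper's construction): hash a 32-bit input
# down to 24 bits; by a birthday bound, a support of size 2**10 should
# collide rarely in a 2**24-element range.
def toy_condense(x, seed, m=24):
    return random.Random(x * 1_000_003 + seed).getrandbits(m)

support = random.Random(1).sample(range(2**32), 2**10)
print(collision_fraction(support, toy_condense, seeds=range(8)))
```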
On Constructing Locally Computable Extractors and Cryptosystems In The Bounded Storage Model
- Journal of Cryptology, 2002
"... We consider the problem of constructing randomness extractors which are locally computable, i.e. only read a small number of bits from their input. As recently shown by Lu (CRYPTO `02 ), locally computable extractors directly yield secure private-key cryptosystems in Maurer's bounded storage ..."
Cited by 81 (8 self)
We consider the problem of constructing randomness extractors that are locally computable, i.e., read only a small number of bits from their input. As recently shown by Lu (CRYPTO ’02), locally computable extractors directly yield secure private-key cryptosystems in Maurer's bounded storage model (J. Cryptology, 1992).
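To make "locally computable" concrete, here is a toy seeded map in that spirit (illustrative only, not Lu's or this paper's construction): the seed selects a few positions of a huge source string, and only those bits are ever read.

```python
import random

def local_extract(source_bits, seed, t=16):
    """Toy locally computable map: the seed picks t positions of the
    long source and the output is their XOR, so only t of the source's
    bits are ever read.  In the bounded storage model the source is a
    huge public random string an adversary cannot store in full."""
    rng = random.Random(seed)
    positions = rng.sample(range(len(source_bits)), t)
    bit = 0
    for i in positions:
        bit ^= source_bits[i]
    return bit

# A long public random string; honest parties sharing a short seed
# read only 16 of its 2**20 bits.
X = [random.getrandbits(1) for _ in range(2**20)]
print(local_extract(X, seed=42))
```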
Extracting randomness using few independent sources
- In Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, 2004
"... In this work we give the first deterministic extractors from a constant number of weak sources whose entropy rate is less than 1/2. Specifically, for every δ> 0 we give an explicit construction for extracting randomness from a constant (depending polynomially on 1/δ) number of distributions over ..."
Cited by 50 (6 self)
In this work we give the first deterministic extractors from a constant number of weak sources whose entropy rate is less than 1/2. Specifically, for every δ > 0 we give an explicit construction for extracting randomness from a constant (depending polynomially on 1/δ) number of distributions over {0,1}^n, each having min-entropy δn. These extractors output n bits, which are 2^{−n}-close to uniform. This construction uses several results from additive number theory, in particular recent ones by Bourgain, Katz, and Tao [BKT03] and by Konyagin [Kon03]. We also consider the related problem of constructing randomness dispersers. For any constant output length m, our dispersers use a constant number of identical distributions, each with min-entropy Ω(log n), and output every possible m-bit string with positive probability. The main tool we use is a variant of the “stepping-up lemma” used in establishing lower bounds.
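The additive-number-theory input mentioned above is a sum-product estimate: for suitable sets A in F_p, at least one of the sum set A+A or the product set A·A is polynomially larger than A. A small Python demonstration, with p and A chosen arbitrarily:

```python
def sum_product(A, p):
    """Sizes of the sum set and product set of A inside F_p.  The
    Bourgain-Katz-Tao theorem says that for suitable A at least one of
    these is polynomially larger than |A|; estimates of this kind drive
    the multi-source extractors above."""
    sums = {(a + b) % p for a in A for b in A}
    prods = {(a * b) % p for a in A for b in A}
    return len(sums), len(prods)

p = 101
A = set(range(1, 21))   # an arithmetic progression: its sum set is small,
                        # so its product set must be large
print(len(A), sum_product(A, p))
```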
Extractors for a constant number of polynomially small min-entropy independent sources
- In Proceedings of the 38th Annual ACM Symposium on Theory of Computing, 2006
"... We consider the problem of randomness extraction from independent sources. We construct an extractor that can extract from a constant number of independent sources of length n, each of which have min-entropy n γ for an arbitrarily small constant γ> 0. Our extractor is obtained by composing seeded ..."
Cited by 42 (9 self)
We consider the problem of randomness extraction from independent sources. We construct an extractor that can extract from a constant number of independent sources of length n, each of which has min-entropy n^γ for an arbitrarily small constant γ > 0. Our extractor is obtained by composing seeded extractors in simple ways. We introduce a new technique to condense independent somewhere-random sources, which looks like a useful way to manipulate independent sources. Our techniques are different from those used in recent work [BIW04, BKS+05, Raz05, Bou05] on this problem in the sense that they do not rely on any results from additive number theory. Using Bourgain's extractor [Bou05] as a black box, we obtain a new extractor for two independent block sources with few blocks, even when the min-entropy is as small as polylog(n). We also show how to modify the 2-source disperser for linear min-entropy of Barak et al. [BKS+05] and the 3-source extractor of Raz [Raz05] to get dispersers/extractors with exponentially small error and linear output length, where previously both were constant. In terms of Ramsey hypergraphs, for every constant 0 < γ < 1 our construction gives a family of explicit O(1/γ)-uniform hypergraphs on N vertices that avoid cliques and independent sets of size 2^{(log N)^γ}.
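A somewhere-random source, the object this construction condenses, is a tuple of rows of which at least one (unknown) row is uniform. A minimal sampler for such a source, with the adversarial rows fixed to all-zeros purely for illustration:

```python
import random

def somewhere_random_source(rows, row_len, rng):
    """Sample from a toy somewhere-random source: 'rows' strings of
    row_len bits, of which one secret row is truly uniform while the
    rest are adversarial (here simply all-zeros).  Extractors for such
    sources do not know WHICH row is the good one."""
    good = rng.randrange(rows)
    sample = [[0] * row_len for _ in range(rows)]
    sample[good] = [rng.getrandbits(1) for _ in range(row_len)]
    return sample

rng = random.Random(7)
for row in somewhere_random_source(rows=4, row_len=16, rng=rng):
    print("".join(map(str, row)))
```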
Extracting Randomness via Repeated Condensing
"... On an input probability distribution with some (min-)entropy an extractor outputs a distribution with a (near) maximum entropy rate (namely the uniform distribution). A natural weakening of this concept is a condenser, whose output distribution has a higher entropy rate than the input distribution ..."
Cited by 42 (15 self)
On an input probability distribution with some (min-)entropy, an extractor outputs a distribution with a (near-)maximum entropy rate (namely the uniform distribution). A natural weakening of this concept is a condenser, whose output distribution has a higher entropy rate than the input distribution (without losing much of the initial entropy). In this paper we construct efficient explicit condensers. The condenser constructions combine (variants or more efficient versions of) ideas from several works, including the block extraction scheme of [10], the observation made in [15, 9] that a failure of the block extraction scheme is also useful, the recursive “win-win” case analysis of [4, 5], and the error correction of random sources used in [17]. As a natural byproduct (via repeated application of condensers), we obtain new extractor constructions. The new extractors give significant qualitative improvements over previous ones for sources of arbitrary min-entropy; they are nearly optimal simultaneously in the two main parameters: seed length and output length. Specifically, our extractors can make either of these two parameters optimal (up to a constant factor) at only a poly-logarithmic loss in the other. Previous constructions require polynomial loss in both cases for general sources. We also give a simple reduction converting “standard” extractors (which are good for an average seed) to “strong” ones (which are good for most seeds), with essentially the same parameters. With it, all the above improvements apply to strong extractors as well.
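A schematic of the repeated-condensing loop, with entirely made-up parameters: if one condensing step halves the source length while losing only a few bits of min-entropy, the entropy rate roughly doubles per round, so O(log(n/k)) rounds reach rate close to 1, at which point a standard high-min-entropy extractor finishes the job.

```python
def repeated_condensing(n, k, loss=8):
    """Numeric sketch of the iteration (parameters illustrative only):
    each round a hypothetical condenser halves the source length while
    losing 'loss' bits of min-entropy, so the entropy RATE k/n roughly
    doubles per round."""
    rounds = 0
    while k / n < 0.9:
        n, k = n // 2, k - loss
        rounds += 1
        print(f"round {rounds}: length n = {n:6d}, entropy k = {k}, rate = {k/n:.3f}")
    return rounds

repeated_condensing(n=2**16, k=2**10)   # reaches rate ~0.95 in 6 rounds
```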
Simulating Independence: New Constructions of Condensers, Ramsey Graphs, Dispersers, and Extractors
- In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, 2005
"... We present new explicit constructions of deterministic randomness extractors, dispersers and related objects. More precisely, a distribution X over binary strings of length n is called a δ-source if it assigns probability at most 2 −δn to any string of length n, and for any δ> 0 we construct the ..."
Cited by 41 (10 self)
We present new explicit constructions of deterministic randomness extractors, dispersers, and related objects. More precisely, a distribution X over binary strings of length n is called a δ-source if it assigns probability at most 2^{−δn} to any string of length n, and for any δ > 0 we construct the following poly(n)-time computable functions: 2-source disperser: D: ({0,1}^n)^2 → {0,1} such that for any two independent δ-sources X1, X2, the support of D(X1, X2) is {0,1}. Bipartite Ramsey graph: Let N = 2^n. A corollary is that the function D is a 2-coloring of the edges of K_{N,N} (the complete bipartite graph over two sets of N vertices) such that no induced subgraph of size N^δ by N^δ is monochromatic. 3-source extractor: E: ({0,1}^n)^3 → {0,1} such that for any three independent δ-sources X1, X2, X3, the output E(X1, X2, X3) is (o(1)-close to being) an unbiased random bit. No previous explicit construction was known for either of these for any δ < 1/2, and these results constitute major progress on long-standing open problems. A component in these results is a new construction of condensers that may be of independent interest.
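For contrast with the δ < 1/2 regime achieved above, the classical Chor-Goldreich inner-product 2-source extractor, which needs both sources to have min-entropy rate above 1/2, fits in a few lines:

```python
def inner_product_extract(x, y):
    """Chor-Goldreich 2-source extractor: <x, y> mod 2.  This classical
    construction works only when both sources have min-entropy rate
    above 1/2; the paper's contribution is explicit dispersers and
    3-source extractors that break the 1/2 barrier."""
    assert len(x) == len(y)
    return sum(a & b for a, b in zip(x, y)) % 2

# Two sample 8-bit strings (in practice, draws from two independent sources).
x = [1, 0, 1, 1, 0, 0, 1, 0]
y = [0, 1, 1, 0, 1, 0, 1, 1]
print(inner_product_extract(x, y))  # a single (nearly unbiased) bit
```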
Extensions to the Method of Multiplicities, with applications to Kakeya Sets and Mergers
- 2009
"... We extend the “method of multiplicities ” to get the following results, of interest in combinatorics and randomness extraction. 1. We show that every Kakeya set in F n q, the n-dimensional vector space over the finite field on q elements, must be of size at least q n /2 n. This bound is tight to wit ..."
Cited by 38 (6 self)
We extend the “method of multiplicities” to get the following results, of interest in combinatorics and randomness extraction. 1. We show that every Kakeya set in F_q^n, the n-dimensional vector space over the finite field of q elements, must be of size at least q^n / 2^n. This bound is tight to within a 2 + o(1) factor for every n as q → ∞. 2. We give improved “randomness mergers”, i.e., seeded functions that take as input k (possibly correlated) random variables in {0,1}^N and a short random seed, and output a single random variable in {0,1}^N that is statistically close to having entropy (1−δ)·N when one of the k input variables is distributed uniformly. The seed we require is only (1/δ) · log k bits long, which significantly improves upon previous constructions of mergers. The “method of multiplicities”, as used in prior work, analyzed subsets of vector spaces over finite fields by constructing somewhat low-degree interpolating polynomials that vanish on every point in the subset with high multiplicity. The typical use of this method involved showing that the interpolating polynomial also vanished on some points outside the subset, and then used simple …
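The Kakeya bound is easy to exercise in two dimensions. A Kakeya set in F_q^n is one containing a full line in every direction; the classical parabola construction below (standard folklore, not from this paper) gives a set of about q²/2 points in F_q^2, comfortably above the q^n / 2^n bound for n = 2.

```python
def kakeya_parabola(q):
    """Kakeya set in F_q^2 for an odd prime q: for each slope m, take
    the line y = m*x - m^2/4, and add one vertical line for the
    remaining direction.  Contains a line in every direction yet has
    only about q^2/2 points."""
    inv4 = pow(4, -1, q)
    K = {(t, (m * t - m * m * inv4) % q) for m in range(q) for t in range(q)}
    K |= {(0, t) for t in range(q)}          # the vertical direction
    return K

def contains_line_every_direction(K, q):
    """Brute-force verification of the Kakeya property."""
    for m in range(q):                        # every non-vertical slope
        if not any(all((t, (m * t + c) % q) in K for t in range(q))
                   for c in range(q)):
            return False
    return any(all((c, t) in K for t in range(q)) for c in range(q))

q = 13
K = kakeya_parabola(q)
# ~q^2/2 points versus q^2 total, and well above the q^n/2^n bound.
print(len(K), q * q, contains_line_every_direction(K, q))
```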