Results 1–10 of 25
Simple Constructions of Almost k-wise Independent Random Variables
, 1992
"... We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1))(log log n + k/2 + log k + log 1 ɛ), where ɛ is the statistical difference between the dist ..."
Abstract

Cited by 270 (41 self)
We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1))(log log n + k/2 + log k + log(1/ε)), where ε is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by Naor and Naor (our size bound is better as long as ε < 1/(k log n)). An additional advantage of our constructions is their simplicity.
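One of the constructions in this line of work is based on quadratic characters (Legendre symbols) modulo a prime; the sketch below is only a rough illustration under my own assumptions (the choice of prime, the convention for χ(0), and the exact seed length are not taken from the paper). The only randomness is the shift x, i.e. about log₂(p) bits, which is where the short description length comes from.

```python
# Hypothetical sketch (not the paper's exact parameters): a small-bias string
# built from the quadratic character (Legendre symbol) modulo a prime.  By a
# Weil-type character-sum bound the bias is roughly n/sqrt(p), and small bias
# implies almost k-wise independence.
import random

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, with the convention chi(0) = 1."""
    if a % p == 0:
        return 1            # arbitrary choice for chi(0); affects only O(n/p) positions
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def quadratic_character_sample(n, p, seed=None):
    """Return n bits b_i = [chi(x + i) == 1] for a random shift x in Z_p."""
    rng = random.Random(seed)
    x = rng.randrange(p)    # the only randomness: one element of Z_p, ~log2(p) bits
    return [1 if legendre(x + i, p) == 1 else 0 for i in range(n)]

if __name__ == "__main__":
    # p should be a prime much larger than n; 10007 is prime.
    print(quadratic_character_sample(n=64, p=10007, seed=0))
```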
Small-Bias Probability Spaces: Efficient Constructions and Applications
 SIAM J. Comput
, 1993
"... We show how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. They are called fflbiased random variables. The number of random bits needed to generate the random variables is ..."
Abstract

Cited by 258 (15 self)
We show how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. They are called ε-biased random variables. The number of random bits needed to generate the random variables is O(log n + log(1/ε)). Thus, if ε is polynomially small, then the size of the sample space is also polynomial. Random variables that are ε-biased can be used to construct "almost" k-wise independent random variables where ε is a function of k. These probability spaces have various applications: 1. Derandomization of algorithms: many randomized algorithms that require only k-wise independence of their random bits (where k is bounded by O(log n)) can be derandomized by using ε-biased random variables. 2. Reducing the number of random bits required by certain randomized algorithms, e.g., verification of matrix multiplication. 3. Exhaustive testing of combinatorial circui...
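To make the ε-biased definition concrete, the following brute-force checker computes the bias of a small explicit sample space directly from the parity condition; it is purely illustrative and exponential in n, not part of the paper's constructions.

```python
# Illustrative only: brute-force computation of the bias of a small sample space,
# directly from the definition in the abstract.  A space is epsilon-biased if for
# every nonempty subset S of the n positions, the parity of the bits in S is 0
# with probability within epsilon of 1/2.
from itertools import combinations

def bias(sample_space, n):
    """Maximum over nonempty S of |Pr[parity of bits in S is 0] - 1/2|."""
    worst = 0.0
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            zero = sum(1 for x in sample_space if sum(x[i] for i in S) % 2 == 0)
            worst = max(worst, abs(zero / len(sample_space) - 0.5))
    return worst

if __name__ == "__main__":
    # Toy spaces on n = 3 bits, just to make the definition concrete.
    cube = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    even = [x for x in cube if sum(x) % 2 == 0]
    print(bias(cube, 3))   # 0.0: the uniform distribution is 0-biased
    print(bias(even, 3))   # 0.5: the parity of all three bits is always 0
```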
Randomness is Linear in Space
 Journal of Computer and System Sciences
, 1993
"... We show that any randomized algorithm that runs in space S and time T and uses poly(S) random bits can be simulated using only O(S) random bits in space S and time T poly(S). A deterministic simulation in space S follows. Of independent interest is our main technical tool: a procedure which extracts ..."
Abstract

Cited by 229 (20 self)
We show that any randomized algorithm that runs in space S and time T and uses poly(S) random bits can be simulated using only O(S) random bits in space S and time T · poly(S). A deterministic simulation in space S follows. Of independent interest is our main technical tool: a procedure which extracts randomness from a defective random source using a small additional number of truly random bits.
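The "procedure which extracts randomness from a defective random source using a small additional number of truly random bits" is what is now called a seeded extractor; a standard formulation of the object (my paraphrase, with parameters left symbolic rather than the ones achieved in the paper) is:

```latex
% Standard (k, eps)-extractor definition, stated for orientation only;
% the exact parameters achieved by the paper are not reproduced here.
A function $\mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m$ is a
$(k,\varepsilon)$-extractor if for every source $X$ on $\{0,1\}^n$ with
min-entropy $H_\infty(X) \ge k$, the distribution $\mathrm{Ext}(X, U_d)$ is
$\varepsilon$-close in statistical distance to the uniform distribution $U_m$,
where $U_d$ denotes $d$ truly random seed bits.
```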
Construction of asymptotically good, low-rate error-correcting codes through pseudorandom graphs
 IEEE Transactions on Information Theory
, 1992
"... A new technique, based on the pseudorandom properties of certain graphs, known as expanders, is used to obtain new simple explicit constructions of asymptotically good codes. In one of the constructions, the expanders are used to enhance Justesen codes by replicating, shuffling and then regrouping ..."
Abstract

Cited by 117 (24 self)
A new technique, based on the pseudorandom properties of certain graphs, known as expanders, is used to obtain new simple explicit constructions of asymptotically good codes. In one of the constructions, the expanders are used to enhance Justesen codes by replicating, shuffling and then regrouping the code coordinates. For any fixed (small) rate, and for a sufficiently large alphabet, the codes thus obtained lie above the Zyablov bound. Using these codes as outer codes in a concatenated scheme, a second asymptotically good construction is obtained which applies to small alphabets (say, GF(2)) as well. Although these concatenated codes lie below the Zyablov bound, they are still superior to previously known explicit constructions in the zero-rate neighborhood.
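The "replicating, shuffling and then regrouping" of code coordinates can be pictured with the toy transformation below; the random permutation is only a placeholder for the expander-defined shuffle that actually drives the distance amplification in the construction.

```python
# Toy picture of "replicate, shuffle, regroup" applied to the coordinates of an
# inner codeword.  The random permutation below is only a stand-in: in the
# construction the shuffle is dictated by the edges of an explicit expander,
# which is exactly what this sketch omits.
import random

def replicate_shuffle_regroup(codeword, d, seed=None):
    """Replicate each coordinate d times, permute, and regroup into blocks of d.

    The output has the same number of coordinates, over the larger alphabet Sigma^d.
    """
    rng = random.Random(seed)
    replicated = [symbol for symbol in codeword for _ in range(d)]
    rng.shuffle(replicated)            # placeholder for the expander-defined shuffle
    return [tuple(replicated[i:i + d]) for i in range(0, len(replicated), d)]

if __name__ == "__main__":
    inner_codeword = [0, 1, 1, 0, 1, 0]   # hypothetical inner (e.g. Justesen) codeword
    print(replicate_shuffle_regroup(inner_codeword, d=3, seed=1))
```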
Dispersers, Deterministic Amplification, and Weak Random Sources.
, 1989
"... We use a certain type of expanding bipartite graphs, called disperser graphs, to design procedures for picking highly correlated samples from a finite set, with the property that the probability of hitting any sufficiently large subset is high. These procedures require a relatively small number of r ..."
Abstract

Cited by 93 (11 self)
We use a certain type of expanding bipartite graphs, called disperser graphs, to design procedures for picking highly correlated samples from a finite set, with the property that the probability of hitting any sufficiently large subset is high. These procedures require a relatively small number of random bits and are robust with respect to the quality of the random bits. Using these sampling procedures to sample random inputs of polynomial-time probabilistic algorithms, we can simulate the performance of some probabilistic algorithms with fewer random bits or with low-quality random bits. We obtain the following results: 1. The error probability of an RP or BPP algorithm that operates with a constant error bound and requires n random bits can be made exponentially small (i.e., 2^(-n)) with only (3 + ε)n random bits, as opposed to standard amplification techniques that require Ω(n^2) random bits for the same task. This result is nearly optimal, since the informati...
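Schematically, the amplification runs the original algorithm on each of the correlated strings attached to one seed and combines the answers. The sketch below assumes a hypothetical `disperser_neighbors` routine and shows only the one-sided (RP) case, where a single accepting run suffices; the content of the paper is an explicit graph for which the seed is only (3 + ε)n bits while the amplified error is 2^(-n).

```python
# Schematic only: error reduction for a one-sided (RP) algorithm using correlated
# samples.  `disperser_neighbors` is a hypothetical stand-in that maps one seed
# to the list of n-bit random strings attached to it by the disperser graph.
from typing import Callable, List

def amplify_rp(algorithm: Callable[[str, str], bool],
               x: str,
               seed: str,
               disperser_neighbors: Callable[[str], List[str]]) -> bool:
    """Accept iff the algorithm accepts on at least one of the correlated random strings."""
    return any(algorithm(x, r) for r in disperser_neighbors(seed))
```

On a YES instance the amplified run errs only if every neighbor string lands in the bad set for x, and the disperser property makes the fraction of such seeds exponentially small; on a NO instance a one-sided algorithm never accepts, so no error is introduced.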
Extracting Randomness: A Survey and New Constructions
, 1999
"... this paper we do two things. First, we survey extractors and dispersers: what they are, how they can be designed, and some of their applications. The work described in the survey is due to a long list of research papers by various authors##most notably by David Zuckerman. Then, we present a new tool ..."
Abstract

Cited by 90 (5 self)
In this paper we do two things. First, we survey extractors and dispersers: what they are, how they can be designed, and some of their applications. The work described in the survey is due to a long list of research papers by various authors, most notably by David Zuckerman. Then, we present a new tool for constructing explicit extractors and give two new constructions that greatly improve upon previous results. The new tool we devise, a "merger," is a function that accepts d strings, one of which is uniformly distributed, and outputs a single string that is guaranteed to be uniformly distributed. We show how to build good explicit mergers, and how mergers can be used to build better extractors. Using this, we present two new constructions. The first construction succeeds in extracting all of the randomness from any somewhat random source. This improves upon previous extractors that extract only some of the randomness from somewhat random sources with "enough" randomness. The amount of truly random bits used by this extractor, however, is not optimal. The second extractor we build extracts only some of the randomness and works only for sources with enough randomness, but uses a near-optimal amount of truly random bits. Extractors and dispersers have many applications in "removing randomness" in various settings and in making randomized constructions explicit. We survey some of these applications and note whenever our new constructions yield better results; e.g., plugging our new extractors into a previous construction, we achieve the first explicit N-superconcentrators of linear size and polyloglog(N) depth. © 1999 Academic Press
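The "merger" can be stated roughly as follows (my paraphrase; whether a short extra truly random seed is allowed, and the exact closeness-to-uniform guarantee, are parameter details the abstract glosses over):

```latex
% Merger, paraphrased: one of the d input blocks is uniform, but we do not know which.
A function $M\colon (\{0,1\}^n)^d \times \{0,1\}^s \to \{0,1\}^n$ is an
$\varepsilon$-merger if for every joint distribution $(X_1,\dots,X_d)$ in which
at least one block $X_i$ is uniform on $\{0,1\}^n$ (the other blocks may depend
on it arbitrarily), the output $M(X_1,\dots,X_d, U_s)$ is $\varepsilon$-close to
uniform on $\{0,1\}^n$.
```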
Deterministic Extractors for Bit-Fixing Sources and Exposure-Resilient Cryptography
 In Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science
, 2003
"... Abstract. We give an efficient deterministic algorithm that extracts Ω(n2γ) almostrandom bits from sources where n 1 2 +γ of the n bits are uniformly random and the rest are fixed in advance. This improves upon previous constructions, which required that at least n/2 of the bits be random in order ..."
Abstract

Cited by 55 (3 self)
We give an efficient deterministic algorithm that extracts Ω(n^(2γ)) almost-random bits from sources where n^(1/2+γ) of the n bits are uniformly random and the rest are fixed in advance. This improves upon previous constructions, which required that at least n/2 of the bits be random in order to extract many bits. Our construction also has applications in exposure-resilient cryptography, giving explicit adaptive exposure-resilient functions and, in turn, adaptive all-or-nothing transforms. For sources where instead of bits the values are chosen from [d], for d > 2, we give an algorithm that extracts a constant fraction of the randomness. We also give bounds on extracting randomness for sources where the fixed bits can depend on the random bits.
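For orientation: an oblivious bit-fixing source fixes some of the n bits in advance and leaves the rest uniformly random. The classical warm-up observation, not the paper's construction (which extracts many bits), is that the parity of all n bits already extracts one perfectly random bit whenever at least one bit is random.

```python
# Warm-up, not the paper's extractor: XOR of all bits extracts one perfectly
# random bit from any oblivious bit-fixing source with at least one random bit,
# because flipping a uniform bit flips the parity.
from functools import reduce
from operator import xor

def parity_extractor(bits):
    """Return the XOR of all input bits."""
    return reduce(xor, bits, 0)

if __name__ == "__main__":
    # Positions 2 and 4 are the uniformly random ones; the rest are fixed in advance.
    for a in (0, 1):
        for b in (0, 1):
            sample = [1, 0, a, 1, b, 0]
            print(sample, "->", parity_extractor(sample))   # output is a XOR b: uniform
```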
On Unapproximable Versions of NP-Complete Problems
"... . We prove that all of Karp's 21 original NPcomplete problems have a version that's hard to approximate. These versions are obtained from the original problems by adding essentially the same, simple constraint. We further show that these problems are absurdly hard to approximate. In fact, no polyn ..."
Abstract

Cited by 35 (1 self)
We prove that all of Karp's 21 original NP-complete problems have a version that's hard to approximate. These versions are obtained from the original problems by adding essentially the same, simple constraint. We further show that these problems are absurdly hard to approximate. In fact, no polynomial-time algorithm can even approximate log^(k) of the magnitude of these problems to within any constant factor, where log^(k) denotes the logarithm iterated k times, unless NP is recognized by slightly superpolynomial randomized machines. We use the same technique to improve the constant ε such that MAX CLIQUE is hard to approximate to within a factor of n^ε. Finally, we show that it is even harder to approximate two counting problems: counting the number of satisfying assignments to a monotone 2SAT formula and computing the permanent of −1, 0, 1 matrices. Key words. NP-complete, unapproximable, randomized reduction, clique, counting problems, permanent, 2SAT. AMS subject clas...
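The factor log^(k) is the logarithm iterated k times, which grows extremely slowly; the short helper below (base 2 chosen here only for concreteness) makes the notation explicit.

```python
# log^(k): apply the logarithm k times.  For example, log^(2)(2**16) = log2(16) = 4.
from math import log2

def iterated_log(x, k):
    for _ in range(k):
        x = log2(x)
    return x

print(iterated_log(2 ** 16, 2))   # 4.0
```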
A Technique for Lower Bounding the Cover Time
 SIAM J. Disc. Math
, 1992
"... We give a general technique for proving lower bounds on expected covering times of random walks on graphs in terms of expected hitting times between vertices. We use this technique to prove: i) A tight bound of \Omega\Gamma jV j log 2 jV j) for the 2dimensional torus. ii) A tight bound of \Omega\ ..."
Abstract

Cited by 29 (2 self)
We give a general technique for proving lower bounds on expected covering times of random walks on graphs in terms of expected hitting times between vertices. We use this technique to prove: i) a tight bound of Ω(|V| log^2 |V|) for the 2-dimensional torus; ii) a tight bound of Ω(|V| log^2 |V| / log d_max) for trees with maximum degree d_max; iii) tight bounds of Ω(h^+ log |V|) for rapidly mixing walks on vertex-transitive graphs, where h^+ denotes the maximum expected hitting time between vertices. In addition to these new results, our technique allows us to systematically prove several known lower bounds on cover times, often in a much simpler way. Finally, we use a different technique to prove an Ω(1/(1 − λ_2)) lower bound on the cover time, where λ_2 is the second largest eigenvalue of the transition matrix. This was previously known only in the case where the walk starts in the stationary distribution [BK].
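The quantities in play, cover time and maximum hitting time, are easy to estimate empirically; the Monte Carlo sketch below estimates the cover time of the simple random walk on a small 2-dimensional torus (the graph in bound i) purely to make the definition concrete, not to verify the Ω(|V| log^2 |V|) asymptotics.

```python
# Monte Carlo estimate of the cover time of the simple random walk on the
# k x k torus.  Purely illustrative: it estimates the expected number of steps
# to visit every vertex, the quantity lower-bounded by Omega(|V| log^2 |V|).
import random

def torus_cover_time(k, trials=200, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x, y = 0, 0
        visited = {(0, 0)}
        steps = 0
        while len(visited) < k * k:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = (x + dx) % k, (y + dy) % k
            visited.add((x, y))
            steps += 1
        total += steps
    return total / trials

if __name__ == "__main__":
    print(torus_cover_time(8))
```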