Results 1–10 of 51
Simple Constructions of Almost k-wise Independent Random Variables
, 1992
"... We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1))(log log n + k/2 + log k + log 1 ɛ), where ɛ is the statistical difference between the dist ..."
Abstract

Cited by 270 (41 self)
 Add to MetaCart
We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1))(log log n + k/2 + log k + log(1/ε)), where ε is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by Naor and Naor (our size bound is better as long as ε < 1/(k log n)). An additional advantage of our constructions is their simplicity.
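One construction in this line of work is the "powering" construction: the seed is a pair (x, y) of elements of GF(2^m), and output bit i is the inner product of x^i and y over GF(2). The sketch below (constants and helper names are illustrative) brute-forces all seeds for small parameters and checks that every subset parity has bias at most n/2^m:

```python
M = 4                      # work in GF(2^m) with m = 4
IRRED = 0b10011            # x^4 + x + 1, irreducible over GF(2)
N = 8                      # number of output bits

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^M), reduced modulo IRRED."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a >> M:
            a ^= IRRED
    return res

def inner(a, b):
    """Inner product over GF(2) of the bit vectors a and b."""
    return bin(a & b).count("1") & 1

def sample_point(x, y):
    """The N output bits for seed (x, y): bit i is <x^(i+1), y>."""
    bits, p = [], 1
    for _ in range(N):
        p = gf_mul(p, x)
        bits.append(inner(p, y))
    return bits

# Measure the bias of every nonempty subset parity over all 2^(2M) seeds.
points = [sample_point(x, y) for x in range(1 << M) for y in range(1 << M)]
worst = 0.0
for s in range(1, 1 << N):
    subset = [i for i in range(N) if (s >> i) & 1]
    ones = sum(sum(b[i] for i in subset) & 1 for b in points)
    worst = max(worst, abs(2 * ones / len(points) - 1))

print(worst)
assert worst <= N / (1 << M)   # each subset parity polynomial has <= N roots
```

The bound follows because for a fixed subset S the parity is <p_S(x), y> with p_S a nonzero polynomial of degree at most N, so it is perfectly unbiased except on the at most N roots of p_S.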
Small-Bias Probability Spaces: Efficient Constructions and Applications
 SIAM J. Comput
, 1993
"... We show how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. They are called fflbiased random variables. The number of random bits needed to generate the random variables is ..."
Abstract

Cited by 258 (15 self)
 Add to MetaCart
We show how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. They are called ε-biased random variables. The number of random bits needed to generate the random variables is O(log n + log(1/ε)). Thus, if ε is polynomially small, then the size of the sample space is also polynomial. Random variables that are ε-biased can be used to construct "almost" k-wise independent random variables where ε is a function of k. These probability spaces have various applications:
1. Derandomization of algorithms: many randomized algorithms that require only k-wise independence of their random bits (where k is bounded by O(log n)) can be derandomized by using ε-biased random variables.
2. Reducing the number of random bits required by certain randomized algorithms, e.g., verification of matrix multiplication.
3. Exhaustive testing of combinatorial circui...
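A tiny illustration of the kind of limited independence such derandomizations exploit (not this paper's construction, and only for k = 2): from an m-bit seed r, define one output bit X_S = <r, S> mod 2 for every nonempty S ⊆ {1..m}. This stretches m truly random bits into 2^m − 1 pairwise-independent unbiased bits, which the sketch verifies by exhaustive enumeration:

```python
from itertools import combinations

M = 4
SUBSETS = list(range(1, 1 << M))   # nonempty subsets of seed bits, as bitmasks

def bit(seed, s):
    """X_S for a given seed: parity of the seed bits selected by mask s."""
    return bin(seed & s).count("1") & 1

# For every pair S != T, the joint distribution of (X_S, X_T) over a
# uniform seed is uniform on {0,1}^2, because S and T are distinct
# nonzero vectors over GF(2) and hence linearly independent.
for s, t in combinations(SUBSETS, 2):
    counts = [0] * 4
    for seed in range(1 << M):
        counts[2 * bit(seed, s) + bit(seed, t)] += 1
    assert counts == [4, 4, 4, 4]   # each of the 4 outcomes equally often

print(f"{len(SUBSETS)} pairwise-independent bits from {M} seed bits")
```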
How to Recycle Random Bits
, 1989
"... We show that modified versions of the linear congruential generator and the shift register generator are provably good for amplifying the correctness of a probabilistic algorithm. More precisely, if r random bits are needed for a BPP algorithm to be correct with probability at least 2=3, then O(r + ..."
Abstract

Cited by 183 (12 self)
 Add to MetaCart
We show that modified versions of the linear congruential generator and the shift register generator are provably good for amplifying the correctness of a probabilistic algorithm. More precisely, if r random bits are needed for a BPP algorithm to be correct with probability at least 2/3, then O(r + k^2) bits are needed to improve this probability to 1 − 2^(−k). We also present a different pseudorandom generator that is optimal, up to a constant factor, in this regard: it uses only O(r + k) bits to improve the probability to 1 − 2^(−k). This generator is based on random walks on expanders. Our results do not depend on any unproven assumptions. Next we show that our modified versions of the shift register and linear congruential generators can be used to sample from distributions using, in the limit, the information-theoretic lower bound on random bits.

1. Introduction. Randomness plays a vital role in almost all areas of computer science, both in theory and in...
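For context, the naive baseline these generators improve on is running the algorithm t times on fresh seeds and taking a majority vote: the error decays like exp(−t/18) by Hoeffding's inequality, but consumes t·r fresh random bits rather than O(r + k). A small sketch computing the exact majority error:

```python
from math import comb

def majority_error(t, p_err=1/3):
    """Exact probability that a majority of t independent runs is wrong,
    when each run independently errs with probability p_err (t odd)."""
    return sum(comb(t, i) * p_err**i * (1 - p_err)**(t - i)
               for i in range((t + 1) // 2, t + 1))

for t in (11, 31, 101):
    print(t, majority_error(t))

assert majority_error(11) > majority_error(31) > majority_error(101)
assert majority_error(101) < 0.003   # Hoeffding bound: exp(-101/18) here
```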
Construction of asymptotically good, low-rate error-correcting codes through pseudorandom graphs
 IEEE Transactions on Information Theory
, 1992
"... A new technique, based on the pseudorandom properties of certain graphs, known as expanders, is used to obtain new simple explicit constructions of asymptotically good codes. In one of the constructions, the expanders are used to enhance Justesen codes by replicating, shuffling and then regrouping ..."
Abstract

Cited by 117 (24 self)
 Add to MetaCart
A new technique, based on the pseudorandom properties of certain graphs known as expanders, is used to obtain new simple explicit constructions of asymptotically good codes. In one of the constructions, the expanders are used to enhance Justesen codes by replicating, shuffling and then regrouping the code coordinates. For any fixed (small) rate, and for sufficiently large alphabet, the codes thus obtained lie above the Zyablov bound. Using these codes as outer codes in a concatenated scheme, a second asymptotically good construction is obtained which applies to small alphabets (say, GF(2)) as well. Although these concatenated codes lie below the Zyablov bound, they are still superior to previously known explicit constructions in the zero-rate neighborhood.
Dispersers, Deterministic Amplification, and Weak Random Sources.
, 1989
"... We use a certain type of expanding bipartite graphs, called disperser graphs, to design procedures for picking highly correlated samples from a finite set, with the property that the probability of hitting any sufficiently large subset is high. These procedures require a relatively small number of r ..."
Abstract

Cited by 93 (11 self)
 Add to MetaCart
We use a certain type of expanding bipartite graphs, called disperser graphs, to design procedures for picking highly correlated samples from a finite set, with the property that the probability of hitting any sufficiently large subset is high. These procedures require a relatively small number of random bits and are robust with respect to the quality of the random bits. Using these sampling procedures to sample random inputs of polynomial-time probabilistic algorithms, we can simulate the performance of some probabilistic algorithms with fewer random bits or with low-quality random bits. We obtain the following results:
1. The error probability of an RP or BPP algorithm that operates with a constant error bound and requires n random bits can be made exponentially small (i.e., 2^(−n)) with only (3 + ε)n random bits, as opposed to standard amplification techniques that require Ω(n^2) random bits for the same task. This result is nearly optimal, since the informati...
A proof of Alon’s second eigenvalue conjecture
, 2003
"... A dregular graph has largest or first (adjacency matrix) eigenvalue λ1 = d. Consider for an even d ≥ 4, a random dregular graph model formed from d/2 uniform, independent permutations on {1,...,n}. We shall show that for any ɛ>0 we have all eigenvalues aside from λ1 = d are bounded by 2 √ d − 1 +ɛ ..."
Abstract

Cited by 92 (1 self)
 Add to MetaCart
A d-regular graph has largest or first (adjacency matrix) eigenvalue λ1 = d. Consider, for an even d ≥ 4, a random d-regular graph model formed from d/2 uniform, independent permutations on {1, ..., n}. We shall show that for any ε > 0, all eigenvalues aside from λ1 = d are bounded by 2√(d − 1) + ε with probability 1 − O(n^(−τ)), where τ = ⌈(√(d − 1) + 1)/2⌉ − 1. We also show that this probability is at most 1 − c/n^(τ′), for a constant c and a τ′ that is either τ or τ + 1 ("more often" τ than τ + 1). We prove related theorems for other models of random graphs, including models with d odd. These theorems resolve the conjecture of Alon, which says that for any ε > 0 and d, the second largest eigenvalue of "most" random d-regular graphs is at most 2√(d − 1) + ε (Alon did not specify precisely what "most" should mean or what model of random graph one should take).
A sample of samplers: a computational perspective on sampling (survey)
 In FOCS
, 1997
"... Abstract. We consider the problem of estimating the average of a huge set of values. That is, given oracle access to an arbitrary function f: {0, 1} n P −n → [0, 1], we wish to estimate 2 x∈{0,1} n f(x) upto an additive error of ǫ. We are allowed to employ a randomized algorithm that may err with pr ..."
Abstract

Cited by 71 (7 self)
 Add to MetaCart
We consider the problem of estimating the average of a huge set of values. That is, given oracle access to an arbitrary function f: {0,1}^n → [0,1], we wish to estimate 2^(−n) Σ_{x∈{0,1}^n} f(x) up to an additive error of ε. We are allowed to employ a randomized algorithm that may err with probability at most δ. We survey known algorithms for this problem and focus on the ideas underlying their construction. In particular, we present an algorithm that makes O(ε^(−2) · log(1/δ)) queries and uses n + O(log(1/ε)) + O(log(1/δ)) coin tosses, both complexities being very close to the corresponding lower bounds.
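A hedged sketch of the flavor of sampler the survey discusses is the classical median-of-means estimator: averaging O(1/ε^2) samples gives error ε with constant probability (Chebyshev), and taking the median of O(log(1/δ)) independent averages drives the failure probability down to δ. This naive version spends fresh coins on every query; the survey's point is that the same guarantee is achievable with far fewer coin tosses.

```python
import math
import random
from statistics import median

def median_of_means(f, n_bits, eps, delta, rng):
    """Estimate the average of f over {0,1}^n_bits to within eps,
    failing with probability at most delta."""
    m = math.ceil(1 / eps**2)               # samples per group (Chebyshev)
    g = math.ceil(8 * math.log(1 / delta))  # number of groups (Chernoff)
    means = []
    for _ in range(g):
        s = sum(f(rng.getrandbits(n_bits)) for _ in range(m))
        means.append(s / m)
    return median(means)

# Toy oracle: f(x) = parity of x, whose true average is 0.5.
rng = random.Random(0)
est = median_of_means(lambda x: bin(x).count("1") & 1, 20, 0.05, 0.01, rng)
print(est)
assert abs(est - 0.5) < 0.1
```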
On the second eigenvalue and random walks in random d-regular graphs
 Combinatorica
, 1991
"... The main goal of this paper is to estimate the magnitude of the second largest eigenvalue in absolute value, λ2, of (the adjacency matrix of) a random dregular graph, G. In order to do so, we study the probability that a random walk on a random graph returns to its originating vertex at the kth st ..."
Abstract

Cited by 62 (9 self)
 Add to MetaCart
The main goal of this paper is to estimate the magnitude of the second largest eigenvalue in absolute value, λ2, of (the adjacency matrix of) a random d-regular graph, G. In order to do so, we study the probability that a random walk on a random graph returns to its originating vertex at the k-th step, for various values of k. Our main theorem about eigenvalues is that E{λ2(G)} ...
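A numerical sketch of the permutation model behind such results (parameters and helper names are illustrative): build a random 2k-regular multigraph on n vertices from k independent uniform permutations, each contributing an out-edge and an in-edge per vertex, then estimate λ2 by power iteration on the subspace orthogonal to the all-ones vector.

```python
import random

def random_regular_adjacency(n, k, rng):
    """Adjacency matrix of the union of k random permutations and their
    inverses; every vertex gets degree d = 2k (loops and multi-edges kept)."""
    A = [[0] * n for _ in range(n)]
    for _ in range(k):
        perm = list(range(n))
        rng.shuffle(perm)
        for v in range(n):
            A[v][perm[v]] += 1
            A[perm[v]][v] += 1
    return A

def lambda2_estimate(A, iters=500):
    """Power iteration with the all-ones direction projected out, returning
    a Rayleigh-quotient estimate of the second eigenvalue."""
    n = len(A)
    x = [(-1) ** i for i in range(n)]          # start away from the 1-vector
    for _ in range(iters):
        mean = sum(x) / n
        x = [xi - mean for xi in x]            # project out the 1-eigenvector
        x = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(abs(xi) for xi in x) or 1.0
        x = [xi / norm for xi in x]
    mean = sum(x) / n
    x = [xi - mean for xi in x]
    den = sum(xi * xi for xi in x)
    if den == 0:
        return 0.0
    num = sum(x[i] * sum(A[i][j] * x[j] for j in range(n)) for i in range(n))
    return num / den

rng = random.Random(1)
d, n = 4, 60
A = random_regular_adjacency(n, d // 2, rng)
assert all(sum(row) == d for row in A)         # d-regular by construction
l2 = lambda2_estimate(A)
print(l2)                                      # typically near 2*sqrt(d-1)
assert abs(l2) <= d + 1e-6                     # spectrum lies in [-d, d]
```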
Geometric Applications of a Randomized Optimization Technique
 Discrete Comput. Geom
, 1999
"... We propose a simple, general, randomized technique to reduce certain geometric optimization problems to their corresponding decision problems. These reductions increase the expected time complexity by only a constant factor and eliminate extra logarithmic factors in previous, often more complicated, ..."
Abstract

Cited by 53 (6 self)
 Add to MetaCart
We propose a simple, general, randomized technique to reduce certain geometric optimization problems to their corresponding decision problems. These reductions increase the expected time complexity by only a constant factor and eliminate extra logarithmic factors in previous, often more complicated, deterministic approaches (such as parametric searching). Faster algorithms are thus obtained for a variety of problems in computational geometry: finding minimal k-point subsets, matching point sets under translation, computing rectilinear p-centers and discrete 1-centers, and solving linear programs with k violations.

1. Introduction. Consider the classic randomized algorithm for finding the minimum of r numbers min{A[1], ..., A[r]}:

Algorithm randmin
1. randomly pick a permutation ⟨i_1, ..., i_r⟩ of ⟨1, ..., r⟩
2. t ← ∞
3. for k = 1, ..., r do
4.   if A[i_k] < t then
5.     t ← A[i_k]
6. return t

By a well-known fact [27, 44], the expected number of times that step 5 is execut...
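The algorithm above can be run directly, with a counter on step 5: scanning in random order, the running minimum is updated only when a new prefix-minimum appears, which happens H_r = 1 + 1/2 + ... + 1/r times in expectation. A minimal sketch:

```python
import math
import random

def randmin(A, rng):
    """Random-order minimum scan; also counts how often step 5 fires."""
    order = list(range(len(A)))
    rng.shuffle(order)                 # step 1: random permutation
    t, updates = math.inf, 0
    for i in order:                    # steps 3-5
        if A[i] < t:
            t = A[i]
            updates += 1
    return t, updates

rng = random.Random(0)
A = [rng.random() for _ in range(1000)]
t, _ = randmin(A, rng)
assert t == min(A)
avg = sum(randmin(A, rng)[1] for _ in range(200)) / 200
print(avg)   # close to the harmonic number H_1000, about 7.49
```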
A linear time erasure-resilient code with nearly optimal recovery
 IEEE Transactions on Information Theory
, 1996
"... We develop an efficient scheme that produces an encoding of a given message such that the message can be decoded from any portion of the encoding that is approximately equal to the length of the message. More precisely, an (n, c, ℓ, r)erasureresilient code consists of an encoding algorithm and a d ..."
Abstract

Cited by 44 (6 self)
 Add to MetaCart
We develop an efficient scheme that produces an encoding of a given message such that the message can be decoded from any portion of the encoding that is approximately equal to the length of the message. More precisely, an (n, c, ℓ, r)-erasure-resilient code consists of an encoding algorithm and a decoding algorithm with the following properties. The encoding algorithm produces a set of ℓ-bit packets of total length cn from an n-bit message. The decoding algorithm is able to recover the message from any set of packets whose total length is r, i.e., from any set of r/ℓ packets. We describe erasure-resilient codes where both the encoding and decoding algorithms run in linear time and where r is only slightly larger than n.
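This is not the paper's linear-time scheme, but a classical quadratic-time illustration of the same interface: a Reed-Solomon-style erasure-resilient code over a prime field, where the message symbols are the coefficients of a polynomial, each packet is one evaluation, and any k surviving packets suffice to recover a k-symbol message by Lagrange interpolation.

```python
P = 2**31 - 1   # prime modulus; symbols live in GF(P)

def poly_mul(a, b):
    """Multiply two polynomials (coefficients low-to-high) mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def encode(message, n_packets):
    """Packet i is (i, f(i)), where f has the message as coefficients."""
    def f(x):
        acc = 0
        for coef in reversed(message):   # Horner evaluation mod P
            acc = (acc * x + coef) % P
        return acc
    return [(i, f(i)) for i in range(1, n_packets + 1)]

def decode(packets, k):
    """Recover the k coefficients from any k packets by Lagrange
    interpolation over GF(P)."""
    pts = packets[:k]
    coeffs = [0] * k
    for j, (xj, yj) in enumerate(pts):
        num, den = [1], 1                # basis poly: prod (x - xm)/(xj - xm)
        for m, (xm, _) in enumerate(pts):
            if m == j:
                continue
            num = poly_mul(num, [(-xm) % P, 1])
            den = den * (xj - xm) % P
        scale = yj * pow(den, P - 2, P) % P   # modular inverse via Fermat
        for d, c in enumerate(num):
            coeffs[d] = (coeffs[d] + c * scale) % P
    return coeffs

message = [3, 1, 4, 1, 5, 9, 2, 6]              # 8 message symbols
packets = encode(message, 12)                   # stretch factor c = 1.5
survivors = [packets[i] for i in (0, 2, 3, 5, 7, 8, 10, 11)]  # 4 erasures
assert decode(survivors, len(message)) == message
print("recovered from any", len(message), "of", len(packets), "packets")
```

In the abstract's notation, this code tolerates erasures down to exactly r = n surviving bits, at quadratic decoding cost; the paper's contribution is achieving linear time with r only slightly above n.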