Results 1–8 of 8
Simulating Independence: New Constructions of Condensers, Ramsey Graphs, Dispersers, and Extractors
In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, 2005
Cited by 47 (10 self)

Abstract:
We present new explicit constructions of deterministic randomness extractors, dispersers and related objects. More precisely, a distribution X over binary strings of length n is called a δ-source if it assigns probability at most 2^{-δn} to any string of length n, and for any δ > 0 we construct the following poly(n)-time computable functions:

2-source disperser: D : ({0,1}^n)^2 → {0,1} such that for any two independent δ-sources X1, X2, the support of D(X1, X2) is {0,1}.

Bipartite Ramsey graph: Let N = 2^n. A corollary is that the function D is a 2-coloring of the edges of K_{N,N} (the complete bipartite graph over two sets of N vertices) such that no induced N^δ by N^δ subgraph is monochromatic.

3-source extractor: E : ({0,1}^n)^3 → {0,1} such that for any three independent δ-sources X1, X2, X3, the bit E(X1, X2, X3) is (o(1)-close to being) unbiased.

No previous explicit construction was known for either of these for any δ < 1/2, and these results constitute major progress on long-standing open problems. A component in these results is a new construction of condensers that may be of independent interest.
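For intuition about the objects defined above (this is not the paper's construction), the classical inner-product function of Chor and Goldreich is a 2-source extractor when both sources have min-entropy rate above 1/2, the regime this paper improves on. A minimal sketch, with n-bit strings packed into Python ints:

```python
def inner_product_bit(x: int, y: int) -> int:
    """Inner product mod 2 of two bit strings packed into ints.
    Chor-Goldreich: a 2-source extractor for min-entropy rate > 1/2."""
    return bin(x & y).count("1") % 2

def disperser_support(D, source1, source2):
    """Support of D(X1, X2) for flat sources on the given sets.
    A 2-source disperser must make this equal {0, 1}."""
    return {D(x, y) for x in source1 for y in source2}
```

For example, `disperser_support(inner_product_bit, range(1, 16), range(1, 16))` checks the disperser property against the flat source on all nonzero 4-bit strings.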
An introduction to randomness extractors
Cited by 14 (2 self)
Abstract: We give an introduction to the area of “randomness extraction” and survey the main concepts of this area: deterministic extractors, seeded extractors and multiple-source extractors. For each one we briefly discuss background, definitions, explicit constructions and applications.
Dispersers for affine sources with subpolynomial entropy
Cited by 9 (2 self)

Abstract:
We construct an explicit disperser for affine sources over F_2^n with entropy k = 2^{log^{0.9} n} = n^{o(1)}. This is a polynomial-time computable function D : F_2^n → {0,1} such that for every affine subspace V of F_2^n of dimension at least k, D(V) = {0,1}. This improves the best previous construction, by Ben-Sasson and Kopparty (STOC 2009), which achieved k = Ω(n^{4/5}). Our technique follows a high-level approach developed by Barak, Kindler, Shaltiel, Sudakov and Wigderson.
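The affine-disperser property above can be brute-force checked on toy instances. A sketch (illustrative only; vectors of F_2^n are packed into ints, so XOR is vector addition):

```python
from itertools import product

def affine_points(basis, shift):
    """All points of the affine space shift + span(basis) over F_2^n,
    with vectors packed into ints (XOR = addition over F_2)."""
    pts = set()
    for coeffs in product((0, 1), repeat=len(basis)):
        v = shift
        for c, b in zip(coeffs, basis):
            if c:
                v ^= b
        pts.add(v)
    return pts

def hits_both_values(D, basis, shift):
    """Disperser check: D must take both values 0 and 1 on the space."""
    return {D(v) for v in affine_points(basis, shift)} == {0, 1}
```

For instance, the parity function disperses an affine space exactly when some basis vector has odd weight, which the checker confirms on small examples.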
Derandomizing Algorithms on Product Distributions
Cited by 1 (0 self)

Abstract:
Getting the deterministic complexity closer to the best known randomized complexity is an important goal in algorithms and communication protocols. In this work, we investigate the case where instead of one input, the algorithm/protocol is given multiple inputs sampled independently from an arbitrary unknown distribution. We show that in this case a strong and generic derandomization result can be obtained by a simple argument. Our method relies on extracting randomness from “same-source” product distributions, which are distributions generated from multiple independent samples from the same source. The extraction process succeeds even for arbitrarily low min-entropy, and is based on the order of the values and not on the values themselves (this may be seen as a generalization of the classical method of von Neumann [23], extended by Elias [7], for extracting randomness from a biased coin). The tools developed in the paper are generic, and can be used elsewhere. We present applications to streaming algorithms, and to implicit probe search [8]. We also refine our method to handle product distributions where the i-th sample comes from one of several arbitrary unknown distributions. This requires creating a new set of tools, which may also be of independent interest.
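The classical von Neumann trick the abstract refers to, and the order-based idea it generalizes, can be sketched as follows (a toy illustration, not the paper's full extractor):

```python
def von_neumann(bits):
    """Von Neumann debiasing of a biased coin: read bits in pairs,
    output 0 for the pair (0,1), 1 for (1,0), discard (0,0) and (1,1)."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

def order_bit(a, b):
    """One bit from two i.i.d. samples using only their order:
    by symmetry P(a < b) = P(a > b), so conditioned on a != b the
    output is unbiased, whatever the (unknown) source distribution."""
    return None if a == b else int(a > b)
```

`order_bit` illustrates why extraction can depend on the order of the values rather than the values themselves: the symmetry argument needs no assumption on the source beyond independence of the samples.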
CS369E: Communication Complexity (for Algorithm Designers) Lecture #7: Lower Bounds in Algorithmic Game Theory, 2015
Abstract:
This lecture explains some applications of communication complexity to proving lower bounds in algorithmic game theory (AGT), at the border of computer science and economics. In AGT, the natural description size of an object is often exponential in a parameter of interest, and the goal is to perform non-trivial computations in time polynomial in the parameter (i.e., logarithmic in the description size). As we know, communication complexity is a great tool for understanding when non-trivial computations require looking at most of the input.

2. The Welfare Maximization Problem

The focus of this lecture is the following optimization problem, which has been studied in AGT more than any other.
1. There are k players.
2. There is a set M of m items.
3. Each player i has a valuation v_i : 2^M → R_+. The number v_i(T) indicates i's value, or willingness to pay, for the items T ⊆ M. The valuation is the private input of player i: i knows v_i but none of the other v_j's. We assume that v_i(∅) = 0 and that the valuations are monotone, meaning v_i(S) ≤ v_i(T) whenever S ⊆ T. To avoid bit complexity issues, we'll also assume that all of the v_i(T)'s are integers with description length polynomial in k and m.
© 2015, Tim Roughgarden.
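The setup above admits an obvious exhaustive baseline, exponential in m, which is exactly the cost the lecture's lower bounds address. A minimal sketch (the function name and input encoding are illustrative, not from the lecture):

```python
from itertools import product

def max_welfare(valuations, m):
    """Exhaustive welfare maximization: try all k^m ways to assign each
    of the m items to one of the k players, returning the best total
    welfare sum_i v_i(S_i). valuations[i] maps a frozenset of item
    indices to player i's non-negative integer value for that bundle."""
    k = len(valuations)
    best = 0
    for assignment in product(range(k), repeat=m):
        bundles = [frozenset(j for j in range(m) if assignment[j] == i)
                   for i in range(k)]
        best = max(best, sum(v(S) for v, S in zip(valuations, bundles)))
    return best
```

Even reading a single valuation takes 2^m queries in general, so nothing like this brute force scales; the communication-complexity view asks how much the players must reveal about their private v_i's to do better.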
Derandomizing Algorithms on Product Distributions and Other Applications of OrderBased Extraction
Abstract: Getting the deterministic complexity closer to the best known randomized complexity is an important goal in algorithms and communication protocols. In this work, we investigate the case where instead of one input, the algorithm/protocol is given multiple inputs sampled independently from an arbitrary unknown distribution. We show that in this case a strong and generic derandomization result can be obtained by a simple argument. Our method relies on extracting randomness from “same-source” product distributions, which are distributions generated from multiple independent samples from the same source. The extraction process succeeds even for arbitrarily low min-entropy, and is based on the order of the values and not on the values themselves (this may be seen as a generalization of the classical method of von Neumann [26], extended by Elias [8], for extracting randomness from a biased coin). The tools developed in the paper are generic, and can be used in several other problems. We present applications to streaming algorithms, and to implicit probe search [9]. We also refine our method to handle product distributions where the i-th sample comes from one of several arbitrary unknown distributions. This requires creating a new set of tools, which may also be of independent interest.
Invertible Zero-Error Dispersers and Defective ..., 2012
Abstract:
Kuznetsov and Tsybakov [11] considered the problem of storing information in a memory where some of the cells are ‘stuck’ at certain values. More precisely, for 0 < r, p < 1 we want to store a string z ∈ {0,1}^{rn} in an n-bit memory x = (x_1, ..., x_n) in which a subset S ⊆ [n] of size pn is stuck at certain values u_1, ..., u_{pn} and cannot be modified. The encoding procedure receives S, u_1, ..., u_{pn} and z, and can modify the cells outside of S. The decoding procedure should be able to recover z given x (without having to know S or u_1, ..., u_{pn}). This problem is related to, and harder than, the Write-Once Memory (WOM) problem (in which once we raise a cell x_i from zero to one, it is stuck at this value). We give explicit schemes with rate r ≥ 1 − p − o(1) (note that r ≤ 1 − p is a trivial upper bound). This is the first explicit scheme with asymptotically optimal rate. We are able to guarantee the same rate even if, following the encoding, the memory x is corrupted in o(√n) adversarially chosen positions. This more general setup was first considered by Tsybakov [26] (see also [10, 8]), and our scheme improves upon previous results. Our approach utilizes a recent connection observed by Shpilka [23] between the WOM problem ...
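The key feature of such schemes, a decoder that needs no knowledge of the stuck set S, can be illustrated by the standard linear (coset-coding) idea at toy scale: the decoder always computes a fixed linear map of the memory, and the encoder solves for the free cells. The matrix H and the brute-force encoder below are illustrative, not the paper's construction:

```python
from itertools import product

# Fixed public matrix over GF(2): the decoder always outputs z = H x,
# regardless of which cells were stuck.
H = [[1, 0, 1],
     [0, 1, 1]]

def decode(x):
    """Recover the stored message as H x over GF(2); no knowledge of
    the stuck set S is needed."""
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)

def encode(z, stuck):
    """Find memory contents x with decode(x) == z that agree with the
    stuck cells. stuck is {cell_index: forced_bit}. Brute force over
    the free cells (fine only at toy sizes)."""
    n = len(H[0])
    free = [i for i in range(n) if i not in stuck]
    for bits in product((0, 1), repeat=len(free)):
        x = [0] * n
        for i, u in stuck.items():
            x[i] = u
        for i, b in zip(free, bits):
            x[i] = b
        if decode(x) == tuple(z):
            return x
    return None  # too many stuck cells for this H
```

Here any single stuck cell can be tolerated because every pair of columns of H is linearly independent; real constructions replace the brute-force search with linear algebra and much better rate.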