Results 1 - 10 of 12
Small-Bias Probability Spaces: Efficient Constructions and Applications
SIAM J. Comput., 1993
"... We show how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. They are called fflbiased random variables. The number of random bits needed to generate the random variables is ..."
Abstract

Cited by 259 (14 self)
We show how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. These are called ε-biased random variables. The number of random bits needed to generate the random variables is O(log n + log(1/ε)). Thus, if ε is polynomially small, then the size of the sample space is also polynomial. Random variables that are ε-biased can be used to construct "almost" k-wise independent random variables where ε is a function of k. These probability spaces have various applications: 1. Derandomization of algorithms: many randomized algorithms that require only k-wise independence of their random bits (where k is bounded by O(log n)) can be derandomized by using ε-biased random variables. 2. Reducing the number of random bits required by certain randomized algorithms, e.g., verification of matrix multiplication. 3. Exhaustive testing of combinatorial circui...
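The construction summarized in this abstract can be illustrated concretely. Below is a small Python sketch of the powering construction of Alon, Goldreich, Håstad, and Peralta, one standard way to build ε-biased spaces (not necessarily this paper's exact construction): the seed is a pair (x, y) of GF(2^8) elements, and the i-th output bit is the GF(2) inner product of x^i and y. The field size m = 8 and the subset tested below are illustrative choices.

```python
# Sketch of an AGHP-style eps-biased generator over GF(2^8).
# Seed: (x, y) in GF(2^8)^2, i.e. 2m = 16 random bits.
# Bit i is the GF(2) inner product <x^(i+1), y>; the bias of any
# fixed nonempty parity of n bits is at most n / 2^(m+1).

POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Multiply two elements of GF(2^8) (carry-less, reduced mod POLY)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= POLY
    return r

def eps_biased_bits(x, y, n):
    """n almost-unbiased bits from the 16-bit seed (x, y)."""
    bits, p = [], 1
    for _ in range(n):
        p = gf_mul(p, x)                        # p = x^(i+1)
        bits.append(bin(p & y).count("1") & 1)  # inner product over GF(2)
    return bits

# Exhaustively measure the bias of one parity, e.g. bits 0 xor 2 xor 5.
n, subset = 8, [0, 2, 5]
ones = sum(
    sum(eps_biased_bits(x, y, n)[i] for i in subset) & 1
    for x in range(256) for y in range(256)
)
bias = abs(ones / 65536 - 0.5)
assert bias <= n / 512  # theoretical bound n / 2^(m+1) with m = 8
```

With m = 8 the seed is only 16 bits, yet every nonempty parity of the 8 output bits is within n/512 of balanced, matching the O(log n + log(1/ε)) seed length in the abstract.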
Improved Non-Approximability Results
1994
"... We indicate strong nonapproximability factors for central problems: N^{1/4} for Max Clique; N^{1/10} for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in... ..."
Abstract

Cited by 117 (15 self)
We indicate strong non-approximability factors for central problems: N^{1/4} for Max Clique; N^{1/10} for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in...
Constructing Small Sample Spaces Satisfying Given Constraints
SIAM Journal on Discrete Mathematics, 1993
"... The subject of this paper is finding small sample spaces for joint distributions of n discrete random variables. Such distributions are often only required to obey a certain limited set of constraints of the form Pr(E)=. We show that the problem of deciding whether there exists any distribution sati ..."
Abstract

Cited by 31 (3 self)
The subject of this paper is finding small sample spaces for joint distributions of n discrete random variables. Such distributions are often only required to obey a certain limited set of constraints of the form Pr(E) = p. We show that the problem of deciding whether there exists any distribution satisfying a given set of constraints is NP-hard. However, if the constraints are consistent, then there exists a distribution satisfying them which is supported by a "small" sample space (one whose cardinality is equal to the number of constraints). For the important case of independence constraints, where the constraints have a certain form and are consistent with a joint distribution of n independent random variables, a small sample space can be constructed in polynomial time. This last result is also useful for derandomizing algorithms. We demonstrate this technique by an application to the problem of finding large independent sets in sparse hypergraphs.
Algorithmic Derandomization via Complexity Theory
In Proceedings of the 34th Annual ACM Symposium on Theory of Computing (STOC), 2002
"... We point out how the methods of Nisan [Nis90, Nis92], originally developed for derandomizing spacebounded computations, may be applied to obtain polynomialtime and NC derandomizations of several probabilistic algorithms. Our list includes the randomized rounding steps of linear and semidefinit ..."
Abstract

Cited by 25 (1 self)
We point out how the methods of Nisan [Nis90, Nis92], originally developed for derandomizing space-bounded computations, may be applied to obtain polynomial-time and NC derandomizations of several probabilistic algorithms. Our list includes the randomized rounding steps of linear and semidefinite programming relaxations of optimization problems, parallel derandomization of discrepancy-type problems, and the Johnson-Lindenstrauss lemma, to name a few.
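Derandomizations of the kind listed here can be made concrete with a simpler, classical tool than Nisan's generator: the method of conditional expectations. The sketch below is illustrative only (it is not this paper's method); it derandomizes the trivial randomized algorithm for Max-3SAT, where a uniformly random assignment satisfies each clause of three distinct variables with probability 7/8, by fixing one variable at a time so the conditional expectation never drops.

```python
# Derandomization by the method of conditional expectations (a standard
# technique, shown here for illustration; the paper itself applies
# Nisan's generator). For clauses of 3 distinct variables, the greedy
# deterministic assignment satisfies at least 7/8 of all clauses.

def clause_prob(clause, assignment):
    """P[clause satisfied] when unset variables are completed uniformly.
    A clause is a list of (var, is_positive) literals."""
    unset = 0
    for var, pos in clause:
        val = assignment.get(var)
        if val is None:
            unset += 1
        elif val == pos:
            return 1.0          # literal already satisfied
    return 1.0 - 0.5 ** unset   # 0.0 if every literal is falsified

def expected_sat(clauses, assignment):
    return sum(clause_prob(c, assignment) for c in clauses)

def derandomize(clauses, num_vars):
    assignment = {}
    for v in range(num_vars):
        assignment[v] = True
        e_true = expected_sat(clauses, assignment)
        assignment[v] = False
        e_false = expected_sat(clauses, assignment)
        assignment[v] = e_true >= e_false   # keep the better branch
    return assignment

# Example instance: 4 clauses over 4 variables, 3 distinct variables each.
clauses = [
    [(0, True), (1, False), (2, True)],
    [(0, False), (1, True), (3, False)],
    [(1, True), (2, False), (3, True)],
    [(0, True), (2, True), (3, False)],
]
a = derandomize(clauses, 4)
satisfied = sum(any(a[v] == pos for v, pos in c) for c in clauses)
assert satisfied >= (7 / 8) * len(clauses)
```

Because the kept branch always has conditional expectation at least as large as the current one, the final (integral) count of satisfied clauses is at least the initial expectation of 7m/8.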
Discrepancy Sets and Pseudorandom Generators for Combinatorial Rectangles
1996
"... A common subproblem of DNF approximate counting and derandomizing RL is the discrepancy problem for combinatorial rectangles. We explicitly construct a poly(n)size sample space that approximates the volume of any combinatorial rectangle in [n] n to within o(1) error (improving on the construction ..."
Abstract

Cited by 19 (4 self)
A common subproblem of DNF approximate counting and derandomizing RL is the discrepancy problem for combinatorial rectangles. We explicitly construct a poly(n)-size sample space that approximates the volume of any combinatorial rectangle in [n]^n to within o(1) error (improving on the constructions of [EGLNV92]). The construction extends the techniques of [LLSZ95] for the analogous hitting set problem, most notably via discrepancy-preserving reductions.

1 Introduction

In a general discrepancy problem, we are given a family of sets and want to construct a small sample space that approximates the volume of an arbitrary set in the family. This problem is closely related to other important issues in combinatorial constructions, such as the problem of constructing small sample spaces that approximate the independent distributions on many multivalued random variables [KW84, Lub85, ABI86, CG89, NN90, AGHP90, EGLNV92, Sch92, KM93, KK94], and the problem of constructing pseudorandom generat...
(De)randomized Construction of Small Sample Spaces in NC
1994
"... Koller and Megiddo introduced the paradigm of constructing compact distributions that satisfy a given set of constraints, and showed how it can be used to efficiently derandomize certain types of algorithm. In this paper, we significantly extend their resdts in two ways. First, we show how their app ..."
Abstract

Cited by 17 (0 self)
Koller and Megiddo introduced the paradigm of constructing compact distributions that satisfy a given set of constraints, and showed how it can be used to efficiently derandomize certain types of algorithms. In this paper, we significantly extend their results in two ways. First, we show how their approach can be applied to deal with more general expectation constraints. More importantly, we provide the first parallel (NC) algorithm for constructing a compact distribution that satisfies the constraints up to a small relative error. This algorithm deals with constraints over any event that can be verified by finite automata, including all independence constraints as well as constraints over events relating to the parity or sum of a certain set of variables. Our construction relies on a new and independently interesting parallel algorithm for converting a solution to a linear system into an almost basic approximate solution to the same system. We use these techniques in the first NC derandomization of an algorithm for constructing large independent sets in d-uniform hypergraphs for arbitrary d. We also show how the linear programming perspective suggests new proof techniques which might be useful in general probabilistic analysis.
On Construction of k-wise Independent Random Variables
1994
"... A 01 probability space is a probability space(\Omega ; 2\Omega ; P ), where the sample space\Omega ` f0; 1g n for some n. A probability space is kwise independent if, when Y i is defined to be the ith coordinate of the random nvector, then any subset of k of the Y i 's is (mutually) indepen ..."
Abstract

Cited by 17 (1 self)
A 0-1 probability space is a probability space (Ω, 2^Ω, P), where the sample space Ω ⊆ {0, 1}^n for some n. A probability space is k-wise independent if, when Y_i is defined to be the i-th coordinate of the random n-vector, any subset of k of the Y_i's is (mutually) independent, and it is said to be a probability space for p_1, p_2, ..., p_n if P[Y_i = 1] = p_i. We study constructions of k-wise independent 0-1 probability spaces in which the p_i's are arbitrary. It was known that for any p_1, p_2, ..., p_n, a k-wise independent probability space of size m(n, k) = (n choose k) + (n choose k−1) + (n choose k−2) + ··· + (n choose 0) always exists. We prove that for some p_1, p_2, ..., p_n ∈ [0, 1], m(n, k) is a lower bound on the size of any k-wise independent 0-1 probability space. For each fixed k, we prove that every k-wise independent 0-1 probability space for all p_i = k/n has size Ω(n ...
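A standard construction behind spaces like these takes Y_i to be one bit of f(α_i) for a random polynomial f of degree less than k over a finite field: any k evaluations at distinct points are jointly uniform, so the bits are k-wise independent with all p_i = 1/2. A Python sketch over GF(2^4) with k = 2 (the field and evaluation points are illustrative choices):

```python
# k-wise independent bits from a random degree-(k-1) polynomial over
# GF(2^4): Y_i = low bit of f(alpha_i) at distinct points alpha_i.
# For distinct points, any k evaluations of f are jointly uniform,
# so the Y_i are k-wise independent with P[Y_i = 1] = 1/2.

POLY = 0x13  # x^4 + x + 1, irreducible over GF(2)

def gf16_mul(a, b):
    """Multiply two elements of GF(2^4)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= POLY
    return r

def kwise_bits(coeffs, points):
    """Evaluate f(z) = sum coeffs[j] * z^j at each point; keep low bits."""
    out = []
    for z in points:
        acc, power = 0, 1
        for c in coeffs:
            acc ^= gf16_mul(c, power)
            power = gf16_mul(power, z)
        out.append(acc & 1)
    return out

# Exact check of pairwise (k = 2) independence: enumerate all 16^2 seeds
# and verify the joint distribution of (Y_1, Y_3) is uniform on {0,1}^2.
points = [1, 2, 3, 4, 5]
counts = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
for c0 in range(16):
    for c1 in range(16):
        bits = kwise_bits([c0, c1], points)
        counts[(bits[1], bits[3])] += 1
assert all(v == 64 for v in counts.values())  # 256 seeds, 64 per outcome
```

The support here has size 2^{km} rather than the optimal m(n, k) bound discussed in the abstract, but it shows the link between k-wise independence and polynomial (Reed-Solomon-style) constructions.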
Randomness in Private Computations
Proc. of 15th PODC, 1996
"... We consider the amount of randomness used in private computations. Specifically, we show how to compute the exclusiveor (xor) of n boolean inputs tprivately, using only O(t 2 log(n=t)) random bits (the best known upper bound is O(tn)). We accompany this result by a lower bound on the number of ..."
Abstract

Cited by 10 (3 self)
We consider the amount of randomness used in private computations. Specifically, we show how to compute the exclusive-or (xor) of n boolean inputs t-privately, using only O(t^2 log(n/t)) random bits (the best known upper bound is O(tn)). We accompany this result by a lower bound on the number of random bits required to carry out this task; we show that any protocol solving this problem requires at least t random bits (again, this significantly improves over the known lower bounds). For the upper bound, we show how, given m subsets of {1, ..., n}, to construct in (deterministic) polynomial time a probability distribution of n random variables such that (1) the parity of the random variables in each of these m subsets is 0 or 1 with equal probability; and (2) the support of the distribution is of size at most 2m. This construction generalizes previously considered types of sample spaces (such as k-wise independent spaces and Schulman's spaces [S92]). We believe that this construction i...
On k-wise independent distributions and Boolean functions
"... We pursue a systematic study of the following problem. Let f: {0, 1} n → {0, 1} be a (usually monotone) boolean function whose behaviour is well understood when the input bits are identically independently distributed. What can be said about the behaviour of the function when the input bits are not ..."
Abstract

Cited by 6 (0 self)
We pursue a systematic study of the following problem. Let f: {0, 1}^n → {0, 1} be a (usually monotone) boolean function whose behaviour is well understood when the input bits are identically independently distributed. What can be said about the behaviour of the function when the input bits are not completely independent, but only k-wise independent, i.e., every subset of k bits is independent? More precisely, how high should k be so that any k-wise independent distribution "fools" the function, i.e., causes it to behave nearly the same as when the bits are completely independent? In this paper, we are mainly interested in asymptotic results about monotone functions which exhibit sharp thresholds, i.e., there is a critical probability, p_c, such that P(f = 1) under the completely independent distribution with marginal p makes a sharp transition, from being close to 0 to being close to 1, in the vicinity of p_c. For such (sequences of) functions we define two notions of "fooling": K_1 is the independence needed in order to force the existence of the sharp threshold (which must then be at p_c). K_2 is the independence needed to "fool" the function at p_c. In order to answer these questions, we explore the extremal properties of k-wise independent distributions and provide ways of constructing such distributions. These constructions are connected to linear error-correcting codes. We also utilize duality theory and show that for the function f to behave (almost) the same under all k-wise independent inputs is equivalent to the function f being well approximated by a real polynomial in a certain fashion. This type of approximation is stronger than approximation in L_1. We analyze several well known boolean functions (including AND, Majority, Tribes, and Percolation, among others), some of which turn out to have surprising properties with respect to these questions.
In some of our results we use tools from the theory of the classical moment problem, seemingly for the first time in this subject, to shed light on these questions.
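A minimal concrete instance of the question studied here: the uniform distribution on the even-parity strings in {0,1}^3 is 2-wise independent, yet it gives AND probability 0 rather than the fully independent value 1/8, so pairwise independence fails to fool AND even for n = 3. This can be checked exhaustively:

```python
from itertools import product

# Support: the four even-parity strings of {0,1}^3, each with mass 1/4.
# This space is 2-wise independent but gives AND(x1, x2, x3) probability 0.
support = [p for p in product((0, 1), repeat=3) if sum(p) % 2 == 0]
assert support == [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Check pairwise independence exactly: every pair of coordinates is
# uniform on {0,1}^2 (each of the 4 combinations appears exactly once).
for i, j in [(0, 1), (0, 2), (1, 2)]:
    pairs = sorted((p[i], p[j]) for p in support)
    assert pairs == [(0, 0), (0, 1), (1, 0), (1, 1)]

# AND is 1 on no point of the support, versus probability 1/8 under
# the fully independent uniform distribution on {0,1}^3.
p_and = sum(all(p) for p in support) / len(support)
assert p_and == 0.0
```

In the paper's terminology this says that for AND the required independence k must grow with n; the parity space is exactly the kind of extremal k-wise independent distribution the abstract's constructions via linear codes produce.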
Amortizing Randomness in Private Multiparty Computations
Proc. of 17th PODC, 2002
"... We study the relationship between the number of rounds needed to repeatedly perform a private computation (i.e. where there are many sets of inputs sequentially given to the players on which the players must compute a function privately) and the overall randomness needed for this task. For the xo ..."
Abstract

Cited by 5 (4 self)
We study the relationship between the number of rounds needed to repeatedly perform a private computation (i.e., where there are many sets of inputs sequentially given to the players, on which the players must compute a function privately) and the overall randomness needed for this task. For the xor function, we show that for k sets of inputs, if instead of using totally fresh (i.e., independent) random bits for each of these k sets of inputs we reuse the same ℓ random bits, then we can significantly speed up the round complexity of each computation compared to what is achieved by the naive strategy of partitioning the ℓ random bits between the k computations.