Results 1–10 of 24
Small-Bias Probability Spaces: Efficient Constructions and Applications
 SIAM J. Comput
, 1993
Abstract

Cited by 259 (14 self)
We show how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. They are called ε-biased random variables. The number of random bits needed to generate the random variables is O(log n + log(1/ε)). Thus, if ε is polynomially small, then the size of the sample space is also polynomial. Random variables that are ε-biased can be used to construct "almost" k-wise independent random variables where ε is a function of k. These probability spaces have various applications: 1. Derandomization of algorithms: many randomized algorithms that require only k-wise independence of their random bits (where k is bounded by O(log n)) can be derandomized by using ε-biased random variables. 2. Reducing the number of random bits required by certain randomized algorithms, e.g., verification of matrix multiplication. 3. Exhaustive testing of combinatorial circui...
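The defining property above (every nonempty subset has "almost" balanced parity) can be checked by brute force on tiny spaces. A minimal sketch of the definition only, not the paper's construction; the function name and exhaustive subset enumeration are illustrative:

```python
from itertools import product

def max_bias(sample_space):
    """Maximum parity bias over all nonempty index subsets.

    sample_space: list of equally likely 0/1 tuples of length n.
    A space is eps-biased iff this value is at most eps.
    """
    n = len(sample_space[0])
    worst = 0.0
    # Enumerate nonempty subsets as bitmasks (feasible only for small n).
    for mask in range(1, 2 ** n):
        parity_one = sum(
            1 for point in sample_space
            if sum(point[i] for i in range(n) if mask >> i & 1) % 2
        )
        worst = max(worst, abs(2 * parity_one / len(sample_space) - 1))
    return worst

# The full 2^n cube is 0-biased: every subset parity is exactly balanced.
cube = list(product([0, 1], repeat=3))
print(max_bias(cube))  # -> 0.0
```

The point of the paper is achieving small bias with a sample space of size polynomial in n and 1/ε, far smaller than the full cube used here.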
Chernoff-Hoeffding Bounds for Applications with Limited Independence
 SIAM J. Discrete Math
, 1993
Abstract

Cited by 102 (10 self)
Chernoff-Hoeffding bounds are fundamental tools used in bounding the tail probabilities of the sums of bounded and independent random variables. We present a simple technique which gives slightly better bounds than these and which, more importantly, requires only limited independence among the random variables, thereby importing a variety of standard results to the case of limited independence for free. Additional methods are also presented, and the aggregate results are sharp and provide a better understanding of the proof techniques behind these bounds. They also yield improved bounds for various tail probability distributions and enable improved approximation algorithms for job-shop scheduling. The "limited independence" result implies that a reduced amount of randomness and weaker sources of randomness are sufficient for randomized algorithms whose analyses use the Chernoff-Hoeffding bounds, e.g., the analysis of randomized algorithms for random sampling and oblivious packet routi...
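For context, the classical bound being weakened here can be checked numerically with fully independent coins. A small sketch (function names illustrative; this shows the standard Hoeffding tail bound, not the paper's limited-independence variant):

```python
import math
import random

def hoeffding_tail(n, t):
    """Hoeffding upper bound on Pr[sum - n/2 >= t] for n independent fair 0/1 coins."""
    return math.exp(-2 * t * t / n)

def empirical_tail(n, t, trials=20000, seed=0):
    """Monte Carlo estimate of the same tail probability."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if sum(rng.randint(0, 1) for _ in range(n)) - n / 2 >= t
    )
    return hits / trials

n, t = 100, 15
# The empirical tail should sit below the analytic bound.
print(empirical_tail(n, t) <= hoeffding_tail(n, t))  # -> True
```

The paper's contribution is that bounds of essentially this quality survive when the coins are only k-wise independent, so far fewer random bits suffice.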
On Linear-Time Deterministic Algorithms for Optimization Problems in Fixed Dimension
, 1992
Abstract

Cited by 91 (10 self)
We show that with recently developed derandomization techniques, one can convert Clarkson's randomized algorithm for linear programming in fixed dimension into a linear-time deterministic one. The constant of proportionality is d^O(d), which is better than for previously known such algorithms. We show that the algorithm works in a fairly general abstract setting, which allows us to solve various other problems (such as finding the maximum-volume ellipsoid inscribed in the intersection of n half-spaces) in linear time.
A New Rounding Procedure for the Assignment Problem with Applications to Dense Graph Arrangement Problems
, 2001
Abstract

Cited by 74 (3 self)
We present a randomized procedure for rounding fractional perfect matchings to (integral) matchings. If the original fractional matching satisfies any linear inequality, then with high probability the new matching satisfies that linear inequality in an approximate sense. This extends the well-known LP rounding procedure of Raghavan and Thompson, which is usually used to round fractional solutions of linear programs.
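The baseline procedure being extended can be sketched in a few lines. This shows only basic Raghavan-Thompson style independent rounding and that it preserves linear forms in expectation; the paper's procedure rounds with dependence so the output is an actual matching (names and data here are illustrative):

```python
import random

def randomized_round(x, rng):
    """Independent randomized rounding: coordinate i becomes 1 with probability x[i]."""
    return [1 if rng.random() < xi else 0 for xi in x]

rng = random.Random(1)
x = [0.5, 0.25, 0.75, 0.1]       # a fractional solution
a = [1.0, 2.0, 1.0, 4.0]         # coefficients of some linear inequality a.x
target = sum(ai * xi for ai, xi in zip(a, x))

# By linearity of expectation, the rounded value of a.x averages to the
# fractional value; repeated trials confirm the concentration.
trials = 50000
avg = sum(
    sum(ai * yi for ai, yi in zip(a, randomized_round(x, rng)))
    for _ in range(trials)
) / trials
print(abs(avg - target) < 0.05)  # -> True
```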
Derandomization, witnesses for Boolean matrix multiplication and construction of perfect hash functions
 Algorithmica
, 1996
Abstract

Cited by 61 (5 self)
Small sample spaces with almost independent random variables are applied to design efficient sequential deterministic algorithms for two problems. The first algorithm, motivated by the attempt to design efficient algorithms for the All Pairs Shortest Path problem using fast matrix multiplication, solves the problem of computing witnesses for the Boolean product of two matrices. That is, if A and B are two n × n matrices, and C = AB is their Boolean product, the algorithm finds for every entry C_ij = 1 a witness: an index k such that A_ik = B_kj = 1. Its running time exceeds that of computing the product of two n × n matrices with small integer entries by a polylogarithmic factor. The second algorithm is a nearly linear-time deterministic procedure for constructing a perfect hash function for a given n-subset of {1, ..., m}.
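The witness definition is concrete enough to state in code. A brute-force O(n^3) sketch of the problem being solved (the paper's algorithm achieves close to matrix-multiplication time; names here are illustrative):

```python
def boolean_product_witnesses(A, B):
    """Naive witnesses for the Boolean product C = AB.

    Returns (C, W) where W[i][j] is some index k with A[i][k] = B[k][j] = 1
    whenever C[i][j] = 1, and -1 when C[i][j] = 0.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    W = [[-1] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if A[i][k] and B[k][j]:
                    C[i][j], W[i][j] = 1, k
                    break
    return C, W

A = [[1, 0], [0, 1]]
B = [[0, 1], [1, 0]]
C, W = boolean_product_witnesses(A, B)
print(C)  # -> [[0, 1], [1, 0]]
print(W)  # -> [[-1, 0], [1, -1]]
```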
The probabilistic method yields deterministic parallel algorithms
 Journal of Computer and System Sciences
, 1989
Abstract

Cited by 50 (5 self)
We present a technique for converting RNC algorithms into NC algorithms. Our approach is based on a parallel implementation of the method of conditional probabilities. This method was used to convert probabilistic proofs of existence of combinatorial structures into polynomial-time deterministic algorithms. It has the apparent drawback of being extremely sequential in nature. We show certain general conditions under which it is possible to use this technique for devising deterministic parallel algorithms. We use our technique to devise an NC algorithm for the set balancing problem. This problem turns out to be a useful tool for parallel algorithms. Using our derandomization method and the set balancing algorithm, we provide an NC algorithm for the lattice approximation problem. We also use the lattice approximation problem to bootstrap the set balancing algorithm, and the result is a more processor-efficient algorithm. The set balancing algorithm also yields an NC algorithm for near-optimal edge coloring of simple graphs. Our methods also extend to the parallelization of various algorithms in computational geometry that rely upon the random sampling technique of Clarkson. Finally, our methods apply to constructing certain combinatorial structures, e.g. ...
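The sequential method of conditional probabilities mentioned above can be illustrated on set balancing. A toy sketch under simplifying assumptions: the pessimistic estimator used here is the exactly computable conditional expectation of the sum of squared row sums (the paper's parallel version uses different estimators, and all names are illustrative):

```python
def set_balance(rows, n):
    """Fix signs x_j in {-1, +1} one at a time by conditional expectations.

    rows: 0/1 incidence vectors of the sets.  Over a uniform random
    completion, E[(c_i + S_i)^2] = c_i^2 + (a term independent of the next
    choice), so it suffices to pick the sign minimizing sum_i c_i^2,
    where c_i is the already-fixed part of row i's signed sum.
    """
    x, partial = [], [0] * len(rows)
    for j in range(n):
        def score(sign):
            return sum((c + sign * r[j]) ** 2 for c, r in zip(partial, rows))
        sign = 1 if score(1) <= score(-1) else -1
        x.append(sign)
        partial = [c + sign * r[j] for c, r in zip(partial, rows)]
    return x, max(abs(c) for c in partial)

rows = [[1, 1, 0, 1], [0, 1, 1, 1], [1, 0, 1, 1]]
x, disc = set_balance(rows, 4)
print(x, disc)  # -> [1, -1, 1, -1] 1
```

The "extremely sequential" drawback is visible: each sign depends on all earlier ones, which is exactly what the paper's parallel implementation works around.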
Removing Randomness in Parallel Computation Without a Processor Penalty
 Journal of Computer and System Sciences
, 1988
Abstract

Cited by 48 (1 self)
We develop some general techniques for converting randomized parallel algorithms into deterministic parallel algorithms without a blowup in the number of processors. One of the requirements for the application of these techniques is that the analysis of the randomized algorithm uses only pairwise independence. Our main new result is a parallel algorithm for coloring the vertices of an undirected graph using at most Δ + 1 distinct colors in such a way that no two adjacent vertices receive the same color, where Δ is the maximum degree of any vertex in the graph. The running time of the algorithm is O(log^3 n log log n) using a linear number of processors on a concurrent-read, exclusive-write (CREW) parallel random access machine (PRAM). Our techniques also apply to several other problems, including the maximal independent set problem and the maximal matching problem. The application of the general technique to these last two problems is mostly of academic interest because...
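That Δ + 1 colors always suffice is easy to certify sequentially; the paper's contribution is matching this bound in parallel. A greedy sketch of the bound itself (names illustrative; this is not the paper's parallel algorithm):

```python
def greedy_coloring(adj):
    """Sequential greedy coloring: each vertex takes the smallest color not
    used by an already-colored neighbour.  A vertex of degree d sees at most
    d forbidden colors, so color indices never exceed the maximum degree,
    certifying that Delta + 1 colors suffice.
    """
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(adj[v]) + 1) if c not in used)
    return color

# Triangle: Delta = 2, so colors 0..2 suffice.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
coloring = greedy_coloring(adj)
print(coloring)  # -> {0: 0, 1: 1, 2: 2}
```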
Splitters and nearoptimal derandomization
Abstract

Cited by 34 (1 self)
We present a fairly general method for finding deterministic constructions obeying what we call k-restrictions; this yields structures of size not much larger than the probabilistic bound. The structures constructed by our method include (n, k)-universal sets (a collection of binary vectors of length n such that for any subset of k of the indices, all 2^k configurations appear) and families of perfect hash functions. The near-optimal constructions of these objects imply the very efficient derandomization of algorithms in learning, of fixed-subgraph finding algorithms, and of near-optimal threshold formulae. In addition, they derandomize the reduction showing the hardness of approximation of set cover. They also yield deterministic constructions for a local-coloring protocol and for exhaustive testing of circuits.
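The (n, k)-universal property defined above can be verified by brute force for small parameters. A minimal sketch of the definition only (the paper is about constructing such sets near-optimally; the checker's names are illustrative):

```python
from itertools import combinations, product

def is_universal(vectors, n, k):
    """Check the (n, k)-universal property: for every choice of k coordinate
    positions, all 2^k bit patterns occur among the vectors."""
    all_patterns = set(product((0, 1), repeat=k))
    return all(
        {tuple(v[i] for i in idx) for v in vectors} == all_patterns
        for idx in combinations(range(n), k)
    )

# The full cube on n coordinates is trivially (n, k)-universal for k <= n;
# interesting constructions are far smaller than 2^n.
cube = [tuple(bits) for bits in product((0, 1), repeat=4)]
print(is_universal(cube, 4, 2))  # -> True
```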
Constructing Small Sample Spaces Satisfying Given Constraints
 SIAM J. Discrete Math
, 1993
Abstract

Cited by 31 (3 self)
The subject of this paper is finding small sample spaces for joint distributions of n discrete random variables. Such distributions are often only required to obey a certain limited set of constraints of the form Pr(E) = p. We show that the problem of deciding whether there exists any distribution satisfying a given set of constraints is NP-hard. However, if the constraints are consistent, then there exists a distribution satisfying them which is supported by a "small" sample space (one whose cardinality is equal to the number of constraints). For the important case of independence constraints, where the constraints have a certain form and are consistent with a joint distribution of n independent random variables, a small sample space can be constructed in polynomial time. This last result is also useful for derandomizing algorithms. We demonstrate this technique by an application to the problem of finding large independent sets in sparse hypergraphs.
Nearly optimal distributed edge colouring in O(log log n) rounds
 in Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '97)
, 1996
Abstract

Cited by 26 (7 self)
An extremely simple distributed randomized algorithm is presented which, with high probability, properly edge colours a given graph using (1 + ε)Δ colours, where Δ is the maximum degree of the graph and ε is any given positive constant. The algorithm is very fast. In particular, for graphs with sufficiently large vertex degrees (larger than polylog n, but smaller than any positive power of n), the algorithm requires only O(log log n) communication rounds. The algorithm is inherently distributed, but can be implemented on the PRAM, where it requires O(mΔ) processors and O(log Δ log log n) time, or in a sequential setting, where it requires O(mΔ) time. The edge colouring problem is a much-studied problem in the theory of algorithms, graph theory, and combinatorics, whose relevance to computer science stems from its applications to scheduling and resource allocation problems [6, 11, 14, 17, 19, 12, 24, among others]. Given an input graph, the problem ...