Results 1-10 of 58
Small-Bias Probability Spaces: Efficient Constructions and Applications
SIAM J. Comput., 1993
Abstract

Cited by 260 (14 self)
We show how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. These are called ε-biased random variables. The number of random bits needed to generate the random variables is O(log n + log(1/ε)). Thus, if ε is polynomially small, then the size of the sample space is also polynomial. Random variables that are ε-biased can be used to construct "almost" k-wise independent random variables, where ε is a function of k. These probability spaces have various applications: 1. Derandomization of algorithms: many randomized algorithms that require only k-wise independence of their random bits (where k is bounded by O(log n)) can be derandomized by using ε-biased random variables. 2. Reducing the number of random bits required by certain randomized algorithms, e.g., verification of matrix multiplication. 3. Exhaustive testing of combinatorial circuits ...
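A particularly simple ε-biased space in this spirit is the "powering" construction of Alon, Goldreich, Håstad and Peres: the seed is a pair (a, b) of elements of GF(2^m), i.e. 2m random bits, the i-th output bit is the GF(2) inner product of a^i and b, and every nonempty parity has bias at most (n-1)/2^m. A minimal Python sketch under these assumptions (the reduction polynomial and parameters below are illustrative):

```python
def gf_mul(x, y, m, red):
    """Carry-less multiplication in GF(2^m), reduced by the
    irreducible polynomial `red` (which includes the x^m term)."""
    r = 0
    for _ in range(m):
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x & (1 << m):
            x ^= red  # red has bit m set, so this clears the overflow bit
    return r

def epsilon_biased_bits(n, m, a, b, red):
    """AGHP powering construction: bit i is <a^i, b> over GF(2).

    The seed (a, b) is 2m random bits; every nonempty parity of the
    n output bits has bias at most (n - 1) / 2^m.
    """
    bits = []
    p = 1  # a^0
    for _ in range(n):
        bits.append(bin(p & b).count("1") & 1)  # inner product of bit vectors
        p = gf_mul(p, a, m, red)
    return bits
```

For example, with m = 8 and the AES reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B), `epsilon_biased_bits(4, 8, 2, 5, 0x11B)` returns [1, 0, 1, 0]. The bias bound follows because a parity over a subset S is <p_S(a), b> for the polynomial p_S(a) = Σ_{i∈S} a^i, which has at most n-1 roots.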
How to recycle random bits
In Proceedings of the 30th Annual Symposium on Foundations of Computer Science, SFCS '89, 1989
On Linear-Time Deterministic Algorithms for Optimization Problems in Fixed Dimension, 1992
Abstract

Cited by 92 (10 self)
We show that with recently developed derandomization techniques, one can convert Clarkson's randomized algorithm for linear programming in fixed dimension into a linear-time deterministic one. The constant of proportionality is d^{O(d)}, which is better than for previously known such algorithms. We show that the algorithm works in a fairly general abstract setting, which allows us to solve various other problems (such as finding the maximum-volume ellipsoid inscribed in the intersection of n halfspaces) in linear time.
Randomized Rounding without Solving the Linear Program
In Proceedings of the Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, 1995
Abstract

Cited by 90 (6 self)
We introduce a new technique called oblivious rounding, a variant of randomized rounding that avoids the bottleneck of first solving the linear program. Avoiding this bottleneck yields more efficient algorithms and brings probabilistic methods to bear on a new class of problems. We give oblivious rounding algorithms that approximately solve general packing and covering problems, including a parallel algorithm to find sparse strategies for matrix games.
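For context, the baseline that oblivious rounding varies is classic randomized rounding from a given fractional solution. A minimal sketch for set cover, assuming the fractional values frac are already in hand (the abstract's point is precisely that its variant avoids computing them); all names here are illustrative:

```python
import math
import random

def randomized_round_cover(sets, frac, universe, rng=random):
    """Classic randomized rounding for set cover.

    Given sets[i] (subsets of `universe`) and a fractional cover
    frac[i], include each set independently with probability frac[i],
    repeated for O(log |universe|) rounds so that every element is
    covered with high probability.
    """
    rounds = max(1, math.ceil(2 * math.log(max(len(universe), 2))))
    chosen = set()
    for _ in range(rounds):
        for i, fi in enumerate(frac):
            if rng.random() < fi:
                chosen.add(i)
    return chosen
```

The expected cost is at most O(log n) times the fractional optimum; the bottleneck the abstract refers to is obtaining frac by solving the covering LP in the first place.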
Clique Partitions, Graph Compression and Speeding-Up Algorithms
Journal of Computer and System Sciences, 1991
Abstract

Cited by 72 (3 self)
We first consider the problem of partitioning the edges of a graph G into bipartite cliques such that the total order of the cliques is minimized, where the order of a clique is the number of vertices in it. It is shown that the problem is NP-complete. We then prove the existence of a partition of small total order in a sufficiently dense graph and devise an efficient algorithm to compute such a partition. It turns out that our algorithm exhibits a tradeoff between the total order of the partition and the running time. Next, we define the notion of a compression of a graph G and use the result on graph partitioning to efficiently compute an optimal compression for graphs of a given size. An interesting application of the graph compression result arises from the fact that several graph algorithms can be adapted to work with the compressed representation of the input graph, thereby improving the bound on their running times, particularly on dense graphs. This makes use of the tradeoff ...
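To illustrate the compression idea: once a bipartite clique (S, T) is found, its |S|·|T| edges can be rerouted through one auxiliary vertex at a cost of only |S| + |T| edges, and graph algorithms can then be run on the smaller representation. A hedged sketch (adjacency as a dict of sets; assumes (S, T) really is a complete bipartite subgraph):

```python
def compress_biclique(adj, S, T):
    """Replace the biclique between S and T by one auxiliary vertex.

    Every edge (s, t) with s in S, t in T is removed and rerouted
    through a fresh vertex w, cutting |S|*|T| edges down to |S|+|T|.
    `adj` is a dict mapping integer vertices to sets of neighbors.
    """
    w = max(adj) + 1          # fresh vertex id
    adj[w] = set()
    for s in S:
        for t in T:
            adj[s].discard(t)
            adj[t].discard(s)
    for v in set(S) | set(T):
        adj[v].add(w)
        adj[w].add(v)
    return w
```

For K_{2,3} this shrinks 6 edges to 5; the savings grow quadratically with the biclique's side lengths, which is where the density assumption in the abstract enters.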
Derandomization, Witnesses for Boolean Matrix Multiplication and Construction of Perfect Hash Functions
Algorithmica, 1996
Abstract

Cited by 62 (5 self)
Small sample spaces with almost independent random variables are applied to design efficient sequential deterministic algorithms for two problems. The first algorithm, motivated by the attempt to design efficient algorithms for the All Pairs Shortest Path problem using fast matrix multiplication, solves the problem of computing witnesses for the Boolean product of two matrices. That is, if A and B are two n by n matrices, and C = AB is their Boolean product, the algorithm finds for every entry C_ij = 1 a witness: an index k such that A_ik = B_kj = 1. Its running time exceeds that of computing the product of two n by n matrices with small integer entries by a polylogarithmic factor. The second algorithm is a nearly-linear-time deterministic procedure for constructing a perfect hash function for a given n-subset of {1, ..., m}.
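The object being computed can be pinned down with a straightforward, cubic-time sketch; the paper's contribution is matching fast matrix-multiplication time up to polylog factors, which this naive version does not attempt. Names below are illustrative:

```python
def boolean_witnesses(A, B):
    """Witness matrix for the Boolean product C = A * B.

    W[i][j] is some k with A[i][k] == B[k][j] == 1, or -1 when
    C[i][j] == 0. Naive O(n^3) version of the object the abstract's
    algorithm computes in roughly matrix-multiplication time.
    """
    n = len(A)
    W = [[-1] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if A[i][k] and B[k][j]:
                    W[i][j] = k
                    break
    return W
```

The fast algorithms exploit the fact that when C_ij has a unique witness, the integer product of A with a column-weighted copy of B reveals k directly; small-bias sample spaces are used to reduce the general case to (near-)unique witnesses deterministically.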
The Probabilistic Method Yields Deterministic Parallel Algorithms
Journal of Computer and System Sciences 49, 1994
On Key Storage in Secure Networks, 1995
Abstract

Cited by 35 (0 self)
We consider systems where the keys for encrypting messages are derived from the pairwise intersections of sets of private keys issued to the users. We give improved bounds on the storage requirements of systems of this type for secure communication in a large network. Supported by NATO grant RG-0088/89 and NSF grant CCR-8900112.
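A toy model of the kind of system studied, with all names and parameters hypothetical: each user stores a small random "ring" of keys drawn from a global pool, and two users derive their pairwise key by hashing the keys their rings share, trading per-user storage against connectivity:

```python
import hashlib
import random

def issue_keys(num_users, pool_size, ring_size, seed=0):
    """Give each user a random subset ("key ring") of a global pool.

    Storage per user is ring_size keys instead of num_users - 1
    individual pairwise keys; the abstract bounds how small such
    storage can be while keeping the network securely connected.
    """
    rng = random.Random(seed)
    pool = [rng.getrandbits(128) for _ in range(pool_size)]
    rings = [set(rng.sample(range(pool_size), ring_size))
             for _ in range(num_users)]
    return pool, rings

def pairwise_key(pool, rings, u, v):
    """Derive the u-v key from the intersection of their key rings."""
    shared = sorted(rings[u] & rings[v])
    if not shared:
        return None                       # no common keys: no direct link
    material = b"".join(pool[i].to_bytes(16, "big") for i in shared)
    return hashlib.sha256(material).hexdigest()
```

The design tension the bounds capture: larger rings raise the chance any two users share keys (and the attacker's cost of key recovery), while smaller rings cut storage.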