Results 1–10 of 21
Small-Bias Probability Spaces: Efficient Constructions and Applications
 SIAM J. Comput.
, 1993
Abstract

Cited by 258 (15 self)
We show how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. They are called ε-biased random variables. The number of random bits needed to generate the random variables is O(log n + log(1/ε)). Thus, if ε is polynomially small, then the size of the sample space is also polynomial. Random variables that are ε-biased can be used to construct "almost" k-wise independent random variables where ε is a function of k. These probability spaces have various applications: 1. Derandomization of algorithms: many randomized algorithms that require only k-wise independence of their random bits (where k is bounded by O(log n)) can be derandomized by using ε-biased random variables. 2. Reducing the number of random bits required by certain randomized algorithms, e.g., verification of matrix multiplication. 3. Exhaustive testing of combinatorial circui...
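The matrix-multiplication verification mentioned in application 2 is the standard randomized check due to Freivalds; a minimal Python sketch (function names are ours), which spends only n random bits per trial — exactly the kind of randomness the paper proposes to supply from a small-bias space:

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Probabilistically verify that A*B == C for n x n matrices.

    Each trial draws a random 0/1 vector r and compares A(Br) with Cr,
    costing O(n^2) arithmetic instead of the O(n^3) of recomputing A*B.
    A wrong C passes a single trial with probability at most 1/2.
    """
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # a witness was found: C is certainly wrong
    return True           # all trials passed: C is correct with high probability
```

After t trials the error probability is at most 2^-t; the paper's point is that the n bits per trial need not be fully independent.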
How to Recycle Random Bits
, 1989
Abstract

Cited by 183 (12 self)
We show that modified versions of the linear congruential generator and the shift register generator are provably good for amplifying the correctness of a probabilistic algorithm. More precisely, if r random bits are needed for a BPP algorithm to be correct with probability at least 2/3, then O(r + k^2) bits are needed to improve this probability to 1 - 2^-k. We also present a different pseudorandom generator that is optimal, up to a constant factor, in this regard: it uses only O(r + k) bits to improve the probability to 1 - 2^-k. This generator is based on random walks on expanders. Our results do not depend on any unproven assumptions. Next we show that our modified versions of the shift register and linear congruential generators can be used to sample from distributions using, in the limit, the information-theoretic lower bound on random bits.

1. Introduction

Randomness plays a vital role in almost all areas of computer science, both in theory and in...
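The baseline these generators improve on is plain repetition: run the algorithm t times on fresh bits and take a majority vote. A quick exact computation of that scheme's error (our own toy calculation, using fully independent runs) shows the exponential decay, at a cost of t·r fresh bits:

```python
from fractions import Fraction
from math import comb

def majority_error(t, p_correct=Fraction(2, 3)):
    """Exact probability that a majority vote over t independent runs of a
    BPP algorithm (each correct with probability p_correct) is wrong.
    Assumes t is odd, so there are no ties."""
    p, q = p_correct, 1 - p_correct
    # The vote errs when at most t//2 runs are correct.
    return sum(comb(t, i) * p**i * q**(t - i) for i in range(t // 2 + 1))
```

Independent repetition consumes t·r bits for error roughly 2^-Θ(t); the expander-walk generator in the paper gets the same decay from only O(r + t) bits.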
Dispersers, Deterministic Amplification, and Weak Random Sources.
, 1989
Abstract

Cited by 93 (11 self)
We use a certain type of expanding bipartite graphs, called disperser graphs, to design procedures for picking highly correlated samples from a finite set, with the property that the probability of hitting any sufficiently large subset is high. These procedures require a relatively small number of random bits and are robust with respect to the quality of the random bits. Using these sampling procedures to sample random inputs of polynomial-time probabilistic algorithms, we can simulate the performance of some probabilistic algorithms with fewer random bits or with low-quality random bits. We obtain the following results: 1. The error probability of an RP or BPP algorithm that operates with a constant error bound and requires n random bits can be made exponentially small (i.e., 2^-n) with only (3 + ε)n random bits, as opposed to standard amplification techniques that require Ω(n^2) random bits for the same task. This result is nearly optimal, since the informati...
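The bit accounting behind such results is easy to make concrete. Drawing t independent points from {0,1}^n costs t·n bits, while correlated samples generated by a t-step walk on a degree-d graph over the same space cost n bits for the start plus log2(d) per step. A back-of-the-envelope comparison (our own illustration; it is not the paper's disperser construction):

```python
from math import ceil, log2

def independent_sample_bits(t, n):
    """Random bits to draw t independent samples from {0,1}^n."""
    return t * n

def walk_sample_bits(t, n, d):
    """Random bits for t correlated samples from a walk on a d-regular
    graph whose vertex set is {0,1}^n: n bits choose the start vertex,
    and each step costs ceil(log2(d)) bits to pick an outgoing edge."""
    return n + t * ceil(log2(d))
```

For n = 100, t = 50, d = 16 this is 5000 bits versus 300, i.e., linear rather than quadratic growth in the number of samples.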
A Monte-Carlo Algorithm for Estimating the Permanent
, 1993
Abstract

Cited by 25 (1 self)
Let A be an n × n matrix with 0-1 valued entries, and let per(A) be the permanent of A. We describe a Monte-Carlo algorithm which produces a "good in the relative sense" estimate of per(A) and has running time poly(n)·2^(n/2), where poly(n) denotes a function that grows polynomially with n.

1 Introduction

Let A be an n × n matrix with 0-1 valued entries, let det(A) denote the determinant of A and let per(A) denote the permanent of A. The marked contrast between the computational complexity of computing det(A) versus that of computing per(A), despite the deceiving similarity between the two tasks, has baffled researchers for years. One of the reasons for interest in computing per(A) is that A can be viewed as the adjacency matrix of a bipartite graph H = (X, Y, E), where X corresponds to the rows of A, Y to the columns of A, and A_ij = 1 if there is an edge between X_i and Y_j. The quantity per(A) is exactly the number of perfect matchings in H. It is well known tha...
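The permanent–matching correspondence is easy to check on tiny instances. Below is the permanent computed directly from its definition, a sum over all n! permutations; for a 0-1 matrix this count equals the number of perfect matchings of the associated bipartite graph. This brute force is only a sanity check (our own illustration; the paper's estimator is far more involved):

```python
from itertools import permutations

def permanent(A):
    """Brute-force permanent: sum over all permutations s of
    prod_i A[i][s(i)].  For a 0-1 matrix this counts the perfect
    matchings of the bipartite graph with adjacency matrix A.
    Runs in O(n! * n) time, so only usable for very small n."""
    n = len(A)
    total = 0
    for s in permutations(range(n)):
        p = 1
        for i, j in enumerate(s):
            p *= A[i][j]
        total += p
    return total
```

For the all-ones 3 × 3 matrix every permutation contributes 1, so the permanent is 3! = 6, matching the six perfect matchings of K_{3,3}.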
On the Deterministic Complexity of Factoring Polynomials over Finite Fields
 Inform. Process. Lett.
, 1990
Abstract

Cited by 21 (4 self)
We present a new deterministic algorithm for factoring polynomials over Z_p of degree n. We show that the worst-case running time of our algorithm is O(p^(1/2) (log p)^2 n^(2+ε)), which is faster than the running times of previous deterministic algorithms with respect to both n and p. We also show that our algorithm runs in polynomial time for all but at most an exponentially small fraction of the polynomials of degree n over Z_p. Specifically, we prove that the fraction of polynomials of degree n over Z_p for which our algorithm fails to halt in time O((log p)^2 n^(2+ε)) is O((n log p)^2 / p). Consequently, the average-case running time of our algorithm is polynomial in n and log p. Keywords: factorization, finite fields, irreducible polynomials. This research was supported by NSF grants DCR-8504485 and DCR-8552596. Appeared in Information Processing Letters 33, pp. 261-267, 1990. A preliminary version of this paper appeared as University of Wisconsin-Madison, Comput...
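For contrast with the p^(1/2) bound above, the fully naive deterministic approach evaluates f at every residue, costing p evaluations; each root found this way yields a linear factor of f over Z_p. A minimal sketch (our own illustration, not the paper's algorithm; `roots_mod_p` is a hypothetical helper name):

```python
def roots_mod_p(coeffs, p):
    """Roots of f(x) = coeffs[0] + coeffs[1]*x + ... + coeffs[n]*x^n
    modulo a prime p, found by brute-force evaluation at all p residues.
    Every root a corresponds to the linear factor (x - a) of f over Z_p."""
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule, reduced mod p each step
            acc = (acc * x + c) % p
        return acc
    return [a for a in range(p) if f(a) == 0]
```

For example, x^2 - 1 over Z_7 has roots 1 and 6, i.e., the factorization (x - 1)(x - 6); x^2 + 1 has no roots over Z_3 and is therefore irreducible there.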
Faster Factoring of Integers of a Special Form
, 1996
Abstract

Cited by 18 (0 self)
A speedup of Lenstra's Elliptic Curve Method of factorization is presented. The speedup works for integers of the form N = PQ^2, where P is a prime sufficiently smaller than Q. The result is of interest to cryptographers, since integers with secret factorization of this form are being used in digital signatures. The algorithm makes use of what we call "Jacobi signatures". We believe these to be of independent interest.

1 Introduction

It is not known how to efficiently factor a large integer N. Currently, the algorithm with the best asymptotic complexity is the Number Field Sieve (see [6]). For numbers below a certain size (currently believed to be about 120 digits), either the Quadratic Sieve [14] or the Elliptic Curve Method [7] is faster. Which of these algorithms to use depends on the size of N and of the smallest prime factor of N. When the size of the smallest factor is sufficiently smaller than √N, the Elliptic Curve Method is the fastest of the three. In this no...
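The "Jacobi signatures" build on the Jacobi symbol (a/n), computable without knowing the factorization of n via quadratic reciprocity. The standard binary algorithm is short enough to state (a generic implementation of the symbol itself, not the paper's signature scheme):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, computed by quadratic
    reciprocity.  Coincides with the Legendre symbol when n is prime."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:        # pull out factors of two
            a //= 2
            if n % 8 in (3, 5):  # (2/n) = -1 exactly when n = 3, 5 mod 8
                result = -result
        a, n = n, a              # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0  # 0 when gcd(a, n) > 1
```

For prime modulus the output agrees with Euler's criterion, e.g., (3/7) = -1 because 3 is not a square modulo 7.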
Subquadratic Zero-Knowledge
, 1995
Abstract

Cited by 13 (3 self)
We improve on the communication complexity of zero-knowledge proof systems. Let C be a boolean circuit of size n. Previous zero-knowledge proof systems for the satisfiability of C require the use of Ω(kn) bit commitments in order to achieve a probability of undetected cheating below 2^-k. In the case k = n, the communication complexity of these protocols is therefore Ω(n^2) bit commitments. In this paper, we present a zero-knowledge proof system that achieves the same goal with only O(n^(1+ε_n) + k·√(n^(1+ε_n))) bit commitments, where ε_n goes to zero as n goes to infinity. In the case k = n, this is O(n·√(n^(1+ε_n))). Moreover, only O(k) commitments need ever be opened, which is interesting if it is substantially less expensive to commit to a bit than to open a commitment. A preliminary version of this paper appeared in the Proceedings of the 32nd Annual IEEE Symposium on Foundations of Computer Science, October 1991. † Supported in part by NSA Gr...
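The resource being counted here is bit commitments: a commitment binds the prover to a bit now and can be opened later without the bit having been revealed in between. A minimal hash-based sketch of the commit/open interface (our own illustration using the standard hash-and-salt heuristic; the abstract does not specify a particular commitment scheme):

```python
import hashlib
import os

def commit(bit, randomness=None):
    """Commit to a bit: publish H(r || bit), retain (r, bit) for opening.
    Hiding rests on r being unpredictable; binding rests on collision
    resistance of the hash (a heuristic, not an information-theoretic
    guarantee)."""
    r = randomness if randomness is not None else os.urandom(16)
    digest = hashlib.sha256(r + bytes([bit])).hexdigest()
    return digest, (r, bit)

def verify_opening(digest, r, bit):
    """Check that (r, bit) opens the published commitment."""
    return hashlib.sha256(r + bytes([bit])).hexdigest() == digest
```

The paper's observation that only O(k) commitments are ever opened matters precisely when `commit` is much cheaper to transmit than the opening (r, bit).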
On the distribution of quadratic residues and nonresidues modulo a prime number
 Mathematics of Computation
, 1992
Abstract

Cited by 13 (2 self)
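The paper's subject, the distribution of quadratic residues and nonresidues modulo a prime, can be illustrated directly: the residues are the nonzero squares mod p, and membership is decided by Euler's criterion a^((p-1)/2) ≡ ±1 (mod p). A small Python check (our own illustration, not from the paper):

```python
def quadratic_residues(p):
    """The nonzero quadratic residues modulo an odd prime p:
    exactly the (p-1)/2 distinct values of a^2 mod p."""
    return sorted({(a * a) % p for a in range(1, p)})

def is_residue_euler(a, p):
    """Euler's criterion: a is a QR mod p iff a^((p-1)/2) = 1 mod p."""
    return pow(a, (p - 1) // 2, p) == 1
```

For p = 13 the residues are {1, 3, 4, 9, 10, 12}: exactly half of the nonzero residues, though how they interleave with the nonresidues is the subtle question the paper studies.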
On constructing 1-1 one-way functions
 Electronic Colloquium on Computational Complexity (ECCC)
, 1995
Abstract

Cited by 12 (1 self)
Abstract. We show how to construct length-preserving 1-1 one-way functions based on popular intractability assumptions (e.g., RSA, DLP). Such 1-1 functions should not be confused with (infinite) families of (finite) one-way permutations. What we want and obtain is a single (infinite) 1-1 one-way function.
Minimizing Randomness in Minimum Spanning Tree, Parallel Connectivity, and Set Maxima Algorithms
 In Proc. 13th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'02)
, 2001
Abstract

Cited by 7 (4 self)
There are several fundamental problems whose deterministic complexity remains unresolved, but for which there exist randomized algorithms whose complexity is equal to known lower bounds. Among such problems are the minimum spanning tree problem, the set maxima problem, the problem of computing connected components and (minimum) spanning trees in parallel, and the problem of performing sensitivity analysis on shortest path trees and minimum spanning trees. However, while each of these problems has a randomized algorithm whose performance meets a known lower bound, all of these randomized algorithms use a number of random bits which is linear in the number of operations they perform. We address the issue of reducing the number of random bits used in these randomized algorithms. For each of the problems listed above, we present randomized algorithms that have optimal performance but use only a polylogarithmic number of random bits; for some of the problems our optimal algorithms use only log n random bits. Our results represent an exponential savings in the amount of randomness used to achieve the same optimal performance as in the earlier algorithms. Our techniques are general and could likely be applied to other problems.