Results 1–10 of 144
Stable Distributions, Pseudorandom Generators, Embeddings and Data Stream Computation
, 2000
Abstract

Cited by 263 (15 self)
In this paper we show several results obtained by combining the use of stable distributions with pseudorandom generators for bounded space. In particular:
• we show how to maintain (using only O(log n/ε²) words of storage) a sketch C(p) of a point p ∈ ℓ_1^n under dynamic updates of its coordinates, such that given sketches C(p) and C(q) one can estimate |p − q|_1 up to a factor of (1 + ε) with large probability. This solves the main open problem of [10].
• we obtain another sketch function C′ which maps ℓ_1^n into a normed space ℓ_1^m (as opposed to C), such that m = m(n) is much smaller than n; to our knowledge this is the first dimensionality reduction lemma for the ℓ_1 norm.
• we give an explicit embedding of ℓ_2^n into ℓ_1 of dimension n^O(log n) with distortion (1 + 1/n^Θ(1)), and a nonconstructive embedding of ℓ_2^n into ℓ_1 of dimension O(n) with distortion (1 + ε) such that the embedding can be represented using only O(n log² n) bits (as opposed to at least...
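The ℓ_1 sketch can be illustrated at a toy level (without the bounded-space pseudorandom generator, which is the paper's main technical contribution): a random matrix with i.i.d. standard Cauchy entries is applied to p, and by 1-stability each coordinate of the sketch is distributed as |p|_1 times a Cauchy variable, so the median of absolute values recovers the ℓ_1 norm. A minimal sketch in Python, with the matrix stored explicitly rather than derandomized:

```python
import math
import random
import statistics

def cauchy(rng):
    # Standard Cauchy sample via inverse CDF; the Cauchy distribution is
    # 1-stable: sum_j c_j X_j is distributed as (sum_j |c_j|) * X.
    return math.tan(math.pi * (rng.random() - 0.5))

def make_sketch_matrix(m, n, seed=0):
    rng = random.Random(seed)
    return [[cauchy(rng) for _ in range(n)] for _ in range(m)]

def sketch(A, p):
    # C(p) = A p; linearity is what supports dynamic coordinate updates,
    # and it gives C(p) - C(q) = C(p - q).
    return [sum(a * x for a, x in zip(row, p)) for row in A]

def estimate_l1(Cp, Cq):
    # median(|Cauchy|) = 1, so the median of |C(p)_i - C(q)_i|
    # estimates |p - q|_1.
    return statistics.median(abs(a - b) for a, b in zip(Cp, Cq))

n, m = 20, 1001
A = make_sketch_matrix(m, n)
p = [float(i) for i in range(n)]
q = [float(i % 5) for i in range(n)]
true_l1 = sum(abs(a - b) for a, b in zip(p, q))
est = estimate_l1(sketch(A, p), sketch(A, q))
```

With m on the order of 1/ε² rows the estimate lands within a (1 + ε) factor with good probability; the paper's contribution is generating the matrix pseudorandomly in small space rather than storing it.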
Randomness is Linear in Space
 Journal of Computer and System Sciences
, 1993
Abstract

Cited by 229 (20 self)
We show that any randomized algorithm that runs in space S and time T and uses poly(S) random bits can be simulated using only O(S) random bits in space S and time T·poly(S). A deterministic simulation in space S follows. Of independent interest is our main technical tool: a procedure which extracts randomness from a defective random source using a small additional number of truly random bits.
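The paper's extractor is considerably more powerful; as a much simpler classical example of the same idea — turning a defective source into fewer unbiased bits — the von Neumann trick for an i.i.d. source of unknown bias can be sketched as follows (an illustration, not the paper's construction):

```python
import random

def von_neumann_extract(bits):
    # Map non-overlapping pairs 01 -> 0, 10 -> 1; discard 00 and 11.
    # For an i.i.d. source with unknown bias p, both kept outcomes have
    # probability p*(1-p), so output bits are exactly unbiased.
    out = []
    for b0, b1 in zip(bits[::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)
    return out

rng = random.Random(1)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(100000)]
extracted = von_neumann_extract(biased)
mean = sum(extracted) / len(extracted)  # should be near 0.5
```

Unlike the paper's procedure, this works only for an independent-bit source and wastes most of the input entropy, but it conveys the goal: high-quality randomness out of a low-quality source.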
Graph Nonisomorphism Has Subexponential Size Proofs Unless The Polynomial-Time Hierarchy Collapses
 SIAM Journal on Computing
, 1998
Abstract

Cited by 108 (6 self)
We establish hardness versus randomness tradeoffs for a broad class of randomized procedures. In particular, we create efficient nondeterministic simulations of bounded-round Arthur-Merlin games using a language in exponential time that cannot be decided by polynomial-size oracle circuits with access to satisfiability. We show that every language with a bounded-round Arthur-Merlin game has subexponential-size membership proofs for infinitely many input lengths unless exponential time coincides with the third level of the polynomial-time hierarchy (and hence the polynomial-time hierarchy collapses). This provides the first strong evidence that graph nonisomorphism has subexponential-size proofs. We set up a general framework for derandomization which encompasses more than the traditional model of randomized computation. For a randomized procedure to fit within this framework, we only require that for any fixed input the complexity of checking whether the procedure succeeds on a given ...
Dispersers, Deterministic Amplification, and Weak Random Sources.
, 1989
Abstract

Cited by 93 (11 self)
We use a certain type of expanding bipartite graphs, called disperser graphs, to design procedures for picking highly correlated samples from a finite set, with the property that the probability of hitting any sufficiently large subset is high. These procedures require a relatively small number of random bits and are robust with respect to the quality of the random bits. Using these sampling procedures to sample random inputs of polynomial-time probabilistic algorithms, we can simulate the performance of some probabilistic algorithms with fewer random bits or with low-quality random bits. We obtain the following results: 1. The error probability of an RP or BPP algorithm that operates with a constant error bound and requires n random bits can be made exponentially small (i.e. 2^(−n)), with only (3 + ε)n random bits, as opposed to standard amplification techniques that require Ω(n²) random bits for the same task. This result is nearly optimal, since the informati...
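For contrast with the disperser-based amplification, the standard technique runs the algorithm k times on independent random inputs, spending k·n fresh random bits to push one-sided error down to 2^(−k); the result above achieves comparable error with only (3 + ε)n bits. A toy simulation of plain independent repetition, with a hypothetical one-sided base test that errs with probability 1/2 on a NO instance:

```python
import random

def base_test(rng):
    # Hypothetical RP-style test: on a NO instance it wrongly accepts
    # with probability 1/2, consuming fresh random bits each call.
    return rng.random() < 0.5

def amplified(rng, k):
    # Accept only if all k independent runs accept; on a NO instance
    # the error probability drops to 2**-k.
    return all(base_test(rng) for _ in range(k))

rng = random.Random(0)
trials = 100000
err = sum(amplified(rng, 10) for _ in range(trials)) / trials
# empirical error should be close to 2**-10, i.e. about 0.001
```

The point of the paper is that this exponential error decay can be bought far more cheaply in random bits, by reusing correlated samples generated from a disperser.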
Self-testing/correcting for polynomials and for approximate functions
 in Proceedings of the 23rd Annual Symposium on Theory of Computing (STOC
, 1991
Abstract

Cited by 81 (15 self)
The study of self-testing/correcting programs was introduced in [8] in order to allow one to use a program P to compute a function f without trusting that P works correctly. A self-tester for f estimates the fraction of x for which P(x) = f(x); and a self-corrector for f takes a program that is correct on most inputs and turns it into a program that is correct on every input with high probability. Both access P only as a black box and in some precise way are not allowed to compute the function f. Self-correcting is usually easy when the function has the random self-reducibility property. One class of functions with this property is the class of multivariate polynomials over finite fields [4] [12]. We extend this result in two directions. First, we show that polynomials are random self-reducible over more general domains: specifically, over the rationals and over noncommutative rings. Second, we show that one can get self-correctors even when the program satisfies weaker conditions, i.e. when the program has more errors, or when the program behaves in a more adversarial manner by changing the function it computes between successive calls. Self-testing is a much harder task. Previously it was known how to self-test for a few special examples of functions, such as the class of linear functions. We show that one can self-test the whole class of polynomial functions over Z_p for prime p.
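For the linear-function case the abstract cites as previously known, the self-corrector is easy to state: since f(x) = f(x + r) − f(r) for linear f, querying the program at the two uniformly distributed points x + r and r yields a correct vote with good probability, and a majority over independent choices of r wins. A toy Python sketch, assuming a hypothetical program for f(x) = a·x mod p that is wrong on roughly 10% of inputs:

```python
import random
from collections import Counter

P_PRIME = 101
A = 7  # hidden coefficient; f(x) = A*x mod P_PRIME

def buggy_program(x):
    # Correct on most inputs, wrong on a fixed ~10% of the domain.
    if x % 10 == 3:
        return (A * x + 5) % P_PRIME   # corrupted answer
    return (A * x) % P_PRIME

def self_correct(prog, x, rng, trials=35):
    # f is linear, so f(x) = f(x + r) - f(r).  For uniform r, both query
    # points are uniform, so each vote is correct with probability at
    # least 1 - 2*(error rate); take a majority.
    votes = Counter()
    for _ in range(trials):
        r = rng.randrange(P_PRIME)
        votes[(prog((x + r) % P_PRIME) - prog(r)) % P_PRIME] += 1
    return votes.most_common(1)[0][0]

rng = random.Random(0)
ok = all(self_correct(buggy_program, x, rng) == (A * x) % P_PRIME
         for x in range(P_PRIME))
```

Note that the corrector never computes f itself; it only combines black-box answers of the program at random points, which is exactly the random self-reducibility being exploited.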
Compact Name-Independent Routing with Minimum Stretch
 In Proceedings of the 16th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2004
, 2004
Abstract

Cited by 64 (12 self)
Given a weighted undirected network with arbitrary node names, we present a compact routing scheme, using an O(√n)-space routing table at each node, and routing along paths of stretch 3, that is, at most thrice as long as the shortest paths. This is optimal in a very strong sense. It is known that no compact routing scheme using o(n) space per node can route with stretch below 3. Also, it is known that any stretch below 5 requires Ω(√n) space per node.
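The flavour of such schemes can be explored empirically with a simple landmark-based router (an illustrative toy, not the paper's name-independent scheme, and it carries no stretch guarantee): each node stores routes to a small ball of nearby nodes and to a few landmarks, and a route to a distant target goes via the target's nearest landmark.

```python
import random
from collections import deque

def bfs_dist(adj, src):
    # Unweighted shortest-path distances from src.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

rng = random.Random(0)
n = 60
adj = {u: set() for u in range(n)}
for u in range(n - 1):                     # path: guarantees connectivity
    adj[u].add(u + 1); adj[u + 1].add(u)
for _ in range(60):                        # plus random chords
    u, v = rng.randrange(n), rng.randrange(n)
    if u != v:
        adj[u].add(v); adj[v].add(u)

dist = {u: bfs_dist(adj, u) for u in range(n)}
landmarks = rng.sample(range(n), 8)        # roughly sqrt(n) landmarks
nearest = {t: min(landmarks, key=lambda l: dist[t][l]) for t in range(n)}
ball = {s: set(sorted(range(n), key=lambda v: dist[s][v])[:8])
        for s in range(n)}

def route_len(s, t):
    # Direct route if t is in s's local ball, else via t's nearest landmark.
    if t in ball[s]:
        return dist[s][t]
    l = nearest[t]
    return dist[s][l] + dist[l][t]

stretch = max(route_len(s, t) / dist[s][t]
              for s in range(n) for t in range(n) if s != t)
```

Each node here stores only O(√n) entries (its ball plus the landmarks), which is the space regime of the result above; achieving worst-case stretch exactly 3 with arbitrary node names is the hard part the paper solves.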
Computing The Volume Of Convex Bodies: A Case Where Randomness Provably Helps
, 1991
Abstract

Cited by 62 (6 self)
We discuss the problem of computing the volume of a convex body K in ℝⁿ. We review worst-case results which show that it is hard to deterministically approximate vol_n(K), and randomised approximation algorithms which show that with randomisation one can approximate it very nicely. We then provide some applications of this latter result.

1 Introduction
The mathematical study of areas and volumes is as old as civilization itself, and has been conducted for both intellectual and practical reasons. As far back as 2000 B.C., the Egyptians had methods for approximating the areas of fields (for taxation purposes) and the volumes of granaries. The exact study of areas and volumes began with Euclid and was carried to a high art form by Archimedes. The modern study of this subject began with the great astronomer Johann Kepler's treatise Nova stereometria doliorum vinariorum, wh...
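At a toy level, the power of randomisation for volume estimation can be seen with plain rejection sampling, which already gives a good estimate in low dimension (it degrades exponentially as n grows, which is why the paper's algorithms rely on much more sophisticated random walks):

```python
import random

def estimate_ball_volume(dim=3, samples=100000, seed=0):
    # Rejection sampling: the fraction of uniform points of the cube
    # [-1, 1]^dim that land inside the unit ball, scaled by the cube's
    # volume 2^dim, estimates the ball's volume.
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        if sum(rng.uniform(-1, 1) ** 2 for _ in range(dim)) <= 1.0:
            hits += 1
    return (2 ** dim) * hits / samples

vol = estimate_ball_volume()
# true volume of the unit ball in R^3 is 4*pi/3, about 4.18879
```

The hit probability of this naive sampler shrinks exponentially with the dimension, so it is only an illustration of "randomness helps", not of how the polynomial-time algorithms surveyed in the paper actually work.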
Rounds in communication complexity revisited
 SIAM Journal on Computing
, 1993
Abstract

Cited by 58 (8 self)
We also study the three-party communication model, and exhibit an exponential gap in 3-round protocols that differ in the starting player.
The Computational Complexity of Universal Hashing
 Theoretical Computer Science
, 2002
Abstract

Cited by 58 (3 self)
Any implementation of Carter-Wegman universal hashing from n-bit strings to m-bit strings requires a time-space tradeoff of T·S = Ω(nm). The bound holds in the general boolean branching program model, and thus in essentially any model of computation. As a corollary, computing a + b·c in any field F requires a quadratic time-space tradeoff, and the bound holds for any representation of the elements of the field. Other lower bounds on the...
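The family in question can be taken to be the standard Carter-Wegman construction h_{a,b}(x) = ((a·x + b) mod p) mod m; a minimal sketch checking the universality property empirically (illustration only, unrelated to the lower-bound machinery):

```python
import random

P = 10007  # prime modulus; arithmetic is in the field Z_P

def make_hash(a, b, m):
    # Carter-Wegman universal family for a in 1..P-1, b in 0..P-1:
    # distinct keys collide with probability about 1/m over (a, b).
    return lambda x: ((a * x + b) % P) % m

rng = random.Random(0)
m = 100
x, y = 12, 345            # any fixed pair of distinct keys
trials = 20000
collisions = 0
for _ in range(trials):
    h = make_hash(rng.randrange(1, P), rng.randrange(P), m)
    collisions += h(x) == h(y)
rate = collisions / trials  # should be near 1/m = 0.01
```

Evaluating any such h costs a multiplication in the field, which is exactly why the corollary about computing a + b·c transfers to a time-space lower bound for the hashing itself.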