Results 11–20 of 51
Pseudorandomness for Network Algorithms
 In Proceedings of the 26th Annual ACM Symposium on Theory of Computing
, 1994
"... We define pseudorandom generators for Yao's twoparty communication complexity model and exhibit a simple construction, based on expanders, for it. We then use a recursive composition of such generators to obtain pseudorandom generators that fool distributed network algorithms. While the construction ..."
Abstract

Cited by 42 (6 self)
We define pseudorandom generators for Yao's two-party communication complexity model and exhibit a simple construction, based on expanders, for it. We then use a recursive composition of such generators to obtain pseudorandom generators that fool distributed network algorithms. While the construction and the proofs are simple, we demonstrate the generality of such generators by giving several applications.
1 Introduction
The theory of pseudorandomness is aimed at understanding the minimum amount of randomness that a probabilistic model of computation actually needs. A typical result shows that n truly random bits used by the model can be replaced by n pseudorandom ones, generated deterministically from m ≪ n random bits, without significant difference in the behavior of the model. The deterministic function stretching the m random bits into n pseudorandom ones is called a pseudorandom generator, which is said to fool the model.
Dept. of Computer Science, UCSD. Supported by USA-Israel BSF gra...
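As a toy illustration of the expander-walk idea behind such generators (a sketch only, not the construction in the paper): take the 3-regular graph on Z_p whose neighbors of x are x+1, x−1 and x⁻¹, a standard example of an expander family, and emit the vertex labels along a short walk. The start vertex costs about log₂(p) seed bits and each step about log₂(3), while each step emits a full log₂(p)-bit label. The names and parameters below (`expander_walk`, p = 101) are illustrative.

```python
import random

def inv_mod(x, p):
    # modular inverse in Z_p; by convention 0 maps to itself
    return pow(x, p - 2, p) if x % p else 0

def expander_walk(seed, p=101, steps=20):
    # Walk on the 3-regular graph over Z_p with neighbors
    # x+1, x-1 and x^{-1} -- a known expander family, used here
    # only to illustrate the walk idea.  In a real generator the
    # seed itself supplies the start vertex and the step choices;
    # random.Random(seed) stands in for that here.
    rng = random.Random(seed)
    x = rng.randrange(p)          # start vertex: ~log2(p) seed bits
    out = []
    for _ in range(steps):
        d = rng.randrange(3)      # one step: ~log2(3) seed bits
        if d == 0:
            x = (x + 1) % p
        elif d == 1:
            x = (x - 1) % p
        else:
            x = inv_mod(x, p)
        out.append(x)             # each step emits a ~log2(p)-bit label
    return out

labels = expander_walk(seed=42)
```

The stretch comes from the gap between the log₂(3) seed bits consumed and the log₂(p) label bits emitted per step; the expansion of the graph is what keeps the emitted labels "random enough" for the applications the paper targets.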
An Expander-Based Approach to Geometric Optimization
 In Proc. 9th Annu. ACM Sympos. Comput. Geom.
, 1993
"... We present a new approach to problems in geometric optimization that are traditionally solved using the parametric searching technique of Megiddo [34]. Our new approach ..."
Abstract

Cited by 40 (16 self)
We present a new approach to problems in geometric optimization that are traditionally solved using the parametric searching technique of Megiddo [34]. Our new approach
Randomization and Derandomization in Space-Bounded Computation
 In Proceedings of the 11th Annual IEEE Conference on Computational Complexity
, 1996
"... This is a survey of spacebounded probabilistic computation, summarizing the present state of knowledge about the relationships between the various complexity classes associated with such computation. The survey especially emphasizes recent progress in the construction of pseudorandom generators tha ..."
Abstract

Cited by 36 (0 self)
This is a survey of space-bounded probabilistic computation, summarizing the present state of knowledge about the relationships between the various complexity classes associated with such computation. The survey especially emphasizes recent progress in the construction of pseudorandom generators that fool probabilistic space-bounded computations, and the application of such generators to obtain deterministic simulations.
Tight bounds for testing bipartiteness in general graphs
 SICOMP
"... In this paper we consider the problem of testing bipartiteness of general graphs. The problem has previously been studied in two models, one most suitable for dense graphs, and one most suitable for boundeddegree graphs. Roughly speaking, dense graphs can be tested for bipartiteness with constant c ..."
Abstract

Cited by 36 (14 self)
In this paper we consider the problem of testing bipartiteness of general graphs. The problem has previously been studied in two models, one most suitable for dense graphs, and one most suitable for bounded-degree graphs. Roughly speaking, dense graphs can be tested for bipartiteness with constant complexity, while the complexity of testing bounded-degree graphs is Θ̃(√n), where n is the number of vertices in the graph (and Θ̃(f(n)) means Θ(f(n) · polylog(f(n)))). Thus there is a large gap between the complexity of testing in the two cases. In this work we bridge the gap described above. In particular, we study the problem of testing bipartiteness in a model that is suitable for all densities. We present an algorithm whose complexity is Õ(min(√n, n²/m)), where m is the number of edges in the graph, and match it with an almost tight lower bound. This work is part of the author's Ph.D. thesis prepared at Tel Aviv University under the supervision of Prof.
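For contrast with the sublinear-complexity testers the abstract describes, the exact decision problem is easy in linear time: a graph is bipartite iff BFS 2-coloring never finds an edge joining two same-colored vertices. A minimal sketch (adjacency-dict representation and names are illustrative, not from the paper):

```python
from collections import deque

def is_bipartite(adj):
    # Exact test by BFS 2-coloring in O(n + m) time; a graph is
    # bipartite iff no edge ever joins two same-colored vertices.
    color = {}
    for s in adj:
        if s in color:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return False   # odd cycle found
    return True

even = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}  # 4-cycle
odd = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}   # 5-cycle
```

A property tester in the paper's sense gets only query access to the graph, examines a sublinear portion of it, and only distinguishes bipartite graphs from graphs that are far from bipartite; the Õ(min(√n, n²/m)) bound above measures that query complexity, not the O(n + m) of the exact test.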
Random Cayley Graphs and Expanders
 Random Structures Algorithms
, 1997
"... For every 1 ? ffi ? 0 there exists a c = c(ffi) ? 0 such that for every group G of order n, and for a set S of c(ffi) log n random elements in the group, the expected value of the second largest eigenvalue of the normalized adjacency matrix of the Cayley graph X(G;S) is at most (1\Gammaffi). Thi ..."
Abstract

Cited by 34 (0 self)
For every 1 > δ > 0 there exists a c = c(δ) > 0 such that for every group G of order n, and for a set S of c(δ) log n random elements in the group, the expected value of the second largest eigenvalue of the normalized adjacency matrix of the Cayley graph X(G; S) is at most 1 − δ. This implies that almost every such graph is an ε(δ)-expander. For Abelian groups this is essentially tight, and explicit constructions can be given in some cases.
Department of Mathematics, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Ramat Aviv, Tel Aviv, Israel. Research supported in part by a U.S.A.-Israeli BSF grant. † Department of Mathematics, Hebrew University of Jerusalem, Givat Ram, Jerusalem, Israel.
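The statement is easy to probe numerically for the cyclic group Z_n (the theorem covers every group; Z_n is simply the easiest to code, and the constant `c` below is a hypothetical knob standing in for c(δ)):

```python
import random
import numpy as np

def cayley_second_eigenvalue(n, c, seed=0):
    # Empirical probe for the cyclic group Z_n: pick ~c*log(n)
    # random elements, symmetrize the set, and return the second
    # largest eigenvalue of the normalized adjacency matrix of
    # the Cayley graph X(Z_n; S).
    rng = random.Random(seed)
    k = max(1, int(c * np.log(n)))
    S = set()
    for _ in range(k):
        s = rng.randrange(1, n)
        S.add(s)
        S.add((-s) % n)   # symmetrize so the Cayley graph is undirected
    A = np.zeros((n, n))
    for x in range(n):
        for s in S:
            A[x][(x + s) % n] = 1.0
    A /= len(S)           # normalized adjacency: top eigenvalue is 1
    eig = np.linalg.eigvalsh(A)   # eigenvalues in ascending order
    return float(eig[-2])

lam2 = cayley_second_eigenvalue(200, c=3.0)
```

For a random generator set of this size, `lam2` should land well below 1, in line with the 1 − δ bound; shrinking `c` toward 0 pushes it back toward 1, which is the Abelian tightness the abstract mentions.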
Linear Time Erasure Codes With Nearly Optimal Recovery
 Proc. of the 36 th Annual Symp. on Foundations of Computer Science
, 1995
"... An (n, c, ℓ, r)erasure code consists of an encoding algorithm and a decoding algorithm with the following properties. The encoding algorithm produces a set of ℓbit packets of total length cn from an nbit message. The decoding algorithm is able to recover the message from any set of packets whose ..."
Abstract

Cited by 33 (7 self)
An (n, c, ℓ, r)-erasure code consists of an encoding algorithm and a decoding algorithm with the following properties. The encoding algorithm produces a set of ℓ-bit packets of total length cn from an n-bit message. The decoding algorithm is able to recover the message from any set of packets whose total length is r, i.e., from any set of r/ℓ packets. We describe erasure codes where both the encoding and decoding algorithms run in linear time and where r is only slightly larger than n.
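A classical point of comparison (not the paper's construction) is Reed-Solomon-style erasure coding by polynomial evaluation and interpolation: it achieves the information-theoretic optimum r = n, but its decoding is superlinear, which is exactly the cost the paper's linear-time codes trade a slightly larger r to avoid. A toy sketch over GF(257), with one field element per "packet" and hypothetical names:

```python
# Reed-Solomon-style erasure coding over GF(257) by polynomial
# evaluation / interpolation: optimal recovery, superlinear time.

P = 257  # prime, large enough for one byte per symbol

def poly_mul(a, b):
    # multiply polynomials given as coefficient lists (constant first)
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def encode(message, c=2):
    # message symbols are the coefficients of a polynomial f;
    # emit c*len(message) "packets" of the form (x, f(x))
    n = len(message)
    return [(x, sum(m * pow(x, i, P) for i, m in enumerate(message)) % P)
            for x in range(1, c * n + 1)]

def decode(packets, n):
    # Lagrange interpolation: any n surviving packets determine f
    pts = packets[:n]
    coeffs = [0] * n
    for j, (xj, yj) in enumerate(pts):
        num, den = [1], 1
        for k, (xk, _) in enumerate(pts):
            if k != j:
                num = poly_mul(num, [(-xk) % P, 1])
                den = den * ((xj - xk) % P) % P
        inv_den = pow(den, P - 2, P)
        for i in range(n):
            coeffs[i] = (coeffs[i] + yj * num[i] * inv_den) % P
    return coeffs

msg = [5, 42, 7, 100]
pkts = encode(msg)                       # 8 packets, any 4 suffice
recovered = decode(pkts[3:7], len(msg))
```

Here decoding any n packets costs Θ(n²) field operations; the paper's codes replace this dense interpolation with sparse, linear-time structure at the price of needing r slightly above n.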
Randomness, Interactive Proofs and . . .
 Appears in The Universal Turing Machine: A Half-Century Survey, R. Herken ed.
, 1987
"... Recent approaches to the notions of randomness and proofs are surveyed. The new notions differ from the traditional ones in being subjective to the capabilities of the observer rather than reflecting "ideal " entities. The new notion of randomness regards probability distributions as equal if they c ..."
Abstract

Cited by 29 (6 self)
Recent approaches to the notions of randomness and proofs are surveyed. The new notions differ from the traditional ones in being relative to the capabilities of the observer rather than reflecting "ideal" entities. The new notion of randomness regards probability distributions as equal if they cannot be told apart by efficient procedures. This notion is constructive and is suited for many applications. The new notion of a proof allows the introduction of zero-knowledge proofs: convincing arguments which yield nothing but the validity of the assertion. The new approaches to randomness and proofs are based on basic concepts and results from the theory of resource-bounded computation. In order to make the survey as accessible as possible, we have presented elements of the theory of resource-bounded computation (but only to the extent required for the description of the new approaches). This survey is not intended to provide an account of the more traditional approaches to randomness (e.g., Kolmogorov complexity) and proofs (i.e., traditional logic systems). Whenever these approaches are described, it is only in order to confront them with the new approaches.
Optimal Slope Selection via Expanders
, 1993
"... Given n points in the plane and an integer k, the slope selection problem is to find the pair of points whose connecting line has the kth smallest slope. (In dual setting, given n lines in the plane, we want to find the vertex of their arrangement with the kth smallest xcoordinate.) Cole et al. ..."
Abstract

Cited by 25 (8 self)
Given n points in the plane and an integer k, the slope selection problem is to find the pair of points whose connecting line has the kth smallest slope. (In the dual setting, given n lines in the plane, we want to find the vertex of their arrangement with the kth smallest x-coordinate.) Cole et al. [6] have given an O(n log n) solution (which is optimal), using the parametric searching technique of Megiddo. We obtain another optimal (deterministic) solution that does not depend on parametric searching and uses expander graphs instead. Our solution is somewhat simpler than that of [6] and has a more explicit geometric interpretation. Keywords: computational geometry, algorithms, design of algorithms.
1 Introduction
In this paper we consider the slope selection problem, as defined in the abstract. For convenience, we prefer to study its dual version. We thus have a collection L = {ℓ₁, …, ℓₙ} of n lines in the plane, which we assume to be in general position, meaning that no li...
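To pin down the problem being solved, here is the obvious O(n² log n) brute force in the primal setting; the point of both [6] and this paper is to get the same answer in O(n log n) without enumerating all Θ(n²) slopes. Names and the tie/degeneracy handling below are illustrative.

```python
from itertools import combinations

def slope_select_bruteforce(points, k):
    # O(n^2 log n) baseline: enumerate all pairwise slopes and
    # take the k-th smallest (1-indexed).  The paper achieves
    # O(n log n) deterministically via expander graphs instead.
    slopes = []
    for (x1, y1), (x2, y2) in combinations(points, 2):
        if x1 != x2:              # ignore vertical connecting lines
            slopes.append((y2 - y1) / (x2 - x1))
    slopes.sort()
    return slopes[k - 1]

pts = [(0, 0), (1, 2), (2, 1), (3, 5)]
```

For these four points the six pairwise slopes, sorted, run from −1 (the pair (1, 2), (2, 1)) up to 4 (the pair (2, 1), (3, 5)), so k indexes into that sorted list.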
Lower Bounds on the Competitive Ratio for Mobile User Tracking and Distributed Job Scheduling
 Theoretical Computer Science
, 1992
"... 1 We prove a lower bound of Ω(log n / log log n) on the competitive ratio of any (deterministic or randomized) distributed algorithm for solving the mobile user problem introduced by Awerbuch and Peleg [5], on certain networks of n processors. Our lower bound holds for various networks, including th ..."
Abstract

Cited by 24 (4 self)
We prove a lower bound of Ω(log n / log log n) on the competitive ratio of any (deterministic or randomized) distributed algorithm for solving the mobile user problem introduced by Awerbuch and Peleg [5], on certain networks of n processors. Our lower bound holds for various networks, including the hypercube, any network with sufficiently large girth, and any highly expanding graph. A similar Ω(log n / log log n) lower bound is proved for the competitive ratio of the maximum job delay of any distributed algorithm for solving the distributed scheduling problem of Awerbuch, Kutten and Peleg [4] on any of these networks. The proofs combine combinatorial techniques with tools from linear algebra and harmonic analysis and apply, in particular, a generalization of the vertex isoperimetric problem on the hypercube, which may be of independent interest.