Results 11–20 of 73
A linear time erasure-resilient code with nearly optimal recovery
 IEEE Transactions on Information Theory
, 1996
"... We develop an efficient scheme that produces an encoding of a given message such that the message can be decoded from any portion of the encoding that is approximately equal to the length of the message. More precisely, an (n, c, ℓ, r)erasureresilient code consists of an encoding algorithm and a d ..."
Abstract

Cited by 54 (6 self)
 Add to MetaCart
(Show Context)
We develop an efficient scheme that produces an encoding of a given message such that the message can be decoded from any portion of the encoding that is approximately equal to the length of the message. More precisely, an (n, c, ℓ, r)-erasure-resilient code consists of an encoding algorithm and a decoding algorithm with the following properties. The encoding algorithm produces a set of ℓ-bit packets of total length cn from an n-bit message. The decoding algorithm is able to recover the message from any set of packets whose total length is r, i.e., from any set of r/ℓ packets. We describe erasure-resilient codes where both the encoding and decoding algorithms run in linear time and where r is only slightly larger than n.
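The (n, c, ℓ, r) interface described in the abstract can be illustrated, independently of the paper's linear-time construction, by the classical polynomial-evaluation (Reed-Solomon style) erasure code: the message symbols become coefficients of a polynomial over a prime field, each packet is an evaluation point, and any k packets recover a degree-(k − 1) polynomial. This sketch decodes in quadratic time, unlike the linear-time codes of the paper; the field prime and packet counts are illustrative choices.

```python
# Sketch of the erasure-code interface via polynomial evaluation over GF(P).
# This is NOT the paper's linear-time construction: decoding here is
# quadratic-time Lagrange interpolation. P and sizes are illustrative.

P = 2**31 - 1  # a Mersenne prime; arithmetic is over the field GF(P)

def encode(message, num_packets):
    """Message symbols are polynomial coefficients; packet i is (i, f(i))."""
    return [(x, sum(m * pow(x, j, P) for j, m in enumerate(message)) % P)
            for x in range(1, num_packets + 1)]

def decode(packets, k):
    """Recover the k message symbols from any k packets (Lagrange)."""
    pts = packets[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(pts):
        basis = [1]   # coefficients of prod_{j != i} (x - x_j), low to high
        denom = 1
        for j, (xj, _) in enumerate(pts):
            if j == i:
                continue
            new = [0] * (len(basis) + 1)
            for t, b in enumerate(basis):      # basis *= (x - x_j)
                new[t] = (new[t] - b * xj) % P
                new[t + 1] = (new[t + 1] + b) % P
            basis = new
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P  # y_i / denom via Fermat inverse
        for t, b in enumerate(basis):
            coeffs[t] = (coeffs[t] + scale * b) % P
    return coeffs
```

For example, encoding a 4-symbol message into 9 packets (a blowup factor c = 9/4) lets any 4 of the 9 packets reconstruct the message, matching the "any r/ℓ packets" recovery guarantee.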
Pseudorandomness for Network Algorithms
 In Proceedings of the 26th Annual ACM Symposium on Theory of Computing
, 1994
"... We define pseudorandom generators for Yao's twoparty communication complexity model and exhibit a simple construction, based on expanders, for it. We then use a recursive composition of such generators to obtain pseudorandom generators that fool distributed network algorithms. While the constru ..."
Abstract

Cited by 52 (5 self)
 Add to MetaCart
(Show Context)
We define pseudorandom generators for Yao's two-party communication complexity model and exhibit a simple construction, based on expanders, for it. We then use a recursive composition of such generators to obtain pseudorandom generators that fool distributed network algorithms. While the construction and the proofs are simple, we demonstrate the generality of such generators by giving several applications.

1 Introduction

The theory of pseudorandomness is aimed at understanding the minimum amount of randomness that a probabilistic model of computation actually needs. A typical result shows that n truly random bits used by the model can be replaced by n pseudorandom ones, generated deterministically from m ≪ n random bits, without significant difference in the behavior of the model. The deterministic function stretching the m random bits into n pseudorandom ones is called a pseudorandom generator, which is said to fool the model.
Dept. of Computer Science, UCSD. Supported by USA-Israel BSF gra...
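The expander-walk paradigm behind such generators can be sketched concretely: a short seed selects a start vertex and a few step labels on an expander, and outputting the visited vertex labels stretches the seed into many pseudorandom bits. This toy sketch is not the paper's communication-complexity generator or its recursive composition; the Margulis-type graph on Z_m × Z_m and all sizes are illustrative assumptions.

```python
# Toy expander-walk randomness stretcher: 18 seed bits pick a start vertex
# of a Margulis-type 8-regular graph on Z_M x Z_M (an illustrative choice,
# not the paper's construction), then each group of 3 seed bits picks one
# step of the walk; the visited vertices are the "pseudorandom" output.

M = 257  # side length of the vertex grid (illustrative)

def neighbors(v):
    x, y = v
    gens = [(x + y, y), (x - y, y),           # T1 and its inverse
            (x + y + 1, y), (x - y - 1, y),   # T2 and its inverse
            (x, y + x), (x, y - x),           # T3 and its inverse
            (x, y + x + 1), (x, y - x - 1)]   # T4 and its inverse
    return [(a % M, b % M) for a, b in gens]

def expander_walk_prg(seed_bits):
    """Consume 18 bits for the start vertex, then 3 bits per walk step."""
    def take(i, k):
        return int(''.join(map(str, seed_bits[i:i + k])), 2)
    v = (take(0, 9) % M, take(9, 9) % M)
    out = [v]
    i = 18
    while i + 3 <= len(seed_bits):
        v = neighbors(v)[take(i, 3)]  # 3 seed bits choose one of 8 neighbors
        out.append(v)
        i += 3
    return out
```

A seed of 18 + 3t bits thus yields t + 1 vertex labels of about 16 bits each, which is the kind of stretching the introduction describes.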
Tiny Families of Functions with Random Properties: A Quality-Size Tradeoff for Hashing
, 2003
"... We present three explicit constructions of hash functions, which exhibit a tradeo# between the size of the family (and hence the number of random bits needed to generate a member of the family), and the quality (or error parameter) of the pseudorandom property it achieves. Unlike previous const ..."
Abstract

Cited by 51 (10 self)
 Add to MetaCart
We present three explicit constructions of hash functions, which exhibit a tradeoff between the size of the family (and hence the number of random bits needed to generate a member of the family), and the quality (or error parameter) of the pseudorandom property it achieves. Unlike previous constructions, most notably universal hashing, the size of our families is essentially independent of the size of the domain on which the functions operate.
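For contrast, the universal-hashing baseline the abstract mentions can be sketched in a few lines: a pairwise-independent family over GF(p), where describing a member takes about 2 log p bits, i.e., the description length grows with the domain size p. That is exactly the dependence the paper's "tiny" families avoid. The prime and table sizes below are illustrative.

```python
# Classical pairwise-independent family h_{a,b}(x) = ((a*x + b) mod p) mod m.
# Sampling a member costs ~2 log p random bits, growing with the domain
# size p -- the dependence the paper's tiny families avoid. Sizes are
# illustrative.

import random

P = 2**61 - 1  # prime upper-bounding the domain (illustrative)

def sample_hash(m, rng=random):
    """Draw (a, b) uniformly; return h mapping the domain into m buckets."""
    a = rng.randrange(1, P)
    b = rng.randrange(P)
    return lambda x: ((a * x + b) % P) % m

rng = random.Random(0)
h = sample_hash(100, rng)
buckets = [h(x) for x in range(1000)]
```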
Tight bounds for testing bipartiteness in general graphs
 SICOMP
"... In this paper we consider the problem of testing bipartiteness of general graphs. The problem has previously been studied in two models, one most suitable for dense graphs, and one most suitable for boundeddegree graphs. Roughly speaking, dense graphs can be tested for bipartiteness with constant c ..."
Abstract

Cited by 42 (13 self)
 Add to MetaCart
In this paper we consider the problem of testing bipartiteness of general graphs. The problem has previously been studied in two models, one most suitable for dense graphs, and one most suitable for bounded-degree graphs. Roughly speaking, dense graphs can be tested for bipartiteness with constant complexity, while the complexity of testing bounded-degree graphs is Θ̃(√n), where n is the number of vertices in the graph (and Θ̃(f(n)) means Θ(f(n) · polylog(f(n)))). Thus there is a large gap between the complexity of testing in the two cases. In this work we bridge the gap described above. In particular, we study the problem of testing bipartiteness in a model that is suitable for all densities. We present an algorithm whose complexity is Õ(min(√n, n²/m)), where m is the number of edges in the graph, and match it with an almost tight lower bound. This work is part of the author's Ph.D. thesis prepared at Tel Aviv University under the supervision of Prof.
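A basic ingredient of bipartiteness testers is the exact check they invoke on a small explored region: 2-color by BFS and report a conflict, which witnesses an odd cycle. The sketch below is only that exact subroutine with an exploration budget, not the paper's sublinear tester with its √n versus n²/m complexity balance; the budget parameter is an illustrative stand-in for the tester's sample size.

```python
# Exact ingredient of bipartiteness testing: BFS 2-coloring with a budget,
# reporting an odd cycle (a coloring conflict) in the explored region.
# This is NOT the paper's sublinear tester; `budget` is illustrative.

from collections import deque

def finds_odd_cycle(adj, start, budget):
    """Explore up to `budget` vertices by BFS; True iff a conflict appears."""
    color = {start: 0}
    q = deque([start])
    while q and len(color) < budget:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in color:
                color[v] = color[u] ^ 1  # alternate sides along BFS edges
                q.append(v)
            elif color[v] == color[u]:
                return True              # both endpoints same side: odd cycle
    return False
```

On a triangle this returns True; on a path it returns False, since a path is 2-colorable.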
An Expander-Based Approach to Geometric Optimization
 In Proc. 9th Annu. ACM Sympos. Comput. Geom.
, 1993
"... We present a new approach to problems in geometric optimization that are traditionally solved using the parametric searching technique of Megiddo [34]. Our new approach ..."
Abstract

Cited by 42 (15 self)
 Add to MetaCart
We present a new approach to problems in geometric optimization that are traditionally solved using the parametric searching technique of Megiddo [34]. Our new approach
Randomization and Derandomization in Space-Bounded Computation
 In Proceedings of the 11th Annual IEEE Conference on Computational Complexity
, 1996
"... This is a survey of spacebounded probabilistic computation, summarizing the present state of knowledge about the relationships between the various complexity classes associated with such computation. The survey especially emphasizes recent progress in the construction of pseudorandom generators tha ..."
Abstract

Cited by 40 (0 self)
 Add to MetaCart
(Show Context)
This is a survey of space-bounded probabilistic computation, summarizing the present state of knowledge about the relationships between the various complexity classes associated with such computation. The survey especially emphasizes recent progress in the construction of pseudorandom generators that fool probabilistic space-bounded computations, and the application of such generators to obtain deterministic simulations.
Scalable Secure Storage when Half the System Is Faulty
 Information and Computation
, 2000
"... In this paper, we provide a method to safely store a document in perhaps the most challenging settings, a highly decentralized replicated storage system where up to half of the storage servers may incur arbitrary failures, including alterations to data stored in them. Using an error correcting code ..."
Abstract

Cited by 40 (5 self)
 Add to MetaCart
In this paper, we provide a method to safely store a document in perhaps the most challenging setting: a highly decentralized replicated storage system where up to half of the storage servers may incur arbitrary failures, including alterations to data stored in them. Using an error-correcting code (ECC), e.g., a Reed-Solomon code, one can take n pieces of a document and replace each piece with another piece larger by a factor of n/(n − 2t + 1), such that it is possible to recover the original set even when up to t of the larger pieces are altered. For t close to n/2 the space blowup factor of this scheme is close to n, and the overhead of an ECC such as the Reed-Solomon code degenerates to that of a trivial replication code. We show a technique to reduce this large space overhead for high values of t. Our scheme blows up each piece by a factor slightly larger than two using an erasure code which makes it possible to recover the original set using n/2 − O(n/d) of the pieces, where d ≈ 80 is a fixed constant. Then we attach to each piece O(d log n / log d) additional bits to make it possible to identify a large enough set of unmodified pieces, with negligible error probability, assuming that at least half the pieces are unmodified, and with low complexity. For values of t close to n/2 we achieve a large asymptotic space reduction over the best possible space blowup of any ECC in the deterministic setting. Our approach makes use of a d-regular expander graph to compute the bits required for the identification of n/2 − O(n/d) good pieces.
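The blowup factor n/(n − 2t + 1) quoted in the abstract can be checked numerically: it stays modest for small t but approaches n as t approaches n/2, which is the regime where the paper's factor-two scheme wins. The values of n and t below are illustrative.

```python
# Space blowup of an ECC (e.g. Reed-Solomon) tolerating t altered pieces
# out of n: each piece grows by n / (n - 2t + 1), which degenerates toward
# n as t -> n/2. The (n, t) values are illustrative.

def ecc_blowup(n, t):
    return n / (n - 2 * t + 1)

for n, t in [(100, 10), (100, 40), (100, 49)]:
    print(n, t, round(ecc_blowup(n, t), 1))
```

At n = 100, t = 49 the factor is 100/3 ≈ 33.3, far above the factor of roughly two achieved by the scheme in the abstract.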
Linear Time Erasure Codes With Nearly Optimal Recovery
 Proc. of the 36th Annual Symp. on Foundations of Computer Science
, 1995
"... An (n, c, ℓ, r)erasure code consists of an encoding algorithm and a decoding algorithm with the following properties. The encoding algorithm produces a set of ℓbit packets of total length cn from an nbit message. The decoding algorithm is able to recover the message from any set of packets whose ..."
Abstract

Cited by 39 (8 self)
 Add to MetaCart
(Show Context)
An (n, c, ℓ, r)-erasure code consists of an encoding algorithm and a decoding algorithm with the following properties. The encoding algorithm produces a set of ℓ-bit packets of total length cn from an n-bit message. The decoding algorithm is able to recover the message from any set of packets whose total length is r, i.e., from any set of r/ℓ packets. We describe erasure codes where both the encoding and decoding algorithms run in linear time and where r is only slightly larger than n.
Random Cayley Graphs and Expanders
 Random Structures Algorithms
, 1997
"... For every 1 ? ffi ? 0 there exists a c = c(ffi) ? 0 such that for every group G of order n, and for a set S of c(ffi) log n random elements in the group, the expected value of the second largest eigenvalue of the normalized adjacency matrix of the Cayley graph X(G;S) is at most (1\Gammaffi). Thi ..."
Abstract

Cited by 38 (1 self)
 Add to MetaCart
(Show Context)
For every 1 > δ > 0 there exists a c = c(δ) > 0 such that for every group G of order n, and for a set S of c(δ) log n random elements in the group, the expected value of the second largest eigenvalue of the normalized adjacency matrix of the Cayley graph X(G; S) is at most 1 − δ. This implies that almost every such graph is an ε(δ)-expander. For Abelian groups this is essentially tight, and explicit constructions can be given in some cases.
Department of Mathematics, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Ramat Aviv, Tel Aviv, Israel. Research supported in part by a U.S.A.-Israeli BSF grant. † Department of Mathematics, Hebrew University of Jerusalem, Givat Ram, Jerusalem, Israel.
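The statement can be probed numerically for a concrete group. The sketch below takes the cyclic group Z_n, draws about c log n random elements, symmetrizes with inverses, and estimates the second-largest eigenvalue (in absolute value) of the normalized adjacency matrix of the resulting Cayley graph by power iteration on the complement of the all-ones eigenvector. The group choice, c, n, and the iteration count are illustrative assumptions, and the estimate is only as good as the iteration converges.

```python
# Numerical probe of the abstract's claim for G = Z_n: estimate the
# second-largest eigenvalue (in absolute value) of the normalized adjacency
# matrix of a random Cayley graph by power iteration kept orthogonal to the
# all-ones (trivial) eigenvector. Group, c, n, iters are illustrative.

import math
import random

def second_eigenvalue_cayley_Zn(n, c=4, iters=500, rng=random):
    gens = {rng.randrange(1, n) for _ in range(int(c * math.log(n)))}
    gens |= {(n - g) % n for g in gens}   # symmetrize: S := S u S^{-1}
    gens = sorted(gens)
    d = len(gens)
    # random start vector, projected off the all-ones eigenvector
    v = [rng.random() for _ in range(n)]
    mean = sum(v) / n
    v = [x - mean for x in v]
    for _ in range(iters):
        # multiply by the normalized adjacency matrix of X(Z_n, S)
        w = [sum(v[(i - g) % n] for g in gens) / d for i in range(n)]
        mean = sum(w) / n
        w = [x - mean for x in w]         # re-project off all-ones
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    w = [sum(v[(i - g) % n] for g in gens) / d for i in range(n)]
    return math.sqrt(sum(x * x for x in w))   # ~ |lambda_2| for unit v
```

For moderate n the returned value typically sits well below 1, consistent with the expansion the theorem predicts for c log n random generators.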