Results 1–10 of 336
Effective Erasure Codes for Reliable Computer Communication Protocols
, 1997
"... Reliable communication protocols require that all the intended recipients of a message receive the message intact. Automatic Repeat reQuest (ARQ) techniques are used in unicast protocols, but they do not scale well to multicast protocols with large groups of receivers, since segment losses tend to b ..."
Abstract

Cited by 410 (14 self)
 Add to MetaCart
Reliable communication protocols require that all the intended recipients of a message receive the message intact. Automatic Repeat reQuest (ARQ) techniques are used in unicast protocols, but they do not scale well to multicast protocols with large groups of receivers, since segment losses tend to become uncorrelated, greatly reducing the effectiveness of retransmissions. In such cases, Forward Error Correction (FEC) techniques can be used, consisting of the transmission of redundant packets (based on error correcting codes) that allow the receivers to recover from independent packet losses. Despite the widespread use of error correcting codes in many fields of information processing, and a general consensus on the usefulness of FEC techniques within some of the Internet protocols, very few actual implementations of the latter exist. This probably derives from the different types of applications, and from concerns related to the complexity of implementing such codes in software. To f...
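To make the FEC scheme described above concrete, here is a minimal sketch assuming the simplest possible erasure code: one XOR parity packet per group of k data packets, which recovers a single loss per group. The paper itself works with more general (k, n) erasure codes that tolerate multiple losses; all names below are illustrative.

```python
# Minimal packet-level FEC sketch: one XOR parity packet per group of k
# equal-length data packets, recovering at most one lost packet per group.
# (The paper's codes handle multiple losses; this only shows the idea.)

def encode_group(packets: list[bytes]) -> bytes:
    """Return the parity packet: byte-wise XOR of all k data packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received: dict[int, bytes], parity: bytes, k: int) -> dict[int, bytes]:
    """Reconstruct the single missing packet (indices 0..k-1), if exactly one."""
    missing = [i for i in range(k) if i not in received]
    if len(missing) == 1:
        buf = bytearray(parity)
        for p in received.values():
            for i, b in enumerate(p):
                buf[i] ^= b
        received[missing[0]] = bytes(buf)
    return received

packets = [b"aaaa", b"bbbb", b"cccc"]
parity = encode_group(packets)
got = {0: packets[0], 2: packets[2]}      # packet 1 was lost in transit
assert recover(got, parity, k=3)[1] == b"bbbb"
```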
FARSITE: Federated, Available, and Reliable Storage for an Incompletely Trusted Environment
 In Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI)
, 2002
"... Farsite is a secure, scalable file system that logically functions as a centralized file server but is physically distributed among a set of trotrusted computers. Farsite provides file availability and reliability through randomized replicated storage; it ensures the secrecy of file contents with cr ..."
Abstract

Cited by 383 (11 self)
 Add to MetaCart
Farsite is a secure, scalable file system that logically functions as a centralized file server but is physically distributed among a set of untrusted computers. Farsite provides file availability and reliability through randomized replicated storage; it ensures the secrecy of file contents with cryptographic techniques; it maintains the integrity of file and directory data with a Byzantine-fault-tolerant protocol; it is designed to be scalable by using a distributed hint mechanism and delegation certificates for pathname translations; and it achieves good performance by locally caching file data, lazily propagating file updates, and varying the duration and granularity of content leases. We report on the design of Farsite and the lessons we have learned by implementing much of that design.
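As a rough illustration of the "randomized replicated storage" mentioned above (not Farsite's actual placement algorithm), one can picture each file being assigned to a small random subset of machines, so that no fixed server is critical and simultaneous loss of all replicas is unlikely:

```python
import random

# Toy sketch of randomized replica placement; the names and the per-file
# seeding are illustrative assumptions, not Farsite's actual mechanism.

def place_replicas(file_id: str, machines: list[str], r: int = 3) -> list[str]:
    rng = random.Random(file_id)       # deterministic per file, for the demo
    return rng.sample(machines, r)

machines = [f"host{i}" for i in range(10)]
print(place_replicas("/docs/report.txt", machines))
```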
TCP-like congestion control for layered multicast data transfer
, 1998
"... Abstract—We present a novel congestion control algorithm suitable for use with cumulative, layered data streams in the MBone. Our algorithm behaves similarly to TCP congestion control algorithms, and shares bandwidth fairly with other instances of the protocol and with TCP flows. It is entirely rece ..."
Abstract

Cited by 344 (12 self)
 Add to MetaCart
We present a novel congestion control algorithm suitable for use with cumulative, layered data streams in the MBone. Our algorithm behaves similarly to TCP congestion control algorithms and shares bandwidth fairly with other instances of the protocol and with TCP flows. It is entirely receiver-driven and requires no per-receiver state at the sender, in order to scale to large numbers of receivers. It relies on standard functionality of multicast routers, and is suitable for both continuous-stream and reliable bulk data transfer. In the paper we illustrate the algorithm, characterize its response to losses both analytically and by simulations, and analyse its behaviour using simulations and experiments in real networks. We also show how error recovery can be dealt with independently from congestion control by using FEC techniques, so as to provide reliable bulk data transfer.
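The receiver-driven idea can be sketched as a simple control loop: each receiver subscribes to some number of cumulative layers, drops the top layer when it sees loss, and joins another after a sufficiently long loss-free period. The actual protocol coordinates join attempts via synchronization points and sender probes; the skeleton below, with illustrative names and thresholds, only shows the local decision rule.

```python
# Skeleton of receiver-driven layered congestion control: no per-receiver
# state at the sender; each receiver adjusts its own subscription level.
# Thresholds and names are illustrative, not the paper's exact rules.

class LayeredReceiver:
    def __init__(self, max_layers: int):
        self.max_layers = max_layers
        self.layers = 1               # always keep the base layer
        self.quiet = 0                # consecutive loss-free intervals

    def on_interval(self, saw_loss: bool) -> None:
        if saw_loss:
            self.layers = max(1, self.layers - 1)   # leave the top group
            self.quiet = 0
        else:
            self.quiet += 1
            # back off exponentially before each further join attempt
            if self.quiet >= 2 ** self.layers:
                self.layers = min(self.max_layers, self.layers + 1)
                self.quiet = 0

r = LayeredReceiver(max_layers=5)
for loss in [False] * 10 + [True] + [False] * 4:
    r.on_interval(loss)
print(r.layers)
```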
Fuzzy extractors: How to generate strong keys from biometrics and other noisy data. Technical Report 2003/235, Cryptology ePrint Archive, http://eprint.iacr.org, 2006. Previous version appeared at EUROCRYPT 2004
 Yevgeniy Dodis, Leonid Reyzin, and Adam Smith
, 2004
"... We provide formal definitions and efficient secure techniques for • turning noisy information into keys usable for any cryptographic application, and, in particular, • reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying mater ..."
Abstract

Cited by 291 (34 self)
 Add to MetaCart
We provide formal definitions and efficient secure techniques for • turning noisy information into keys usable for any cryptographic application, and, in particular, • reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a fuzzy extractor reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A secure sketch produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of “closeness” of input data, such as Hamming distance, edit distance, and set difference.
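The secure-sketch primitive can be illustrated with the code-offset idea for Hamming distance: publish s = w XOR c for a random codeword c, and later decode (w' XOR s) back to c to recover w. The toy below uses a 3x bit-repetition code as the error-correcting code, which is only meant to make the mechanics visible; a real construction uses a stronger code, and a full fuzzy extractor would additionally hash w into the uniform key R.

```python
import secrets

# Toy code-offset secure sketch over Hamming distance. The repetition code
# and bit-list encoding are illustrative simplifications.

def rep3_encode(bits):
    return [b for b in bits for _ in range(3)]

def rep3_decode(bits):
    # majority vote over each group of three copies
    return [1 if sum(bits[i:i + 3]) >= 2 else 0 for i in range(0, len(bits), 3)]

def sketch(w):
    c = rep3_encode([secrets.randbelow(2) for _ in range(len(w) // 3)])
    return [wi ^ ci for wi, ci in zip(w, c)]          # public value s = w XOR c

def recover(w_noisy, s):
    c = rep3_encode(rep3_decode([wi ^ si for wi, si in zip(w_noisy, s)]))
    return [si ^ ci for si, ci in zip(s, c)]          # equals w if w' is close

w = [1, 0, 1, 1, 0, 1]                   # 6-bit "biometric" reading
s = sketch(w)
w_noisy = list(w); w_noisy[2] ^= 1       # one bit flips on re-reading
assert recover(w_noisy, s) == w
```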
Efficient erasure correcting codes
 IEEE Transactions on Information Theory
, 2001
"... Abstract—We introduce a simple erasure recovery algorithm for codes derived from cascades of sparse bipartite graphs and analyze the algorithm by analyzing a corresponding discretetime random process. As a result, we obtain a simple criterion involving the fractions of nodes of different degrees on ..."
Abstract

Cited by 250 (20 self)
 Add to MetaCart
We introduce a simple erasure recovery algorithm for codes derived from cascades of sparse bipartite graphs and analyze the algorithm by analyzing a corresponding discrete-time random process. As a result, we obtain a simple criterion involving the fractions of nodes of different degrees on both sides of the graph which is necessary and sufficient for the decoding process to finish successfully with high probability. By carefully designing these graphs we can construct, for any given rate R and any given real number ε, a family of linear codes of rate R which can be encoded in time proportional to ln(1/ε) times their block length n. Furthermore, a codeword can be recovered with high probability from a portion of its entries of length (1 + ε)Rn or more. The recovery algorithm also runs in time proportional to n ln(1/ε). Our algorithms have been implemented and work well in practice; various implementation issues are discussed. Index Terms—Erasure channel, large deviation analysis, low-density parity-check codes.
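For orientation, the criterion alluded to above can be written in terms of the edge-perspective degree distributions; the statement below is recalled from the published analysis and given as a sketch, not quoted from this abstract. Here λ_i and ρ_i are the fractions of edges incident to degree-i left and right nodes, and δ is the fraction of erased symbols.

```latex
% Sketch of the successful-decoding criterion (recalled from the literature):
\lambda(x) = \sum_i \lambda_i x^{i-1}, \qquad
\rho(x) = \sum_i \rho_i x^{i-1},
\qquad\text{decoding succeeds w.h.p. iff}\qquad
\delta \, \lambda\bigl(1 - \rho(1 - x)\bigr) < x
\quad \text{for all } x \in (0, \delta].
```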
Improved Decoding of Reed-Solomon and Algebraic-Geometry Codes
 IEEE Transactions on Information Theory
, 1999
"... Given an errorcorrecting code over strings of length n and an arbitrary input string also of length n, the list decoding problem is that of finding all codewords within a specified Hamming distance from the input string. We present an improved list decoding algorithm for decoding ReedSolomon codes ..."
Abstract

Cited by 244 (40 self)
 Add to MetaCart
Given an error-correcting code over strings of length n and an arbitrary input string also of length n, the list decoding problem is that of finding all codewords within a specified Hamming distance from the input string. We present an improved list decoding algorithm for decoding Reed-Solomon codes. The list decoding problem for Reed-Solomon codes reduces to the following "curve-fitting" problem over a field F: Given n points {(x_i, y_i)}_{i=1}^n, x_i, y_i ∈ F, ...
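The "curve-fitting" formulation can be made concrete with a deliberately naive decoder: over a tiny prime field, enumerate every polynomial of degree below k and keep those agreeing with at least t of the received points. The paper's contribution is doing this efficiently via interpolation; the brute force below, with illustrative parameters, only demonstrates the problem being solved.

```python
from itertools import product

P = 7   # toy prime field GF(7)

def list_decode(points: list[tuple[int, int]], k: int, t: int):
    """All coefficient tuples (c0, ..., c_{k-1}) agreeing with >= t points."""
    out = []
    for coeffs in product(range(P), repeat=k):
        agree = sum(
            1 for x, y in points
            if sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P == y
        )
        if agree >= t:
            out.append(coeffs)
    return out

# evaluations of p(x) = 2 + 3x at x = 0..6, with two corrupted positions
pts = [(x, (2 + 3 * x) % P) for x in range(P)]
pts[1] = (1, 0)
pts[4] = (4, 6)
print(list_decode(pts, k=2, t=5))        # (2, 3) appears among the candidates
```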
Practical Loss-Resilient Codes
, 1997
"... We present a randomized construction of lineartime encodable and decodable codes that can transmit over lossy channels at rates extremely close to capacity. The encoding and decoding algorithms for these codes have fast and simple software implementations. Partial implementations of our algorithms ..."
Abstract

Cited by 227 (26 self)
 Add to MetaCart
We present a randomized construction of linear-time encodable and decodable codes that can transmit over lossy channels at rates extremely close to capacity. The encoding and decoding algorithms for these codes have fast and simple software implementations. Partial implementations of our algorithms are faster by orders of magnitude than the best software implementations of any previous algorithm for this problem. We expect these codes will be extremely useful for applications such as real-time audio and video transmission over the Internet, where lossy channels are common and fast decoding is a requirement. Despite the simplicity of the algorithms, their design and analysis are mathematically intricate. The design requires the careful choice of a random irregular bipartite graph, where the structure of the irregular graph is extremely important. We model the progress of the decoding algorithm by a set of differential equations. The solution to these equations can then be expressed as p...
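The decoding process whose progress those differential equations track is the simple "peeling" rule: whenever some check node has exactly one erased neighbor, that erasure equals the XOR of the check's known neighbors. A toy sketch, with a hand-made graph rather than a carefully designed irregular one:

```python
# Peeling-style erasure decoder on a bipartite graph. values[i] is a bit or
# None (erased); each check lists variable indices whose XOR must be 0.
# Graph and codeword here are a toy example.

def peel(values: list, checks: list[list[int]]) -> list:
    progress = True
    while progress:
        progress = False
        for check in checks:
            erased = [i for i in check if values[i] is None]
            if len(erased) == 1:
                x = 0
                for i in check:
                    if values[i] is not None:
                        x ^= values[i]
                values[erased[0]] = x
                progress = True
    return values

# valid codeword [1, 1, 0, 1, 1] for checks v0^v1^v2, v1^v3, v2^v3^v4
received = [1, None, None, 1, 1]
print(peel(received, [[0, 1, 2], [1, 3], [2, 3, 4]]))   # -> [1, 1, 0, 1, 1]
```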
Sampling signals with finite rate of innovation
 IEEE Transactions on Signal Processing
, 2002
"... Abstract—Consider classes of signals that have a finite number of degrees of freedom per unit of time and call this number the rate of innovation. Examples of signals with a finite rate of innovation include streams of Diracs (e.g., the Poisson process), nonuniform splines, and piecewise polynomials ..."
Abstract

Cited by 214 (51 self)
 Add to MetaCart
Consider classes of signals that have a finite number of degrees of freedom per unit of time, and call this number the rate of innovation. Examples of signals with a finite rate of innovation include streams of Diracs (e.g., the Poisson process), nonuniform splines, and piecewise polynomials. Even though these signals are not bandlimited, we show that they can be sampled uniformly at (or above) the rate of innovation using an appropriate kernel and then be perfectly reconstructed. Thus, we prove sampling theorems for classes of signals and kernels that generalize the classic “bandlimited and sinc kernel” case. In particular, we show how to sample and reconstruct periodic and finite-length streams of Diracs, nonuniform splines, and piecewise polynomials using sinc and Gaussian kernels. For infinite-length signals with finite local rate of innovation, we show local sampling and reconstruction based on spline kernels. The key in all constructions is to identify the innovative part of a signal (e.g., time instants and weights of Diracs) using an annihilating or locator filter: a device well known in spectral analysis and error-correction coding. This leads to standard computational procedures for solving the sampling problem, which we show through experimental results. Applications of these new sampling results can be found in signal processing, communications systems, and biological systems. Index Terms—Analog-to-digital conversion, annihilating filters, generalized sampling, non-bandlimited signals, nonuniform splines, piecewise polynomials, Poisson processes, sampling.
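The annihilating-filter step can be sketched numerically: if tau[m] = sum_k a_k u_k^m are Fourier-domain samples of K Diracs, a length-(K+1) filter whose roots are the u_k annihilates the sequence, so solving a small linear system and rooting the filter recovers the locations. The NumPy sketch below uses illustrative values and assumes noiseless coefficients.

```python
import numpy as np

K = 2
locs = np.array([0.2, 0.55])              # Dirac positions in [0, 1)
amps = np.array([1.0, 3.0])
u = np.exp(-2j * np.pi * locs)
m = np.arange(2 * K)                      # 2K coefficients suffice
tau = (amps * u ** m[:, None]).sum(axis=1)

# Annihilation: tau[m] + h1*tau[m-1] + ... + hK*tau[m-K] = 0 for m = K..2K-1,
# with the filter normalized so that h0 = 1.
A = np.array([[tau[mm - j] for j in range(1, K + 1)] for mm in range(K, 2 * K)])
h_tail = np.linalg.solve(A, -tau[K:2 * K])
roots = np.roots(np.concatenate(([1.0], h_tail)))
recovered = np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1))
print(recovered)                          # approximately [0.2, 0.55]
```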
Adaptive FEC-based error control for Internet telephony
 in Proc. IEEE INFOCOM
, 1999
"... www.inria.fr/rodeo/{bolot,sfosse} ..."
SplitStream: High-bandwidth content distribution in cooperative environments
, 2003
"... In treebased multicast systems, a relatively small number of interior nodes carry the load of forwarding multicast messages. This works well when the interior nodes are dedicated infrastructure routers. But it poses a problem in cooperative applicationlevel multicast, where participants expect to ..."
Abstract

Cited by 173 (4 self)
 Add to MetaCart
In tree-based multicast systems, a relatively small number of interior nodes carry the load of forwarding multicast messages. This works well when the interior nodes are dedicated infrastructure routers, but it poses a problem in cooperative application-level multicast, where participants expect to contribute resources proportional to the benefit they derive from using the system. Moreover, many participants may not have the network capacity and availability required of an interior node in high-bandwidth multicast applications. SplitStream is a high-bandwidth content distribution system based on application-level multicast. It distributes the forwarding load among all the participants, and is able to accommodate participating nodes with different bandwidth capacities. We sketch the design of SplitStream and present some preliminary performance results.
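The load-spreading idea can be pictured as striping: split the content into k stripes and send each stripe down its own multicast tree, so each node forwards in at most one tree. The round-robin striping below is an illustrative fragment; building the interior-node-disjoint trees themselves (which SplitStream does on top of its overlay) is the hard part the paper addresses.

```python
# Toy content striping for multi-tree distribution; chunk size and the
# round-robin assignment are illustrative choices.

def stripe(data: bytes, k: int, chunk: int = 4) -> list[list[bytes]]:
    """Split data into chunks and deal them round-robin into k stripes."""
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    return [chunks[s::k] for s in range(k)]

stripes = stripe(b"abcdefghijklmnopqrstuvwx", k=3)
for s, parts in enumerate(stripes):
    print(f"stripe {s} -> tree {s}:", parts)
```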