Results 1–10 of 369
On coding for reliable communication over packet networks
, 2008
Abstract

Cited by 137 (33 self)
We consider the use of random linear network coding in lossy packet networks. In particular, we consider the following simple strategy: nodes store the packets that they receive and, whenever they have a transmission opportunity, they send out coded packets formed from random linear combinations of stored packets. In such a strategy, intermediate nodes perform additional coding yet do not decode nor wait for a block of packets before sending out coded packets. Moreover, all coding and decoding operations have polynomial complexity. We show that, provided packet headers can be used to carry an amount of side information that grows arbitrarily large (but independently of payload size), random linear network coding achieves packet-level capacity for both single unicast and single multicast connections and for both wireline and wireless networks. This result holds as long as packets received on links arrive according to processes that have average rates. Thus packet losses on links may exhibit correlations in time or with losses on other links. In the special case of Poisson traffic with i.i.d. losses, we give error exponents that quantify the rate of decay of the probability of error with coding delay. Our analysis of random linear network coding shows not only that it achieves packet-level capacity, but also that the propagation of packets carrying “innovative” information follows the propagation of jobs through a queueing network, thus implying that fluid flow models yield good approximations.
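The strategy described in this abstract is easy to see in miniature. The sketch below is my own toy illustration over GF(2) with tiny 4-bit payloads (the paper permits any finite field): coded packets are XORs of random subsets of the source packets, the coefficient vector rides in the header as side information, the receiver keeps only "innovative" packets (those that raise the rank of its linear system), and it decodes by Gaussian elimination once the rank reaches k.

```python
import random

K = 3                                    # number of source packets
source = [0b1010, 0b0111, 0b1100]        # 4-bit payloads (toy size)

def coded_packet(stored, rng):
    """XOR a random subset of stored packets; header = GF(2) coefficients."""
    coeffs = [rng.randint(0, 1) for _ in stored]
    payload = 0
    for c, p in zip(coeffs, stored):
        if c:
            payload ^= p
    return coeffs, payload

def rank_gf2(rows):
    """Rank of a GF(2) matrix given as lists of 0/1 entries."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(K):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

rng = random.Random(0)
received = []                            # only "innovative" packets are kept
while len(received) < K:
    coeffs, payload = coded_packet(source, rng)
    if rank_gf2([c for c, _ in received] + [coeffs]) > len(received):
        received.append((coeffs, payload))

# Decode: Gauss-Jordan on the augmented system; payload column -> sources.
mat = [c + [p] for c, p in received]
for col in range(K):
    piv = next(i for i in range(col, K) if mat[i][col])
    mat[col], mat[piv] = mat[piv], mat[col]
    for i in range(K):
        if i != col and mat[i][col]:
            mat[i] = [a ^ b for a, b in zip(mat[i], mat[col])]
recovered = [row[K] for row in mat]
assert recovered == source
```

Note that the receiver never waits for a block: each arriving packet is tested for innovation and kept or discarded on the spot, mirroring the paper's observation that innovative packets propagate like jobs through a queueing network.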
Minimum-Cost Multicast over Coded Packet Networks
 IEEE Trans. on Inf. Theory
, 2006
Abstract

Cited by 120 (26 self)
We consider the problem of establishing minimum-cost multicast connections over coded packet networks, i.e., packet networks where the contents of outgoing packets are arbitrary, causal functions of the contents of received packets. We consider both wireline and wireless packet networks as well as both static multicast (where membership of the multicast group remains constant for the duration of the connection) and dynamic multicast (where membership of the multicast group changes in time, with nodes joining and leaving the group). For static multicast, we reduce the problem to a polynomial-time solvable optimization problem, ... and we present decentralized algorithms for solving it. These algorithms, when coupled with existing decentralized schemes for constructing network codes, yield a fully decentralized approach for achieving minimum-cost multicast. By contrast, establishing minimum-cost static multicast connections over routed packet networks is a very difficult problem even using centralized computation, except in the special cases of unicast and broadcast connections. For dynamic multicast, we reduce the problem to a dynamic programming problem and apply the theory of dynamic programming to suggest how it may be solved.
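The structural fact behind this reduction is that, with coding, the rate a multicast connection needs on a link is the maximum of the individual sink-flows through it rather than their sum, which makes cost minimization a convex optimization over per-sink flows. The toy numbers below (the graph, flows, and unit costs are my own example on the classic butterfly network, not taken from the paper) show the effect: two separate unicast connections pay the sum of the flows on each link, while a coded multicast pays only the max.

```python
# Butterfly network (my toy example): source s, sinks t1 and t2, each
# wanting rate 2. Per-sink unit-rate flow decompositions:
#   t1: paths s-a-t1 and s-b-c-d-t1
#   t2: paths s-b-t2 and s-a-c-d-t2
flow_t1 = {"s-a": 1, "s-b": 1, "a-t1": 1, "b-c": 1, "c-d": 1, "d-t1": 1}
flow_t2 = {"s-a": 1, "s-b": 1, "b-t2": 1, "a-c": 1, "c-d": 1, "d-t2": 1}
links = set(flow_t1) | set(flow_t2)

# Two separate unicasts need the SUM of the flows on each link; a coded
# multicast needs only the MAX, a convex (here linear) function of the
# per-sink flows -- the property the decentralized algorithms exploit.
unicast_rate = {l: flow_t1.get(l, 0) + flow_t2.get(l, 0) for l in links}
coded_rate = {l: max(flow_t1.get(l, 0), flow_t2.get(l, 0)) for l in links}

unicast_cost = sum(unicast_rate.values())  # unit cost per unit rate: 12
coded_cost = sum(coded_rate.values())      # 9
assert coded_cost < unicast_cost
```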
PORs: Proofs of Retrievability for Large Files
 In CCS ’07: Proceedings of the 14th ACM conference on Computer and communications security
, 2007
Abstract

Cited by 120 (8 self)
In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or backup service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound. Key words: storage systems, storage security, proofs of retrievability, proofs of knowledge
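One simple POR flavor explored in the paper hides random "sentinel" blocks at secret positions in the stored file and later spot-checks a few of them. The sketch below is a bare-bones illustration of that idea only; the block size, sentinel count, and all function names are my own illustrative choices, not the paper's construction, and a real scheme encrypts and permutes the file so sentinels are indistinguishable from data.

```python
import random
import secrets

BLOCK = 16        # bytes per block (illustrative)
N_SENTINELS = 4   # sentinels hidden in the file (illustrative)

def prepare(file_blocks):
    """Hide random sentinel blocks at secret positions before upload."""
    n = len(file_blocks)
    positions = set(random.sample(range(n + N_SENTINELS), N_SENTINELS))
    stored, secret, data = [], {}, iter(file_blocks)
    for i in range(n + N_SENTINELS):
        if i in positions:
            s = secrets.token_bytes(BLOCK)
            secret[i] = s          # verifier remembers position -> sentinel
            stored.append(s)
        else:
            stored.append(next(data))
    return stored, secret

def challenge(secret, how_many=2):
    """Verifier asks for a few sentinel positions."""
    return random.sample(sorted(secret), how_many)

def respond(stored, idxs):
    """Prover returns the requested blocks."""
    return [stored[i] for i in idxs]

def verify(secret, idxs, answer):
    return all(secret[i] == blk for i, blk in zip(idxs, answer))

file_blocks = [bytes([i]) * BLOCK for i in range(8)]
stored, secret = prepare(file_blocks)
idxs = challenge(secret)
assert verify(secret, idxs, respond(stored, idxs))   # intact archive passes
lossy = [b"\x00" * BLOCK] * len(stored)              # archive that lost the data
assert not verify(secret, idxs, respond(lossy, idxs))
```

The appeal of the sentinel approach is exactly the abstract's point: the verifier's state (a handful of positions and values) and the challenge traffic are tiny and essentially independent of the length of F.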
Network Coding for Distributed Storage Systems
 In Proc. of IEEE INFOCOM
, 2007
Abstract

Cited by 106 (9 self)
Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure-coded system, a common practice to repair from a node failure is for a new node to download subsets of data stored at a number of surviving nodes, reconstruct a lost coded block using the downloaded data, and store it at the new node. We show that this procedure is suboptimal. We introduce the notion of regenerating codes, which allow a new node to download functions of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth, which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff.
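The repair problem the abstract describes is easy to state concretely. The sketch below uses a single-parity XOR code of my own choosing (far simpler than the general codes the paper treats): rebuilding one lost fragment the naive way requires downloading k surviving fragments, a whole file's worth of traffic, for one fragment's worth of repaired data. Regenerating codes attack exactly this gap by letting the newcomer download functions of the survivors' data instead.

```python
from functools import reduce

k = 4
data = [b"\x01", b"\x02", b"\x03", b"\x04"]   # k data fragments (1 byte each)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

parity = reduce(xor, data)
fragments = data + [parity]        # n = k + 1 fragments, one per node

lost = 2                           # node 2 fails
survivors = [f for i, f in enumerate(fragments) if i != lost]
# Naive repair: the newcomer downloads all k surviving fragments -- the
# whole file's worth of network traffic -- to rebuild one lost fragment.
rebuilt = reduce(xor, survivors)   # XOR of all survivors recovers the loss
downloaded = len(survivors)        # = k fragments moved over the network
assert rebuilt == fragments[lost]
assert downloaded == k
```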
On-the-fly verification of rateless erasure codes for efficient content distribution
 In Proceedings of the IEEE Symposium on Security and Privacy
, 2004
Abstract

Cited by 102 (4 self)
The quality of peer-to-peer content distribution can suffer when malicious participants intentionally corrupt content. Some systems using simple block-by-block downloading can verify blocks with traditional cryptographic signatures and hashes, but these techniques do not apply well to more elegant systems that use rateless erasure codes for efficient multicast transfers. This paper presents a practical scheme, based on homomorphic hashing, that enables a downloader to perform on-the-fly verification of erasure-encoded blocks.
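The hashing trick rests on a homomorphism: hashing a block as a product of fixed generators raised to its entries makes the hash of a sum of blocks equal the product of their hashes, so an erasure-coded (linearly combined) block can be checked against the publisher's per-block hashes without downloading the original blocks. A toy instance follows; the parameters are deliberately small and insecure (my choices, for readability), whereas a real instantiation works in a large prime-order subgroup.

```python
# Toy parameters: p is a Mersenne prime so every g_i has order dividing
# p - 1 = q; real schemes use a ~1024-bit modulus with a large
# prime-order subgroup. All values here are illustrative only.
p = 2 ** 61 - 1
q = p - 1
g = [3, 5, 7, 11]            # one generator per block entry (illustrative)

def H(block):
    """Hash a block (b_1, ..., b_m) as prod(g_i ** b_i) mod p."""
    out = 1
    for gi, bi in zip(g, block):
        out = out * pow(gi, bi % q, p) % p
    return out

b1 = [12, 7, 30, 5]          # two source blocks, entries reduced mod q
b2 = [9, 1, 4, 22]
coded = [(x + y) % q for x, y in zip(b1, b2)]   # coded block = b1 + b2

# Homomorphic property: the downloader verifies the coded block using
# only the per-block hashes, never seeing b1 or b2 themselves.
assert H(coded) == H(b1) * H(b2) % p
```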
Capacity-Achieving Ensembles for the Binary Erasure Channel with Bounded Complexity
 IEEE TRANS. INFORMATION THEORY
, 2004
Abstract

Cited by 50 (13 self)
We present two sequences of ensembles of non-systematic irregular repeat-accumulate codes which asymptotically (as their block length tends to infinity) achieve capacity on the binary erasure channel (BEC) with bounded complexity. This is in contrast to all previous constructions of capacity-achieving sequences of ensembles, whose complexity grows at least like the log of the inverse of the gap to capacity. The new bounded complexity result is achieved by allowing a sufficient number of state nodes in the Tanner graph representing the codes.
Further results on coding for reliable communication over packet networks
 In 2005 IEEE International Symposium on Information Theory (ISIT)
, 2005
Abstract

Cited by 43 (9 self)
In earlier work, a capacity-achieving coding scheme for unicast or multicast over lossy wireline or wireless packet networks was presented. We extend that paper’s results in two ways: First, we extend the network model to allow packets received on a link to arrive according to any process with an average rate, as opposed to the assumption of Poisson traffic with i.i.d. losses that was previously made. Second, in the case of Poisson traffic with i.i.d. losses, we derive error exponents that quantify the rate at which the probability of error decays with coding delay.
Network coding for efficient wireless unicast
 in IEEE International Zurich Seminar on Communications
, 2006
Abstract

Cited by 41 (7 self)
We consider the problem of establishing efficient unicast connections over wireless packet networks. We show how network coding, combined with distributed flow optimization, gives a practicable approach that promises to significantly outperform the present approach of end-to-end or link-by-link retransmission combined with route optimization, where performance may be measured in terms of energy consumption, congestion, or any other cost that increases with the number of transmissions made by each node. We present a specific coding scheme and specific distributed flow optimization techniques that may be used to form the basis of a protocol.
Optimizing Cauchy Reed-Solomon codes for fault-tolerant network storage applications
 In NCA06: 5th IEEE International Symposium on Network Computing Applications
, 2006
Abstract

Cited by 35 (11 self)
NOTE: NCA’s page limit is rather severe: 8 pages. As a result, the final paper is pretty much a hatchet job of the original submission. I would recommend reading the technical report version of this paper, because it presents the material with some accompanying tutorial material, and is easier to read. The technical report is available at:
Raptor Forward Error Correction Scheme for Object Delivery
 IETF RMT Working Group, Work in Progress
, 2007
Abstract

Cited by 32 (1 self)
This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "Internet Official Protocol Standards" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. This document describes a Fully-Specified Forward Error Correction (FEC) scheme, corresponding to FEC Encoding ID 1, for the Raptor forward error correction code and its application to reliable delivery of data objects. Raptor is a fountain code, i.e., as many encoding symbols as needed can be generated by the encoder on-the-fly from the source symbols of a source block of data. The decoder is able to recover the source block from any set of encoding symbols only slightly more in number than the number of source symbols.
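The fountain property, generating encoding symbols on the fly until the receiver happens to have enough, can be sketched with a toy LT-style code. This is my own simplification, not the actual Raptor construction (Raptor adds a precode, and the degree distribution below is a crude stand-in for a soliton distribution): each encoding symbol is the XOR of a random subset of source symbols, and the receiver decodes by repeatedly "peeling" symbols that have only one unknown neighbor.

```python
import random

K = 8                      # number of source symbols (illustrative)
rng = random.Random(4)
source = [rng.randrange(256) for _ in range(K)]   # one-byte symbols

def encode():
    """One encoding symbol: XOR of a random subset of source symbols."""
    degree = rng.choice([1, 1, 2, 2, 2, 3, 4])    # crude degree distribution
    idx = set(rng.sample(range(K), degree))
    val = 0
    for i in idx:
        val ^= source[i]
    return idx, val

def peel(symbols):
    """Peeling decoder: resolve degree-1 symbols, substitute, repeat."""
    known = {}
    progress = True
    while progress and len(known) < K:
        progress = False
        for idx, val in symbols:
            unknown = idx - known.keys()
            if len(unknown) == 1:
                i = unknown.pop()
                for j in idx - {i}:
                    val ^= known[j]       # strip off already-known neighbors
                known[i] = val
                progress = True
    return known if len(known) == K else None

# Fountain property: just keep drawing fresh symbols until decoding succeeds.
received, decoded = [], None
while decoded is None:
    received.append(encode())
    decoded = peel(received)
assert [decoded[i] for i in range(K)] == source
```

The receiver here needs somewhat more than K symbols; the point of the Raptor design is to push that overhead down to "only slightly more" than the number of source symbols with linear-time encoding and decoding.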