Results 1–10 of 91
Raptor codes
 IEEE Transactions on Information Theory
, 2006
Cited by 309 (6 self)
LT-codes are a new class of codes introduced in [1] for the purpose of scalable and fault-tolerant distribution of data over computer networks. In this paper we introduce Raptor codes, an extension of LT-codes with linear time encoding and decoding. We will exhibit a class of universal Raptor codes: for a given integer k and any real ε > 0, Raptor codes in this class produce a potentially infinite stream of symbols such that any subset of symbols of size k(1 + ε) is sufficient to recover the original k symbols with high probability. Each output symbol is generated using O(log(1/ε)) operations, and the original symbols are recovered from the collected ones with O(k log(1/ε)) operations. We will also introduce novel techniques for the analysis of the error probability of the decoder for finite-length Raptor codes. Moreover, we will introduce and analyze systematic versions of Raptor codes, i.e., versions in which the first output elements of the coding system coincide with the original k elements.
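The LT encoding step underlying Raptor codes can be sketched in a few lines: draw a degree d from a degree distribution, pick d distinct source symbols, and XOR them. This is an illustrative toy, not the paper's construction; the function name and the degree distribution below are made-up stand-ins for the optimized distributions used in practice.

```python
import random

def lt_encode_symbol(source, degree_dist, rng):
    """Produce one LT output symbol: draw a degree d from the
    degree distribution, pick d distinct source symbols uniformly
    at random, and XOR them together."""
    degrees, weights = zip(*degree_dist)
    d = rng.choices(degrees, weights=weights, k=1)[0]
    neighbors = rng.sample(range(len(source)), d)
    value = 0
    for i in neighbors:
        value ^= source[i]
    return value, neighbors

# Toy degree distribution; real Raptor codes use a carefully
# optimized distribution with O(log(1/eps)) average degree.
dist = [(1, 0.1), (2, 0.5), (3, 0.4)]
rng = random.Random(0)
value, neighbors = lt_encode_symbol([5, 9, 12, 7], dist, rng)
```

A decoder would collect roughly k(1 + ε) such (value, neighbor-set) pairs and iteratively peel degree-one symbols to recover the source.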
Extrinsic information transfer functions: A model and two properties
 IEEE Trans. Inform. Theory
, 2004
Cited by 120 (2 self)
Extrinsic information transfer (EXIT) charts are a tool for predicting the convergence behavior of iterative processors for a variety of communication problems. A model is introduced that applies to decoding problems, including the iterative decoding of parallel concatenated (turbo) codes, serially concatenated codes, low-density parity-check (LDPC) codes, and repeat-accumulate (RA) codes. EXIT functions are defined using the model, and several properties of such functions are proved for erasure channels. One property expresses the area under an EXIT function in terms of a conditional entropy. A useful consequence of this result is that the design of capacity-approaching codes reduces to a curve-fitting problem for all the aforementioned codes. A second property relates the EXIT function of a code to its Helleseth–Kløve–Levenshtein information functions, and thereby to the support weights of its subcodes. The relation is via a refinement of information functions called split information functions, and via a refinement of support weights called split support weights. Split information functions are used to prove a third property that relates the EXIT function of a linear code to the EXIT function of its dual.

Index Terms—Concatenated codes, duality, error-correction coding, iterative decoding, mutual information.
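On the erasure channel the EXIT functions of a regular LDPC ensemble have simple closed forms, and the area property can be checked numerically: the area under the check-node EXIT curve is 1/dc. A minimal sketch (function names are mine, not the paper's):

```python
def vnd_exit(ia, eps, dv):
    # EXIT function of a degree-dv variable node on BEC(eps):
    # extrinsic output given a-priori mutual information ia
    return 1.0 - eps * (1.0 - ia) ** (dv - 1)

def cnd_exit(ia, dc):
    # EXIT function of a degree-dc check node on the BEC
    return ia ** (dc - 1)

# Area property sanity check: the area under the check-node EXIT
# curve equals 1/dc (midpoint-rule integration; dc = 6 here).
n = 100_000
area = sum(cnd_exit((k + 0.5) / n, 6) for k in range(n)) / n
```

Successful iterative decoding corresponds to the variable-node curve lying above the inverse of the check-node curve on (0, 1), which is what makes code design a curve-fitting problem.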
Using linear programming to decode binary linear codes
 IEEE Trans. Inform. Theory
, 2005
Cited by 113 (11 self)
A new method is given for performing approximate maximum-likelihood (ML) decoding of an arbitrary binary linear code based on observations received from any discrete memoryless symmetric channel. The decoding algorithm is based on a linear programming (LP) relaxation that is defined by a factor graph or parity-check representation of the code. The resulting "LP decoder" generalizes our previous work on turbo-like codes. A precise combinatorial characterization of when the LP decoder succeeds is provided, based on pseudocodewords associated with the factor graph. Our definition of a pseudocodeword unifies other such notions known for iterative algorithms, including "stopping sets," "irreducible closed walks," "trellis cycles," "deviation sets," and "graph covers." The fractional distance d_frac of a code is introduced, which is a lower bound on the classical minimum distance. It is shown that the efficient LP decoder will correct up to ⌈d_frac/2⌉ − 1 errors and that there are codes whose fractional distance grows polynomially with the blocklength. An efficient algorithm to compute the fractional distance is presented. Experimental evidence shows a similar performance on low-density parity-check (LDPC) codes between LP decoding and the min-sum and sum-product algorithms. Methods for tightening the LP relaxation to improve performance are also provided.
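For small codes the LP relaxation can be written down directly: relax each codeword bit to [0, 1] and, for every check, add one inequality per odd-sized subset of its neighborhood. The sketch below uses scipy and assumes a BSC (so ML decoding is a correlation cost); enumerating odd subsets is exponential in the check degree, so this is for toy codes only, and the function name is mine.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, y):
    """LP relaxation of ML decoding over the polytope cut out by
    the parity checks of H (Feldman-style forbidden-set cuts)."""
    H = np.asarray(H)
    m, n = H.shape
    cost = 1.0 - 2.0 * np.asarray(y, dtype=float)  # BSC: reward matching y
    A_ub, b_ub = [], []
    for j in range(m):
        nbrs = list(np.flatnonzero(H[j]))
        for size in range(1, len(nbrs) + 1, 2):    # odd-sized subsets
            for S in combinations(nbrs, size):
                row = np.zeros(n)
                row[list(S)] = 1.0
                row[[i for i in nbrs if i not in S]] = -1.0
                A_ub.append(row)
                b_ub.append(len(S) - 1.0)
    res = linprog(cost, A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=[(0.0, 1.0)] * n)
    return res.x

# Toy single-parity-check code: one check over three bits.
H = [[1, 1, 1]]
x = lp_decode(H, [1, 1, 0])   # unique LP optimum is the codeword (1, 1, 0)
```

When the LP optimum is fractional rather than integral, that vertex is exactly a pseudocodeword in the sense of the abstract.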
Graph-cover decoding and finite-length analysis of message-passing iterative decoding of LDPC codes
 IEEE Trans. Inform. Theory
, 2005
Cited by 67 (12 self)
The goal of the present paper is the derivation of a framework for the finite-length analysis of message-passing iterative decoding of low-density parity-check codes. To this end we introduce the concept of graph-cover decoding. Whereas in maximum-likelihood decoding all codewords in a code are competing to be the best explanation of the received vector, under graph-cover decoding all codewords in all finite covers of a Tanner graph representation of the code are competing to be the best explanation. We are interested in graph-cover decoding because it is a theoretical tool that can be used to show connections between linear programming decoding and message-passing iterative decoding. Namely, on the one hand it turns out that graph-cover decoding is essentially equivalent to linear programming decoding. On the other hand, because iterative, locally operating decoding algorithms like message-passing iterative decoding cannot distinguish the underlying Tanner graph from any covering graph, graph-cover decoding can serve as a model to explain the behavior of message-passing iterative decoding. Understanding the behavior of graph-cover decoding is therefore tantamount to understanding the behavior of message-passing iterative decoding.
Asymptotic enumeration methods for analyzing LDPC codes
 IEEE Trans. Inform. Theory
, 2004
Cited by 41 (2 self)
We show how asymptotic estimates of powers of polynomials with nonnegative coefficients can be used in the analysis of low-density parity-check (LDPC) codes. In particular, we show how these estimates can be used to derive the asymptotic distance spectrum of both regular and irregular LDPC code ensembles. We then consider the binary erasure channel (BEC). Using these estimates we derive lower bounds on the error exponent, under iterative decoding, of LDPC codes used over the BEC. Both regular and irregular code structures are considered. These bounds are compared to the corresponding bounds when optimal (maximum-likelihood) decoding is applied.
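On the BEC the iterative-decoding analysis collapses to a one-dimensional recursion (standard density evolution), and the decoding threshold of a regular ensemble can be located by bisection. A sketch under arbitrary numeric tolerances of my choosing:

```python
def de_fixed_point(eps, dv, dc, iters=5000):
    """BEC density evolution for a (dv, dc)-regular LDPC ensemble:
    x is the erasure probability of a variable-to-check message."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < 1e-12:
            break
    return x

def bec_threshold(dv, dc, tol=1e-6):
    """Largest channel erasure rate for which the recursion is
    driven (numerically) to zero, found by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if de_fixed_point(mid, dv, dc) < 1e-6:
            lo = mid
        else:
            hi = mid
    return lo

# The (3, 6)-regular ensemble has BEC threshold approximately 0.4294.
```

Below the threshold the erasure probability is driven to zero; above it the recursion stalls at a nonzero fixed point, which is the regime the error-exponent bounds of the abstract quantify.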
LDPC block and convolutional codes based on circulant matrices
 IEEE Trans. Inform. Theory
, 2004
Cited by 41 (5 self)
A class of algebraically structured quasi-cyclic (QC) low-density parity-check (LDPC) codes and their convolutional counterparts is presented. The QC codes are described by sparse parity-check matrices composed of blocks of circulant matrices. The sparse parity-check representation allows for practical graph-based iterative message-passing decoding. Based on the algebraic structure, bounds on the girth and minimum distance of the codes are found, and several possible encoding techniques are described. The performance of the QC LDPC block codes compares favorably with that of randomly constructed LDPC codes for short to moderate block lengths. The performance of the LDPC convolutional codes is superior to that of the QC codes on which they are based; this performance is the limiting performance obtained by increasing the circulant size of the base QC code. Finally, a continuous decoding procedure for the LDPC convolutional codes is described.
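A QC parity-check matrix of this kind is just a grid of shifted identity blocks. The sketch below assembles one with numpy; the shift values are arbitrary placeholders (real constructions choose them to maximize girth), and the function names are mine.

```python
import numpy as np

def circulant(size, shift):
    """size x size identity matrix cyclically shifted by `shift`
    columns; a negative shift denotes the all-zero block."""
    if shift < 0:
        return np.zeros((size, size), dtype=int)
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

def qc_parity_check(shifts, size):
    """Assemble a QC-LDPC parity-check matrix from a grid of
    circulant shift values (toy construction, arbitrary shifts)."""
    return np.block([[circulant(size, s) for s in row] for row in shifts])

# A (2, 3)-regular toy example built from 4x4 circulants:
# every column has weight 2 and every row has weight 3.
H = qc_parity_check([[0, 1, 2],
                     [0, 2, 4]], 4)
```

Because each nonzero block is a permutation matrix, the block structure immediately fixes the row and column weights, which is what makes girth and minimum-distance bounds tractable algebraically.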
On the stopping distance and the stopping redundancy of codes
 IEEE Trans. Inf. Theory
, 2006
Cited by 40 (2 self)
It is now well known that the performance of a linear code C under iterative decoding on a binary erasure channel (and other channels) is determined by the size of the smallest stopping set in the Tanner graph for C. Several recent papers refer to this parameter as the stopping distance s of C. This is somewhat of a misnomer, since the size of the smallest stopping set in the Tanner graph for C depends on the corresponding choice of a parity-check matrix. It is easy to see that s ≤ d, where d is the minimum Hamming distance of C, and we show that it is always possible to choose a parity-check matrix for C (with sufficiently many dependent rows) such that s = d. We thus introduce a new parameter, termed the stopping redundancy of C, defined as the minimum number of rows in a parity-check matrix H for C such that the corresponding stopping distance s(H) attains its largest possible value, namely s(H) = d. We then derive general bounds on the stopping redundancy of linear codes. We also examine several simple ways of constructing codes from other codes, and study the effect of these constructions on the stopping redundancy. Specifically, for the family of binary Reed-Muller codes (of all orders), we prove that their stopping redundancy is at most a constant times their conventional redundancy. We show that the stopping redundancies of the binary and ternary extended Golay codes are at most 34 and 22, respectively. Finally, we provide upper and lower bounds on the stopping redundancy of MDS codes.
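The definition is easy to turn into a brute-force check for tiny codes: a stopping set is a set S of columns of H such that no row has exactly one nonzero entry inside S. The exhaustive search below is exponential and purely illustrative; the function name is mine.

```python
from itertools import combinations
import numpy as np

def stopping_distance(H):
    """Size of the smallest nonempty stopping set of H: a column
    set S such that no row of H has exactly one nonzero entry in
    S. Exhaustive search; feasible for very small codes only."""
    H = np.asarray(H)
    n = H.shape[1]
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if not np.any(H[:, list(S)].sum(axis=1) == 1):
                return size
    return None

# [7,4] Hamming code; for this standard parity-check matrix the
# stopping distance equals the minimum distance, s = d = 3.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

Note that s(H) depends on H, not just on the code: adding dependent rows to H can only shrink the set of stopping sets, which is exactly the degree of freedom the stopping redundancy measures.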
Channel coding rate in the finite blocklength regime
 IEEE Trans. Inf. Theory
, 2010
Cited by 38 (5 self)
This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths as short as 100. It is also shown analytically that the maximal rate achievable with error probability ε is closely approximated by C − √(V/n) Q⁻¹(ε), where C is the capacity, V is a characteristic of the channel referred to as the channel dispersion, n is the blocklength, and Q is the complementary Gaussian cumulative distribution function.
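The normal approximation R ≈ C − √(V/n) Q⁻¹(ε) is straightforward to evaluate. A sketch for the binary symmetric channel, where C = 1 − h(p) and V = p(1 − p) (log₂((1 − p)/p))²; the function name is mine:

```python
from math import log2, sqrt
from statistics import NormalDist

def bsc_normal_approx(p, n, eps):
    """Normal approximation R = C - sqrt(V/n) * Qinv(eps) to the
    maximal rate of the BSC(p) at blocklength n and error
    probability eps."""
    h = -p * log2(p) - (1 - p) * log2(1 - p)        # binary entropy
    C = 1 - h                                        # capacity
    V = p * (1 - p) * log2((1 - p) / p) ** 2         # channel dispersion
    qinv = NormalDist().inv_cdf(1 - eps)             # Qinv(eps)
    return C - sqrt(V / n) * qinv

# At p = 0.11, n = 1000, eps = 1e-3 the approximation already sits
# well below the capacity C of about 0.5.
rate = bsc_normal_approx(p=0.11, n=1000, eps=1e-3)
```

The backoff from capacity shrinks like 1/√n, which is why the approximation becomes tight at blocklengths of only a few hundred.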
Graph-Covers and Iterative Decoding of Finite Length Codes
, 2003
Cited by 34 (0 self)
Codewords in finite covers of a Tanner graph G are characterized. Since iterative, locally operating decoding algorithms cannot distinguish the underlying graph G from any covering graph, these codewords, dubbed pseudocodewords, are directly responsible for suboptimal behavior of iterative decoding algorithms. We give a simple characterization of pseudocodewords from finite covers and show that, for the additive white Gaussian noise channel, their impact is captured in a finite set of "minimal" pseudocodewords.
Construction of Short Block Length Irregular Low-Density Parity-Check Codes
, 2004
Cited by 23 (7 self)
We present a construction algorithm for short block length irregular low-density parity-check (LDPC) codes. Based on a novel interpretation of stopping sets in terms of the parity-check matrix, we present an approximate trellis-based search algorithm that detects many stopping sets. Growing the parity-check matrix by a combination of random generation and the trellis-based search, we obtain codes whose error floors are orders of magnitude below those of randomly constructed codes and significantly better than those of other comparable constructions.