Results 1-10 of 36
Good Error-Correcting Codes Based on Very Sparse Matrices
, 1999
"... We study two families of errorcorrecting codes defined in terms of very sparse matrices. "MN" (MacKayNeal) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both cod ..."
Abstract

Cited by 514 (25 self)
We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay-Neal) codes were recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are "very good," in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.
Design of capacity-approaching irregular low-density parity-check codes
 IEEE TRANS. INFORM. THEORY
, 2001
"... We design lowdensity paritycheck (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on [1]. Assuming that the unde ..."
Abstract

Cited by 438 (7 self)
We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on [1]. Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found which show that the performance of the codes is very close to the asymptotic theoretical bounds.
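The convergence of message densities described in this abstract is commonly tracked by density evolution, which on the binary erasure channel reduces to the one-dimensional recursion x_{l+1} = eps * lambda(1 - rho(1 - x_l)). A minimal sketch under that standard setting; the (3,6)-regular ensemble and its threshold (about 0.4294) are textbook illustrations, not values taken from this paper:

```python
def density_evolution_bec(eps, lam, rho, iters=200):
    """Track the erasure probability x of variable-to-check messages on
    the binary erasure channel with erasure rate eps, using the standard
    recursion x <- eps * lam(1 - rho(1 - x))."""
    x = eps
    for _ in range(iters):
        x = eps * lam(1.0 - rho(1.0 - x))
    return x

# (3,6)-regular ensemble: every variable node has degree 3 and every check
# node degree 6, so the edge-perspective distributions are x^2 and x^5.
lam = lambda x: x ** 2
rho = lambda x: x ** 5

# Below the ensemble's threshold (~0.4294) the erasure rate is driven to 0;
# above it, the recursion stalls at a nonzero fixed point.
print(density_evolution_bec(0.40, lam, rho))  # essentially 0
print(density_evolution_bec(0.45, lam, rho))  # bounded away from 0
```

Optimizing the degree distributions lam and rho to push this threshold toward capacity is exactly the kind of search the paper describes.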
The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding
, 2001
"... In this paper, we present a general method for determining the capacity of lowdensity paritycheck (LDPC) codes under messagepassing decoding when used over any binaryinput memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chos ..."
Abstract

Cited by 367 (8 self)
In this paper, we present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity, the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. [1] in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable, and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.
Efficient erasure correcting codes
 IEEE Transactions on Information Theory
, 2001
"... Abstract—We introduce a simple erasure recovery algorithm for codes derived from cascades of sparse bipartite graphs and analyze the algorithm by analyzing a corresponding discretetime random process. As a result, we obtain a simple criterion involving the fractions of nodes of different degrees on ..."
Abstract

Cited by 252 (20 self)
We introduce a simple erasure recovery algorithm for codes derived from cascades of sparse bipartite graphs and analyze the algorithm by analyzing a corresponding discrete-time random process. As a result, we obtain a simple criterion involving the fractions of nodes of different degrees on both sides of the graph which is necessary and sufficient for the decoding process to finish successfully with high probability. By carefully designing these graphs we can construct, for any given rate R and any given real number ε, a family of linear codes of rate R which can be encoded in time proportional to ln(1/ε) times their block length n. Furthermore, a codeword can be recovered with high probability from a portion of its entries of length (1 + ε)Rn or more. The recovery algorithm also runs in time proportional to n ln(1/ε). Our algorithms have been implemented and work well in practice; various implementation issues are discussed. Index Terms—Erasure channel, large deviation analysis, low-density parity-check codes.
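The style of erasure recovery this paper analyzes can be sketched as a peeling process: repeatedly find a parity check with exactly one erased symbol and solve for it. A minimal illustration over a tiny hypothetical parity-check graph (the matrix and codeword below are made up for the example, not taken from the paper):

```python
def peel_decode(checks, word):
    """Iterative erasure recovery: repeatedly find a parity check with
    exactly one erased symbol and solve for it as the XOR of the known
    ones. `checks` is a list of lists of variable indices; `word` uses
    None for erasures. Returns the recovered word (erasures may remain
    if the process gets stuck)."""
    word = list(word)
    progress = True
    while progress:
        progress = False
        for chk in checks:
            erased = [v for v in chk if word[v] is None]
            if len(erased) == 1:
                v = erased[0]
                word[v] = sum(word[u] for u in chk if u != v) % 2
                progress = True
    return word

# Toy 3-check, 6-variable graph (hypothetical, for illustration only).
checks = [[0, 1, 2], [2, 3, 4], [4, 5, 0]]
# The codeword (0,1,1,0,1,1) satisfies all three checks; erase two symbols.
received = [0, None, 1, 0, None, 1]
print(peel_decode(checks, received))  # -> [0, 1, 1, 0, 1, 1]
```

The paper's criterion on the degree fractions determines when this process finishes with high probability on large random graphs.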
Regular and Irregular Progressive Edge-Growth Tanner Graphs
 IEEE TRANS. INFORM. THEORY
, 2003
"... We propose a general method for constructing Tanner graphs having a large girth by progressively establishing edges or connections between symbol and check nodes in an edgebyedge manner, called progressive edgegrowth (PEG) construction. Lower bounds on the girth of PEG Tanner graphs and on the mi ..."
Abstract

Cited by 93 (0 self)
We propose a general method for constructing Tanner graphs having a large girth by progressively establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called progressive edge-growth (PEG) construction. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. The PEG construction attains essentially the same girth as Gallager's explicit construction for regular graphs, both of which meet or exceed the Erdős-Sachs bound. Asymptotic analysis of a relaxed version of the PEG construction is presented. We describe an empirical approach using a variant of the "downhill simplex" search algorithm to design irregular PEG graphs for short codes with fewer than a thousand bits, complementing the "density evolution" design approach for larger codes. Encoding of LDPC codes based on the PEG construction is also investigated. We show how to exploit the PEG principle to obtain LDPC codes that allow linear-time encoding. We also investigate regular and irregular LDPC codes using PEG Tanner graphs but allowing the symbol nodes to take values over GF(q), q > 2. Analysis and simulation demonstrate that one can obtain better performance with increasing field size, which contrasts with previous observations.
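The core PEG step can be sketched as a breadth-first search from the symbol node receiving a new edge: connect to a check node not yet reachable (so the edge closes no cycle), otherwise to one as far away as possible. A simplified sketch of that rule, not the authors' implementation (tie-breaking and data structures are assumptions):

```python
def peg_pick_check(sym_adj, chk_adj, s, m):
    """One PEG edge placement: BFS from symbol node s through the current
    bipartite graph; prefer a check node not yet reachable from s,
    otherwise one at maximum BFS distance, breaking ties by lowest
    current check degree. sym_adj/chk_adj are adjacency lists; m is the
    number of check nodes."""
    dist = {}                      # check node -> BFS depth from s
    seen_sym = {s}
    frontier = set(sym_adj[s])
    d = 0
    while frontier:
        for c in frontier:
            dist[c] = d
        nxt = set()
        for c in frontier:
            for v in chk_adj[c]:
                if v not in seen_sym:
                    seen_sym.add(v)
                    nxt.update(sym_adj[v])
        frontier = nxt - set(dist)
        d += 1
    unreached = [c for c in range(m) if c not in dist]
    cand = unreached if unreached else \
        [c for c in range(m) if dist[c] == max(dist.values())]
    return min(cand, key=lambda c: len(chk_adj[c]))

# Symbol 0 currently reaches checks 0 and 1; check 2 is unreachable,
# so the new edge goes there and creates no cycle.
print(peg_pick_check([[0], [0, 1]], [[0, 1], [1], []], s=0, m=3))  # -> 2
```

Repeating this placement for every edge of every symbol node, in order of symbol degree, yields the full PEG graph.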
Selective Avoidance of Cycles in Irregular LDPC Code Construction
 IEEE Trans. on Comm
, 2004
"... Abstract—This letter explains the effect of graph connectivity on errorfloor performance of lowdensity paritycheck (LDPC) codes under messagepassing decoding. A new metric, called extrinsic message degree (EMD), measures cycle connectivity in bipartite graphs of LDPC codes. Using an easily compu ..."
Abstract

Cited by 43 (5 self)
This letter explains the effect of graph connectivity on the error-floor performance of low-density parity-check (LDPC) codes under message-passing decoding. A new metric, called extrinsic message degree (EMD), measures cycle connectivity in bipartite graphs of LDPC codes. Using an easily computed estimate of EMD, we propose a Viterbi-like algorithm that selectively avoids small cycle clusters that are isolated from the rest of the graph. This algorithm differs from conventional girth conditioning by emphasizing the connectivity as well as the length of cycles. The algorithm yields codes with error floors that are orders of magnitude below those of random codes, with very small degradation in capacity-approaching capability. Index Terms—Error floor, extrinsic message degree (EMD), graph cycles, irregular low-density parity-check (LDPC) codes, iterative decoding, message passing, stopping sets, unstructured graph construction.
LDPC block and convolutional codes based on circulant matrices
 IEEE TRANS. INFORM. THEORY
, 2004
"... A class of algebraically structured quasicyclic (QC) lowdensity paritycheck (LDPC) codes and their convolutional counterparts is presented. The QC codes are described by sparse paritycheck matrices comprised of blocks of circulant matrices. The sparse paritycheck representation allows for prac ..."
Abstract

Cited by 41 (5 self)
A class of algebraically structured quasi-cyclic (QC) low-density parity-check (LDPC) codes and their convolutional counterparts is presented. The QC codes are described by sparse parity-check matrices comprised of blocks of circulant matrices. The sparse parity-check representation allows for practical graph-based iterative message-passing decoding. Based on the algebraic structure, bounds on the girth and minimum distance of the codes are found, and several possible encoding techniques are described. The performance of the QC LDPC block codes compares favorably with that of randomly constructed LDPC codes for short to moderate block lengths. The performance of the LDPC convolutional codes is superior to that of the QC codes on which they are based; this performance is the limiting performance obtained by increasing the circulant size of the base QC code. Finally, a continuous decoding procedure for the LDPC convolutional codes is described.
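The block structure described here is easy to see concretely: each block of the parity-check matrix is a cyclically shifted identity, so the whole matrix is specified by a small array of shift values. A minimal sketch with hypothetical shift values (the paper derives structured choices that guarantee girth and distance; these are for illustration only):

```python
import numpy as np

def circulant_shift(p, s):
    """p x p identity matrix with its columns cyclically shifted by s."""
    return np.roll(np.eye(p, dtype=int), s, axis=1)

def qc_parity_check(shifts, p):
    """Assemble a QC-LDPC parity-check matrix from a 2-D list of
    circulant shift values, one per p x p block."""
    return np.block([[circulant_shift(p, s) for s in row] for row in shifts])

# Hypothetical 2 x 3 array of shifts with circulant size p = 5.
H = qc_parity_check([[0, 1, 2], [0, 2, 4]], p=5)
# Each block contributes exactly one 1 per row and column, so H is
# 10 x 15 with uniform row weight 3 and column weight 2.
print(H.shape)  # -> (10, 15)
```

The sparsity and the cyclic structure together are what make both iterative decoding and simple shift-register encoding practical.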
Designing LDPC Codes Using Bit-Filling
 in Proc. Int. Conf. Communications (ICC)
, 2001
"... Bipartite graphs of bit nodes and parity check nodes arise as Tanner graphs corresponding to low density parity check codes. Given graph parameters such as the number of check nodes, the maximum checkdegree, the bitdegree, and the girth, we consider the problem of constructing bipartite graphs wit ..."
Abstract

Cited by 10 (1 self)
Bipartite graphs of bit nodes and parity-check nodes arise as Tanner graphs corresponding to low-density parity-check codes. Given graph parameters such as the number of check nodes, the maximum check-degree, the bit-degree, and the girth, we consider the problem of constructing bipartite graphs with the largest number of bit nodes, that is, the highest rate. We propose a simple-to-implement heuristic bit-filling algorithm for this problem. As a benchmark, our algorithm yields codes better than or comparable to those in MacKay [1].
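A greedy sketch of the bit-filling idea for the simplest girth target (girth at least 6, i.e. no 4-cycles): add bit nodes one at a time, attaching each to checks that do not already share a bit. This is a simplification for illustration; the paper's heuristic handles general girth constraints and more careful candidate selection:

```python
def bit_filling(m, max_chk_deg, bit_deg, max_bits):
    """Greedy bit-filling sketch (girth >= 6 case only): add bit nodes
    one at a time, connecting each to `bit_deg` of the m check nodes,
    chosen so that no two of them already share a bit (which would close
    a 4-cycle) and no check exceeds `max_chk_deg`. Returns the list of
    check sets, one per placed bit."""
    chk_bits = [set() for _ in range(m)]   # bits attached to each check
    bits = []
    while len(bits) < max_bits:
        chosen = []
        # Prefer low-degree checks to keep check degrees balanced.
        for c in sorted(range(m), key=lambda c: len(chk_bits[c])):
            if len(chk_bits[c]) >= max_chk_deg:
                continue
            if all(not (chk_bits[c] & chk_bits[d]) for d in chosen):
                chosen.append(c)
                if len(chosen) == bit_deg:
                    break
        if len(chosen) < bit_deg:
            break                           # cannot place another bit
        b = len(bits)
        for c in chosen:
            chk_bits[c].add(b)
        bits.append(chosen)
    return bits

# 4 checks of maximum degree 2, bits of degree 2: four bits fit with no
# 4-cycles, then every check is saturated and the filling stops.
print(bit_filling(m=4, max_chk_deg=2, bit_deg=2, max_bits=10))
```

Maximizing the number of bits placed for fixed check resources is what maximizes the code rate.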
Design and Evaluation of a Low Density Generator Matrix (LDGM) Large Block FEC Codec
 in Fifth International Workshop on Networked Group Communication (NGC'03)
, 2003
"... Traditional small block Forward Error Correction (FEC) codes, like the ReedSolomon erasure (RSE) code, are known to raise e#ciency problems, in particular when they are applied to the Asynchronous Layered Coding (ALC) reliable multicast protocol. In this paper we describe the design of a simple ..."
Abstract

Cited by 9 (2 self)
Traditional small block Forward Error Correction (FEC) codes, like the Reed-Solomon erasure (RSE) code, are known to raise efficiency problems, in particular when they are applied to the Asynchronous Layered Coding (ALC) reliable multicast protocol. In this paper we describe the design of a simple large block Low Density Generator Matrix (LDGM) codec, a particular case of LDPC code, which is capable of operating on source blocks that are several tens of megabytes long.
Iterative Encoding of Low-Density Parity-Check Codes
 in Proc. GLOBECOM 2002
, 2002
"... Motivated by the potential to reuse the decoder architecture, and thus reduce circuit space, we explore the use of iterative encoding techniques which are based upon the graphical representation of the code. We design codes by identifying associated encoder convergence constraints and also eliminati ..."
Abstract

Cited by 6 (4 self)
Motivated by the potential to reuse the decoder architecture, and thus reduce circuit space, we explore the use of iterative encoding techniques based upon the graphical representation of the code. We design codes by identifying associated encoder convergence constraints and also eliminating some well-known undesirable properties for sum-product decoding, such as 4-cycles. In particular we show how the Jacobi method for iterative matrix inversion can be viewed as message passing and employed as the core of an iterative encoder. Example constructions of both regular and irregular LDPC codes that are encodable using this method are investigated.
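The Jacobi-as-message-passing idea can be sketched over GF(2): to solve A x = b with a unit diagonal, iterate x <- b + (A - I) x (mod 2), which converges exactly when (A - I) is nilpotent; that is the flavor of encoder convergence constraint the abstract mentions. A minimal sketch with a hypothetical 3 x 3 system, not the paper's encoder:

```python
import numpy as np

def jacobi_gf2(A, b, iters):
    """Jacobi iteration over GF(2) for A x = b with unit diagonal.
    Over GF(2) subtraction equals addition, so the update is
    x <- b + (A - I) x (mod 2)."""
    n = len(b)
    N = (A - np.eye(n, dtype=int)) % 2
    x = np.zeros(n, dtype=int)
    for _ in range(iters):
        x = (b + N @ x) % 2
    return x

# A = I + N with N strictly lower triangular, so (A - I) is nilpotent and
# the iteration reaches the exact solution in at most n = 3 steps.
A = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]], dtype=int)
b = np.array([1, 0, 1], dtype=int)
x = jacobi_gf2(A, b, iters=3)
assert ((A @ x) % 2 == b).all()
print(x)  # -> [1 1 0]
```

In hardware, each update is the same check-node XOR computation the sum-product decoder already performs, which is why the decoder datapath can be reused for encoding.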