Results 1-10 of 93
Regular and Irregular Progressive Edge-Growth Tanner Graphs
IEEE Trans. Inform. Theory, 2003
Cited by 91 (0 self)
Abstract:
We propose a general method for constructing Tanner graphs having a large girth by progressively establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called progressive edge-growth (PEG) construction. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. The PEG construction attains essentially the same girth as Gallager's explicit construction for regular graphs, both of which meet or exceed the Erdos-Sachs bound. Asymptotic analysis of a relaxed version of the PEG construction is presented. We describe an empirical approach using a variant of the "downhill simplex" search algorithm to design irregular PEG graphs for short codes with fewer than a thousand bits, complementing the design approach of "density evolution" for larger codes. Encoding of LDPC codes based on the PEG construction is also investigated. We show how to exploit the PEG principle to obtain LDPC codes that allow linear-time encoding. We also investigate regular and irregular LDPC codes using PEG Tanner graphs but allowing the symbol nodes to take values over GF(q), q > 2. Analysis and simulation demonstrate that one can obtain better performance with increasing field size, which contrasts with previous observations.
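The edge-by-edge idea can be made concrete with a minimal, hypothetical sketch of a PEG-style construction (not the authors' implementation): each new edge of a symbol node is attached to a check node that is as far away as possible in the current graph, found by BFS, with ties broken by lowest check-node degree.

```python
import random
from collections import deque

def peg_construct(n_sym, n_chk, sym_degrees, seed=0):
    """PEG-style greedy construction: place edges one at a time so that
    each new edge closes the longest possible cycle (or none at all)."""
    rng = random.Random(seed)
    sym_adj = [set() for _ in range(n_sym)]
    chk_adj = [set() for _ in range(n_chk)]

    def check_distances(s):
        # BFS from symbol s; distance to each check node (None = unreachable)
        dist = {('s', s): 0}
        chk_dist = [None] * n_chk
        q = deque([('s', s)])
        while q:
            kind, v = q.popleft()
            nbrs = sym_adj[v] if kind == 's' else chk_adj[v]
            nkind = 'c' if kind == 's' else 's'
            for w in nbrs:
                if (nkind, w) not in dist:
                    dist[(nkind, w)] = dist[(kind, v)] + 1
                    if nkind == 'c':
                        chk_dist[w] = dist[(nkind, w)]
                    q.append((nkind, w))
        return chk_dist

    # process symbol nodes in order of increasing target degree
    for s in sorted(range(n_sym), key=lambda j: sym_degrees[j]):
        for k in range(sym_degrees[s]):
            if k == 0:
                cand = list(range(n_chk))   # first edge: any check node
            else:
                d = check_distances(s)
                unreached = [c for c in range(n_chk) if d[c] is None]
                if unreached:
                    cand = unreached        # attaching here creates no new cycle
                else:
                    far = max(x for x in d if x is not None)
                    cand = [c for c in range(n_chk) if d[c] == far]
            c = min(cand, key=lambda c: (len(chk_adj[c]), rng.random()))
            sym_adj[s].add(c)
            chk_adj[c].add(s)
    return sym_adj
```

The actual PEG algorithm refines this with careful tie-breaking and depth-limited tree expansion; this sketch only shows the greedy skeleton.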
Graph-cover decoding and finite-length analysis of message-passing iterative decoding of LDPC codes
IEEE Trans. Inform. Theory, 2005
Cited by 67 (12 self)
Abstract:
The goal of the present paper is the derivation of a framework for the finite-length analysis of message-passing iterative decoding of low-density parity-check codes. To this end we introduce the concept of graph-cover decoding. Whereas in maximum-likelihood decoding all codewords in a code are competing to be the best explanation of the received vector, under graph-cover decoding all codewords in all finite covers of a Tanner graph representation of the code are competing to be the best explanation. We are interested in graph-cover decoding because it is a theoretical tool that can be used to show connections between linear programming decoding and message-passing iterative decoding. Namely, on the one hand it turns out that graph-cover decoding is essentially equivalent to linear programming decoding. On the other hand, because iterative, locally operating decoding algorithms like message-passing iterative decoding cannot distinguish the underlying Tanner graph from any covering graph, graph-cover decoding can serve as a model to explain the behavior of message-passing iterative decoding. Understanding the behavior of graph-cover decoding is tantamount to understanding the behavior of message-passing iterative decoding.
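The finite covers the abstract refers to are easy to generate: an m-cover replaces every node of the base graph by m copies and every edge by a permutation matching between the copies. A small illustrative helper (function and variable names are my own):

```python
import random

def random_m_cover(edges, m, seed=0):
    """Lift a base (Tanner) graph to a random m-cover: each base node
    becomes m copies, and each base edge (u, v) becomes a perfect
    matching (u, i) -- (v, pi(i)) for a random permutation pi."""
    rng = random.Random(seed)
    lifted = []
    for (u, v) in edges:
        pi = list(range(m))
        rng.shuffle(pi)                   # one permutation per base edge
        for i in range(m):
            lifted.append(((u, i), (v, pi[i])))
    return lifted
```

By construction every copy of a node has exactly the degree of the base node, which is why a locally operating decoder cannot tell the cover from the base graph.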
Explicit Construction of Families of LDPC Codes with No 4-Cycles
IEEE Trans. Inform. Theory, 2003
Cited by 26 (0 self)
Abstract:
LDPC codes are serious contenders to Turbo codes in terms of decoding performance.
A coding theorem for lossy data compression by LDPC codes
IEEE Trans. Inform. Theory, 2003
Cited by 18 (0 self)
Pseudo-Codeword Analysis of Tanner Graphs from Projective and Euclidean Planes
2006
Cited by 17 (3 self)
Abstract:
In order to understand the performance of a code under maximum-likelihood (ML) decoding, one studies the codewords, in particular the minimal codewords, and their Hamming weights. In the context of linear programming (LP) decoding, one's attention needs to be shifted to the pseudo-codewords, in particular to the minimal pseudo-codewords, and their pseudo-weights. In this paper we investigate some families of codes that have good properties under LP decoding, namely certain families of low-density parity-check (LDPC) codes that are derived from projective and Euclidean planes: we study the structure of their minimal pseudo-codewords and give lower bounds on their pseudo-weight.
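For the AWGN channel, the pseudo-weight mentioned above has a well-known closed form, w_p(x) = (sum_i x_i)^2 / (sum_i x_i^2), which reduces to the Hamming weight on 0/1 codewords; a one-function sketch:

```python
def awgn_pseudoweight(x):
    """AWGN pseudo-weight of a nonnegative pseudo-codeword vector x:
    w_p(x) = (sum x_i)^2 / (sum x_i^2).  For a 0/1 codeword this
    reduces to its Hamming weight."""
    s1 = sum(x)
    s2 = sum(v * v for v in x)
    return s1 * s1 / s2
```

A fractional pseudo-codeword such as [2, 1, 1, 0] gets pseudo-weight 16/6 ≈ 2.67, smaller than the weight of any nearby codeword, which is exactly why minimal pseudo-codewords dominate LP-decoding performance.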
Construction of protograph LDPC codes with linear minimum distance
Proc. International Symposium on Information Theory, 2006
Cited by 16 (0 self)
Abstract:
A construction method for protograph-based LDPC codes that simultaneously achieve a low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower-rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
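The check-splitting step can be sketched directly on the base (protograph) matrix. The helper below is an illustrative guess at the bookkeeping, not the authors' tool: it splits one check row into two and ties the halves together with a fresh degree-2 variable node.

```python
def split_check(base, row, split):
    """Split check `row` of a protograph base matrix into two checks,
    moving the edge multiplicities in `split` to the new check, and
    connect the two halves by a new degree-2 variable node.
    `base` is a list of rows; split[j] <= base[row][j] for all j."""
    n = len(base[0])
    old = [base[row][j] - split[j] for j in range(n)]  # edges kept by old check
    new = list(split)                                  # edges moved to new check
    out = [r[:] for r in base]
    out[row] = old
    out.append(new)
    # append the new degree-2 variable column: one edge to each half
    for i, r in enumerate(out):
        r.append(1 if i in (row, len(out) - 1) else 0)
    return out
```

Every variable keeps its original degree and the new column has degree exactly 2, matching the construction described in the abstract.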
On the computation of the minimum distance of low-density parity-check codes
Proc. IEEE International Conference on Communications, 2004
Cited by 16 (1 self)
Abstract:
Low-density parity-check (LDPC) codes in their broader-sense definition are linear codes whose parity-check matrices have fewer 1s than 0s. Finding their minimum distance is therefore in general an NP-hard problem; in other words, there exists no known deterministic polynomial-time algorithm to compute the minimum distance of a particular, nontrivial LDPC code. We propose a randomized algorithm called the approximately nearest codewords (ANC) searching approach to attack this hard problem for iteratively decodable LDPC codes. The principle of the ANC searching approach is to search codewords locally around the all-zero codeword perturbed by a minimum level of noise, anticipating that the resultant nearest nonzero codewords will most likely contain the minimum-Hamming-weight codeword whose Hamming weight is equal to the minimum distance of the linear code. The effectiveness of the algorithm is demonstrated by numerous examples.
Index Terms: minimum distance, LDPC codes, algorithm, NP-hardness
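As a baseline for the NP-hardness point, the only generally correct approach is exhaustive: enumerate all 2^k - 1 nonzero messages. A toy sketch (feasible only for very small k), not the ANC algorithm itself:

```python
from itertools import product

def min_distance(G):
    """Exhaustive minimum distance of a binary linear code with k x n
    generator matrix G (list of 0/1 rows).  Exponential in k, so usable
    only for toy codes -- which is exactly the point of the NP-hardness
    discussion above."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product((0, 1), repeat=k):
        if not any(msg):
            continue                      # skip the all-zero codeword
        cw = [sum(msg[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        best = min(best, sum(cw))
    return best
```

For the standard [7,4] Hamming code this returns 3, its minimum distance; anything beyond k of a few dozen is out of reach, motivating randomized local-search methods like ANC.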
Iterative Decoder Architectures
2002
Cited by 15 (6 self)
Abstract:
Implementation constraints imposed on iterative decoders applying message-passing algorithms are investigated. Serial implementations similar to traditional microprocessor datapaths are compared against architectures with multiple processing elements that exploit the inherent parallelism in the decoding algorithm. Turbo codes and low-density parity-check codes, in particular, are evaluated in terms of their suitability for VLSI implementation in addition to their bit-error-rate performance as a function of signal-to-noise ratio. It is necessary to consider efficient realizations of iterative decoders when area, power, and throughput of the decoding implementation are constrained by practical design issues of communications receivers.
Shortened array codes of large girth
IEEE Trans. Inform. Theory, 2006
Cited by 14 (1 self)
Abstract:
One approach to designing structured low-density parity-check (LDPC) codes with large girth is to shorten codes with small girth in such a manner that the deleted columns of the parity-check matrix contain all the variables involved in short cycles. This approach is especially effective if the parity-check matrix of a code is composed of blocks of circulant permutation matrices, as is the case for the class of codes known as array codes. We show how to shorten array codes by deleting certain columns of their parity-check matrices so as to increase their girth. The shortening approach is based on the observation that for array codes, and in fact for a slightly more general class of LDPC codes, the cycles in the corresponding Tanner graph are governed by certain homogeneous linear equations with integer coefficients. Consequently, we can selectively eliminate cycles from an array code by only retaining those columns from the parity-check matrix of the original code that are indexed by integer sequences that do not contain solutions to the equations governing those cycles. We provide Ramsey-theoretic estimates for the maximum number of columns that can be retained from the original parity-check matrix with the property that the sequence of their indices avoids solutions to various types of cycle-governing equations. This translates to estimates of the rate penalty incurred in shortening a code to eliminate cycles. Simulation results show that for the codes considered, shortening them to increase the girth can lead to significant gains in signal-to-noise ratio in the case of communication over an additive white Gaussian noise channel.
Index Terms: Array codes, LDPC codes, shortening, cycle-governing equations
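One simple homogeneous equation of the kind described is x1 + x3 = 2*x2, whose solutions are three-term arithmetic progressions; the equation set actually used depends on the code and the cycle length being targeted. A hedged sketch of greedily retaining column indices that avoid all solutions of this one example equation:

```python
def greedy_ap_free(p):
    """Greedily pick column indices from 0..p-1 so that no three chosen
    indices solve x1 + x3 == 2*x2 (a three-term arithmetic progression).
    Illustrative only: real cycle-governing equations differ per code."""
    chosen = []
    for x in range(p):
        # reject x if it completes a progression with any chosen pair
        bad = any(a != b and (a + x == 2 * b or b + x == 2 * a or a + b == 2 * x)
                  for a in chosen for b in chosen)
        if not bad:
            chosen.append(x)
    return chosen
```

The greedy choice reproduces the classic progression-free sequence 0, 1, 3, 4, 9, 10, 12, 13, ...; the fraction of indices retained is the rate penalty the abstract estimates via Ramsey theory.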
Results on Punctured Low-Density Parity-Check Codes and Improved Iterative Decoding Techniques
IEEE Trans. Inform. Theory, 2007
Cited by 13 (5 self)
Abstract:
This paper first introduces an improved decoding algorithm for low-density parity-check (LDPC) codes over binary-input output-symmetric memoryless channels. Then some fundamental properties of punctured LDPC codes are presented. It is proved that for any ensemble of LDPC codes, there exists a puncturing threshold. It is then proved that for any rates R1 and R2 satisfying 0 < R1 < R2 < 1, there exists an ensemble of LDPC codes with the following property: the ensemble can be punctured from rate R1 to R2, resulting in asymptotically good codes for all rates R1 ≤ R ≤ R2. Specifically, this implies that rates arbitrarily close to one are achievable via puncturing. Bounds on the performance of punctured LDPC codes are also presented. It is also shown that punctured LDPC codes are as good as ordinary LDPC codes. For the BEC and arbitrary positive numbers R1 < R2 < 1, the existence of sequences of punctured LDPC codes that are capacity-achieving for all rates R1 ≤ R ≤ R2 is shown. Based on the above observations, a method is proposed to design good punctured LDPC codes over a broad range of rates. Finally, it is shown that the results of this paper may be used for the proof of the existence of capacity-achieving LDPC codes over binary-input output-symmetric memoryless channels.
Index Terms: Bipartite graphs, capacity-achieving codes, erasure channel, improved decoding, low-density parity-check (LDPC) codes, iterative decoding, punctured codes, rate-adaptive codes, rate-compatible codes, symmetric channels.
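Puncturing is commonly modeled as erasing the punctured bits at the receiver; on the BEC the standard iterative decoder is the peeling decoder, which the following self-contained sketch implements at toy scale (names and data layout are my own):

```python
def peel_erasures(H, received):
    """Iterative erasure (peeling) decoder.  `H` is a list of checks,
    each a list of variable indices; received[i] is 0, 1, or None for
    an erased (e.g. punctured) position.  Repeatedly solves any check
    with exactly one erased variable; leaves None where decoding stalls."""
    word = list(received)
    progress = True
    while progress:
        progress = False
        for check in H:
            erased = [v for v in check if word[v] is None]
            if len(erased) == 1:
                # the single unknown must make the check sum to 0 mod 2
                s = sum(word[v] for v in check if word[v] is not None) % 2
                word[erased[0]] = s
                progress = True
    return word
```

If too many positions are punctured the peeling process stalls on a stopping set, which is the finite-length counterpart of the puncturing threshold discussed in the abstract.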