Results 1–10 of 51
On the stopping distance and the stopping redundancy of codes
 IEEE Trans. Inf. Theory
, 2006
Cited by 41 (2 self)
Abstract — It is now well known that the performance of a linear code C under iterative decoding on a binary erasure channel (and other channels) is determined by the size of the smallest stopping set in the Tanner graph for C. Several recent papers refer to this parameter as the stopping distance s of C. This is somewhat of a misnomer, since the size of the smallest stopping set in the Tanner graph for C depends on the corresponding choice of a parity-check matrix. It is easy to see that s ≤ d, where d is the minimum Hamming distance of C, and we show that it is always possible to choose a parity-check matrix for C (with sufficiently many dependent rows) such that s = d. We thus introduce a new parameter, termed the stopping redundancy of C, defined as the minimum number of rows in a parity-check matrix H for C such that the corresponding stopping distance s(H) attains its largest possible value, namely s(H) = d. We then derive general bounds on the stopping redundancy of linear codes. We also examine several simple ways of constructing codes from other codes, and study the effect of these constructions on the stopping redundancy. Specifically, for the family of binary Reed–Muller codes (of all orders), we prove that their stopping redundancy is at most a constant times their conventional redundancy. We show that the stopping redundancies of the binary and ternary extended Golay codes are at most 34 and 22, respectively. Finally, we provide upper and lower bounds on the stopping redundancy of MDS codes.
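The defining condition is easy to state in code. As an illustration (not from the paper), the sketch below brute-forces the stopping distance s(H) of a small parity-check matrix: a nonempty set S of variable nodes is a stopping set exactly when no check node has a single neighbor in S. The function name and the example matrix are my own choices.

```python
from itertools import combinations

def stopping_distance(H):
    """Size of the smallest nonempty stopping set of parity-check matrix H.

    A set S of variable nodes (columns) is a stopping set when no check
    node (row) has exactly one neighbor in S.
    """
    m, n = len(H), len(H[0])
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if all(sum(H[r][c] for c in S) != 1 for r in range(m)):
                return size
    return None  # no nonempty stopping set (degenerate H only)

# A standard parity-check matrix of the [7,4] Hamming code (d = 3);
# its columns are distinct and nonzero, so s(H) = d = 3 here.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

Since the support of any nonzero codeword is a stopping set, s(H) ≤ d always holds; the brute force is exponential and only meant for toy examples.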
Improved Upper Bounds on Stopping Redundancy
, 2007
Cited by 22 (3 self)
For a linear block code with minimum distance d, its stopping redundancy is the minimum number of check nodes in a Tanner graph representation of the code such that all nonempty stopping sets have size d or larger. We derive new upper bounds on stopping redundancy for all linear codes in general, and for maximum distance separable (MDS) codes specifically, and show how they improve upon previous results. For MDS codes, the new bounds are found by upper-bounding the stopping redundancy by a combinatorial quantity closely related to Turán numbers.
Density Evolution for Asymmetric Memoryless Channels
 3rd International Symposium on Turbo Codes and Related Topics
Cited by 21 (4 self)
Abstract — Density evolution is one of the most powerful analytical tools for low-density parity-check (LDPC) codes and graph codes with message-passing decoding algorithms. With channel symmetry as one of its fundamental assumptions, density evolution (DE) has been widely and successfully applied to different channels, including binary erasure channels, binary symmetric channels, binary additive white Gaussian noise channels, etc. This paper generalizes density evolution for non-symmetric memoryless channels, which in turn broadens the applications to general memoryless channels, e.g. Z-channels, composite white Gaussian noise channels, etc. The central theorem underpinning this generalization is the convergence to perfect projection for any fixed-size supporting tree. A new iterative formula of the same complexity is then presented and the necessary theorems for the performance concentration theorems are developed. Several properties of the new density evolution method are explored, including stability results for general asymmetric memoryless channels. Simulations, code optimizations, and possible new applications suggested by this new density evolution method are also provided. This result is also used to prove the typicality of linear LDPC codes among the coset code ensemble when the minimum check node degree is sufficiently large. It is shown that the convergence to perfect projection is essential to the belief propagation algorithm even when only symmetric channels are considered. Hence the proof of the convergence to perfect projection serves also as a completion of the theory of classical density evolution for symmetric memoryless channels. Index Terms — Low-density parity-check (LDPC) codes, density evolution, sum-product algorithm, asymmetric channels, Z-channels, rank of random matrices.
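For contrast with the asymmetric generalization the paper develops, the classical symmetric-channel case admits a one-line recursion. The sketch below (an illustration, not code from the paper) iterates standard density evolution for a regular (dv, dc) ensemble on the binary erasure channel; the function name and the convergence test are my own.

```python
def de_bec_converges(eps, dv, dc, iters=2000, tol=1e-9):
    """Classical density evolution for the regular (dv, dc) LDPC ensemble
    on a BEC with erasure probability eps:

        x_{l+1} = eps * (1 - (1 - x_l)**(dc - 1))**(dv - 1)

    Returns True if the message erasure probability falls below tol,
    i.e. eps is below the BP decoding threshold of the ensemble."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False
```

For the (3, 6) ensemble the BP threshold on the BEC is about 0.4294, so eps = 0.40 converges while eps = 0.45 does not.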
Analysis of absorbing sets and fully absorbing sets of array-based LDPC codes
 IEEE Trans. Inf. Theory
, 2008
Cited by 18 (12 self)
The class of low-density parity-check (LDPC) codes is attractive, since such codes can be decoded using practical message-passing algorithms, and their performance is known to approach the Shannon limits for suitably large blocklengths. For the intermediate blocklengths relevant in applications, however, many LDPC codes exhibit a so-called “error floor”, corresponding to a significant flattening in the curve that relates signal-to-noise ratio (SNR) to the bit error rate (BER) level. Previous work has linked this behavior to combinatorial substructures within the Tanner graph associated with an LDPC code, known as (fully) absorbing sets. These fully absorbing sets correspond to a particular type of near-codewords or trapping sets that are stable under bit-flipping operations, and exert the dominant effect on the low-BER behavior of structured LDPC codes. This paper provides a detailed theoretical analysis of these (fully) absorbing sets for the class of C(p, γ) array-based LDPC codes, including the characterization of all minimal (fully) absorbing sets for the array-based LDPC codes for γ = 2, 3, 4, and moreover, it provides the development of techniques to enumerate them exactly. Theoretical results of this type provide a foundation for predicting and extrapolating the error floor behavior of LDPC codes.
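An absorbing set can be checked directly from its definition. The sketch below (illustrative, not the authors' code) tests whether a candidate set D of variable nodes is an (a, b) absorbing set: b checks see D an odd number of times, and every node of D has strictly more neighbors among even-degree ("satisfied") checks than among odd-degree ones. The example matrix is a Hamming-code parity-check matrix chosen for its small size, not an array-based code.

```python
def absorbing_set_profile(H, D):
    """Return (a, b) if the variable-node set D is an absorbing set of H,
    else None."""
    m = len(H)
    deg = [sum(H[r][c] for c in D) for r in range(m)]  # check degrees into D
    odd = [r for r in range(m) if deg[r] % 2 == 1]     # unsatisfied checks
    for c in D:
        n_odd = sum(H[r][c] for r in odd)
        n_even = sum(H[r][c] for r in range(m)) - n_odd
        if n_odd >= n_even:   # node fails the strict-majority condition
            return None
    return (len(D), len(odd))

# Small example: in any code, the support of a nonzero codeword is an
# (a, 0) absorbing set, since every check has even degree into it.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

The stability under bit-flipping mentioned in the abstract is exactly the strict-majority condition in the loop: each node in D sees more satisfied than unsatisfied checks, so a bit-flipping decoder will not flip it.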
Asymptotic spectra of trapping sets in regular and irregular LDPC code ensembles
 IEEE Trans. Inf. Theory
, 2007
Cited by 17 (2 self)
We address the problem of evaluating the asymptotic, normalized distributions of a class of combinatorial configurations in random, regular, binary low-density parity-check (LDPC) code ensembles. Among the configurations considered are trapping and stopping sets; these sets represent induced subgraphs in the Tanner graph of a code that, for certain classes of channels, exhibit a strong influence on the height and point of onset of the error floor. The techniques used in the derivations are based on large deviation theory and statistical methods for enumerating random-like matrices. These techniques can also be applied in a setting that involves more general structural entities such as subcodes and/or minimal codewords, which are known to characterize other important properties of soft-decision decoders of linear block codes.
Incremental redundancy hybrid ARQ with LDPC and Raptor codes, submitted to
 IEEE Trans. Inf. Theory
, 2005
Cited by 15 (2 self)
Two incremental redundancy hybrid ARQ (IR-HARQ) schemes are proposed, analyzed, and compared: one is based on LDPC code ensembles with random transmission assignments, the other is based on recently introduced Raptor codes. A number of important issues, such as rate and power control, Raptor code design, and error rate performance after each transmission on time-varying binary-input, symmetric-output channels, are addressed by analyzing the performance of LDPC and Raptor codes on parallel channels. The spectrum properties of LDPC code ensembles that are necessary for this analysis are derived. A set of rules for incrementing redundancy and setting the signal power at each transmission in order to maximize the throughput is derived for both schemes. The theoretical results obtained for random code ensembles are tested on several practical code examples by simulation. Both theoretical and simulation results show that both LDPC and Raptor codes are suitable for HARQ schemes. Which codes make the better choice depends mainly on the width of the operating range of the HARQ scheme, prior knowledge of that range, and other design parameters and constraints dictated by standards.
Results on Punctured Low-Density Parity-Check Codes and Improved Iterative Decoding Techniques
 IEEE Trans. Inf. Theory
, 2007
Cited by 13 (5 self)
Abstract — This paper first introduces an improved decoding algorithm for low-density parity-check (LDPC) codes over binary-input output-symmetric memoryless channels. Then some fundamental properties of punctured LDPC codes are presented. It is proved that for any ensemble of LDPC codes, there exists a puncturing threshold. It is then proved that for any rates R1 and R2 satisfying 0 < R1 < R2 < 1, there exists an ensemble of LDPC codes with the following property: the ensemble can be punctured from rate R1 to R2, resulting in asymptotically good codes for all rates R1 ≤ R ≤ R2. Specifically, this implies that rates arbitrarily close to one are achievable via puncturing. Bounds on the performance of punctured LDPC codes are also presented. It is also shown that punctured LDPC codes are as good as ordinary LDPC codes. For the BEC and arbitrary positive numbers R1 < R2 < 1, the existence of sequences of punctured LDPC codes that are capacity-achieving for all rates R1 ≤ R ≤ R2 is shown. Based on the above observations, a method is proposed to design good punctured LDPC codes over a broad range of rates. Finally, it is shown that the results of this paper may be used for the proof of the existence of capacity-achieving LDPC codes over binary-input output-symmetric memoryless channels. Index Terms — Bipartite graphs, capacity-achieving codes, erasure channel, improved decoding, low-density parity-check (LDPC) codes, iterative decoding, punctured codes, rate-adaptive codes, rate-compatible codes, symmetric channels.
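The rate arithmetic behind "punctured from rate R1 to R2" is elementary: puncturing a fraction p of the code bits of a rate-R code yields rate R/(1 − p), so reaching R2 from R1 requires p = 1 − R1/R2. A minimal sketch (helper names are my own, not from the paper):

```python
def punctured_rate(R, p):
    """Rate after puncturing a fraction p of the bits of a rate-R code."""
    assert 0.0 <= p < 1.0 - R, "rate must remain below 1"
    return R / (1.0 - p)

def puncture_fraction(R1, R2):
    """Fraction of bits to puncture to raise rate R1 to R2 (0 < R1 <= R2 < 1)."""
    assert 0.0 < R1 <= R2 < 1.0
    return 1.0 - R1 / R2
```

For example, puncturing a quarter of the bits of a rate-1/2 code gives rate 2/3; as p approaches 1 − R the rate approaches one, which is the sense in which rates arbitrarily close to one are reachable by puncturing alone.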
On the stopping redundancy of Reed–Muller codes
 IEEE Trans. Inf. Theory
, 2006
Cited by 10 (0 self)
Abstract — The stopping redundancy of a code is an important parameter which arises from analyzing the performance of a linear code under iterative decoding on a binary erasure channel. In this paper, we consider the stopping redundancy of Reed–Muller codes and related codes. Let R(r, m) be the Reed–Muller code of length 2^m and order r. Schwartz and Vardy gave a recursive construction of parity-check matrices for the Reed–Muller codes, and asked whether the number of rows in those parity-check matrices is the stopping redundancy of the codes. We prove that the stopping redundancy of R(m − 2, m), which is also the extended Hamming code of length 2^m, is 2^m − 1, and thus show that the recursive bound is tight in this case. We prove that the stopping redundancy of the simplex code equals its redundancy. Several constructions of codes for which the stopping redundancy equals the redundancy are discussed. We prove an upper bound on the stopping redundancy of R(1, m). This bound is better than the known recursive bound and thus gives a negative answer to the question of Schwartz and Vardy. Index Terms — Iterative decoding on binary erasure channel, Reed–Muller codes, simplex codes, stopping distance, stopping redundancy, stopping sets.
Finite size scaling for the core of large random hypergraphs
, 2008
Cited by 7 (4 self)
The (two-)core of a hypergraph is the maximal collection of hyperedges within which no vertex appears only once. It is of importance in tasks such as efficiently solving a large linear system over GF[2], or iterative decoding of low-density parity-check codes used over the binary erasure channel. Similar structures emerge in a variety of NP-hard combinatorial optimization and decision problems, from vertex cover to satisfiability. For a uniformly chosen random hypergraph of m = nρ vertices and n hyperedges, each consisting of the same fixed number l ≥ 3 of vertices, the size of the core exhibits for large n a first-order phase transition, changing from o(n) for ρ > ρc to a positive fraction of n for ρ < ρc, with a transition window of size Θ(n^{-1/2}) around ρc > 0. Analyzing the corresponding “leaf removal” algorithm, we determine the associated finite-size scaling behavior. In particular, if ρ is inside the scaling window (more precisely, ρ = ρc + r n^{-1/2}), the probability of having a core of size Θ(n) has a limit strictly between 0 and 1, and a leading correction of order Θ(n^{-1/6}). The correction admits a sharp characterization in terms of the distribution of a Brownian motion with quadratic shift, from which it inherits the scaling with n. This behavior is expected to be universal for a wide collection of combinatorial problems.
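The "leaf removal" algorithm the abstract analyzes has a direct implementation: while some vertex lies in exactly one surviving hyperedge, delete that hyperedge; what survives is the 2-core. A minimal sketch (illustrative, not the authors' code):

```python
from collections import Counter

def two_core(hyperedges):
    """Leaf removal: repeatedly delete every hyperedge containing a vertex
    of degree one; the surviving hyperedges form the 2-core."""
    edges = [set(e) for e in hyperedges]
    alive = [True] * len(edges)
    while True:
        # degree of each vertex among the surviving hyperedges
        deg = Counter(v for i, e in enumerate(edges) if alive[i] for v in e)
        peeled = [i for i, e in enumerate(edges)
                  if alive[i] and any(deg[v] == 1 for v in e)]
        if not peeled:
            return [e for i, e in enumerate(edges) if alive[i]]
        for i in peeled:
            alive[i] = False
```

On the Tanner graph of an LDPC code over the BEC, this peeling is exactly the iterative erasure decoder: decoding succeeds precisely when the sub-hypergraph induced by the erased positions has an empty 2-core.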