Good Error-Correcting Codes based on Very Sparse Matrices, 1999
Cited by 513 (25 self)
Abstract:
We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay-Neal) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are "very good," in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.
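The sum-product decoder the abstract names is too involved for a short sketch, but Gallager's much simpler hard-decision "bit flipping" decoder already shows how a sparse parity-check matrix H drives decoding. The sketch below is hypothetical and uses the (7,4) Hamming code's parity-check matrix, which is not low-density but is small enough to trace by hand.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (illustration only;
# real Gallager codes use much larger, very sparse H).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=int)

def bit_flip_decode(H, r, max_iters=10):
    """Repeatedly flip the bit involved in the most unsatisfied checks."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2        # 1 marks a failing parity check
        if not syndrome.any():
            break                   # all checks satisfied: done
        unsat = H.T @ syndrome      # per-bit count of failing checks
        r[np.argmax(unsat)] ^= 1    # flip the worst offender
    return r

received = np.zeros(7, dtype=int)   # the all-zero codeword...
received[2] ^= 1                    # ...hit by a single channel error
decoded = bit_flip_decode(H, received)
```

The sum-product algorithm replaces these hard flips with probabilistic messages passed along the edges of the same sparse graph.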
Near Shannon Limit Performance of Low Density Parity Check Codes. Electronics Letters, 1996
Cited by 306 (22 self)
Abstract:
We report the empirical performance of Gallager's low density parity check codes on Gaussian channels. We show that performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed the performance is almost as close to the Shannon limit as that of Turbo codes.
Expander Graphs and their Applications, 2003
Cited by 188 (5 self)
Abstract:
Contents (excerpt): 1 The Magical Mystery Tour; 1.1 Some Problems (1.1.1 Hardness results for linear transformation; 1.1.2 Error Correcting Codes; 1.1.3 Derandomizing Algorithms); 1.2 Magical Graphs (1.2.1 A Super Concentrator with O(n) edges; 1.2.2 Error Correcting Codes; 1.2.3 Derandomizing Random Algorithms); 1.3 Conclusions ...
On the Optimality of Solutions of the Max-Product Belief Propagation Algorithm in Arbitrary Graphs, 2001
Cited by 185 (15 self)
Abstract:
Graphical models, such as Bayesian networks and Markov random fields, represent statistical dependencies of variables by a graph. The max-product "belief propagation" algorithm is a local message-passing algorithm on this graph that is known to converge to a unique fixed point when the graph is a tree. Furthermore, when the graph is a tree, the assignment based on the fixed point yields the most probable a posteriori (MAP) values of the unobserved variables given the observed ones. Recently, good ...
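The tree-exactness claim is easy to check numerically. A minimal sketch on a three-variable chain (a tree) with made-up pairwise potentials, comparing the max-product answer against brute-force enumeration:

```python
import itertools
import numpy as np

# Hypothetical chain x1 - x2 - x3 of binary variables with made-up
# pairwise potentials. On a tree, max-product recovers the exact MAP.
psi12 = np.array([[1.0, 0.5],
                  [0.5, 2.0]])   # psi12[x1, x2]
psi23 = np.array([[1.5, 0.2],
                  [0.2, 1.0]])   # psi23[x2, x3]

def map_by_max_product():
    # collect messages at the root x2 (one inward pass suffices on a chain)
    m1_to_2 = psi12.max(axis=0)   # max over x1, for each value of x2
    m3_to_2 = psi23.max(axis=1)   # max over x3, for each value of x2
    x2 = int((m1_to_2 * m3_to_2).argmax())
    # backtrack: pick the maximizing neighbor value given x2
    x1 = int(psi12[:, x2].argmax())
    x3 = int(psi23[x2, :].argmax())
    return x1, x2, x3

def map_by_brute_force():
    return max(itertools.product([0, 1], repeat=3),
               key=lambda x: psi12[x[0], x[1]] * psi23[x[1], x[2]])
```

On a graph with loops no such backtracking argument applies, which is what makes the paper's analysis of arbitrary graphs nontrivial.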
Correctness of Local Probability Propagation in Graphical Models with Loops, 2000
Cited by 178 (9 self)
Abstract:
This article analyzes the behavior of local propagation rules in graphical models with a loop.
Low-density parity-check codes based on finite geometries: A rediscovery and new results. IEEE Trans. Inform. Theory, 2001
Cited by 119 (4 self)
Abstract:
This paper presents a geometric approach to the construction of low-density parity-check (LDPC) codes. Four classes of LDPC codes are constructed based on the lines and points of Euclidean and projective geometries over finite fields. Codes of these four classes have good minimum distances and their Tanner graphs have girth 6. Finite-geometry LDPC codes can be decoded in various ways, ranging from low to high decoding complexity and from reasonably good to very good performance. They perform very well with iterative decoding. Furthermore, they can be put in either cyclic or quasi-cyclic form. Consequently, their encoding can be achieved in linear time and implemented with simple feedback shift registers. This advantage is not shared by other LDPC codes in general and is important in practice. Finite-geometry LDPC codes can be extended and shortened in various ways to obtain other good LDPC codes. Several techniques of extension and shortening are presented. Long extended finite-geometry LDPC codes have been constructed and they achieve a performance only a few tenths of a decibel away from the Shannon theoretical limit with iterative decoding.
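The linear-time-encoding remark follows from the cyclic structure: a cyclic code's codewords are the multiples of its generator polynomial g(x) modulo x^n - 1, so nonsystematic encoding is polynomial multiplication, which is exactly the operation a feedback shift register performs one coefficient per clock. A hypothetical sketch over GF(2) using the small (7,4) cyclic Hamming code as a stand-in (the finite-geometry codes of the paper are far longer):

```python
import numpy as np

# Generator polynomial g(x) = 1 + x + x^3 of the (7,4) cyclic Hamming
# code, used purely as a small illustration of cyclic encoding.
n = 7
g = np.array([1, 1, 0, 1])  # coefficients of 1 + x + x^3 over GF(2)

def encode(msg):
    """Nonsystematic cyclic encoding: c(x) = m(x) * g(x) over GF(2)."""
    c = np.convolve(msg, g) % 2          # polynomial multiplication mod 2
    return np.pad(c, (0, n - len(c)))    # pad up to block length n

c = encode(np.array([1, 0, 0, 0]))       # encodes m(x) = 1, giving g itself
# Cyclic property: a shift of a codeword is again a codeword -- here
# x * g(x), i.e. the encoding of the shifted message.
shifted = np.roll(c, 1)
```

In hardware the `np.convolve` line becomes a handful of XOR gates and a shift register, which is why cyclic and quasi-cyclic structure matters in practice.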
Regular and Irregular Progressive Edge-Growth Tanner Graphs. IEEE Trans. Inform. Theory, 2003
Cited by 91 (0 self)
Abstract:
We propose a general method for constructing Tanner graphs having a large girth by progressively establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called progressive edge-growth (PEG) construction. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. The PEG construction attains essentially the same girth as Gallager's explicit construction for regular graphs, both of which meet or exceed the Erdős-Sachs bound. Asymptotic analysis of a relaxed version of the PEG construction is presented. We describe an empirical approach using a variant of the "downhill simplex" search algorithm to design irregular PEG graphs for short codes with fewer than a thousand bits, complementing the design approach of "density evolution" for larger codes. Encoding of LDPC codes based on the PEG construction is also investigated. We show how to exploit the PEG principle to obtain LDPC codes that allow linear-time encoding. We also investigate regular and irregular LDPC codes using PEG Tanner graphs but allowing the symbol nodes to take values over GF(q), q > 2. Analysis and simulation demonstrate that one can obtain better performance with increasing field size, which contrasts with previous observations.
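The greedy rule behind the PEG construction fits in a few lines: for each new edge of a symbol node, breadth-first-search the current graph and attach the node to a check that is unreachable (so the edge creates no new cycle) or, failing that, one as far away as possible, preferring low check degree. The sketch below is a simplified toy version of that idea, not the paper's full procedure:

```python
from collections import deque

def peg_tanner_graph(n_sym, n_chk, sym_degree):
    """Toy progressive edge-growth: each new edge of a symbol node goes to an
    unreachable check if one exists, else to a farthest check, breaking ties
    toward low check degree (a simplification of the published procedure)."""
    sym_nbrs = [set() for _ in range(n_sym)]   # symbol -> checks
    chk_nbrs = [set() for _ in range(n_chk)]   # check  -> symbols

    def check_distances(s):
        # BFS over the bipartite graph from symbol s; check distances only
        dist = {c: 1 for c in sym_nbrs[s]}
        seen_sym = {s}
        queue = deque(dist.items())
        while queue:
            c, d = queue.popleft()
            for s2 in chk_nbrs[c] - seen_sym:
                seen_sym.add(s2)
                for c2 in sym_nbrs[s2]:
                    if c2 not in dist:
                        dist[c2] = d + 2
                        queue.append((c2, d + 2))
        return dist

    for s in range(n_sym):
        for _ in range(sym_degree):
            dist = check_distances(s)
            cands = [c for c in range(n_chk) if c not in dist]
            if not cands:                       # every check reachable:
                far = max(dist.values())        # connect as far away as possible
                cands = [c for c, d in dist.items()
                         if d == far and c not in sym_nbrs[s]]
            if not cands:                       # tiny-graph fallback
                cands = [c for c in range(n_chk) if c not in sym_nbrs[s]]
            c = min(cands, key=lambda c: (len(chk_nbrs[c]), c))
            sym_nbrs[s].add(c)
            chk_nbrs[c].add(s)
    return sym_nbrs
```

Pushing each edge as far from its symbol node as possible is what keeps short cycles, and hence small girth, from forming.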
Low Density Parity Check Codes over GF(q). IEEE Communications Letters, 1996
Cited by 84 (17 self)
Abstract:
Gallager's low density parity check codes over GF(2) have been shown to have near Shannon limit performance when decoded using a probabilistic decoding algorithm. In this paper we report the empirical performance of the analogous codes defined over GF(q) for q > 2. I. Background. Codes defined in terms of a nonsystematic low density parity check matrix [1, 2] are asymptotically good, and can be practically decoded with Gallager's belief propagation algorithm [3, 4, 5]. Our proof in [5] shows that they are asymptotically good codes for a wide class of channels, not just for the memoryless binary symmetric channel. We expect the generalization of these codes to finite fields GF(q) for q > 2 to be useful for the q-ary symmetric channel, and possibly for other channels such as the binary symmetric channel. Definition 1. The weight of a vector or matrix is the number of nonzero elements in it. We denote the weight of a vector x by w(x). The density of a source of random elements is ...
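Definition 1 and the basic parity-check relation carry over to GF(q) directly: for prime q, a word x is a codeword of H exactly when Hx = 0 (mod q). A small hypothetical example with q = 5 (the matrix and vector below are made up for illustration):

```python
import numpy as np

q = 5  # a prime, so the integers mod q form the field GF(5)

def weight(x):
    """Definition 1: the number of nonzero elements of a vector."""
    return int(np.count_nonzero(np.asarray(x) % q))

# A (very) sparse parity-check matrix over GF(5); x is a codeword of H
# exactly when every check sums to zero mod q.
H = np.array([[1, 2, 0, 0],
              [0, 3, 1, 4]])
x = np.array([3, 1, 2, 0])     # satisfies both checks, weight 3
syndrome = H @ x % q           # -> [0, 0]
```

The q-ary belief propagation decoder the letter studies passes length-q probability vectors along the edges of this same sparse matrix instead of single bit probabilities.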
Good Codes based on Very Sparse Matrices. Cryptography and Coding: 5th IMA Conference, number 1025 in Lecture Notes in Computer Science, 1995
Cited by 80 (11 self)
Abstract:
We present a new family of error-correcting codes for the binary symmetric channel. These codes are designed to encode a sparse source, and are defined in terms of very sparse invertible matrices, in such a way that the decoder can treat the signal and the noise symmetrically. The decoding problem involves only very sparse matrices and sparse vectors, and so is a promising candidate for practical decoding. It can be proved that these codes are `very good', in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. We give experimental results using a free energy minimization algorithm and a belief propagation algorithm for decoding, demonstrating practical performance superior to that of both Bose-Chaudhuri-Hocquenghem codes and Reed-Muller codes over a wide range of noise levels. We regret that lack of space prevents presentation of all our theoretical and experimental results. The full text of this paper may be found elsewhere ...
Weaknesses of Margulis and Ramanujan-Margulis Low-Density Parity-Check Codes. Electronic Notes in Theoretical Computer Science, 2003
Cited by 65 (1 self)
Abstract:
We report weaknesses in two algebraic constructions of low-density parity-check codes based on expander graphs. The Margulis construction gives a code with near-codewords, which cause problems for the sum-product decoder; the Ramanujan-Margulis construction gives a code with low-weight codewords, which produce an error floor.