## Good Codes based on Very Sparse Matrices (1995)


Venue: Cryptography and Coding: 5th IMA Conference, number 1025 in Lecture Notes in Computer Science

Citations: 83 (11 self)

### BibTeX

```bibtex
@INPROCEEDINGS{MacKay95goodcodes,
  author    = {David J. C. MacKay and Radford M. Neal},
  title     = {Good Codes based on Very Sparse Matrices},
  booktitle = {Cryptography and Coding: 5th IMA Conference, number 1025 in Lecture Notes in Computer Science},
  year      = {1995},
  pages     = {100--111},
  publisher = {Springer}
}
```


### Abstract

We present a new family of error-correcting codes for the binary symmetric channel. These codes are designed to encode a sparse source, and are defined in terms of very sparse invertible matrices, in such a way that the decoder can treat the signal and the noise symmetrically. The decoding problem involves only very sparse matrices and sparse vectors, and so is a promising candidate for practical decoding. It can be proved that these codes are `very good', in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. We give experimental results using a free energy minimization algorithm and a belief propagation algorithm for decoding, demonstrating practical performance superior to that of both Bose-Chaudhuri-Hocquenghem codes and Reed-Muller codes over a wide range of noise levels. We regret that lack of space prevents presentation of all our theoretical and experimental results. The full text of this paper may be found elsewhere [6].
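The decoding problem underlying the abstract is the linear system Ax = z over GF(2) with a very sparse A. As a minimal illustration (the representation and names below are ours, chosen for clarity, not taken from the paper), a very sparse matrix can be stored as per-row lists of column indices, so each check is the XOR of just a few bits of x:

```python
# Sketch: the mod-2 sparse matrix-vector product at the heart of Ax = z.
# A is stored as one list of column indices per row, so each output bit
# touches only the few columns where that row of A contains a 1.

def sparse_matvec_mod2(rows, x):
    """rows[n] lists the columns where row n of A contains a 1."""
    return [sum(x[j] for j in cols) % 2 for cols in rows]

# A 3x4 matrix with only two 1s per row:
A_rows = [[0, 1], [1, 2], [2, 3]]
x = [1, 0, 1, 1]
z = sparse_matvec_mod2(A_rows, x)  # each check XORs two bits of x
assert z == [1, 1, 0]
```

The cost of computing z is proportional to the number of 1s in A rather than to its full N x (K+N) size, which is what makes very sparse matrices attractive for practical decoding.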

### Citations

9231 citations
Elements of Information Theory
- Cover, Thomas
- 1990
Citation context: "...describe empirical results with a practical decoding algorithm. 2.5 Theoretical properties proven for MN codes. In [6] we prove properties of these codes by studying properties of a `typical set decoder' [3] for the decoding problem Ax = z, averaging over an ensemble of random matrices A. We prove two theorems (our proofs are computer-aided), whose implications are as follows. ..."

7495 citations
Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference
- Pearl
- 1988
Citation context: "...We have developed a `belief net decoder' for the problem Ax = z mod 2, which generalizes the methods of Gallager [4] and Meier and Staffelbach [9] by using methods of belief propagation over networks [11]. We refer to the elements z_n corresponding to each row n = 1...N of A as checks. We think of the set of bits x and checks z as making up a `belief network', also known as a `Bayesian network'..."

7148 citations
The Mathematical Theory of Communication
- Shannon, Weaver
- 1949
Citation context: "...noise levels. We regret that lack of space prevents presentation of all our theoretical and experimental results. The full text of this paper may be found elsewhere [6]. 1 Background. In 1948, Shannon [14] proved that there exist block codes, for a given memoryless channel, that achieve arbitrarily small probability of error ε at any communication rate R up to the capacity C of the channel. ..."
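For the binary symmetric channel the capacity C in the excerpt above has the closed form C = 1 - H2(f), where H2 is the binary entropy and f the flip probability. A small sketch (the function names are ours):

```python
import math

def binary_entropy(f):
    """H2(f) in bits, with the convention H2(0) = H2(1) = 0."""
    if f in (0.0, 1.0):
        return 0.0
    return -f * math.log2(f) - (1 - f) * math.log2(1 - f)

def bsc_capacity(f):
    """Capacity of the binary symmetric channel with flip probability f."""
    return 1.0 - binary_entropy(f)

# A noiseless channel carries 1 bit per use; at f = 0.5 the capacity is 0.
assert bsc_capacity(0.0) == 1.0
assert bsc_capacity(0.5) == 0.0
```

For example, at f = 0.11 the capacity is very close to 0.5 bits per channel use, so rate-1/2 codes at that noise level are operating near the Shannon limit.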

2089 citations
The Theory of Error-Correcting Codes
- MacWilliams, Sloane
- 1977
Citation context: "...is r = t + n mod 2 (6), where the noise, n, is assumed to be a sparse random vector with independent identically distributed bits, density f_n. The first step of the decoding is to compute z = C_n r (7), which takes time of order Nt. Because z = C_n(t + n) = C_s s + C_n n, the decoding task is then to solve for x = [s; n] the equation Ax = z (8), where A is the N by (K+N) matrix [C_s C_n] (see f..."
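The chain of identities in this excerpt can be checked numerically. The sketch below uses small dense toy matrices of our own (the real construction uses very sparse C_s and C_n): transmit t satisfying C_n t = C_s s mod 2, add noise n, and confirm that the decoder's checks z = C_n r equal A x for A = [C_s C_n] and x = [s; n]:

```python
# Numerical check of eqs. (6)-(8): r = t + n, z = C_n r = C_s s + C_n n = A x.
# Matrices are toy examples, dense for clarity; C_n is lower-triangular with
# unit diagonal so it is invertible over GF(2).

def matvec_mod2(M, v):
    """Dense matrix-vector product over GF(2)."""
    return [sum(m * vi for m, vi in zip(row, v)) % 2 for row in M]

def solve_lower_mod2(L, b):
    """Forward substitution mod 2 for lower-triangular L with unit diagonal
    (addition and subtraction coincide mod 2)."""
    t = []
    for i, row in enumerate(L):
        t.append((b[i] + sum(row[j] * t[j] for j in range(i))) % 2)
    return t

C_s = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]   # N x K matrix acting on the source
C_n = [[1, 0, 0], [1, 1, 0], [0, 1, 1]]   # N x N invertible matrix acting on the noise
s = [1, 0, 1]                             # source block
n = [0, 0, 1]                             # sparse channel noise

t = solve_lower_mod2(C_n, matvec_mod2(C_s, s))   # transmitted word: C_n t = C_s s
r = [(ti + ni) % 2 for ti, ni in zip(t, n)]      # received word, eq. (6)
z = matvec_mod2(C_n, r)                          # decoder's first step, eq. (7)
Ax = [(a + b) % 2 for a, b in
      zip(matvec_mod2(C_s, s), matvec_mod2(C_n, n))]  # A x = C_s s + C_n n
assert z == Ax                                   # the system of eq. (8) is consistent
```

The final assertion is exactly the algebra in the excerpt: z = C_n(t + n) = C_n t + C_n n = C_s s + C_n n, so the decoder can recover the pair x = [s; n] by solving the sparse system Ax = z.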

694 citations
Arithmetic Coding for Data Compression
- Witten, Neal, et al.
- 1987
Citation context: "...f_s, less than 0.5. Consecutive source symbols are independent and identically distributed. Redundant sources of this type can be produced from other sources by using a variation on arithmetic coding [16, 13]; one simply reverses the roles of encoder and decoder in a standard arithmetic coder based on a model corresponding to the sparse messages [6]. Given that the source is already redundant, we are no lo..."

474 citations
Low Density Parity Check Codes
- Gallager
- 1963
Citation context: "...e, having only t 1s per column, where t may be much less than N. One might therefore hope that it is practical to solve this decoding problem. The decoding problem is of the type studied by Gallager [4]. However, the sparse parity check codes studied by Gallager are bad. The trick that makes MN codes good is the construction in terms of an invertible matrix. We now describe theoretical properties th..."

464 citations
Algebraic Coding Theory
- Berlekamp
- 1968
Citation context: "...ors in n bits, as specified in the (n, k, t) description of the code. In principle, it may be possible in some cases to make a BCH decoder that corrects more than t errors, but according to Berlekamp [2], "little is known about ... how to go about finding the solutions" and "if there are more than t + 1 errors then the situation gets very complicated very quickly." Similarly, for RM codes of minimum ..."

303 citations
Error Correcting Codes
- Peterson, Jr
- 2000
Citation context: "...he free energy minimization decoder. We found that the results were best for t = 3 and became steadily worse as t increased. In figure 5 we compare two MN codes with BCH codes, which are described in [12] as "the best known constructive codes" for memoryless noisy channels, and with Reed-Muller (RM) codes (block sizes up to 1024). Figure 5 shows the codes' probability of block error versus their rate. ..."

210 citations
An introduction to arithmetic coding
- Langdon
- 1984
Citation context: "...f_s, less than 0.5. Consecutive source symbols are independent and identically distributed. Redundant sources of this type can be produced from other sources by using a variation on arithmetic coding [16, 13]; one simply reverses the roles of encoder and decoder in a standard arithmetic coder based on a model corresponding to the sparse messages [6]. Given that the source is already redundant, we are no lo..."

90 citations
MUNIN: A causal probabilistic network for interpretation of electromyographic findings
- Andreassen, Woldbye, et al.
- 1987
Citation context: "...cycles. However, it is interesting to implement the decoding algorithm that would be appropriate if there were no cycles, on the assumption that the errors introduced might be relatively small (cf. [1]). As the size N of the code is increased, it becomes increasingly easy to produce codes in which there are no cycles of any given length, so we expect that, asymptotically, this algorithm will be an ..."

78 citations
Fast correlation attacks on certain stream ciphers
- Meier, Staffelbach
- 1989
Citation context: "...sets of bits as shown in (f). 4 Belief network decoding. We have developed a `belief net decoder' for the problem Ax = z mod 2, which generalizes the methods of Gallager [4] and Meier and Staffelbach [9] by using methods of belief propagation over networks [11]. We refer to the elements z_n corresponding to each row n = 1...N of A as checks. We think of the set of bits x and checks z as making up..."
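The belief net decoder itself is not reproduced in these excerpts. As a much simpler member of the same family, here is a Gallager-style hard-decision (bit-flipping) decoder for Ax = z; it is a stand-in chosen for illustration, not a reconstruction of the paper's probabilistic decoder, and the example matrix and names are ours:

```python
# Bit-flipping decoding in the spirit of Gallager [4]: repeatedly flip the
# bit of x that participates in the most unsatisfied checks z_n, until every
# check is satisfied or the iteration budget runs out.

def bit_flip_decode(rows, z, x, max_iters=20):
    """rows[n]: columns where row n of A is 1; z: observed checks; x: initial guess."""
    x = list(x)
    for _ in range(max_iters):
        unsatisfied = [n for n, cols in enumerate(rows)
                       if sum(x[j] for j in cols) % 2 != z[n]]
        if not unsatisfied:
            break  # every check z_n is satisfied
        votes = {}  # bit index -> number of unsatisfied checks it appears in
        for n in unsatisfied:
            for j in rows[n]:
                votes[j] = votes.get(j, 0) + 1
        x[max(votes, key=votes.get)] ^= 1  # flip the most-implicated bit
    return x

# Example: a 4-check cycle; one corrupted bit in the initial guess is repaired.
rows = [[0, 1], [1, 2], [2, 3], [3, 0]]
x_true = [1, 0, 1, 1]
z = [sum(x_true[j] for j in cols) % 2 for cols in rows]
decoded = bit_flip_decode(rows, z, [1, 1, 1, 1])
assert decoded == x_true
```

The probabilistic decoder described in the excerpt refines this idea by passing real-valued beliefs between bits and checks rather than making hard flips, which is what allows it to approach the performance reported in the paper.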

28 citations
The Theory of Information and Coding: A Mathematical Framework for Communication (Encyclopedia of Mathematics and its Applications)
- McEliece
- 1977
Citation context: "...e polynomial in the block length. Since 1948, few constructive codes that are good have been found, fewer still that are practical, and none at all that are both practical and very good [8]. Goppa's recent algebraic geometry codes (reviewed in [15]) appear to be both practical and good, but we believe that the literature has not established whether they are very good. In this paper we p..."

16 citations
Free-energy minimization algorithm for decoding and cryptanalysis. Electronics Letters 31:445-447
- MacKay
- 1995
Citation context: "...formation rates up to the Shannon limit of the binary symmetric channel [6]. In sections 3 and 4 we describe empirical results of computer experiments using first a free energy minimization algorithm [5] and second a `belief propagation' algorithm for decoding. Our experiments show that practical performance significantly superior to that of BCH and Reed-Muller codes (in terms of information rate for..."

10 citations
Convergence of a Bayesian iterative error-correction procedure on a noisy shift register sequence
- Mihaljević, Golić
- 1993
Citation context: "...ul decoding. 4.2 Relationship to Gallager's algorithm. Gallager [4] and Meier and Staffelbach [9] implemented algorithms very similar to this belief net decoder, also studied by Mihaljević and Golić [10]. The main difference in their algorithms is that they did not distinguish between the probabilities q0_nk and q1_nk for different values of n; rather, they computed q0_k and q1_k, as given above..."

5 citations
Algebraic-geometric codes and asymptotic problems
- Tsfasman
- 1991
Citation context: "...tive and practical codes that are good have been found, fewer still that are practical, and none at all that are both practical and very good [8]. Goppa's recent algebraic geometry codes (reviewed in [15]) appear to be both practical and good, but we believe that the literature has not established whether they are very good. In this paper we present a new code family that we call `MN codes'. These cod..."

1 citation
Good codes based on very sparse matrices. Available from http://131.111.48.24
- MacKay, Neal
- 1995
Citation context: "...-Muller codes over a wide range of noise levels. We regret that lack of space prevents presentation of all our theoretical and experimental results. The full text of this paper may be found elsewhere [6]. 1 Background. In 1948, Shannon [14] proved that there exist block codes, for a given memoryless channel, that achieve arbitrarily small probability of error at any communication rate R up to the ..."