Results 1 - 10 of 146
Good Error-Correcting Codes Based on Very Sparse Matrices
1999.
"... We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay--Neal) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The ..."
Cited by 750 (23 self).
Abstract:
We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay--Neal) codes were recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are "very good," in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.
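To make the sum-product decoding concrete, here is a minimal Python sketch of a belief-propagation decoder for a binary LDPC code. It is an illustrative rendering, not the paper's implementation: the dense-matrix message passing, the numerical clipping, and the name sum_product_decode are all choices made here for brevity.

```python
import numpy as np

def sum_product_decode(H, llr, max_iters=50):
    """Sum-product (belief-propagation) decoding of a binary LDPC code.

    H   : (m, n) parity-check matrix with 0/1 entries
    llr : length-n channel log-likelihood ratios (positive favours bit 0)
    """
    edges = H == 1
    msg_vc = np.where(edges, llr, 0.0)           # variable-to-check messages
    x = (llr < 0).astype(int)                    # channel-only hard decision
    for _ in range(max_iters):
        # Check-node update: extrinsic tanh-rule product over each row.
        t = np.where(edges, np.tanh(msg_vc / 2.0), 1.0)
        row_prod = np.prod(t, axis=1, keepdims=True)
        safe_t = np.where(np.abs(t) < 1e-12, 1e-12, t)   # guard division by ~0
        extrinsic = np.clip(row_prod / safe_t, -0.999999, 0.999999)
        msg_cv = np.where(edges, 2.0 * np.arctanh(extrinsic), 0.0)
        # Variable-node update and tentative hard decision.
        total = llr + msg_cv.sum(axis=0)
        x = (total < 0).astype(int)
        if not np.any((H @ x) % 2):              # every parity check satisfied
            break
        msg_vc = np.where(edges, total - msg_cv, 0.0)
    return x
```

For a binary-symmetric channel with crossover probability p, the channel LLR of a received bit y_i would be (1 - 2*y_i) * log((1 - p) / p).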
Design of capacity-approaching irregular low-density parity-check codes
IEEE Trans. Inform. Theory, 2001.
"... We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on [1]. Assuming that the unde ..."
Cited by 588 (6 self).
Abstract:
We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on [1]. Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property, we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found, which show that the performance of the codes is very close to the asymptotic theoretical bounds.
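On the binary erasure channel, the message densities tracked by this kind of analysis collapse to a single erasure probability, so the density-evolution recursion x_{l+1} = eps * lambda(1 - rho(1 - x_l)) can be run as plain arithmetic. The Python sketch below bisects for the decoding threshold of a degree-distribution pair; it is only the scalar BEC special case, far simpler than the general symmetric-channel machinery in the paper, and the function names are illustrative.

```python
def poly_eval(coeffs, x):
    """Evaluate sum(coeffs[k] * x**k); coeffs[k] is the fraction of
    edges attached to degree-(k+1) nodes (edge perspective)."""
    return sum(c * x**k for k, c in enumerate(coeffs))

def de_converges(eps, lam, rho, iters=5000, tol=1e-9):
    """Scalar density evolution on the BEC at erasure probability eps."""
    x = eps
    for _ in range(iters):
        x = eps * poly_eval(lam, 1.0 - poly_eval(rho, 1.0 - x))
        if x < tol:
            return True                  # erasure fraction driven to zero
    return False                         # stuck at a nonzero fixed point

def bec_threshold(lam, rho, steps=30):
    """Bisect for the largest eps at which decoding still succeeds."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if de_converges(mid, lam, rho) else (lo, mid)
    return lo

# Regular (3,6) ensemble: lambda(x) = x^2, rho(x) = x^5; the result
# should land near the known threshold of roughly 0.4294.
print(bec_threshold([0, 0, 1.0], [0, 0, 0, 0, 0, 1.0]))
```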
Expander Codes
IEEE Trans. Inform. Theory, 1996.
"... We present a new class of asymptotically good, linear error-correcting codes based upon expander graphs. These codes have linear time sequential decoding algorithms, logarithmic time parallel decoding algorithms with a linear number of processors, and are simple to understand. We present both random ..."
Cited by 340 (10 self).
Abstract:
We present a new class of asymptotically good, linear error-correcting codes based upon expander graphs. These codes have linear time sequential decoding algorithms, logarithmic time parallel decoding algorithms with a linear number of processors, and are simple to understand. We present both randomized and explicit constructions for some of these codes. Experimental results demonstrate the extremely good performance of the randomly chosen codes.
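As an illustration of the fast decoding, the classic "flip" decoder for expander codes repeatedly flips every bit that sits in more unsatisfied than satisfied checks. The Python sketch below uses a dense matrix and the parallel-flip variant purely for readability; the provable guarantees in the paper rely on the underlying graph being a sufficiently good expander, which this sketch does not enforce.

```python
import numpy as np

def flip_decode(H, y, max_rounds=100):
    """'Flip' decoding sketch for an expander code.

    H : (m, n) 0/1 parity-check matrix
    y : received 0/1 word of length n
    """
    x = y.copy()
    deg = H.sum(axis=0)                     # number of checks touching each bit
    for _ in range(max_rounds):
        syndrome = (H @ x) % 2              # 1 marks an unsatisfied check
        if not syndrome.any():
            break                           # reached a valid codeword
        unsat = H.T @ syndrome              # unsatisfied checks per bit
        flips = unsat > deg - unsat         # more unsatisfied than satisfied
        if not flips.any():
            break                           # no bit qualifies; give up
        x = (x + flips.astype(int)) % 2
    return x
```

A sequential variant flips one qualifying bit at a time; both styles of flipping rule are analyzed via the expansion of the graph.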
Practical Loss-Resilient Codes
1997.
"... We present a randomized construction of linear-time encodable and decodable codes that can transmit over lossy channels at rates extremely close to capacity. The encoding and decoding algorithms for these codes have fast and simple software implementations. Partial implementations of our algorithms ..."
Cited by 284 (25 self).
Abstract:
We present a randomized construction of linear-time encodable and decodable codes that can transmit over lossy channels at rates extremely close to capacity. The encoding and decoding algorithms for these codes have fast and simple software implementations. Partial implementations of our algorithms are faster by orders of magnitude than the best software implementations of any previous algorithm for this problem. We expect these codes will be extremely useful for applications such as real-time audio and video transmission over the Internet, where lossy channels are common and fast decoding is a requirement. Despite the simplicity of the algorithms, their design and analysis are mathematically intricate. The design requires the careful choice of a random irregular bipartite graph, where the structure of the irregular graph is extremely important. We model the progress of the decoding algorithm by a set of differential equations. The solution to these equations can then be expressed as p...
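Operationally, the decoding whose progress those differential equations track is a peeling process on the bipartite graph: any parity check with exactly one erased symbol determines that symbol, which may in turn release other checks. Below is a minimal Python sketch, assuming XOR-based checks given as index lists; the repeated full rescan is for clarity, where a linear-time implementation would instead keep a queue of degree-one checks.

```python
def peel_decode(checks, received):
    """Peeling decoder sketch for a sparse erasure code.

    checks   : list of index lists; the symbols in each list XOR to 0
    received : list of symbol values, with None marking an erasure
    """
    symbols = list(received)
    progress = True
    while progress:
        progress = False
        for check in checks:
            erased = [i for i in check if symbols[i] is None]
            if len(erased) == 1:
                # The missing symbol is the XOR of the known ones.
                value = 0
                for i in check:
                    if i != erased[0]:
                        value ^= symbols[i]
                symbols[erased[0]] = value
                progress = True
    return symbols                   # any remaining None is unrecoverable
```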
Regular and Irregular Progressive Edge-Growth Tanner Graphs
IEEE Trans. Inform. Theory, 2003.
"... We propose a general method for constructing Tanner graphs having a large girth by progressively establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called progressive edge-growth (PEG) construction. Lower bounds on the girth of PEG Tanner graphs and on the mi ..."
Cited by 193 (0 self).
Abstract:
We propose a general method for constructing Tanner graphs with large girth by progressively establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called progressive edge-growth (PEG) construction. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. The PEG construction attains essentially the same girth as Gallager's explicit construction for regular graphs, both of which meet or exceed the Erdős--Sachs bound. Asymptotic analysis of a relaxed version of the PEG construction is presented. We describe an empirical approach using a variant of the "downhill simplex" search algorithm to design irregular PEG graphs for short codes with fewer than a thousand bits, complementing the "density evolution" design approach for larger codes. Encoding of LDPC codes based on the PEG construction is also investigated. We show how to exploit the PEG principle to obtain LDPC codes that allow linear-time encoding. We also investigate regular and irregular LDPC codes using PEG Tanner graphs but allowing the symbol nodes to take values over GF(q), q > 2. Analysis and simulation demonstrate that one can obtain better performance with increasing field size, which contrasts with previous observations.
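A condensed Python sketch of the PEG idea follows: each new edge from a symbol node is placed on a check node the symbol cannot yet reach, and once every check is reachable, on one at maximal BFS distance, breaking ties by lowest check degree. The data structures and the tie-breaking rule are illustrative simplifications of the algorithm in the paper.

```python
def peg_construct(n_sym, n_chk, sym_degrees):
    """Progressive edge-growth (PEG) Tanner-graph construction sketch.

    sym_degrees[v] is the target degree of symbol node v; returns, for
    each symbol node, the set of check nodes it is connected to.
    """
    sym_nbrs = [set() for _ in range(n_sym)]
    chk_nbrs = [set() for _ in range(n_chk)]

    def check_depths(v):
        """BFS from symbol v; map each reachable check to its depth."""
        depth, seen_s, frontier = {}, {v}, {v}
        d = 0
        while frontier:
            new_checks = set()
            for s in frontier:
                for c in sym_nbrs[s]:
                    if c not in depth:
                        depth[c] = d
                        new_checks.add(c)
            frontier = set()
            for c in new_checks:
                frontier |= chk_nbrs[c] - seen_s
            seen_s |= frontier
            d += 1
        return depth

    for v in range(n_sym):
        for _ in range(sym_degrees[v]):
            depth = check_depths(v)
            pool = [c for c in range(n_chk) if c not in depth]
            if not pool:                      # all checks reachable:
                far = max(depth.values())     # take the most distant ones
                pool = [c for c, d in depth.items() if d == far]
            best = min(pool, key=lambda c: len(chk_nbrs[c]))
            sym_nbrs[v].add(best)
            chk_nbrs[best].add(v)
    return sym_nbrs
```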
Efficient Encoding of Low-Density Parity-Check Codes
2001.
"... Low-density parity-check (LDPC) codes can be considered serious competitors to turbo codes in terms of performance and complexity and they are based on a similar philosophy: constrained random code ensembles and iterative decoding algorithms. In this paper, we consider the encoding problem for LDPC ..."
Cited by 183 (3 self).
Abstract:
Low-density parity-check (LDPC) codes can be considered serious competitors to turbo codes in terms of performance and complexity, and they are based on a similar philosophy: constrained random code ensembles and iterative decoding algorithms. In this paper, we consider the encoding problem for LDPC codes. More generally, we consider the encoding problem for codes specified by sparse parity-check matrices. We show how to exploit the sparseness of the parity-check matrix to obtain efficient encoders. For the (3, 6)-regular LDPC code, for example, the complexity of encoding is essentially quadratic in the block length. However, we show that the associated coefficient can be made quite small, so that encoding codes even of length 100 000 is still quite practical. More importantly, we will show that "optimized" codes actually admit linear-time encoding.
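To make the efficiency discussion concrete, here is a Python sketch of the linear-time special case in which the parity portion of the parity-check matrix is lower triangular with a unit diagonal, so each parity bit follows by back-substitution; the paper's general method first brings an arbitrary sparse matrix into an approximately triangular form. The split H = [Hs | Hp] and the function name are assumptions made for the illustration.

```python
import numpy as np

def encode_lower_triangular(Hs, Hp, s):
    """Encode systematic bits s for a code with H = [Hs | Hp] over GF(2),
    where Hp is lower triangular with ones on the diagonal.

    From Hs @ s + Hp @ p = 0 (mod 2), each parity bit p[i] is fixed by
    check i once p[0..i-1] are known -- one pass over the nonzeros of H.
    """
    m = Hp.shape[0]
    t = (Hs @ s) % 2                  # contribution of the systematic bits
    p = np.zeros(m, dtype=int)
    for i in range(m):
        p[i] = (t[i] + Hp[i, :i] @ p[:i]) % 2
    return np.concatenate([s, p])     # codeword = [systematic | parity]
```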
Committed Oblivious Transfer and Private Multi-Party Computation
1995.
"... . In this paper we present an efficient protocol for "Committed Oblivious Transfer" to perform oblivious transfer on committed bits: suppose Alice is committed to bits a0 and a1 and Bob is committed to b, they both want Bob to learn and commit to a b without Alice learning b nor Bob lear ..."
Cited by 67 (11 self).
Abstract:
In this paper we present an efficient protocol for "Committed Oblivious Transfer" to perform oblivious transfer on committed bits: suppose Alice is committed to bits a_0 and a_1 and Bob is committed to b; they both want Bob to learn and commit to a_b without Alice learning b or Bob learning a_(1-b). Our protocol, based on the properties of error-correcting codes, uses Bit Commitment (bc) and one-out-of-two Oblivious Transfer (ot) as black boxes. Consequently, the protocol may be implemented with or without a computational assumption, depending on the kind of bc and ot used by the participants. Assuming a Broadcast Channel is also available, we exploit this result to obtain a protocol for Private Multi-Party Computation, without making assumptions about a specific number or fraction of participants being honest. We analyze the protocol's efficiency in terms of bcs and ots performed. Our approach connects Zero Knowledge proofs on bcs, Oblivious Circuit Evaluation and Private Multi-Party ...
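For orientation only, the functionality the protocol realizes can be written down as an idealized trusted computation. The Python sketch below shows what committed OT computes, with a placeholder Commitment class; it deliberately contains none of the actual cryptography, which the paper builds from black-box bc, ot and error-correcting codes.

```python
from dataclasses import dataclass

@dataclass
class Commitment:
    """Placeholder commitment: a real scheme would hide and bind `bit`
    with randomness; here it only marks which values are committed."""
    bit: int

def committed_ot(a0: Commitment, a1: Commitment, b: Commitment) -> Commitment:
    """Ideal functionality of committed oblivious transfer: Bob ends up
    committed to a_b, learning nothing about a_(1-b), while Alice learns
    nothing about b. A trusted party could compute this directly; the
    protocol in the paper emulates it without one."""
    return Commitment((a0.bit, a1.bit)[b.bit])

# Example: Alice is committed to a_0 = 1, a_1 = 0; Bob to b = 1.
print(committed_ot(Commitment(1), Commitment(0), Commitment(1)).bit)  # -> 0
```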