Results 1 - 10 of 751
Design of capacity-approaching irregular low-density parity-check codes
- IEEE Trans. Inform. Theory, 2001
"... We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on [1]. Assuming that the unde ..."
Cited by 588 (6 self)
We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on [1]. Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found which show that the performance of the codes is very close to the asymptotic theoretical bounds.
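The stability condition referred to above has a compact standard form; the following statement is a sketch in common notation (lambda and rho denote the edge-perspective degree distributions and a the channel L-density; these symbols are assumptions here, not quoted from the paper):

    \lambda'(0)\,\rho'(1) < \frac{1}{B}, \qquad B = \int a(x)\, e^{-x/2}\, dx

For the erasure channel BEC(eps) this reduces to \lambda'(0)\rho'(1) < 1/\epsilon, and for the binary-input AWGN channel with noise standard deviation sigma, B = e^{-1/(2\sigma^2)}.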
The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding
- 2001
"... In this paper, we present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chos ..."
Cited by 574 (9 self)
In this paper, we present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. [1] in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.
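As a rough illustration of the recursion behind such capacity (threshold) computations, here is a minimal sketch specialized to the binary erasure channel, where density evolution collapses to one dimension; the (3,6)-regular ensemble, iteration cap, and tolerance are illustrative choices, not values from the paper:

    # Density evolution for a (dv, dc)-regular LDPC ensemble on the BEC:
    # the message "density" is a single erasure probability x, updated as
    # x <- eps * (1 - (1 - x)**(dc - 1))**(dv - 1).
    def converges(eps, dv=3, dc=6, iters=2000, tol=1e-12):
        """True if density evolution drives the erasure rate to ~0 at eps."""
        x = eps
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if x < tol:
                return True
        return False

    def threshold(dv=3, dc=6):
        """Bisect for the largest channel erasure rate that still converges."""
        lo, hi = 0.0, 1.0
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if converges(mid, dv, dc) else (lo, mid)
        return lo

    print(threshold())  # ~0.429 for the (3,6)-regular ensemble

Below this threshold a randomly chosen code from the ensemble corrects almost all erasures; above it the residual erasure rate stays bounded away from zero, mirroring the paper's converse.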
Near Shannon limit performance of low density parity check codes
- Electronics Letters, 1996
"... ..."
(Show Context)
Turbo decoding as an instance of Pearl’s belief propagation algorithm
- IEEE Journal on Selected Areas in Communications, 1998
"... Abstract—In this paper, we will describe the close connection between the now celebrated iterative turbo decoding algorithm of Berrou et al. and an algorithm that has been well known in the artificial intelligence community for a decade, but which is relatively unknown to information theorists: Pear ..."
Cited by 404 (16 self)
In this paper, we will describe the close connection between the now celebrated iterative turbo decoding algorithm of Berrou et al. and an algorithm that has been well known in the artificial intelligence community for a decade, but which is relatively unknown to information theorists: Pearl’s belief propagation algorithm. We shall see that if Pearl’s algorithm is applied to the “belief network” of a parallel concatenation of two or more codes, the turbo decoding algorithm immediately results. Unfortunately, however, this belief diagram has loops, and Pearl only proved that his algorithm works when there are no loops, so an explanation of the excellent experimental performance of turbo decoding is still lacking. However, we shall also show that Pearl’s algorithm can be used to routinely derive previously known iterative, but suboptimal, decoding algorithms for a number of other error-control systems, including Gallager’s low-density parity-check codes.
On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit
- IEEE Communications Letters, 2001
"... We develop improved algorithms to construct good low-density parity-check codes that approach the Shannon limit very closely. For rate 1/2, the best code found has a threshold within 0.0045 dB of the Shannon limit of the binary-input additive white Gaussian noise channel. Simulation results with a ..."
Cited by 306 (6 self)
We develop improved algorithms to construct good low-density parity-check codes that approach the Shannon limit very closely. For rate 1/2, the best code found has a threshold within 0.0045 dB of the Shannon limit of the binary-input additive white Gaussian noise channel. Simulation results with a somewhat simpler code show that we can achieve within 0.04 dB of the Shannon limit at a bit-error rate of 10^-6 using a block length of 10^7.
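The Shannon limit quoted above is that of the binary-input AWGN channel; as a sketch, one can evaluate this capacity numerically with Gauss-Hermite quadrature and bisect for the rate-1/2 limit (the quadrature order and search bracket below are illustrative choices):

    import numpy as np

    def biawgn_capacity(sigma, order=64):
        """Capacity (bits/use) of BPSK over AWGN with noise std sigma:
        C = 1 - E_{y ~ N(1, sigma^2)}[log2(1 + exp(-2 y / sigma^2))]."""
        t, w = np.polynomial.hermite.hermgauss(order)  # nodes for int e^{-t^2} f(t) dt
        y = 1.0 + np.sqrt(2.0) * sigma * t             # change of variables
        return 1.0 - w @ np.log2(1.0 + np.exp(-2.0 * y / sigma**2)) / np.sqrt(np.pi)

    # Bisect for the sigma where C = 1/2, then Eb/N0 = 1/(2 R sigma^2) with R = 1/2.
    lo, hi = 0.5, 1.5
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if biawgn_capacity(mid) > 0.5 else (lo, mid)
    print(10.0 * np.log10(1.0 / lo**2))  # ~0.187 dB, the rate-1/2 Shannon limit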
Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation
- IEEE Trans. Inform. Theory, 2001
"... Density evolution is an algorithm for computing the capacity of low-density parity-check (LDPC) codes under messagepassing decoding. For memoryless binary-input continuous-output additive white Gaussian noise (AWGN) channels and sum-product decoders, we use a Gaussian approximation for message densi ..."
Cited by 244 (2 self)
Density evolution is an algorithm for computing the capacity of low-density parity-check (LDPC) codes under message-passing decoding. For memoryless binary-input continuous-output additive white Gaussian noise (AWGN) channels and sum-product decoders, we use a Gaussian approximation for message densities under density evolution to simplify the analysis of the decoding algorithm. We convert the infinite-dimensional problem of iteratively calculating message densities, which is needed to find the exact threshold, to a one-dimensional problem of updating means of Gaussian densities. This simplification not only allows us to calculate the threshold quickly and to understand the behavior of the decoder better, but also makes it easier to design good irregular LDPC codes for AWGN channels. For various regular LDPC codes we have examined, thresholds can be estimated within 0.1 dB of the exact value. For rates between 0.5 and 0.9, codes designed using the Gaussian approximation perform within 0.02 dB of the best performing codes found so far by using density evolution when the maximum variable degree is 10. We show that by using the Gaussian approximation, we can visualize the sum-product decoding algorithm. We also show that the optimization of degree distributions can be understood and done graphically using the visualization.
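In the (d_v, d_c)-regular case the one-dimensional mean update takes a compact form; the following is a sketch in commonly used notation (the symbols below are assumptions, not quotes from the paper):

    m_u^{(0)} = 0, \qquad
    m_u^{(l)} = \phi^{-1}\!\left(1 - \left[1 - \phi\!\left(\tfrac{2}{\sigma^2} + (d_v - 1)\, m_u^{(l-1)}\right)\right]^{d_c - 1}\right),

    \phi(x) = 1 - \frac{1}{\sqrt{4\pi x}} \int_{-\infty}^{\infty} \tanh\frac{u}{2}\; e^{-(u - x)^2/(4x)}\, du \;\;(x > 0), \qquad \phi(0) = 1,

where 2/\sigma^2 is the mean of the channel log-likelihood ratios; the decoding threshold is the largest sigma for which m_u^{(l)} grows without bound.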
On the Optimality of Solutions of the Max-Product Belief Propagation Algorithm in Arbitrary Graphs
- 2001
"... Graphical models, suchasBayesian networks and Markov random fields, represent statistical dependencies of variables by a graph. The max-product "belief propagation" algorithm is a local-message passing algorithm on this graph that is known to converge to a unique fixed point when the gra ..."
Cited by 241 (13 self)
Graphical models, such as Bayesian networks and Markov random fields, represent statistical dependencies of variables by a graph. The max-product "belief propagation" algorithm is a local message-passing algorithm on this graph that is known to converge to a unique fixed point when the graph is a tree. Furthermore, when the graph is a tree, the assignment based on the fixed point yields the most probable a posteriori (MAP) values of the unobserved variables given the observed ones. Recently, good empirical performance has been obtained by running the max-product algorithm on graphs with loops.
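As a toy illustration of the tree case described above, here is a minimal max-product pass on a three-node chain, checked against brute force; the pairwise potentials are made-up values for illustration:

    import itertools
    import numpy as np

    # Chain x0 - x1 - x2 of binary variables; joint proportional to
    # psi01[x0, x1] * psi12[x1, x2]. On a tree, max-product is exact.
    psi01 = np.array([[1.0, 0.5], [0.5, 2.0]])  # illustrative potential on (x0, x1)
    psi12 = np.array([[1.5, 0.2], [0.7, 1.0]])  # illustrative potential on (x1, x2)

    # Leaves send max-marginalized messages toward the root x1.
    m0 = psi01.max(axis=0)              # m0[x1] = max over x0 of psi01[x0, x1]
    m2 = psi12.max(axis=1)              # m2[x1] = max over x2 of psi12[x1, x2]
    x1 = int(np.argmax(m0 * m2))        # decode the root from incoming messages
    x0 = int(np.argmax(psi01[:, x1]))   # backtrack to each leaf
    x2 = int(np.argmax(psi12[x1, :]))

    # Brute-force MAP over all 8 assignments for comparison.
    brute = max(itertools.product([0, 1], repeat=3),
                key=lambda a: psi01[a[0], a[1]] * psi12[a[1], a[2]])
    print((x0, x1, x2), brute)          # the two assignments agree on a tree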
Correctness of Local Probability Propagation in Graphical Models with Loops
- 2000
"... This article analyzes the behavior of local propagation rules in graphical models with a loop. ..."
Cited by 231 (8 self)
This article analyzes the behavior of local propagation rules in graphical models with a loop.
Distributed source coding for sensor networks
- IEEE Signal Processing Magazine, 2004
"... n recent years, sensor research has been undergoing a quiet revolution, promising to have a significant impact throughout society that could quite possibly dwarf pre-vious milestones in the information revolution. MIT Technology Review ranked wireless sensor networks that con-sist of many tiny, low- ..."
Cited by 224 (4 self)
In recent years, sensor research has been undergoing a quiet revolution, promising to have a significant impact throughout society that could quite possibly dwarf previous milestones in the information revolution. MIT Technology Review ranked wireless sensor networks that consist of many tiny, low-power and cheap wireless sensors as the number one emerging technology. Unlike PCs or the Internet, which are designed to support all types of applications, sensor networks are usually mission driven and application specific (be it detection of biological agents and toxic chemicals; environmental measurement of temperature, pressure and vibration; or real-time area video surveillance). Thus they must operate under a set of unique constraints and requirements. For example, in contrast to many other wireless devices (e.g., cellular phones, PDAs, and laptops), in which energy can be recharged from time to time, the energy provisioned for a wireless sensor node is not expected to be renewed throughout its mission. The limited amount of energy available to wireless sensors has a significant impact on all aspects of a wireless sensor network, from the amount of information that the node can process, to the volume of wireless communication it can carry across large distances. Realizing the great promise of sensor networks requires more than a mere advance in individual technologies; it relies on many components working together in an efficient, unattended, comprehensible, and trustworthy manner. One of the enabling technologies for sensor networks is distributed source coding (DSC), which refers to the compression of multiple correlated sensor outputs [1]–[4] that do not communicate with each other (hence distributed coding). These sensors send their compressed outputs to a central point [e.g., the base station (BS)] for joint decoding.
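The syndrome ("binning") idea behind DSC can be shown in a few lines; the sketch below uses the (7,4) Hamming code and assumes the decoder's side information differs from the sensor's reading in at most one bit, both of which are illustrative choices, not details from this article:

    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code (column j is j in binary).
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])

    def encode(x):
        """The sensor transmits only the 3-bit syndrome of its 7-bit reading."""
        return H @ x % 2

    def decode(s, y):
        """Base station: y differs from x in at most one bit, so the syndrome
        of x XOR y pinpoints the flipped position (standard Hamming decoding)."""
        d = (s + H @ y) % 2                  # syndrome of the difference x ^ y
        pos = 4 * d[0] + 2 * d[1] + d[2]     # 1-based column index; 0 = no flip
        e = np.zeros(7, dtype=int)
        if pos:
            e[pos - 1] = 1
        return (y + e) % 2

    x = np.array([1, 0, 1, 1, 0, 0, 1])      # sensor A's reading
    y = x.copy(); y[4] ^= 1                  # correlated side information at the BS
    assert np.array_equal(decode(encode(x), y), x)   # 7 bits sent as 3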
Improved low-density parity-check codes using irregular graphs
- IEEE Trans. Inform. Theory, 2001
"... Abstract—We construct new families of error-correcting codes based on Gallager’s low-density parity-check codes. We improve on Gallager’s results by introducing irregular parity-check matrices and a new rigorous analysis of hard-decision decoding of these codes. We also provide efficient methods for ..."
Cited by 223 (15 self)
We construct new families of error-correcting codes based on Gallager’s low-density parity-check codes. We improve on Gallager’s results by introducing irregular parity-check matrices and a new rigorous analysis of hard-decision decoding of these codes. We also provide efficient methods for finding good irregular structures for such decoding algorithms. Our rigorous analysis based on martingales, our methodology for constructing good irregular codes, and the demonstration that irregular structure improves performance constitute key points of our contribution. We also consider irregular codes under belief propagation. We report the results of experiments testing the efficacy of irregular codes on both binary-symmetric and Gaussian channels. For example, using belief propagation, for rate 1/4 codes on 16 000 bits over a binary-symmetric channel, previous low-density parity-check codes can correct up to approximately 16% errors, while our codes correct over 17%. In some cases our results come very close to reported results for turbo codes, suggesting that variations of irregular low-density parity-check codes may be able to match or beat turbo code performance. Index Terms—Belief propagation, concentration theorem, Gallager codes, irregular codes, low-density parity-check codes.
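As a sketch of the hard-decision analysis in the regular case, the recursion below is the standard Gallager algorithm A density evolution on the binary-symmetric channel; the (3,6) ensemble and bisection are illustrative, and the paper's irregular analysis averages this kind of update over the degree distributions:

    # Gallager algorithm A density evolution for a (dv, dc)-regular ensemble
    # on the BSC; p is the message error rate and p0 the channel crossover.
    def ga_update(p, p0, dv, dc):
        q = (1.0 - 2.0 * p) ** (dc - 1)      # bias of a check-to-variable message
        return (p0 * (1.0 - ((1.0 + q) / 2.0) ** (dv - 1))
                + (1.0 - p0) * ((1.0 - q) / 2.0) ** (dv - 1))

    def converges(p0, dv=3, dc=6, iters=500, tol=1e-10):
        p = p0
        for _ in range(iters):
            p = ga_update(p, p0, dv, dc)
            if p < tol:
                return True
        return False

    lo, hi = 0.0, 0.5
    for _ in range(50):                      # bisect for the BSC threshold
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    print(lo)                                # ~0.039 for the (3,6) ensemble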