Multilevel decoders surpassing belief propagation on the binary symmetric channel
In Proc. Int. Symp. on Inform. Theory (ISIT), 2010.
"... Abstract—In this paper, we propose a new class of quantized message-passing decoders for LDPC codes over the BSC. The messages take values (or levels) from a finite set. The update rules do not mimic belief propagation but instead are derived using the knowledge of trapping sets. We show that the up ..."
Abstract
-
Cited by 14 (9 self)
- Add to MetaCart
(Show Context)
Abstract—In this paper, we propose a new class of quantized message-passing decoders for LDPC codes over the BSC. The messages take values (or levels) from a finite set. The update rules do not mimic belief propagation but instead are derived using the knowledge of trapping sets. We show that the update rules can be derived to correct certain error patterns that are uncorrectable by algorithms such as BP and min-sum. In some cases, even with a small message set, these decoders can guarantee correction of a higher number of errors than BP and min-sum. We provide particularly good 3-bit decoders for 3-left-regular LDPC codes. They significantly outperform the BP and min-sum decoders, and more importantly, they achieve this at only a fraction of the complexity of the BP and min-sum decoders.
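To make the flavor of such a decoder concrete, here is a minimal sketch of a 7-level (3-bit) variable-node update. The alphabet, channel weight, and thresholds are hypothetical stand-ins; the paper derives its update rules from trapping-set analysis, not from a threshold rule like this one.

```python
# Illustrative 7-level variable-node update for a quantized decoder on the
# BSC. All numeric choices below are assumptions for illustration only.
LEVELS = (-3, -2, -1, 0, 1, 2, 3)

def vn_update(channel_value, incoming, weight=1.5, thresholds=(0.5, 1.5, 2.5)):
    """Map the BSC observation (+1 or -1) and the extrinsic check messages
    (levels in LEVELS) to an outgoing message level."""
    s = weight * channel_value + sum(incoming)
    sign = 1 if s >= 0 else -1
    t1, t2, t3 = thresholds
    mag = abs(s)
    if mag < t1:
        return 0
    if mag < t2:
        return sign
    if mag < t3:
        return 2 * sign
    return 3 * sign
```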
On absorbing sets of structured sparse graph codes
Presented at the Inf. Theory Appl. Workshop, 2010.
"... Abstract—In contrast to the capacity approaching performance of iteratively decoded low-density parity check (LDPC) codes, many practical finite-length LDPC codes exhibit performance degradation, manifested in a so-called error floor. Previous work has linked this phenomenon to the presence of certa ..."
Abstract
-
Cited by 4 (3 self)
- Add to MetaCart
(Show Context)
Abstract—In contrast to the capacity-approaching performance of iteratively decoded low-density parity-check (LDPC) codes, many practical finite-length LDPC codes exhibit performance degradation, manifested in a so-called error floor. Previous work has linked this phenomenon to the presence of certain combinatorial structures within the Tanner graph representation of the code, termed absorbing sets. Absorbing sets are stable under bit-flipping operations and have been shown to act as fixed points (“absorbers”) for a wider class of iterative decoding algorithms. Codes often possess absorbing sets whose size is smaller than the minimum distance: the smallest absorbing sets are deemed the most detrimental culprits behind the error floor. This paper focuses on elementary combinatorial bounds on the smallest (candidate) absorbing sets. For certain classes of practical codes, we demonstrate the tightness of these bounds and show how the structure of the code and the structure of the absorbing sets can be utilized to increase the size of the smallest absorbing sets without compromising other code properties such as the node degrees and the girth. As such, this work provides a step towards better code design that takes into account the combinatorial nature of fixed points of iterative decoding algorithms.
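For concreteness, a small checker for the combinatorial condition behind absorbing sets, following the usual definition (every variable node in the set has strictly more even-degree than odd-degree check neighbors with respect to the set); this is an illustration, not code from the paper.

```python
# Sketch of the standard (a, b) absorbing-set test on a Tanner graph.
# H is a 0/1 parity-check matrix given as a list of rows.

def absorbing_set_params(H, var_set):
    """Return (a, b) if var_set is an absorbing set, else None."""
    var_set = set(var_set)
    # Degree of each check node with respect to the candidate set.
    deg = [sum(row[v] for v in var_set) for row in H]
    odd = {c for c, d in enumerate(deg) if d % 2 == 1}
    for v in var_set:
        nbrs = [c for c, row in enumerate(H) if row[v] == 1]
        n_odd = sum(1 for c in nbrs if c in odd)
        if len(nbrs) - n_odd <= n_odd:   # not strictly more even neighbors
            return None
    return len(var_set), len(odd)        # the (a, b) parameters

# Example: absorbing_set_params(H, {0, 3, 7}) tests a size-3 candidate.
```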
Implementation of low density parity check decoders using a new high level design methodology
Journal of Computers, 2010.
"... Abstract—Low density parity check (LDPC) codes are error-correcting codes that offer huge advantages in terms of coding gain, throughput and power dissipation. Error correction algorithms are often implemented in hardware for fast processing to meet the real-time needs of communication systems. Howe ..."
Abstract
-
Cited by 3 (2 self)
- Add to MetaCart
Abstract—Low-density parity-check (LDPC) codes are error-correcting codes that offer huge advantages in terms of coding gain, throughput and power dissipation. Error-correction algorithms are often implemented in hardware for fast processing to meet the real-time needs of communication systems. However, hardware implementation of LDPC decoders using the traditional hardware description language (HDL) based approach is a complex and time-consuming task. This paper presents an efficient high-level approach to designing LDPC decoders using a collection of high-level modelling tools. The proposed methodology supports programmable logic design from high-level modelling all the way to FPGA implementation. The methodology has been used to design and implement representative LDPC decoders. A comprehensive testing strategy has been developed to test the designed decoders at various levels. The simulation and implementation results presented in this paper demonstrate the validity and productivity of the new high-level design approach. Index Terms—Error correction coding, digital systems, digital communication, logic design, FPGA.
Enhanced Precision Through Multiple Reads for LDPC Decoding in Flash Memories
"... Abstract—Multiple reads of the same Flash memory cell with distinct word-line voltages provide enhanced precision for LDPC decoding. In this paper, the word-line voltages are optimized by maximizing the mutual information (MI) of the quantized channel. The enhanced precision from a few additional re ..."
Abstract
-
Cited by 3 (3 self)
- Add to MetaCart
(Show Context)
Abstract—Multiple reads of the same Flash memory cell with distinct word-line voltages provide enhanced precision for LDPC decoding. In this paper, the word-line voltages are optimized by maximizing the mutual information (MI) of the quantized channel. The enhanced precision from a few additional reads allows frame-error-rate (FER) performance to approach that of full-precision soft information and enables an LDPC code to significantly outperform a BCH code. A constant-ratio constraint provides a significant simplification of the optimization with no noticeable loss in performance. For a well-designed LDPC code, the quantization that maximizes the mutual information also minimizes the frame error rate in our simulations. However, for an example LDPC code with a high error floor caused by small absorbing sets, the maximum-MI (MMI) quantization does not provide the lowest frame error rate. The best quantization in this case introduces more erasures than would be optimal for the channel MI in order to mitigate the absorbing sets of the poorly designed code. The paper also identifies a trade-off in LDPC code design when decoding is performed with multiple precision levels: the best code at one level of precision will typically not be the best code at a different level of precision.
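A sketch of the kind of objective being maximized, assuming a simple model in which each cell's read value is Gaussian around one of two nominal levels and the word-line thresholds partition the voltage axis into bins; the means and noise level are made-up parameters, not the paper's channel model.

```python
import math

def gauss_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def mutual_information(thresholds, mus=(-1.0, 1.0), sigma=0.5):
    """MI (bits) between a uniform binary input and the quantized read,
    with word-line thresholds partitioning the real line into bins.
    The two-level Gaussian cell model is an illustrative assumption."""
    edges = [-math.inf] + sorted(thresholds) + [math.inf]
    # P(bin | input level) for each of the two nominal levels.
    p = [[gauss_cdf(edges[i + 1], mu, sigma) - gauss_cdf(edges[i], mu, sigma)
          for i in range(len(edges) - 1)] for mu in mus]
    mi = 0.0
    for j in range(len(edges) - 1):
        pj = 0.5 * (p[0][j] + p[1][j])           # output marginal
        for x in range(2):
            if p[x][j] > 0:
                mi += 0.5 * p[x][j] * math.log2(p[x][j] / pj)
    return mi

# The MMI criterion maximizes this over the threshold vector, e.g. by a
# grid search: mutual_information([-0.3, 0.0, 0.3])
```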
A Fast-Convergence Decoding Method and Memory-Efficient VLSI Decoder Architecture for Irregular LDPC Codes in the IEEE 802.16e Standards
IEEE 66th Vehicular Technology Conference, 2007.
"... Abstract — In this paper, we propose a modified iterative decoding algorithm to decode a special class of quasi-cyclic low-density parity-check (QC-LDPC) codes such as QC-LDPC codes used in the IEEE 802.16e standards. The proposed decoding is implemented by serially decoding block codes with identic ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Abstract—In this paper, we propose a modified iterative decoding algorithm for a special class of quasi-cyclic low-density parity-check (QC-LDPC) codes, such as the QC-LDPC codes used in the IEEE 802.16e standards. The proposed decoder serially decodes block codes with an identical parity-check matrix H_l derived from the parity-check matrix H of the QC-LDPC code. The dimensions of H_l are much smaller than those of H. Extrinsic values can be passed among these block codes because their code bits overlap. Hence, the proposed decoding reduces the number of iterations required by up to forty percent, with no loss in error performance compared to the conventional message-passing decoding algorithm. A partially parallel very-large-scale integration (VLSI) architecture is proposed to implement the algorithm. The proposed VLSI decoder fully exploits the proposed decoding to increase its throughput. In addition, it only needs to store check-to-variable messages and is therefore memory efficient.
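A sketch of the serial, sub-block schedule this abstract describes, written as the usual row-layered min-sum pattern: layers are processed one after another, and only the check-to-variable messages plus running posterior LLRs are stored, which is where the memory saving comes from. The layer structure and min-sum details here are generic assumptions, not the paper's specific derivation of H_l.

```python
import numpy as np

def checks_satisfied(layers, post):
    """Parity check on the current hard decisions (1 = decided as bit 1)."""
    hard = (np.asarray(post) < 0).astype(int)
    return all(sum(hard[v] for v in row) % 2 == 0
               for layer in layers for row in layer)

def layered_min_sum(layers, llr_in, max_iters=10):
    """Row-layered min-sum sketch. `layers` is a list of sub-matrices,
    each a list of rows, where a row is the list of variable indices it
    checks (degree >= 2). `llr_in` holds the channel LLRs."""
    post = np.array(llr_in, dtype=float)        # running posterior LLRs
    c2v = {}                                    # check-to-variable messages
    for _ in range(max_iters):
        for li, layer in enumerate(layers):
            for ri, row in enumerate(layer):
                old = c2v.get((li, ri), np.zeros(len(row)))
                # Variable-to-check: posterior minus the stale check message.
                v2c = post[row] - old
                signs = np.where(v2c < 0, -1.0, 1.0)
                mags = np.abs(v2c)
                order = np.argsort(mags)
                m1, m2 = mags[order[0]], mags[order[1]]
                # Min-sum: each edge gets the smallest magnitude among the
                # *other* edges, with the extrinsic sign.
                new = np.prod(signs) * signs * np.where(
                    np.arange(len(row)) == order[0], m2, m1)
                c2v[(li, ri)] = new
                post[row] = v2c + new           # refresh posteriors
        if checks_satisfied(layers, post):
            break
    return post
```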
LDPC decoding with limited-precision soft information in flash memories
CoRR.
"... This paper investigates the application of low-density parity-check (LDPC) codes to Flash memories. Multiple cell reads with distinct word-line voltages provide limited-precision soft information for the LDPC decoder. The values of the word-line voltages (also called reference voltages) are optimize ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
This paper investigates the application of low-density parity-check (LDPC) codes to Flash memories. Multiple cell reads with distinct word-line voltages provide limited-precision soft information for the LDPC decoder. The values of the word-line voltages (also called reference voltages) are optimized by maximizing the mutual information (MI) between the input and output of the multiple-read channel. Constraining the maximum-mutual-information (MMI) quantization to enforce a constant-ratio constraint provides a significant simplification with no noticeable loss in performance. Our simulation results suggest that for a well-designed LDPC code, the quantization that maximizes the mutual information will also minimize the frame error rate. However, care must be taken to design the code to perform well on the quantized channel. An LDPC code designed for a full-precision Gaussian channel may perform poorly in the quantized setting. Our LDPC code designs provide an example where …
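To connect optimized read thresholds to the decoder, each quantization bin must be converted into an LLR. A minimal sketch under the same illustrative two-level Gaussian cell model as the earlier sketch; the model and parameters are assumptions, not the paper's.

```python
import math

def gauss_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def bin_llrs(thresholds, mus=(-1.0, 1.0), sigma=0.5):
    """Limited-precision soft information: one LLR per read bin.
    Assumes a binary cell with Gaussian read noise (illustrative model)."""
    edges = [-math.inf] + sorted(thresholds) + [math.inf]
    llrs = []
    for i in range(len(edges) - 1):
        p0 = gauss_cdf(edges[i + 1], mus[0], sigma) - gauss_cdf(edges[i], mus[0], sigma)
        p1 = gauss_cdf(edges[i + 1], mus[1], sigma) - gauss_cdf(edges[i], mus[1], sigma)
        llrs.append(math.log(p0 / p1))          # log-likelihood ratio of the bin
    return llrs

# Three reads -> four bins -> four LLR levels fed to the LDPC decoder:
# bin_llrs([-0.3, 0.0, 0.3])
```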
Finite Alphabet Iterative Decoders, Part I: Decoding Beyond Belief Propagation on BSC
2012.
"... We introduce a new paradigm for finite precision iterative decoding on low-density parity-check codes over the Binary Symmetric channel. The messages take values from a finite alphabet, and unlike traditional quantized decoders which are quantized versions of the Belief propagation (BP) decoder, the ..."
Abstract
- Add to MetaCart
We introduce a new paradigm for finite-precision iterative decoding of low-density parity-check codes over the binary symmetric channel. The messages take values from a finite alphabet, and unlike traditional quantized decoders, which are quantized versions of the belief propagation (BP) decoder, the proposed finite alphabet iterative decoders (FAIDs) do not propagate quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder. Rather, the update functions are maps designed using the knowledge of potentially harmful subgraphs that could be present in a given code, thereby rendering these decoders capable of outperforming BP in the error floor region. On certain column-weight-three codes of practical interest, we show that there exist 3-bit precision FAIDs that surpass the BP decoder in the error floor. Hence, FAIDs achieve superior performance at much lower complexity. We also provide a methodology for the selection of …
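A sketch of the decoder skeleton this abstract describes: messages live in a small alphabet, the check node applies the standard extrinsic sign-min rule, and the variable node applies a designed map, passed in here as `vn_map` (the `vn_update` sketched earlier in this listing could serve as a placeholder). The graph representation and stopping rule are illustrative choices, not the paper's.

```python
# Skeleton of a finite alphabet iterative decoder (FAID) on the BSC.

def cn_update(msgs):
    """Check node over the finite alphabet: extrinsic sign-min rule.
    Assumes at least one incoming message (check degree >= 2)."""
    sign = -1 if sum(m < 0 for m in msgs) % 2 else 1
    return sign * min(abs(m) for m in msgs)

def faid_decode(checks, y, vn_map, n_vars, max_iters=50):
    """checks: list of checks, each a list of variable indices.
    y: BSC observations mapped to {-1, +1}.
    vn_map(y_v, extrinsic_levels) -> outgoing level (the designed map)."""
    v2c = {(c, v): 0 for c, row in enumerate(checks) for v in row}
    var_nbrs = [[c for c, row in enumerate(checks) if v in row]
                for v in range(n_vars)]
    for _ in range(max_iters):
        c2v = {(c, v): cn_update([v2c[(c, u)] for u in row if u != v])
               for c, row in enumerate(checks) for v in row}
        for c, row in enumerate(checks):
            for v in row:
                extrinsic = [c2v[(d, v)] for d in var_nbrs[v] if d != c]
                v2c[(c, v)] = vn_map(y[v], extrinsic)
        # Hard decision: channel value plus all incoming check messages.
        hard = [0 if y[v] + sum(c2v[(d, v)] for d in var_nbrs[v]) >= 0 else 1
                for v in range(n_vars)]
        if all(sum(hard[v] for v in row) % 2 == 0 for row in checks):
            break
    return hard
```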
Iterative Decoding Beyond Belief Propagation
"... Abstract—At the heart of modern coding theory lies the fact that low-density parity-check (LDPC) codes can be efficiently decoded by belief propagation (BP). The BP is an inference algorithm which operates on a graphical model of a code, and lends itself to low-complexity and high-speed implementati ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract—At the heart of modern coding theory lies the fact that low-density parity-check (LDPC) codes can be efficiently decoded by belief propagation (BP). BP is an inference algorithm that operates on a graphical model of a code and lends itself to low-complexity, high-speed implementations, making it the algorithm of choice in many applications. Its error-rate performance is so good that, when decoded by BP, LDPC codes approach the theoretical limits of channel capacity. However, this capacity-approaching property holds only in the asymptotic limit of code length, while codes of practical length suffer an abrupt performance degradation in the low-noise regime, known as the error floor phenomenon. Our study of error floors has led to an interesting and surprising finding: it is possible to design iterative decoders that are much simpler than, yet better than, belief propagation! These decoders do not propagate beliefs but rather a different kind of message that reflects the local structure of the code graph. This has opened a plethora of exciting theoretical problems and applications. This paper introduces this new paradigm.
Quantization of Binary-Input Discrete Memoryless Channels, with Applications to LDPC Decoding
"... Abstract—The quantization of the output of a binary-input discrete memoryless channel to a smaller number of levels is considered. The optimal quantizer, in the sense of maximizing mutual information between the channel input and the quantizer output, may be found by an algorithm with complexity whi ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract—The quantization of the output of a binary-input discrete memoryless channel to a smaller number of levels is considered. The optimal quantizer, in the sense of maximizing mutual information between the channel input and the quantizer output, may be found by an algorithm whose complexity is quadratic in the number of channel outputs. This is a concave optimization problem, and results from the field of concave optimization are invoked. The quantizer design algorithm is a realization of a dynamic program. This algorithm is then applied to the design of message-passing decoders for low-density parity-check codes over arbitrary discrete memoryless channels. A general, systematic method is given for finding message-passing decoding maps that maximize mutual information at each iteration. This may be contrasted with existing quantized message-passing algorithms, which are heuristically derived. The method finds message-passing decoding maps similar to those given by Richardson and Urbanke's Algorithm E. Using four bits per message, noise thresholds similar to those of belief-propagation decoding are obtained. Index Terms—discrete memoryless channel, channel quantization, mutual information maximization, LDPC decoding
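A sketch of a dynamic-programming quantizer of the kind the abstract describes, under the standard observation that once the channel outputs are sorted by LLR, an optimal K-level quantizer uses contiguous groups; the group-boundary recursion and bookkeeping details below are illustrative choices, not the paper's code.

```python
import math

def bin_mi(p0, p1):
    """Mutual-information contribution (bits) of one quantizer bin with
    conditional masses p0 = P(bin|X=0), p1 = P(bin|X=1), uniform input."""
    pz = 0.5 * (p0 + p1)
    return sum(0.5 * px * math.log2(px / pz) for px in (p0, p1) if px > 0)

def mmi_quantizer(P0, P1, K):
    """DP quantizer sketch for a binary-input DMC with N outputs.

    P0, P1: P(y|X=0), P(y|X=1) with outputs already sorted by LLR, so the
    optimal K-level quantizer uses contiguous groups. Complexity is
    O(K * N^2): quadratic in N. Returns (max MI, interior boundaries)."""
    N = len(P0)
    c0 = [0.0] * (N + 1)                        # cumulative masses
    c1 = [0.0] * (N + 1)
    for i in range(N):
        c0[i + 1] = c0[i] + P0[i]
        c1[i + 1] = c1[i] + P1[i]
    g = lambda a, b: bin_mi(c0[b] - c0[a], c1[b] - c1[a])  # group [a, b)
    NEG = float("-inf")
    S = [[NEG] * (N + 1) for _ in range(K + 1)]
    back = [[0] * (N + 1) for _ in range(K + 1)]
    S[0][0] = 0.0
    for k in range(1, K + 1):
        for n in range(k, N + 1):
            for m in range(k - 1, n):
                val = S[k - 1][m] + g(m, n)
                if val > S[k][n]:
                    S[k][n], back[k][n] = val, m
    # Recover the group boundaries by walking the back-pointers.
    cuts, n = [], N
    for k in range(K, 0, -1):
        cuts.append(back[k][n])
        n = back[k][n]
    return S[K][N], sorted(cuts[:-1])

# Example: mmi_quantizer([0.4, 0.3, 0.2, 0.1], [0.1, 0.2, 0.3, 0.4], K=2)
```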