Results 1 - 10 of 26
Analysis of absorbing sets and fully absorbing sets of array-based LDPC codes
- IEEE Trans. on Information Theory
, 2008
Cited by 36 (13 self)
The class of low-density parity-check (LDPC) codes is attractive, since such codes can be decoded using practical message-passing algorithms, and their performance is known to approach the Shannon limits for suitably large blocklengths. For the intermediate blocklengths relevant in applications, however, many LDPC codes exhibit a so-called "error floor", corresponding to a significant flattening in the curve that relates signal-to-noise ratio (SNR) to the bit error rate (BER) level. Previous work has linked this behavior to combinatorial substructures within the Tanner graph associated with an LDPC code, known as (fully) absorbing sets. These fully absorbing sets correspond to a particular type of near-codewords or trapping sets that are stable under bit-flipping operations, and they exert the dominant effect on the low-BER behavior of structured LDPC codes. This paper provides a detailed theoretical analysis of these (fully) absorbing sets for the class of C(p,γ) array-based LDPC codes, including a characterization of all minimal (fully) absorbing sets for γ = 2, 3, 4, and it develops techniques to enumerate them exactly. Theoretical results of this type provide a foundation for predicting and extrapolating the error floor behavior of LDPC codes.
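The combinatorial object this abstract studies can be checked directly on a parity-check matrix: an (a, b) absorbing set is a set of a variable nodes with b odd-degree neighboring checks, in which every variable node has strictly more even-degree (satisfied) than odd-degree neighboring checks. A minimal sketch of that definition, not from the paper; the matrix H and candidate set D below are hypothetical toy inputs:

```python
import numpy as np

def is_absorbing_set(H, D):
    """Check whether variable-node set D is an (a, b) absorbing set of H.

    D qualifies if every variable node in D has strictly more even-degree
    (satisfied) neighboring checks than odd-degree ones; b counts the
    checks with odd degree into D. Returns (a, b) if D qualifies, else None.
    """
    H = np.asarray(H, dtype=int)
    x = np.zeros(H.shape[1], dtype=int)
    x[list(D)] = 1
    check_deg = H @ x                    # degree of each check node into D
    odd_checks = (check_deg % 2 == 1)    # unsatisfied checks
    for v in D:
        nbrs = np.flatnonzero(H[:, v])   # checks adjacent to variable v
        n_odd = int(odd_checks[nbrs].sum())
        n_even = len(nbrs) - n_odd
        if n_odd >= n_even:              # majority must be satisfied
            return None
    return (len(D), int(odd_checks.sum()))
```

For example, with H = [[1,1,0],[1,1,0],[1,0,1]], the set {0, 1} is a (2, 1) absorbing set, while {2} alone is not.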
Towards a communication-theoretic understanding of system-level power consumption
Cited by 28 (6 self)
Traditional communication theory focuses on minimizing transmit power. Increasingly, however, communication links are operating at shorter ranges where transmit power can drop below the power consumed in decoding. In this paper, we model the required decoding power and investigate the minimization of total system power from two complementary perspectives. First, an isolated point-to-point link is considered. Using new lower bounds on the complexity of message-passing decoding, lower bounds are derived on decoding power. These bounds show that 1) there is a fundamental tradeoff between transmit and decoding power; 2) unlike the implications of the traditional "waterfall" curve, which focuses on transmit power, the total power must diverge to infinity as error probability goes to zero; 3) regular LDPC codes, and not their capacity-achieving counterparts, can be shown to be power-order-optimal in some cases; and 4) the optimizing transmit power is bounded away from the Shannon limit. Second, we consider a collection of point-to-point links. When systems both generate and face interference, coding allows a system to support a higher density of transmitter-receiver pairs (assuming interference is treated as noise). However, at low densities, uncoded transmission may be more power-efficient in some cases.
An Efficient 10GBASE-T Ethernet LDPC Decoder Design with Low Error Floors
, 2009
Cited by 24 (2 self)
A grouped-parallel low-density parity-check (LDPC) decoder is designed for the (2048, 1723) Reed-Solomon-based LDPC (RS-LDPC) code suitable for 10GBASE-T Ethernet. A two-step decoding scheme reduces the wordlength to 4 bits while lowering the error floor to a 10⁻¹⁴ BER. The proposed postprocessor is conveniently integrated with the decoder, adding minimal area and power. The decoder architecture is optimized by groupings that localize irregular interconnects and regularize global interconnects, minimizing the overall wiring overhead. The 5.35 mm², 65 nm CMOS chip achieves a decoding throughput of 47.7 Gb/s. With scaled frequency and voltage, the chip delivers the 6.67 Gb/s throughput necessary for 10GBASE-T while dissipating 144 mW of power. Index Terms—Low-density parity-check (LDPC) code; message-passing decoding; iterative decoder; error floors.
Decomposition methods for large scale LP decoding
- In 49th Annual Allerton Conference on Communication, Control, and Computing
, 2011
Cited by 15 (3 self)
Abstract—When binary linear error-correcting codes are used over symmetric channels, a relaxed version of the maximum likelihood decoding problem can be stated as a linear program (LP). This LP decoder can be used to decode at bit error rates comparable to state-of-the-art belief propagation (BP) decoders, but with significantly stronger theoretical guarantees. However, LP decoding implemented with standard LP solvers does not easily scale to the block lengths of modern error-correcting codes. In this paper we draw on decomposition methods from optimization theory, specifically the alternating direction method of multipliers (ADMM), to develop efficient distributed algorithms for LP decoding. The key enabling technical result is a nearly linear time algorithm for two-norm projection onto the parity polytope. This allows us to use LP decoding, with all its theoretical guarantees, to decode large-scale error-correcting codes efficiently. We present numerical results for two LDPC codes. The first is the rate-0.5 [2640, 1320] "Margulis" code; the second is a rate-0.77 [1057, 244] code. The "waterfall" region of LP decoding is seen to initiate at a slightly higher signal-to-noise ratio than for sum-product BP; however, no error floor is observed for either code, which is not the case for BP. Our implementation of LP decoding using ADMM executes as quickly as our baseline sum-product BP decoder, is fully parallelizable, and can be seen to implement a type of message-passing with a particularly simple schedule.
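The paper's key algorithmic step is a fast two-norm projection onto the parity polytope, the convex hull of the even-weight binary vectors. As a lighter illustration of the same object (a sketch for intuition, not the paper's projection algorithm), here is a membership test based on the polytope's facet characterization: x lies in PP_d iff 0 ≤ x ≤ 1 and, for every odd-size subset F, sum_{i in F} x_i - sum_{i not in F} x_i ≤ |F| - 1, where only the most-violated facet needs checking:

```python
import numpy as np

def in_parity_polytope(x, tol=1e-9):
    """Membership test for the parity polytope PP_d (convex hull of
    even-weight binary vectors). Rewriting the facet inequality with
    w_i = 2*x_i - 1, membership requires sum_{i in F} w_i <= sum(x) - 1
    for every odd-size F; the worst F is found greedily."""
    x = np.asarray(x, dtype=float)
    if x.min() < -tol or x.max() > 1 + tol:
        return False                   # outside the unit cube
    w = 2 * x - 1
    F = w > 0                          # unconstrained maximizer of sum_F w_i
    if F.sum() % 2 == 0:               # force |F| odd with the cheapest flip:
        i = np.argmin(np.abs(w))       # flipping i costs |w_i| either way
        F[i] = not F[i]
    return bool(w[F].sum() <= x.sum() - 1 + tol)
```

For instance, the even-weight vertex [1, 1, 0] and the centroid [0.5, 0.5, 0.5] pass the test, while the odd-weight vertex [1, 0, 0] fails it.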
Controlling LDPC Absorbing Sets via the Null Space of the Cycle Consistency Matrix
- In Proc. IEEE Int. Conf. on Comm. (ICC)
, 2011
Cited by 11 (8 self)
Abstract—This paper focuses on controlling absorbing sets for a class of regular LDPC codes, known as separable, circulant-based (SCB) codes. For a specified circulant matrix, SCB codes all share a common mother matrix and include array-based LDPC codes and many common quasi-cyclic codes. SCB codes retain standard properties of quasi-cyclic LDPC codes such as girth, code structure, and compatibility with existing high-throughput hardware implementations. This paper uses a cycle consistency matrix (CCM) for each absorbing set of interest in an SCB LDPC code. For an absorbing set to be present in an SCB LDPC code, the associated CCM must not be full column-rank. Our approach selects rows and columns from the SCB mother matrix to systematically eliminate dominant absorbing sets by forcing the associated CCMs to be full column-rank. Simulation results demonstrate that the new codes have steeper error-floor slopes and provide at least one order of magnitude of improvement in the low-FER region. Identifying absorbing-set-spectrum equivalence classes within the family of SCB codes with a specified circulant matrix significantly reduces the search space of possible code matrices.
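The CCM condition above reduces to a linear-algebra test: an absorbing set is ruled out when its CCM has full column rank. The exact CCM construction is in the paper; assuming circulants of prime size p so that arithmetic is done modulo p (an assumption here, not spelled out in the abstract), a generic rank computation over GF(p) might serve as the building block:

```python
def rank_mod_p(M, p):
    """Rank of an integer matrix M over GF(p), p prime, by Gaussian
    elimination. A CCM with rank < number of columns has a nontrivial
    null space mod p, signalling the corresponding absorbing set can
    occur; forcing full column rank eliminates it."""
    A = [[x % p for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    rank = 0
    for c in range(cols):
        # find a pivot for column c among the unreduced rows
        piv = next((r for r in range(rank, rows) if A[r][c]), None)
        if piv is None:
            continue
        A[rank], A[piv] = A[piv], A[rank]
        inv = pow(A[rank][c], p - 2, p)          # inverse via Fermat
        A[rank] = [(v * inv) % p for v in A[rank]]
        for r in range(rows):
            if r != rank and A[r][c]:
                f = A[r][c]
                A[r] = [(a - f * b) % p for a, b in zip(A[r], A[rank])]
        rank += 1
    return rank
```

For example, rank_mod_p([[1, 2], [2, 4]], 5) is 1, since the second row is twice the first modulo 5; a CCM with such a dependency would not be full column-rank.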
Quantized min-sum decoders with low error floor for LDPC codes
- In Proc. IEEE Int. Symp. on Inform. Theory
, 2012
Cited by 8 (5 self)
Abstract—The error floor phenomenon observed with LDPC codes and their graph-based, iterative, message-passing (MP) decoders is commonly attributed to the existence of error-prone substructures in a Tanner graph representation of the code. Many approaches have been proposed to lower the error floor by designing new LDPC codes with fewer such substructures or by modifying the decoding algorithm. In this paper, we show that one source of the error floors observed in the literature may be the message quantization rule used in the iterative decoder implementation. We then propose a new quantization method to overcome the limitations of standard quantization rules. Performance simulation results for two LDPC codes commonly found to have high error floors when used with the fixed-point min-sum decoder and its variants demonstrate the validity of our findings and the effectiveness of the proposed quantization algorithm.
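To make the quantization issue concrete, here is a minimal sketch (not the paper's decoder or its proposed rule) of a standard min-sum check-node update paired with a uniform saturating quantizer; the step size and bit width are illustrative assumptions:

```python
import numpy as np

def quantize(msgs, step=0.5, bits=4):
    """Uniform symmetric quantizer: round to the nearest multiple of
    `step` and saturate to the `bits`-bit two's-complement range.
    Coarse saturation like this is the kind of standard rule the
    abstract identifies as a possible source of error floors."""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    q = np.clip(np.round(np.asarray(msgs, dtype=float) / step), lo, hi)
    return q * step

def min_sum_check_update(in_msgs):
    """Min-sum check-node rule: the output on edge i is the product of
    the signs of all other incoming messages times their minimum
    magnitude."""
    m = np.asarray(in_msgs, dtype=float)
    sgn = np.where(m >= 0, 1.0, -1.0)
    total_sign = np.prod(sgn)
    mag = np.abs(m)
    out = np.empty_like(m)
    for i in range(len(m)):
        others = np.delete(mag, i)
        out[i] = total_sign * sgn[i] * others.min()  # drops edge i's sign
    return out
```

With 4 bits and step 0.5, every message saturates at ±3.5; the abstract's point is that such clipping in the fixed-point implementation, not only graph substructure, can induce error floors.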
LDPC Absorbing Sets, the Null Space of the Cycle Consistency Matrix, and Tanner's Constructions
- In IEEE Conf. on Info. Theory and its Appl.
, 2011
Cited by 7 (7 self)
Abstract—Dolecek et al. introduced the cycle consistency condition, which is a necessary condition for cycles, and thus the absorbing sets that contain them, to be present in separable circulant-based (SCB) LDPC codes. This paper introduces a cycle consistency matrix (CCM) for each possible absorbing set in an SCB LDPC code. The CCM efficiently enforces the cycle consistency condition for all cycles in a specified absorbing set by spanning its associated binary cycle space. Under certain conditions, a CCM not having full column rank is a necessary and sufficient condition for the LDPC code to contain the absorbing set associated with that CCM. This paper uses the CCM approach to carefully analyze LDPC codes based on the Tanner construction for r = 4 rows of sub-matrices (i.e., Tanner-construction LDPC codes with column weight 4).
Towards Improved LDPC Code Designs Using Absorbing Set Spectrum Properties
- In Proc. of 6th International Symposium on
, 2010
Cited by 6 (5 self)
Abstract—This paper focuses on methods for a systematic modification of the parity-check matrix of regular LDPC codes for improved performance in the low-BER region (i.e., the error floor). A judicious elimination of dominant absorbing sets strictly improves the absorbing set spectrum and thereby improves the code performance. This absorbing set elimination is accomplished without compromising code properties and parameters such as the girth, node degrees, and the structure of the parity-check matrix. For a representative class of practical codes we substantiate theoretical analysis with experimental results obtained in the low-BER region. Our results demonstrate at least an order of magnitude of improvement of the error floor relative to the original code designs. Given that the conventional code parameters remain intact, the new codes can easily be implemented on existing software or hardware platforms employing high-throughput, compact architectures. As such, the proposed approach provides a step towards improved code design that is compatible with practical implementation constraints.
On absorbing sets of structured sparse graph codes
- Inf. Theory Appl. Workshop
, 2010
Cited by 4 (3 self)
Abstract—In contrast to the capacity-approaching performance of iteratively decoded low-density parity-check (LDPC) codes, many practical finite-length LDPC codes exhibit performance degradation, manifested in a so-called error floor. Previous work has linked this phenomenon to the presence of certain combinatorial structures within the Tanner graph representation of the code, termed absorbing sets. Absorbing sets are stable under bit-flipping operations and have been shown to act as fixed points ("absorbers") for a wider class of iterative decoding algorithms. Codes often possess absorbing sets whose size is smaller than the minimum distance; the smallest absorbing sets are deemed the most detrimental culprits behind the error floor. This paper focuses on elementary combinatorial bounds on the smallest (candidate) absorbing sets. For certain classes of practical codes we demonstrate the tightness of these bounds and show how the structure of the code and of the absorbing sets can be utilized to increase the size of the smallest absorbing sets without compromising other code properties such as the node degrees and the girth. As such, this work provides a step towards better code design that takes into account the combinatorial nature of fixed points of iterative decoding algorithms.
Cyclic and quasi-cyclic LDPC codes on row and column constrained parity-check matrices and their trapping sets
- IEEE Trans. Inform. Theory
, 2012