## Binary intersymbol interference channels: Gallager codes, density evolution and code performance bounds (2003)

Venue: | IEEE TRANS. INFORM. THEORY |

Citations: | 49 - 4 self |

### BibTeX

```bibtex
@ARTICLE{Kavčić03binaryintersymbol,
  author  = {Aleksandar Kavčić and Xiao Ma and Michael Mitzenmacher},
  title   = {Binary intersymbol interference channels: Gallager codes, density evolution and code performance bounds},
  journal = {IEEE TRANS. INFORM. THEORY},
  year    = {2003},
  volume  = {49},
  pages   = {1636--1652}
}
```


### Abstract

We study the limits of performance of Gallager codes (low-density parity-check (LDPC) codes) over binary linear intersymbol interference (ISI) channels with additive white Gaussian noise (AWGN). Using the graph representations of the channel, the code, and the sum–product message-passing detector/decoder, we prove two error concentration theorems. Our proofs expand on previous work by handling complications introduced by the channel memory. We circumvent these problems by considering not just linear Gallager codes but also their cosets and by distinguishing between different types of message flow neighborhoods depending on the actual transmitted symbols. We compute the noise tolerance threshold using a suitably developed density evolution algorithm and verify, by simulation, that the thresholds represent accurate predictions of the performance of the iterative sum–product algorithm for finite (but large) block lengths. We also demonstrate that for high rates, the thresholds are very close to the theoretical limit of performance for Gallager codes over ISI channels. If C denotes the capacity of a binary ISI channel and if C_i.i.d. denotes the maximal achievable mutual information rate when the channel inputs are independent and identically distributed (i.i.d.) binary random variables (C_i.i.d. ≤ C), we prove that the maximum information rate achievable by the sum–product decoder of a Gallager (coset) code is upper-bounded by C_i.i.d.. The last topic investigated is the performance limit of the decoder if the trellis portion of the sum–product algorithm is executed only once; this demonstrates the potential for trading off the computational requirements and the performance of the decoder.
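The thresholds described in the abstract come from density evolution. The ISI version developed in the paper tracks full message densities through the joint channel/code graph; as an illustration of the underlying idea only, here is the textbook scalar special case for a regular (dv, dc) LDPC ensemble on the binary erasure channel (BEC), where the "density" collapses to a single erasure probability (this toy memoryless example is not the paper's algorithm):

```python
# Toy density evolution for a regular (dv, dc) LDPC ensemble over the BEC.
# On the BEC the message density reduces to a scalar erasure probability,
# giving the recursion  x_{l+1} = eps * (1 - (1 - x_l)**(dc-1))**(dv-1).
# The threshold eps* is the largest channel erasure rate driven to zero.

def evolves_to_zero(eps, dv, dc, iters=5000, tol=1e-7):
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

def bec_threshold(dv, dc, steps=40):
    lo, hi = 0.0, 1.0
    for _ in range(steps):  # bisect on the channel erasure probability
        mid = 0.5 * (lo + hi)
        if evolves_to_zero(mid, dv, dc):
            lo = mid
        else:
            hi = mid
    return lo

print(bec_threshold(3, 6))  # close to the known (3,6)-regular BEC threshold 0.4294
```

The paper's density evolution replaces this scalar recursion with an evolution of full message densities that also depend on the channel trellis and on the transmitted coset.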

### Citations

9087 | Elements of Information Theory - Cover, Thomas - 1991 |

7407 |
Probabilistic reasoning in intelligent systems: Networks of plausible inference
- Pearl
- 1988
Citation Context: ...referred to in the coding literature as the “sum–product” algorithm [18], [28], but is also known as belief propagation [35], [36]. When applied specifically to ISI channels, the algorithm also takes the name “turbo equalization” [37]. For convenience in the later sections, we describe here the “windowed” version of the al... |

1909 | Randomized Algorithms
- Motwani, Raghavan
- 1995
Citation Context: ...ensemble of channel noise realizations (which uniquely define the channel outputs since is known). Following [23], [25], we form a Doob edge-and-noise-revealing martingale and apply Azuma’s inequality [40] to get (21) where depends only on , , and . Next, we show that the second term on the right-hand side of (20) equals by using inequality (18). Again, this is adopted from [25], but adapted to a chann... |
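The concentration argument quoted here rests on Azuma's inequality for bounded-difference martingales. As a minimal numerical sanity check of that inequality (using a ±1 simple random walk as a stand-in, not the paper's edge-and-noise-revealing martingale):

```python
# Azuma's inequality: for a martingale S_n with bounded differences
# |S_k - S_{k-1}| <= c,  P(|S_n - S_0| >= t) <= 2*exp(-t**2 / (2*n*c**2)).
# A +/-1 simple random walk is the textbook case (c = 1).

import math
import random

def tail_probability(n, t, trials=20000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n))
        if abs(s) >= t:
            hits += 1
    return hits / trials

n, t = 100, 30
empirical = tail_probability(n, t)
azuma = 2.0 * math.exp(-t * t / (2.0 * n))
print(empirical <= azuma)  # the empirical tail sits below the Azuma bound
```

The bound is loose here (the Gaussian tail is much smaller), which is typical: Azuma trades tightness for applicability to any bounded-difference martingale, exactly what the concentration theorems in the paper need.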

1542 |
Information Theory and Reliable Communication
- Gallager
- 1968
Citation Context: ...the water-filling theorem [1], [2]. In many applications, the physics of the channel do not allow continuous input alphabets. A prime example of a two-level (binary) ISI channel is the saturation magnetic recording channel, becau... |
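The water-filling theorem cited in this snippet gives the capacity-achieving power allocation across parallel Gaussian sub-channels: pour total power P so that each sub-channel gets max(0, mu − N_i) for a common "water level" mu. A minimal sketch with made-up noise levels (the noise values and power budget below are illustrative, not from the paper):

```python
# Water-filling power allocation over parallel Gaussian sub-channels.
# Bisect on the water level mu until the allocated power matches P.

def water_fill(noise, total_power, iters=100):
    lo, hi = 0.0, max(noise) + total_power
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - n) for n in noise)
        if used < total_power:
            lo = mu
        else:
            hi = mu
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - n) for n in noise]

alloc = water_fill([1.0, 2.0, 4.0], total_power=3.0)
print([round(p, 2) for p in alloc])  # -> [2.0, 1.0, 0.0]
```

The noisiest sub-channel (N = 4) gets no power because the water level settles at mu = 3, which is exactly the situation the snippet contrasts with discrete input alphabets, where no such closed-form allocation gives the capacity.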

1392 | Near Shannon limit error-correcting coding and decoding: Turbo Codes
- Berrou, Glavieux, et al.
- 1993
Citation Context: ...unifying theory of codes on graphs by Wiberg et al. [18] and Forney [19]. MacKay and Neal [20], [21] showed that there exist good Gallager codes with performances about 0.5 dB worse than turbo codes [22]. A major breakthrough was the construction of irregular Gallager codes [23], and the development of a method to analyze them for erasure channels [14], [24]. These methods were adapted to memoryless ... |

1270 | Factor graphs and the sum-product algorithm
- Kschischang, Frey, et al.
- 2001
Citation Context: ...ly once. The paper is organized as follows. In Section II, we describe the channel model, introduce the various capacity and information rate definitions, and briefly describe the sum–product decoder [28]. In Section III, we introduce the necessary notation for handling the analysis of Gallager codes for channels with memory and prove two key concentration theorems. Section IV is devoted to describing... |

1269 |
Optimal decoding of linear codes for minimizing symbol error rate
- Bahl, Cocke, et al.
- 1974
Citation Context: ...mmetric information rate [5]–[7]. Recently, a Monte Carlo method for numerically evaluating the symmetric information rate using the forward recursion of the Bahl–Cocke–Jelinek–Raviv (BCJR) algorithm [8] (also known as the Baum–Welch algorithm, the sum–product algorithm, or the forward–backward algorithm) has been proposed by Arnold and Loeliger [9], and independently by Pfister, Soriaga, and Siegel ... |
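The forward-recursion Monte Carlo method attributed here to Arnold–Loeliger and Pfister–Soriaga–Siegel can be sketched in a few lines. The example below assumes a dicode-like ISI channel y_k = x_k − x_{k−1} + n_k with i.u.d. inputs x_k ∈ {−1, +1} and noise variance 0.25 (illustrative choices, not parameters taken from the paper): the forward (alpha) recursion accumulates log p(y_1^n), which estimates h(Y), and h(Y|X) is known in closed form for AWGN.

```python
# Monte Carlo estimate of the i.u.d. (symmetric) information rate of a
# binary-input ISI channel via the BCJR forward recursion:
#   I_iud ~= -(1/n) log2 p(y_1^n) - 0.5*log2(2*pi*e*sigma2).

import math
import random

def estimate_iud_rate(n=20000, sigma2=0.25, seed=7):
    rng = random.Random(seed)
    sigma = math.sqrt(sigma2)
    states = (-1, +1)  # state = previous input symbol

    # Simulate i.u.d. inputs through the dicode channel y = x - prev + noise.
    prev = rng.choice(states)
    y = []
    for _ in range(n):
        x = rng.choice(states)
        y.append(x - prev + rng.gauss(0.0, sigma))
        prev = x

    # Forward recursion with per-step normalization; the accumulated logs
    # of the normalizers give log p(y_1^n).
    alpha = {s: 0.5 for s in states}
    log2_py = 0.0
    for yk in y:
        new = {s: 0.0 for s in states}
        for s_prev, a in alpha.items():
            for x in states:  # i.u.d. inputs: prior 1/2 on each branch
                mean = x - s_prev
                new[x] += 0.5 * a * math.exp(-(yk - mean) ** 2 / (2.0 * sigma2))
        z = sum(new.values())
        log2_py += math.log2(z / math.sqrt(2.0 * math.pi * sigma2))
        alpha = {s: v / z for s, v in new.items()}

    h_y = -log2_py / n  # estimated differential entropy rate of Y
    h_y_given_x = 0.5 * math.log2(2.0 * math.pi * math.e * sigma2)
    return h_y - h_y_given_x

print(estimate_iud_rate())  # bits per channel use, between 0 and 1
```

At this noise level the estimate sits close to 1 bit/use; lowering the SNR pulls it down, tracing out the symmetric-information-rate curve that the paper uses as a benchmark for its thresholds.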

974 | Low-Density Parity-Check Codes
- Gallager
- 1963
Citation Context: ...lity to achieve (near) channel capacity has recently been numerically demonstrated for various memoryless [14], [15] channels using Gallager codes, also known as low-density parity-check (LDPC) codes [16]. The theory of Gallager codes has vastly benefitted from the notion of codes on graphs first introduced by Tanner [17] and further expanded into a unifying theory of codes on graphs by Wiberg et al. ... |

560 | Good error correcting codes based on very sparse matrices
- MacKay
- 1999
Citation Context: ...nefitted from the notion of codes on graphs first introduced by Tanner [17] and further expanded into a unifying theory of codes on graphs by Wiberg et al. [18] and Forney [19]. MacKay and Neal [20], [21] showed that there exist good Gallager codes with performances about 0.5 dB worse than turbo codes [22]. A major breakthrough was the construction of irregular Gallager codes [23], and the development... |

473 |
A recursive approach to low complexity codes
- Tanner
- 1980
Citation Context: ...annels using Gallager codes, also known as low-density parity-check (LDPC) codes [16]. The theory of Gallager codes has vastly benefitted from the notion of codes on graphs first introduced by Tanner [17] and further expanded into a unifying theory of codes on graphs by Wiberg et al. [18] and Forney [19]. MacKay and Neal [20], [21] showed that there exist good Gallager codes with performances about 0.... |

353 | Near Shannon limit performance of low density parity check codes
- MacKay, Neal
- 1996
Citation Context: ...tly benefitted from the notion of codes on graphs first introduced by Tanner [17] and further expanded into a unifying theory of codes on graphs by Wiberg et al. [18] and Forney [19]. MacKay and Neal [20], [21] showed that there exist good Gallager codes with performances about 0.5 dB worse than turbo codes [22]. A major breakthrough was the construction of irregular Gallager codes [23], and the devel... |

339 | Turbo decoding as an instance of Pearl’s belief propagation algorithm
- McEliece, MacKay, et al.
- 1998
Citation Context: ...referred to in the coding literature as the “sum–product” algorithm [18], [28], but is also known as belief propagation [35], [36]. When applied specifically to ISI channels, the algorithm also takes the name “turbo equalization” [37]. For convenience in the later sections, we describe here the “windowed” version of the algorith... |

287 |
Maximum-likelihood Sequence Estimation of Digital Sequences in the Presence of Intersymbol Interference
- Forney
- 1972
Citation Context: ...bet . The channel’s probabilistic law is captured by the equation where is a zero-mean AWGN sequence with variance whose realizations are . The channel in (1) is conveniently represented by a trellis [29], or, equivalently, by a graph where for each variable there is a single trellis node [18], [19]. Define the state at time as the vector that collects the input variables through , i.e., . The realiza... |

250 |
A Viterbi Algorithm with Soft-decision Outputs and its Applications
- Hagenauer, Hoeher
- 1989
Citation Context: ...rchangeably to describe either of the two vectors. C. Sum–Product Decoding by Message Passing In the literature, several methods exist for soft detection of symbols transmitted over ISI channels [8], [30]–[34]. There also exist several message-passing algorithms that decode codes on graphs [16], [17], [23], [25]. Here, we will adopt the algorithm... |

231 | On the Design of Low-Density Parity-Check Codes Within 0.0045 dB of Shannon Limit
- Chung, Forney, et al.
- 2001
Citation Context: ...optimize codes whose performance is proven to get very close to the capacity, culminating in a remarkable 0.0045-dB distance from the capacity of the memoryless AWGN channel reported by Chung et al. [27]. In this paper, we focus on developing the density evolution method for channels with binary inputs and ISI memory. The computed thresholds are used for lower-bounding the capacity, as well as for up... |

230 |
Iterative correction of intersymbol interference
- Douillard, Jezequel, et al.
- 1995
Citation Context: ...rature as the “sum–product” algorithm [18], [28], but is also known as belief propagation [35], [36]. When applied specifically to ISI channels, the algorithm also takes the name “turbo equalization” [37]. For convenience in the later sections, we describe here the “windowed” version of the algorithm. First, we join the channel factor graph (Fig. 1) with the code graph (Fig. 2) to get the joint channe... |

178 | Analysis of sum-product decoding of low-density parity check codes using a Gaussian approximation - Chung, Richardson, et al. - 2001 |

176 | Improved low-density parity-check codes using irregular graphs and belief propagation
- Luby, Mitzenmacher, et al.
- 1998
Citation Context: ...e is to devise codes that will achieve the capacity (or at least the i.i.d. capacity). The ability to achieve (near) channel capacity has recently been numerically demonstrated for various memoryless [14], [15] channels using Gallager codes, also known as low-density parity-check (LDPC) codes [16]. The theory of Gallager codes has vastly benefitted from the notion of codes on graphs first introduced b... |

170 | The capacity of low-density parity check codes under message-passing decoding
- Richardson, Urbanke
- 2001
Citation Context: ...nalyze them for erasure channels [14], [24]. These methods were adapted to memoryless channels with continuous output alphabets (e.g., AWGN channels, Laplace channels, etc.) by Richardson and Urbanke [25], who also coined the term “density evolution” for a tool to analyze the asymptotic performance of Gallager and turbo codes over these cha... |

150 | Digital Communications - Proakis - 1995 |

126 |
Codes on Graphs: Normal Realizations
- Forney
- 2001
Citation Context: ...Gallager codes has vastly benefitted from the notion of codes on graphs first introduced by Tanner [17] and further expanded into a unifying theory of codes on graphs by Wiberg et al. [18] and Forney [19]. MacKay and Neal [20], [21] showed that there exist good Gallager codes with performances about 0.5 dB worse than turbo codes [22]. A major breakthrough was the construction of irregular Gallager cod... |

103 |
Codes and iterative decoding on general graphs
- Wiberg, Loeliger, et al.
- 1995
Citation Context: ...The theory of Gallager codes has vastly benefitted from the notion of codes on graphs first introduced by Tanner [17] and further expanded into a unifying theory of codes on graphs by Wiberg et al. [18] and Forney [19]. MacKay and Neal [20], [21] showed that there exist good Gallager codes with performances about 0.5 dB worse than turbo codes [22]. A major breakthrough was the construction of irregu... |

102 | An intuitive justification and a simplified implementation of the MAP decoder for convolutional codes - Viterbi - 1998 |

78 | Analysis of low density codes and improved designs using irregular graphs - Luby, Mitzenmacher, et al. - 1998 |

76 | Codes for digital recorders
- Immink, Siegel, et al.
- 1998
Citation Context: ...t allow continuous input alphabets. A prime example of a two-level (binary) ISI channel is the saturation magnetic recording channel, because the magnetization domains can have only two stable phases [3]. Other examples include digital communication channels where the input alphabet is confined to a finite set [4]. The computation of the capacity of discrete-time ISI channels with a finite number of ... |

76 | Analysis of random processes via and-or tree evaluation - Luby, Mitzenmacher, et al. - 1998 |

75 | On the Information Rate of Binary-Input Channels with Memory - Arnold, Loeliger - 2001 |

58 | Optimum Soft-Output Detection for Channels With Intersymbol Interference - Li, Vucetic, et al. - 1995 |

45 | On the achievable information rates of finite state ISI channels
- Pfister, Soriaga, et al.
- 2001
Citation Context: ...(also known as the Baum–Welch algorithm, the sum–product algorithm, or the forward–backward algorithm) has been proposed by Arnold and Loeliger [9], and independently by Pfister, Soriaga, and Siegel [10]. The same procedure can be used to numerically evaluate the i.i.d. capacity, which is defined as the maximal achievable information rate when the inputs are independent and identically distributed. T... |

41 |
The Intersymbol Interference Channel: Lower Bounds on Capacity and Channel Precoding Loss
- Shamai, Laroia
- 1996
Citation Context: ...mputation of the capacity of discrete-time ISI channels with a finite number of allowed signaling levels is an open problem. In the past, the strategy has been to obtain numeric [5] and analytic [6], [7] bounds on the capacity. Very often authors have concentrated on obtaining bounds on the achievable information rate when the inputs are independent and uniformly distributed (i.u.d.)—the so-called sy... |

40 |
On the capacity of Markov sources over noisy channels
- Kavcic
Citation Context: ...the first (arbitrarily close in the probability- sense) approximation to the exact result involving the channel capacity of a discrete-time ISI channel with binary inputs. Also, recently, tight lower [11] and upper [12], [13] bounds have been computed using Monte Carlo methods for Markov channel inputs. The remaining issue is to devise codes that will achieve the capacity (or at least the i.i.d. capac... |

40 | Maximum-likelihood sequence estimation of digital sequences in the presence of intersymbol interference - Forney, Jr. - 1972 |

36 |
Information Rates for a Discrete-Time Gaussian Channel with Intersymbol Interference and Stationary Inputs
- Shamai, Ozarow, et al.
- 1991
Citation Context: ...he computation of the capacity of discrete-time ISI channels with a finite number of allowed signaling levels is an open problem. In the past, the strategy has been to obtain numeric [5] and analytic [6], [7] bounds on the capacity. Very often authors have concentrated on obtaining bounds on the achievable information rate when the inputs are independent and uniformly distributed (i.u.d.)—the so-call... |

26 |
An upper bound on the capacity of channels with memory and constraint input
- Vontobel, Arnold
- 2001
Citation Context: ...trarily close in the probability- sense) approximation to the exact result involving the channel capacity of a discrete-time ISI channel with binary inputs. Also, recently, tight lower [11] and upper [12], [13] bounds have been computed using Monte Carlo methods for Markov channel inputs. The remaining issue is to devise codes that will achieve the capacity (or at least the i.i.d. capacity). The abili... |

26 | On the Equivalence Between SOVA and Max-Log-MAP Decoding - Fossorier, Burkert, et al. - 1998 |

24 | Design of capacity-approaching low-density parity-check codes - Richardson, Shokrollahi, et al. - 2001 |

22 |
Algorithms for continuous decoding of turbo codes
- Benedetto, Divsalar, et al.
- 1996
Citation Context: ...ch that the vector obtained by successive multiplication from the left all have the property that the sum of their elements equal to [8]. For other implementations of the windowed BCJR algorithm, see [46]–[48]. |

19 | Thresholds for turbo codes - Richardson, Urbanke - 2000 |

18 | Matched spectral-null codes for partial-response channels
- Karabed, Siegel
- 1991
Citation Context: ...our belief that the strict inequality must hold because binary linear codes cannot achieve spectral shaping required to match the spectral nulls of the code to the spectral nulls of the channel (see [43] for matched spectral null codes), but we cannot back up this statement with a proof. Further, we know (at least for some ISI channels) that . An example can be constructed by concatenating an outer r... |

16 |
Capacity and Information Rates of Discrete-Time Channels with Memory
- Hirt
- 1988
Citation Context: ...finite set [4]. The computation of the capacity of discrete-time ISI channels with a finite number of allowed signaling levels is an open problem. In the past, the strategy has been to obtain numeric [5] and analytic [6], [7] bounds on the capacity. Very often authors have concentrated on obtaining bounds on the achievable information rate when the inputs are independent and uniformly distributed (i.... |

14 | The compound channel capacity of a class of finite-state channels
- Lapidoth, Telatar
- 1998
Citation Context: ...enerate the codebook, where codewords are chosen independently at random and the coded symbols are governed by the optimal input distribution. For a generic finite-state channel, see [1, Sec. 5.9] or [42] for a detailed description of the problem and the proof of the coding theorem. For the channel in (1) with binary inputs, we present a somewhat stronger result involving linear codes... |

13 |
Low density parity check codes for magnetic recording
- Fan, Friedmann, et al.
- 1999
Citation Context: ...III, we must adopt a message-passing schedule because the schedule affects the structure of the message-flow neighborhood defined in Section III. Here, we describe the scheduling choice presented in [38] often referred to as turbo equalization [37] due to the resemblance to turbo decoding [22]. Trellis-to-Variable Messages: Assume that the received vector is . In the th round of the algorithm, we com... |

11 | Improved Low Density Parity Check Codes Using Irregular Graphs and Belief Propagation
- Luby, Mitzenmacher, et al.
- 2001
Citation Context: ...acKay and Neal [20], [21] showed that there exist good Gallager codes with performances about 0.5 dB worse than turbo codes [22]. A major breakthrough was the construction of irregular Gallager codes [23], and the development of a method to analyze them for erasure channels [14], [24]. These methods were adapted to memoryless channels with continuous output alphabets (e.g., AWGN channels, Laplace chan... |

7 |
Markov sources achieve the feedback capacity of finite-state machine channels
- Yang, Kavčić
Citation Context: ...y close in the probability- sense) approximation to the exact result involving the channel capacity of a discrete-time ISI channel with binary inputs. Also, recently, tight lower [11] and upper [12], [13] bounds have been computed using Monte Carlo methods for Markov channel inputs. The remaining issue is to devise codes that will achieve the capacity (or at least the i.i.d. capacity). The ability to ... |

6 |
Low-complexity iterative decoding with decision-aided equalization for magnetic recording channels
- Wu, Cioffi
- 2001
Citation Context: ...n thresholds and to the Shamai–Laroia conjectured bound. C. The BCJR-Once Bound Due to the high computational complexity of the BCJR algorithm, several authors suggest applying the BCJR step only once [33], [45] and subsequently iterating the message-passing decoding algorithm only within the code subgraph of the joint channel/code graph (see Fig. 3). Clearly, this strategy is suboptimal to fully itera... |

6 |
Iterative correction of ISI via equalization and decoding with priors
- Tuchler, Koetter, et al.
- 2000
Citation Context: ...geably to describe either of the two vectors. C. Sum–Product Decoding by Message Passing In the literature, several methods exist for soft detection of symbols transmitted over ISI channels [8], [30]–[34]. There also exist several message-passing algorithms that decode codes on graphs [16], [17], [23], [25]. Here, we will adopt the algorithm... |

6 |
Optimized LDPC codes for partial response channels
- Varnica, Kavčić
- 2000
Citation Context: ...nnels are a necessity [3]. The codes studied in this paper do not provide tight bounds in the low-rate region, but the threshold bounds can be tightened by optimizing the degree polynomials and , see [44]. In this paper, we present thresholds only for regular Gallager codes in the family , where and is allowed to vary in order to get a variable code rate . This family of codes provides a curve versus... |

5 |
Turbo codes for PR4: Parallel versus serial concatenation
- Souvignier, Friedman, et al.
Citation Context: ...sholds and to the Shamai–Laroia conjectured bound. C. The BCJR-Once Bound Due to the high computational complexity of the BCJR algorithm, several authors suggest applying the BCJR step only once [33], [45] and subsequently iterating the message-passing decoding algorithm only within the code subgraph of the joint channel/code graph (see Fig. 3). Clearly, this strategy is suboptimal to fully iterating b... |

4 | The minimum description length principle for modeling recording channels - Kavčić, Srinivasan - 2001 |

2 |
Novel algorithm for continuous decoding of turbo codes
- Bai, Ma, et al.
- 1999
Citation Context: ...at the vector obtained by successive multiplication from the left all have the property that the sum of their elements equal to [8]. For other implementations of the windowed BCJR algorithm, see [46]–[48]. |