## Simulation-based computation of information rates for channels with memory (2006)

Venue: IEEE Trans. Inform. Theory

Citations: 56 (11 self)

### BibTeX

@ARTICLE{Arnold06simulation-basedcomputation,
  author  = {Dieter M. Arnold and Hans-Andrea Loeliger and Pascal O. Vontobel and Aleksandar Kavčić and Wei Zeng},
  title   = {Simulation-based computation of information rates for channels with memory},
  journal = {IEEE Trans. Inform. Theory},
  year    = {2006},
  volume  = {52},
  number  = {8},
  pages   = {3498--3508}
}

### Abstract

The information rate of finite-state source/channel models can be accurately estimated by sampling both a long channel input sequence and the corresponding channel output sequence, followed by a forward sum–product recursion on the joint source/channel trellis. This method is extended to compute upper and lower bounds on the information rate of very general channels with memory by means of finite-state approximations. Further upper and lower bounds can be computed by reduced-state methods.
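
The procedure in the abstract can be sketched concretely. The following is a minimal Python sketch, not the authors' code: it assumes a simple binary-input FIR channel with AWGN (the dicode response 1 − D, in the spirit of the paper's Example 1), samples a long i.u.d. input and the corresponding output, runs the forward sum–product (BCJR forward) recursion on the trellis with per-step normalization, and estimates the information rate as ĥ(Y) − h(Y|X). The channel, tap values, noise level, and sequence length are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed example channel: y_k = x_k - x_{k-1} + n_k (dicode, 1 - D),
# binary inputs x_k in {-1, +1}, AWGN with standard deviation sigma.
g = np.array([1.0, -1.0])   # FIR taps; g[0] multiplies the current input
sigma = 0.8
n = 50_000                  # "very long" sample; longer gives a tighter estimate

M = len(g) - 1              # channel memory
S = 1 << M                  # trellis states = last M input bits

# 1) Sample a long i.u.d. input sequence and the channel output sequence.
bits = rng.integers(0, 2, n)
x = 2 * bits - 1
y = np.convolve(x, g)[:n] + sigma * rng.normal(size=n)

# 2) Forward sum-product recursion: alpha[s] is proportional to
#    p(y_1^k, s_k = s).  Normalizing each step and accumulating log2 of the
#    normalizers yields log2 p(y_1^n) without numerical underflow.
#    (The uniform initial state is a harmless boundary mismatch for large n.)
norm = 1.0 / (np.sqrt(2 * np.pi) * sigma)
alpha = np.full(S, 1.0 / S)
log_p = 0.0
for k in range(n):
    new = np.zeros(S)
    for s in range(S):
        past = [2.0 * ((s >> i) & 1) - 1.0 for i in range(M)]  # x_{k-1}, ...
        for b in (0, 1):                                       # input branch
            mean = g[0] * (2 * b - 1) + sum(
                g[i + 1] * past[i] for i in range(M))
            lik = norm * np.exp(-(y[k] - mean) ** 2 / (2 * sigma ** 2))
            new[((s << 1) | b) & (S - 1)] += 0.5 * alpha[s] * lik
    z = new.sum()
    log_p += np.log2(z)
    alpha = new / z

# 3) Information-rate estimate: I = h(Y) - h(Y|X), where h(Y|X) is known
#    analytically for an AWGN channel.
h_y = -log_p / n
h_y_given_x = 0.5 * np.log2(2 * np.pi * np.e * sigma ** 2)
info_rate = h_y - h_y_given_x
print(f"estimated i.u.d. information rate: {info_rate:.3f} bit/use")
```

For binary inputs the estimate must land strictly between 0 and 1 bit per channel use; increasing `n` shrinks the Monte Carlo fluctuation of `h_y`.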

### Citations

9126 | Elements of Information Theory - Cover, Thomas - 1991
Citation Context: ...o the source, the computation (17) needs only the source model rather than the joint source/channel model. In this case, if (6) holds, can be computed in closed form as the entropy of a Markov source [12]. Define the state metric . By straightforward application of the sum–product algorithm [26], we recursively compute the messages (state metrics) (12) IV. NUMERICAL EXAMPLES We will focus here on chan...
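
The closed-form computation mentioned in this excerpt, the entropy rate of a stationary Markov source, is H = −Σᵢ μᵢ Σⱼ Pᵢⱼ log₂ Pᵢⱼ, where μ is the stationary distribution of the transition matrix P. A small sketch with an assumed two-state transition matrix:

```python
import numpy as np

# Assumed two-state Markov source with transition matrix P.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# i.e. the eigenvector of P.T with the largest eigenvalue, normalized.
w, v = np.linalg.eig(P.T)
mu = np.real(v[:, np.argmax(np.real(w))])
mu /= mu.sum()

# Entropy rate in bits per symbol.
H = -sum(mu[i] * P[i, j] * np.log2(P[i, j])
         for i in range(2) for j in range(2))
```

For this matrix μ = (0.8, 0.2), so H = 0.8·H(0.9, 0.1) + 0.2·H(0.4, 0.6) ≈ 0.569 bits per symbol.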

1276 | Factor graphs and the sum-product algorithm - Kschischang, Frey, et al.
Citation Context: ...for that case. The factorization (3) is expressed by the factor graph of Fig. 1. (This graph is a Forney-style factor graph, see [16], [28]; add a circle on each branch to obtain a factor graph as in [26].) Example 1 (Channel With Binary-Input Finite Impulse Response (FIR) Filter and With AWGN): Let with fixed real coefficients , with taking values in , and where is white Gaussian noise. If is Markov ...

1274 | Optimal decoding of linear codes for minimizing symbol error rate - Bahl, Cocke, et al. - 1974
Citation Context: ...gh the factor graph of (3), as illustrated in Fig. 3. Since the graph represents a trellis, this computation is just the forward sum–product recursion of the Bahl–Cocke–Jelinek–Raviv (BCJR) algorithm [8]. Consider, for example, the computation of (11) for , as illustrated in Fig. 3. The desired quantity (11) is then obtained as (14) the sum of all final state metrics. For large , the state metrics co...

192 | Computation of channel capacity and rate-distortion functions - Blahut - 1972
Citation Context: ...The proof is straightforward. Let be the right-hand side of (27). Then (35) where the sum in (34) should be read as running over the support of . This bound is implicit in the classic papers by Blahut [10] and Arimoto [1]. Moreover, it may also be obtained as a special case of a bound due to Fischer [15] on mismatched decoding, which in turn is a special case of a general result by Ganti et al. [17, eq...

184 | Hidden Markov processes - Ephraim, Merhav - 2002

128 | An Introduction to Factor Graphs - Loeliger - 2004
Citation Context: ...t, but all results of this paper are easily reformulated to hold for that case. The factorization (3) is expressed by the factor graph of Fig. 1. (This graph is a Forney-style factor graph, see [16], [28]; add a circle on each branch to obtain a factor graph as in [26].) Example 1 (Channel With Binary-Input Finite Impulse Response (FIR) Filter and With AWGN): Let with fixed real coefficients , with ta...

127 | Codes on graphs: normal realizations - Forney - 2001
Citation Context: ...lphabet, but all results of this paper are easily reformulated to hold for that case. The factorization (3) is expressed by the factor graph of Fig. 1. (This graph is a Forney-style factor graph, see [16], [28]; add a circle on each branch to obtain a factor graph as in [26].) Example 1 (Channel With Binary-Input Finite Impulse Response (FIR) Filter and With AWGN): Let with fixed real coefficients , w...

120 | A Probabilistic Distance Measure for Hidden Markov Models - Juang, Rabiner - 1985
Citation Context: ...) is of the same form. 3) In addition to (4), we also have for all , , . Quantities very similar to (43) and (44) seem to have been computed by essentially the same algorithm as far back as 1985, cf. [25]. VII. NUMERICAL EXAMPLES FOR THE BOUNDS We illustrate the methods of Sections V-C and VI by some numerical examples. As in Section IV, we focus on channels as in Example 1 (and we will use the same d...

91 | Maximum-likelihood estimation for hidden Markov models - Leroux - 1992
Citation Context: ...ry state process satisfy (3) Fig. 1. The factor graph of (3). Fig. 2. Finite-state machine describing a run-length constraint. for all , , , in order to guarantee the existence of certain limits, cf. [27]. This condition formally excludes a finite channel output alphabet, but all results of this paper are easily reformulated to hold for that case. The factorization (3) is expressed by the factor graph...

82 | Capacity and coding for the Gilbert–Elliott channels - Mushkin, Bar-David - 1989
Citation Context: ...channel and derived various closed-form bounds on the capacity and on the i.u.d. information rate as well as a lower bound conjecture. The Gilbert–Elliott channel was analyzed by Mushkin and Bar-David [29]. Goldsmith and Varaiya extended that work to general channels with a freely evolving state [18] (cf. Example 2); they gave expressions for the channel capacity and the information rate as well as rec...

78 | An algorithm for computing the capacity of arbitrary discrete memoryless channels - Arimoto - 1972
Citation Context: ...ightforward. Let be the right-hand side of (27). Then (35) where the sum in (34) should be read as running over the support of . This bound is implicit in the classic papers by Blahut [10] and Arimoto [1]. Moreover, it may also be obtained as a special case of a bound due to Fischer [15] on mismatched decoding, which in turn is a special case of a general result by Ganti et al. [17, eq. (12) for ]. It...

75 | On the information rate of binary-input channels with memory - Arnold, Loeliger - 2001
Citation Context: ...nels could not be computed accurately enough for most engineering purposes except for the Gilbert–Elliott channel and its generalizations. The first and main result of our own work (first reported in [3]) is a practical algorithm to compute information rates for general finite-state source/channel models (to be defined in Section II). This algorithm was independently discovered also by Sharma and Sin...

58 | Capacity, mutual information, and coding for finite-state Markov channels - Goldsmith, Varaiya, et al. - 1996
Citation Context: ... as well as a lower bound conjecture. The Gilbert–Elliott channel was analyzed by Mushkin and Bar-David [29]. Goldsmith and Varaiya extended that work to general channels with a freely evolving state [18] (cf. Example 2); they gave expressions for the channel capacity and the information rate as well as recursive methods for their evaluation. Zehavi and Wolf studied the binary symmetric channel with r...

46 | Entropy and channel capacity in the regenerative setup with applications to Markov channels - Sharma, Singh - 2001
Citation Context: ...s a practical algorithm to compute information rates for general finite-state source/channel models (to be defined in Section II). This algorithm was independently discovered also by Sharma and Singh [38] and by Pfister et al. [32]. We will review this algorithm in Section III. Since the original submission of this paper, this algorithm has been used and extended in various ways. For example, Zhang et...

41 | Probabilistic functions of finite state Markov chains - Petrie - 1969
Citation Context: ...differential entropy rate , and converges with probability to , cf. [9], [27], and [14, Sec. IV-D]. The corresponding results for the case of a finite channel output alphabet are contained already in [31]. III. COMPUTING FOR FINITE-STATE CHANNELS From the remarks above, an obvious algorithm for the numerical computation of is as follows: 1) Sample two “very long” sequences and . (The meaning of “very ...

41 | The intersymbol interference channel: Lower bounds on capacity and channel precoding loss - Shamai, Laroia - 1996
Citation Context: ...linear ISI channel was investigated by Hirt [21], who proposed a Monte Carlo method to evaluate certain quantities closely related to the i.u.d. information rate (cf. Section IV). Shamai et al. [36], [37] also investigated the ISI channel and derived various closed-form bounds on the capacity and on the i.u.d. information rate as well as a lower bound conjecture. The Gilbert–Elliott channel was analyze...

41 | The feedback capacity of finite-state machine channels - Yang, Kavčić, et al. - 2005
Citation Context: ...rk on optimizing the process over finite-state hidden-Markov sources (cf. [24]) will be reported in a separate paper [43]. Computational upper bounds on the channel capacity were proposed in [42] and [45]. We will use the notation and . II. FINITE-STATE SOURCE/CHANNEL MODELS In this section, we will assume that the channel input process , the channel output process , and some auxiliary state process s...

40 | On the capacity of Markov sources over noisy channels - Kavčić
Citation Context: ...ssume that the channel input process is given; in the numerical examples, we will often assume it to be i.u.d. Our parallel work on optimizing the process over finite-state hidden-Markov sources (cf. [24]) will be reported in a separate paper [43]. Computational upper bounds on the channel capacity were proposed in [42] and [45]. We will use the notation and . II. FINITE-STATE SOURCE/CHANNEL MODELS In...

38 | The strong ergodic theorem for densities: Generalized Shannon–McMillan–Breiman theorem - Barron - 1985
Citation Context: ... (1) exists. Moreover, the sequence converges with probability to the entropy rate , the sequence converges with probability to the differential entropy rate , and converges with probability to , cf. [9], [27], and [14, Sec. IV-D]. The corresponding results for the case of a finite channel output alphabet are contained already in [31]. III. COMPUTING FOR FINITE-STATE CHANNELS From the remarks above, ...

37 | Information rates for a discrete-time Gaussian channel with intersymbol interference and stationary inputs - Ozarow, Wyner - 1991
Citation Context: ...input linear ISI channel was investigated by Hirt [21], who proposed a Monte Carlo method to evaluate certain quantities closely related to the i.u.d. information rate (cf. Section IV). Shamai et al. [36], [37] also investigated the ISI channel and derived various closed-form bounds on the capacity and on the i.u.d. information rate as well as a lower bound conjecture. The Gilbert–Elliott channel was a...

26 | An upper bound on the capacity of channels with memory and constraint input - Vontobel, Arnold - 2001
Citation Context: ...2], [23]; the latter explore, in particular, the relation to Lyapunov exponents of the product of random matrices. Further related work by the authors of the present paper (not covered here) includes [42], [13], [47]; see also [43] and [5]. In this paper, after describing the basic algorithm, we extend the method to very general (non-finite-state) channels with memory. In Section V-C and Appendix III,...

23 | On runlength codes - Zehavi, Wolf - 1988
Citation Context: ...ave expressions for the channel capacity and the information rate as well as recursive methods for their evaluation. Zehavi and Wolf studied the binary symmetric channel with run-length limited input [46]; they derived a set of lower bounds for Markovian input and demonstrated some numerical results. Both the binary symmetric channel and the Gaussian channel with run-length limited binary input were s...

22 | Mismatched Decoding Revisited: General Alphabets, Channels with Memory, and the Wide-Band Limit - Ganti, Lapidoth, et al. - 2000
Citation Context: ...ecial case of a bound due to Fischer [15] on mismatched decoding, which in turn is a special case of a general result by Ganti et al. [17, eq. (12) for ]. It then follows from the results in [15] and [17] that the lower bound is achievable by a maximum-likelihood decoder for the auxiliary channel. A simple proof of (34) goes as follows. Let be the right-hand side of (34) and for satisfying (which by t...

21 | On the achievable information rates of finite-state ISI channels - Pfister, Soriaga, et al. - 2001
Citation Context: ...compute information rates for general finite-state source/channel models (to be defined in Section II). This algorithm was independently discovered also by Sharma and Singh [38] and by Pfister et al. [32]. We will review this algorithm in Section III. Since the original submission of this paper, this algorithm has been used and extended in various ways. For example, Zhang et al. investigate informatio...

16 | Capacity and Information Rates of Discrete-Time Channels with Memory - Hirt - 1988
Citation Context: ...) channels, ii) generalizations of the Gilbert–Elliott channel, and iii) channels with constrained input (cf. the examples in Section II). The binary-input linear ISI channel was investigated by Hirt [21], who proposed a Monte Carlo method to evaluate certain quantities closely related to the i.u.d. information rate (cf. Section IV). Shamai et al. [36], [37] also investigated the ISI channel and deriv...

16 | A generalization of the Blahut–Arimoto algorithm to finite-state channels - Vontobel, Kavčić, et al. - 2008
Citation Context: ...e, in particular, the relation to Lyapunov exponents of the product of random matrices. Further related work by the authors of the present paper (not covered here) includes [42], [13], [47]; see also [43] and [5]. In this paper, after describing the basic algorithm, we extend the method to very general (non-finite-state) channels with memory. In Section V-C and Appendix III, we demonstrate the use of ...

14 | An information theoretical identity and a problem involving capacity - Topsøe - 1967
Citation Context: ... Upper Bound): (27) (28) where the sum in (27) should be read as running over the support of . Equality holds in (27) if and only if for all . This bound appears to have been observed first by Topsøe [41]. The proof is straightforward. Let be the right-hand side of (27). Then (35) where the sum in (34) should be read as running over the support of . This bound is implicit in the classic papers by Blahu...

10 | Computation of information rates from finite-state source/channel models - Arnold, Loeliger, et al. - 2002

10 | On the symmetric information rate of two-dimensional finite-state ISI channels - Cheng, Siegel - 2006
Citation Context: ... multiple-output (MIMO) channels with ISI [49]; magnetic recording is also considered by Ryan et al. [34] as well as by Pighi et al. [33]. Two-dimensional ISI channels are considered by Siegel et al. [11], [40] and by Shental et al. [39]. Related analytical results were presented by Sharma and Singh [38] as well as by Holliday et al. [22], [23]; the latter explore, in particular, the relation to Lyapu...

8 | On the capacity of binary and Gaussian channels with run-length limited inputs - Shamai, Kofman - 1990
Citation Context: .... Both the binary symmetric channel and the Gaussian channel with run-length limited binary input were studied by Shamai and Kofman, who obtained upper and lower bounds on the i.u.d. information rate [35]. A related topic is the continuous-time additive white Gaussian noise (AWGN) channel with peak-amplitude-constrained input, which was addressed by Heegard et al. [19], [20]. Despite all this work, in...

7 | Optimal code rates for the Lorentzian channel: Shannon codes and LDPC codes - Ryan, Wang, et al. - 2004
Citation Context: ...channels [48] and of fading multiple-input multiple-output (MIMO) channels with ISI [49]; magnetic recording is also considered by Ryan et al. [34] as well as by Pighi et al. [33]. Two-dimensional ISI channels are considered by Siegel et al. [11], [40] and by Shental et al. [39]. Related analytical results were presented by Sharma and Singh [38]...

6 | The binary jitter channel: a new model for magnetic recording - Arnold, Kavčić, et al. - 2000
Citation Context: ...messages (state metrics) (12) IV. NUMERICAL EXAMPLES We will focus here on channels as in Example 1. Further numerical examples (including channels as in Example 3 as well as the nonlinear channel of [2]) are given in [5] and [43]. The filter coefficients in Example 1 are often compactly represented by the formal sum (13)...

6 | Some remarks on the role of inaccuracy in Shannon’s theory of information transmission - Fischer - 1978
Citation Context: ...uld be read as running over the support of . This bound is implicit in the classic papers by Blahut [10] and Arimoto [1]. Moreover, it may also be obtained as a special case of a bound due to Fischer [15] on mismatched decoding, which in turn is a special case of a general result by Ganti et al. [17, eq. (12) for ]. It then follows from the results in [15] and [17] that the lower bound is achievable b...

6 | On the achievable information rates of finite-state input two-dimensional channels with memory - Shental, Shental, et al. - 2005
Citation Context: ...with ISI [49]; magnetic recording is also considered by Ryan et al. [34] as well as by Pighi et al. [33]. Two-dimensional ISI channels are considered by Siegel et al. [11], [40] and by Shental et al. [39]. Related analytical results were presented by Sharma and Singh [38] as well as by Holliday et al. [22], [23]; the latter explore, in particular, the relation to Lyapunov exponents of the product of r...

6 | Information rates of binary-input intersymbol interference channels with signal-dependent media noise - Zhang, Duman, Kurtas
Citation Context: ...t al. investigate information rates both of magnetic recording channels [48] and of fading multiple-input multiple-output (MIMO) channels with ISI [49]; magnetic recording is also considered by Ryan et al. [34] as well as by Pighi et al. [33]. Two-dimensional ISI channels are...

5 | Computing Information Rates of Finite-State Models with Application to Magnetic Recording - Arnold - 2002
Citation Context: ...ticular, the relation to Lyapunov exponents of the product of random matrices. Further related work by the authors of the present paper (not covered here) includes [42], [13], [47]; see also [43] and [5]. In this paper, after describing the basic algorithm, we extend the method to very general (non-finite-state) channels with memory. In Section V-C and Appendix III, we demonstrate the use of reduced-...

5 | Simulation-based computation of information rates: upper and lower bounds - Arnold, Kavčić, et al.

5 | On the capacity and normalization of ISI channels - Xiang, Pietrobon - 2003
Citation Context: ... ratio (SNR) will be defined as (19) (It is clear that this SNR definition is inadequate for some applications, but this qualification seems to apply also to alternative definitions including that of [44].) For channels, as in Example 1, is known analytically, which means that the algorithm of Section III is only needed to compute . In all numerical examples reported in this paper, the sequence length...

5 | Entropy and mutual information for Markov channels with general inputs - Goldsmith, Holliday, et al. - 2002
Citation Context: ...]. Two-dimensional ISI channels are considered by Siegel et al. [11], [40] and by Shental et al. [39]. Related analytical results were presented by Sharma and Singh [38] as well as by Holliday et al. [22], [23]; the latter explore, in particular, the relation to Lyapunov exponents of the product of random matrices. Further related work by the authors of the present paper (not covered here) includes [4...

5 | Information rates of multidimensional front-ends for digital storage channels with data-dependent transition noise - Pighi, Raheli, et al. - 2006
Citation Context: ...channels [48] and of fading multiple-input multiple-output (MIMO) channels with ISI [49]; magnetic recording is also considered by Ryan et al. [34] as well as by Pighi et al. [33]. Two-dimensional ISI channels are considered by Siegel et al. [11], [40] and by Shental et al. [39]. Related analytical results were presented by Sharma and Singh [38] as well as by Holliday et al. [...

4 | Computation of information rates by particle methods - Dauwels, Loeliger - 2004
Citation Context: ...3]; the latter explore, in particular, the relation to Lyapunov exponents of the product of random matrices. Further related work by the authors of the present paper (not covered here) includes [42], [13], [47]; see also [43] and [5]. In this paper, after describing the basic algorithm, we extend the method to very general (non-finite-state) channels with memory. In Section V-C and Appendix III, we de...

4 | On achievable rates of multistage decoding on two-dimensional ISI channels - Soriaga, Siegel, et al. - 2005
Citation Context: ...ple-output (MIMO) channels with ISI [49]; magnetic recording is also considered by Ryan et al. [34] as well as by Pighi et al. [33]. Two-dimensional ISI channels are considered by Siegel et al. [11], [40] and by Shental et al. [39]. Related analytical results were presented by Sharma and Singh [38] as well as by Holliday et al. [22], [23]; the latter explore, in particular, the relation to Lyapunov ex...

3 | On the capacity of the noisy runlength channel - Heegard, Duel-Hallen, et al. - 1991
Citation Context: ...s on the i.u.d. information rate [35]. A related topic is the continuous-time additive white Gaussian noise (AWGN) channel with peak-amplitude-constrained input, which was addressed by Heegard et al. [19], [20]. Despite all this work, information rates of such channels could not be computed accurately enough for most engineering purposes except for the Gilbert–Elliott channel and its generalizations. ...

3 | On entropy and Lyapunov exponents for finite-state channels - Holliday, Glynn, et al.

2 | On finite-state information rates from channel simulations - Arnold, Loeliger

2 | Capacity of finite-state channels based on Lyapunov exponents of random matrices - Holliday, Goldsmith, et al. - 2006
Citation Context: ...-dimensional ISI channels are considered by Siegel et al. [11], [40] and by Shental et al. [39]. Related analytical results were presented by Sharma and Singh [38] as well as by Holliday et al. [22], [23]; the latter explore, in particular, the relation to Lyapunov exponents of the product of random matrices. Further related work by the authors of the present paper (not covered here) includes [42], [1...

2 | Bounds on mutual information rates of noisy channels with timing errors - Zeng, Tokas, et al. - 2005
Citation Context: ...e latter explore, in particular, the relation to Lyapunov exponents of the product of random matrices. Further related work by the authors of the present paper (not covered here) includes [42], [13], [47]; see also [43] and [5]. In this paper, after describing the basic algorithm, we extend the method to very general (non-finite-state) channels with memory. In Section V-C and Appendix III, we demonstr...

2 | Achievable information rates and coding for MIMO systems over ISI channels and frequency-selective fading channels - 2004
Citation Context: ...channels [48] and of fading multiple-input multiple-output (MIMO) channels with ISI [49]; magnetic recording is also considered by Ryan et al. [34] as well as by Pighi et al. [33]. Two-dimensional ISI channels are considered by Siegel et al. [11], [40] and by Shental et al. [39]. Related...