## Capacity, mutual information, and coding for finite-state Markov channels (1996)

### Download Links

- [wsl.stanford.edu]
- [www.path.berkeley.edu]
- DBLP

Venue: IEEE Trans. Inform. Theory

Citations: 15 (2 self)

### BibTeX

```
@ARTICLE{Goldsmith96capacity,
  author  = {Andrea J. Goldsmith and Pravin P. Varaiya},
  title   = {Capacity, mutual information, and coding for finite-state Markov channels},
  journal = {IEEE Trans. Inform. Theory},
  year    = {1996},
  pages   = {868--886}
}
```

### Abstract

The finite-state Markov channel (FSMC) is a discrete time-varying channel whose variation is determined by a finite-state Markov process. These channels have memory due to the Markov channel variation. We obtain the FSMC capacity as a function of the conditional channel state probability. We also show that for i.i.d. channel inputs, this conditional probability converges weakly, and the channel's mutual information is then a closed-form continuous function of the input distribution. We next consider coding for FSMCs. In general, the complexity of maximum-likelihood decoding grows exponentially with the channel memory length; in practice, therefore, interleaving and memoryless channel codes are used, at some performance loss relative to the inherent capacity of channels with memory. We propose a maximum-likelihood decision-feedback decoder whose complexity is independent of the channel memory. We calculate the capacity and cutoff rate of our technique, and show that it preserves the capacity of certain FSMCs. We also compare the performance of the decision-feedback decoder with that of interleaving and memoryless channel coding on a fading channel with 4PSK modulation.
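As an illustration of the channel model the abstract describes, here is a minimal simulation of a two-state FSMC (the Gilbert-Elliott special case, in which each Markov state is a binary symmetric channel with its own crossover probability). The state labels, transition probability `p_stay`, and crossover values are arbitrary choices for the sketch, not parameters from the paper.

```python
import random

def simulate_fsmc(bits, p_stay=0.95, eps=(0.01, 0.3), seed=0):
    """Pass input bits through a two-state FSMC (Gilbert-Elliott style).

    state 0 = 'good' BSC, state 1 = 'bad' BSC; the state evolves as a
    Markov chain that stays in place with probability p_stay.
    """
    rng = random.Random(seed)
    state = 0
    out = []
    for b in bits:
        if rng.random() > p_stay:          # Markov state transition
            state = 1 - state
        flip = rng.random() < eps[state]   # BSC crossover in current state
        out.append(b ^ flip)
    return out

bits = [0] * 10000
received = simulate_fsmc(bits)
error_rate = sum(received) / len(received)
```

Because the state chain is sticky, errors arrive in bursts; this burstiness is exactly the channel memory that interleaving destroys and that the paper's decision-feedback decoder exploits.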

### Citations

9134 |
Elements of Information Theory
- Cover, Thomas
- 1991
(Show Context)
Citation Context ...stributions on $X^n$. The mutual information can be written as $I(X^n; Y^n) = H(Y^n) - H(Y^n \mid X^n)$ (18), where $H(Y) = E[-\log p(y)]$ and $H(Y \mid X) = E[-\log p(y \mid x)]$. It is easily shown [14] that $H(Y^n) = \sum_{i=1}^{n} H(Y_i \mid Y^{i-1})$ (19) and $H(Y^n \mid X^n) = \sum_{i=1}^{n} H(Y_i \mid X_i, Y^{i-1}, X^{i-1})$ (20). The following lemma, proved in Appendix 3, allows the mutual information to be ... |
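The chain-rule identities quoted in this context can be checked numerically on any small joint distribution. The sketch below (a toy two-variable example of my own, not from the paper) verifies $H(Y_1, Y_2) = H(Y_1) + H(Y_2 \mid Y_1)$.

```python
from math import log2

# Toy joint distribution p(y1, y2) over {0,1}^2 (arbitrary, sums to 1).
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

def H(dist):
    """Shannon entropy in bits of a distribution given as {outcome: prob}."""
    return -sum(q * log2(q) for q in dist.values() if q > 0)

# Marginal p(y1) and the conditional entropy H(Y2 | Y1).
p1 = {a: sum(p[(a, b)] for b in (0, 1)) for a in (0, 1)}
H_y2_given_y1 = sum(
    p1[a] * H({b: p[(a, b)] / p1[a] for b in (0, 1)}) for a in (0, 1)
)

joint_entropy = H(p)            # H(Y1, Y2) directly
chain_rule = H(p1) + H_y2_given_y1  # H(Y1) + H(Y2 | Y1)
```

The two quantities agree to floating-point precision, which is the two-symbol instance of identity (19) above.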

3432 |
of probability measures
- Billingsley
- 1968
(Show Context)
Citation Context ...ion when the channel inputs are i.i.d. By definition, the Markov chain $S_n$ is aperiodic and irreducible over a finite state space, so the effect of its initial state dies away exponentially with time [12]. Thus, the FSMC is an indecomposable channel. (Footnote: note that $B(y_n)$ has an implicit dependence on the distribution of $x_n$.) The capacity of an indecomposable channel is independent of its initial state, a... |
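The "initial state dies away exponentially" property cited here can be seen by raising an aperiodic, irreducible transition matrix to a high power: all rows converge to the stationary distribution, so the chain forgets where it started. The two-state matrix below is an arbitrary example, not one from the paper.

```python
# Two-state aperiodic, irreducible transition matrix (rows sum to 1).
P = [[0.9, 0.1],
     [0.2, 0.8]]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Pn = P
for _ in range(200):   # compute P^201, far past the mixing time
    Pn = matmul(Pn, P)

# Both rows of P^n converge to the stationary distribution, here (2/3, 1/3),
# so the n-step state distribution no longer depends on the initial state.
row_gap = max(abs(Pn[0][j] - Pn[1][j]) for j in range(2))
```

The gap between rows shrinks like the second-largest eigenvalue to the n-th power (here 0.7^n), which is the exponential forgetting the context refers to.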

1553 | Information theory and reliable communication - Gallager - 1968 |

913 |
Information Theory: Coding Theorems for Discrete Memoryless Systems
- Csiszár, Körner
- 1981
(Show Context)
Citation Context ...s of the corresponding channel capacity [2]. On the other hand, with no information about the channel state or its transition structure, capacity is reduced to that of the Arbitrarily Varying Channel [3]. We consider the intermediate case, where the channel transition structure of the FSMC is known. The memory of the FSMC comes from the dependence of the current channel state on past inputs and outpu... |

116 |
Delayed decision-feedback sequence estimation
- Duel-Hallen, Heegard
- 1989
(Show Context)
Citation Context ...lel path corresponds to a different estimate of the received symbol. The number of parallel paths will grow exponentially in this case; however, we may be able to apply some of the methods outlined in [19] and [20] to reduce the number of paths sustained through the trellis. VII Two-State Variable Noise Channel We now compute the capacity and cutoff rates of a two-state Q-AWN channel with variable SNR,... |

116 |
Reduced-state Sequence Estimation with Set Partitioning and Decision Feedback
- Eyuboglu, Qureshi
- 1986
(Show Context)
Citation Context ...corresponds to a different estimate of the received symbol. The number of parallel paths will grow exponentially in this case; however, we may be able to apply some of the methods outlined in [19] and [20] to reduce the number of paths sustained through the trellis. VII Two-State Variable Noise Channel We now compute the capacity and cutoff rates of a two-state Q-AWN channel with variable SNR, Gaussian... |

82 |
The design of trellis coded MPSK for fading channel: Set partitioning for optimum code design
- Divsalar, Simon
- 1988
(Show Context)
Citation Context ...annel using decision-feedback decoding. Most coding techniques for fading channels rely on built-in time diversity in the code to mitigate the fading effect. Code designs of this type can be found in [7, 8, 9] and the references therein. These codes use the same time-diversity idea as interleaving and memoryless channel encoding, except that the diversity is implemented with the code metric instead of the ... |

45 |
Maximum likelihood sequence estimation of CPM signals transmitted over Rayleigh flat-fading channels
- Lodge, Moher
- 1990
(Show Context)
Citation Context ...g and memoryless channel encoding, channel correlation information is ignored with these coding schemes. Maximum-likelihood sequence estimation for fading channels without coding has been examined in [10, 11]. However, it is difficult to implement coding with these schemes due to the code delays. In our scheme, coding delays do not result in state decision delays, since the decisions are based on estimate... |

42 |
Capacity and Coding for the Gilbert-Elliott Channels
- Mushkin, Bar-David
- 1989
(Show Context)
Citation Context ...partment of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720. I Introduction This paper extends the capacity and coding results of M. Mushkin and I. Bar-David [1] for the Gilbert-Elliott channel to a more general time-varying channel model. The Gilbert-Elliott channel is a stationary two-state Markov chain, where each state is a binary symmetric channel (BSC), a... |

32 |
A limit theorem for partially observed Markov chains. Ann. Probability 3
- Kaijser
- 1975
(Show Context)
Citation Context ...dependent inputs. To obtain the weak convergence of $\pi_n$ and $\rho_n$, we also assume that the channel inputs are i.i.d., since we can then apply convergence results for partially observed Markov chains [21]. Consider the new stochastic process $U_n \triangleq (S_n, y_n, x_n)$ defined on the state space $U = C \times Y \times X$. Since $S_n$ is stationary and ergodic and $x_n$ is i.i.d., $U_n$ is stationary and ergod... |
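The conditional channel state probability whose weak convergence is discussed in this context is computed by a standard forward (Bayes) filter over the partially observed Markov chain. The sketch below is a generic HMM-style update under assumed transition and emission matrices, not the paper's exact recursion; all numerical values are illustrative.

```python
def filter_update(pi, y, P, emit):
    """One step of the conditional state-probability recursion.

    pi:   current distribution over channel states
    y:    newly observed channel output symbol
    P:    state transition matrix, P[s][t] = Pr(next = t | current = s)
    emit: emit[s][y] = probability of output y in state s
    """
    n = len(pi)
    # Predict: propagate through the Markov chain...
    pred = [sum(pi[s] * P[s][t] for s in range(n)) for t in range(n)]
    # ...then update: weight by the likelihood of the observed output.
    unnorm = [pred[t] * emit[t][y] for t in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

P = [[0.95, 0.05], [0.05, 0.95]]    # sticky two-state channel
emit = [[0.99, 0.01], [0.7, 0.3]]   # per-state output probabilities
pi = [0.5, 0.5]
for y in [0, 0, 1, 1, 1]:
    pi = filter_update(pi, y, P, emit)
```

After a run of outputs that are likely only in the second state, the filter concentrates its belief there; it is the long-run behavior of this recursion, under i.i.d. inputs, that the weak-convergence result characterizes.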

18 |
Detection of coded modulation signals on linear, severely distorted channels using decision-feedback noise prediction with interleaving
- Eyuboglu
- 1988
(Show Context)
Citation Context ...back through our decision-feedback decoder. In particular, the structure of our decision-feedback decoder already includes the interleaver/deinterleaver pair proposed by Eyuboglu for DFEs with coding [17]. In his method, this pair introduced a periodic delay in the received bits such that delayed reliable decisions can be used for feedback. Applying this idea to our system effectively combines the dec... |

16 |
Finite-state Modeling, Capacity, and Joint Source/Channel Coding for Time-Varying Channels
- Wang
- 1992
(Show Context)
Citation Context ...feedback decoder for general FSMCs (ignoring error propagation), and show that this penalty vanishes for a certain class of FSMCs. The most common example of an FSMC is a correlated fading channel. In [5], an FSMC model for Rayleigh fading is proposed, where the channel state varies over binary symmetric channels with different crossover probabilities. Our recursive capacity formula is a generalization... |

15 |
An adaptive maximum likelihood receiver for correlated Rayleigh-fading channels
- Dam, Taylor
- 1994
(Show Context)
Citation Context ...g and memoryless channel encoding, channel correlation information is ignored with these coding schemes. Maximum-likelihood sequence estimation for fading channels without coding has been examined in [10, 11]. However, it is difficult to implement coding with these schemes due to the code delays. In our scheme, coding delays do not result in state decision delays, since the decisions are based on estimate... |

14 |
Coded modulation for fading channels: An overview
- Sundberg, Seshadri
- 1993
(Show Context)
Citation Context ...annel using decision-feedback decoding. Most coding techniques for fading channels rely on built-in time diversity in the code to mitigate the fading effect. Code designs of this type can be found in [7, 8, 9] and the references therein. These codes use the same time-diversity idea as interleaving and memoryless channel encoding, except that the diversity is implemented with the code metric instead of the ... |

12 |
Coded M-DPSK with built-in time diversity for fading channels
- Wei
- 1993
(Show Context)
Citation Context ...annel using decision-feedback decoding. Most coding techniques for fading channels rely on built-in time diversity in the code to mitigate the fading effect. Code designs of this type can be found in [7, 8, 9] and the references therein. These codes use the same time-diversity idea as interleaving and memoryless channel encoding, except that the diversity is implemented with the code metric instead of the ... |

9 |
Digital Communications. 2nd ed
- Proakis
- 1989
(Show Context)
Citation Context ...eedback decoder to update the $\hat{\pi}_j$ value. This is exactly the difficulty faced by an adaptive decision-feedback equalizer (DFE), where decoding decisions are used to update the DFE tap coefficients [16]. New methods to combine DFEs and coding have recently been proposed, and several of these methods can be used to obtain some coding gain in the estimate of $x_j$ fed back through our decision-feedback ... |

6 |
The cut-off rate of the time correlated fading channels
- Leeuwin-Boullé, Belfiore
- 1992
(Show Context)
Citation Context ...he practical achievable information rate of a channel with coding. The cutoff rate for correlated fading channels with MPSK inputs, assuming channel state information at the receiver, was obtained in [6]: we obtain the same cutoff rate on this channel using decision-feedback decoding. Most coding techniques for fading channels rely on built-in time diversity in the code to mitigate the fading effect.... |

5 |
Soft-Decision Feedback Equalizer for Continuous Phase Modulated Signals in Wideband Mobile Radio Channels
- Cheung, Steele
- 1994
(Show Context)
Citation Context ...pproach to implement coding gain uses soft decisions on the received symbols to update $\pi_n$, then later corrects this initial $\pi_n$ estimate if the decoded symbols differ from their initial estimates [18]. This method truncates the number of symbols affected by an incorrect decision, at a cost of increased complexity to recalculate and update the $\pi_n$ values. Finally, decision-feedback decoding can be... |

1 | The Capacity of Time-Varying Multipath Channels - Goldsmith - 1991 |
