Results 1–10 of 44
Capacity of Fading Channels with Channel Side Information
, 1997
"... We obtain the Shannon capacity of a fading channel with channel side information at the transmitter and receiver, and at the receiver alone. The optimal power adaptation in the former case is "waterpouring" in time, analogous to waterpouring in frequency for timeinvariant frequencysele ..."
Abstract

Cited by 429 (22 self)
 Add to MetaCart
(Show Context)
We obtain the Shannon capacity of a fading channel with channel side information at the transmitter and receiver, and at the receiver alone. The optimal power adaptation in the former case is "water-pouring" in time, analogous to water-pouring in frequency for time-invariant frequency-selective fading channels. Inverting the channel results in a large capacity penalty in severe fading.
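The temporal water-pouring rule described in this abstract can be sketched numerically. The snippet below is a generic water-filling allocation found by bisection on the water level; the state SNRs and power budget are illustrative values, not the paper's fading-distribution formulation.

```python
import numpy as np

def waterfill(snr, total_power, tol=1e-9):
    """Water-filling over channel states: p[i] = max(0, mu - 1/snr[i]),
    with the water level mu chosen so that sum(p) == total_power."""
    lo, hi = 0.0, total_power + 1.0 / snr.min()   # bracket for mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / snr).sum() > total_power:
            hi = mu                                # water level too high
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / snr)

snr = np.array([4.0, 1.0, 0.25])   # three fading states (illustrative)
p = waterfill(snr, total_power=3.0)
```

As expected for water-pouring, the strongest state receives the most power and the weakest state (SNR 0.25) is cut off entirely.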
The effect upon channel capacity in wireless communications of perfect and imperfect knowledge of the channel
 IEEE Trans. Inf. Theory
, 2000
"... Abstract—We present a model for timevarying communication singleaccess and multipleaccess channels without feedback. We consider the difference between mutual information when the receiver knows the channel perfectly and mutual information when the receiver only has an estimate of the channel. We ..."
Abstract

Cited by 198 (4 self)
 Add to MetaCart
(Show Context)
We present a model for time-varying single-access and multiple-access communication channels without feedback. We consider the difference between mutual information when the receiver knows the channel perfectly and mutual information when the receiver only has an estimate of the channel. We relate the variance of the channel measurement error at the receiver to upper and lower bounds for this difference in mutual information. We illustrate the use of our bounds on a channel modeled by a Gauss–Markov process, measured by a pilot tone. We relate the rate of time variation of the channel to the loss in mutual information due to imperfect knowledge of the measured channel. Index Terms—Channel uncertainty, multiple-access channels, mutual information, time-varying channels, wireless communications.
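The Gauss–Markov / pilot-tone setting lends itself to a small numerical check: for a scalar channel h_t = a·h_{t-1} + w_t observed through a pilot as y_t = h_t + v_t, the steady-state channel-estimation error variance that enters such bounds follows from iterating the Kalman variance recursion. All parameter values below are illustrative, not taken from the paper.

```python
def steady_state_error_var(a, q, r, iters=1000):
    """Scalar Kalman (Riccati) recursion for h_t = a*h_{t-1} + w_t,
    pilot observation y_t = h_t + v_t, Var(w) = q, Var(v) = r.
    Returns the steady-state filtered error variance."""
    p = 1.0                            # arbitrary initial variance
    for _ in range(iters):
        pred = a * a * p + q           # prediction variance
        p = pred * r / (pred + r)      # measurement update
    return p

# q = 1 - a^2 keeps the stationary channel variance at 1, so the two
# cases differ only in how fast the channel varies.
p_slow = steady_state_error_var(a=0.99, q=0.0199, r=0.1)
p_fast = steady_state_error_var(a=0.90, q=0.19, r=0.1)
```

Consistent with the abstract's theme, the faster-varying channel (smaller `a`) leaves a larger residual measurement-error variance, and hence a larger mutual-information loss.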
The capacity of channels with feedback
 IEEE Trans. Inf. Theory
, 2009
"... We introduce a general framework for treating channels with memory and feedback. First, we generalize Massey’s concept of directed information [23] and use it to characterize the feedback capacity of general channels. Second, we present coding results for Markov channels. This requires determining a ..."
Abstract

Cited by 39 (2 self)
 Add to MetaCart
We introduce a general framework for treating channels with memory and feedback. First, we generalize Massey's concept of directed information [23] and use it to characterize the feedback capacity of general channels. Second, we present coding results for Markov channels. This requires determining appropriate sufficient statistics at the encoder and decoder. Third, a dynamic programming framework for computing the capacity of Markov channels is presented. Fourth, it is shown that the average cost optimality equation (ACOE) can be viewed as an implicit single-letter characterization of the capacity. Fifth, scenarios ...
On the Optimality of Symbol-by-Symbol Filtering and Denoising
, 2003
"... We consider the problem of optimally recovering a finitealphabet discretetime stochastic process {X t } from its noisecorrupted observation process {Z t }. In general, the optimal estimate of X t will depend on all the components of {Z t } on which it can be based. We characterize nontrivial s ..."
Abstract

Cited by 15 (3 self)
 Add to MetaCart
We consider the problem of optimally recovering a finite-alphabet discrete-time stochastic process {X_t} from its noise-corrupted observation process {Z_t}. In general, the optimal estimate of X_t will depend on all the components of {Z_t} on which it can be based. We characterize nontrivial situations (i.e., beyond the case where (X_t, Z_t) are independent) for which optimum performance is attained using "symbol by symbol" operations.
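A minimal instance of a "symbol-by-symbol" scheme is a memoryless MAP estimator that bases the estimate of X_t only on Z_t. The prior and channel matrix below are made-up numbers for illustration, not taken from the paper.

```python
import numpy as np

def map_symbol_denoiser(prior, channel, z):
    """Symbol-by-symbol MAP estimate: x_hat = argmax_x P(x) * P(z | x).
    prior[x] is P(X = x); channel[x, z] is P(Z = z | X = x)."""
    return int(np.argmax(prior * channel[:, z]))

# Biased binary source through a binary symmetric channel, crossover 0.1
prior = np.array([0.7, 0.3])
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
est = [map_symbol_denoiser(prior, bsc, z) for z in [0, 1, 1]]
```

Here the crossover probability is low enough that the MAP rule follows the observed symbol, giving `est == [0, 1, 1]`; with a noisier channel the prior would dominate instead.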
Capacity, mutual information, and coding for finite-state Markov channels
 IEEE Trans. Inf. Theory
, 1996
"... Abstract The FiniteState Markov Channel (FSMC) is a discretetime varying channel whose variation is determined by a finitestate Markov process. These channels have memory due to the Markov channel variation. We obtain the FSMC capacity as a function of the conditional channel state probability. W ..."
Abstract

Cited by 15 (2 self)
 Add to MetaCart
(Show Context)
The Finite-State Markov Channel (FSMC) is a discrete-time varying channel whose variation is determined by a finite-state Markov process. These channels have memory due to the Markov channel variation. We obtain the FSMC capacity as a function of the conditional channel state probability. We also show that for i.i.d. channel inputs, this conditional probability converges weakly, and the channel's mutual information is then a closed-form continuous function of the input distribution. We next consider coding for FSMCs. In general, the complexity of maximum-likelihood decoding grows exponentially with the channel memory length. Therefore, in practice, interleaving and memoryless channel codes are used. This technique results in some performance loss relative to the inherent capacity of channels with memory. We propose a maximum-likelihood decision-feedback decoder with complexity that is independent of the channel memory. We calculate the capacity and cutoff rate of our technique, and show that it preserves the capacity of certain FSMCs. We also compare the performance of the decision-feedback decoder with that of interleaving and memoryless channel coding on a fading channel with 4-PSK modulation.
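The conditional channel state probability that the capacity expression depends on can be tracked with a one-step predict/update recursion on the Markov state. The two-state ("good"/"bad") transition matrix and output likelihoods below are illustrative numbers, not values from the paper.

```python
import numpy as np

def state_posterior_step(pi_prev, trans, likelihood):
    """One recursion step for the conditional channel state probability.
    pi_prev[s]     : P(S_{t-1} = s | past outputs)
    trans[s, s2]   : Markov transition probability P(S_t = s2 | S_{t-1} = s)
    likelihood[s2] : P(current output | S_t = s2)
    Returns the normalized posterior over S_t."""
    pred = pi_prev @ trans          # predict: P(S_t | past outputs)
    post = pred * likelihood        # update with the new observation
    return post / post.sum()

trans = np.array([[0.95, 0.05],
                  [0.10, 0.90]])    # sticky good/bad channel states
pi = np.array([0.5, 0.5])           # uninformative initial belief
for lik in [np.array([0.9, 0.2]), np.array([0.8, 0.3])]:
    pi = state_posterior_step(pi, trans, lik)
```

Two observations that are much more likely under the good state push the posterior heavily toward it, which is the quantity a decision-feedback receiver would condition on.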
A successive decoding strategy for channels with memory
 in Proc. IEEE International Symposium on Information Theory
, 2005
"... Abstract — This paper presents both an information lossless coding scheme and a method to evaluate constrained capacity for channels with memory and unknown state. The fundamental idea is to decompose the original channel into a bank of memoryless subchannels with partially known states, then succe ..."
Abstract

Cited by 14 (4 self)
 Add to MetaCart
(Show Context)
This paper presents both an information-lossless coding scheme and a method to evaluate constrained capacity for channels with memory and unknown state. The fundamental idea is to decompose the original channel into a bank of memoryless subchannels with partially known states, then successively decode these subchannels. The receiver of each subchannel consists of an optimal estimator followed by a memoryless channel decoder. The coding scheme translates the codes and decoders designed for memoryless channels with near-capacity performance to channels with memory. The results are applied to both finite-state Markov channels and correlated flat-fading channels.
Capacity region of the finite-state multiple-access channel with and without feedback
 IEEE Trans. Inf. Theory
"... ..."
(Show Context)
New bounds on the entropy rate of hidden Markov processes
 IEEE Information Theory Workshop
"... ..."
(Show Context)
Design and performance of high-speed communication systems over time-varying radio channels
 ELEC. ENGIN. COMPUT. SCIENCE
, 1994
"... ..."
(Show Context)
Compressive Oversampling for Robust Data Transmission in Sensor Networks
"... Abstract—Data loss in wireless sensing applications is inevitable and while there have been many attempts at coping with this issue, recent developments in the area of Compressive Sensing (CS) provide a new and attractive perspective. Since many physical signals of interest are known to be sparse or ..."
Abstract

Cited by 6 (1 self)
 Add to MetaCart
(Show Context)
Data loss in wireless sensing applications is inevitable, and while there have been many attempts at coping with this issue, recent developments in the area of Compressive Sensing (CS) provide a new and attractive perspective. Since many physical signals of interest are known to be sparse or compressible, employing CS not only compresses the data and reduces the effective transmission rate, but also improves the robustness of the system to channel erasures. This is possible because reconstruction algorithms for compressively sampled signals are not hampered by the stochastic nature of wireless link disturbances, which has traditionally plagued attempts at proactively handling the effects of these errors. In this paper, we propose that if CS is employed for source compression, then CS can further be exploited as an application-layer erasure-coding strategy for recovering missing data. We show that CS erasure encoding (CSEC) with random sampling is efficient for handling missing data in erasure channels, paralleling the performance of BCH codes, with the added benefit of graceful degradation of the reconstruction error even when the amount of missing data far exceeds the designed redundancy. Further, since CSEC is equivalent to nominal oversampling in the incoherent measurement basis, it is computationally cheaper than conventional erasure coding. We support our proposal through extensive performance studies. Keywords—erasure coding; compressive sensing.
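The "oversample, then simply drop erased measurements" idea can be sketched end to end. The snippet below uses orthogonal matching pursuit as a stand-in for the reconstruction algorithm, and all dimensions, the sparse signal, and the erasure pattern are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

n, k, m = 64, 3, 32                     # signal length, sparsity, oversampled measurements
x = np.zeros(n)
x[[5, 17, 40]] = [1.0, -2.0, 1.5]       # sparse source signal
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x                               # compressive measurements sent over the channel
keep = np.sort(rng.permutation(m)[:24]) # 8 measurements erased in transit
x_hat = omp(A[keep], y[keep], k)        # decode from the survivors only
```

No erasure-decoding step is needed: the receiver just runs the sparse-recovery algorithm on whichever measurements survive, and with this degree of oversampling OMP typically reconstructs `x` exactly.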