Results 1–10 of 214
Fading Channels: Information-Theoretic and Communications Aspects
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 1998
Abstract

Cited by 289 (1 self)
In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels. We first describe the statistical models of fading channels which are frequently used in the analysis and design of communication systems. Next, we focus on the information theory of fading channels, by emphasizing capacity as the most important performance measure. Both single-user and multiuser transmission are examined. Further, we describe how the structure of fading channels impacts code design, and finally overview equalization of fading multipath channels.
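The capacity measure emphasized in this abstract is easy to probe numerically. A minimal Monte Carlo sketch (the SNR and sample count are arbitrary choices, not from the paper) compares the ergodic capacity of a flat Rayleigh-fading channel with the AWGN capacity at the same average SNR:

```python
import numpy as np

rng = np.random.default_rng(0)
snr_db = 10.0
snr = 10 ** (snr_db / 10)

# Rayleigh fading: the channel power gain |h|^2 is exponential with unit mean.
h2 = rng.exponential(1.0, size=200_000)

awgn_capacity = np.log2(1 + snr)                   # no fading
ergodic_capacity = np.mean(np.log2(1 + snr * h2))  # average over fades

print(f"AWGN capacity:    {awgn_capacity:.3f} bit/s/Hz")
print(f"Ergodic capacity: {ergodic_capacity:.3f} bit/s/Hz")
```

By Jensen's inequality the ergodic capacity is below the AWGN capacity at the same average SNR, which the printout confirms.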
Spatio-Temporal Coding for Wireless Communication
 IEEE Trans. Commun
, 1998
Abstract

Cited by 276 (14 self)
Multipath signal propagation has long been viewed as an impairment to reliable communication in wireless channels. This paper shows that the presence of multipath greatly improves achievable data rate if the appropriate communication structure is employed. A compact model is developed for the multiple-input multiple-output (MIMO) dispersive spatially selective wireless communication channel. The multivariate information capacity is analyzed. For high signal-to-noise ratio (SNR) conditions, the MIMO channel can exhibit a capacity slope in bits per decibel of power increase that is proportional to the minimum of the number of multipath components, the number of input antennas, or the number of output antennas. This desirable result is contrasted with the lower capacity slope of the well-studied case with multiple antennas at only one side of the radio link. A spatio-temporal vector-coding (STVC) communication structure is suggested as a means for achieving MIMO channel capacity. The complexity of STVC motivates a more practical reduced-complexity discrete matrix multitone (DMMT) space-frequency coding approach. Both of these structures are shown to be asymptotically optimum. An adaptive-lattice trellis-coding technique is suggested as a method for coding across the space and frequency dimensions that exist in the DMMT channel. Experimental examples that support the theoretical results are presented. Index Terms—Adaptive arrays, adaptive coding, adaptive modulation, antenna arrays, broadband communication, channel coding, digital modulation, information rates, MIMO systems, multipath channels.
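The capacity-slope claim is easy to check numerically for the antenna-limited case. A rough Monte Carlo sketch (antenna counts, SNR points, and trial count are illustrative assumptions) estimates the ergodic capacity of an i.i.d. Rayleigh MIMO channel and its growth per 3 dB of extra power, which approaches min(nt, nr) bits at high SNR:

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nr = 4, 4  # transmit/receive antennas (illustrative choice)

def mimo_capacity(snr_db, trials=2000):
    """Ergodic capacity E[log2 det(I + (SNR/nt) H H^*)] for i.i.d. Rayleigh H."""
    snr = 10 ** (snr_db / 10)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        total += np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real)
    return total / trials

c20, c23 = mimo_capacity(20.0), mimo_capacity(23.0)
print(f"capacity gain over 3 dB at high SNR: {c23 - c20:.2f} bits "
      f"(min(nt, nr) = {min(nt, nr)})")
```

With one antenna at only one end, the same experiment gives a slope of roughly one bit per 3 dB, which is the contrast the abstract draws.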
"Turbo equalization": principles and new results
, 2000
Abstract

Cited by 172 (19 self)
Since the invention of "turbo codes" by Berrou et al. in 1993, the "turbo principle" has been adapted to several communication problems such as "turbo equalization", "turbo trellis-coded modulation", and iterative multiuser detection. In this paper we study the "turbo equalization" approach, which can be applied to coded data transmission over channels with intersymbol interference (ISI). In the original system invented by Douillard et al., the data is protected by a convolutional code, and a receiver consisting of two trellis-based detectors is used, one for the channel (the equalizer) and one for the code (the decoder). It has been shown that iterating equalization and decoding tasks can yield tremendous improvements in bit error rate (BER). We introduce new approaches to combining equalization based on linear filtering with the decoding. The result is a receiver that is capable of improving BER performance through iterations of equalization and decoding in a manner similar to turbo ...
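The key mechanism behind the turbo principle, exchanging only extrinsic soft information so that no evidence is counted twice, can be shown with a deliberately stripped-down toy: BPSK with two independent Gaussian observations standing in for the equalizer's and decoder's views of the same bit (all parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, n = 1.0, 100_000
bits = rng.integers(0, 2, n)
x = 1 - 2.0 * bits                     # BPSK: bit 0 -> +1, bit 1 -> -1

# Two component "detectors" each see the same bit through independent noise.
y1 = x + sigma * rng.standard_normal(n)
y2 = x + sigma * rng.standard_normal(n)

# Per-observation channel log-likelihood ratios (LLRs).
L1 = 2 * y1 / sigma**2
L2 = 2 * y2 / sigma**2

# Turbo principle: each detector passes only its extrinsic LLR (what the
# other does not already know), so the combined posterior is a plain sum
# and no evidence is double-counted.
L_post = L1 + L2

ber_single = np.mean((L1 < 0).astype(int) != bits)
ber_joint = np.mean((L_post < 0).astype(int) != bits)
print(f"single detector BER: {ber_single:.4f}, after exchange: {ber_joint:.4f}")
```

In a real turbo equalizer the two LLR sources are a channel equalizer and a code decoder connected through an interleaver, and the exchange is iterated; this sketch only isolates why extrinsic rather than posterior information is passed.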
Per-Survivor Processing: A General Approach to MLSE in Uncertain Environments
 IEEE Trans. Commun
, 1995
Abstract

Cited by 131 (12 self)
Per-Survivor Processing (PSP) provides a general framework for the approximation of Maximum Likelihood Sequence Estimation (MLSE) algorithms whenever the presence of unknown quantities prevents the precise use of the classical Viterbi algorithm. This principle stems from the idea that data-aided estimation of unknown parameters may be embedded into the structure of the Viterbi algorithm itself. Among the numerous possible applications, we concentrate here on (a) adaptive MLSE, (b) simultaneous Trellis Coded Modulation (TCM) decoding and phase synchronization, and (c) adaptive Reduced State Sequence Estimation (RSSE). As a matter of fact, PSP is interpretable as a generalization of the decision feedback techniques of RSSE to decoding in the presence of unknown parameters. A number of algorithms for the simultaneous estimation of the data sequence and unknown channel parameters are presented and compared with "conventional" techniques based on the use of tentative decisions. Results for uncoded modu...
Minimum mean squared error equalization using a priori information
 IEEE Trans. Signal Processing
, 2002
Abstract

Cited by 84 (9 self)
Abstract—A number of important advances have been made in the area of joint equalization and decoding of data transmitted over intersymbol interference (ISI) channels. Turbo equalization is an iterative approach to this problem, in which a maximum a posteriori probability (MAP) equalizer and a MAP decoder exchange soft information in the form of prior probabilities over the transmitted symbols. A number of reduced-complexity methods for turbo equalization have recently been introduced in which MAP equalization is replaced with suboptimal, low-complexity approaches. In this paper, we explore a number of low-complexity soft-input/soft-output (SISO) equalization algorithms based on the minimum mean square error (MMSE) criterion. This includes the extension of existing approaches to general signal constellations and the derivation of a novel approach requiring less complexity than the MMSE-optimal solution. All approaches are qualitatively analyzed by observing the mean-square error averaged over a sequence of equalized data. We show that for the turbo equalization application, the MMSE-based SISO equalizers perform well compared with a MAP equalizer while providing a tremendous complexity reduction. Index Terms—Equalization, iterative decoding, low complexity, minimum mean square error.
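As a rough sketch of the MMSE building block involved (uniform priors only; the paper's SISO versions additionally fold time-varying symbol priors from the decoder into the covariance term, and the channel, filter length, and delay below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
h = np.array([0.407, 0.815, 0.407])   # assumed severe-ISI test channel
sigma2, n = 0.05, 20_000
bits = rng.integers(0, 2, n)
x = 1 - 2.0 * bits                    # unit-energy BPSK
y = np.convolve(x, h)[:n] + np.sqrt(sigma2) * rng.standard_normal(n)

N, d, Lh = 15, 8, len(h)              # filter length and decision delay (assumed)
# Convolution matrix: a length-N window of y depends on N + Lh - 1 symbols.
H = np.zeros((N, N + Lh - 1))
for i in range(N):
    H[i, i:i + Lh] = h[::-1]
R = H @ H.T + sigma2 * np.eye(N)      # covariance of the received window
p = H[:, d]                           # channel response to the symbol of interest
f = np.linalg.solve(R, p)             # MMSE filter (unit energy, zero prior mean)
mu = f @ p                            # bias of the MMSE estimate, 0 < mu < 1

off = d - (Lh - 1)                    # window start relative to the target symbol
z = np.array([f @ y[m - off:m - off + N] for m in range(off, n - N)])
llr = 2 * z / (1 - mu)                # soft output: Gaussian-approximation LLR
ber = np.mean((llr < 0) != (x[off:n - N] < 0))
ber_raw = np.mean((y[1:n] < 0) != (x[:n - 1] < 0))  # decision at the dominant tap
print(f"raw BER: {ber_raw:.3f}, MMSE soft-equalized BER: {ber:.3f}")
```

The LLR normalization follows the standard model z ≈ mu·x + noise of variance mu(1 − mu) for unit-energy BPSK; in a turbo loop these LLRs would be interleaved and fed to the decoder.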
Binary intersymbol interference channels: Gallager codes, density evolution and code performance bounds
 IEEE TRANS. INFORM. THEORY
, 2003
Abstract

Cited by 49 (4 self)
We study the limits of performance of Gallager codes (low-density parity-check (LDPC) codes) over binary linear intersymbol interference (ISI) channels with additive white Gaussian noise (AWGN). Using the graph representations of the channel, the code, and the sum–product message-passing detector/decoder, we prove two error concentration theorems. Our proofs expand on previous work by handling complications introduced by the channel memory. We circumvent these problems by considering not just linear Gallager codes but also their cosets and by distinguishing between different types of message flow neighborhoods depending on the actual transmitted symbols. We compute the noise tolerance threshold using a suitably developed density evolution algorithm and verify, by simulation, that the thresholds represent accurate predictions of the performance of the iterative sum–product algorithm for finite (but large) block lengths. We also demonstrate that for high rates, the thresholds are very close to the theoretical limit of performance for Gallager codes over ISI channels. If C denotes the capacity of a binary ISI channel and if C_iid denotes the maximal achievable mutual information rate when the channel inputs are independent and identically distributed (i.i.d.) binary random variables (C_iid ≤ C), we prove that the maximum information rate achievable by the sum–product decoder of a Gallager (coset) code is upper-bounded by C_iid. The last topic investigated is the performance limit of the decoder if the trellis portion of the sum–product algorithm is executed only once; this demonstrates the potential for trading off the computational requirements and the performance of the decoder.
Detection of Stochastic Processes
 IEEE Trans. Inform. Theory
, 1998
Abstract

Cited by 41 (6 self)
This paper reviews two streams of development, from the 1940s to the present, in signal detection theory: the structure of the likelihood ratio for detecting signals in noise and the role of dynamic optimization in detection problems involving either very large signal sets or the joint optimization of observation time and performance. This treatment deals exclusively with basic results developed for the situation in which the observations are modeled as continuous-time stochastic processes. The mathematics and intuition behind such developments as the matched filter, the RAKE receiver, the estimator-correlator, maximum-likelihood sequence detectors, multiuser detectors, sequential probability ratio tests, and cumulative-sum quickest detectors are described. Index Terms—Dynamic programming, innovations processes, likelihood ratios, martingale theory, matched filters, optimal stopping, reproducing kernel Hilbert spaces, sequence detection, sequential methods, signal detection, signal estimation.
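Of the developments listed, the matched filter is the simplest to demonstrate: correlating the observation with a known template maximizes output SNR among linear filters in white noise. A toy sketch (the template, noise level, and threshold are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
s = np.array([1.0, 1.0, -1.0, 1.0, -1.0])
s /= np.linalg.norm(s)                 # unit-energy known template (assumed)
sigma, trials = 0.5, 50_000

present = rng.integers(0, 2, trials).astype(bool)
noise = sigma * rng.standard_normal((trials, len(s)))
obs = noise + np.outer(present, s)     # signal present on roughly half the trials

stat = obs @ s                         # matched-filter (correlation) statistic
decide = stat > 0.5                    # threshold midway between the means (0 and 1)
err = np.mean(decide != present)
print(f"detection error rate: {err:.4f}")
```

The correlation statistic is Gaussian under both hypotheses, so the midpoint threshold gives an error rate of about Q(1/(2·sigma)) per hypothesis here.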
Complex-Field Coding for OFDM Over Fading Wireless Channels
 IEEE Trans. Inform. Theory
, 2003
Abstract

Cited by 32 (1 self)
Orthogonal frequency-division multiplexing (OFDM) converts a time-dispersive channel into parallel subchannels, and thus facilitates equalization and (de)coding. But when the channel has nulls close to or on the fast Fourier transform (FFT) grid, uncoded OFDM faces serious symbol recovery problems. As an alternative to various error-control coding techniques that have been proposed to ameliorate the problem, we perform complex-field coding (CFC) before the symbols are multiplexed. We quantify the maximum achievable diversity order for independent and identically distributed (i.i.d.) or correlated Rayleigh-fading channels, and also provide design rules for achieving the maximum diversity order. The maximum coding gain is given, and the encoder enabling the maximum coding gain is also found. Simulated performance comparisons of CFC-OFDM with existing block and convolutionally coded OFDM alternatives favor CFC-OFDM for the code rates used in a HiperLAN2 experiment.
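The benefit of precoding across subcarriers shows up even on a deliberately tiny example: a two-tap channel with an exact null on one FFT bin. With uncoded OFDM, two BPSK blocks differing only on the nulled bin produce identical received signals; after a unitary rotation every symbol is spread over both bins. The rotation angle below is an illustrative choice, not the paper's optimized precoder:

```python
import numpy as np
from itertools import product

K = 2
h = np.array([1.0, -1.0])              # two-tap channel
Hf = np.fft.fft(h, K)                  # subcarrier gains; Hf[0] == 0 (null at DC)
D = np.diag(Hf)

theta = np.pi / 8                      # illustrative rotation (assumed, not optimized)
Theta = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

def min_distance(P):
    """Smallest received-signal distance over distinct BPSK blocks x, x'."""
    pts = [np.array(b) for b in product([-1.0, 1.0], repeat=K)]
    return min(np.linalg.norm(D @ P @ (a - b))
               for a in pts for b in pts if not np.array_equal(a, b))

d_uncoded = min_distance(np.eye(K))    # 0: one symbol sits entirely on the null
d_precoded = min_distance(Theta)       # > 0: the null can no longer erase a symbol
print(f"uncoded: {d_uncoded:.3f}, precoded: {d_precoded:.3f}")
```

A nonzero minimum distance means a maximum-likelihood detector can still separate all blocks despite the null, which is the diversity effect the abstract quantifies for Rayleigh fading.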
Great expectations: The value of spatial diversity in wireless networks
 PROCEEDINGS OF THE IEEE
, 2004
Abstract

Cited by 28 (7 self)
In this paper, the effect of spatial diversity on the throughput and reliability of wireless networks is examined. Spatial diversity is realized through multiple independently fading transmit/receive antenna paths in single-user communication and through independently fading links in multiuser communication. Adopting spatial diversity as a central theme, we start by studying its information-theoretic foundations, then we illustrate its benefits across the physical (signal transmission/coding and receiver signal processing) and networking (resource allocation, routing, and applications) layers. Throughout the paper, we discuss engineering intuition and tradeoffs, emphasizing the strong interactions between the various network functionalities.
Conditional distribution learning with neural networks and its application to channel equalization
 IEEE Trans. Signal Processing
, 1997
Abstract

Cited by 26 (11 self)
Abstract—We present a conditional distribution learning formulation for real-time signal processing with neural networks based on a recent extension of maximum likelihood theory—partial likelihood (PL) estimation—which allows for i) dependent observations and ii) sequential processing. For a general neural network conditional distribution model, we establish a fundamental information-theoretic connection: the equivalence of maximum PL estimation and accumulated relative entropy (ARE) minimization, and obtain large sample properties of PL for the general case of dependent observations. As an example, the binary case with the sigmoidal perceptron as the probability model is presented. It is shown that the single- and multilayer perceptron (MLP) models satisfy conditions for the equivalence of the two cost functions: ARE and negative log partial likelihood. The practical issue of their gradient descent minimization is then studied within the well-formed cost functions framework. It is shown that these are well-formed cost functions for networks without hidden units; hence, their gradient descent minimization is guaranteed to converge to a solution if one exists on such networks. The formulation is applied to adaptive channel equalization, and simulation results are presented to show the ability of the least relative entropy equalizer to realize complex decision boundaries and to recover during training from convergence at the wrong extreme in cases where the mean-square-error-based MLP equalizer cannot.
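For the binary case, the sigmoidal-perceptron model with the negative log (partial) likelihood cost reduces to logistic regression trained on sliding windows of the received signal. A toy equalization sketch (channel taps, noise level, learning rate, and window size are all arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
h = np.array([1.0, 0.6])               # assumed two-tap ISI channel
n = 20_000
bits = rng.integers(0, 2, n)
x = 2.0 * bits - 1
y = np.convolve(x, h)[:n] + 0.3 * rng.standard_normal(n)

# Window of received samples carrying energy from bit k: y[k] and y[k+1].
feats = np.stack([y[:-1], y[1:]], axis=1)
target = bits[:-1]

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(300):
    p = 1 / (1 + np.exp(-(feats @ w + b)))  # sigmoidal perceptron output
    g = p - target                          # gradient of neg. log-likelihood wrt logits
    w -= lr * (feats.T @ g) / len(g)
    b -= lr * g.mean()

ber = np.mean((p > 0.5).astype(int) != target)
print(f"trained equalizer BER: {ber:.4f}")
```

Because there are no hidden units, this cost is well-formed in the paper's sense and plain gradient descent converges; the MLP and relative-entropy variants the abstract compares are not reproduced here.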