Results 1–10 of 388
Capacity Limits of MIMO Channels
IEEE J. Sel. Areas Commun., 2003
"... We provide an overview of the extensive recent results on the Shannon capacity of singleuser and multiuser multipleinput multipleoutput (MIMO) channels. Although enormous capacity gains have been predicted for such channels, these predictions are based on somewhat unrealistic assumptions about t ..."
Abstract

Cited by 409 (17 self)
 Add to MetaCart
We provide an overview of the extensive recent results on the Shannon capacity of single-user and multiuser multiple-input multiple-output (MIMO) channels. Although enormous capacity gains have been predicted for such channels, these predictions are based on somewhat unrealistic assumptions about the underlying time-varying channel model and how well it can be tracked at the receiver, as well as at the transmitter. More realistic assumptions can dramatically impact the potential capacity gains of MIMO techniques. For time-varying MIMO channels there are multiple Shannon-theoretic capacity definitions and, for each definition, different correlation models and channel information assumptions that we consider. We first provide a comprehensive summary of ergodic and capacity-versus-outage results for single-user MIMO channels. These results indicate that the capacity gain obtained from multiple antennas heavily depends …
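For orientation, the ergodic capacity referred to above has a standard closed form for a single-user channel whose matrix H is known at the receiver and whose transmitter splits power equally across antennas; in my notation (n_t transmit antennas, n_r receive antennas, average SNR ρ), it reads

    C = \mathbb{E}_{\mathbf{H}}\!\left[ \log_2 \det\!\left( \mathbf{I}_{n_r} + \frac{\rho}{n_t}\, \mathbf{H}\mathbf{H}^{\dagger} \right) \right] \quad \text{bits/s/Hz}.

This is the textbook expression, not a quote from the paper; the survey's point is precisely how this formula degrades under weaker channel-knowledge assumptions.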
Mutual information and minimum mean-square error in Gaussian channels
IEEE Trans. Inf. Theory, 2005
"... This paper deals with arbitrarily distributed finitepower input signals observed through an additive Gaussian noise channel. It shows a new formula that connects the inputoutput mutual information and the minimum meansquare error (MMSE) achievable by optimal estimation of the input given the out ..."
Abstract

Cited by 285 (32 self)
 Add to MetaCart
(Show Context)
This paper deals with arbitrarily distributed finite-power input signals observed through an additive Gaussian noise channel. It shows a new formula that connects the input-output mutual information and the minimum mean-square error (MMSE) achievable by optimal estimation of the input given the output. That is, the derivative of the mutual information (in nats) with respect to the signal-to-noise ratio (SNR) is equal to half the MMSE, regardless of the input statistics. This relationship holds for both scalar and vector signals, as well as for discrete-time and continuous-time noncausal MMSE estimation. This fundamental information-theoretic result has an unexpected consequence in continuous-time nonlinear estimation: for any input signal with finite power, the causal filtering MMSE achieved at SNR is equal to the average value of the noncausal smoothing MMSE achieved with a channel whose signal-to-noise ratio is uniformly distributed between 0 and SNR.
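In symbols, the two results described above can be written as (my notation; I(snr) is the input-output mutual information in nats, mmse(snr) the noncausal MMSE, cmmse(snr) the causal-filtering MMSE)

    \frac{d}{d\,\mathrm{snr}}\, I(\mathrm{snr}) = \frac{1}{2}\,\mathrm{mmse}(\mathrm{snr}),
    \qquad
    \mathrm{cmmse}(\mathrm{snr}) = \frac{1}{\mathrm{snr}} \int_0^{\mathrm{snr}} \mathrm{mmse}(\gamma)\, d\gamma.

Both identities hold regardless of the input statistics, which is what makes them useful beyond the Gaussian-input case.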
Communication on the Grassmann Manifold: A Geometric Approach to the Noncoherent Multiple-Antenna Channel
IEEE Trans. Inf. Theory, 2002
"... In this paper, we study the capacity of multipleantenna fading channels. We focus on the scenario where the fading coefficients vary quickly; thus an accurate estimation of the coefficients is generally not available to either the transmitter or the receiver. We use a noncoherent block fading model ..."
Abstract

Cited by 271 (8 self)
 Add to MetaCart
In this paper, we study the capacity of multiple-antenna fading channels. We focus on the scenario where the fading coefficients vary quickly; thus an accurate estimate of the coefficients is generally not available to either the transmitter or the receiver. We use a noncoherent block-fading model proposed by Marzetta and Hochwald. The model does not assume any channel side information at the receiver or at the transmitter, but assumes that the coefficients remain constant for a coherence interval of T symbol periods. We compute the asymptotic capacity of this channel at high signal-to-noise ratio (SNR) in terms of the coherence time T, the number of transmit antennas M, and the number of receive antennas N. While the capacity gain of the coherent multiple-antenna channel is min(M, N) bits per second per hertz for every 3-dB increase in SNR, the corresponding gain for the noncoherent channel turns out to be M*(1 − M*/T) bits per second per hertz, where M* = min(M, N, ⌊T/2⌋). The capacity expression has a geometric interpretation as sphere packing in the Grassmann manifold.
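Equivalently, the high-SNR behavior described above can be displayed as (a sketch in the abstract's notation; the additive constant is not specified here)

    C(\mathrm{SNR}) = M^*\left(1 - \frac{M^*}{T}\right) \log_2 \mathrm{SNR} + O(1),
    \qquad M^* = \min\{M, N, \lfloor T/2 \rfloor\},

so the noncoherent channel pays a factor of (1 − M*/T) in degrees of freedom for having to learn the channel within each coherence block.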
Energy-constrained modulation optimization
IEEE Trans. Wireless Commun., 2005
"... Abstract — We consider wireless systems where the nodes operate on batteries so that energy consumption must be minimized while satisfying given throughput and delay requirements. In this context, we analyze the best modulation strategy to minimize the total energy consumption required to send a giv ..."
Abstract

Cited by 156 (11 self)
 Add to MetaCart
(Show Context)
We consider wireless systems where the nodes operate on batteries, so that energy consumption must be minimized while satisfying given throughput and delay requirements. In this context, we analyze the best modulation strategy for minimizing the total energy consumption required to send a given number of bits. The total energy consumption includes both the transmission energy and the circuit energy consumption. For uncoded systems, we show that by optimizing the transmission time and the modulation parameters, up to 80% energy savings are achievable over non-optimized systems. For coded systems, we show that the benefit of coding varies with the transmission distance and the underlying modulation schemes. Index Terms: energy efficiency, modulation optimization, M-QAM, M-FSK.
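As a rough sketch of the trade-off the paper optimizes (a toy model with made-up parameter values, not the authors' formulation): for uncoded M-QAM with b = log2(M) bits per symbol, transmit energy per bit grows roughly like (2^b − 1)/b, while circuit energy per bit shrinks like 1/b because the radio is on for less time.

import numpy as np

# Toy model (assumed, illustrative values only): energy per bit for uncoded M-QAM.
A = 1e-9         # assumed link/noise factor [J]: transmit energy/bit ~ (2^b - 1)/b * A
P_circuit = 0.1  # assumed circuit power while the transceiver is on [W]
B = 10e3         # assumed bandwidth [Hz]; roughly one symbol per 1/B seconds

b = np.arange(2, 17)              # bits per symbol: 4-QAM up to 65536-QAM
E_tx = (2.0**b - 1) / b * A       # grows with b: denser constellations need more SNR
E_circuit = P_circuit / (b * B)   # shrinks with b: less on-air time per bit
E_total = E_tx + E_circuit

print("best bits/symbol:", b[np.argmin(E_total)])

The shape is the point: transmission energy favors small constellations while circuit energy favors short on-times, so an interior optimum exists; the paper carries out this optimization with realistic circuit models.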
Capacity bounds via duality with applications to multiple-antenna systems on flat-fading channels
IEEE Trans. Inf. Theory, 2003
"... A general technique is proposed for the derivation of upper bounds on channel capacity. The technique is based on a dual expression for channel capacity where the maximization (of mutual information) over distributions on the channel input alphabet is replaced with a minimization (of average relativ ..."
Abstract

Cited by 147 (39 self)
 Add to MetaCart
(Show Context)
A general technique is proposed for the derivation of upper bounds on channel capacity. The technique is based on a dual expression for channel capacity in which the maximization (of mutual information) over distributions on the channel input alphabet is replaced with a minimization (of average relative entropy) over distributions on the channel output alphabet. Every choice of an output distribution, even one that is not the channel image of some input distribution, leads to an upper bound on mutual information. The proposed approach is used to study multiple-antenna flat-fading channels with memory, where the realization of the fading process is unknown at the transmitter and unknown (or only partially known) at the receiver. It is demonstrated that, at high signal-to-noise ratio (SNR), the capacity of such channels typically grows only double-logarithmically in the SNR. This is in stark contrast to the case with perfect receiver side information, where capacity grows logarithmically in the SNR. To better understand this phenomenon …
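Concretely, the duality bound described above states that for input distribution Q, channel law W(·|x), and any choice of output distribution R (my notation; the abstract states this in words),

    I(X; Y) \le \int D\big( W(\cdot \mid x) \,\big\|\, R \big)\, dQ(x),

with equality when R is the output distribution actually induced by Q; a good guess for R therefore yields a capacity upper bound without solving the input optimization.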
Bounds on capacity and minimum energy-per-bit for AWGN relay channels
IEEE Trans. Inf. Theory, 2006
"... Upper and lower bounds on the capacity and minimum energyperbit for general additive white Gaussian noise (AWGN) and frequencydivision AWGN (FDAWGN) relay channel models are established. First, the maxflow mincut bound and the generalized blockMarkov coding scheme are used to derive upper an ..."
Abstract

Cited by 108 (2 self)
 Add to MetaCart
Upper and lower bounds on the capacity and minimum energy-per-bit for general additive white Gaussian noise (AWGN) and frequency-division AWGN (FD-AWGN) relay channel models are established. First, the max-flow min-cut bound and the generalized block-Markov coding scheme are used to derive upper and lower bounds on capacity. These bounds are never tight for the general AWGN model and are tight only under certain conditions for the FD-AWGN model. Two coding schemes that do not require the relay to decode any part of the message are then investigated. First, it is shown that the “side-information coding scheme” can outperform the block-Markov coding scheme. It is also shown that the achievable rate of the side-information coding scheme can be improved via time sharing. In the second scheme, the relaying functions are restricted to be linear. The problem is reduced to a “single-letter” nonconvex optimization problem for the FD-AWGN model. The paper also establishes a relationship between the minimum energy-per-bit and the capacity of the AWGN relay channel. This relationship, together with the lower and upper bounds on capacity, is used to establish corresponding lower and upper bounds on the minimum energy-per-bit that do not differ by more than a factor of 1.45 for the FD-AWGN relay channel model and 1.7 for the general AWGN model.
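For reference, the max-flow min-cut (cut-set) bound invoked above takes the following standard form for the three-node relay channel, with source input X_1, relay input/output X_2, Y_2, and destination output Y_3 (my notation):

    C \le \max_{p(x_1, x_2)} \min\big\{ I(X_1, X_2; Y_3),\; I(X_1; Y_2, Y_3 \mid X_2) \big\},

the two terms corresponding to the cut around the destination and the cut around the source, respectively.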
Impact of antenna correlation on the capacity of multiantenna channels
IEEE Trans. Inf. Theory, 2005
"... This paper applies random matrix theory to obtain analytical characterizations of the capacity of correlated multiantenna channels. The analysis is not restricted to the popular separable correlation model, but rather it embraces a more general representation that subsumes most of the channel model ..."
Abstract

Cited by 101 (6 self)
 Add to MetaCart
This paper applies random matrix theory to obtain analytical characterizations of the capacity of correlated multiantenna channels. The analysis is not restricted to the popular separable correlation model; rather, it embraces a more general representation that subsumes most of the channel models that have been treated in the literature. For arbitrary signal-to-noise ratios (SNRs), the characterization is conducted in the regime of large numbers of antennas. For the low- and high-SNR regions, in turn, we uncover compact capacity expansions that are valid for arbitrary numbers of antennas and that shed insight on how antenna correlation impacts the tradeoffs among power, bandwidth, and rate.
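For context, the separable (Kronecker) correlation model mentioned above writes the channel matrix as (standard form, notation mine)

    \mathbf{H} = \mathbf{\Theta}_R^{1/2}\, \mathbf{H}_w\, \mathbf{\Theta}_T^{1/2},

where H_w has i.i.d. zero-mean unit-variance entries and Θ_R, Θ_T are the receive- and transmit-side correlation matrices; the paper's more general representation drops the requirement that the correlation factor into these two one-sided terms.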
Transporting information and energy simultaneously
Proc. 2008 IEEE Int. Symp. Inf. Theory
"... Abstract—The fundamental tradeoff between the rates at which energy and reliable information can be transmitted over a single noisy line is studied. Engineering inspiration for this problem is provided by powerline communication, RFID systems, and covert packet timing systems as well as communicatio ..."
Abstract

Cited by 95 (2 self)
 Add to MetaCart
(Show Context)
The fundamental tradeoff between the rates at which energy and reliable information can be transmitted over a single noisy line is studied. Engineering inspiration for this problem is provided by power-line communication, RFID systems, and covert packet-timing systems, as well as communication systems that scavenge received energy. A capacity-energy function is defined and a coding theorem is given. The capacity-energy function is a nonincreasing concave (∩) function. Capacity-energy functions for several channels are computed.
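A single-letter form consistent with the abstract's description (my notation, not quoted from the paper: b(x) is the energy delivered to the receiver by input x, and B the required energy per channel use) is

    C(B) = \max_{p(x)\,:\; \mathbb{E}[b(X)] \ge B} I(X; Y),

which is nonincreasing in B (a larger energy requirement shrinks the feasible input set) and concave by time sharing, matching the properties stated above.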
Network Information Flow with Correlated Sources
IEEE Trans. Inf. Theory (to appear), 2005
"... Consider the following network communication setup, originating in a sensor networking application we refer to as the “sensor reachback ” problem. We have a directed graph G = (V, E), where V = {v0v1...vn} and E ⊆ V × V. If (vi, vj) ∈ E, then node i can send messages to node j over a discrete memor ..."
Abstract

Cited by 94 (9 self)
 Add to MetaCart
(Show Context)
Consider the following network communication setup, originating in a sensor networking application we refer to as the “sensor reachback” problem. We have a directed graph G = (V, E), where V = {v_0, v_1, …, v_M} and E ⊆ V × V. If (v_i, v_j) ∈ E, then node i can send messages to node j over a discrete memoryless channel (X_ij, p_ij(y|x), Y_ij) of capacity C_ij. The channels are independent. Each node v_i observes a source of information U_i (i = 0, …, M), with joint distribution p(U_0, U_1, …, U_M). Our goal is to solve an incast problem in G: nodes exchange messages with their neighbors, and after a finite number of communication rounds, one of the M + 1 nodes (v_0 by convention) must have received enough information to reproduce the entire field of observations (U_0, U_1, …, U_M) with arbitrarily small probability of error. In this paper, we prove that such perfect reconstruction is possible if and only if H(U_S | U_{S^c}) < Σ_{i∈S, j∈S^c} C_ij for all S ⊆ {0, …, M}, S ≠ ∅, 0 ∈ S^c. Our main finding is that in this setup a general source/channel separation theorem holds, and that Shannon information behaves like a classical network flow, identical in nature to the flow of water in pipes. At first glance, it might seem surprising that separation holds in a …
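Displayed, the reachback condition above is a cut condition (my layout of the same inequality):

    H(U_S \mid U_{S^c}) < \sum_{i \in S,\; j \in S^c} C_{ij}
    \qquad \forall\, S \subseteq \{0, \dots, M\},\ S \neq \emptyset,\ 0 \in S^c,

i.e., across every cut separating a set of sources S from the collector v_0, the residual uncertainty about the far side must be strictly less than the total channel capacity crossing the cut.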
Optimum power allocation for parallel Gaussian channels with arbitrary input distributions
IEEE Trans. Inf. Theory, 2006
"... The mutual information of independent parallel Gaussiannoise channels is maximized, under an average power constraint, by independent Gaussian inputs whose power is allocated according to the waterfilling policy. In practice, discrete signaling constellations with limited peaktoaverage ratios (m ..."
Abstract

Cited by 94 (10 self)
 Add to MetaCart
The mutual information of independent parallel Gaussian-noise channels is maximized, under an average power constraint, by independent Gaussian inputs whose power is allocated according to the waterfilling policy. In practice, discrete signaling constellations with limited peak-to-average ratios (m-PSK, m-QAM, etc.) are used in lieu of the ideal Gaussian signals. This paper gives the power allocation policy that maximizes the mutual information over parallel channels with arbitrary input distributions. The policy admits a graphical interpretation, referred to as mercury/waterfilling, which generalizes the waterfilling solution and retains some of its intuition. The relationship between the mutual information of Gaussian channels and the nonlinear minimum mean-square error (MMSE) proves key to solving the power allocation problem.
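A compact statement of the mercury/waterfilling solution consistent with this description (my notation, as a sketch: γ_i is the gain of channel i, mmse_i(·) the MMSE of the given input on that channel, and η a level set by the total power constraint) is

    p_i = \frac{1}{\gamma_i}\, \mathrm{mmse}_i^{-1}\!\left( \min\left\{ 1, \frac{\eta}{\gamma_i} \right\} \right),

which reduces to classical waterfilling for Gaussian inputs, where mmse(s) = 1/(1 + s); this is where the mutual information / MMSE result listed earlier on this page enters the power allocation problem.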