Results 1–10 of 18
Massive MIMO Systems with Non-Ideal Hardware: Energy Efficiency, Estimation, and Capacity Limits
2014
Cited by 29 (6 self)
The use of large-scale antenna arrays can bring substantial improvements in energy and/or spectral efficiency to wireless systems, due to the greatly improved spatial resolution and array gain. Recent works in the field of massive multiple-input multiple-output (MIMO) show that the user channels decorrelate as the number of antennas at the base stations (BSs) increases, so strong signal gains are achievable with little inter-user interference. Since these results rely on asymptotics, it is important to investigate whether the conventional system models are reasonable in this asymptotic regime. This paper considers a new system model that incorporates general transceiver hardware impairments at both the BSs (equipped with large antenna arrays) and the single-antenna user equipments (UEs). As opposed to the conventional case of ideal hardware, we show that hardware impairments create finite ceilings on the channel estimation accuracy and on the downlink/uplink capacity of each UE. Surprisingly, the capacity is mainly limited by the hardware at the UE, while the impact of impairments in the large-scale arrays vanishes asymptotically and inter-user interference (in particular, pilot contamination) becomes negligible. Furthermore, we prove that the huge degrees of freedom offered by massive MIMO can be used to reduce the transmit power and/or to tolerate larger hardware impairments, which allows for the use of inexpensive and energy-efficient antenna elements.
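The finite-ceiling effect described in this abstract can be illustrated with a one-line rate model (a hypothetical scalar sketch, not the paper's multi-antenna analysis), in which distortion noise scales with the signal power:

```python
import math

def rate_with_impairments(p, kappa=0.1, noise=1.0):
    """Achievable rate (bits/channel use) when distortion power scales
    as kappa^2 * p; an illustrative scalar model only."""
    return math.log2(1.0 + p / (kappa ** 2 * p + noise))

# No matter how large the transmit power, the rate stays below the
# finite ceiling log2(1 + 1/kappa^2) imposed by the impairments.
ceiling = math.log2(1.0 + 1.0 / 0.1 ** 2)
for p in (1e2, 1e4, 1e6):
    assert rate_with_impairments(p) < ceiling
```

Raising `p` drives the rate toward, but never past, the hardware-imposed ceiling.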
Capacity limits and multiplexing gains of MIMO channels with transceiver impairments
IEEE Commun. Lett., 2013
Cited by 18 (7 self)
Abstract—The capacity of ideal MIMO channels has a high-SNR slope that equals the minimum of the numbers of transmit and receive antennas. This letter shows that physical MIMO channels behave fundamentally differently, due to distortions from transceiver impairments. The capacity has a finite upper limit that holds for any channel distribution and SNR. The high-SNR slope is thus zero, but the relative capacity gain of employing multiple antennas is at least as large as for ideal transceivers. Index Terms—Channel capacity, high-SNR analysis, multi-antenna communication, transceiver impairments.
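Restating the letter's headline claim in symbols (the notation here is assumed, with $N_t$ transmit and $N_r$ receive antennas and $C_{\max}$ the finite capacity limit the letter establishes):

\[
S_\infty \;\triangleq\; \lim_{\mathrm{SNR}\to\infty}\frac{C(\mathrm{SNR})}{\log_2 \mathrm{SNR}}
\;=\;
\begin{cases}
\min(N_t, N_r) & \text{ideal transceivers,}\\[2pt]
0 & \text{with transceiver impairments, since } C \le C_{\max} < \infty.
\end{cases}
\]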
The price of certainty: “waterslide curves” and the gap to capacity
IEEE Trans. Inform. Theory, submitted 2007. [Online]. Available: http://arxiv.org/abs/0801.0352
Cited by 14 (9 self)
The classical problem of reliable point-to-point digital communication is to achieve a low probability of error while keeping the rate high and the total power consumption small. Traditional information-theoretic analysis uses explicit models for the communication channel to study the power spent in transmission. The resulting bounds are expressed using ‘waterfall’ curves that convey the revolutionary idea that unboundedly low probabilities of bit error are attainable using only finite transmit power. However, practitioners have long observed that the decoder complexity, and hence the total power consumption, goes up when attempting to use sophisticated codes that operate close to the waterfall curve. This paper gives an explicit model for power consumption at an idealized decoder that allows for extreme parallelism in implementation. The decoder architecture is in the spirit of message passing and iterative decoding for sparse-graph codes, but is further idealized in that it allows for more computational power than is currently known to be implementable. Generalized sphere-packing arguments are used to derive lower bounds on the decoding power needed for any possible code, given only the gap from the Shannon limit and the desired probability of error. As the gap goes to zero, the energy per bit spent in decoding is shown to go to infinity. This suggests that to optimize total power, the transmitter should operate at a power that is strictly above the minimum demanded by the Shannon capacity. The lower bound is plotted to show an unavoidable tradeoff between the average bit-error probability and the total power used in transmission and decoding. In the spirit of conventional waterfall curves, we call these ‘waterslide’ curves. The bound is shown to be order-optimal by exhibiting codes that achieve similarly shaped waterslide curves under the proposed idealized model of decoding.
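The waterslide trade-off can be caricatured in a few lines (a toy cost model with made-up constants, not the paper's sphere-packing bound): transmit energy rises with the gap to capacity, decoding energy blows up as the gap shrinks, so total power is minimized strictly above the Shannon minimum.

```python
def total_energy(gap_db, decode_coeff=1.0):
    """Toy total energy per bit: the transmit part grows with the gap
    to the Shannon limit (in dB); the decode part diverges as gap -> 0."""
    transmit = 10 ** (gap_db / 10)     # transmit energy, normalized
    decode = decode_coeff / gap_db     # idealized decoding energy
    return transmit + decode

gaps = [0.1 * k for k in range(1, 51)]   # gaps from 0.1 to 5.0 dB
best_gap = min(gaps, key=total_energy)
# The optimum sits at a strictly positive gap: operating right at
# capacity would make the decoder's energy bill unbounded.
assert best_gap > 0.5
```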
Green Codes: Energy-Efficient Short-Range Communication
Cited by 9 (5 self)
Abstract — A green code attempts to minimize the total energy per bit required to communicate across a noisy channel. The classical information-theoretic approach neglects the energy expended in processing the data at the encoder and the decoder, and only minimizes the energy required for transmission. Since there is no cost associated with using more degrees of freedom, the traditionally optimal strategy is to communicate at rate zero. In this work, we use our recently proposed model for the power consumed by iterative message passing. Using generalized sphere-packing bounds on the decoding power, we find lower bounds on the total energy consumed in the transmissions and the decoding, allowing for freedom in the choice of the rate. We show that, contrary to the classical intuition, the rate for green codes is bounded away from zero for any given error probability. In fact, as the desired bit-error probability goes to zero, the optimizing rate for our bounds converges to 1.
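The abstract's conclusion — that the energy-optimal rate is bounded away from zero — can be reproduced with a toy per-bit cost (the functional forms here are assumptions for illustration, not the paper's bounds): transmit energy per bit approaches a constant as the rate R goes to 0, but decoding cost per information bit scales like 1/R, penalizing low rates.

```python
def energy_per_bit(rate, decode_coeff=1.0):
    """Toy model: Shannon-style transmit energy per bit at rate R over
    an AWGN-like channel, plus a per-bit decoding cost ~ 1/R."""
    transmit = (2 ** rate - 1) / rate   # tends to ln 2 as rate -> 0
    decode = decode_coeff / rate        # decoding work per information bit
    return transmit + decode

rates = [0.01 * k for k in range(1, 100)]
best_rate = min(rates, key=energy_per_bit)
# Contrary to classical intuition, rate 0 is not optimal once decoding
# energy is charged; here the optimum sits near the high-rate end.
assert best_rate > 0.5
```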
Unreliable and Resource-Constrained Decoding
2010
Cited by 8 (4 self)
Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources like materiel, memory, and energy. This thesis studies how constraining reliability and resources in the decoder limits the performance of communication systems. Five communication problems are investigated; broadly speaking, these involve decoders that are wiring-cost-limited, that are memory-limited, that are noisy, that fail catastrophically, …
On multipath fading channels at high SNR
In Proc. IEEE Internat. Symposium on Information Theory, 2008
Cited by 6 (1 self)
This work studies the capacity of multipath fading channels. A noncoherent channel model is considered, where neither the transmitter nor the receiver is cognizant of the realization of the path gains, but both are cognizant of their statistics. It is shown that if the delay spread is large, in the sense that the variances of the path gains decay exponentially or more slowly, then capacity is bounded in the signal-to-noise ratio (SNR). For such channels, capacity does not tend to infinity as the SNR tends to infinity. In contrast, if the variances of the path gains decay faster than exponentially, then capacity is unbounded in the SNR. It is further demonstrated that if the number of paths is finite, then at high SNR capacity grows double-logarithmically with the SNR, and the capacity pre-loglog, defined as the limiting ratio of capacity to log log SNR as SNR tends to infinity, is 1 irrespective of the number of paths.
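In the notation the abstract itself defines, the finitely-many-paths result reads:

\[
\lim_{\mathrm{SNR}\to\infty}\frac{C(\mathrm{SNR})}{\log\log \mathrm{SNR}} \;=\; 1,
\]

i.e., the capacity pre-loglog equals 1 irrespective of the number of paths.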
An Information-Theoretic Characterization of Channels That Die
Cited by 2 (0 self)
Abstract—Given the possibility of communication systems failing catastrophically, we investigate limits to communicating over channels that fail at random times. These channels are finite-state semi-Markov channels. We show that communication with arbitrarily small probability of error is not possible. Making use of results in finite-blocklength channel coding, we determine sequences of blocklengths that optimize the transmission volume communicated at fixed maximum message error probabilities. We provide a partial ordering of communication channels. A dynamic programming formulation is used to show the structural result that channel state feedback does not improve performance. Index Terms—Channel coding, communication channels, dynamic programming, finite-blocklength regime, reliability theory. “a communication channel … might be inoperative because of an amplifier failure, a broken or cut telephone wire, …” —I. M. Jacobs
A hot channel
In Proc. Inform. Theory Workshop (ITW), Lake Tahoe
Cited by 1 (1 self)
Abstract — Motivated by on-chip communication, a channel model is proposed in which the variance of the additive noise depends on a weighted sum of the past channel input powers. It is shown that, depending on the weights, the capacity can be either bounded or unbounded in the input power. A necessary condition and a sufficient condition for the capacity to be bounded are presented.
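A discrete-time version of this channel is easy to simulate (the exact weighting below is an assumption for illustration; only the qualitative model comes from the abstract): the noise variance at each time step adds a weighted sum of past input powers to a base variance.

```python
import random

def hot_channel(inputs, weights, base_var=1.0, seed=0):
    """Simulate y_k = x_k + n_k, where Var(n_k) equals base_var plus a
    weighted sum of past input powers (toy discrete-time model)."""
    rng = random.Random(seed)
    outputs = []
    for k, x in enumerate(inputs):
        heat = sum(w * inputs[k - 1 - j] ** 2
                   for j, w in enumerate(weights) if k - 1 - j >= 0)
        outputs.append(x + rng.gauss(0.0, (base_var + heat) ** 0.5))
    return outputs

# Geometrically decaying weights model heat that dissipates; slower
# decay pushes the channel toward the bounded-capacity regime.
y = hot_channel([1.0] * 6, weights=[0.5 ** j for j in range(4)])
assert len(y) == 6
```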
Performance Analysis of Noisy Message-Passing Decoding of Low-Density Parity-Check Codes
2010
Abstract—A noisy message-passing decoding scheme is considered for low-density parity-check (LDPC) codes over additive white Gaussian noise (AWGN) channels. The internal decoder noise is motivated by the quantization noise in digital implementations or the intrinsic noise of analog LDPC decoders. We model the decoder noise as AWGN on the exchanged messages in the iterative LDPC decoder. This is shown to render the message densities in the noisy LDPC decoder inconsistent. We then invoke a Gaussian approximation and formulate a two-dimensional density evolution analysis for the noisy LDPC decoder. This allows for tracking not only the mean but also the variance of the message densities, and hence quantifying the threshold of the LDPC code. According to the results, a decoder noise of unit variance increases the threshold for a regular code by 1.672 dB.
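The inconsistency the authors point to can be seen in two lines (a schematic Gaussian-approximation view, using the standard convention that a consistent LLR density satisfies variance = 2 × mean; the noise-injection model here is an assumed simplification):

```python
def add_decoder_noise(mean, var, sigma_d2):
    """Adding independent AWGN of variance sigma_d2 to an LLR message
    leaves its mean unchanged but inflates its variance."""
    return mean, var + sigma_d2

m, v = 4.0, 8.0                 # consistent Gaussian: var == 2 * mean
assert v == 2 * m
m2, v2 = add_decoder_noise(m, v, sigma_d2=1.0)
# Consistency is broken, so the mean alone no longer determines the
# density — hence the two-dimensional (mean, variance) density evolution.
assert v2 != 2 * m2
```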