Results 1–10 of 21
How Much Training is Needed in Multiple-Antenna Wireless Links?
 IEEE Trans. Inform. Theory
, 2000
Spectral Efficiency of CDMA with Random Spreading
 IEEE Trans. Inform. Theory
, 1999
Abstract

Cited by 220 (24 self)
The CDMA channel with randomly and independently chosen spreading sequences accurately models the situation where pseudonoise sequences span many symbol periods. Furthermore, its analysis provides a comparison baseline for CDMA channels with deterministic signature waveforms spanning one symbol period. We analyze the spectral efficiency (total capacity per chip) as a function of the number of users, spreading gain, and signal-to-noise ratio, and we quantify the loss in efficiency relative to an optimally chosen set of signature sequences and relative to multiaccess with no spreading. White Gaussian background noise and equal-power synchronous users are assumed. The following receivers are analyzed: a) optimal joint processing, b) single-user matched filtering, c) decorrelation, and d) MMSE linear processing.
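As a numerical illustration, the large-system spectral efficiencies analyzed in this paper can be sketched in a few lines of Python. The formulas below are transcribed from the paper's large-system results as we understand them (β = K/N users per chip); the function names and the numerical test point are ours:

```python
from math import sqrt, log2, e

def F(x, z):
    """Auxiliary function from the large-system analysis."""
    return (sqrt(x * (1 + sqrt(z))**2 + 1) - sqrt(x * (1 - sqrt(z))**2 + 1))**2

def c_matched_filter(beta, snr):
    # Single-user matched filter: multiple-access interference treated as noise.
    return beta * log2(1 + snr / (1 + beta * snr))

def c_decorrelator(beta, snr):
    # Decorrelating receiver (valid for beta < 1).
    return beta * log2(1 + snr * (1 - beta))

def c_mmse(beta, snr):
    # Linear MMSE receiver.
    return beta * log2(1 + snr - F(snr, beta) / 4)

def c_optimal(beta, snr):
    # Optimal joint processing.
    f = F(snr, beta)
    return (beta * log2(1 + snr - f / 4)
            + log2(1 + beta * snr - f / 4)
            - log2(e) * f / (4 * snr))
```

For example, at β = 1 and SNR = 10 dBW these evaluate in the expected order, optimal > MMSE > matched filter, quantifying the efficiency loss of each suboptimal receiver.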
Sum Capacity of a Gaussian Vector Broadcast Channel
 IEEE Trans. Inform. Theory
, 2002
Abstract

Cited by 193 (21 self)
This paper characterizes the sum capacity of a class of nondegraded Gaussian vector broadcast channels where a single transmitter with multiple transmit terminals sends independent information to multiple receivers. Coordination is allowed among the transmit terminals, but not among the different receivers. The sum capacity is shown to be a saddle-point of a Gaussian mutual information game, where a signal player chooses a transmit covariance matrix to maximize the mutual information, and a noise player chooses a fictitious noise correlation to minimize the mutual information. This result holds for the class of Gaussian channels whose saddle-point satisfies a full rank condition. Further, the sum capacity is achieved using a precoding method for Gaussian channels with additive side information noncausally known at the transmitter. The optimal precoding structure is shown to correspond to a decision-feedback equalizer that decomposes the broadcast channel into a series of single-user channels with interference pre-subtracted at the transmitter.
Bits Through Queues
 IEEE Trans. Inform. Theory
, 1996
Abstract

Cited by 75 (7 self)
The Shannon capacity of the single-server queue is analyzed. We show that the capacity is lowest, equal to e^{-1} nats per average service time, when the service time distribution is exponential. Further, this capacity cannot be increased by feedback. For general service time distributions, upper bounds for the Shannon capacity are determined. The capacities of the telephone signaling channel and of queues with information-bearing packets are also analyzed.
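To put the e^{-1} nats figure in concrete units, a minimal sketch (the service rate μ, in services per second, is a parameter we introduce for illustration):

```python
from math import e, log2

def queue_capacity_nats(mu):
    """Shannon capacity of the exponential-server timing channel,
    in nats per second, for service rate mu (services per second)."""
    return mu / e

def queue_capacity_bits(mu):
    # Convert nats to bits: 1 nat = log2(e) bits.
    return (mu / e) * log2(e)
```

At μ = 1 this gives about 0.368 nats/s, or roughly 0.531 bits per average service time, the minimum over all service distributions with the same mean.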
Capacity bounds for the Gaussian interference channel
 IEEE Trans. Inform. Theory
Abstract

Cited by 62 (4 self)
The capacity region of the two-user Gaussian Interference Channel (IC) is studied. Three classes of channels are considered: weak, one-sided, and mixed Gaussian ICs. For the weak Gaussian IC, a new outer bound on the capacity region is obtained that outperforms previously known outer bounds. The channel sum capacity for a certain range of the channel parameters is derived. It is shown that when Gaussian codebooks are used, the full Han-Kobayashi achievable rate region can be obtained by using the naive Han-Kobayashi achievable scheme over three frequency bands (equivalently, three subspaces). For the one-sided Gaussian IC, a new proof for Sato’s outer bound is presented. We derive the full Han-Kobayashi achievable rate region when Gaussian codebooks are utilized. For the mixed Gaussian IC, a new outer bound is obtained that again outperforms previously known outer bounds. For this case, the channel sum capacity for all ranges of parameters is derived. It is proved that the full Han-Kobayashi achievable rate region using Gaussian codebooks is equivalent to that of the one-sided Gaussian IC for a particular range of the channel gains.
An extremal inequality motivated by multiterminal information theoretic problems
, 2006
Abstract

Cited by 33 (3 self)
We prove a new extremal inequality, motivated by the vector Gaussian broadcast channel and by the problem of distributed source coding with a single quadratic distortion constraint. As a corollary, this inequality yields a generalization of the classical vector entropy-power inequality (EPI). As another corollary, this inequality sheds light on maximizing the differential entropy of the sum of two jointly distributed random variables.
The Secrecy Capacity Region of the Gaussian MIMO Multi-Receiver Wiretap Channel
, 2009
Abstract

Cited by 32 (18 self)
In this paper, we consider the Gaussian multiple-input multiple-output (MIMO) multi-receiver wiretap channel in which a transmitter wants to have confidential communication with an arbitrary number of users in the presence of an external eavesdropper. We derive the secrecy capacity region of this channel for the most general case. We first show that, even for the single-input single-output (SISO) case, existing converse techniques for the Gaussian scalar broadcast channel cannot be extended to this secrecy context, emphasizing the need for a new proof technique. Our new proof technique makes use of the relationships between the minimum mean-square error and the mutual information, and, equivalently, the relationships between the Fisher information and the differential entropy. Using the intuition gained from the converse proof of the SISO channel, we first prove the secrecy capacity region of the degraded MIMO channel, in which all receivers have the same number of antennas, and the noise covariance matrices can be arranged according to a positive semidefinite order. We then generalize this result to the aligned case, in which all receivers have the same number of antennas, but there is no order among the noise covariance matrices. We accomplish this task by using the channel enhancement technique. Finally, we find the secrecy capacity region of the general MIMO channel by using some limiting arguments on the secrecy capacity region of the aligned MIMO channel. We show that the capacity achieving coding scheme is a variant of dirty-paper coding with Gaussian signals.
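The MMSE–mutual-information relationship the converse relies on is the I-MMSE identity of Guo, Shamai, and Verdú: dI(γ)/dγ = mmse(γ)/2, with I in nats and γ the SNR. A quick numerical check for a scalar unit-variance Gaussian input, where I(γ) = ½ ln(1+γ) and mmse(γ) = 1/(1+γ) (the function names are ours):

```python
from math import log

def mutual_info(snr):
    # I(snr) for a unit-variance Gaussian input over an AWGN channel, in nats.
    return 0.5 * log(1 + snr)

def mmse(snr):
    # MMSE of estimating the input from the output at this SNR.
    return 1 / (1 + snr)

def i_mmse_gap(snr, h=1e-6):
    # Central-difference derivative of I minus mmse/2; should be ~0.
    deriv = (mutual_info(snr + h) - mutual_info(snr - h)) / (2 * h)
    return abs(deriv - mmse(snr) / 2)
```

The same identity holds for non-Gaussian inputs, which is what makes it useful in converse proofs; the Gaussian case is just the one where both sides are in closed form.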
On Capacity Scaling in Arbitrary Wireless Networks
 IEEE Trans. Inf. Theory
, 2004
Information Theoretic Proofs of Entropy Power Inequalities, arXiv:0704.1751v1 [cs.IT]
, 2007
Abstract

Cited by 14 (2 self)
While most useful information theoretic inequalities can be deduced from the basic properties of entropy or mutual information, up to now Shannon’s entropy power inequality (EPI) is an exception: existing information theoretic proofs of the EPI hinge on representations of differential entropy using either Fisher information or minimum mean-square error (MMSE), which are derived from de Bruijn’s identity. In this paper, we first present a unified view of these proofs, showing that they share two essential ingredients: 1) a data processing argument applied to a covariance-preserving linear transformation; 2) an integration over a path of a continuous Gaussian perturbation. Using these ingredients, we develop a new and brief proof of the EPI through a mutual information inequality, which replaces Stam and Blachman’s Fisher information inequality (FII) and an inequality for MMSE by Guo, Shamai, and Verdú used in earlier proofs. The result has the advantage of being very simple in that it relies only on the basic properties of mutual information. These ideas are then generalized to various extended versions of the EPI: Zamir and Feder’s generalized EPI for linear transformations of the random variables, Takano and Johnson’s EPI for dependent variables, Liu and Viswanath’s covariance-constrained EPI, and Costa’s concavity inequality for the entropy power. Index Terms—Data processing inequality, de Bruijn’s identity, differential entropy, divergence, entropy power inequality (EPI),
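For reference, the EPI in question states that for independent random vectors $X$ and $Y$ in $\mathbb{R}^n$ with densities,

```latex
e^{\frac{2}{n} h(X+Y)} \;\ge\; e^{\frac{2}{n} h(X)} + e^{\frac{2}{n} h(Y)},
```

with equality if and only if $X$ and $Y$ are Gaussian with proportional covariance matrices.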
Degraded Compound Multi-Receiver Wiretap Channels
, 2009
Abstract

Cited by 11 (9 self)
In this paper, we study the degraded compound multi-receiver wiretap channel, which consists of two groups of users and a group of eavesdroppers, where, if we pick an arbitrary user from each group of users and an arbitrary eavesdropper, they satisfy a certain Markov chain. We study two different communication scenarios for this channel. In the first scenario, the transmitter wants to send a confidential message to users in the first (stronger) group and a different confidential message to users in the second (weaker) group, where both messages need to be kept confidential from the eavesdroppers. For this scenario, we assume that there is only one eavesdropper. We obtain the secrecy capacity region for the general discrete memoryless channel model, the parallel channel model, and the Gaussian parallel channel model. For the Gaussian multiple-input multiple-output (MIMO) channel model, we obtain the secrecy capacity region when there is only one user in the second group. In the second scenario we study, the transmitter sends a confidential message to users in the first group which needs to be kept confidential from the second group of users and the eavesdroppers. Furthermore, the transmitter sends a different confidential message to users in the second group which needs to be kept confidential only from the eavesdroppers. For this scenario, we do not put any restriction on the number of eavesdroppers. As in the first scenario, we obtain the secrecy capacity region for the general discrete memoryless channel model, the parallel channel model, and the Gaussian parallel channel model. For the Gaussian MIMO channel model, we establish the secrecy capacity region when there is only one user in the second group.