Results 1–10 of 234
Opportunistic Beamforming Using Dumb Antennas
 IEEE Transactions on Information Theory
, 2002
Abstract

Cited by 802 (1 self)
Multiuser diversity is a form of diversity inherent in a wireless network, provided by independent time-varying channels across the different users. The diversity benefit is exploited by tracking the channel fluctuations of the users and scheduling transmissions to users when their instantaneous channel quality is near the peak. The diversity gain increases with the dynamic range of the fluctuations and is thus limited in environments with little scattering and/or slow fading. In such environments, we propose the use of multiple transmit antennas to induce large and fast channel fluctuations so that multiuser diversity can still be exploited. The scheme can be interpreted as opportunistic beamforming and we show that true beamforming gains can be achieved when there are sufficient users, even though very limited channel feedback is needed. Furthermore, in a cellular system, the scheme plays an additional role of opportunistic nulling of the interference created on users of adjacent cells. We discuss the design implications of implementing this scheme in a complete wireless system.
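The scheduling gain described in this abstract is easy to see numerically. Below is a minimal Monte Carlo sketch (our illustration, not from the paper; the function name and the Rayleigh-fading model are assumptions) showing that the average channel power gain of the scheduled user grows with the number of users:

```python
import random

def avg_best_channel(num_users, trials=2000, seed=0):
    """Average power gain of the scheduled user's channel when the
    scheduler always serves the user whose instantaneous channel
    quality is currently the strongest."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Rayleigh fading: the power gain |h|^2 is exponentially distributed.
        gains = [rng.expovariate(1.0) for _ in range(num_users)]
        total += max(gains)  # serve the instantaneously best user
    return total / trials
```

With one user the average gain is about 1; with eight users it approaches the harmonic number H_8 ≈ 2.7. This growing gap is the multiuser diversity gain that opportunistic beamforming tries to preserve in slow-fading, low-scattering environments.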
Cooperative strategies and capacity theorems for relay networks
 IEEE Transactions on Information Theory
, 2005
Abstract

Cited by 733 (19 self)
Coding strategies that exploit node cooperation are developed for relay networks. Two basic schemes are studied: the relays decode-and-forward the source message to the destination, or they compress-and-forward their channel outputs to the destination. The decode-and-forward scheme is a variant of multihopping, but in addition to having the relays successively decode the message, the transmitters cooperate and each receiver uses several or all of its past channel output blocks to decode. For the compress-and-forward scheme, the relays take advantage of the statistical dependence between their channel outputs and the destination's channel output. The strategies are applied to wireless channels, and it is shown that decode-and-forward achieves the ergodic capacity with phase fading if phase information is available only locally and if the relays are near the source node. The ergodic capacity coincides with the rate of a distributed antenna array with full cooperation, even though the transmitting antennas are not co-located. The capacity results generalize broadly, including to multi-antenna transmission with Rayleigh fading, single-bounce fading, certain quasi-static fading problems, cases where partial channel knowledge is available at the transmitters, and cases where local user cooperation is permitted. The results further extend to multi-source and multi-destination networks such as multiple-access and broadcast relay channels.

Index Terms—Antenna arrays, capacity, coding, multiuser channels, relay channels.
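As a toy illustration of the decode-and-forward constraint structure, here is a sketch under simplifying assumptions (a scalar full-duplex Gaussian relay channel with independent source and relay codebooks, so no coherent-combining gain; the function name is ours, not the paper's):

```python
import math

def df_rate(snr_sd, snr_sr, snr_rd):
    """Decode-and-forward rate (bits/channel use): the relay must be
    able to decode (source-relay term), while the destination collects
    energy from both the source and the relay (sum term)."""
    c = lambda snr: math.log2(1.0 + snr)  # Gaussian capacity formula
    return min(c(snr_sr), c(snr_sd + snr_rd))
```

When the relay is close to the source (large `snr_sr`), the first term stops binding and the rate matches that of a two-antenna transmitter, mirroring the abstract's observation that decode-and-forward achieves the rate of a distributed antenna array when the relays are near the source.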
Nested Linear/Lattice Codes for Structured Multiterminal Binning
, 2002
Abstract

Cited by 354 (15 self)
Network information theory promises high gains over simple point-to-point communication techniques, at the cost of higher complexity. However, the lack of structured coding schemes has limited the practical application of these concepts so far. One of the basic elements of a network code is the binning scheme. Wyner and other researchers proposed various forms of coset codes for efficient binning, yet these schemes were applicable only to lossless source (or noiseless channel) network coding. To extend the algebraic binning approach to lossy source (or noisy channel) network coding, recent work proposed the idea of nested codes, or more specifically, nested parity-check codes for the binary case and nested lattices in the continuous case. These ideas connect network information theory with the rich areas of linear codes and lattice codes, and have strong potential for practical applications. We review these recent developments and explore their tight relation to concepts such as combined shaping and precoding, coding for memories with defects, and digital watermarking. We also propose a few novel applications adhering to a unified approach.
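The nesting idea can be sketched in one dimension (our toy example, not from the text): take the fine lattice to be the integers Z and the coarse lattice to be qZ; the q cosets of qZ in Z are the bins.

```python
def bin_index(x, q=4):
    """Bin (coset) of a fine-lattice point x in Z: points differing by
    a multiple of q, i.e. by a coarse-lattice point, share a bin."""
    return x % q

def nearest_in_bin(y, b, q=4):
    """Decoding step of algebraic binning: return the fine-lattice
    point in coset b that is nearest to the observation y."""
    k = round((y - b) / q)
    return b + int(k) * q
```

Transmitting only the bin index b costs log2(q) bits; the receiver recovers the intended fine-lattice point from b together with its side observation y. This is the binning mechanism that the nested linear/lattice constructions above make structured and implementable.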
The capacity region of the Gaussian multiple-input multiple-output broadcast channel
 IEEE Transactions on Information Theory
, 2006
Abstract

Cited by 341 (7 self)
The Gaussian multiple-input multiple-output (MIMO) broadcast channel (BC) is considered. The dirty-paper coding (DPC) rate region is shown to coincide with the capacity region. To that end, a new notion of an enhanced broadcast channel is introduced and is used jointly with the entropy power inequality to show that a superposition of Gaussian codes is optimal for the degraded vector broadcast channel and that DPC is optimal for the nondegraded case. Furthermore, the capacity region is characterized under a wide range of input constraints, accounting, as special cases, for the total-power and the per-antenna power constraints.
Sum capacity of the vector Gaussian broadcast channel and uplink-downlink duality
 IEEE Transactions on Information Theory
, 2003
Abstract

Cited by 324 (2 self)
We characterize the sum capacity of the vector Gaussian broadcast channel by showing that the existing inner bound of Marton and the existing upper bound of Sato are tight for this channel. We exploit an intimate four-way connection between the vector broadcast channel, the corresponding point-to-point channel (where the receivers can cooperate), the multiple access channel (where the roles of transmitters and receivers are reversed), and the corresponding point-to-point channel (where the transmitters can cooperate).
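Sato's upper bound, one leg of the four-way connection, can be sketched numerically for a toy two-receiver, scalar-input channel (our own illustration; the names and the grid search are assumptions): the broadcast sum capacity cannot exceed the cooperative point-to-point capacity for any fictitious noise correlation, so the correlation may be chosen to minimize the bound.

```python
import math

def coop_capacity(h1, h2, p, rho):
    """Capacity when the two receivers cooperate, with unit-variance
    noises correlated by rho. Each receiver's marginal channel does not
    depend on rho, so every rho yields an upper bound on the broadcast
    sum capacity (Sato's argument)."""
    eff = (h1 * h1 + h2 * h2 - 2.0 * rho * h1 * h2) / (1.0 - rho * rho)
    return 0.5 * math.log2(1.0 + p * eff)

def sato_bound(h1, h2, p, steps=2000):
    """Tighten the bound by a grid search over the fictitious noise
    correlation -- the minimization the abstract shows is tight."""
    rhos = (-0.999 + 1.998 * k / steps for k in range(steps + 1))
    return min(coop_capacity(h1, h2, p, r) for r in rhos)
```

For identical channels (h1 = h2 = 1, p = 1) the minimized bound approaches 0.5 log2(1 + p) = 0.5 bit, the single-user capacity, consistent with the tightness result the abstract establishes.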
Sum Capacity of a Gaussian Vector Broadcast Channel
 IEEE Transactions on Information Theory
, 2002
Abstract

Cited by 280 (21 self)
This paper characterizes the sum capacity of a class of nondegraded Gaussian vector broadcast channels where a single transmitter with multiple transmit terminals sends independent information to multiple receivers. Coordination is allowed among the transmit terminals, but not among the different receivers. The sum capacity is shown to be a saddle point of a Gaussian mutual information game, where a signal player chooses a transmit covariance matrix to maximize the mutual information, and a noise player chooses a fictitious noise correlation to minimize the mutual information. This result holds for the class of Gaussian channels whose saddle point satisfies a full rank condition. Further, the sum capacity is achieved using a precoding method for Gaussian channels with additive side information noncausally known at the transmitter. The optimal precoding structure is shown to correspond to a decision-feedback equalizer that decomposes the broadcast channel into a series of single-user channels with interference pre-subtracted at the transmitter.
Information-theoretic analysis of information hiding
 IEEE Transactions on Information Theory
, 2003
Abstract

Cited by 269 (19 self)
An information-theoretic analysis of information hiding is presented in this paper, forming the theoretical basis for the design of information-hiding systems. Information hiding is an emerging research area which encompasses applications such as copyright protection for digital media, watermarking, fingerprinting, steganography, and data embedding. In these applications, information is hidden within a host data set and is to be reliably communicated to a receiver. The host data set is intentionally corrupted, but in a covert way, designed to be imperceptible to a casual analysis. Next, an attacker may seek to destroy this hidden information and, for this purpose, introduce additional distortion to the data set. Side information (in the form of cryptographic keys and/or information about the host signal) may be available to the information hider and to the decoder. We formalize these notions and evaluate the hiding capacity, which upper-bounds the rates of reliable transmission and quantifies the fundamental tradeoff between three quantities: the achievable information-hiding rates and the allowed distortion levels for the information hider and the attacker. The hiding capacity is the value of a game between the information hider and the attacker. The optimal attack strategy is the solution of a particular rate-distortion problem, and the optimal hiding strategy is the solution to a channel-coding problem. The hiding capacity is derived by extending the Gel'fand–Pinsker theory of communication with side information at the encoder. The extensions include the presence of distortion constraints, side information at the decoder, and an unknown communication channel. Explicit formulas for capacity are given in several cases, including Bernoulli and Gaussian problems, as well as the important special case of small distortions. In some cases, including the last two above, the hiding capacity is the same whether or not the decoder knows the host data set. It is shown that many existing information-hiding systems in the literature operate far below capacity.

Index Terms—Channel capacity, cryptography, fingerprinting, game theory, information hiding, network information theory,
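A minimal toy embedding makes these quantities concrete (least-significant-bit hiding is our illustrative example, far from the capacity-achieving schemes analyzed in the paper): the hiding rate is one bit per host sample, and the host corruption can be measured directly.

```python
def embed_lsb(host, bits):
    """Hide one bit per integer host sample in its least significant
    bit; the covert corruption is at most 1 per sample."""
    return [(h & ~1) | b for h, b in zip(host, bits)]

def extract_lsb(stego):
    """Decoder: read the hidden bits back out of the stego samples."""
    return [s & 1 for s in stego]

def distortion(a, b):
    """Mean squared error between the host and the corrupted data set."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
```

An attacker who simply re-randomizes the low-order bits destroys this scheme at negligible extra distortion, which is exactly why such ad hoc systems operate far below the hiding capacity.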
Discrete memoryless interference and broadcast channels with confidential messages: secrecy rate regions
 IEEE Transactions on Information Theory
, 2008
Abstract

Cited by 161 (12 self)
Discrete memoryless interference and broadcast channels in which independent confidential messages are sent to two receivers are considered. Confidential messages are transmitted to each receiver with perfect secrecy, as measured by the equivocation at the other receiver. In this paper, we derive inner and outer bounds for the achievable rate regions for these two communication systems.
The Gaussian Watermarking Game
, 2000
Abstract

Cited by 139 (9 self)
Watermarking models a copyright protection mechanism where an original source sequence or "covertext" is modified before distribution to the public in order to embed some extra information. The embedding should be transparent (i.e., the modified data sequence or "stegotext" should be similar to the covertext) and robust (i.e., the extra information should be recoverable even if the stegotext is modified further, possibly by a malicious "attacker"). We compute the coding capacity of the watermarking game for a Gaussian covertext and squared-error distortions. Both the public version of the game (covertext known to neither attacker nor decoder) and the private version of the game (covertext unknown to the attacker but known to the decoder) are treated. While the capacity of the former cannot, of course, exceed the capacity of the latter, we show that the two are, in fact, identical. These capacities depend critically on whether the distortion constraints are required to be met in expectation or with probability one. In the former case the coding capacity is zero, whereas in the latter it coincides with the value of related zero-sum dynamic mutual information games of complete and perfect information.

Parts of this work were presented at the 2000 Conference on Information Sciences and Systems (CISS '00), Princeton University, Princeton, NJ, March 15-17, 2000, and at the 2000 IEEE International Symposium on Information Theory (ISIT '00), Sorrento, Italy, June 25-30, 2000.
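The per-realization (probability-one) distortion constraint the abstract contrasts with the in-expectation constraint can be made concrete with a toy additive embedding and attack (all names and parameters here are illustrative assumptions, not the paper's coding scheme):

```python
import random

def watermark_round(n=10000, d1=1.0, d2=1.0, seed=1):
    """One round of the game on a Gaussian covertext: the embedder and
    the attacker each add noise sized so their empirical squared-error
    distortions stay inside the budgets d1 and d2 on this particular
    realization, not merely on average."""
    rng = random.Random(seed)
    cover = [rng.gauss(0.0, 1.0) for _ in range(n)]
    # back off from the budget (factor 0.9 on the standard deviation)
    # so the empirical distortion stays below it with high probability
    stego = [c + rng.gauss(0.0, 0.9 * d1 ** 0.5) for c in cover]   # embed
    forged = [s + rng.gauss(0.0, 0.9 * d2 ** 0.5) for s in stego]  # attack
    mse = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) / n
    return mse(cover, stego), mse(stego, forged)
```

Both empirical distortions land near 0.81 (the injected noise variance), safely inside the unit budgets; checking the budget on each realization rather than in expectation is the regime in which the paper's capacity is positive.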