Results 1–10 of 90
Discrete memoryless interference and broadcast channels with confidential messages: secrecy rate regions
 IEEE Transactions on Information Theory
, 2008
Cited by 162 (13 self)
Abstract — Discrete memoryless interference and broadcast channels in which independent confidential messages are sent to two receivers are considered. Confidential messages are transmitted to each receiver with perfect secrecy, as measured by the equivocation at the other receiver. In this paper, we derive inner and outer bounds for the achievable rate regions for these two communication systems.
The capacity of channels with feedback
 IEEE Trans. Information Theory
, 2009
Cited by 96 (4 self)
We introduce a general framework for treating channels with memory and feedback. First, we generalize Massey’s concept of directed information [23] and use it to characterize the feedback capacity of general channels. Second, we present coding results for Markov channels. This requires determining appropriate sufficient statistics at the encoder and decoder. Third, a dynamic programming framework for computing the capacity of Markov channels is presented. Fourth, it is shown that the average cost optimality equation (ACOE) can be viewed as an implicit single-letter characterization of the capacity. Fifth, scenarios …
On the Degrees-of-Freedom of the K-User Gaussian Interference Channel
 IEEE Transactions on Information Theory
, 2008
Cited by 79 (0 self)
The degrees-of-freedom of a K-user Gaussian interference channel (GIFC) has been defined to be the multiple of (1/2) log2 P at which the maximum sum of achievable rates grows with increasing P. In this paper, we establish that the degrees-of-freedom of three or more user, real, scalar GIFCs, viewed as a function of the channel coefficients, is discontinuous at points where all of the coefficients are nonzero rational numbers. More specifically, for all K > 2, we find a class of K-user GIFCs that is dense in the GIFC parameter space for which K/2 degrees-of-freedom are exactly achievable, and we show that the degrees-of-freedom for any GIFC with nonzero rational coefficients is strictly smaller than K/2. These results are proved using new connections with number theory and additive combinatorics.
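Under the normalization used in this abstract, a single-user real AWGN link has exactly one degree of freedom, which a quick numeric check illustrates. This sketch is illustrative only (the function names are mine, not from the paper):

```python
import math

def awgn_capacity(P):
    # Shannon capacity of a real scalar AWGN channel at SNR P, in bits per use.
    return 0.5 * math.log2(1 + P)

def dof_estimate(P):
    # Rate normalized by (1/2) log2 P, the prelog used to define degrees-of-freedom.
    return awgn_capacity(P) / (0.5 * math.log2(P))

for P in (1e2, 1e4, 1e8):
    print(P, dof_estimate(P))  # approaches 1.0 as P grows
```

The interference-channel result above concerns how this prelog behaves for K interfering links, where the answer depends delicately on the channel coefficients.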
Capacity theorems for wireless relay channels
 in 41st Allerton Conf. on Commun., Control and Computing
, 2003
Cited by 58 (4 self)
An achievable rate region for memoryless relay networks is developed based on an existing region for additive white Gaussian noise (AWGN) channels. It is shown that multi-hopping achieves the information-theoretic capacity of wireless relay networks if the relays are in a region near the source terminal, and if phase information is available at the receivers only.
Finite State Channels with Time-Invariant Deterministic Feedback
Cited by 49 (26 self)
We consider the capacity of discrete-time channels with feedback for the general case where the feedback is a time-invariant deterministic function of the output samples. Under the assumption that the channel states take values in a finite alphabet, we find a sequence of achievable rates and a sequence of upper bounds on the capacity. The achievable rates and the upper bounds are computable for any N, and the limits of the sequences exist. We show that when the probability of the initial state is positive for all the channel states, then the capacity is the limit of the achievable-rate sequence. We further show that when the channel is stationary, indecomposable and has no intersymbol interference (ISI), its capacity is given by the limit of the maximum of the (normalized) directed information between the input X^N and the output Y^N, i.e., C = lim_{N→∞} (1/N) max I(X^N → Y^N), where the maximization is taken over the causal conditioning probability Q(x^N ∥ z^{N−1}) defined in this paper. The main idea for obtaining the results is to add causality into Gallager’s results [1] on finite state channels. The capacity results are used to show that the source-channel separation theorem holds for time-invariant deterministic feedback, and that if the state of the channel is known both at the encoder and the decoder, then feedback does not increase capacity.
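The quantity maximized in this capacity expression is Massey’s directed information, I(X^N → Y^N) = Σ_{i=1}^N I(X^i; Y_i | Y^{i−1}), which can be evaluated directly from a joint distribution over short sequences. A minimal sketch (mine, not from the paper), with a sanity check on a noiseless identity channel:

```python
import math
from collections import defaultdict

def directed_information(joint, N):
    """I(X^N -> Y^N) = sum_i I(X^i; Y_i | Y^{i-1})  (Massey's directed information).

    joint: dict {(x_tuple, y_tuple): probability} over length-N sequences.
    """
    total = 0.0
    for i in range(1, N + 1):
        pxy = defaultdict(float)   # p(x^i, y^i)
        pxy_ = defaultdict(float)  # p(x^i, y^{i-1})
        py = defaultdict(float)    # p(y^i)
        py_ = defaultdict(float)   # p(y^{i-1})
        for (xs, ys), p in joint.items():
            pxy[(xs[:i], ys[:i])] += p
            pxy_[(xs[:i], ys[:i - 1])] += p
            py[ys[:i]] += p
            py_[ys[:i - 1]] += p
        for (xi, yi), p in pxy.items():
            # Conditional mutual information term I(X^i; Y_i | Y^{i-1}).
            total += p * math.log2(p * py_[yi[:i - 1]] / (pxy_[(xi, yi[:i - 1])] * py[yi]))
    return total

# Sanity check: Y_i = X_i (noiseless identity channel), X uniform i.i.d. binary.
# Then I(X^N -> Y^N) = N bits.
N = 2
joint = {(xs, xs): 0.25 for xs in [(0, 0), (0, 1), (1, 0), (1, 1)]}
print(directed_information(joint, N))  # → 2.0
```

For channels with feedback the joint distribution factors through the causal conditioning Q(x^N ∥ z^{N−1}), which is what the abstract’s maximization ranges over.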
Edge-cut bounds on network coding rates
 J. Network and Systems Management
, 2006
Cited by 48 (4 self)
Abstract—Active networks are network architectures with processors that are capable of executing code carried by the packets passing through them. A critical network management concern is the optimization of such networks, and tight bounds on their performance serve as useful design benchmarks. A new bound on communication rates is developed that applies to network coding, which is a promising active network application that has processors transmit packets that are general functions, for example a bitwise XOR, of selected received packets. The bound generalizes an edge-cut bound on routing rates by progressively removing edges from the network graph and checking whether certain strengthened d-separation conditions are satisfied. The bound improves on the cut-set bound, and its efficacy is demonstrated by showing that routing is rate-optimal for some commonly cited examples in the networking literature. Index Terms—Network capacity, network coding, active networks, d-separation
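The classical edge-cut bound that this work generalizes limits the routing rate between a source and a sink by the capacity of the smallest set of edges whose removal disconnects them. A brute-force sketch on a tiny unit-capacity graph (the example graph and function names are hypothetical, not from the paper):

```python
from itertools import combinations

def reachable(edges, s):
    # Set of nodes reachable from s along the given directed edges.
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for a, b in edges:
            if a == u and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def min_edge_cut(edges, s, t):
    # Smallest number of unit-capacity edges whose removal disconnects t from s.
    # Exponential-time enumeration; fine for toy graphs only.
    edges = list(edges)
    for k in range(len(edges) + 1):
        for cut in combinations(edges, k):
            remaining = [e for e in edges if e not in cut]
            if t not in reachable(remaining, s):
                return k
    return len(edges)

# Two edge-disjoint s->t paths: the min cut (and hence max routing rate,
# by max-flow/min-cut) is 2.
g = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]
print(min_edge_cut(g, "s", "t"))  # → 2
```

The bound in the abstract strengthens this idea for network coding by additionally checking d-separation conditions as edges are removed.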
Capacity of the Trapdoor Channel with Feedback
Cited by 40 (15 self)
We establish that the feedback capacity of the trapdoor channel is the logarithm of the golden ratio and provide a simple communication scheme that achieves capacity. As part of the analysis, we formulate a class of dynamic programs that characterize capacities of unifilar finite-state channels. The trapdoor channel is an instance that admits a simple analytic solution.
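The closed form quoted in the abstract is easy to evaluate numerically; this fragment just computes it (the value, about 0.694 bits per use, follows from the abstract's claim, not from any code in the paper):

```python
import math

# Feedback capacity of the trapdoor channel: the log of the golden ratio
# phi = (1 + sqrt(5)) / 2, the positive root of phi^2 = phi + 1.
phi = (1 + math.sqrt(5)) / 2
assert abs(phi ** 2 - (phi + 1)) < 1e-12  # defining property of the golden ratio

C_fb = math.log2(phi)
print(C_fb)  # ≈ 0.6942 bits per channel use
```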
Interpretations of Directed Information in Portfolio Theory, Data Compression, and Hypothesis Testing
 IEEE Trans. on Inf. Theory
Cited by 23 (9 self)
Abstract—We investigate the role of directed information in portfolio theory, data compression, and statistics with causality constraints. In particular, we show that directed information is an upper bound on the increment in growth rates of optimal portfolios in a stock market due to causal side information. This upper bound is tight for gambling in a horse race, which is an extreme case of stock markets. Directed information also characterizes the value of causal side information in instantaneous compression and quantifies the benefit of causal inference in joint compression of two stochastic processes. In hypothesis testing, directed information evaluates the best error exponent for testing whether a random process Y causally influences another process X or not. These results lead to a natural interpretation of directed information I(Y^n → X^n) as the amount of information that a random sequence Y^n = (Y1, Y2, ..., Yn) causally provides about another random sequence X^n = (X1, X2, ..., Xn). A new measure, directed lautum information, is also introduced and interpreted in portfolio theory, data compression, and hypothesis testing. Index Terms—Causal conditioning, causal side information, directed information, hypothesis testing, instantaneous compression, Kelly gambling, lautum information, portfolio theory.
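The horse-race case, where the bound is tight, can be checked by hand: with perfect side information the growth-rate increment equals the mutual information H(X). A minimal sketch under assumed fair odds on a uniform binary race (the example setup is mine, not from the paper):

```python
import math

def growth_rate(p, b, o):
    # Doubling rate of a horse race: W = sum_x p(x) * log2(b(x) * o(x)),
    # where p are win probabilities, b the bet fractions, o the payoff odds.
    return sum(px * math.log2(bx * ox) for px, bx, ox in zip(p, b, o))

p = [0.5, 0.5]   # uniform binary race
o = [2.0, 2.0]   # fair odds

# No side information: Kelly betting proportional to p gives W = 0 here.
W_no_side = growth_rate(p, p, o)

# Perfect side information Y = X: bet everything on the known winner each race.
W_side = math.log2(o[0])

print(W_side - W_no_side)  # increment = I(X; Y) = H(X) = 1.0 bit per race
```

With causal rather than perfect side information, the abstract's result says this increment is bounded by the directed information I(Y^n → X^n).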
Message and State Cooperation in Multiple Access Channels
Cited by 21 (4 self)
Abstract—We investigate the capacity of a multiple access channel with cooperating encoders where partial state information is known to each encoder and full state information is known to the decoder. The cooperation between the encoders has a twofold purpose: to generate empirical state coordination between the encoders, and to share information about the private messages that each encoder has. For two-way cooperation, this twofold purpose is achieved by double-binning, where the first layer of binning is used to generate the state coordination, similarly to two-way source coding, and the second layer of binning is used to transmit information about the private messages. The complete result provides the framework and perspective for addressing a complex level of cooperation that mixes states and messages in an optimal way. Index Terms—Channel state information, cooperating encoders, coordination, double-binning, message-state cooperation, multiple access channel, super-bin.