Results 1–10 of 101
Nested Linear/Lattice Codes for Structured Multiterminal Binning
, 2002
Abstract

Cited by 342 (14 self)
Network information theory promises high gains over simple point-to-point communication techniques, at the cost of higher complexity. However, the lack of structured coding schemes has limited the practical application of these concepts so far. One of the basic elements of a network code is the binning scheme. Wyner and other researchers proposed various forms of coset codes for efficient binning, yet these schemes were applicable only to lossless source (or noiseless channel) network coding. To extend the algebraic binning approach to lossy source (or noisy channel) network coding, recent work proposed the idea of nested codes, or more specifically, nested parity-check codes for the binary case and nested lattices in the continuous case. These ideas connect network information theory with the rich areas of linear codes and lattice codes, and have strong potential for practical applications. We review these recent developments and explore their tight relation to concepts such as combined shaping and precoding, coding for memories with defects, and digital watermarking. We also propose a few novel applications adhering to a unified approach.
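The binning idea the abstract describes can be illustrated with a toy one-dimensional nested lattice. The lattice steps (fine lattice (1/4)Z nested in coarse lattice Z) are illustrative assumptions, not parameters from the paper; the point is only that the "bin" of a quantized value is its coset modulo the coarse lattice.

```python
# A minimal 1-D sketch of nested-lattice binning (illustrative parameters):
# fine lattice L1 = (1/4)*Z, coarse lattice L2 = Z, so L2 is a sublattice
# of L1. The bin index of a fine-lattice point is its coset modulo L2.

def quantize(x, step):
    """Nearest point of the scalar lattice step*Z to x."""
    return step * round(x / step)

def bin_index(x, fine=0.25, coarse=1.0):
    """Quantize x to the fine lattice, then report its coset modulo the
    coarse lattice -- the 'bin' a decoder resolves using side information."""
    q = quantize(x, fine)
    return round((q % coarse) / fine)  # one of coarse/fine = 4 cosets here

# Each fine-lattice point falls into one of 4 bins, cyclically:
print([bin_index(x) for x in (0.0, 0.26, 0.55, 0.74, 1.1)])  # → [0, 1, 2, 3, 0]
```

Sending only the bin index costs log2(4) = 2 bits per sample instead of a full description of the quantized value; the decoder recovers the exact fine-lattice point from the bin plus its side information.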
The Distributed Karhunen-Loève Transform
 IEEE Trans. Inform. Theory
, 2003
Abstract

Cited by 89 (15 self)
The Karhunen-Loève transform (KLT) is a key element of many signal processing tasks, including approximation, compression, and classification. Many recent applications involve distributed signal processing where it is not generally possible to apply the KLT to the signal; rather, the KLT must be approximated in a distributed fashion.
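For reference, the centralized KLT that the distributed schemes approximate is just a projection onto the eigenvectors of the signal covariance. The sketch below uses an assumed toy covariance (not from the paper) and checks that the transform coefficients come out decorrelated.

```python
import numpy as np

# Centralized KLT sketch: estimate the covariance from samples, take its
# eigenvectors as the transform basis, and verify the coefficients are
# decorrelated. The mixing matrix below is an arbitrary illustrative choice.
rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 3)) @ np.array([[2.0, 0.5, 0.0],
                                               [0.0, 1.0, 0.3],
                                               [0.0, 0.0, 0.5]])

cov = np.cov(x, rowvar=False)            # sample covariance of the signal
eigvals, eigvecs = np.linalg.eigh(cov)   # KLT basis = covariance eigenvectors
y = (x - x.mean(axis=0)) @ eigvecs       # KLT coefficients

# Off-diagonal entries are (numerically) zero: the KLT decorrelates.
print(np.round(np.cov(y, rowvar=False), 2))
```

In the distributed setting of the paper, no single terminal observes the full vector x, so this eigendecomposition cannot be applied directly; each terminal must transform only its own components.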
Rate region of the quadratic Gaussian two-encoder source-coding problem
 IEEE Trans. Inf. Theory
, 2008
Abstract

Cited by 68 (6 self)
We determine the rate region of the quadratic Gaussian two-encoder source-coding problem. This rate region is achieved by a simple architecture that separates the analog and digital aspects of the compression. Furthermore, this architecture requires higher rates to send a Gaussian source than it does to send any other source with the same covariance. Our techniques can also be used to determine the sum rate of some generalizations of this classical problem. Our approach involves coupling the problem to a quadratic Gaussian “CEO problem.”
Lattices for distributed source coding: Jointly Gaussian sources and reconstruction of a linear function
 IEEE Transactions on Information Theory (submitted)
, 2007
Abstract

Cited by 44 (2 self)
Consider a pair of correlated Gaussian sources (X1, X2). Two separate encoders observe the two components and communicate compressed versions of their observations to a common decoder. The decoder is interested in reconstructing a linear combination of X1 and X2 to within a mean-square distortion of D. We obtain an inner bound to the optimal rate-distortion region for this problem. A portion of this inner bound is achieved by a scheme that reconstructs the linear function directly rather than reconstructing the individual components X1 and X2 first. This results in a better rate region for certain parameter values. Our coding scheme relies on lattice coding techniques in contrast to more prevalent random coding arguments used to demonstrate achievable rate regions in information theory. We then consider the case of linear reconstruction of K sources and provide an inner bound to the optimal rate-distortion region. Some parts of the inner bound are achieved using the following coding structure: lattice vector quantization followed by “correlated” lattice-structured binning.
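One algebraic property underlying the lattice approach can be shown in a toy scalar example (my own illustration, not the paper's scheme): lattice points are closed under addition, so a linear combination of quantized values is itself a lattice point whose error for the true linear combination is bounded by the quantization steps.

```python
# Toy scalar illustration of why lattices suit linear-function
# reconstruction: the sum of two lattice points is again a lattice point,
# and its error for x1 + x2 is at most one half-step per encoder.
# STEP and the sample values are arbitrary illustrative choices.

STEP = 0.1  # scalar lattice STEP * Z

def q(x):
    """Nearest point of STEP*Z to x."""
    return STEP * round(x / STEP)

x1, x2 = 0.343, -1.217
s = q(x1) + q(x2)                  # still a lattice point: no new codebook needed
assert abs(s / STEP - round(s / STEP)) < 1e-9   # s lies on the lattice
assert abs(s - (x1 + x2)) <= STEP               # error at most STEP/2 + STEP/2
print(s)
```

Random (unstructured) codebooks lack this closure property, which is why the lattice-based scheme can beat random-coding inner bounds for certain parameter values.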
The Wyner-Ziv Problem with Multiple Sources
 IEEE Transactions on Information Theory
, 2002
Abstract

Cited by 37 (2 self)
We consider the problem of separately compressing multiple sources in a lossy fashion for a decoder that has access to side information. For the case of a single source, this problem has been completely solved by Wyner and Ziv. For the case of two sources, we establish an achievable rate region, an inner bound to the rate region, and a partial converse. The partial converse applies to the case when the sources are conditionally independent given the side information, and it differs significantly from prior art in that it applies also to the symmetric case where all sources are encoded with respect to fidelity criteria. Moreover, we also show that in this special case, there is no difference between the minimum rate needed to encode the sources jointly, and the minimum sum rate needed for separate encoding.
Sending a Bivariate Gaussian Source over a Gaussian MAC
 in Proceedings IEEE International Symposium on Information Theory
Abstract

Cited by 35 (3 self)
We study the power-versus-distortion tradeoff for the transmission of a memoryless bivariate Gaussian source over a two-to-one Gaussian multiple-access channel with perfect causal feedback. In this problem, each of two separate transmitters observes a different component of a memoryless bivariate Gaussian source as well as the feedback from the channel output of the previous time instants. Based on the observed source sequence and the feedback, each transmitter then describes its source component to the common receiver via an average-power constrained Gaussian multiple-access channel. From the resulting channel output, the receiver wishes to reconstruct both source components with the least possible expected squared-error distortion. We study the set of distortion pairs that can be achieved by the receiver on the two source components. We present sufficient conditions and necessary conditions for the achievability of a distortion pair. These conditions are expressed in terms of the source correlation and of the signal-to-noise ratio (SNR) of the channel. In several cases the necessary conditions and sufficient conditions coincide. This allows us to show that if the channel SNR is below a certain threshold, then an uncoded transmission scheme that ignores the feedback is optimal. Thus, below this SNR threshold feedback is useless. We also derive the precise high-SNR asymptotics of optimal schemes.
The Rate Region of the Quadratic Gaussian Two-Terminal Source-Coding Problem. Submitted for publication. Available from http://www.arxiv.org/abs/cs.IT/0510095
Abstract

Cited by 34 (2 self)
We consider a problem in which two encoders each observe one component of a memoryless Gaussian vector-valued source. The encoders separately communicate with a decoder, which attempts to reproduce the vector-valued source subject to constraints on the expected squared error of each component. We determine the minimum sum rate needed to meet a pair of target distortions and thereby complete the determination of the rate region for this problem. The proof involves coupling the problem to a quadratic Gaussian “CEO problem.”
Side information aware coding strategies for sensor networks
 IEEE J. Selected Areas Commun
Abstract

Cited by 33 (0 self)
Abstract—We develop coding strategies for estimation under communication constraints in tree-structured sensor networks. The strategies have a modular and decentralized architecture. This promotes the flexibility, robustness, and scalability that wireless sensor networks need to operate in uncertain, changing, and resource-constrained environments. The strategies are based on a generalization of Wyner–Ziv source coding with decoder side information. We develop solutions for general trees, and illustrate our results in serial (pipeline) and parallel (hub-and-spoke) networks. Additionally, the strategies can be applied to other network information theory problems. They have a successive coding structure that gives an inherently less complex way to attain a number of prior results, as well as some novel results, for the Chief Executive Officer problem, multiterminal source coding, and certain classes of relay channels. Index Terms—Chief Executive Officer (CEO) problems, data fusion, distributed detection, distributed estimation, multiterminal source coding, rate distortion theory, relay channels, sensor networks, side information, Wyner–Ziv coding.
Sending a Bivariate Gaussian over a Gaussian MAC
, 901
Abstract

Cited by 30 (3 self)
We study the power-versus-distortion tradeoff for the distributed transmission of a memoryless bivariate Gaussian source over a two-to-one average-power limited Gaussian multiple-access channel. In this problem, each of two separate transmitters observes a different component of a memoryless bivariate Gaussian source. The two transmitters then describe their source component to a common receiver via an average-power constrained Gaussian multiple-access channel. From the output of the multiple-access channel, the receiver wishes to reconstruct each source component with the least possible expected squared-error distortion. Our interest is in characterizing the distortion pairs that are simultaneously achievable on the two source components. We present sufficient conditions and necessary conditions for the achievability of a distortion pair. These conditions are expressed as a function of the channel signal-to-noise ratio (SNR) and of the source correlation. In several cases the necessary conditions and sufficient conditions are shown to agree. In particular, we show that if the channel SNR is below a certain threshold, then an uncoded transmission scheme is optimal. We also derive the precise high-SNR asymptotics of an optimal scheme.
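The uncoded scheme referred to here is simple enough to simulate. The sketch below (with assumed parameters rho = 0.5, P = 1, N0 = 1, which are my choices, not the paper's) has each transmitter scale its source component to power P and send it; the receiver forms the linear MMSE estimate of one component from the noisy channel sum and the empirical distortion matches the closed-form value.

```python
import numpy as np

# Monte-Carlo sketch of uncoded transmission over a two-to-one Gaussian MAC.
# Parameters are illustrative assumptions: unit-variance components with
# correlation rho, per-transmitter power P, noise variance N0.
rng = np.random.default_rng(1)
n, rho, P, N0 = 100_000, 0.5, 1.0, 1.0

cov = np.array([[1.0, rho], [rho, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # bivariate source
s = np.sqrt(P) * x                                     # uncoded: scale to power P
y = s[:, 0] + s[:, 1] + np.sqrt(N0) * rng.standard_normal(n)  # MAC output

# Linear MMSE estimate of X1 from Y: coefficient a = E[X1 Y] / E[Y^2].
a = np.sqrt(P) * (1 + rho) / (2 * P * (1 + rho) + N0)
d1 = np.mean((x[:, 0] - a * y) ** 2)   # empirical distortion on X1
print(round(d1, 3))
```

The closed-form distortion for these parameters is 1 - a * sqrt(P) * (1 + rho) = 0.4375, and the empirical value lands within Monte-Carlo error of it.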