Results 1–10 of 38
The Rate Loss in the Wyner-Ziv Problem
 IEEE Trans. Inform. Theory
, 1996
Cited by 76 (15 self)
The rate-distortion function for source coding with side information at the decoder (the "Wyner-Ziv problem") is given in terms of an auxiliary random variable, which forms a Markov chain with the source and the side information. This Markov chain structure, typical to the solution of multiterminal source coding problems, corresponds to a loss in coding rate with respect to the conditional rate-distortion function, i.e., to the case where the encoder is fully informed. We show that for difference (or balanced) distortion measures, this loss is bounded by a universal constant, which is the minimax capacity of a suitable additive noise channel. Furthermore, in the worst case this loss is equal to the maximin redundancy over the rate-distortion function of the additive noise "test" channel. For example, the loss in the Wyner-Ziv problem is less than 0.5 bit per sample in the squared-error distortion case, and it is less than 0.22 bits for a binary source with Hamming distance. These resul...
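Stated compactly (a transcription of the bounds quoted above, in generic notation rather than the paper's): writing $R_{\mathrm{WZ}}(D)$ for the Wyner-Ziv rate and $R_{X|Y}(D)$ for the conditional rate-distortion function, the rate loss under squared-error distortion satisfies

```latex
L(D) \;=\; R_{\mathrm{WZ}}(D) - R_{X|Y}(D) \;<\; \tfrac{1}{2}\ \text{bit per sample},
```

and $L(D) < 0.22$ bit for a binary source under Hamming distortion.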
On Lattice Quantization Noise
 IEEE Trans. Inform. Theory
, 1996
Cited by 73 (20 self)
We present several results regarding the properties of a random vector, uniformly distributed over a lattice cell. This random vector is the quantization noise of a lattice quantizer at high resolution, or the noise of a dithered lattice quantizer at all distortion levels. We find that for the optimal lattice quantizers this noise is wide-sense stationary and white. Any desirable noise spectra may be realized by an appropriate linear transformation (“shaping”) of a lattice quantizer. As the dimension increases, the normalized second moment of the optimal lattice quantizer goes to 1/(2πe), and consequently the quantization noise approaches a white Gaussian process in the divergence sense. In entropy-coded dithered quantization, which can be modeled accurately as passing the source through an additive noise channel, this limit behavior implies that for large lattice dimension both the error and the bit rate approach the error and the information rate of an Additive White Gaussian Noise (AWGN) channel. Index Terms: Lattice, quantization noise, shaping, normalized second moment, divergence from Gaussianity.
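A one-dimensional sketch of the dithered-quantizer noise model described above (the step size and sample count are arbitrary illustrative choices): with subtractive dither, the error is uniform over a lattice cell and independent of the source, so its variance is Δ²/12 at every distortion level, and the scalar lattice's normalized second moment 1/12 can be compared against the 1/(2πe) limit approached by optimal lattices of growing dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
Delta = 0.5  # quantizer step; an arbitrary illustrative choice

def dithered_quantize(x, dither, step=Delta):
    # Subtractive dither: quantize x + dither on the lattice, then subtract the dither.
    return step * np.round((x + dither) / step) - dither

x = rng.normal(size=100_000)                    # source samples (distribution is irrelevant)
u = rng.uniform(-Delta / 2, Delta / 2, x.size)  # dither uniform over one lattice cell
noise = dithered_quantize(x, u) - x

# With subtractive dither the error is uniform over the cell and independent
# of the source, so its variance is Delta**2 / 12.
print(noise.var(), Delta**2 / 12)

# Normalized second moment: 1/12 for the scalar lattice vs. the 1/(2*pi*e)
# limit of optimal lattices as the dimension grows.
print(1 / 12, 1 / (2 * np.pi * np.e))
```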
On the asymptotic tightness of the Shannon lower bound
, 1997
Cited by 45 (15 self)
New results are proved on the convergence of the Shannon lower bound (SLB) to the rate-distortion function as the distortion decreases to zero. The key convergence result is proved using a fundamental property of informational divergence. As a corollary, it is shown that the SLB is asymptotically tight for norm-based distortions, when the source vector has a finite differential entropy and a finite αth moment for some α > 0, with respect to the given norm. Moreover, we derive a theorem of Linkov on the asymptotic tightness of the SLB for general difference distortion measures with more relaxed conditions on the source density. We also show that the SLB relative to a stationary source and single-letter difference distortion is asymptotically tight under very weak assumptions on the source distribution. Key words: rate-distortion theory, Shannon lower bound, difference distortion measures, stationary sources. T. Linder is with the Coordinated Science Laboratory, University of Illinoi...
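For squared-error distortion the SLB takes the familiar closed form R(D) ≥ h(X) − ½ log₂(2πeD). A small numeric sketch (variance and distortion values are arbitrary illustrative choices) showing that for a Gaussian source the bound coincides with the rate-distortion function, the case of exact tightness:

```python
import numpy as np

sigma2 = 1.0  # source variance (illustrative)
D = 0.1       # squared-error distortion level (illustrative), D <= sigma2

# Shannon lower bound under squared error: R(D) >= h(X) - 0.5*log2(2*pi*e*D)
h_X = 0.5 * np.log2(2 * np.pi * np.e * sigma2)  # differential entropy of N(0, sigma2)
slb = h_X - 0.5 * np.log2(2 * np.pi * np.e * D)

# Gaussian rate-distortion function, for which the SLB is tight at every D
rd = 0.5 * np.log2(sigma2 / D)
print(slb, rd)
```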
Multiterminal Source Coding with High Resolution
 IEEE Trans. Inform. Theory
, 1999
Cited by 37 (3 self)
We consider separate encoding and joint decoding of correlated continuous information sources, subject to a difference distortion measure. We first derive a multiterminal extension of the Shannon lower bound for the rate region. Then we show that this Shannon outer bound is asymptotically tight for small distortions. These results imply that the loss in the sum of the coding rates due to the separation of the encoders vanishes in the limit of high resolution. Furthermore, lattice quantizers followed by Slepian-Wolf lossless encoding are asymptotically optimal. We also investigate the high-resolution rate region in the remote coding case, where the encoders observe only noisy versions of the sources. For the quadratic Gaussian case, we establish a separation result to the effect that multiterminal coding aimed at reconstructing the noisy sources subject to the rate constraints, followed by estimation of the remote sources from these reconstructions, is optimal under certain regularity ...
Data-Hiding Codes
 Proc. IEEE
, 2005
Cited by 28 (3 self)
This tutorial paper reviews the theory and design of codes for hiding or embedding information in signals such as images, video, audio, graphics, and text. Such codes have also been called watermarking codes; they can be used in a variety of applications, including copyright protection for digital media, content authentication, media forensics, data binding, and covert communications. Some of these applications imply the presence of an adversary attempting to disrupt the transmission of information to the receiver; other applications involve a noisy, generally unknown, communication channel. Our focus is on the mathematical models, fundamental principles, and code design techniques that are applicable to data hiding. The approach draws from basic concepts in information theory, coding theory, game theory, and signal processing, and is illustrated with applications to the problem of hiding data in images. Keywords: Coding theory, data hiding, game theory, image processing, information theory, security, signal processing, watermarking.
Lattices for distributed source coding: Jointly Gaussian sources and reconstruction of a linear function
 IEEE TRANSACTIONS ON INFORMATION THEORY, SUBMITTED
, 2007
Cited by 28 (2 self)
Consider a pair of correlated Gaussian sources (X1, X2). Two separate encoders observe the two components and communicate compressed versions of their observations to a common decoder. The decoder is interested in reconstructing a linear combination of X1 and X2 to within a mean-square distortion of D. We obtain an inner bound to the optimal rate-distortion region for this problem. A portion of this inner bound is achieved by a scheme that reconstructs the linear function directly rather than reconstructing the individual components X1 and X2 first. This results in a better rate region for certain parameter values. Our coding scheme relies on lattice coding techniques, in contrast to the more prevalent random coding arguments used to demonstrate achievable rate regions in information theory. We then consider the case of linear reconstruction of K sources and provide an inner bound to the optimal rate-distortion region. Some parts of the inner bound are achieved using the following coding structure: lattice vector quantization followed by “correlated” lattice-structured binning.
High-Resolution Source Coding for Non-Difference Distortion Measures: Multidimensional Companding
 IEEE Trans. Inform. Theory
, 1999
Cited by 22 (3 self)
Entropy-coded vector quantization is studied using high-resolution multidimensional companding over a class of non-difference distortion measures. For distortion measures which are "locally quadratic", a rigorous derivation of the asymptotic distortion and entropy-coded rate of multidimensional companders is given, along with conditions for the optimal choice of the compressor function. This optimum compressor, when it exists, depends on the distortion measure but not on the source distribution. The rate-distortion performance of the companding scheme is studied using a recently obtained asymptotic expression for the rate-distortion function which parallels the Shannon lower bound for difference distortion measures. It is proved that the high-resolution performance of the scheme is arbitrarily close to the rate-distortion limit for large quantizer dimensions if the compressor function and the lattice quantizer used in the companding scheme are optimal, extending an analogous statement for...
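A minimal one-dimensional illustration of the companding idea described above (the tanh compressor, step size, and clipping range are arbitrary choices for the sketch, not the paper's optimal compressor): the source is passed through a compressor g, quantized uniformly, and mapped back through g⁻¹, so quantization cells are fine where g is steep and coarse where it is flat.

```python
import numpy as np

def compand_quantize(x, g, g_inv, step):
    """Quantize g(x) with a uniform quantizer of the given step,
    then map the result back through the inverse compressor."""
    return g_inv(step * np.round(g(x) / step))

# Hypothetical compressor choice, for illustration only.
g, g_inv = np.tanh, np.arctanh

rng = np.random.default_rng(1)
x = np.clip(rng.normal(size=1000), -2.0, 2.0)  # bounded support keeps g_inv stable
xhat = compand_quantize(x, g, g_inv, step=0.02)

# Effective cell width in the x domain is roughly step / g'(x):
# fine near 0, coarse near +/-2.
print(np.max(np.abs(x - xhat)))
```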
A ZeroDelay Sequential Scheme for Lossy Coding of Individual Sequences
, 2000
Cited by 21 (5 self)
We consider adaptive sequential lossy coding of bounded individual sequences when the performance is measured by the sequentially accumulated mean squared distortion. The encoder and the decoder are connected via a noiseless channel of capacity R and both are assumed to have zero delay. No probabilistic assumptions are made on how the sequence to be encoded is generated. For any bounded sequence of length n, the distortion redundancy is defined as the normalized cumulative distortion of the sequential scheme minus the normalized cumulative distortion of the best scalar quantizer of rate R which is matched to this particular sequence. We demonstrate the existence of a zero-delay sequential scheme which uses common randomization in the encoder and the decoder such that the normalized maximum distortion redundancy converges to zero at a rate n^{-1/5} log n as the length of the encoded sequence n increases without bound.
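In symbols (a paraphrase of the definition above, not the paper's exact notation): for a bounded sequence $x_1,\dots,x_n$, reproductions $\hat{x}_t$ produced by the zero-delay scheme, and $\mathcal{Q}_R$ the class of rate-$R$ scalar quantizers, the distortion redundancy is

```latex
\frac{1}{n}\sum_{t=1}^{n}\bigl(x_t-\hat{x}_t\bigr)^{2}
\;-\;
\min_{Q\in\mathcal{Q}_R}\;\frac{1}{n}\sum_{t=1}^{n}\bigl(x_t-Q(x_t)\bigr)^{2},
```

and the result is that, with common randomization, its supremum over bounded sequences decays as $n^{-1/5}\log n$.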
Achieving the Gaussian ratedistortion function by prediction
 IEEE Trans. Inf. Theory
, 2008
Cited by 19 (5 self)
The “water-filling” solution for the quadratic rate-distortion function of a stationary Gaussian source is given in terms of its power spectrum. This formula naturally lends itself to a frequency-domain “test-channel” realization. We provide an alternative time-domain realization for the rate-distortion function, based on linear prediction. The predictive test-channel has some interesting implications, including the optimality at all distortion levels of pre/post-filtered vector-quantized differential pulse code modulation (DPCM), and a duality relationship with decision-feedback equalization (DFE) for intersymbol interference (ISI) channels.
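The water-filling solution mentioned above can be sketched numerically: with a water level θ, spectral components above θ are coded at rate ½ log₂(S/θ) each and incur distortion θ, while components below θ are not coded at all. The spectral samples and water level below are arbitrary illustrative values.

```python
import numpy as np

# Discrete approximation of a power spectrum S(w) on a uniform grid (illustrative values).
S = np.array([4.0, 2.0, 1.0, 0.25])

def rate_for_theta(theta):
    # 0.5*log2(S/theta) on components above the water level, 0 elsewhere
    return 0.5 * np.mean(np.log2(np.maximum(S / theta, 1.0)))

def distortion_for_theta(theta):
    # distortion theta where coded, S itself where the component is dropped
    return np.mean(np.minimum(S, theta))

theta = 0.5
R, D = rate_for_theta(theta), distortion_for_theta(theta)
print(R, D)
```

Sweeping θ from 0 up to max(S) traces the whole rate-distortion curve of this spectrum.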
Information Rates of Pre/Post Filtered Dithered Quantizers
 IEEE Trans. Information Theory
, 1997
Cited by 19 (11 self)
We consider encoding of a source with pre-specified second-order statistics, but otherwise arbitrary, by Entropy-Coded Dithered (lattice) Quantization (ECDQ) incorporating linear pre- and post-filters. In the design and analysis of this scheme we utilize the equivalent additive-noise channel model of the ECDQ. For Gaussian sources and the squared-error distortion measure, the coding performance of the pre/post-filtered ECDQ approaches the rate-distortion function as the dimension of the (optimal) lattice quantizer becomes large; actually, in this case the proposed coding scheme simulates the optimal forward channel realization of the rate-distortion function. For non-Gaussian sources and finite-dimensional lattice quantizers, the coding rate exceeds the rate-distortion function by at most the sum of two terms: the "information divergence of the source from Gaussianity" and the "information divergence of the quantization noise from Gaussianity". Additional bounds on the excess rate of the s...
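The two-term bound quoted above can be transcribed compactly (generic notation, not the paper's): with $N$ the ECDQ quantization noise and a subscript $G$ denoting a Gaussian variable with the same second-order statistics,

```latex
R_{\mathrm{ECDQ}}(D) - R(D) \;\le\; D\bigl(X \,\|\, X_G\bigr) \;+\; D\bigl(N \,\|\, N_G\bigr),
```

where $D(\cdot\,\|\,\cdot)$ denotes information divergence; both terms vanish for Gaussian sources and high-dimensional optimal lattices, recovering the first claim in the abstract.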