Results 1–10 of 60
Quantization
 IEEE Trans. Inform. Theory, 1998
Abstract

Cited by 639 (11 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate-distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
Quantization Index Modulation: A Class of Provably Good Methods for Digital Watermarking and Information Embedding
 IEEE Trans. Inform. Theory, 1999
Abstract

Cited by 357 (12 self)
We consider the problem of embedding one signal (e.g., a digital watermark) within another "host" signal to form a third, "composite" signal. The embedding is designed to achieve efficient tradeoffs among the three conflicting goals of maximizing information-embedding rate, minimizing distortion between the host signal and composite signal, and maximizing the robustness of the embedding. We introduce new classes of embedding methods, termed quantization index modulation (QIM) and distortion-compensated QIM (DC-QIM), and develop convenient realizations in the form of what we refer to as dither modulation. Using deterministic models to evaluate digital watermarking methods, we show that QIM is "provably good" against arbitrary bounded and fully informed attacks, which arise in several copyright applications, and in particular, it achieves provably better rate-distortion-robustness tradeoffs than currently popular spread-spectrum and low-bit(s) modulation methods. Furthermore, we show that for some important classes of probabilistic models, DC-QIM is optimal (capacity-achieving) and regular QIM is near-optimal. These include both additive white Gaussian noise (AWGN) channels, which may be good models for hybrid transmission applications such as digital audio broadcasting, and mean-square-error-constrained attack channels that model private-key watermarking applications.
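As a concrete illustration of the dither-modulation realization described above, here is a minimal scalar sketch in Python (the step size and sample values are arbitrary choices for illustration, not from the paper): one bit selects between two uniform quantizers whose lattices are offset by half a step, and the decoder picks whichever lattice the received sample is closest to.

```python
def qim_embed(x, bit, step=8.0):
    """Embed one bit in sample x: quantize with the uniform quantizer
    whose reconstruction lattice is offset by step/2 when bit == 1."""
    offset = 0.0 if bit == 0 else step / 2.0
    return round((x - offset) / step) * step + offset

def qim_decode(y, step=8.0):
    """Decode by choosing the quantizer lattice nearest to y."""
    e0 = abs(y - round(y / step) * step)
    e1 = abs(y - (round((y - step / 2.0) / step) * step + step / 2.0))
    return 0 if e0 <= e1 else 1
```

In this toy version, decoding stays correct as long as the perturbation of the composite sample is below step/4; the paper quantifies robustness far more generally, including against fully informed attacks.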
The Rate Loss in the WynerZiv Problem
 IEEE Trans. Inform. Theory, 1996
Abstract

Cited by 76 (15 self)
The rate-distortion function for source coding with side information at the decoder (the "Wyner–Ziv problem") is given in terms of an auxiliary random variable, which forms a Markov chain with the source and the side information. This Markov chain structure, typical to the solution of multiterminal source coding problems, corresponds to a loss in coding rate with respect to the conditional rate-distortion function, i.e., to the case where the encoder is fully informed. We show that for difference (or balanced) distortion measures, this loss is bounded by a universal constant, which is the minimax capacity of a suitable additive noise channel. Furthermore, in the worst case this loss is equal to the maximin redundancy over the rate-distortion function of the additive noise "test" channel. For example, the loss in the Wyner–Ziv problem is less than 0.5 bit per sample in the squared-error distortion case, and it is less than 0.22 bits for a binary source with Hamming distance. These resul...
The duality between information embedding and source coding with side information and some applications
 in Proc. IEEE Int. Symp. Information Theory, 2001
Abstract

Cited by 68 (11 self)
Aspects of the duality between the information-embedding problem and the Wyner–Ziv problem of source coding with side information at the decoder are developed and used to establish a spectrum of new results on these and related problems, with implications for a number of important applications. The single-letter characterization of the information-embedding problem is developed and related to the corresponding characterization of the Wyner–Ziv problem, both of which correspond to optimization of a common mutual information difference. Dual variables and dual Markov conditions are identified, along with the dual role of noise and distortion in the two problems. For a Gaussian context with quadratic distortion metric, a geometric interpretation of the duality is developed. From such insights, we develop a capacity-achieving information-embedding system based on nested lattices. We show the resulting encoder–decoder ...
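The two single-letter characterizations referred to can be sketched side by side in their standard forms (conventional notation, with U the auxiliary random variable and S the state known at the encoder; this is a summary of well-known results, not a quote from the paper):

```latex
% Wyner-Ziv rate-distortion function (U - X - Y Markov, decoder \hat{x}(u,y)):
R^{\mathrm{WZ}}(d) = \min_{p(u|x),\,\hat{x}(u,y)} \bigl[ I(X;U) - I(Y;U) \bigr]

% Information embedding / Gel'fand-Pinsker capacity (state S at the encoder):
C = \max_{p(u|s),\,x(u,s)} \bigl[ I(U;Y) - I(U;S) \bigr]
```

Both problems optimize a difference of mutual informations, with minimization and maximization, and noise and distortion, exchanging roles; this exchange is the duality the abstract describes.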
A Vector Quantization Approach to Universal Noiseless Coding and Quantization
 IEEE Trans. Inform. Theory, 1996
Abstract

Cited by 44 (10 self)
A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which the first-stage code can be regarded as a vector quantizer that "quantizes" the input data of length n to one of a fixed collection of block codes. We apply the generalized Lloyd algorithm to the first-stage quantizer, using induced measures of rate and distortion, to design locally optimal two-stage codes. On a source of medical images, two-stage variable-rate vector quantizers designed in this way outperform standard (one-stage) fixed-rate vector quantizers by over 9 dB. The tail of the operational distortion-rate function of the first-stage quantizer determines the optimal rate of convergence of the redundancy of a universal sequence of two-stage codes. We show that there exist two-stage universal noiseless codes, fixed-rate quantizers, and variable-rate quantizers whose per-letter rate and distortion redundancies converge to zero as (k/2)n^-1 log n, when the universe of sources has finite dimension k. This extends the achievability part of Rissanen's theorem from universal noiseless codes to universal quantizers. Further, we show that the redundancies converge as O(n^-1) when the universe of sources is countable, and as O(n^(-1+ε)) when the universe of sources is infinite-dimensional, under appropriate conditions. Index Terms—Two-stage, adaptive, compression, minimum description length, clustering.
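A toy Python sketch of the first-stage choice (codebooks and data are made up for illustration, not from the paper): the first stage "quantizes" each block to the index of the best codebook in the collection, and the second stage codes the block with that codebook.

```python
def quantize(block, codebook):
    """Second stage: nearest-codeword coding of one block."""
    recon = [min(codebook, key=lambda c: (s - c) ** 2) for s in block]
    dist = sum((s - r) ** 2 for s, r in zip(block, recon))
    return recon, dist

def two_stage_encode(block, codebooks):
    """First stage: pick the codebook with least distortion on this block;
    the rate cost of the choice is log2(len(codebooks)) bits per block."""
    idx = min(range(len(codebooks)),
              key=lambda i: quantize(block, codebooks[i])[1])
    return idx, quantize(block, codebooks[idx])[0]

# hypothetical collection: one codebook tuned to small-amplitude blocks,
# one to large-amplitude blocks
codebooks = [[-1.0, 0.0, 1.0], [-10.0, 0.0, 10.0]]
```

The generalized Lloyd iteration the paper applies would then re-optimize each codebook for the blocks assigned to it, alternating assignment and re-design until a local optimum is reached.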
Capacity and Lattice Strategies for Cancelling Known Interference
 2000
Abstract

Cited by 39 (7 self)
We derive capacity formulas and strategies for transmission over the channel Y = X + S + N, (1/n)E‖X‖² ≤ P_X, where the interference S is a (strong) stochastic process or an arbitrarily varying sequence, known causally or with finite anticipation at the transmitter but not at the receiver. In the causal side information case, we show that strategies associated with entropy-constrained quantizers provide lower and upper bounds on the capacity. At high SNR conditions, i.e., if N is weak relative to the power constraint P_X, these bounds coincide, the optimum strategies take the form of scalar lattice translations, and the capacity loss due to not having S at the receiver is shown to be exactly the "shaping gain" 0.254 bit. We also extend these ideas to any SNR and to noncausal side information, by incorporating "MMSE weighting", and by using k-dimensional lattices. For Gaussian N, the capacity loss of this scheme is upper bounded by 0.5 log(2πe G_k), where G_k is the normalize...
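A minimal scalar sketch of the lattice-translation idea in Python (no MMSE weighting, unit scaling, and illustrative numbers; a simplification of the scheme in the paper): the transmitter pre-subtracts the known interference modulo the lattice spacing, so its transmit power stays bounded no matter how strong S is, and the receiver needs no knowledge of S.

```python
def cmod(t, delta):
    """Centered modulo: reduce t into [-delta/2, delta/2)."""
    return (t + delta / 2.0) % delta - delta / 2.0

def precode(v, s, delta):
    """Transmit x = [v - s] mod delta; |x| <= delta/2 regardless of s."""
    return cmod(v - s, delta)

def receive(y, delta):
    """y = x + s + n, so [y] mod delta = [v + n] mod delta ~ v for small n."""
    return cmod(y, delta)
```

For a message point v in [-delta/2, delta/2), an interference far larger than the power constraint is cancelled exactly, and only the channel noise survives the receiver's modulo operation.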
The case for structured random codes in network capacity theorems
 in Proceedings of the IEEE Information Theory Workshop (ITW 2007), Lake Tahoe, CA, 2007
Abstract

Cited by 36 (10 self)
Random coding arguments are the backbone of most channel capacity achievability proofs. In this paper, we show that in their standard form, such arguments are insufficient for proving some network capacity theorems: structured coding arguments, such as random linear or lattice codes, attain higher rates. Historically, structured codes have been studied as a stepping stone to practical constructions. However, Körner and Marton demonstrated their usefulness for capacity theorems through the derivation of the optimal rate region of a distributed functional source coding problem. Here, we use multicasting over finite field and Gaussian multiple-access networks as canonical examples to demonstrate that even if we want to send bits over a network, structured codes succeed where simple random codes fail. Beyond network coding, we also consider distributed computation over noisy channels and a special relay-type problem.
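The Körner–Marton construction mentioned above can be demonstrated in a few lines of Python, using the Hamming(7,4) code as a stand-in for a good linear code (a toy sketch, not from the paper): because the syndrome map is linear, XOR-ing the two encoders' syndromes yields the syndrome of z = x ⊕ y, so the decoder can recover a sparse modulo-two sum without either source being sent in full.

```python
# parity-check matrix of Hamming(7,4): column j is the binary expansion of j
H = [[(c >> r) & 1 for c in range(1, 8)] for r in range(3)]

def syndrome(v):
    """Linear map v -> Hv over GF(2); each encoder sends only this."""
    return tuple(sum(h * b for h, b in zip(row, v)) % 2 for row in H)

def decode_sum(sx, sy):
    """If x and y disagree in at most one position, the XOR of their
    syndromes is the binary index of that position."""
    s = [a ^ b for a, b in zip(sx, sy)]
    z = [0] * 7
    if any(s):
        z[s[0] + 2 * s[1] + 4 * s[2] - 1] = 1
    return z
```

Each encoder sends 3 bits instead of 7, yet the decoder computes x ⊕ y exactly whenever the sources differ in at most one place, which an unstructured random-binning argument does not achieve at this rate.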
Lattice strategies for the dirty multiple access channel
 in Proceedings of IEEE International Symposium on Information Theory, 2007
Abstract

Cited by 32 (7 self)
A generalization of the Gaussian dirty-paper problem to a multiple access setup is considered. There are two additive interference signals, one known to each transmitter but none to the receiver. The rates achievable using Costa’s strategies (i.e. by a random binning scheme induced by Costa’s auxiliary random variables) vanish in the limit when the interference signals are strong. In contrast, it is shown that lattice strategies (“lattice precoding”) can achieve positive rates independent of the interferences, and in fact in some cases which depend on the noise variance and power constraints they are optimal. In particular, lattice strategies are optimal in the limit of high SNR. It is also shown that the gap between the achievable rate region and the capacity region is at most 0.167 bit. Thus, the dirty MAC is another instance of a network setup, like the Körner–Marton modulo-two sum problem, where linear coding is potentially better than random binning. Lattice transmission schemes and conditions for optimality for the asymmetric case, where there is only one interference which is known to one of the users (who serves as a “helper” to the other user), and for the “common interference” case are also derived. In the former case the gap between the helper achievable rate and its capacity is at most 0.085 bit.
Nested Linear / Lattice Codes for Wyner–Ziv Encoding
 1998
Abstract

Cited by 30 (5 self)
We construct structured codes for lossy compression with uncoded side information at the decoder. These codes aim at achieving the optimal performance, given by the Wyner–Ziv (WZ) rate-distortion function, for two typical settings. For binary symmetric source / side-information, we show that a pair of nested linear binary codes, each individually "good" in the classical sense, approaches the WZ function relative to the Hamming distortion measure. For the quadratic Gaussian case we show a similar result at high SNR, using a pair of nested lattices and their associated lattice-decoding functions. I. Introduction: Wyner and Ziv [6] consider coding a source X, relative to some distortion measure ρ(x, x̂), where the decoder, which outputs X̂ (but not the encoder), has access to a correlated source Y, called "side information". They derive a "single-letter" expression for the minimum coding rate, R^WZ_{X|Y}(d), called the "Wyner–Ziv function", under the constraint that the average distor...
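A toy binary sketch in Python of the syndrome/coset idea behind such nested constructions, using the Hamming(7,4) code (illustrative only, and lossless rather than lossy for simplicity): the encoder sends the 3-bit syndrome of its length-7 source block, and the decoder searches the indicated coset for the word nearest the side information.

```python
from itertools import product

# parity-check matrix of Hamming(7,4): column j is the binary expansion of j
H = [[(c >> r) & 1 for c in range(1, 8)] for r in range(3)]

def syndrome(v):
    """Encoder output: the coset index Hv over GF(2)."""
    return tuple(sum(h * b for h, b in zip(row, v)) % 2 for row in H)

def wz_decode(s, y):
    """Brute-force the coset {v : Hv = s} for the member closest to y
    (feasible here because the toy block length is only 7)."""
    best = None
    for v in product([0, 1], repeat=7):
        if syndrome(v) == s:
            d = sum(a ^ b for a, b in zip(v, y))
            if best is None or d < best[0]:
                best = (d, list(v))
    return best[1]
```

If x and y differ in at most one position, the decoder recovers x exactly from 3 bits instead of 7; the nested lattices in the paper play the analogous role in the quadratic Gaussian setting.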
Data-Hiding Codes
 Proc. IEEE, 2005
Abstract

Cited by 28 (3 self)
This tutorial paper reviews the theory and design of codes for hiding or embedding information in signals such as images, video, audio, graphics, and text. Such codes have also been called watermarking codes; they can be used in a variety of applications, including copyright protection for digital media, content authentication, media forensics, data binding, and covert communications. Some of these applications imply the presence of an adversary attempting to disrupt the transmission of information to the receiver; other applications involve a noisy, generally unknown, communication channel. Our focus is on the mathematical models, fundamental principles, and code design techniques that are applicable to data hiding. The approach draws from basic concepts in information theory, coding theory, game theory, and signal processing, and is illustrated with applications to the problem of hiding data in images. Keywords—Coding theory, data hiding, game theory, image processing, information theory, security, signal processing, watermarking.