Results 1–10 of 22
Quantization
IEEE Trans. Inform. Theory, 1998
Abstract

Cited by 639 (11 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate-distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
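Bennett's high-resolution analysis mentioned above can be illustrated numerically: for a fine uniform quantizer with step size Δ, the mean-squared quantization error approaches Δ²/12, essentially independently of the source density. A minimal sketch (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)    # Gaussian source samples

delta = 0.05                          # small step => high-resolution regime
xq = delta * np.round(x / delta)      # mid-tread uniform quantizer

mse = np.mean((x - xq) ** 2)
# In the high-resolution regime the error is roughly uniform on
# [-delta/2, delta/2], so mse should be close to delta**2 / 12.
print(mse, delta ** 2 / 12)
```

With a step this small relative to the source variance, the empirical MSE lands within a few percent of Δ²/12.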
High-Resolution Source Coding for Non-Difference Distortion Measures: Multidimensional Companding
IEEE Trans. Inform. Theory, 1999
Abstract

Cited by 22 (3 self)
Entropy-coded vector quantization is studied using high-resolution multidimensional companding over a class of non-difference distortion measures. For distortion measures which are "locally quadratic," a rigorous derivation of the asymptotic distortion and entropy-coded rate of multidimensional companders is given, along with conditions for the optimal choice of the compressor function. This optimum compressor, when it exists, depends on the distortion measure but not on the source distribution. The rate-distortion performance of the companding scheme is studied using a recently obtained asymptotic expression for the rate-distortion function which parallels the Shannon lower bound for difference distortion measures. It is proved that the high-resolution performance of the scheme is arbitrarily close to the rate-distortion limit for large quantizer dimensions if the compressor function and the lattice quantizer used in the companding scheme are optimal, extending an analogous statement for...
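In one dimension the companding structure reads: compress the input with an invertible map, quantize uniformly, then expand with the inverse map. A hedged sketch using the classical μ-law compressor (an illustrative choice, not the paper's optimal compressor; all parameters invented):

```python
import numpy as np

mu = 255.0

def compress(x):   # mu-law compressor on [-1, 1]
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def expand(y):     # exact inverse of the compressor
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def uniform_q(y, delta):   # uniform quantizer with step delta
    return delta * np.round(y / delta)

rng = np.random.default_rng(1)
x = np.clip(rng.laplace(scale=0.1, size=200_000), -1, 1)  # peaked source

delta = 2.0 / 256  # 8-bit uniform quantizer on [-1, 1]
mse_compand = np.mean((x - expand(uniform_q(compress(x), delta))) ** 2)
mse_uniform = np.mean((x - uniform_q(x, delta)) ** 2)
print(mse_compand, mse_uniform)  # companding wins for this peaked source
```

Because the compressor allocates finer effective cells near the origin, where this source concentrates, the companded quantizer achieves noticeably lower MSE at the same rate.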
On Source Coding with Side-Information-Dependent Distortion Measures
IEEE Trans. Inform. Theory, 2000
Abstract

Cited by 14 (3 self)
High-resolution bounds in lossy coding of a real memoryless source are considered when side information is present. Let X be a "smooth" source and let Y be the side information. First we treat the case when both the encoder and the decoder have access to Y, and we establish an asymptotically tight (high-resolution) formula for the conditional rate-distortion function R_{X|Y}(D) for a class of locally quadratic distortion measures which may be functions of the side information. We then consider the case when only the decoder has access to the side information (i.e., the "Wyner-Ziv problem"). For side-information-dependent distortion measures, we give an explicit formula which tightly approximates the Wyner-Ziv rate-distortion function R^{WZ}(D) for small D under some assumptions on the joint distribution of X and Y. These results demonstrate that for side-information-dependent distortion measures the rate loss R^{WZ}(D) − R_{X|Y}(D) can be bounded away from zero in the limit of small D. This contrasts the case of distortion measures which do not depend on the side information, where the rate loss vanishes as D → 0.
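For contrast, in the jointly Gaussian case with plain squared-error distortion (a measure that does not depend on the side information), the conditional rate-distortion function has a closed form, and it is known that the Wyner-Ziv rate loss is zero there. A small worked sketch under those assumptions (parameter values invented):

```python
import math

var_x, rho = 1.0, 0.9                  # Var(X) and correlation of (X, Y)
var_cond = var_x * (1 - rho ** 2)      # Var(X | Y)

def rate_conditional(D):
    # R_{X|Y}(D) = 0.5 * log2(Var(X|Y) / D) for D <= Var(X|Y), else 0.
    return 0.0 if D >= var_cond else 0.5 * math.log2(var_cond / D)

for D in (0.1, 0.01, 0.001):
    # In this quadratic-Gaussian case the Wyner-Ziv rate-distortion
    # function coincides with the conditional one: zero rate loss.
    print(D, rate_conditional(D))
```

The abstract's point is precisely that this vanishing rate loss fails once the distortion measure itself depends on the side information.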
The Multiple Description Rate Region at High Resolution
1998
Abstract

Cited by 9 (2 self)
Consider encoding a source X into two descriptions, such that the first, the second, and both descriptions together allow decoding of X with distortion levels d_1, d_2, and d_0, respectively, relative to a distortion measure ρ(x, x̂). Ozarow found an explicit characterization of the region R(σ²; d_1, d_2, d_0) of admissible rate pairs of the two descriptions for a Gaussian source X ~ N(0, σ²), relative to the squared-error distortion measure ρ(x, x̂) = (x − x̂)². In fact, this is the only case for which the multiple description rate-distortion region is completely known. We show that for a general real-valued source, a locally quadratic distortion measure of the form ρ(x, x̂) = w(x)²(x − x̂)² + o((x − x̂)²), and small distortion levels, the region of admissible rate pairs is approximately R(P_x · 2^{2E{log w(X)}}; d_1, d_2, d_0), where P_x is the entropy-power of the source. Applications to companding quantization are a...
Vector Quantization and Density Estimation
In SEQUENCES97, 1997
Abstract

Cited by 7 (0 self)
The connection between compression and the estimation of probability distributions has long been known for the case of discrete alphabet sources and lossless coding. A universal lossless code which does a good job of compressing must implicitly also do a good job of modeling. In particular, with a collection of codebooks, one for each possible class or model, if codewords are chosen from among the ensemble of codebooks so as to minimize bit rate, then the codebook selected provides an implicit estimate of the underlying class. Less is known about the corresponding connections between lossy compression and continuous sources. Here we consider aspects of estimating conditional and unconditional densities in conjunction with Bayes-risk weighted vector quantization for joint compression and classification.
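The codebook-selection idea in the abstract can be sketched in a few lines: keep one codebook per class, encode with whichever codebook gives the smaller distortion, and read the winning codebook off as the class estimate. The codebooks below are invented toy values, not trained ones:

```python
import numpy as np

# Toy per-class codebooks (in practice these would be trained).
codebooks = {
    "low":  np.array([-1.5, -0.5, 0.0]),
    "high": np.array([2.0, 3.0, 4.0]),
}

def encode(x):
    """Return (implicit class estimate, reconstruction) for scalar x."""
    best = None
    for label, cb in codebooks.items():
        i = int(np.argmin((cb - x) ** 2))       # nearest codeword
        dist = float((cb[i] - x) ** 2)
        if best is None or dist < best[2]:
            best = (label, float(cb[i]), dist)
    return best[0], best[1]

print(encode(-0.4))   # -> ('low', -0.5)
print(encode(3.2))    # -> ('high', 3.0)
```

Minimizing distortion across the ensemble of codebooks plays the same role that minimizing bit rate plays in the lossless case: the choice of codebook doubles as a classifier.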
Scalable Decoding on Factor Trees: A Practical Solution for Wireless Sensor Networks
IEEE Transactions on Communications, 2005
Abstract

Cited by 7 (3 self)
We consider the problem of jointly decoding the correlated data picked up and transmitted by the nodes of a large-scale sensor network. Assuming that each sensor node uses a very simple encoder (a scalar quantizer and a modulator), we focus on decoding algorithms that exploit the correlation structure of the sensor data to produce the best possible estimates under the minimum mean square error (MMSE) criterion. Our analysis shows that a standard implementation of the optimal MMSE decoder is infeasible for large-scale sensor networks, because its complexity grows exponentially with the number of nodes in the network. Seeking a scalable alternative, we use factor graphs to obtain a simplified model for the correlation structure of the sensor data. This model allows us to use the sum-product decoding algorithm, whose complexity can be made to grow linearly with the size of the network. Considering large sensor networks with arbitrary topologies, we focus on factor trees and give an exact characterization of the decoding complexity, as well as mathematical tools for factorizing Gaussian sources and optimization algorithms for finding optimal factor trees under the Kullback-Leibler criterion.
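On a chain (the simplest factor tree) the sum-product recursion is just forward-backward message passing. A self-contained sketch with an invented toy model: hidden binary sensor readings form a Markov chain, each node sends one noisy bit, and the linear-time marginals are checked against brute-force enumeration:

```python
import itertools
import numpy as np

n = 6
p_stay = 0.8          # P(x_{k+1} = x_k): neighboring sensors agree often
p_flip = 0.2          # channel: P(y_k != x_k)
trans = np.array([[p_stay, 1 - p_stay], [1 - p_stay, p_stay]])

rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=n)                   # received bits
lik = np.where(np.arange(2)[None, :] == y[:, None],
               1 - p_flip, p_flip)               # lik[k, x] = P(y_k | x)

# Sum-product on the chain = forward-backward, O(n) complexity.
fwd = np.zeros((n, 2))
bwd = np.ones((n, 2))
fwd[0] = 0.5 * lik[0]                            # uniform prior on x_0
for k in range(1, n):
    fwd[k] = lik[k] * (fwd[k - 1] @ trans)
for k in range(n - 2, -1, -1):
    bwd[k] = trans @ (lik[k + 1] * bwd[k + 1])
marg = fwd * bwd
marg /= marg.sum(axis=1, keepdims=True)          # posterior marginals

# Brute force over all 2**n configurations, O(2**n), for comparison.
brute = np.zeros((n, 2))
for xs in itertools.product((0, 1), repeat=n):
    p = 0.5 * lik[0, xs[0]]
    for k in range(1, n):
        p *= trans[xs[k - 1], xs[k]] * lik[k, xs[k]]
    for k in range(n):
        brute[k, xs[k]] += p
brute /= brute.sum(axis=1, keepdims=True)

print(np.max(np.abs(marg - brute)))              # agreement to machine precision
```

The posterior marginals give the MMSE estimates directly, and only the message-passing cost grows with n, not the exponential enumeration.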
On Source Coding with Side-Information-Dependent Distortion Measures
IEEE Trans. Inform. Theory, 1998
Abstract

Cited by 6 (1 self)
High-resolution bounds in lossy coding of a real memoryless source are considered when side information is present. Let X be a "smooth" source and let Y be the side information. First we treat the case when both the encoder and the decoder have access to Y, and we establish an asymptotically tight (high-resolution) formula for the conditional rate-distortion function R_{X|Y}(D) for a class of locally quadratic distortion measures which may be functions of the side information. We then consider the case when only the decoder has access to the side information (i.e., the "Wyner-Ziv problem"). For side-information-dependent distortion measures, we give an explicit formula which tightly approximates the Wyner-Ziv rate-distortion function R^{WZ}(D) for small D under rather general assumptions on the joint distribution of X and Y. These results demonstrate that for side-information-dependent distortion measures the rate loss R^{WZ}(D) − R_{X|Y}(D) can be bounded away from zero in the ...
Distributed scalar quantization for computing: High-resolution analysis and extensions
IEEE Trans. Inf. Theory, 2011
Abstract

Cited by 6 (6 self)
Communication of quantized information is frequently followed by a computation. We consider situations of distributed functional scalar quantization: distributed scalar quantization of (possibly correlated) sources followed by centralized computation of a function. Under smoothness conditions on the sources and function, companding scalar quantizer designs are developed to minimize mean-squared error (MSE) of the computed function as the quantizer resolution is allowed to grow. Striking improvements over quantizers designed without consideration of the function are possible and are larger in the entropy-constrained setting than in the fixed-rate setting. As extensions to the basic analysis, we characterize a large class of functions for which regular quantization suffices, consider certain functions for which asymptotic optimality is achieved without arbitrarily fine quantization, and allow limited collaboration between source encoders. In the entropy-constrained setting, a single bit per sample communicated between encoders can have an arbitrarily large effect on functional distortion. In contrast, such communication has very little effect in the fixed-rate setting.
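The gain from designing for the function can be seen in a toy fixed-rate example (a hand-derived illustration, not the paper's general construction): the decoder only wants g(x) = x³ of a uniform source, so a compander that concentrates cells where |g′(x)| is large beats a plain uniform quantizer at the same rate:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 1.0, 500_000)   # uniform source on [0, 1]
g = lambda t: t ** 3                 # function computed at the decoder
levels = 32                          # fixed rate: 5 bits per sample

def uq(y):  # uniform quantizer on [0, 1] with midpoint reconstruction
    return np.clip((np.floor(y * levels) + 0.5) / levels, 0.0, 1.0)

# Ordinary design: quantize x itself, then compute g.
mse_plain = np.mean((g(x) - g(uq(x))) ** 2)

# Functional design: compressor c(x) = x**(7/3), obtained by integrating
# the high-resolution point density (f * g'**2)**(1/3) for this uniform
# source (toy derivation; assumptions stated above).
c = lambda t: t ** (7.0 / 3.0)
cinv = lambda u: u ** (3.0 / 7.0)
mse_func = np.mean((g(x) - g(cinv(uq(c(x))))) ** 2)

print(mse_plain, mse_func)           # functional design has lower MSE
```

The functional compander spends its cells near x = 1, where g′(x) = 3x² makes errors in x most costly for g, which is exactly the design principle the abstract describes.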
High Rate Vector Quantization for Detection
IEEE Trans. Inform. Theory, 2003
Abstract

Cited by 4 (0 self)
We investigate high-rate quantization for various detection and reconstruction loss criteria. A new distortion measure...