Results 1 – 8 of 8
Quantization
 IEEE Trans. Inform. Theory
, 1998
"... The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analogtodigital conversion was first recognized during the early development of pulsecode modula ..."
Abstract

Cited by 652 (11 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate-distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
Asymptotic Performance of Vector Quantizers with a Perceptual Distortion Measure
 in Proc. IEEE Int. Symp. on Information Theory, p. 55
, 1997
"... Gersho's bounds on the asymptotic performance of vector quantizers are valid for vector distortions which are powers of the Euclidean norm. Yamada, Tazaki and Gray generalized the results to distortion measures that are increasing functions of the norm of their argument. In both cases, the distortio ..."
Abstract

Cited by 28 (3 self)
Gersho's bounds on the asymptotic performance of vector quantizers are valid for vector distortions which are powers of the Euclidean norm. Yamada, Tazaki, and Gray generalized the results to distortion measures that are increasing functions of the norm of their argument. In both cases, the distortion is uniquely determined by the vector quantization error, i.e., the Euclidean difference between the original vector and the codeword into which it is quantized. We generalize these asymptotic bounds to input-weighted quadratic distortion measures, a class of distortion measures often used for perceptually meaningful distortion. The generalization involves a more rigorous derivation of a fixed-rate result of Gardner and Rao and a new result for variable-rate codes. We also consider the problem of source mismatch, where the quantizer is designed using a probability density different from the true source density. The resulting asymptotic performance in terms of distortion increase in dB is shown...
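An input-weighted quadratic distortion of the kind this abstract studies has the form d(x, y) = (x − y)ᵀ W(x) (x − y), where the weight matrix W depends on the input x. A minimal sketch, with a made-up diagonal weighting chosen purely for illustration (the specific W is an assumption, not taken from the paper):

```python
import numpy as np

def weight_matrix(x):
    # Hypothetical perceptual weighting (an assumption for illustration):
    # penalize errors more in coordinates where |x| is small.
    return np.diag(1.0 / (1.0 + np.abs(x)))

def input_weighted_distortion(x, y):
    # d(x, y) = (x - y)^T W(x) (x - y); note W depends on the input x,
    # so the distortion is not a function of the error x - y alone.
    e = x - y
    return float(e @ weight_matrix(x) @ e)

x = np.array([0.2, 3.0])
y = np.array([0.3, 2.5])
d_xy = input_weighted_distortion(x, y)
```

The key point the abstract makes is visible here: unlike a difference distortion, d(x, y) cannot be computed from x − y alone, since the weighting changes with x.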
High-Resolution Source Coding for Non-Difference Distortion Measures: Multidimensional Companding
 IEEE Trans. Inform. Theory
, 1999
"... Entropycoded vector quantization is studied using highresolution multidimensional companding over a class of nondifference distortion measures. For distortion measures which are "locally quadratic" a rigorous derivation of the asymptotic distortion and entropycoded rate of multidimensional compan ..."
Abstract

Cited by 22 (3 self)
Entropy-coded vector quantization is studied using high-resolution multidimensional companding over a class of non-difference distortion measures. For distortion measures which are "locally quadratic," a rigorous derivation of the asymptotic distortion and entropy-coded rate of multidimensional companders is given, along with conditions for the optimal choice of the compressor function. This optimum compressor, when it exists, depends on the distortion measure but not on the source distribution. The rate-distortion performance of the companding scheme is studied using a recently obtained asymptotic expression for the rate-distortion function which parallels the Shannon lower bound for difference distortion measures. It is proved that the high-resolution performance of the scheme is arbitrarily close to the rate-distortion limit for large quantizer dimensions if the compressor function and the lattice quantizer used in the companding scheme are optimal, extending an analogous statement for...
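To make the companding idea concrete, here is a one-dimensional sketch (the paper treats the multidimensional case): apply a compressor function, quantize uniformly, then map back through the inverse expander. The mu-law compressor used below is a standard textbook choice picked for illustration; it is not the paper's optimal compressor.

```python
import numpy as np

MU = 255.0  # standard mu-law constant

def compress(x):
    # Compressor g: maps [-1, 1] onto [-1, 1], expanding small amplitudes.
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def expand(y):
    # Expander g^{-1}: exact inverse of compress.
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

def compand_quantize(x, levels=16):
    # Companding quantizer: compress, quantize uniformly, expand.
    y = compress(x)
    step = 2.0 / levels
    yq = (np.floor(y / step) + 0.5) * step  # midpoint uniform quantizer
    return expand(yq)

x = np.linspace(-0.9, 0.9, 7)
xq = compand_quantize(x)
```

The net effect is a nonuniform quantizer: cells are fine where the compressor is steep (small amplitudes) and coarse elsewhere, which is exactly the degree of freedom the optimal compressor function exploits.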
Vector Quantization and Density Estimation
 In SEQUENCES97
, 1997
"... The connection between compression and the estimation of probability distributions has long been known for the case of discrete alphabet sources and lossless coding. A universal lossless code which does a good job of compressing must implicitly also do a good job of modeling. In particular, with a c ..."
Abstract

Cited by 7 (0 self)
The connection between compression and the estimation of probability distributions has long been known for the case of discrete alphabet sources and lossless coding. A universal lossless code which does a good job of compressing must implicitly also do a good job of modeling. In particular, with a collection of codebooks, one for each possible class or model, if codewords are chosen from among the ensemble of codebooks so as to minimize bit rate, then the codebook selected provides an implicit estimate of the underlying class. Less is known about the corresponding connections between lossy compression and continuous sources. Here we consider aspects of estimating conditional and unconditional densities in conjunction with Bayesrisk weighted vector quantization for joint compression and classification.
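The codebook-selection idea can be sketched as follows: quantize a batch of samples with each class's codebook and declare the class whose codebook achieves the smaller distortion (standing in for bit rate). The codebooks below are invented for illustration, not taken from the paper.

```python
import numpy as np

# One codebook per hypothetical class; picking the codebook that fits best
# implicitly estimates the class, as the abstract describes.
codebooks = {
    "class_A": np.array([[-2.0], [-1.0], [0.0]]),
    "class_B": np.array([[1.0], [2.0], [3.0]]),
}

def distortion(x, codebook):
    # Mean squared error after nearest-codeword quantization of each sample.
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean()

def classify(x):
    # The implicit classifier: the best-compressing codebook names the class.
    return min(codebooks, key=lambda c: distortion(x, codebooks[c]))

samples = np.array([[1.9], [2.2], [2.8]])
label = classify(samples)  # the codebook built for values near 2 wins
```

A full treatment would minimize bit rate rather than distortion and would weight in the Bayes risk, as the abstract indicates; this sketch shows only the selection mechanism.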
Yavneh: Adaptive Multiscale Redistribution for Vector Quantization
 In press, SIAM Journal on Scientific Computing
, 2004
"... Abstract. Vector quantization is a classical problem that appears in many fields. Unfortunately, the quantization problem is generally nonconvex, and therefore affords many local minima. The main problem is finding an initial approximation which is close to a “good ” local minimum. Once such an appr ..."
Abstract

Cited by 2 (1 self)
Vector quantization is a classical problem that appears in many fields. Unfortunately, the quantization problem is generally nonconvex and therefore affords many local minima. The main problem is finding an initial approximation which is close to a "good" local minimum. Once such an approximation is found, the Lloyd–Max method may be used to reach a local minimum near it. In recent years, much improvement has been made with respect to reducing the computational costs of quantization algorithms, whereas the task of finding better initial approximations has received somewhat less attention. We present a novel multiscale iterative scheme for the quantization problem. The scheme is based on redistributing the representation levels among aggregates of decision regions at changing scales. The rule governing the redistribution relies on the so-called point density function and on the number of representation levels in each aggregate. Our method focuses on achieving better local minima than those achieved by other contemporary methods such as LBG. When quantizing signals with sparse and patchy histograms, as may occur in color images, for example, the improvement in distortion relative to LBG may be arbitrarily large.
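For reference, the Lloyd (LBG-style) iteration that such an initial approximation would be handed to can be sketched as follows, with a naive random initialization standing in for the paper's multiscale one:

```python
import numpy as np

def lloyd(data, n_codewords, iters=50, seed=0):
    # Classical Lloyd iteration: alternate nearest-codeword assignment
    # and centroid update. Initialization here is random training vectors,
    # which is exactly the weak point the paper's multiscale scheme targets.
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), n_codewords, replace=False)]
    for _ in range(iters):
        # Assignment step: each training vector goes to its nearest codeword.
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Update step: each codeword moves to the centroid of its region.
        for j in range(n_codewords):
            members = data[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

# Toy training set: three well-separated 2-D clusters.
data = np.concatenate([np.random.default_rng(1).normal(m, 0.1, (100, 2))
                       for m in (-1.0, 0.0, 1.0)])
cb = lloyd(data, 3)
```

Because each iteration can only descend to the nearest local minimum, the final distortion depends heavily on where the codebook starts, which is the motivation for the multiscale redistribution above.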
Towards Automatic Registration of Magnetic Resonance Images of the Brain Using Neural Networks. Part 2
, 1998
"... put of the detector plane of (c) is shown in (e). The entire surface is smoother than (d). The uncorrupted corner and the blurred feature give a less pronounced peak; the position of the corrupted corner cannot be detected with confidence and several likely locations are indicated by the smooth hill ..."
Abstract

Cited by 1 (1 self)
put of the detector plane of (c) is shown in (e). The entire surface is smoother than (d). The uncorrupted corner and the blurred feature give a less pronounced peak; the position of the corrupted corner cannot be detected with confidence, and several likely locations are indicated by the smooth hill. Thus, detection and placement can be improved by using sharp feature representations. The aim of this chapter is to develop feature sets with sharp contours. Three amendments to the previously proposed architecture are proposed: the use of spatial competition during training is outlined in §6.2, the selection of a subset of features from a larger set is suggested in §6.3, and the application of threshold-like feature post-processing is discussed in §6.4. First a description of the three methods is given, which is followed by an experimental investigation in §6.5. The new feature types of the three methods are given in
Chapter 9 Vector Quantizers
"... Let C be the quantization codebook for a kdimensional vector quantizer. If R is the compression rate, let N =2 kR. The codebook C is given by C = fv1;v2;:::;vN g where the vectors v1;v2;:::;vN are chosen from kdimensional Euclidean space. The resulting kdimensional vector quantizer quantizes a da ..."
Abstract
Let C be the quantization codebook for a k-dimensional vector quantizer. If R is the compression rate, let N = 2^(kR). The codebook is C = {v_1, v_2, ..., v_N}, where the vectors v_1, v_2, ..., v_N are chosen from k-dimensional Euclidean space. The resulting k-dimensional vector quantizer quantizes a data vector X = (X_1, X_2, ..., X_n) of length n (a multiple of k) into a reproduction vector X̂ = (X̂_1, X̂_2, ..., X̂_n) in the following way: the data vector X is partitioned into blocks of length k, and then each block is quantized to a closest vector in C in Euclidean distance. When the quantized blocks are concatenated together, the vector X̂ of length n results. In other words, letting (X_{(i-1)k+1}, ..., X_{ik}) denote the i-th block of data in X, and letting (X̂_{(i-1)k+1}, ..., X̂_{ik}) denote the i-th block in X̂, we have

  (X̂_{(i-1)k+1}, ..., X̂_{ik}) = arg min_{v in C} d(v, (X_{(i-1)k+1}, ..., X_{ik})),

where d denotes Euclidean distance. The distortion D of the VQ-based compression system is defined by

  D = n^(-1) [ (X_1 - X̂_1)^2 + (X_2 - X̂_2)^2 + ... + (X_n - X̂_n)^2 ].

In the design of a vector quantizer for a VQ-based compression system, one attempts to find a vector quantizer yielding a desirable tradeoff in the performance parameters R and D. The use of vector quantizers in lossy compression systems yields the following advantages: VQ distortion can be below scalar quantizer distortion at the same rate; VQ decoding is fast (a table lookup); and VQ is implementable at low rates. The fact that VQ provides lower distortion than that possible with scalar quantization is due to a principle called dimension gain. For example, at a rate of R = 1 code bit per sample, Gaussian memoryless data is optimally compressed with distortion D = 4.40 decibels via scalar quantizers, whereas distortion D = 4.49 decibels is possible at this rate using 3-D vector quantizers [2]. There is a limiting distortion performance as the VQ dimension grows, which is characterized by the distortion-rate function, to be discussed in Chapter
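The definitions above translate directly into code. This sketch partitions a data vector into blocks of length k, maps each block to its nearest codeword under Euclidean distance, and evaluates the per-sample distortion D; the codebook is arbitrary and chosen only for illustration:

```python
import numpy as np

def vq_encode(x, codebook):
    # Partition x (length n, a multiple of k) into blocks of length k and
    # replace each block by its nearest codeword in Euclidean distance.
    k = codebook.shape[1]
    blocks = x.reshape(-1, k)
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook[d.argmin(axis=1)].reshape(-1)

def distortion(x, x_hat):
    # D = n^{-1} * sum_i (X_i - X̂_i)^2, per the definition above.
    return float(((x - x_hat) ** 2).mean())

C = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, -1.0]])  # k = 2, N = 3
x = np.array([0.1, -0.2, 0.9, 1.2, -0.8, -1.1])       # n = 6
x_hat = vq_encode(x, C)
D = distortion(x, x_hat)
```

Note that R = log2(N) / k = log2(3) / 2 ≈ 0.79 bits per sample for this toy codebook; the design problem is trading this R against the resulting D.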
Vector Quantization in Speech Coding (Invited Paper)
"... Quantization, the process of approximating continuousamplitude signals by digital (discreteamplitude) signals, is an important aspect of data compression or coding, the field concerned with the reduction of the number of bits necessary to transmit or store analog data, subject to a distortion or fi ..."
Abstract
Quantization, the process of approximating continuous-amplitude signals by digital (discrete-amplitude) signals, is an important aspect of data compression or coding, the field concerned with the reduction of the number of bits necessary to transmit or store analog data, subject to a distortion or fidelity criterion. The independent quantization of each signal value or parameter is termed scalar quantization, while the joint quantization of a block of parameters is termed block or vector quantization. This tutorial review presents the basic concepts employed in vector quantization and gives a realistic assessment of its benefits and costs when compared to scalar quantization. Vector quantization is presented as a process of redundancy removal that makes effective use of four interrelated properties of vector parameters: linear dependency (correlation), nonlinear dependency, shape of the probability density function (pdf), and vector dimensionality itself. In contrast, scalar quantization can utilize effectively only linear dependency and pdf shape. The basic concepts are illustrated by means of simple examples, and the theoretical limits of vector quantizer performance are reviewed, based on results from rate-distortion theory. Practical issues relating to quantizer design, implementation, and performance in actual applications are explored. While many of the methods presented are quite general and can be used for the coding of arbitrary signals, this paper focuses primarily on the coding of speech signals and parameters.
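A small numerical illustration (not from the paper) of the linear-dependency property: at the same rate of 2 bits per 2-D sample, a vector quantizer whose codewords lie along the correlation axis of a strongly correlated Gaussian pair beats two independent 1-bit scalar quantizers, because the scalar quantizers cannot exploit the correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=10000)
# Two nearly identical coordinates: strong linear dependency (correlation).
x = np.stack([z, z + 0.05 * rng.normal(size=10000)], axis=1)

# Scalar route: one optimal 1-bit quantizer per coordinate for N(0, 1),
# with reconstruction levels at +-sqrt(2/pi) (the Lloyd-Max solution).
lvl = np.sqrt(2 / np.pi)
x_scalar = np.sign(x) * lvl

# Vector route: four codewords placed along the diagonal, the signal's
# principal axis (a hand-designed codebook, chosen for illustration).
C = np.array([[-1.5, -1.5], [-0.5, -0.5], [0.5, 0.5], [1.5, 1.5]])
d = ((x[:, None, :] - C[None, :, :]) ** 2).sum(-1)
x_vq = C[d.argmin(axis=1)]

mse_scalar = ((x - x_scalar) ** 2).mean()
mse_vq = ((x - x_vq) ** 2).mean()
```

Both schemes spend 2 bits per sample pair, yet the VQ concentrates its codewords where the joint pdf actually has mass, which is the redundancy-removal argument the abstract makes.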