Results 1–10 of 30
Quantization
 IEEE Trans. Inform. Theory
, 1998
Abstract

Cited by 639 (11 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
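Bennett's high-resolution result mentioned above can be checked numerically: for a smooth source density and a fine uniform quantizer with step delta, the mean-squared error is approximately delta^2/12. A minimal sketch (function names are ours, not from the survey):

```python
import random

def uniform_quantize(x, delta):
    """Round x to the nearest multiple of delta (mid-tread uniform quantizer)."""
    return delta * round(x / delta)

def empirical_mse(delta, n=200_000, seed=0):
    """Average squared quantization error for a unit-variance Gaussian source."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        total += (x - uniform_quantize(x, delta)) ** 2
    return total / n

delta = 0.05
print(empirical_mse(delta))   # close to the high-resolution prediction
print(delta ** 2 / 12)        # = 2.0833...e-04
```

With delta this small relative to the source's standard deviation, the simulated value lands within a fraction of a percent of delta^2/12.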
Fast quantizing and decoding algorithms for lattice quantizers and codes
 IEEE Trans. Inform. Theory
, 1982
Abstract

Cited by 64 (7 self)
and their duals a very fast algorithm is given for finding the closest lattice point to an arbitrary point. If these lattices are used for vector quantizing of uniformly distributed data, the algorithm finds the minimum distortion lattice point. If the lattices are used as codes for a Gaussian channel, the algorithm performs maximum likelihood decoding.
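The flavor of these fast decoders can be shown for the checkerboard lattice D_n (integer vectors with even coordinate sum): round coordinatewise, and if the parity comes out wrong, re-round the coordinate with the largest rounding error the other way, which is the cheapest single change. A sketch of that rule (our code, an assumption of how the simplest case goes, not a transcription of the paper):

```python
def closest_point_Dn(x):
    """Closest point in D_n = {v in Z^n : sum(v) even} to a real vector x.
    Round coordinatewise; on wrong parity, re-round the coordinate with
    the largest rounding error in the other direction."""
    f = [round(xi) for xi in x]                      # nearest integer vector
    if sum(f) % 2 == 0:
        return f
    k = max(range(len(x)), key=lambda i: abs(x[i] - f[i]))
    f[k] += 1 if x[k] > f[k] else -1                 # wrong-way rounding at k
    return f

print(closest_point_Dn([0.6, 0.1, 0.1, 0.1]))        # [0, 0, 0, 0]
```

The returned point always has even coordinate sum, so it is a valid D_n lattice point, and the re-rounded coordinate is chosen to add the least extra squared error.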
Vector Quantization with Complexity Costs
, 1993
Abstract

Cited by 54 (18 self)
Vector quantization is a data compression method where a set of data points is encoded by a reduced set of reference vectors, the codebook. We discuss a vector quantization strategy which jointly optimizes distortion errors and the codebook complexity, thereby determining the size of the codebook. A maximum entropy estimation of the cost function yields an optimal number of reference vectors, their positions and their assignment probabilities. The dependence of the codebook density on the data density for different complexity functions is investigated in the limit of asymptotic quantization levels. How different complexity measures influence the efficiency of vector quantizers is studied for the task of image compression, i.e., we quantize the wavelet coefficients of gray level images and measure the reconstruction error. Our approach establishes a unifying framework for different quantization methods like K-means clustering and its fuzzy version, entropy constrained vector quantizati...
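As a baseline for the methods this framework unifies, plain K-means (Lloyd) codebook training alternates nearest-codeword assignment with centroid updates. A small pure-Python sketch (not the paper's maximum-entropy formulation; names and data are ours):

```python
import random

def train_codebook(data, k, iters=30, seed=0):
    """Lloyd / K-means iteration: assign points to the nearest codeword,
    then move each codeword to the centroid of its cell."""
    rng = random.Random(seed)
    codebook = [tuple(c) for c in rng.sample(data, k)]   # init from the data
    dim = len(data[0])
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for x in data:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(x, codebook[i])))
            cells[j].append(x)
        for j, cell in enumerate(cells):
            if cell:   # leave an empty cell's codeword where it is
                codebook[j] = tuple(sum(x[t] for x in cell) / len(cell)
                                    for t in range(dim))
    return codebook

data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
        (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
print(sorted(train_codebook(data, 2)))   # one codeword settles in each cluster
```

Unlike the complexity-cost approach in the abstract, this baseline takes the codebook size k as given rather than deriving it from a cost function.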
On the asymptotic tightness of the Shannon lower bound
, 1997
Abstract

Cited by 45 (15 self)
New results are proved on the convergence of the Shannon lower bound (SLB) to the rate distortion function as the distortion decreases to zero. The key convergence result is proved using a fundamental property of informational divergence. As a corollary, it is shown that the SLB is asymptotically tight for norm-based distortions, when the source vector has a finite differential entropy and a finite αth moment for some α > 0, with respect to the given norm. Moreover, we derive a theorem of Linkov on the asymptotic tightness of the SLB for general difference distortion measures with more relaxed conditions on the source density. We also show that the SLB relative to a stationary source and single letter difference distortion is asymptotically tight under very weak assumptions on the source distribution. Key words: rate distortion theory, Shannon lower bound, difference distortion measures, stationary sources T. Linder is with the Coordinated Science Laboratory, University of Illinoi...
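For orientation, the scalar squared-error form of the bound is standard (our notation, not quoted from the paper): for a source X with density and differential entropy h(X),

```latex
R(D) \;\ge\; R_{\mathrm{SLB}}(D) \;=\; h(X) - \tfrac{1}{2}\log(2\pi e D),
```

and "asymptotically tight" means R(D) - R_SLB(D) → 0 as D → 0.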
Transmit beamforming in multiple-antenna systems with finite rate feedback: A VQ-based approach
 IEEE Trans. Inform. Theory
, 2006
Abstract

Cited by 36 (2 self)
Abstract—This paper investigates quantization methods for feeding back the channel information through a low-rate feedback channel in the context of multiple-input single-output (MISO) systems. We propose a new quantizer design criterion for capacity maximization and develop the corresponding iterative vector quantization (VQ) design algorithm. The criterion is based on maximizing the mean-squared weighted inner product (MSwIP) between the optimum and the quantized beamforming vector. The performance of systems with quantized beamforming is analyzed for the independent fading case. This requires finding the density of the squared inner product between the optimum and the quantized beamforming vector, which is obtained by considering a simple approximation of the quantization cell. The approximate density function is used to lower-bound the capacity loss due to quantization, the outage probability, and the bit error probability. The resulting expressions provide insight into the dependence of the performance of transmit beamforming MISO systems on the number of transmit antennas and feedback rate. Computer simulations support the analytical results and indicate that the lower bounds are quite tight. Index Terms—Bit error probability, channel capacity, channel state information, multiple antennas, transmit beamforming, outage probability, vector quantization (VQ). I.
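The feedback step itself is simple to illustrate: with B feedback bits, the receiver scans a shared codebook of 2^B unit-norm beamforming vectors and returns the index of the codeword best aligned with the channel. A toy 2-antenna sketch (the codebook and channel values are made up for illustration; this is not the paper's MSwIP-trained codebook):

```python
import math

def inner_abs2(h, w):
    """|h^H w|^2 for complex sequences h and w."""
    return abs(sum(hi.conjugate() * wi for hi, wi in zip(h, w))) ** 2

def select_beamformer(h, codebook):
    """Index of the codeword maximizing |h^H w|^2 (the fed-back index)."""
    return max(range(len(codebook)), key=lambda i: inner_abs2(h, codebook[i]))

s = 1 / math.sqrt(2)
codebook = [(1, 0), (0, 1), (s, s), (s, -s)]   # toy 2-bit codebook
h = (1 + 0j, 0.9 + 0j)                          # toy channel realization
print(select_beamformer(h, codebook))           # index of the best-aligned codeword
```

The quantizer design problem studied in the paper is precisely how to choose the codebook so that this selection loses as little capacity as possible on average.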
Bennett's Integral for Vector Quantizers
 IEEE Trans. Inform. Theory
, 1995
Abstract

Cited by 32 (6 self)
This paper extends Bennett's integral from scalar to vector quantizers, giving a simple formula that expresses the rth-power distortion of a many-point vector quantizer in terms of the number of points, point density function, inertial profile and the distribution of the source. The inertial profile specifies the normalized moment of inertia of quantization cells as a function of location. The extension is formulated in terms of a sequence of quantizers whose point density and inertial profile approach known functions as the number of points increases. Precise conditions are given for the convergence of distortion (suitably normalized) to Bennett's integral. Previous extensions did not include the inertial profile and, consequently, provided only bounds or applied only to quantizers with congruent cells, such as lattice and optimal quantizers. The new version of Bennett's integral provides a framework for the analysis of suboptimal structured vector quantizers. It is shown how the loss...
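One common statement of the extended integral (notation ours, consistent with the abstract's description): for a k-dimensional source with density f, an N-point quantizer with point density λ and inertial profile m has rth-power distortion

```latex
D \;\approx\; N^{-r/k} \int m(x)\,\lambda(x)^{-r/k}\, f(x)\, dx ,
```

so distortion decays like N^{-r/k}, with the constant determined jointly by where the points are placed (λ) and how round the cells are (m).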
Asymptotic Performance of Vector Quantizers with a Perceptual Distortion Measure
 in Proc. IEEE Int. Symp. on Information Theory, p. 55
, 1997
Abstract

Cited by 28 (3 self)
Gersho's bounds on the asymptotic performance of vector quantizers are valid for vector distortions which are powers of the Euclidean norm. Yamada, Tazaki and Gray generalized the results to distortion measures that are increasing functions of the norm of their argument. In both cases, the distortion is uniquely determined by the vector quantization error, i.e., the Euclidean difference between the original vector and the codeword into which it is quantized. We generalize these asymptotic bounds to input-weighted quadratic distortion measures, a class of distortion measures often used for perceptually meaningful distortion. The generalization involves a more rigorous derivation of a fixed rate result of Gardner and Rao and a new result for variable rate codes. We also consider the problem of source mismatch, where the quantizer is designed using a probability density different from the true source density. The resulting asymptotic performance in terms of distortion increase in dB is shown...
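The distortion class in question has the standard form (our notation, not quoted from the paper)

```latex
d(x,\hat{x}) \;=\; (x-\hat{x})^{\mathsf T}\, W(x)\, (x-\hat{x}),
```

with W(x) positive definite; taking W(x) = I recovers squared Euclidean distortion, while an input-dependent W(x) lets the measure penalize errors more heavily where they are perceptually more visible.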
High-Resolution Source Coding for Non-Difference Distortion Measures: Multidimensional Companding
 IEEE Trans. Inform. Theory
, 1999
Abstract

Cited by 22 (3 self)
Entropy-coded vector quantization is studied using high-resolution multidimensional companding over a class of non-difference distortion measures. For distortion measures which are "locally quadratic" a rigorous derivation of the asymptotic distortion and entropy-coded rate of multidimensional companders is given along with conditions for the optimal choice of the compressor function. This optimum compressor, when it exists, depends on the distortion measure but not on the source distribution. The rate-distortion performance of the companding scheme is studied using a recently obtained asymptotic expression for the rate-distortion function which parallels the Shannon lower bound for difference distortion measures. It is proved that the high-resolution performance of the scheme is arbitrarily close to the rate-distortion limit for large quantizer dimensions if the compressor function and the lattice quantizer used in the companding scheme are optimal, extending an analogous statement for...
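The scalar ancestor of this idea is classical companding: compress with a nonlinear map c, quantize uniformly, expand with c^{-1}. The mu-law compressor is the textbook example; the paper's multidimensional compressor generalizes this structure. A sketch (mu-law is our illustrative choice, not the paper's optimal compressor):

```python
import math

MU = 255.0   # mu-law parameter; a conventional choice for this sketch

def compress(x):
    """mu-law compressor for x in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Exact inverse of compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def compand_quantize(x, levels=256):
    """Compress, quantize uniformly in the compressed domain, expand."""
    delta = 2.0 / levels
    yq = delta * round(compress(x) / delta)   # uniform quantizer on [-1, 1]
    return expand(yq)

print(compand_quantize(0.3))   # near 0.3; error shaped by the compressor slope
```

Because the compressor is steep near zero, small inputs get finer effective quantization than large ones, which is exactly the kind of distortion shaping the multidimensional theory makes precise.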
Asymptotic Distribution of the Errors in Scalar and Vector Quantizers
Abstract

Cited by 12 (2 self)
High-rate (or asymptotic) quantization theory has found formulas for the average squared length (more generally, the qth moment of the length) of the error produced by various scalar and vector quantizers with many quantization points. In contrast, this paper finds an asymptotic formula for the probability density of the length of the error and, in certain special cases, for the probability density of the multidimensional error vector itself. The latter can be used to analyze the distortion of two-stage vector quantization. The former permits one to learn about the point density and cell shapes of a quantizer from a histogram of quantization error lengths. Histograms of the error lengths in simulations agree well with the derived formulas. Also presented are a number of properties of the error density, including the relationship between the error density, the point density and the cell shapes, the fact that its qth moment equals Bennett's integral (a formula for the average distortio...
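The simplest instance of such an error density is the fine uniform scalar quantizer, whose error is approximately uniform on [-delta/2, delta/2] for a smooth source, so its histogram should come out flat. A quick simulation in that spirit (our code, not the paper's experiments):

```python
import random

def error_histogram(delta=0.05, n=100_000, bins=10, seed=1):
    """Histogram of uniform-quantizer errors for a Gaussian source; the
    high-resolution prediction is a flat histogram on [-delta/2, delta/2]."""
    rng = random.Random(seed)
    counts = [0] * bins
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        e = x - delta * round(x / delta)               # error in [-delta/2, delta/2]
        b = min(bins - 1, int((e / delta + 0.5) * bins))
        counts[b] += 1
    return counts

print(error_histogram())   # each bin close to n / bins = 10000
```

For vector quantizers the paper's point is that deviations from this flat shape carry information: the histogram of error lengths reflects the point density and cell geometry.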
Dynamic information and constraints in source and channel coding
, 2004
Abstract

Cited by 11 (2 self)
explore dynamics in source coding and channel coding. We begin by introducing the idea of distortion side information, which does not directly depend on the source but instead affects the distortion measure. Such distortion side information is not only useful at the encoder but under certain conditions knowing it at the encoder is optimal and knowing it at the decoder is useless. Thus distortion side information is a natural complement to Wyner-Ziv side information and may be useful in exploiting properties of the human perceptual system as well as in sensor or control applications. In addition to developing the theoretical limits of source coding with distortion side information, we also construct practical quantizers based on lattices and codes on graphs. Our use of codes on graphs is also of independent interest since it highlights some issues in translating the success of turbo and LDPC codes into the realm of source coding. Finally, to explore the dynamics of side information correlated with the source, we consider fixed lag side information at the decoder. We focus on the special case of perfect side information with unit lag corresponding to source coding with feedforward (the dual