Results 1 – 10 of 27
Quantization
IEEE Trans. Inform. Theory, 1998
Cited by 639 (11 self)
Abstract:
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate-distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
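Bennett's high-resolution analysis mentioned in this abstract predicts that a uniform quantizer with step size Δ incurs a mean squared error of approximately Δ²/12 when Δ is small relative to the scale of the source density. A minimal sketch that checks this numerically; the Gaussian source and step size are illustrative choices, not taken from the survey:

```python
import random

def uniform_quantize(x, step):
    """Uniform quantizer: map x to the nearest multiple of `step`."""
    return step * round(x / step)

# Empirical check of the high-resolution result: for a fine step size,
# the mean squared quantization error approaches step**2 / 12.
random.seed(0)
step = 0.01
samples = [random.gauss(0.0, 1.0) for _ in range(200_000)]
mse = sum((x - uniform_quantize(x, step)) ** 2 for x in samples) / len(samples)

print(f"measured MSE = {mse:.3e}")
print(f"step**2 / 12 = {step**2 / 12:.3e}")
```

The two printed values agree to within a few percent, and the match improves as the step size shrinks.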
Asymptotic Performance of Vector Quantizers with a Perceptual Distortion Measure
In Proc. IEEE Int. Symp. on Information Theory, p. 55, 1997
Cited by 28 (3 self)
Abstract:
Gersho's bounds on the asymptotic performance of vector quantizers are valid for vector distortions which are powers of the Euclidean norm. Yamada, Tazaki, and Gray generalized the results to distortion measures that are increasing functions of the norm of their argument. In both cases, the distortion is uniquely determined by the vector quantization error, i.e., the Euclidean difference between the original vector and the codeword into which it is quantized. We generalize these asymptotic bounds to input-weighted quadratic distortion measures, a class of distortion measures often used for perceptually meaningful distortion. The generalization involves a more rigorous derivation of a fixed-rate result of Gardner and Rao and a new result for variable-rate codes. We also consider the problem of source mismatch, where the quantizer is designed using a probability density different from the true source density. The resulting asymptotic performance in terms of distortion increase in dB is shown...
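An input-weighted quadratic distortion has the form d(x, x̂) = Σ_i w_i(x)(x_i − x̂_i)², where the weights depend only on the source vector, not on the codeword. A minimal sketch of nearest-codeword quantization under such a measure; the weight function and codebook below are hypothetical illustrations, not the perceptual weights studied in the paper:

```python
def weighted_distortion(x, xhat, w):
    """Input-weighted quadratic distortion: sum_i w(x)_i * (x_i - xhat_i)**2.
    The weights depend only on the original vector x."""
    weights = w(x)
    return sum(wi * (xi - xh) ** 2 for wi, xi, xh in zip(weights, x, xhat))

def quantize(x, codebook, w):
    """Map x to the codeword minimizing the input-weighted distortion."""
    return min(codebook, key=lambda c: weighted_distortion(x, c, w))

# Hypothetical weight function: emphasize low-magnitude components,
# a crude stand-in for a perceptual sensitivity profile.
w = lambda x: [1.0 / (1.0 + abs(xi)) for xi in x]

codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(quantize((0.9, 0.2), codebook, w))  # nearest under the weighted measure
```

Because the weights vary with the input, the induced partition of the source space differs from the plain nearest-neighbor (Euclidean) partition, which is exactly what makes the classical asymptotic bounds inapplicable without the paper's generalization.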
High-Resolution Source Coding for Non-Difference Distortion Measures: Multidimensional Companding
IEEE Trans. Inform. Theory, 1999
Cited by 22 (3 self)
Abstract:
Entropy-coded vector quantization is studied using high-resolution multidimensional companding over a class of non-difference distortion measures. For distortion measures which are "locally quadratic," a rigorous derivation of the asymptotic distortion and entropy-coded rate of multidimensional companders is given, along with conditions for the optimal choice of the compressor function. This optimum compressor, when it exists, depends on the distortion measure but not on the source distribution. The rate-distortion performance of the companding scheme is studied using a recently obtained asymptotic expression for the rate-distortion function which parallels the Shannon lower bound for difference distortion measures. It is proved that the high-resolution performance of the scheme is arbitrarily close to the rate-distortion limit for large quantizer dimensions if the compressor function and the lattice quantizer used in the companding scheme are optimal, extending an analogous statement for...
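Companding in general means: pass the source through a compressor nonlinearity, quantize uniformly, then apply the inverse (expander) nonlinearity. A scalar sketch using the standard μ-law compressor, a one-dimensional stand-in for the multidimensional compressor functions analyzed in the paper:

```python
import math

MU = 255.0  # standard mu-law parameter

def compress(x):
    """Mu-law compressor for x in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Inverse of the compressor (the expander)."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def compand_quantize(x, levels=256):
    """Compress, quantize uniformly in the compressed domain, expand."""
    step = 2.0 / levels
    yq = step * round(compress(x) / step)
    return expand(yq)

# Small inputs are represented much more finely than large ones,
# unlike a plain uniform quantizer at the same rate.
for x in (0.001, 0.01, 0.5):
    xq = compand_quantize(x)
    print(f"x={x:7.4f}  xq={xq:9.6f}  |err|={abs(x - xq):.2e}")
```

The compressor here warps the step sizes to favor small amplitudes; the paper's optimal compressor instead depends on the (possibly non-difference) distortion measure, but the compress/quantize/expand pipeline is the same.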
On Source Coding with Side-Information-Dependent Distortion Measures
IEEE Trans. Inform. Theory, 2000
Cited by 14 (3 self)
Abstract:
High-resolution bounds in lossy coding of a real memoryless source are considered when side information is present. Let X be a "smooth" source and let Y be the side information. First we treat the case when both the encoder and the decoder have access to Y, and we establish an asymptotically tight (high-resolution) formula for the conditional rate-distortion function R_{X|Y}(D) for a class of locally quadratic distortion measures which may be functions of the side information. We then consider the case when only the decoder has access to the side information (i.e., the "Wyner-Ziv problem"). For side-information-dependent distortion measures, we give an explicit formula which tightly approximates the Wyner-Ziv rate-distortion function R_{WZ}(D) for small D under some assumptions on the joint distribution of X and Y. These results demonstrate that for side-information-dependent distortion measures the rate loss R_{WZ}(D) − R_{X|Y}(D) can be bounded away from zero in the limit of small D. This contrasts with the case of distortion measures which do not depend on the side information, where the rate loss vanishes as D → 0.
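For orientation, the classical squared-error special case of such high-resolution formulas (no side-information dependence in the distortion measure) is the conditional form of the Shannon lower bound, which becomes tight in the small-distortion limit for smooth densities:

```latex
R_{X \mid Y}(D) \;\approx\; h(X \mid Y) - \tfrac{1}{2}\log_2\!\bigl(2\pi e D\bigr),
\qquad D \to 0,
```

where h(X|Y) is the conditional differential entropy. The formulas established in this paper generalize expressions of this type to locally quadratic distortion measures whose weighting may depend on the side information.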
The Multiple Description Rate Region at High Resolution
1998
Cited by 9 (2 self)
Abstract:
Consider encoding a source X into two descriptions, such that the first, the second, and both descriptions allow decoding of X with distortion levels d_1, d_2, and d_0, respectively, relative to a distortion measure ρ(x, x̂). Ozarow found an explicit characterization of the region R(σ², d_1, d_2, d_0) of admissible rate pairs of the two descriptions for a Gaussian source X ~ N(0, σ²), relative to the squared-error distortion measure ρ(x, x̂) = (x − x̂)². In fact, this is the only case for which the multiple description rate-distortion region is completely known. We show that for a general real-valued source, a locally quadratic distortion measure of the form ρ(x, x̂) = w(x)²(x − x̂)² + o((x − x̂)²), and small distortion levels, the region of admissible rate pairs approximately equals R(P_x · 2^{2E{log w(X)}}, d_1, d_2, d_0), where P_x is the entropy-power of the source. Applications to companding quantization are a...
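The entropy-power P_x appearing in the region above is the standard quantity

```latex
P_x \;=\; \frac{2^{\,2 h(X)}}{2\pi e},
```

where h(X) is the differential entropy of the source in bits. For a Gaussian source, P_x = σ², so with constant weight w ≡ 1 the stated region reduces to Ozarow's Gaussian region, consistent with the special case described earlier in the abstract.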
Vector Quantization and Density Estimation
In SEQUENCES97, 1997
Cited by 7 (0 self)
Abstract:
The connection between compression and the estimation of probability distributions has long been known for the case of discrete alphabet sources and lossless coding. A universal lossless code which does a good job of compressing must implicitly also do a good job of modeling. In particular, with a collection of codebooks, one for each possible class or model, if codewords are chosen from among the ensemble of codebooks so as to minimize bit rate, then the codebook selected provides an implicit estimate of the underlying class. Less is known about the corresponding connections between lossy compression and continuous sources. Here we consider aspects of estimating conditional and unconditional densities in conjunction with Bayes-risk weighted vector quantization for joint compression and classification.
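The implicit-classification idea in this abstract can be sketched directly: quantize a batch of samples with each class's codebook and pick the class whose codebook fits (compresses) the data best. The codebooks below are hypothetical; a real system would train one per class with the Lloyd/GLA algorithm on labeled data:

```python
def distortion(samples, codebook):
    """Total squared error when each sample is mapped to its nearest
    codeword from `codebook`."""
    return sum(min((s - c) ** 2 for c in codebook) for s in samples)

def classify(samples, codebooks):
    """Pick the class whose codebook fits the samples best: the
    selected codebook is an implicit estimate of the underlying class."""
    return min(codebooks, key=lambda label: distortion(samples, codebooks[label]))

# Hypothetical per-class codebooks (e.g. from Lloyd training).
codebooks = {
    "low":  [-1.0, -0.5, 0.0],  # class concentrated near negative values
    "high": [0.5, 1.0, 1.5],    # class concentrated near positive values
}

print(classify([0.9, 1.2, 0.6], codebooks))
print(classify([-0.8, -0.4], codebooks))
```

Minimizing distortion here stands in for minimizing bit rate; with entropy-coded quantizers the selection rule would compare code lengths instead, which is the lossless analogy the abstract draws.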
On Source Coding with Side-Information-Dependent Distortion Measures
IEEE Trans. Inform. Theory, 1998
Cited by 6 (1 self)
Abstract:
High-resolution bounds in lossy coding of a real memoryless source are considered when side information is present. Let X be a "smooth" source and let Y be the side information. First we treat the case when both the encoder and the decoder have access to Y, and we establish an asymptotically tight (high-resolution) formula for the conditional rate-distortion function R_{X|Y}(D) for a class of locally quadratic distortion measures which may be functions of the side information. We then consider the case when only the decoder has access to the side information (i.e., the "Wyner-Ziv problem"). For side-information-dependent distortion measures, we give an explicit formula which tightly approximates the Wyner-Ziv rate-distortion function R_{WZ}(D) for small D under rather general assumptions on the joint distribution of X and Y. These results demonstrate that for side-information-dependent distortion measures the rate loss R_{WZ}(D) − R_{X|Y}(D) can be bounded away from zero in th...
Scalable Distributed Speech Recognition Using Multi-Frame GMM-Based Block Quantization
Cited by 5 (2 self)
Abstract:
In this paper, we propose the use of the multi-frame Gaussian mixture model-based block quantizer for the coding of Mel frequency-warped cepstral coefficient (MFCC) features in distributed speech recognition (DSR) applications. This coding scheme exploits intraframe correlation via the Karhunen-Loève transform (KLT) and interframe correlation via the joint processing of adjacent frames, together with the computational simplicity of scalar quantization. The proposed coder is bitrate scalable, which means that the bitrate can be adjusted without the need for retraining of the quantizers. Static parameters such as the probability density function (PDF) model and KLT orthogonal matrices are stored at the encoder and decoder, and bit allocations are calculated 'on-the-fly' without intensive processing. This coding scheme is evaluated in this paper on the Aurora2 database in a DSR framework. It is shown that this coding scheme achieves high recognition performance at low bitrates, with a word error rate (WER) of 2.5% at 800 bps, which is less than 1% degradation from the baseline word recognition accuracy, and graceful degradation down to a WER of 7% at 300 bps.
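'On-the-fly' bit allocation across transform coefficients typically follows the standard high-resolution rule: each coefficient gets the average rate plus half the log-ratio of its variance to the geometric mean of all variances. A sketch under that assumption; the variances are made up, and the paper's allocation procedure may differ in detail:

```python
import math

def allocate_bits(variances, total_bits):
    """High-resolution bit allocation for scalar quantizers:
    b_i = b_avg + 0.5 * log2(var_i / geometric_mean(variances)).
    Returns fractional allocations; a real coder would round them
    and clip negatives to zero."""
    n = len(variances)
    b_avg = total_bits / n
    log_gm = sum(math.log2(v) for v in variances) / n  # log2 of geometric mean
    return [b_avg + 0.5 * (math.log2(v) - log_gm) for v in variances]

# Hypothetical KLT-domain variances for a block of MFCC coefficients.
variances = [9.0, 4.0, 1.0, 0.25]
bits = allocate_bits(variances, total_bits=16)
print([round(b, 2) for b in bits])
```

Because the rule is a closed-form function of the stored variances, changing the bit budget only re-evaluates this formula, which is what makes the coder bitrate scalable without retraining.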
Quantization of LSF Parameters Using Trellis Modeling
IEEE Trans. Speech and Audio Proc., 2001
Cited by 5 (5 self)
Abstract:
A low bit-rate, low-complexity Block-based Trellis Quantization (BTQ) scheme is proposed for the quantization of the Line Spectral Frequencies (LSF) in speech coding applications. The scheme is based on modeling the LSF intraframe dependencies with a trellis structure. The ordering property and the fact that LSF parameters are bounded within a range are explicitly incorporated in the trellis model using a fixed-rate entropy-coding approach. BTQ search and design algorithms are discussed, and an efficient algorithm for index generation (finding the index of a path in the trellis) is presented. Based on the proposed Block-based Trellis Quantizer, two intraframe schemes and one interframe scheme are proposed. Comparisons to the Split-VQ [20], the Trellis Coded Quantization of LSF parameters [19], as well as the interframe scheme used in IS-641 EFRC [42] are provided. These results demonstrate the superior performance of the proposed BTQ schemes.
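The ordering property can be enforced with a Viterbi-style dynamic program over a position-by-level trellis: state (i, j) means LSF i is quantized to grid level j, and transitions only allow strictly increasing level indices. A minimal sketch of this idea; the grid and the search are illustrative, not the paper's BTQ design:

```python
def ordered_quantize(lsf, levels):
    """Quantize an LSF vector onto a sorted grid of `levels`, enforcing
    the ordering property (strictly increasing picks) with a Viterbi-style
    search over the position-by-level trellis."""
    n, m = len(lsf), len(levels)
    INF = float("inf")
    cost = [[INF] * m for _ in range(n)]
    back = [[-1] * m for _ in range(n)]
    for j in range(m):
        cost[0][j] = (lsf[0] - levels[j]) ** 2
    for i in range(1, n):
        for j in range(i, m):  # position i needs i lower levels before it
            k = min(range(j), key=lambda t: cost[i - 1][t])  # best predecessor
            cost[i][j] = cost[i - 1][k] + (lsf[i] - levels[j]) ** 2
            back[i][j] = k
    # Backtrack from the cheapest terminal state.
    j = min(range(n - 1, m), key=lambda t: cost[n - 1][t])
    path = [j]
    for i in range(n - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    path.reverse()
    return [levels[j] for j in path]

# A vector whose unconstrained nearest levels would pick 0.3 twice;
# the trellis search resolves the conflict with minimum total error.
print(ordered_quantize([0.3, 0.35, 0.75], [0.1, 0.3, 0.5, 0.7, 0.9]))
```

Counting only the increasing paths through such a trellis is also what enables the fixed-rate path-index (enumerative) coding that the abstract refers to.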
High Rate Vector Quantization for Detection
IEEE Trans. Inform. Theory, 2003
Cited by 4 (0 self)
Abstract:
We investigate high-rate quantization for various detection and reconstruction loss criteria. A new distortion measure...