Results 1-10 of 110
Quantization
 IEEE Trans. Inform. Theory
, 1998
Abstract

Cited by 639 (11 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate-distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
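To make the surveyed object concrete, here is a minimal sketch of a uniform (mid-tread) scalar quantizer, checked against the classic high-resolution prediction that the quantization-noise variance is approximately step^2/12 for a fine step size. The function name and parameters are illustrative, not taken from the survey.

```python
import numpy as np

def uniform_quantize(x, step):
    """Mid-tread uniform quantizer: round each sample to the nearest
    multiple of `step` (the basic model analyzed by Bennett in 1948)."""
    return step * np.round(np.asarray(x) / step)

# High-resolution theory predicts quantization-noise variance ~ step**2 / 12
# for a fine step size; check this empirically on a Gaussian source.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
step = 0.1
noise = x - uniform_quantize(x, step)
print(noise.var(), step**2 / 12)
```

With a step of 0.1 on a unit-variance Gaussian source, the measured noise variance matches the step^2/12 prediction to within a fraction of a percent.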
Deterministic Annealing for Clustering, Compression, Classification, Regression, and Related Optimization Problems
 Proceedings of the IEEE
, 1998
Abstract

Cited by 248 (11 self)
this paper. Let us place it within the neural network perspective, and particularly that of learning. The area of neural networks has greatly benefited from its unique position at the crossroads of several diverse scientific and engineering disciplines including statistics and probability theory, physics, biology, control and signal processing, information theory, complexity theory, and psychology (see [45]). Neural networks have provided a fertile soil for the infusion (and occasionally confusion) of ideas, as well as a meeting ground for comparing viewpoints, sharing tools, and renovating approaches. It is within the ill-defined boundaries of the field of neural networks that researchers in traditionally distant fields have come to the realization that they have been attacking fundamentally similar optimization problems.
Rate-Distortion Optimization for Video Compression
 IEEE Signal Processing Magazine
, 1998
Abstract

Cited by 187 (11 self)
this article. Some further techniques which go somewhat beyond this model will also be discussed. [Sidebar: A History of Existing Visual Coding Standards]
Long-Term Memory Motion-Compensated Prediction For Robust Video Transmission
, 2000
Abstract

Cited by 85 (26 self)
Long-term memory prediction extends the spatial displacement vector utilized in hybrid video coding by a variable time delay, permitting the use of more than one reference frame for motion compensation. This extension provides improved rate-distortion performance. However, motion compensation in combination with transmission errors leads to temporal error propagation that occurs when the reference frames at encoder and decoder differ. In this paper, we present a framework that incorporates an error estimate into rate-constrained motion estimation and mode decision. Experimental results with a Rayleigh fading channel show that long-term memory motion compensation significantly outperforms single-frame prediction. 1. INTRODUCTION The efficiency of long-term memory motion-compensated prediction (MCP) as an approach to improve coding performance has been demonstrated in [1]. The ITU-T has decided to adopt this feature as Annex U to version 3 of the H.263 standard. In this paper, we show that t...
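The core idea above, extending the spatial displacement (dy, dx) by a time delay dt over a memory of reference frames, can be sketched as an exhaustive block-matching search. This is a toy illustration of the search space only, not the paper's rate-constrained algorithm; all names and the SAD criterion are assumptions.

```python
import numpy as np

def long_term_motion_search(block, ref_frames, pos, search=4):
    """Toy long-term memory motion search: try every (dt, dy, dx)
    triple over a list of reference frames and return the best
    sum-of-absolute-differences (SAD) match."""
    y0, x0 = pos
    bh, bw = block.shape
    best = (np.inf, None)  # (SAD, (dt, dy, dx))
    for dt, ref in enumerate(ref_frames):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = y0 + dy, x0 + dx
                if 0 <= y and y + bh <= ref.shape[0] and 0 <= x and x + bw <= ref.shape[1]:
                    sad = np.abs(block - ref[y:y + bh, x:x + bw]).sum()
                    if sad < best[0]:
                        best = (sad, (dt, dy, dx))
    return best

rng = np.random.default_rng(1)
frames = [rng.integers(0, 255, (32, 32)).astype(float) for _ in range(3)]
# The current block is an exact copy of a block in the oldest frame (dt = 2),
# which a single-frame predictor (dt = 0 only) could not find.
block = frames[2][10:18, 12:20]
sad, (dt, dy, dx) = long_term_motion_search(block, frames, pos=(8, 10))
print(sad, dt, dy, dx)
```

The search finds a zero-SAD match in the oldest frame, illustrating why a longer reference memory can only improve the best attainable prediction.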
Vector Quantization with Complexity Costs
, 1993
Abstract

Cited by 54 (18 self)
Vector quantization is a data compression method where a set of data points is encoded by a reduced set of reference vectors, the codebook. We discuss a vector quantization strategy which jointly optimizes distortion errors and the codebook complexity, thereby determining the size of the codebook. A maximum entropy estimation of the cost function yields an optimal number of reference vectors, their positions, and their assignment probabilities. The dependence of the codebook density on the data density for different complexity functions is investigated in the limit of asymptotic quantization levels. How different complexity measures influence the efficiency of vector quantizers is studied for the task of image compression, i.e., we quantize the wavelet coefficients of gray-level images and measure the reconstruction error. Our approach establishes a unifying framework for different quantization methods like K-means clustering and its fuzzy version, entropy-constrained vector quantizati...
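One member of the family unified above is entropy-constrained vector quantization, where each point pays its distortion plus a complexity charge proportional to the codeword length. The assignment pass below is a sketch of that idea under assumed names and data, not the paper's maximum-entropy scheme.

```python
import numpy as np

def ecvq_assign(data, codebook, probs, lam):
    """One assignment pass of an entropy-constrained vector quantizer:
    each point is assigned to the reference vector minimizing
    squared distortion + lam * codeword length, where the length is
    -log2(probability of that reference vector)."""
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # squared distances
    cost = d + lam * (-np.log2(probs))[None, :]                   # add complexity charge
    return cost.argmin(1)

rng = np.random.default_rng(2)
data = rng.standard_normal((200, 2))
codebook = np.array([[0.0, 0.0], [3.0, 3.0]])
probs = np.array([0.9, 0.1])
# With a large lam, the cheap (high-probability) codeword wins everywhere,
# shrinking the effective codebook -- the complexity cost at work.
idx = ecvq_assign(data, codebook, probs, lam=100.0)
print(np.bincount(idx, minlength=2))
```

Setting lam = 0 recovers plain nearest-neighbor (K-means-style) assignment, which is how the complexity term interpolates between the methods the abstract lists.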
Vector Quantization of Image Subbands: A Survey
 IEEE Transactions on Image Processing
, 1996
Abstract

Cited by 53 (4 self)
Subband and wavelet decompositions are powerful tools in image coding because of their decorrelating effects on image pixels, the concentration of energy in a few coefficients, their multirate/multiresolution framework, and their frequency splitting, which allows for efficient coding matched to the statistics of each frequency band and to the characteristics of the human visual system. Vector quantization provides a means of converting the decomposed signal into bits in a manner that takes advantage of remaining inter- and intraband correlation as well as of the more flexible partitions of higher-dimensional vector spaces. Since 1988 a growing body of research has examined the use of vector quantization for subband/wavelet transform coefficients. We present a survey of these methods. 1 Introduction Image compression maps an original image into a bit stream suitable for communication over or storage in a digital medium. The number of bits required to represent the coded image should b...
Lagrange multiplier selection in hybrid video coder control
 in Proc. IEEE Int. Conf. Image Process., Thessaloniki
Abstract

Cited by 48 (9 self)
The Lagrangian coder control, together with the parameter choices that led to the creation of the new hybrid video coder specifications TMN10 for H.263 and TML for H.26L, is presented. An efficient approach for the determination of the encoding parameters is developed. It is shown by means of experimental results that the Lagrange multiplier for the macroblock mode decision corresponds to the negative slope of the distortion-rate curve of the prediction error coding. This distortion-rate curve is parameterized by the quantization parameter of the DCT coefficients, motivating the established dependency with the Lagrange multiplier.
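The macroblock mode decision described here minimizes the Lagrangian cost J = D + lambda * R over the candidate modes. The sketch below illustrates that selection rule; the mode names and their (distortion, rate) numbers are made up for illustration, not measured values.

```python
# Lagrangian mode decision: pick the mode minimizing J = D + lam * R.
def choose_mode(modes, lam):
    """modes: dict mapping mode name -> (distortion, rate in bits)."""
    return min(modes, key=lambda m: modes[m][0] + lam * modes[m][1])

# Hypothetical candidates for one macroblock.
modes = {"SKIP": (900.0, 1), "INTER": (220.0, 60), "INTRA": (120.0, 180)}
# A small lam (high-quality operating point) makes rate cheap...
print(choose_mode(modes, lam=0.5))
# ...a large lam (low-rate operating point) makes rate expensive.
print(choose_mode(modes, lam=50.0))
```

Sweeping lambda traces out the operating points on the distortion-rate curve, which is why the multiplier corresponds to the curve's negative slope.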
A Vector Quantization Approach to Universal Noiseless Coding and Quantization
 IEEE Trans. Inform. Theory
, 1996
Abstract

Cited by 44 (10 self)
A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which the first-stage code can be regarded as a vector quantizer that "quantizes" the input data of length n to one of a fixed collection of block codes. We apply the generalized Lloyd algorithm to the first-stage quantizer, using induced measures of rate and distortion, to design locally optimal two-stage codes. On a source of medical images, two-stage variable-rate vector quantizers designed in this way outperform standard (one-stage) fixed-rate vector quantizers by over 9 dB. The tail of the operational distortion-rate function of the first-stage quantizer determines the optimal rate of convergence of the redundancy of a universal sequence of two-stage codes. We show that there exist two-stage universal noiseless codes, fixed-rate quantizers, and variable-rate quantizers whose per-letter rate and distortion redundancies converge to zero as (k/2)n^{-1} log n, when the universe of sources has finite dimension k. This extends the achievability part of Rissanen's theorem from universal noiseless codes to universal quantizers. Further, we show that the redundancies converge as O(n^{-1}) when the universe of sources is countable, and as O(n^{-1+ε}) when the universe of sources is infinite-dimensional, under appropriate conditions. Index Terms: Two-stage, adaptive, compression, minimum description length, clustering.
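The first stage "quantizing" a data block to the best code in a collection can be sketched as follows. This is an illustrative toy with hypothetical scalar codebooks, using the induced distortion as the selection criterion; the paper itself designs the collection with a generalized Lloyd algorithm.

```python
import numpy as np

def two_stage_encode(block, codebooks):
    """Sketch of two-stage coding: stage 1 picks the code in the
    collection with the smallest induced distortion on this block;
    stage 2 encodes the block with that code (here, nearest-neighbor
    scalar quantization)."""
    best_i, best_d, best_idx = None, np.inf, None
    for i, cb in enumerate(codebooks):
        idx = np.abs(block[:, None] - cb[None, :]).argmin(1)  # stage-2 encoding
        d = ((block - cb[idx]) ** 2).mean()                   # induced distortion
        if d < best_d:
            best_i, best_d, best_idx = i, d, idx
    return best_i, best_idx  # (stage-1 code identity, stage-2 indices)

# Two hypothetical codebooks tuned to sources of different scale.
codebooks = [np.array([-0.5, 0.0, 0.5]), np.array([-5.0, 0.0, 5.0])]
small = np.array([0.1, -0.4, 0.45])
large = np.array([4.2, -5.1, 0.3])
print(two_stage_encode(small, codebooks)[0])
print(two_stage_encode(large, codebooks)[0])
```

Each block selects the codebook matched to its scale, which is the adaptivity that makes two-stage codes universal over a family of sources.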
Video compression: From concepts to the H.264/AVC standard
 PROCEEDINGS OF THE IEEE
, 2005
Abstract

Cited by 39 (0 self)
Over the last one and a half decades, digital video compression technologies have become an integral part of the way we create, communicate, and consume visual information. In this paper, techniques for video compression are reviewed, starting from basic concepts. The rate-distortion performance of modern video compression schemes is the result of an interaction between motion representation techniques, intra-picture prediction techniques, waveform coding of differences, and waveform coding of various refreshed regions. The paper starts with an explanation of the basic concepts of video codec design and then explains how these various features have been integrated into international standards, up to and including the most recent such standard, known as H.264/AVC.
Wavelet-based image coding: An overview
 Applied and Computational Control, Signals, and Circuits
, 1998
Abstract

Cited by 35 (3 self)
This paper presents an overview of wavelet-based image coding. We develop the basics of image coding with a discussion of vector quantization. We motivate the use of transform coding in practical settings, and describe the properties of various decorrelating transforms. We motivate the use of the wavelet transform in coding using rate-distortion considerations as well as approximation-theoretic considerations. Finally, we give an overview of current coders in the literature.