Results 1–10 of 91
Successive refinement of information
 Applications
, 1989
Abstract

Cited by 174 (0 self)
The successive refinement of information consists of first approximating data using a few bits of information, then iteratively improving the approximation as more and more information is supplied. The goal is to achieve an optimal description at each stage. In general, an ongoing description is sought which is rate-distortion optimal whenever it is interrupted. It is shown that a rate-distortion problem is successively refinable if and only if the individual solutions of the rate-distortion problems can be written as a Markov chain. This implies in particular that tree-structured descriptions are optimal if and only if the rate-distortion problem is successively refinable. Successive refinement is shown to be possible for all finite-alphabet signals with Hamming distortion, for Gaussian signals with squared-error distortion, and for Laplacian signals with absolute-error distortion. However, a simple counterexample with absolute-error distortion and a symmetric source distribution shows that successive refinement is not always achievable. Index Terms—Rate distortion, refinement, progressive transmission, multiuser information theory, squared-error distortion, tree structure.
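As a toy illustration of the idea (not the paper's construction), successive refinement of a Gaussian source under squared error can be mimicked with a two-stage uniform scalar quantizer: the second stage re-quantizes the residual of the first, so the refined description strictly improves on the coarse one. The step sizes and the quantizer itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)  # Gaussian source, refinable under squared error

def quantize(v, step):
    """Uniform scalar quantizer with the given step size (illustrative)."""
    return step * np.round(v / step)

# Stage 1 sends a coarse description; stage 2 refines it by quantizing
# the stage-1 residual with a finer step.
stage1 = quantize(x, step=1.0)
stage2 = stage1 + quantize(x - stage1, step=0.25)

d1 = np.mean((x - stage1) ** 2)  # distortion after the coarse stage
d2 = np.mean((x - stage2) ** 2)  # distortion after refinement
```

Interrupting the description after stage 1 still leaves a usable (coarser) reproduction, which is the property the paper characterizes.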
Rate-distortion optimized tree-structured compression algorithms for piecewise smooth images
 IEEE Trans. Image Processing
, 2005
Abstract

Cited by 67 (16 self)
This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (RD) behavior for a simple class of signals, known as piecewise polynomials, by using an RD-based prune and join scheme. For the one-dimensional (1-D) case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an RD-optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying RD behavior D(R) ∼ c₀ 2^(−c₁R), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is O(N log N). We then show the extension of this scheme to the two-dimensional (2-D) case using a quadtree. This quadtree coding scheme also achieves an exponentially decaying RD behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an RD-optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.
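A minimal sketch of the prune half of such an RD prune-and-join scheme follows (the join step and the paper's bit-allocation details are omitted; the 32-bits-per-coefficient rate model and the λ value are assumptions): split a segment in two only when the children's Lagrangian cost D + λR beats the parent's.

```python
import numpy as np

def rd_prune(signal, lo, hi, lam, deg=1, min_len=8):
    """Binary-tree segmentation: keep a split only when the children's
    Lagrangian cost D + lam*R beats the parent's (rate = bits/segment)."""
    t = np.arange(lo, hi)
    seg = signal[lo:hi]
    coef = np.polyfit(t, seg, deg)            # polynomial model for this segment
    dist = np.sum((seg - np.polyval(coef, t)) ** 2)
    rate = 32 * (deg + 1)                      # crude: 32 bits per coefficient
    parent_cost = dist + lam * rate
    if hi - lo < 2 * min_len:
        return parent_cost, [(lo, hi)]
    mid = (lo + hi) // 2
    lc, lsegs = rd_prune(signal, lo, mid, lam, deg, min_len)
    rc, rsegs = rd_prune(signal, mid, hi, lam, deg, min_len)
    if lc + rc < parent_cost:                  # children win: keep the split
        return lc + rc, lsegs + rsegs
    return parent_cost, [(lo, hi)]             # prune: parent models it better

# Piecewise-linear test signal with a single breakpoint at n = 64.
n = np.arange(128)
sig = np.where(n < 64, 0.5 * n, 32.0 - 0.25 * (n - 64))
cost, segments = rd_prune(sig, 0, 128, lam=0.01)
```

On this toy signal the tree is pruned back to exactly the two linear pieces, since any finer split pays extra rate with no distortion gain.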
Optimal Bit Allocation via the Generalized BFOS Algorithm
 IEEE Transactions on Information Theory
, 1991
Abstract

Cited by 62 (6 self)
We analyze the use of the generalized Breiman, Friedman, Olshen, and Stone (BFOS) algorithm, a recently developed technique for variable-rate vector quantizer design, for optimal bit allocation. It is shown that if each source has a convex quantizer function, then the complexity of the algorithm is low.
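The steepest-slope idea behind such bit allocation can be sketched as a greedy allocator over per-source distortion curves (the curves below are toy examples; the actual generalized BFOS algorithm operates on the convex hull of quantizer operating points rather than one bit at a time):

```python
def allocate_bits(dist_curves, total_bits):
    """dist_curves[i][r] = distortion of source i at r bits (convex in r).
    Greedily gives each bit to the source with the steepest distortion drop."""
    alloc = [0] * len(dist_curves)
    for _ in range(total_bits):
        best = max(
            range(len(dist_curves)),
            key=lambda i: (dist_curves[i][alloc[i]] - dist_curves[i][alloc[i] + 1])
            if alloc[i] + 1 < len(dist_curves[i]) else -1.0,
        )
        alloc[best] += 1
    return alloc

# Two toy convex distortion curves; source 0 decays faster early on.
d0 = [16.0, 4.0, 1.0, 0.25, 0.0625]
d1 = [16.0, 8.0, 4.0, 2.0, 1.0]
alloc = allocate_bits([d0, d1], 4)
```

Convexity of each curve is what makes this greedy choice globally optimal: the marginal return of every additional bit is non-increasing.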
Vector Quantization with Complexity Costs
, 1993
Abstract

Cited by 54 (18 self)
Vector quantization is a data compression method where a set of data points is encoded by a reduced set of reference vectors, the codebook. We discuss a vector quantization strategy which jointly optimizes distortion errors and the codebook complexity, thereby determining the size of the codebook. A maximum entropy estimation of the cost function yields an optimal number of reference vectors, their positions, and their assignment probabilities. The dependence of the codebook density on the data density for different complexity functions is investigated in the limit of asymptotic quantization levels. How different complexity measures influence the efficiency of vector quantizers is studied for the task of image compression, i.e., we quantize the wavelet coefficients of gray-level images and measure the reconstruction error. Our approach establishes a unifying framework for different quantization methods like K-means clustering and its fuzzy version, entropy-constrained vector quantizati...
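One way to make the distortion/complexity trade-off concrete (a sketch, not the paper's maximum-entropy formulation) is a Lloyd-style loop whose assignment cost adds λ times the codeword length −log₂ p_j: rarely used codewords attract no points and are dropped, so λ determines the effective codebook size.

```python
import numpy as np

def complexity_vq(data, k_init=8, lam=0.5, iters=50, seed=0):
    """Entropy-constrained VQ sketch for 1-D data: assignment cost is
    squared error plus lam * (-log2 p_j); unused codewords die off."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), k_init, replace=False)]
    probs = np.full(len(codebook), 1.0 / len(codebook))
    for _ in range(iters):
        cost = (data[:, None] - codebook[None, :]) ** 2 - lam * np.log2(probs)
        assign = cost.argmin(axis=1)
        keep = np.unique(assign)               # drop codewords with no points
        codebook = np.array([data[assign == j].mean() for j in keep])
        probs = np.array([(assign == j).mean() for j in keep])
    return codebook, probs

# Two well-separated 1-D clusters: the complexity term should prune the
# initial 8 codewords down toward one representative per cluster.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-5, 0.3, 500), rng.normal(5, 0.3, 500)])
cb, p = complexity_vq(data)
```

With λ = 0 this reduces to ordinary K-means on a fixed-size codebook, which is the unification the abstract refers to.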
Vector Quantization of Image Subbands: A Survey
 IEEE Transactions on Image Processing
, 1996
Abstract

Cited by 53 (4 self)
Subband and wavelet decompositions are powerful tools in image coding because of their decorrelating effect on image pixels, the concentration of energy in a few coefficients, their multirate/multiresolution framework, and their frequency splitting, which allows for efficient coding matched to the statistics of each frequency band and to the characteristics of the human visual system. Vector quantization provides a means of converting the decomposed signal into bits in a manner that takes advantage of remaining inter- and intraband correlation, as well as of the more flexible partitions of higher-dimensional vector spaces. Since 1988, a growing body of research has examined the use of vector quantization for subband/wavelet transform coefficients. We present a survey of these methods. 1 Introduction Image compression maps an original image into a bit stream suitable for communication over or storage in a digital medium. The number of bits required to represent the coded image should b...
Error Control for Receiver-driven Layered Multicast of Audio and Video
 IEEE TRANS. MULTIMEDIA
, 2001
Abstract

Cited by 35 (3 self)
We consider the problem of error control for receiver-driven layered multicast of audio and video over the Internet. The sender injects into the network multiple source layers and multiple channel coding (parity) layers, some of which are delayed relative to the source. Each receiver subscribes to the number of source layers and the number of parity layers that optimizes the receiver's quality for its available bandwidth and packet loss probability. We augment this layered FEC system with layered pseudo-ARQ. Although feedback is normally problematic in broadcast situations, ARQ can be simulated by having the receivers subscribe and unsubscribe to the delayed parity layers to receive missing information. This pseudo-ARQ scheme avoids an implosion of repeat requests at the sender and is scalable to an unlimited number of receivers.
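The receiver's subscription decision can be sketched as a small search over (source layers, parity layers) pairs. The rates, quality values, and protection rule below are hypothetical placeholders, not values from the paper:

```python
from itertools import product

def best_subscription(src_rates, fec_rates, quality, loss_ok, bandwidth):
    """Pick (s source layers, f parity layers) maximizing quality within
    the bandwidth budget; loss_ok(s, f) says whether f parity layers give
    enough protection for s source layers at the observed loss rate."""
    best, best_q = (0, 0), 0.0
    for s, f in product(range(len(src_rates) + 1), range(len(fec_rates) + 1)):
        rate = sum(src_rates[:s]) + sum(fec_rates[:f])
        if rate <= bandwidth and loss_ok(s, f) and quality[s] > best_q:
            best, best_q = (s, f), quality[s]
    return best

src_rates = [100, 100, 200]        # kbps per source layer (hypothetical)
fec_rates = [50, 50]               # kbps per parity layer (hypothetical)
quality = [0.0, 30.0, 33.0, 36.0]  # e.g. PSNR after decoding s layers
ok = lambda s, f: f >= (s + 1) // 2  # toy rule: 1 parity per 2 source layers
choice = best_subscription(src_rates, fec_rates, quality, ok, bandwidth=350)
```

At a 350 kbps budget the search settles on two source layers plus one parity layer, since the third source layer's protection requirement would exceed the budget.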
Multiresolution vector quantization
 IEEE TRANS. INF. THEORY
, 2004
Abstract

Cited by 33 (3 self)
Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher-resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
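The embedded-prefix property can be illustrated with a toy one-dimensional tree-structured quantizer (a sketch only; the paper's codes operate on vectors with designed codebooks): every extra index bit halves the cell, so decoding any prefix of the description yields a coarser reproduction.

```python
def encode(x, lo=0.0, hi=1.0, bits=6):
    """Binary-tree quantizer on [lo, hi): one bit per level of refinement."""
    out = []
    for _ in range(bits):
        mid = (lo + hi) / 2
        if x < mid:
            out.append(0); hi = mid
        else:
            out.append(1); lo = mid
    return out

def decode(prefix, lo=0.0, hi=1.0):
    """Decode any prefix of the description to that resolution's centroid."""
    for b in prefix:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if b == 0 else (mid, hi)
    return (lo + hi) / 2

code = encode(0.7)
coarse = decode(code[:2])  # low-resolution reproduction from a 2-bit prefix
fine = decode(code)        # full-resolution reproduction from all 6 bits
```

Stopping the decoder early never requires re-reading the stream, which is exactly the embedded behavior the abstract describes.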
Bayes Risk Weighted Vector Quantization With Posterior Estimation for Image Compression and Classification
 IEEE TRANSACTIONS ON IMAGE PROCESSING
, 1996
Abstract

Cited by 28 (8 self)
Classification and compression play important roles in communicating digital information. Their combination is useful in many applications, including the detection of abnormalities in compressed medical images. In view of the similarities of compression and low-level classification, it is not surprising that there are many similar methods for their design. Because some of these methods are useful for designing vector quantizers, it seems natural that vector quantization (VQ) is explored for the combined goal. We investigate several VQ-based algorithms that seek to minimize both the distortion of compressed images and errors in classifying their pixel blocks. These algorithms are investigated with both full-search and tree-structured codes. We emphasize a nonparametric technique that minimizes both error measures simultaneously by incorporating a Bayes risk component into the distortion measure used for design and encoding. We introduce a tree-structured posterior estimator to produce t...
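The Bayes-risk-weighted distortion idea can be sketched for scalar data with a 0/1 risk term (λ, the labels, and the codebook below are illustrative, and the paper uses estimated class posteriors rather than the true-label indicator used here):

```python
import numpy as np

def brvq_encode(x, y, codebook, class_of, lam):
    """Assign sample x with label y to the codeword minimizing squared
    error plus lam times a 0/1 misclassification risk (sketch)."""
    cost = [(x - c) ** 2 + lam * (class_of[j] != y)
            for j, c in enumerate(codebook)]
    return int(np.argmin(cost))

# Two codewords: 0.0 carries class "A", 1.0 carries class "B".
codebook = np.array([0.0, 1.0])
class_of = ["A", "B"]
# x = 0.4 is nearer codeword 0, but its true class is "B": with lam
# large enough, the risk term pushes the assignment to codeword 1.
plain = brvq_encode(0.4, "B", codebook, class_of, lam=0.0)
weighted = brvq_encode(0.4, "B", codebook, class_of, lam=1.0)
```

Sweeping λ trades reconstruction fidelity against classification accuracy, which is the joint objective the abstract describes.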