Results 1–10 of 272
Quantization
IEEE Trans. Inform. Theory, 1998
Abstract
Cited by 639 (11 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate-distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
Image Quality Assessment: From Error Visibility to Structural Similarity
IEEE Transactions on Image Processing, 2004
Abstract
Cited by 577 (40 self)
Objective methods for assessing perceptual image quality have traditionally attempted to quantify the visibility of errors between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
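The index the abstract describes combines luminance, contrast, and structure comparisons between the two images. A minimal single-window sketch (the published index averages this statistic over local sliding windows; computing it once over the whole image, as below, is only meant to illustrate the formula, with the usual stabilizing constants):

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM between two equal-size images.

    Simplified sketch: the real index is averaged over local windows;
    here the statistic is computed once over the full image.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizing constants
    c2 = (0.03 * data_range) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

For identical images the numerator and denominator coincide and the index is exactly 1; any distortion pulls it below 1.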
Progressive Geometry Compression, 2000
Abstract
Cited by 190 (13 self)
We propose a new progressive compression scheme for arbitrary-topology, highly detailed and densely sampled meshes arising from geometry scanning. We observe that meshes consist of three distinct components: geometry, parameter, and connectivity information. The latter two do not contribute to the reduction of error in a compression setting. Using semi-regular meshes, parameter and connectivity information can be virtually eliminated. Coupled with semi-regular wavelet transforms, zerotree coding, and subdivision-based reconstruction, we see improvements in error by a factor of four (12 dB) compared to other progressive coding schemes. CR Categories and Subject Descriptors: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - hierarchy and geometric transformations; G.1.2 [Numerical Analysis]: Approximation - approximation of surfaces and contours, wavelets and fractals; I.4.2 [Image Processing and Computer Vision]: Compression (Coding) - approximate methods. Additional K...
Space-Frequency Quantization for Wavelet Image Coding, 1997
Abstract
Cited by 152 (15 self)
Recently, a new class of image coding algorithms coupling standard scalar quantization of frequency coefficients with tree-structured quantization (related to spatial structures) has attracted wide attention because its good performance appears to confirm the promised efficiencies of hierarchical representation [1, 2]. This paper addresses the problem of how spatial quantization modes and standard scalar quantization can be applied in a jointly optimal fashion in an image coder. We consider zerotree quantization (zeroing out tree-structured sets of wavelet coefficients) and the simplest form of scalar quantization (a single common uniform scalar quantizer applied to all non-zeroed coefficients); we formalize the problem of optimizing their joint application and develop an image coding algorithm for solving the resulting optimization problem. Despite the basic form of the two quantizers considered, the resulting algorithm demonstrates coding performance that is competitive (often...
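The two building blocks named above are simple to state in code. A minimal sketch, assuming a midtread uniform quantizer and a precomputed boolean mask marking the zeroed tree-structured sets (the mask and step size here are illustrative inputs, not the paper's jointly optimized choices):

```python
import numpy as np

def quantize(coeffs, step, zeroed_mask):
    """Zerotree pass + single common uniform scalar quantizer.

    coeffs      : wavelet coefficients (NumPy array)
    step        : step size shared by all surviving coefficients
    zeroed_mask : True where a tree-structured set has been zeroed out
    Returns integer quantization indices (0 for zeroed coefficients).
    """
    indices = np.round(coeffs / step).astype(np.int64)  # midtread quantizer
    indices[zeroed_mask] = 0                            # zerotree pass
    return indices

def dequantize(indices, step):
    # Reconstruct each surviving coefficient at its bin center.
    return indices.astype(np.float64) * step
```

The reconstruction error of every non-zeroed coefficient is bounded by half the step size; the coder's real work lies in choosing the step and the zeroed sets jointly.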
Data compression and harmonic analysis
IEEE Trans. Inform. Theory, 1998
Abstract
Cited by 140 (24 self)
In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon’s R(D) theory...
Image Decomposition via the Combination of Sparse Representations and a Variational Approach
IEEE Transactions on Image Processing, 2004
Abstract
Cited by 127 (27 self)
The separation of image content into semantic parts plays a vital role in applications such as compression, enhancement, restoration, and more. In recent years several pioneering works suggested such a separation based on variational formulations, and others used independent component analysis and sparsity. This paper presents a novel method for separating images into texture and piecewise-smooth (cartoon) parts, exploiting both the variational and the sparsity mechanisms. The method combines the Basis Pursuit Denoising (BPDN) algorithm and the Total-Variation (TV) regularization scheme. The basic idea presented in this paper is the use of two appropriate dictionaries, one for the representation of textures, and the other for the natural scene parts, assumed to be piecewise smooth. Both dictionaries are chosen such that they lead to sparse representations over one type of image content (either texture or piecewise smooth). The use of BPDN with the two augmented dictionaries leads to the desired separation, along with noise removal as a by-product. As the need to choose proper dictionaries is generally hard, a TV regularization is employed to better direct the separation process and reduce ringing artifacts. We present a highly efficient numerical scheme to solve the combined optimization problem posed in our model, and show several experimental results that validate the algorithm's performance.
Progressive Image Coding for Noisy Channels
IEEE Signal Processing Letters, 1997
Abstract
Cited by 123 (9 self)
We cascade an existing image coder with carefully chosen error control coding, and thus produce a progressive image compression scheme whose performance on a noisy channel is significantly better than that of previously known techniques. The main idea is to trade off the available transmission rate between source coding and channel coding in an efficient manner. This coding system is easy to implement and has acceptably low complexity. Furthermore, effectively no degradation due to channel noise can be detected; instead, the penalty paid due to channel noise is a reduction in source coding resolution. Detailed numerical comparisons are given that can serve as benchmarks for comparisons with future encoding schemes. For example, for the 512 × 512 Lena image, at a transmission rate of 1 b/pixel, and for binary symmetric channels with bit error probabilities of 10^-3, 10^-2, and 10^-1, the proposed system outperforms previously reported results by at least 2.6, 2.8, and 8.9 dB, respectively.
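The source/channel rate trade-off can be illustrated with a toy exhaustive search over channel-code rates. Everything here is a hypothetical model, not the paper's method: `distortion` stands in for the source coder's rate-distortion curve and `p_fail` for the channel code's decoding-failure probability.

```python
def best_rate_split(total_rate, code_rates, distortion, p_fail):
    """Pick the channel-code rate that minimizes expected distortion.

    A channel code of rate r leaves total_rate * r bits for the source
    coder; with probability p_fail(r) decoding fails and we fall back
    to the zero-rate distortion. Toy exhaustive search, illustrative only.
    """
    best_rate, best_d = None, float("inf")
    for r in code_rates:
        d = (p_fail(r) * distortion(0.0)
             + (1.0 - p_fail(r)) * distortion(total_rate * r))
        if d < best_d:
            best_rate, best_d = r, d
    return best_rate, best_d
```

On a clean channel the search spends everything on the source coder (code rate 1); as the channel worsens, lower code rates (more redundancy) win despite the reduced source resolution.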
Context-Based Adaptive Binary Arithmetic Coding in the H.264/AVC Video Compression Standard
IEEE Transactions on Circuits and Systems for Video Technology
Abstract
Cited by 110 (6 self)
Context-based adaptive binary arithmetic coding (CABAC) as a normative part of the new ITU-T/ISO/IEC standard H.264/AVC for video compression is presented. By combining an adaptive binary arithmetic coding technique with context modeling, a high degree of adaptation and redundancy reduction is achieved. The CABAC framework also includes a novel low-complexity method for binary arithmetic coding and probability estimation that is well suited for efficient hardware and software implementations. CABAC significantly outperforms the baseline entropy coding method of H.264/AVC for the typical area of envisaged target applications. For a set of test sequences representing typical material used in broadcast applications, and for a range of acceptable video quality of about 30 to 38 dB, average bit-rate savings of 9%–14% are achieved. Index Terms: binary arithmetic coding, CABAC, context modeling, entropy coding, H.264, MPEG-4 AVC.
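The context modeling the abstract summarizes rests on keeping a separate, continuously updated probability estimate for the binary symbol in each context. A toy counts-based estimator conveys the idea; the actual standard specifies a table-driven finite-state machine for this, not the Laplace-smoothed counter below.

```python
class ContextModel:
    """Per-context adaptive probability estimate for a binary symbol.

    Illustrative counts-based estimator, not the CABAC state machine.
    """

    def __init__(self):
        self.counts = [1, 1]  # Laplace-smoothed counts of 0s and 1s

    def p_one(self):
        # Current estimate of P(bit = 1), fed to the arithmetic coder.
        return self.counts[1] / (self.counts[0] + self.counts[1])

    def update(self, bit):
        # Adapt after each coded bit so the model tracks local statistics.
        self.counts[bit] += 1
```

Skewed probability estimates are what let the arithmetic coder spend well under one bit per binary symbol on predictable syntax elements.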
Unequal Loss Protection: Graceful Degradation of Image Quality over Packet Erasure Channels through Forward Error Correction
In DCC, 2000
Abstract
Cited by 106 (6 self)
We present the unequal loss protection (ULP) framework, in which unequal amounts of forward error correction are applied to progressive data to provide graceful degradation of image quality as packet losses increase. We develop a simple algorithm that can find a good assignment within the ULP framework. We use the Set Partitioning in Hierarchical Trees coder in this work, but our algorithm can protect any progressive compression scheme. In addition, we promote the use of a PMF of expected channel conditions so that our system can work with almost any model or estimate of packet losses. We find that when optimizing for an exponential packet loss model with a mean loss rate of 20% and using a total rate of 0.2 bits per pixel on the Lenna image, good image quality can be obtained even when 40% of transmitted packets are lost.
Image Quality Assessment: From Error Measurement to Structural Similarity
IEEE Trans. Image Processing, 2004
Abstract
Cited by 99 (13 self)
Objective methods for assessing perceptual image quality traditionally attempt to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.