Results 1 - 10 of 44
The Laplacian Pyramid as a Compact Image Code
 IEEE Transactions on Communications
, 1983
"... We describe a technique for image encoding in which local operators of many scales but identical shape serve as the basis functions. The representation differs from established techniques in that the code elements are localized in spatial frequency as well as in space. Pixeltopixel correlations ar ..."
Abstract

Cited by 1021 (11 self)
We describe a technique for image encoding in which local operators of many scales but identical shape serve as the basis functions. The representation differs from established techniques in that the code elements are localized in spatial frequency as well as in space. Pixel-to-pixel correlations are first removed by subtracting a low-pass filtered copy of the image from the image itself. The result is a net data compression since the difference, or error, image has low variance and entropy, and the low-pass filtered image may be represented at reduced sample density. Further data compression is achieved by quantizing the difference image. These steps are then repeated to compress the low-pass image. Iteration of the process at appropriately expanded scales generates a pyramid data structure. The encoding process is equivalent to sampling the image with Laplacian operators of many scales. Thus, the code tends to enhance salient image features. A further advantage of the present code is tha...
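The subtract-filter-downsample loop described in the abstract can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's exact filters: a 1-2-1 binomial kernel stands in for the paper's Gaussian-like REDUCE filter, and nearest-neighbour repetition stands in for EXPAND.

```python
import numpy as np

def blur_downsample(img):
    """Low-pass filter with a separable 1-2-1 kernel, then keep every
    other sample (a crude stand-in for the paper's REDUCE operation)."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    return img[::2, ::2]

def upsample(img, shape):
    """Nearest-neighbour expansion back to `shape` (stand-in for EXPAND)."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    pyramid = []
    current = img.astype(float)
    for _ in range(levels):
        low = blur_downsample(current)
        # Difference (error) image: low variance and entropy, cheap to quantize.
        pyramid.append(current - upsample(low, current.shape))
        current = low
    pyramid.append(current)  # final low-pass residual
    return pyramid

def reconstruct(pyramid):
    current = pyramid[-1]
    for diff in reversed(pyramid[:-1]):
        current = upsample(current, diff.shape) + diff
    return current
```

Because each level stores the exact difference against the expanded low-pass copy, reconstruction without quantization is lossless; the compression comes from the difference images being cheap to quantize and the low-pass image living at a quarter of the sample density.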
Quantization
 IEEE TRANS. INFORM. THEORY
, 1998
"... The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analogtodigital conversion was first recognized during the early development of pulsecode modula ..."
Abstract

Cited by 652 (11 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
A Multiresolution Spline With Application to Image Mosaics
, 1983
"... this paper was supported by NSF grant ECS8206321. A shorter description of this work was published in the Proceedings of SPIE, vol. 432, Applications of Digital Image Processing VI, The International Society for Optical Engineering, Bellingham, Washington. Authors' address: RCA David Sarnoff Resear ..."
Abstract

Cited by 272 (4 self)
This paper was supported by NSF grant ECS-8206321. A shorter description of this work was published in the Proceedings of SPIE, vol. 432, Applications of Digital Image Processing VI, The International Society for Optical Engineering, Bellingham, Washington. Authors' address: RCA David Sarnoff Research Center, Princeton, NJ 08540. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1983 ACM 0730-0301/83/1000-0217 $00.75. ACM Transactions on Graphics, Vol. 2, No. 4, October 1983, pages 217-236. P. J. Burt and E. H. Adelson.
Fig. 1. A pair of images may be represented as a pair of surfaces above the (x, y) plane. The problem of image splining is to join these surfaces with a smooth seam, with as little distortion of each surface as possible.
... multiple telescope photographs. In each of these cases, the mosaic technique is used to construct an image with a far larger field of view or level of detail than could be obtained with a single photograph. In advertising or computer graphics, the technique can be used to create synthetic images from possibly unrelated components. A technical problem common to all applications of photomosaics is joining two images so that the edge between them is not visible. Even slight differences in image gray level across an extended boundary can make that boundary quite noticeable. Unfortunately, such gray level differences are frequently unavoidable; they may be due to such factors as differe...
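The key idea of the multiresolution spline is to switch between the two images sharply in the fine frequency bands but over a proportionally wider zone in the coarse bands, which hides the seam. A simplified 1-D sketch under loose assumptions (a 1-2-1 smoothing filter and no decimation, rather than the paper's full Laplacian pyramid):

```python
import numpy as np

def smooth(x):
    """1-2-1 low-pass filter with edge padding."""
    p = np.pad(x, 1, mode="edge")
    return (p[:-2] + 2 * p[1:-1] + p[2:]) / 4.0

def multires_spline(a, b, mask, levels=4):
    """Blend signals a and b across the seam defined by `mask` (1 = take a).
    Each octave's band-pass detail is mixed with a mask smoothed once more,
    so coarse structure transitions over a wider zone than fine detail."""
    out = np.zeros_like(a, dtype=float)
    la, lb, m = a.astype(float), b.astype(float), mask.astype(float)
    for _ in range(levels):
        sa, sb = smooth(la), smooth(lb)
        out += m * (la - sa) + (1 - m) * (lb - sb)  # band-pass components
        la, lb, m = sa, sb, smooth(m)               # widen the mask each octave
    out += m * la + (1 - m) * lb                    # residual low-pass band
    return out
```

Blending two constant signals with a step mask yields a smooth ramp instead of a visible step, which is exactly the seam-hiding behaviour the paper exploits in 2-D.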
The LOCO-I Lossless Image Compression Algorithm: Principles and Standardization into JPEG-LS
 IEEE TRANSACTIONS ON IMAGE PROCESSING
, 2000
"... LOCOI (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and nearlossless compression of continuoustone images, JPEGLS. It is conceived as a "low complexity projection" of the universal context modeling paradigm, matching its mo ..."
Abstract

Cited by 152 (8 self)
LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. It is conceived as a "low complexity projection" of the universal context modeling paradigm, matching its modeling unit to a simple coding unit. By combining simplicity with the compression potential of context models, the algorithm "enjoys the best of both worlds." It is based on a simple fixed context model, which approaches the capability of the more complex universal techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with an extended family of Golomb-type codes, which are adaptively chosen, and an embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level. We discuss the principles underlying the design of LOCO-I, and its standardization into JPEG-LS.
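The Golomb-type codes mentioned above are, in JPEG-LS, restricted to power-of-two parameters m = 2^k (Rice codes), which keeps encoding and decoding down to shifts and masks. A minimal sketch of that power-of-two special case (bit strings instead of a real bit stream, for clarity):

```python
def rice_encode(n, k):
    """Golomb code with parameter m = 2**k: the quotient n >> k in unary
    (q ones, then a zero), followed by the k low-order remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def rice_decode(bits, k):
    """Invert rice_encode: count ones up to the first zero, then read k bits."""
    q = bits.index("0")
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

Small residuals get short codewords and large ones grow only linearly, which matches the roughly geometric distribution of prediction residuals that LOCO-I's context model produces.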
Color image quantization for frame buffer display
 Computer Graphics
, 1982
"... Algorithms for approximately optimal quantization of color images are discussed. The distortion measure used is the distance in RGB space. These algorithms are used to compute the color map for lowdepth frame buffers in order to allow highquality static images to be displayed. It is demonstrated t ..."
Abstract

Cited by 141 (0 self)
Algorithms for approximately optimal quantization of color images are discussed. The distortion measure used is the distance in RGB space. These algorithms are used to compute the color map for low-depth frame buffers in order to allow high-quality static images to be displayed. It is demonstrated that most color images can be very well displayed using only 256 or 512 colors. Thus frame buffers of only 8 or 9 bits can display images that normally require 15 bits or more per pixel. Work reported herein was sponsored by the IBM Corporation through a general grant agreement to MIT dated July 1, 1979.
TABLE OF CONTENTS
I. Introduction
II. Frame Buffers and Colormaps
III. 1-Dimensional Tapered Quantization
IV. 3-Dimensional Tapered Quantization
V. Conclusions and Ideas for Further Study ...
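One classic heuristic for building such a color map is median cut (associated with this line of work on tapered quantization, though this toy version is not necessarily the paper's exact algorithm): repeatedly split the box of pixels along its widest RGB axis at the median, then use each box's mean color as a palette entry.

```python
import numpy as np

def median_cut(pixels, n_colors):
    """Toy median-cut palette builder. `pixels` is an (N, 3) array of RGB
    values; returns an (n_colors, 3) palette. Assumes every box stays
    non-empty (fine when N >> n_colors)."""
    boxes = [pixels.astype(float)]
    while len(boxes) < n_colors:
        # Split the box with the largest spread along any channel.
        i = max(range(len(boxes)), key=lambda j: np.ptp(boxes[j], axis=0).max())
        box = boxes.pop(i)
        axis = int(np.ptp(box, axis=0).argmax())
        box = box[box[:, axis].argsort()]
        mid = len(box) // 2
        boxes += [box[:mid], box[mid:]]
    return np.array([b.mean(axis=0) for b in boxes])
```

Because splits happen where the pixels actually are, the palette is "tapered": densely populated regions of RGB space get many palette entries and sparse regions get few, which is why 256 entries can stand in for a 15-bit-per-pixel image.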
The Design and Analysis of Efficient Lossless Data Compression Systems
, 1993
"... Our thesis is that high compression efficiency for text and images can be obtained by using sophisticated statistical compression techniques, and that greatly increased speed can be achieved at only a small cost in compression efficiency. Our emphasis is on elegant design and mathematical as well as ..."
Abstract

Cited by 49 (0 self)
Our thesis is that high compression efficiency for text and images can be obtained by using sophisticated statistical compression techniques, and that greatly increased speed can be achieved at only a small cost in compression efficiency. Our emphasis is on elegant design and mathematical as well as empirical analysis. We analyze arithmetic coding as it is commonly implemented and show rigorously that almost no compression is lost in the implementation. We show that high-efficiency lossless compression of both text and grayscale images can be obtained by using appropriate models in conjunction with arithmetic coding. We introduce a four-component paradigm for lossless image compression and present two methods that give state-of-the-art compression efficiency. In the text compression area, we give a small improvement on the preferred method in the literature. We show that we can often obtain significantly improved throughput at the cost of slightly reduced compression. The extra speed c...
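The claim that arithmetic coding loses almost nothing relative to the ideal is easy to see in an exact-arithmetic sketch: the coder narrows an interval by each symbol's probability, and any point of the final interval can be pinned down in at most ceil(-log2 width) + 1 bits, i.e. within two bits of the message's information content. This uses unbounded-precision `Fraction`s, not the finite-precision registers a practical implementation (and the thesis's analysis) deals with.

```python
from fractions import Fraction
from math import ceil, log2

def ac_encode(symbols, probs):
    """Narrow [low, low + width) by each symbol's probability slice, then
    emit an integer code identifying a dyadic point inside the interval."""
    low, width = Fraction(0), Fraction(1)
    for s in symbols:
        cum = Fraction(0)
        for t, p in probs.items():
            if t == s:
                low, width = low + cum * width, width * p
                break
            cum += p
    nbits = ceil(-log2(width)) + 1          # always enough to land inside
    return ceil(low * 2 ** nbits), nbits

def ac_decode(code, nbits, probs, n):
    """Replay the interval narrowing, picking the slice containing the code point."""
    x = Fraction(code, 2 ** nbits)
    low, width = Fraction(0), Fraction(1)
    out = []
    for _ in range(n):
        cum = Fraction(0)
        for t, p in probs.items():
            if x < low + (cum + p) * width:
                out.append(t)
                low, width = low + cum * width, width * p
                break
            cum += p
    return out
```

For an 8-symbol message carrying exactly 11 bits of information under its model, the code length comes out to 12 bits: the overhead is a constant couple of bits for the whole message, not per symbol.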
Morphological Operators for Image and Video Compression
 IEEE Trans. on Image Processing
, 1996
"... This paper deals with the use of some morphological tools for image and video coding. Mathematical morphology can be considered as a shapeoriented approach to signal processing and some of its features make it very useful for compression. Rather than describing a coding algorithm, the purpose of th ..."
Abstract

Cited by 24 (1 self)
This paper deals with the use of some morphological tools for image and video coding. Mathematical morphology can be considered as a shape-oriented approach to signal processing, and some of its features make it very useful for compression. Rather than describing a coding algorithm, the purpose of this paper is to describe some morphological tools that have recently proved to be attractive for compression. Four sets of morphological transformations are presented: connected operators, the region-growing version of the watershed, the geodesic skeleton, and a morphological interpolation technique. Their implementation will be discussed and we will show how they can be used for image and video segmentation, for contour coding, and for texture coding.
I. Introduction. Image and video compression techniques generally rely on results from information theory. In this framework, compression is achieved by a decorrelation of the signal followed by quantization and entropy coding of the infor...
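The building blocks behind the operators listed above are grayscale erosion and dilation: minimum and maximum over a structuring element. A minimal NumPy sketch (flat square element; the paper's connected operators and watershed are more elaborate than this):

```python
import numpy as np

def dilate(img, size=3):
    """Each pixel becomes the maximum over its size x size neighbourhood."""
    r = size // 2
    p = np.pad(img, r, mode="edge")
    h, w = img.shape
    return np.max([p[i:i + h, j:j + w]
                   for i in range(size) for j in range(size)], axis=0)

def erode(img, size=3):
    """Each pixel becomes the minimum over its size x size neighbourhood."""
    r = size // 2
    p = np.pad(img, r, mode="edge")
    h, w = img.shape
    return np.min([p[i:i + h, j:j + w]
                   for i in range(size) for j in range(size)], axis=0)

def opening(img, size=3):
    """Erosion followed by dilation: removes bright details smaller than the
    structuring element while roughly preserving larger shapes -- the kind of
    shape-preserving simplification that helps segmentation-based coding."""
    return dilate(erode(img, size), size)
```

An opening wipes out an isolated bright pixel but leaves a 5x5 bright square intact, which illustrates why such operators simplify an image without blurring the contours a segmentation-based coder needs.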
Factorial coding of natural images: how effective are linear models in removing higher-order dependencies?
 JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A
, 2006
"... The performance of unsupervised learning models for natural images is evaluated quantitatively by means of information theory. We estimate the gain in statistical independence (the multiinformation reduction) achieved with independent component analysis (ICA), principal component analysis (PCA), z ..."
Abstract

Cited by 21 (6 self)
The performance of unsupervised learning models for natural images is evaluated quantitatively by means of information theory. We estimate the gain in statistical independence (the multi-information reduction) achieved with independent component analysis (ICA), principal component analysis (PCA), zero-phase whitening, and predictive coding. Predictive coding is translated into the transform coding framework, where it can be characterized by the constraint of a triangular filter matrix. A randomly sampled whitening basis and the Haar wavelet are included into the comparison as well. The comparison of all these methods is carried out for different patch sizes, ranging from 2x2 to 16x16 pixels. In spite of large differences in the shape of the basis functions, we find only small differences in the multi-information between all decorrelation transforms (5% or less) for all patch sizes. Among the second-order methods, PCA is optimal for small patch sizes and predictive coding performs best for large patch sizes. The extra gain achieved with ICA is always less than 2%. In conclusion, the 'edge filters' found with ICA lead only to a surprisingly small improvement in terms of its actual objective.
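PCA whitening, one of the second-order methods compared above, can be sketched on synthetic correlated data: rotate onto the eigenvectors of the covariance and rescale each component to unit variance, after which all second-order dependencies are gone. (The paper's point is that, on natural images, the further gain from ICA beyond such decorrelation is under 2%; this toy Gaussian example only illustrates the whitening step itself.)

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic correlated 2-D "patches": a known mixing of independent Gaussians.
z = rng.normal(size=(10_000, 2))
x = z @ np.array([[1.0, 0.8],
                  [0.0, 0.6]])

# PCA whitening: eigendecompose the sample covariance, rotate, rescale.
cov = np.cov(x, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
white = (x @ evecs) / np.sqrt(evals)

# Second-order dependencies removed: covariance of `white` is the identity.
print(np.cov(white, rowvar=False).round(3))
```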
Frame bit allocation for the H.264/AVC video coder via Cauchy-density-based rate and distortion models
 IEEE TRANS. CIRCUITS SYST. VIDEO TECHNOL
, 2005
"... Based on the observation that a Cauchy density is more accurate in estimating the distribution of the ac coefficients than the traditional Laplacian density, rate and distortion models with improved accuracy are developed. The entropy and distortion models for quantized discrete cosine transform co ..."
Abstract

Cited by 19 (0 self)
Based on the observation that a Cauchy density is more accurate in estimating the distribution of the AC coefficients than the traditional Laplacian density, rate and distortion models with improved accuracy are developed. The entropy and distortion models for quantized discrete cosine transform coefficients are justified in a frame bit-allocation application for H.264. Extensive analysis with carefully selected anchor video sequences demonstrates a 0.24 dB average peak signal-to-noise ratio (PSNR) improvement over the JM 8.4 rate control algorithm, and a 0.33 dB average PSNR improvement over the TM5-based bit-allocation algorithm that has recently been proposed for H.264 by Li et al. The analysis also demonstrates 20% and 60% reductions in PSNR variation among the encoded pictures when compared to the JM 8.4 rate control algorithm and the TM5-based bit-allocation algorithm, respectively.
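The rate model in question is, at bottom, the entropy of the quantized coefficient distribution as a function of the quantization step. A numerical sketch of that quantity for Cauchy and Laplacian fits (the paper derives analytic approximations; this just evaluates the entropy by binning each density, with hypothetical scale parameters of 1):

```python
import numpy as np

def quantized_entropy(pdf, step, span=200.0):
    """Entropy in bits/sample of a source with density `pdf` after uniform
    quantization with the given step, via a midpoint approximation of the
    probability mass in each bin over [-span, span]."""
    edges = np.arange(-span, span + step, step)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = pdf(centers) * step
    p = p[p > 1e-15]
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

cauchy = lambda x, g=1.0: (g / np.pi) / (g ** 2 + x ** 2)
laplace = lambda x, lam=1.0: 0.5 * lam * np.exp(-lam * np.abs(x))

for q in (0.5, 1.0, 2.0):
    print(q, quantized_entropy(cauchy, q), quantized_entropy(laplace, q))
```

The heavier Cauchy tail yields a noticeably different rate-versus-step curve than the Laplacian, which is why fitting the wrong density misallocates bits across frames.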
Optimal Prefix Codes for Sources with Two-Sided Geometric Distributions
, 1997
"... A complete characterization of optimal prefix codes for offcentered, twosided geometric distributions of the integers is presented. These distributions are often encountered in lossless image compression applications, as probabilistic models for image prediction residuals. The family of optimal co ..."
Abstract

Cited by 15 (2 self)
A complete characterization of optimal prefix codes for off-centered, two-sided geometric distributions of the integers is presented. These distributions are often encountered in lossless image compression applications, as probabilistic models for image prediction residuals. The family of optimal codes described is an extension of the Golomb codes, which are optimal for one-sided geometric distributions. The new family of codes allows for encoding of prediction residuals at a complexity similar to that of Golomb codes, without recourse to the heuristic approximations frequently used when modifying a code designed for non-negative integers so as to apply to the encoding of any integer. Optimal decision rules for choosing among a lower-complexity subset of the optimal codes, given the distribution parameters, are also investigated, and the relative redundancy of the subset with respect to the full family of optimal codes is bounded. Index Terms: Lossless image compression, Huffman code, ...
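The "heuristic approximation frequently used" is the zigzag remapping of signed residuals onto non-negative integers, followed by an ordinary Golomb code. A sketch of that baseline, with a general parameter m >= 2 and the standard truncated-binary remainder (the paper's contribution is the family of codes that is optimal without this detour):

```python
def zigzag(v):
    """Map any integer to a non-negative one: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return 2 * v if v >= 0 else -2 * v - 1

def golomb_encode(n, m):
    """Golomb code for n >= 0 with parameter m >= 2: unary quotient, then a
    truncated-binary remainder (k-1 bits for the first t = 2**k - m values,
    k bits for the rest, where k = ceil(log2 m))."""
    q, r = divmod(n, m)
    k = m.bit_length() if m & (m - 1) else m.bit_length() - 1
    t = (1 << k) - m
    rem = format(r, f"0{k - 1}b") if r < t else format(r + t, f"0{k}b")
    return "1" * q + "0" + rem

def golomb_decode(bits, m):
    """Invert golomb_encode."""
    q = bits.index("0")
    k = m.bit_length() if m & (m - 1) else m.bit_length() - 1
    t = (1 << k) - m
    pos = q + 1
    x = int(bits[pos:pos + k - 1], 2) if k > 1 else 0
    r = x if x < t else (x << 1 | int(bits[pos + k - 1])) - t
    return q * m + r
```

The zigzag step interleaves positive and negative residuals by magnitude, which is only approximately monotone in probability for an off-centered two-sided geometric source; that mismatch is the redundancy the paper's extended code family eliminates.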