Results 1–10 of 13
Lossless Compression of Continuous-tone Images via Context Selection, Quantization, and Modeling
, 1996
Cited by 70 (3 self)
Abstract
Context modeling is an extensively studied paradigm for lossless compression of continuous-tone images. However, without careful algorithm design, high-order Markovian modeling of continuous-tone images is too expensive in both computational time and space to be practical. Furthermore, the exponential growth of the number of modeling states in the order of a Markov model can quickly lead to the problem of context dilution; that is, an image may not have enough samples for good estimates of conditional probabilities associated with the modeling states. In this paper new techniques for context modeling of DPCM errors are introduced that can exploit context-dependent DPCM error structures to the benefit of compression. New algorithmic techniques of forming and quantizing modeling contexts are also developed to alleviate the problem of context dilution and reduce both time and space complexities. By innovative formation, quantization, and use of modeling contexts, the proposed lossless i...
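The abstract above describes two ideas in combination: prediction (DPCM) residuals, and quantizing a local context so that each context bucket accumulates enough samples to estimate its error statistics. The following is a minimal sketch of that combination under simplifying assumptions (a toy west-neighbor predictor and a crude logarithmic gradient bucketing); it is illustrative only and not the paper's actual context formation scheme.

```python
import numpy as np

def dpcm_residuals(img, n_ctx=8):
    """Predict each pixel from its west neighbor (a toy DPCM predictor),
    and quantize the local gradient |N - W| into a few buckets so that
    per-context error statistics stay well populated, mitigating the
    context-dilution problem the abstract describes."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    residuals = np.zeros_like(img)
    contexts = np.zeros_like(img)
    # Gradient magnitudes are bucketed on a crude logarithmic scale
    # (an illustrative choice, not the paper's quantizer).
    bounds = [0, 1, 2, 4, 8, 16, 32, 64]
    for y in range(h):
        for x in range(w):
            west = img[y, x - 1] if x > 0 else 0
            north = img[y - 1, x] if y > 0 else 0
            residuals[y, x] = img[y, x] - west
            grad = abs(north - west)
            contexts[y, x] = min(np.searchsorted(bounds, grad, side='right') - 1,
                                 n_ctx - 1)
    return residuals, contexts
```

Because the predictor is causal, the image is exactly recoverable from the residuals (here, a cumulative sum along each row), which is what makes the scheme lossless.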
The Design and Analysis of Efficient Lossless Data Compression Systems
, 1993
Cited by 49 (0 self)
Abstract
Our thesis is that high compression efficiency for text and images can be obtained by using sophisticated statistical compression techniques, and that greatly increased speed can be achieved at only a small cost in compression efficiency. Our emphasis is on elegant design and mathematical as well as empirical analysis. We analyze arithmetic coding as it is commonly implemented and show rigorously that almost no compression is lost in the implementation. We show that high-efficiency lossless compression of both text and grayscale images can be obtained by using appropriate models in conjunction with arithmetic coding. We introduce a four-component paradigm for lossless image compression and present two methods that give state of the art compression efficiency. In the text compression area, we give a small improvement on the preferred method in the literature. We show that we can often obtain significantly improved throughput at the cost of slightly reduced compression. The extra speed c...
Lossless Generalized-LSB Data Embedding
, 2002
Cited by 26 (1 self)
Abstract
We present a novel lossless (reversible) data embedding technique, which enables the exact recovery of the original host signal upon extraction of the embedded information. A generalization of the well-known LSB (least significant bit) modification is proposed as the data embedding method, which introduces additional operating points on the capacity-distortion curve. Lossless recovery of the original is achieved by compressing portions of the signal that are susceptible to embedding distortion, and transmitting these compressed descriptions as a part of the embedded payload. A prediction-based conditional entropy coder which utilizes unaltered portions of the host signal as side-information improves the compression efficiency, and thus the lossless data embedding capacity.
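The core arithmetic of generalized-LSB modification can be sketched as follows: instead of replacing one bit, the sample's remainder modulo a level L is replaced by a payload symbol, so L = 2 reduces to ordinary LSB substitution and larger L trades distortion for capacity. The function names are mine, and the sketch omits the reversibility machinery (compressing the original remainders into the payload) that the abstract describes.

```python
def glsb_embed(sample, symbol, L):
    """Generalized-LSB embedding: replace the sample's remainder mod L
    with a payload symbol in [0, L). With L = 2 this is ordinary LSB
    substitution."""
    assert 0 <= symbol < L
    return (sample // L) * L + symbol

def glsb_extract(marked, L):
    """Recover the embedded symbol from a marked sample."""
    return marked % L
```

Lossless recovery of the host requires transmitting the displaced remainders (compressed) as part of the payload; this sketch shows only the embedding arithmetic that defines the extra capacity-distortion operating points.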
A Context-based, Adaptive, Lossless/Nearly-Lossless Coding Scheme for Continuous-tone Images
 ISO/IEC JTC 1/SC 29/WG 1 document No
, 1995
Cited by 23 (2 self)
Abstract
this memory requirement remains constant for all input images, provided that one row of pixels can be held by a 6K buffer.
Optimal Prefix Codes for Sources with Two-Sided Geometric Distributions
, 1997
Cited by 15 (2 self)
Abstract
A complete characterization of optimal prefix codes for off-centered, two-sided geometric distributions of the integers is presented. These distributions are often encountered in lossless image compression applications, as probabilistic models for image prediction residuals. The family of optimal codes described is an extension of the Golomb codes, which are optimal for one-sided geometric distributions. The new family of codes allows for encoding of prediction residuals at a complexity similar to that of Golomb codes, without recourse to the heuristic approximations frequently used when modifying a code designed for nonnegative integers so as to apply to the encoding of any integer. Optimal decision rules for choosing among a lower complexity subset of the optimal codes, given the distribution parameters, are also investigated, and the relative redundancy of the subset with respect to the full family of optimal codes is bounded. Index Terms: Lossless image compression, Huffman code, ...
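For context, the baseline this paper extends can be sketched concretely: a Golomb-Rice code (the Golomb code with parameter m = 2**k) for non-negative integers, together with the "zigzag" folding of signed residuals onto non-negative integers, which is exactly the kind of heuristic approximation the abstract says the new optimal codes avoid. This sketch shows the standard constructions, not the paper's extended code family.

```python
def rice_encode(n, k):
    """Golomb-Rice code (Golomb code with m = 2**k) for a non-negative
    integer n: a unary quotient, a '0' terminator, then a k-bit binary
    remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + format(r, f'0{k}b') if k else '1' * q + '0'

def zigzag(e):
    """Heuristic folding of a signed residual onto the non-negative
    integers (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...), the common
    workaround for applying one-sided codes to two-sided residuals."""
    return 2 * e if e >= 0 else -2 * e - 1
```

A residual e would then be coded as `rice_encode(zigzag(e), k)`; the paper's contribution is a family of codes that is provably optimal for the two-sided geometric source itself, without this interleaving step.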
Error Modeling for Hierarchical Lossless Image Compression
 in Proc. Data Compression Conference
, 1992
Cited by 11 (3 self)
Abstract
We present a new method for error modeling applicable to the MLP algorithm for hierarchical lossless image compression. This method, based on a concept called the variability index, provides accurate models for pixel prediction errors without requiring explicit transmission of the models. We also use the variability index to show that prediction errors do not always follow the Laplace distribution, as is commonly assumed; replacing the Laplace distribution with a more general symmetric exponential distribution further improves compression. We describe ...
On Adaptive Strategies for an Extended Family of Golomb-type Codes
 in Proc. 1997 Data Compression Conference, Snowbird
, 1997
Cited by 10 (7 self)
Abstract
Off-centered, two-sided geometric distributions of the integers are often encountered in lossless image compression applications, as probabilistic models for prediction residuals. Based on a recent characterization of the family of optimal prefix codes for these distributions, which is an extension of the Golomb codes, we investigate adaptive strategies for their symbol-by-symbol prefix coding, as opposed to arithmetic coding. Our adaptive strategies allow for coding of prediction residuals at very low complexity. They provide a theoretical framework for the heuristic approximations frequently used when modifying the Golomb code, originally designed for one-sided geometric distributions of nonnegative integers, so as to apply to the encoding of any integer. Index Terms: image compression, adaptive coding, Golomb codes, geometric distribution, low complexity
1 Introduction. Predictive coding techniques [1] have become very widespread in lossless compression of continuous-tone...
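A representative low-complexity adaptive strategy of the kind this abstract refers to is the sequential parameter-selection rule used in LOCO-I/JPEG-LS-style coders: pick the smallest Rice parameter k for which the running count, scaled by 2**k, covers the running sum of residual magnitudes. This is a sketch of that well-known rule, offered as context; the paper analyzes such symbol-by-symbol strategies in a more general framework.

```python
def select_rice_k(total_abs, count, max_k=16):
    """Choose a Golomb-Rice parameter from running statistics:
    the smallest k with count * 2**k >= total_abs, where total_abs is
    the accumulated sum of residual magnitudes and count is the number
    of residuals seen so far (the LOCO-I/JPEG-LS-style heuristic)."""
    k = 0
    while (count << k) < total_abs and k < max_k:
        k += 1
    return k
```

Both encoder and decoder update the same counters after each symbol, so the parameter adapts without any side information being transmitted.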
Sequential prediction and ranking in universal context modeling and data compression
 IEEE Trans. Inform. Theory
, 1997
Cited by 7 (4 self)
Abstract
Prediction is one of the oldest and most successful tools in the data compression practitioner's toolbox. It is particularly useful in situations where the data (e.g., a digital image) originates from a natural physical process (e.g., sensed light), and the data samples (e.g., real numbers) represent a continuously varying physical magnitude (e.g., brightness). In these cases, the value of the next sample can often be accurately ...
Coding of Sources with Two-Sided Geometric Distributions and Unknown Parameters
, 1998
Cited by 5 (3 self)
Abstract
Lossless compression is studied for a countably infinite alphabet source with an unknown, off-centered, two-sided geometric (TSG) distribution, which is a commonly used statistical model for image prediction residuals. In this paper, we demonstrate that arithmetic coding based on a simple strategy of model adaptation essentially attains the theoretical lower bound to the universal coding redundancy associated with this model. We then focus on more practical codes for the TSG model, that operate on a symbol-by-symbol basis, and study the problem of adaptively selecting a code from a given discrete family. By taking advantage of the structure of the optimum Huffman tree for a known TSG distribution, which enables simple calculation of the codeword of every given source symbol, an efficient adaptive strategy is derived.
Gray-Level-Embedded Lossless Image Compression
, 2003
Cited by 4 (2 self)
Abstract
A level-embedded lossless compression method for continuous-tone still images is presented. Level (bit-plane) scalability is achieved by separating the image into two layers before compression, and excellent compression performance is obtained by exploiting both spatial and inter-level correlations. A comparison of the proposed scheme with a number of scalable and non-scalable lossless image compression algorithms is performed to benchmark its performance. The results indicate that the level-embedded compression incurs only a small penalty in compression efficiency over non-scalable lossless compression, while offering the significant benefit of level-scalability. Key words: level-scalability, CALIC, lossless compression, near-lossless compression. Preprint submitted to Elsevier Science, 29 January 2003.
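The two-layer separation behind level-embedded coding can be sketched as splitting each pixel into a coarse layer (high-order bits) and a refinement layer (low-order bits), with the split being exactly invertible so no information is lost. The layer boundary and function names below are illustrative assumptions, not the paper's exact scheme.

```python
def split_levels(pixel, low_bits=4):
    """Split a pixel into a coarse layer (high-order bits) and a
    refinement layer (low-order bits). Decoding only the coarse layer
    gives a reduced-level preview; adding the refinement layer restores
    the pixel exactly."""
    low = pixel & ((1 << low_bits) - 1)
    high = pixel >> low_bits
    return high, low

def merge_levels(high, low, low_bits=4):
    """Recombine the two layers losslessly."""
    return (high << low_bits) | low
```

Each layer is then compressed separately; the abstract's point is that coding the refinement layer conditioned on the already-decoded coarse layer (inter-level correlation) keeps the total rate close to that of a non-scalable coder.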