Results 1–7 of 7
Lossless compression of continuous-tone images
Proc. IEEE, 2000
Cited by 21 (3 self)
Abstract — In this paper, we survey some of the recent advances in lossless compression of continuous-tone images. The modeling paradigms underlying the state-of-the-art algorithms, and the principles guiding their design, are discussed in a unified manner. The algorithms are described and experimentally compared.
Some Simple Parametric Lossless Image Compressors
Proc. 2000 Int. Conf. Image Proc., Vancouver, IEEE, 2000
Cited by 2 (1 self)
This paper proposes lossless image compressors that are simpler than existing ones and yet still work well. The compressors process images in raster-scan order; to code a pixel, they first estimate that pixel's value using a linear function of already-coded pixels. Next the compressors estimate the uncertainty in the first estimate using a nonlinear function of already-coded pixels. Finally, based on these estimates, they select a discretized Laplacian with which an arithmetic coder represents the pixel. Alternatively, the compressors may select Golomb codewords based on the estimates, and thus directly represent the pixels. These compressors' rates come within 6 to 8% of CALIC [1], a highly effective image compressor. Another benefit is that a simple theoretical motivation exists for the chosen uncertainty estimators.
1. INTRODUCTION
Existing lossless image compressors [1–6] incorporate many good ideas, but not all of these are equally useful. By carefully choosing and refining ...
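The predict-then-code pipeline this abstract describes can be sketched in a few lines. The sketch below is illustrative, not the paper's exact formulation: the two-neighbor average predictor, the running mean-absolute-error as the uncertainty estimate, and the rule for picking the Golomb-Rice parameter are all assumptions made for the example.

```python
import numpy as np

def golomb_rice_length(value: int, k: int) -> int:
    """Bit length of the Golomb-Rice codeword for a non-negative value with
    parameter 2**k: unary quotient, one stop bit, k remainder bits."""
    return (value >> k) + 1 + k

def estimate_bits(img: np.ndarray) -> int:
    """Rough coded size, in bits, of an 8-bit grayscale image under a
    predict-then-Golomb scheme (illustrative only)."""
    h, w = img.shape
    total = 0
    ctx_err = 1.0  # running mean absolute prediction error (uncertainty proxy)
    for y in range(h):
        for x in range(w):
            west = int(img[y, x - 1]) if x > 0 else 0
            north = int(img[y - 1, x]) if y > 0 else 0
            pred = (west + north) // 2            # linear predictor
            err = int(img[y, x]) - pred
            mapped = 2 * abs(err) - (err < 0)     # fold signed error to non-negative
            k = max(0, int(np.log2(ctx_err + 1)))  # code parameter from uncertainty
            total += golomb_rice_length(mapped, k)
            ctx_err = 0.95 * ctx_err + 0.05 * abs(err)
    return total
```

On a flat image the prediction errors collapse to zero and the estimated rate is far lower than on noise, which is the behavior the uncertainty-driven code selection is meant to exploit.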
Cahill, "Adaptive combination of linear predictors for lossless image compression"
IEE Proc. Sci. Meas. Technol., 2000
Cited by 2 (1 self)
Lossless image coding is an essential requirement for medical imaging applications. Lossless image compression techniques usually have two major components: adaptive prediction and adaptive entropy coding. This paper is concerned with adaptive prediction. Recently, several researchers have studied prediction schemes in which the final prediction is formed by a combination of a group of subpredictors. In this paper, we present an overview of this new type of prediction technique. We show that the basic principle of adaptive predictor combination has been extensively studied and applied to many science and engineering problems. We then describe our combination scheme, which is based on the estimation of the local prediction error variance. Experimental results show that the compression performance of the algorithms that employ this new type of predictor is consistently better than that of state-of-the-art algorithms.
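A minimal sketch of the idea of combining sub-predictors by estimated local error variance: weights proportional to the inverse of each predictor's recent squared error. The window-based variance estimate and the exact weighting rule here are assumptions for illustration, not the paper's specific scheme.

```python
import numpy as np

def combined_prediction(preds, recent_errors, eps=1e-6):
    """Blend sub-predictor outputs with weights inversely proportional to each
    predictor's recent mean squared error (a local variance estimate).

    preds         : list of predictions from the sub-predictors
    recent_errors : list of recent error samples, one list per sub-predictor
    """
    var = np.array([np.mean(np.square(e)) + eps for e in recent_errors])
    weights = (1.0 / var) / np.sum(1.0 / var)   # normalized inverse-variance weights
    return float(np.dot(weights, preds)), weights
```

A predictor whose recent errors are small dominates the blend, so the combined prediction tracks whichever sub-predictor is locally most reliable.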
Compressing as Well as the Best Tiling of an Image
1999
Cited by 1 (1 self)
We investigate the task of compressing an image using different probability models for different regions of the image. In this task, using a larger number of regions would result in better compression of the coefficients of the image but would also require more bits for describing the regions and the probability models in the regions. We discuss using quadtrees and windowing for performing the compression and introduce the class of tilings of an image with a small number of arbitrarily sized rectangular tiles of probability models. For an image of size n × n, we give a sequential probability assignment algorithm with a computational complexity of O(Nn^3) and a redundancy of O(k log(Nn/k)) relative to the class of k-tile rectangular tilings of an image using N probability models. For the simpler class of tilings using rectangles with width no more than W, we give an algorithm with redundancy O(k log(Nn/k)) and a computational complexity of O(WNn^2). Another interesting class is the ...
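The trade-off the abstract describes — more tiles compress the coefficients better but cost more bits to describe — can be made concrete with a toy cost function. This is a sketch under stated assumptions: empirical zeroth-order entropy stands in for the per-tile model cost, and a k·log2(Nn/k) penalty mirrors the redundancy bound; it is not the paper's algorithm.

```python
import numpy as np

def empirical_entropy_bits(block):
    """Total bits to code a block at its empirical symbol distribution."""
    _, counts = np.unique(block, return_counts=True)
    p = counts / counts.sum()
    return float(-(counts * np.log2(p)).sum())

def tiling_cost_bits(img, tiles, n_models):
    """Cost of a candidate tiling: data bits (per-tile empirical entropy) plus
    a description penalty of roughly k*log2(N*n/k) bits for k tiles.

    tiles : list of (row0, row1, col0, col1) rectangles covering the image
    """
    k = len(tiles)
    n = img.shape[0]
    data_bits = sum(empirical_entropy_bits(img[r0:r1, c0:c1])
                    for r0, r1, c0, c1 in tiles)
    side_bits = k * np.log2(max(n_models * n / k, 2))
    return data_bits + side_bits
```

For an image made of two homogeneous halves, splitting into two matching tiles beats one global model despite the extra description bits, which is the balance the sequential probability assignment algorithm is competing against.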
NEW LINEAR PREDICTIVE METHODS FOR DIGITAL SPEECH PROCESSING
2001
Helsinki University of Technology, Department of Electrical and Communications Engineering, Laboratory of Acoustics and Audio Signal Processing
Contents
2009
This paper provides modal and relational characterisations of may- and must-testing preorders for recursive CSP processes with divergence, featuring probabilistic as well as nondeterministic choice. May testing is characterised in terms of simulation, and must testing in terms of failure simulation. To this end we develop weak transitions between ...
Some Simple Moment-Based Lossless Image Compressors
2000
This paper proposes a conceptually simple lossless image compressor constructed out of new combinations of existing image models. Specifically, for each pixel the compressor gives an arithmetic coder a discretized Laplacian pmf whose mean is a linear function of that pixel's neighbors, and whose standard deviation is a linear function of the absolute values of the errors from predicting those neighbors. Optionally, the compressor could also adapt the pmfs as coding progresses so that the pmfs more closely resemble the actual ones. The compression rates from these coders come within 6 or 7% of CALIC [1], a highly effective image compressor, while being simpler to implement. This paper also presents a simple error model that helps explain the just-described standard deviation estimator.
1 Introduction
Finding conceptually simple lossless image compressors is important because they're easier to implement in software and hardware, and also because paring models down to their essentials m...
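The per-pixel model this abstract describes — a discretized Laplacian whose mean comes from the neighbors and whose spread comes from the neighbors' absolute prediction errors — can be sketched as follows. The averaging used for the mean, the particular linear form of the scale, and the sample-and-renormalize discretization are assumptions for the example, not the paper's exact coefficients.

```python
import numpy as np

def discretized_laplacian_pmf(mean, scale, levels=256):
    """Pmf over pixel values 0..levels-1, obtained by sampling a Laplacian
    density at the integers and renormalizing (a common discretization shortcut)."""
    x = np.arange(levels)
    p = np.exp(-np.abs(x - mean) / max(scale, 1e-3))
    return p / p.sum()

def ideal_code_length(pixel, neighbors, neighbor_abs_errors):
    """Ideal arithmetic-coding cost in bits of one pixel: the pmf's mean is a
    linear function (here, the average) of the neighbors, and its scale is a
    linear function of the neighbors' absolute prediction errors."""
    mean = np.mean(neighbors)
    scale = 0.5 + np.mean(neighbor_abs_errors)  # illustrative linear form
    pmf = discretized_laplacian_pmf(mean, scale)
    return -np.log2(pmf[pixel])
```

A pixel close to the predicted mean falls in the high-probability peak of the Laplacian and costs few bits, while an outlier lands in the tail and costs many, which is exactly how the uncertainty estimate shapes the rate.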