Results 1-10 of 29
The Design and Analysis of Efficient Lossless Data Compression Systems
, 1993
"... Our thesis is that high compression efficiency for text and images can be obtained by using sophisticated statistical compression techniques, and that greatly increased speed can be achieved at only a small cost in compression efficiency. Our emphasis is on elegant design and mathematical as well as ..."
Abstract

Cited by 57 (0 self)
Our thesis is that high compression efficiency for text and images can be obtained by using sophisticated statistical compression techniques, and that greatly increased speed can be achieved at only a small cost in compression efficiency. Our emphasis is on elegant design and mathematical as well as empirical analysis. We analyze arithmetic coding as it is commonly implemented and show rigorously that almost no compression is lost in the implementation. We show that high-efficiency lossless compression of both text and grayscale images can be obtained by using appropriate models in conjunction with arithmetic coding. We introduce a four-component paradigm for lossless image compression and present two methods that give state-of-the-art compression efficiency. In the text compression area, we give a small improvement on the preferred method in the literature. We show that we can often obtain significantly improved throughput at the cost of slightly reduced compression. The extra speed c...
Practical Implementations of Arithmetic Coding
 IN IMAGE AND TEXT
, 1992
"... We provide a tutorial on arithmetic coding, showing how it provides nearly optimal data compression and how it can be matched with almost any probabilistic model. We indicate the main disadvantage of arithmetic coding, its slowness, and give the basis of a fast, spaceefficient, approximate arithmet ..."
Abstract

Cited by 40 (6 self)
We provide a tutorial on arithmetic coding, showing how it provides nearly optimal data compression and how it can be matched with almost any probabilistic model. We indicate the main disadvantage of arithmetic coding, its slowness, and give the basis of a fast, space-efficient, approximate arithmetic coder with only minimal loss of compression efficiency. Our coder is based on the replacement of arithmetic by table lookups coupled with a new deterministic probability estimation scheme.
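The interval-narrowing idea behind arithmetic coding can be sketched as follows. This is a minimal illustration with an assumed fixed order-0 model and an illustrative message, not the authors' table-lookup coder:

```python
# Minimal sketch of arithmetic coding's interval narrowing, assuming a
# fixed order-0 model; the probabilities and message are illustrative.
from fractions import Fraction

# Hypothetical model: each symbol owns a half-open slice of [0, 1).
probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
cum = {}
run = Fraction(0)
for sym, p in probs.items():
    cum[sym] = (run, run + p)
    run += p

def encode_interval(message):
    """Narrow [low, high) once per symbol; any number in the final
    interval identifies the message (given the model and its length)."""
    low, high = Fraction(0), Fraction(1)
    for sym in message:
        span = high - low
        s_low, s_high = cum[sym]
        low, high = low + span * s_low, low + span * s_high
    return low, high

low, high = encode_interval("aab")
# The final width equals the product of the symbol probabilities, so
# -log2(width) is the ideal code length in bits.
print(low, high, high - low)
```

A practical coder works with fixed-precision integers and renormalization rather than exact fractions; the paper's contribution is replacing even that arithmetic with table lookups.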
Adaptive Scalar Quantization without Side Information
 IEEE Trans. Image Proc
, 1997
"... In this paper, we introduce a novel technique for adaptive scalar quantization. Adaptivity is useful in applications, including image compression, where the statistics of the source are either not known a priori or will change over time. Our algorithm uses previously quantized samples to estimate th ..."
Abstract

Cited by 22 (4 self)
In this paper, we introduce a novel technique for adaptive scalar quantization. Adaptivity is useful in applications, including image compression, where the statistics of the source are either not known a priori or will change over time. Our algorithm uses previously quantized samples to estimate the distribution of the source, and does not require that side information be sent in order to adapt to changing source statistics. Our quantization scheme is thus backward adaptive. We propose that an adaptive quantizer can be separated into two building blocks, namely, model estimation and quantizer design. The model estimation produces an estimate of the changing source probability density function, which is then used to redesign the quantizer using standard techniques. We introduce nonparametric estimation techniques that only assume smoothness of the input distribution. We discuss the various sources of error in our estimation and argue that, for a wide class of sources with a smooth probability density function (pdf), we provide a good approximation to a "universal" quantizer, with the approximation becoming better as the rate increases. We study the performance of our scheme and show how the loss due to adaptivity is minimal in typical scenarios. In particular, we provide examples and show how our technique can achieve signal-to-noise ratios (SNRs) within 0.05 dB of the optimal Lloyd-Max quantizer (LMQ) for a memoryless source, while achieving over 1.5 dB gain over a fixed quantizer for a bimodal source.
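The backward-adaptive principle, adapting from quantized outputs only so the decoder can mirror every update without side information, can be sketched with a classic Jayant-style step-size adapter. This is an assumed illustration of the principle, not the paper's nonparametric pdf-estimation algorithm; the multiplier values are arbitrary:

```python
# Sketch of a backward-adaptive scalar quantizer (Jayant-style step
# adaptation). Multipliers and level count are illustrative, not the
# paper's method: it adapts from quantizer *outputs* only, so a decoder
# running the same rule stays in sync with no side information.
def jayant_quantize(samples, step=1.0, levels=4):
    """Midrise quantizer whose step size is scaled by an index-dependent
    multiplier after every sample: shrink for inner (small) indices,
    grow for outer (large) ones."""
    mult = [0.9, 0.9, 1.25, 1.6][:levels]
    indices, recon = [], []
    for x in samples:
        mag = min(levels - 1, int(abs(x) / step))   # magnitude bin
        sign = 1 if x >= 0 else -1
        indices.append(sign * mag)
        recon.append(sign * (mag + 0.5) * step)     # midrise reconstruction
        step *= mult[mag]                           # backward adaptation
    return indices, recon

idx, rec = jayant_quantize([0.2, 3.0, -0.1])
```

The decoder recomputes `rec` and the step trajectory from `idx` alone, which is exactly what "without side information" buys.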
Adaptive quantization without side information
 in Proc. IEEE Int. Conf. Image Processing
, 1994
"... ..."
Lossless Compression for Text and Images
 International Journal of High Speed Electronics and Systems
, 1995
"... Most data that is inherently discrete needs to be compressed in such a way that it can be recovered exactly, without any loss. Examples include text of all kinds, experimental results, and statistical databases. Other forms of data may need to be stored exactly, such as imagesparticularly bilevel ..."
Abstract

Cited by 10 (0 self)
Most data that is inherently discrete needs to be compressed in such a way that it can be recovered exactly, without any loss. Examples include text of all kinds, experimental results, and statistical databases. Other forms of data may need to be stored exactly, such as images, particularly bilevel ones, or ones arising in medical and remote-sensing applications, or ones that may be required to be certified true for legal reasons. Moreover, during the process of lossy compression, many occasions for lossless compression of coefficients or other information arise. This paper surveys techniques for lossless compression. The process of compression can be broken down into modeling and coding. We provide an extensive discussion of coding techniques, and then introduce methods of modeling that are appropriate for text and images. Standard methods used in popular utilities (in the case of text) and international standards (in the case of images) are described. Keywords: Text compression, ima...
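The modeling/coding split mentioned in the abstract can be made concrete in a few lines: a model assigns probabilities, and an ideal coder (which arithmetic coding approaches) spends -log2(p) bits per symbol. A minimal sketch with an assumed order-0 model and an illustrative string:

```python
# Sketch of the modeling/coding separation: the model supplies symbol
# probabilities; the coder's cost is then -log2(p) bits per symbol.
# The order-0 model and the sample text are illustrative.
from collections import Counter
from math import log2

def order0_model(text):
    """Order-0 model: per-symbol relative frequencies."""
    counts = Counter(text)
    total = len(text)
    return {sym: c / total for sym, c in counts.items()}

def ideal_code_length(text, model):
    """Bits an ideal entropy coder would spend under this model."""
    return sum(-log2(model[sym]) for sym in text)

text = "abracadabra"
model = order0_model(text)
print(ideal_code_length(text, model))  # order-0 entropy times len(text)
```

Better models (higher-order contexts, as in the text and image methods the survey covers) lower the probabilities' surprise and hence the total code length, while the coder itself stays unchanged.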
Pattern Matching in Compressed Text and Images
, 2001
"... Normally compressed data needs to be decompressed before it is processed, but if the compression has been done in the fight way, it is often possible to search the data without having to decompress it, or at least only partially decompress it. The problem can be divided into lossless and lossy c ..."
Abstract

Cited by 9 (8 self)
Normally compressed data needs to be decompressed before it is processed, but if the compression has been done in the right way, it is often possible to search the data without having to decompress it, or at least only partially decompress it. The problem can be divided into lossless and lossy compression methods, and then in each of these cases the pattern matching can be either exact or inexact. Much work has been reported in the literature on techniques for all of these cases, including algorithms that are suitable for pattern matching for various compression methods, and compression methods designed specifically for pattern matching. This work is surveyed in this paper. The paper also exposes the important relationship between pattern matching and compression, and proposes some performance measures for compressed pattern matching algorithms. Ideas and directions for future work are also described.
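The "partially decompress" idea can be illustrated with the simplest compression method, run-length encoding (chosen here for brevity, not taken from the survey): the search decodes one run at a time and keeps only a pattern-sized tail, never materializing the whole text.

```python
# Sketch of exact pattern matching over run-length-encoded text with
# only partial decompression. RLE is an illustrative choice; the survey
# covers far richer schemes (LZ variants, etc.).
def rle_search(runs, pattern):
    """runs: list of (char, count) pairs. Decode lazily, retaining just
    len(pattern) - 1 trailing characters so matches that straddle run
    boundaries are still found."""
    window = ""
    for ch, count in runs:
        window += ch * count
        if pattern in window:
            return True
        # Keep only enough context to span a future cross-run match.
        window = window[-(len(pattern) - 1):] if len(pattern) > 1 else ""
    return False

runs = [("a", 3), ("b", 1), ("a", 2)]   # decodes to "aaabaa"
```

The buffer never exceeds one run plus a pattern-length tail, which is the point: search cost tracks the compressed representation, not the decoded text.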
Document Image Compression and Analysis
 PhD thesis, University of Maryland
, 1997
"... Image compression usually considers the minimization of storage space as its main objective. It is desirable, however, to code images so that we have the ability to process the resulting representation directly. In this thesis we explore an approach to document image compression that is efficient in ..."
Abstract

Cited by 8 (1 self)
Image compression usually considers the minimization of storage space as its main objective. It is desirable, however, to code images so that we have the ability to process the resulting representation directly. In this thesis we explore an approach to document image compression that is efficient in both space (storage requirement) and time (processing flexibility). A representation is presented in which component-level redundancy is removed by forming a prototype library and component location table. This representation forms a basis for compression and provides direct access to image components. To generate the prototype library, a new clustering approach is developed which is suitable for document image components. The distance metric is based on a character degradation model so that degraded versions of the same character will be grouped together. To achieve a lossless representation when required, the residuals are encoded efficiently using a structural distance ordering. OCR is...
Universal finite memory coding of binary sequences
 Master’s thesis, Dept. Elec. Eng.–Syst., Tel-Aviv Univ
, 2000
"... This whole work was made possible due to the devoted guidance and support of Prof. Meir Feder, my supervisor. His enthusiasm and ideas inspired me during this research, and I thank him for that. Also, I would like to thank my family for their love and support, specially, my wife, Ofira. i This work ..."
Abstract

Cited by 6 (0 self)
This whole work was made possible due to the devoted guidance and support of Prof. Meir Feder, my supervisor. His enthusiasm and ideas inspired me during this research, and I thank him for that. Also, I would like to thank my family for their love and support, especially my wife, Ofira. This work considers the problem of universal coding of binary sequences, where the universal encoder has limited memory. Universal coding refers to a situation where a single, universal encoder can achieve the optimal performance for a large class of models or data sequences, without knowing the model in advance and without tuning the encoder to the data. In previous work on universal coding, specific universal machines, whose performance attained the theoretical limits, were suggested. However, these machines require an unlimited amount of memory. This work
On the Use of Hough Transform for Context-based Image Compression in Hybrid Raster/Vector Applications
, 2000
"... In a hybrid raster/vector system, two representations of the image are stored. Digitized raster image preserves the original drawing in its exact visual form, whereas additional vector data can be used for resolutionindependent reproduction, image editing, analysis and indexing operations. We intro ..."
Abstract

Cited by 3 (1 self)
In a hybrid raster/vector system, two representations of the image are stored. The digitized raster image preserves the original drawing in its exact visual form, whereas additional vector data can be used for resolution-independent reproduction, image editing, analysis, and indexing operations. We introduce two techniques for utilizing the vector features in context-based compression of the raster image. In both techniques, the Hough transform is used for extracting the line features from the raster image. The first technique utilizes the line features to improve the prediction accuracy in the context modeling. The second technique uses a feature-based filter for removing noise near the borders of the extracted line elements. This improves the image quality and results in a more compressible raster image. In both cases, we achieve better compression performance.
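The line-extraction step underlying both techniques is the standard Hough transform: each foreground pixel votes for every (rho, theta) line that could pass through it, and accumulator peaks identify lines. A minimal sketch on an assumed toy point set (parameters are illustrative, not the paper's):

```python
# Minimal Hough-transform sketch for extracting line features from a
# binary raster. Bin counts and the toy point set are illustrative.
from math import cos, sin, pi

def hough_accumulate(points, n_theta=180, rho_step=1.0):
    """Vote each foreground pixel (x, y) into sparse (rho_bin, theta_bin)
    accumulator cells; peak cells correspond to extracted lines, using
    the normal form rho = x*cos(theta) + y*sin(theta)."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = t * pi / n_theta
            rho = x * cos(theta) + y * sin(theta)
            key = (round(rho / rho_step), t)
            acc[key] = acc.get(key, 0) + 1
    return acc

# Pixels on the vertical line x = 5: the theta = 0 cell at rho = 5
# collects a vote from every pixel.
points = [(5, y) for y in range(10)]
acc = hough_accumulate(points)
```

In the compression setting, the peak cells give the line features that either steer the context model's prediction or drive the feature-based denoising filter.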