Results 1–10 of 95
An Image Multiresolution Representation for Lossless and Lossy Compression
 IEEE Transactions on Image Processing
, 1996
Cited by 183 (11 self)
We propose a new image multiresolution transform that is suited for both lossless (reversible) and lossy compression. The new transformation is similar to the subband decomposition, but can be computed with only integer addition and bit-shift operations. During its calculation, the number of bits required to represent the transformed image is kept small through careful scaling and truncations. Numerical results show that the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity. In addition, we propose entropy-coding methods that exploit the multiresolution structure and can efficiently compress the transformed image for progressive transmission (up to exact recovery). The lossless compression ratios are among the best in the literature, and at the same time the rate vs. distortion performance is comparable to that of the most efficient lossy compression methods.
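The integer-addition-and-bit-shift computation described above can be illustrated with a minimal reversible pairwise transform, a sketch in the spirit of the classical S-transform; the function names and the pairing of samples are illustrative assumptions, not the authors' exact algorithm:

```python
# Reversible integer transform sketch: each (a, b) pair maps to a
# truncated average (low band) and a difference (high band) using only
# integer addition/subtraction and bit shifts, and inverts exactly.

def forward(pairs):
    """Map (a, b) pairs to (low, high) bands with integer ops only."""
    low, high = [], []
    for a, b in pairs:
        h = a - b              # detail (high-pass) value
        l = b + (h >> 1)       # truncated average: floor((a + b) / 2)
        low.append(l)
        high.append(h)
    return low, high

def inverse(low, high):
    """Exactly recover the original (a, b) pairs."""
    pairs = []
    for l, h in zip(low, high):
        b = l - (h >> 1)
        a = b + h
        pairs.append((a, b))
    return pairs
```

Because `h >> 1` is the same floor operation in both directions, no information is lost even though the low band stores a truncated average.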
Context-Based Adaptive Binary Arithmetic Coding in the H.264/AVC Video Compression Standard
 IEEE Transactions on Circuits and Systems for Video Technology
, 2003
Cited by 132 (9 self)
Context-based adaptive binary arithmetic coding (CABAC) as a normative part of the new ITU-T/ISO/IEC standard H.264/AVC for video compression is presented. By combining an adaptive binary arithmetic coding technique with context modeling, a high degree of adaptation and redundancy reduction is achieved. The CABAC framework also includes a novel low-complexity method for binary arithmetic coding and probability estimation that is well suited for efficient hardware and software implementations. CABAC significantly outperforms the baseline entropy coding method of H.264/AVC for the typical range of envisaged target applications. For a set of test sequences representing typical material used in broadcast applications, and for a range of acceptable video quality of about 30 to 38 dB, average bit-rate savings of 9%–14% are achieved. Index Terms—Binary arithmetic coding, CABAC, context modeling, entropy coding, H.264, MPEG-4 AVC.
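CABAC's actual probability-state machine, binarization, and renormalization tables are fixed by the standard; the sketch below is only a generic adaptive binary arithmetic coder in the same spirit, using a count-based context model in place of CABAC's 64-state estimator (all names and constants are illustrative):

```python
MASK, HALF, QUARTER = 0xFFFFFFFF, 0x80000000, 0x40000000

class AdaptiveBitModel:
    """Count-based P(bit=0) estimate for one context (a stand-in for
    CABAC's table-driven state machine)."""
    def __init__(self):
        self.c0, self.c1 = 1, 1
    def p0(self):                      # probability of 0, scaled to 16 bits
        return (self.c0 << 16) // (self.c0 + self.c1)
    def update(self, bit):
        if bit: self.c1 += 1
        else:   self.c0 += 1
        if self.c0 + self.c1 > 4096:   # periodic rescale bounds the counts
            self.c0 = (self.c0 + 1) >> 1
            self.c1 = (self.c1 + 1) >> 1

class Encoder:
    def __init__(self):
        self.low, self.high, self.pending, self.out = 0, MASK, 0, []
    def _emit(self, bit):              # flush a bit plus pending complements
        self.out.append(bit)
        self.out.extend([bit ^ 1] * self.pending)
        self.pending = 0
    def encode(self, bit, p0):
        span = self.high - self.low + 1
        split = self.low + ((span * p0) >> 16) - 1
        if bit: self.low = split + 1
        else:   self.high = split
        while True:                    # renormalize the 32-bit interval
            if self.high < HALF:       self._emit(0)
            elif self.low >= HALF:     self._emit(1)
            elif self.low >= QUARTER and self.high < HALF + QUARTER:
                self.pending += 1
                self.low -= QUARTER
                self.high -= QUARTER
            else:
                break
            self.low = (self.low << 1) & MASK
            self.high = ((self.high << 1) & MASK) | 1
    def finish(self):
        self.pending += 1
        self._emit(0 if self.low < QUARTER else 1)
        return self.out

class Decoder:
    def __init__(self, bits):
        self.bits = list(bits) + [0] * 64   # zero padding for the tail
        self.low, self.high, self.pos, self.code = 0, MASK, 0, 0
        for _ in range(32):
            self.code = (self.code << 1) | self.bits[self.pos]
            self.pos += 1
    def decode(self, p0):
        span = self.high - self.low + 1
        split = self.low + ((span * p0) >> 16) - 1
        bit = 0 if self.code <= split else 1
        if bit: self.low = split + 1
        else:   self.high = split
        while True:                    # mirror the encoder's renormalization
            if self.high < HALF or self.low >= HALF:
                pass
            elif self.low >= QUARTER and self.high < HALF + QUARTER:
                self.low -= QUARTER; self.high -= QUARTER; self.code -= QUARTER
            else:
                break
            self.low = (self.low << 1) & MASK
            self.high = ((self.high << 1) & MASK) | 1
            self.code = ((self.code << 1) & MASK) | self.bits[self.pos]
            self.pos += 1
        return bit

# Round trip: the decoder tracks the encoder's model updates in lockstep.
msg = [1, 1, 0, 1, 0, 0, 0, 1] * 8
enc, ctx = Encoder(), AdaptiveBitModel()
for b in msg:
    enc.encode(b, ctx.p0())
    ctx.update(b)
stream = enc.finish()

dec, ctx = Decoder(stream), AdaptiveBitModel()
decoded = []
for _ in range(len(msg)):
    b = dec.decode(ctx.p0())
    ctx.update(b)
    decoded.append(b)
```

The key point the abstract makes carries over even to this toy: adaptation and compression come from the context model, while the coder itself is a fixed interval-narrowing loop.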
An overview of the JPEG2000 still image compression standard
 Signal Processing: Image Communication
, 2002
Cited by 86 (0 self)
In 1996, the JPEG committee began to investigate possibilities for a new still image compression standard to serve current and future applications. This initiative, named JPEG2000, has resulted in a comprehensive standard (ISO 15444 | ITU-T Recommendation T.800) that is being issued in six parts. Part 1, in the same vein as the JPEG baseline system, is aimed at minimal complexity and maximal interchange and was issued as an International Standard at the end of 2000. Parts 2–6 define extensions to both the compression technology and the file format and are currently in various stages of development. In this paper, a technical description of Part 1 of the JPEG2000 standard is provided, and the rationale behind the selected technologies is explained. Although the JPEG2000 standard only specifies the decoder and the codestream syntax, the discussion spans both encoder and decoder issues to provide a better …
High Quality Document Image Compression with DjVu
 Journal of Electronic Imaging
, 1998
Cited by 79 (12 self)
We present a new image compression technique called "DjVu" that is specifically geared towards the compression of high-resolution, high-quality images of scanned documents in color. This enables fast transmission of document images over low-speed connections, while faithfully reproducing the visual aspect of the document, including color, fonts, pictures, and paper texture. The DjVu compressor separates the text and drawings, which need a high spatial resolution, from the pictures and backgrounds, which are smoother and can be coded at a lower spatial resolution. Then, several novel techniques are used to maximize the compression ratio: the bi-level foreground image is encoded with AT&T's proposal to the new JBIG2 fax standard, and a new wavelet-based compression method is used for the backgrounds and pictures. Both techniques use a new adaptive binary arithmetic coder called the Z-coder. A typical magazine page in color at 300 dpi can be compressed down to between 40 and 60 KB, approx…
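The foreground/background separation described above can be caricatured in a few lines: threshold a grayscale page into a bi-level mask kept at full resolution, and a smoothed, subsampled background. Real DjVu uses a far more sophisticated classifier; the fixed threshold, paper value, and 2× subsampling here are purely illustrative assumptions:

```python
# Toy mixed-raster separation: page is a 2-D list of grayscale values
# (0 = black ink, 255 = white paper).

def separate(page, threshold=128):
    """Return (bi-level foreground mask, subsampled background)."""
    # Mask: 1 where the pixel is dark enough to count as text/drawing.
    mask = [[1 if px < threshold else 0 for px in row] for row in page]
    # Background: paint over masked pixels with a paper value, then
    # subsample 2x in each direction (it can tolerate lower resolution).
    paper = 255
    bg_full = [[paper if m else px for px, m in zip(row, mrow)]
               for row, mrow in zip(page, mask)]
    background = [row[::2] for row in bg_full[::2]]
    return mask, background
```

In the real system the mask would go to the JBIG2-style bi-level coder and the background to the wavelet coder, each matched to its layer's statistics.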
The Design and Analysis of Efficient Lossless Data Compression Systems
, 1993
Cited by 53 (0 self)
Our thesis is that high compression efficiency for text and images can be obtained by using sophisticated statistical compression techniques, and that greatly increased speed can be achieved at only a small cost in compression efficiency. Our emphasis is on elegant design and on mathematical as well as empirical analysis. We analyze arithmetic coding as it is commonly implemented and show rigorously that almost no compression is lost in the implementation. We show that high-efficiency lossless compression of both text and grayscale images can be obtained by using appropriate models in conjunction with arithmetic coding. We introduce a four-component paradigm for lossless image compression and present two methods that give state-of-the-art compression efficiency. In the text compression area, we give a small improvement on the preferred method in the literature. We show that we can often obtain significantly improved throughput at the cost of slightly reduced compression. The extra speed c…
Szeliski, R.: A layered video object coding system using sprite and affine motion model
 IEEE Transactions on Circuits and Systems for Video Technology
, 1997
Cited by 53 (3 self)
Abstract—A layered video object coding system is presented in this paper. The goal is to improve video coding efficiency by exploiting the layering of video and to support content-based functionality. These two objectives are accomplished using a sprite technique and an affine motion model on a per-object basis. Several novel algorithms have been developed for mask processing and coding, trajectory coding, sprite accretion and coding, locally affine motion compensation, error signal suppression, and image padding. Compared with conventional frame-based coding methods, better experimental results on both hybrid and natural scenes have been obtained using our coding scheme. We also demonstrate content-based functionality which can be easily achieved in our system. Index Terms—Affine motion model, image padding, layered video object coding, MPEG-4, scalability, shape coding, sprite coding.
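The per-object affine motion model mentioned above maps each sprite coordinate to a predicted position in the current frame using six parameters. A minimal sketch (the parameter layout `(a, b, c, d, e, f)` is an assumption, not the paper's notation):

```python
def affine_warp(x, y, params):
    """Map a sprite coordinate (x, y) through a six-parameter affine
    motion model: x' = a*x + b*y + c, y' = d*x + e*y + f."""
    a, b, c, d, e, f = params
    return (a * x + b * y + c, d * x + e * y + f)

# Pure translation by (5, -2) is the special case (1, 0, 5, 0, 1, -2);
# rotation by 90 degrees about the origin is (0, -1, 0, 1, 0, 0).
```

Because the six parameters are shared by every pixel of an object, the model captures translation, rotation, scaling, and shear of the whole layer at a cost of one small parameter set per object per frame.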
Analysis of Arithmetic Coding for Data Compression
 Information Processing and Management
, 1992
Cited by 38 (6 self)
Arithmetic coding, in conjunction with a suitable probabilistic model, can provide nearly optimal data compression. In this article we analyze the effect that the model and the particular implementation of arithmetic coding have on the code length obtained. Periodic scaling is often used in arithmetic coding implementations to reduce time and storage requirements; it also introduces a recency effect which can further affect compression. Our main contribution is the concept of weighted entropy, which we use to characterize in an elegant way the effect that periodic scaling has on the code length. We explain why and by how much scaling increases the code length for files with a homogeneous distribution of symbols, and we characterize the reduction in code length due to scaling for files exhibiting locality of reference. We also give a rigorous proof that the coding effects of rounding scaled weights, using integer arithmetic, and encoding end-of-file are negligible.
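The recency effect discussed above can be demonstrated numerically by computing the ideal adaptive code length, the sum of −log2 p(symbol), for a binary sequence with strong locality, with and without periodic count halving. This is only an illustration of the effect, not the paper's weighted-entropy analysis; the scaling threshold and Laplace-style count model are assumptions:

```python
import math

def adaptive_code_length(seq, scale_limit=None):
    """Ideal adaptive code length in bits for a 0/1 sequence under a
    count-based model, optionally halving counts past scale_limit."""
    c = {0: 1, 1: 1}                 # Laplace-style initial counts
    bits = 0.0
    for s in seq:
        bits += -math.log2(c[s] / (c[0] + c[1]))
        c[s] += 1
        if scale_limit and c[0] + c[1] > scale_limit:
            c[0] = (c[0] + 1) // 2   # periodic scaling: halve the counts,
            c[1] = (c[1] + 1) // 2   # keeping each at least 1
    return bits

# A sequence with locality of reference: a long run of 0s, then 1s.
seq = [0] * 200 + [1] * 200
```

With scaling, the model forgets the stale zero counts quickly once the ones begin, so the scaled code length is noticeably shorter on this sequence, matching the locality-of-reference result described in the abstract.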
Practical Implementations of Arithmetic Coding
 Image and Text Compression
, 1992
Cited by 35 (6 self)
We provide a tutorial on arithmetic coding, showing how it provides nearly optimal data compression and how it can be matched with almost any probabilistic model. We indicate the main disadvantage of arithmetic coding, its slowness, and give the basis of a fast, space-efficient, approximate arithmetic coder with only minimal loss of compression efficiency. Our coder is based on the replacement of arithmetic by table lookups, coupled with a new deterministic probability estimation scheme.
A CMOS Area Image Sensor With Pixel Level A/D Conversion
 ISSCC Digest of Technical Papers
, 1995
Cited by 31 (7 self)
A CMOS 64 × 64 pixel area image sensor chip using Sigma-Delta modulation at each pixel for A/D conversion is described. The image data output is digital. The chip was fabricated using a 1.2-µm two-layer-metal, single-layer-poly, n-well CMOS process. Each pixel block consists of a phototransistor and 22 MOS transistors. Test results demonstrate a dynamic range potentially greater than 93 dB, a signal-to-noise ratio (SNR) of up to 61 dB, and power dissipation of less than 1 mW with a 5-V power supply.
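The per-pixel Sigma-Delta conversion works by oversampling: the loop emits a one-bit stream whose density of 1s tracks the input level. A behavioral first-order sketch (not a circuit model; the normalization of the photocurrent to [0, 1] and the function name are assumptions):

```python
def sigma_delta(level, n):
    """First-order Sigma-Delta loop: modulate a constant input level in
    [0, 1] into n output bits.  The integrator accumulates the input;
    each time it crosses the comparator threshold, a 1 is emitted and
    the one-bit feedback subtracts it back out."""
    integ = 0.0
    bits = []
    for _ in range(n):
        integ += level          # integrate the (normalized) input
        if integ >= 1.0:        # comparator fires: emit 1, feed back
            bits.append(1)
            integ -= 1.0
        else:
            bits.append(0)
    return bits
```

Averaging (decimating) the bitstream off-chip recovers the pixel value, which is why each pixel only needs the handful of transistors the abstract mentions rather than a full multi-bit ADC.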