Results 1–10 of 23
Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard. Circuits and Systems for Video Technology, IEEE Transactions on
Abstract

Cited by 207 (13 self)
(CABAC) as a normative part of the new ITU-T/ISO/IEC standard H.264/AVC for video compression is presented. By combining an adaptive binary arithmetic coding technique with context modeling, a high degree of adaptation and redundancy reduction is achieved. The CABAC framework also includes a novel low-complexity method for binary arithmetic coding and probability estimation that is well suited for efficient hardware and software implementations. CABAC significantly outperforms the baseline entropy coding method of H.264/AVC for the typical area of envisaged target applications. For a set of test sequences representing typical material used in broadcast applications and for a range of acceptable video quality of about 30 to 38 dB, average bit-rate savings of 9%–14% are achieved. Index Terms—Binary arithmetic coding, CABAC, context modeling, entropy coding, H.264, MPEG-4 AVC.
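As a toy illustration of the context-modeling half of the idea (not the standard's normative engine, which uses a finite-state probability estimator and a multiplication-free coder), the sketch below conditions a simple count-based binary model on the previous bit and reports the ideal arithmetic-coding cost:

```python
import math

class ContextModel:
    """Adaptive binary model: one Laplace-smoothed counter pair per context."""
    def __init__(self):
        self.counts = {}  # context -> [occurrences of 0, occurrences of 1]

    def prob(self, ctx, bit):
        c0, c1 = self.counts.get(ctx, [1, 1])
        return (c1 if bit else c0) / (c0 + c1)

    def update(self, ctx, bit):
        c = self.counts.setdefault(ctx, [1, 1])
        c[bit] += 1

def ideal_code_length(bits, order=1):
    """Bits an ideal arithmetic coder would emit with this context model,
    i.e. the sum of -log2 p(bit | previous `order` bits)."""
    model = ContextModel()
    total = 0.0
    for i, b in enumerate(bits):
        ctx = tuple(bits[max(0, i - order):i])
        total += -math.log2(model.prob(ctx, b))
        model.update(ctx, b)
    return total

# A strongly structured bit stream: the context model drives the average cost
# far below 1 bit per symbol, which a context-free model could not do here.
bits = [0, 1] * 200
print(ideal_code_length(bits) / len(bits))
```

Conditioning on context is what buys the redundancy reduction; the coder itself only has to realize the model's probabilities.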
Analysis of Arithmetic Coding for Data Compression
 INFORMATION PROCESSING AND MANAGEMENT
, 1992
Abstract

Cited by 43 (6 self)
Arithmetic coding, in conjunction with a suitable probabilistic model, can provide nearly optimal data compression. In this article we analyze the effect that the model and the particular implementation of arithmetic coding have on the code length obtained. Periodic scaling is often used in arithmetic coding implementations to reduce time and storage requirements; it also introduces a recency effect which can further affect compression. Our main contribution is introducing the concept of weighted entropy and using it to characterize in an elegant way the effect that periodic scaling has on the code length. We explain why and by how much scaling increases the code length for files with a homogeneous distribution of symbols, and we characterize the reduction in code length due to scaling for files exhibiting locality of reference. We also give a rigorous proof that the coding effects of rounding scaled weights, using integer arithmetic, and encoding end-of-file are negligible.
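The periodic scaling discussed above can be sketched as a count-halving rule (an illustrative scheme with an arbitrary threshold, not the paper's exact implementation); halving discounts old observations, which is the recency effect the analysis characterizes:

```python
from collections import defaultdict

def adaptive_counts(stream, max_total=32):
    """Symbol counts with periodic halving, as in scaled arithmetic-coding models.

    The threshold 32 is an illustrative choice; real coders pick it to bound
    the precision needed for the frequency table."""
    counts = defaultdict(lambda: 1)  # every seen symbol starts at count 1
    for sym in stream:
        counts[sym] += 1
        if sum(counts.values()) > max_total:
            for s in counts:
                # Halving ages old statistics: recent symbols now dominate.
                counts[s] = max(1, counts[s] // 2)
    return dict(counts)

# A source that switches from all-'a' to all-'b': scaling lets the model
# forget the 'a' phase and track the new statistics.
print(adaptive_counts('a' * 100 + 'b' * 100))
```

On a homogeneous file this forgetting costs a little code length; on a file with locality of reference it gains, exactly the trade-off the weighted-entropy analysis quantifies.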
Practical Implementations of Arithmetic Coding
 IN IMAGE AND TEXT
, 1992
Abstract

Cited by 41 (6 self)
We provide a tutorial on arithmetic coding, showing how it provides nearly optimal data compression and how it can be matched with almost any probabilistic model. We indicate the main disadvantage of arithmetic coding, its slowness, and give the basis of a fast, space-efficient, approximate arithmetic coder with only minimal loss of compression efficiency. Our coder is based on the replacement of arithmetic by table lookups coupled with a new deterministic probability estimation scheme.
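The flavor of deterministic, table-driven probability estimation can be shown with a small finite-state estimator (an illustrative construction, not the paper's scheme; the 16-state size is arbitrary):

```python
N_STATES = 16  # illustrative table size

def build_estimator(n=N_STATES):
    """Precompute a finite-state probability estimator.

    State i estimates P(bit = 1) = (i + 0.5) / n; observing a bit moves the
    state one step toward the matching end. All arithmetic happens here, at
    build time; estimation at coding time is pure table lookup."""
    probs = [(i + 0.5) / n for i in range(n)]
    next_on_0 = [max(0, i - 1) for i in range(n)]
    next_on_1 = [min(n - 1, i + 1) for i in range(n)]
    return probs, next_on_0, next_on_1

probs, next_on_0, next_on_1 = build_estimator()
state = N_STATES // 2  # start with no bias
for bit in [1, 1, 1, 1, 0, 1, 1]:
    state = next_on_1[state] if bit else next_on_0[state]
print(probs[state])  # the estimate has drifted toward 1 after mostly-1 input
```

Replacing per-symbol arithmetic with precomputed transitions is the same engineering trade the abstract describes: a little compression efficiency for much cheaper coding.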
On-Line Stochastic Processes in Data Compression
, 1996
Abstract

Cited by 16 (6 self)
The ability to predict the future based upon the past in finite-alphabet sequences has many applications, including communications, data security, pattern recognition, and natural language processing. By Shannon's theory and the breakthrough development of arithmetic coding, any sequence $a_1 a_2 \cdots a_n$ can be encoded in a number of bits that is essentially equal to the minimal information-lossless code length, $\sum_i -\log_2 p(a_i \mid a_1 \cdots a_{i-1})$. The goal of universal on-line modeling, and therefore of universal data compression, is to deduce the model of the input sequence $a_1 a_2 \cdots a_n$ that can estimate each $p(a_i \mid a_1 \cdots a_{i-1})$ knowing only $a_1 \cdots a_{i-1}$, so that the ex...
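The quoted code length can be checked directly: arithmetic coding narrows an interval by a factor $p(a_i \mid a_1 \cdots a_{i-1})$ per symbol, so the negative log of the final width is exactly the sum above. A minimal sketch (the memoryless source and its probabilities are hypothetical):

```python
import math

def interval_code_length(symbols, cond_prob):
    """-log2 of the final arithmetic-coding interval width for `symbols`.

    Each symbol narrows the interval by a factor p(a_i | a_1..a_{i-1}), so
    the result equals sum_i -log2 p(a_i | a_1..a_{i-1}). (Interval offsets
    are omitted: only the width matters for the code length.)"""
    width = 1.0
    for i, sym in enumerate(symbols):
        width *= cond_prob(sym, symbols[:i])
    return -math.log2(width)

# Hypothetical memoryless source: P('a') = 0.9, P('b') = 0.1.
p = lambda sym, past: 0.9 if sym == 'a' else 0.1
print(interval_code_length('aaaaaaaab', p))  # = 8*(-log2 0.9) + (-log2 0.1)
```

A practical coder pays at most a couple of extra bits over this ideal to delimit the final interval.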
Parallel lossless image compression using Huffman and arithmetic coding
 In Proc. Data Compression Conf. DCC–92, Snowbird
, 1992
Abstract

Cited by 10 (0 self)
We show that high-resolution images can be encoded and decoded efficiently in parallel. We present an algorithm based on the hierarchical MLP method, used either with Huffman coding or with a new variant of arithmetic coding called quasi-arithmetic coding. The coding step can be parallelized, even though the codes for different pixels are of different lengths; parallelization of the prediction and error modeling components is straightforward.
Lossless Compression for Text and Images
 International Journal of High Speed Electronics and Systems
, 1995
Abstract

Cited by 10 (0 self)
Most data that is inherently discrete needs to be compressed in such a way that it can be recovered exactly, without any loss. Examples include text of all kinds, experimental results, and statistical databases. Other forms of data may need to be stored exactly, such as images, particularly bi-level ones, or ones arising in medical and remote-sensing applications, or ones that may be required to be certified true for legal reasons. Moreover, during the process of lossy compression, many occasions for lossless compression of coefficients or other information arise. This paper surveys techniques for lossless compression. The process of compression can be broken down into modeling and coding. We provide an extensive discussion of coding techniques, and then introduce methods of modeling that are appropriate for text and images. Standard methods used in popular utilities (in the case of text) and international standards (in the case of images) are described. Keywords: Text compression, ima...
Scalar Quantization With Arithmetic Coding
, 1990
Abstract

Cited by 9 (4 self)
The problem of scalar quantization of certain memoryless sources with entropy coding is considered. The work is divided into two parts. In the first
Optimal Transforms for Multispectral and Multilayer Image Coding
 IEEE Trans. on Image Processing
, 1995
Abstract

Cited by 9 (2 self)
Multispectral images are composed of a series of images at differing optical wavelengths. Since these images can be quite large, they invite efficient source coding schemes for reducing storage and transmission requirements. Because multispectral images include a third (spectral) dimension with nonstationary behavior, these multilayer data sets require specialized coding techniques. In this paper, we develop both a theory and specific methods for performing optimal transform coding of multispectral images. The theory is based on the assumption that a multispectral image may be modeled as a set of jointly stationary Gaussian random processes. Therefore, the methods may be applied to any multilayer data set which meets this assumption. Although we do not assume the autocorrelation has a separable form, we show that the optimal transform for coding has a partially separable structure. In particular, we prove that a coding scheme consisting of a frequency transform within each layer foll...
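The spectral half of such a partially separable transform is a Karhunen-Loève transform across bands, which for two bands has a closed form as a rotation. A sketch (the synthetic "bands" below are hypothetical, stand-ins for flattened image layers):

```python
import math, random

def klt_2band(x, y):
    """Closed-form 2-band KLT: rotate into the eigenbasis of the 2x2
    cross-band covariance, which decorrelates the two bands."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cxx = sum((a - mx) ** 2 for a in x) / n
    cyy = sum((b - my) ** 2 for b in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # Rotation angle that diagonalizes the symmetric covariance matrix.
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
    c, s = math.cos(theta), math.sin(theta)
    u = [c * (a - mx) + s * (b - my) for a, b in zip(x, y)]
    v = [-s * (a - mx) + c * (b - my) for a, b in zip(x, y)]
    return u, v

# Two strongly correlated synthetic bands; after the KLT their covariance is ~0.
random.seed(0)
x = [random.gauss(0, 1) for _ in range(2000)]
y = [0.8 * a + 0.2 * random.gauss(0, 1) for a in x]
u, v = klt_2band(x, y)
cov_uv = sum(a * b for a, b in zip(u, v)) / len(u)
print(cov_uv)
```

In the full scheme this spectral decorrelation is paired with a frequency transform within each layer, which is what makes the optimal transform only partially separable.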
Coding and Compression: a Happy Union of Theory and Practice
Abstract

Cited by 6 (0 self)
 Add to MetaCart
This paper laid down the foundations for what is now known as information theory in a mathematical framework that is probabilistic (see e.g. Cover and Thomas 1991, Verdú 1998). That is, Shannon modeled the signal or message process by a random process and a communication channel by a random transition matrix that may distort the message. In the five decades that followed, information theory provided fundamental limits for communication in general and coding and compression in particular. These limits, predicted by information theory under probabilistic models, are now being approached in real products such as computer modems. Since these limits or fundamental communication quantities such as entropy and channel capacity vary from signal process to signal process or from channel to channel, they have to be estimated for each communication setup. In this sense, information theory is intrinsically statistical. Moreover, the algorithmic theory of information has inspired an extension of Shannon's ideas that provides a formal measure of information of the kind long sought for in statistical inference and modeling. This measure has led to the Minimum Description Length (MDL) principle for modeling in general and model selection in particular (see Rissanen 1978, Rissanen 1989, Barron, Rissanen and Yu 1998, Hansen and Yu 1998).