Results 1–10 of 22
Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard. Circuits and Systems for Video Technology, IEEE Transactions on
Abstract

Cited by 132 (9 self)
(CABAC) as a normative part of the new ITU-T/ISO/IEC standard H.264/AVC for video compression is presented. By combining an adaptive binary arithmetic coding technique with context modeling, a high degree of adaptation and redundancy reduction is achieved. The CABAC framework also includes a novel low-complexity method for binary arithmetic coding and probability estimation that is well suited for efficient hardware and software implementations. CABAC significantly outperforms the baseline entropy coding method of H.264/AVC for the typical area of envisaged target applications. For a set of test sequences representing typical material used in broadcast applications and for a range of acceptable video quality of about 30 to 38 dB, average bit-rate savings of 9%–14% are achieved. Index Terms: Binary arithmetic coding, CABAC, context modeling, entropy coding, H.264, MPEG-4 AVC.
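The interplay of context modeling and adaptive probability estimation that CABAC exploits can be illustrated without a full coder. The sketch below is not the H.264/AVC algorithm; it is a minimal order-1 model with hypothetical Laplace-smoothed counts, tallying the ideal code length (−log2 p per bit) that an adaptive binary arithmetic coder conditioned on the previous bit would approach:

```python
from math import log2

def context_model_cost(bits):
    """Ideal code length (bits) of a binary sequence under an adaptive
    order-1 model: the previous bit selects the context, and each context
    keeps its own Laplace-smoothed symbol counts."""
    counts = {0: [1, 1], 1: [1, 1]}  # context -> [zeros seen, ones seen]
    prev = 0                         # assumed initial context
    total = 0.0
    for b in bits:
        c0, c1 = counts[prev]
        p = (c1 if b else c0) / (c0 + c1)  # adaptive probability of this bit
        total += -log2(p)                  # ideal arithmetic-coding cost
        counts[prev][b] += 1               # adapt only the active context
        prev = b
    return total

# A repetitive pattern costs well under 1 bit/symbol once the model adapts:
pattern = [0, 0, 0, 0, 1, 0, 0, 0, 0, 1] * 20
print(f"{context_model_cost(pattern):.1f} bits for {len(pattern)} input bits")
```

Splitting the counts per context is what lets the model exploit conditional statistics that a single shared counter would average away.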
Analysis of Arithmetic Coding for Data Compression
 INFORMATION PROCESSING AND MANAGEMENT
, 1992
Abstract

Cited by 38 (6 self)
Arithmetic coding, in conjunction with a suitable probabilistic model, can provide nearly optimal data compression. In this article we analyze the effect that the model and the particular implementation of arithmetic coding have on the code length obtained. Periodic scaling is often used in arithmetic coding implementations to reduce time and storage requirements; it also introduces a recency effect which can further affect compression. Our main contribution is introducing the concept of weighted entropy and using it to characterize in an elegant way the effect that periodic scaling has on the code length. We explain why and by how much scaling increases the code length for files with a homogeneous distribution of symbols, and we characterize the reduction in code length due to scaling for files exhibiting locality of reference. We also give a rigorous proof that the coding effects of rounding scaled weights, using integer arithmetic, and encoding end-of-file are negligible.
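The periodic scaling analyzed here is easy to reproduce in miniature. The sketch below is a hypothetical two-symbol adaptive model, not the authors' implementation: it halves all counts whenever the total reaches a limit, and shows the resulting recency effect shortening the ideal code length on a file with strong locality of reference:

```python
from math import log2

def code_length(symbols, alphabet, scale_limit=None):
    """Ideal adaptive code length (bits) for a stream over a fixed alphabet.
    If scale_limit is set, every count is halved (rounding up, so no count
    reaches zero) whenever the total hits the limit -- periodic scaling,
    which weights recent symbols more heavily."""
    counts = {a: 1 for a in alphabet}
    bits = 0.0
    for s in symbols:
        total = sum(counts.values())
        bits += -log2(counts[s] / total)
        counts[s] += 1
        if scale_limit and sum(counts.values()) >= scale_limit:
            for a in alphabet:
                counts[a] = (counts[a] + 1) // 2
    return bits

# Locality of reference: a long run of 'a' followed by a long run of 'b'.
data = "a" * 500 + "b" * 500
plain = code_length(data, "ab")
scaled = code_length(data, "ab", scale_limit=32)
print(f"without scaling: {plain:.0f} bits, with scaling: {scaled:.0f} bits")
```

After the run of 'a's, the unscaled model must "unlearn" 500 counts one symbol at a time, while the scaled model's small counts let it adapt to the 'b' run within a few dozen symbols.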
Practical Implementations of Arithmetic Coding
 IN IMAGE AND TEXT
, 1992
Abstract

Cited by 35 (6 self)
We provide a tutorial on arithmetic coding, showing how it provides nearly optimal data compression and how it can be matched with almost any probabilistic model. We indicate the main disadvantage of arithmetic coding, its slowness, and give the basis of a fast, space-efficient, approximate arithmetic coder with only minimal loss of compression efficiency. Our coder is based on the replacement of arithmetic by table lookups coupled with a new deterministic probability estimation scheme.
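The table-lookup flavor of such a probability estimator can be hinted at with a toy state machine. The table below is illustrative only (hypothetical states and probabilities, not the deterministic scheme proposed in the paper): each state indexes an approximate probability, and updates are lookups and increments rather than divisions:

```python
# Eight hypothetical estimator states, each indexing an approximate
# probability of the less probable symbol (LPS); values are illustrative.
P_LPS = [0.45, 0.35, 0.25, 0.18, 0.12, 0.08, 0.05, 0.03]

def estimate(bits):
    """Track a probability estimate by table lookup: seeing the more
    probable symbol (MPS) moves the state toward certainty, seeing the
    LPS backs it off, and at the weakest state the MPS designation flips.
    Returns the estimated probability of each bit before adaptation."""
    state, mps = 0, 0
    probs = []
    for b in bits:
        p_lps = P_LPS[state]
        probs.append(1 - p_lps if b == mps else p_lps)
        if b == mps:
            state = min(state + 1, len(P_LPS) - 1)  # MPS: strengthen estimate
        elif state == 0:
            mps = b                                 # weakest state: swap MPS
        else:
            state -= 1                              # LPS: weaken estimate
    return probs

print(estimate([0] * 5))  # estimate of each bit climbs toward certainty
```

No division or count maintenance appears in the loop, which is what makes this style of estimator attractive for fast implementations.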
On-Line Stochastic Processes in Data Compression
, 1996
Abstract

Cited by 15 (6 self)
The ability to predict the future based upon the past in finite-alphabet sequences has many applications, including communications, data security, pattern recognition, and natural language processing. By Shannon's theory and the breakthrough development of arithmetic coding, any sequence a_1 a_2 ··· a_n can be encoded in a number of bits that is essentially equal to the minimal information-lossless code length, Σ_i −log_2 p(a_i | a_1 ··· a_{i−1}). The goal of universal on-line modeling, and therefore of universal data compression, is to deduce the model of the input sequence a_1 a_2 ··· a_n that can estimate each p(a_i | a_1 ··· a_{i−1}) knowing only a_1 ··· a_{i−1}, so that the ex...
Parallel lossless image compression using Huffman and arithmetic coding
 In Proc. Data Compression Conf. DCC–92, Snowbird
, 1992
Abstract

Cited by 10 (0 self)
We show that high-resolution images can be encoded and decoded efficiently in parallel. We present an algorithm based on the hierarchical MLP method, used either with Huffman coding or with a new variant of arithmetic coding called quasi-arithmetic coding. The coding step can be parallelized, even though the codes for different pixels are of different lengths; parallelization of the prediction and error modeling components is straightforward.
Scalar Quantization With Arithmetic Coding
, 1990
Abstract

Cited by 9 (4 self)
The problem of scalar quantization of certain memoryless sources with entropy coding is considered. The work is divided into two parts. In the first
Optimal Transforms for Multispectral and Multilayer Image Coding
 IEEE Trans. on Image Processing
, 1995
Abstract

Cited by 8 (2 self)
Multispectral images are composed of a series of images at differing optical wavelengths. Since these images can be quite large, they invite efficient source coding schemes for reducing storage and transmission requirements. Because multispectral images include a third (spectral) dimension with non-stationary behavior, these multilayer data sets require specialized coding techniques. In this paper, we develop both a theory and specific methods for performing optimal transform coding of multispectral images. The theory is based on the assumption that a multispectral image may be modeled as a set of jointly stationary Gaussian random processes. Therefore, the methods may be applied to any multilayer data set which meets this assumption. Although we do not assume the autocorrelation has a separable form, we show that the optimal transform for coding has a partially separable structure. In particular, we prove that a coding scheme consisting of a frequency transform within each layer foll...
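For just two layers, the optimal decorrelating (Karhunen-Loève) transform across the spectral dimension reduces to a 2x2 rotation that can be computed in closed form. The sketch below is a toy instance under that framing (synthetic data and hypothetical helper names, not the paper's method), rotating correlated layer pairs so their cross-covariance vanishes:

```python
from math import atan2, cos, sin

def covariance(pairs):
    """Sample covariance of a list of (layer1, layer2) values: (a, b, c)
    for the matrix [[a, b], [b, c]]."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    a = sum((x - mx) ** 2 for x, _ in pairs) / n
    c = sum((y - my) ** 2 for _, y in pairs) / n
    b = sum((x - mx) * (y - my) for x, y in pairs) / n
    return a, b, c

def klt_angle(a, b, c):
    # Rotation angle that diagonalizes the symmetric matrix [[a, b], [b, c]].
    return 0.5 * atan2(2 * b, a - c)

def rotate(pairs, theta):
    ct, st = cos(theta), sin(theta)
    return [(ct * x + st * y, -st * x + ct * y) for x, y in pairs]

# Two strongly correlated synthetic "spectral layers":
layer_pairs = [(i, 0.9 * i + ((i * 7) % 5 - 2)) for i in range(50)]
a, b, c = covariance(layer_pairs)
decorrelated = rotate(layer_pairs, klt_angle(a, b, c))
_, b2, _ = covariance(decorrelated)
print(f"cross-covariance before: {b:.1f}, after: {b2:.6f}")
```

Once the cross term is zero, each rotated layer can be quantized and entropy coded independently without leaving redundancy between them, which is the point of applying a spectral transform before per-layer coding.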
Lossless Compression for Text and Images
 International Journal of High Speed Electronics and Systems
, 1995
Abstract

Cited by 7 (0 self)
Most data that is inherently discrete needs to be compressed in such a way that it can be recovered exactly, without any loss. Examples include text of all kinds, experimental results, and statistical databases. Other forms of data may need to be stored exactly, such as images, particularly bi-level ones, or ones arising in medical and remote-sensing applications, or ones that may be required to be certified true for legal reasons. Moreover, during the process of lossy compression, many occasions for lossless compression of coefficients or other information arise. This paper surveys techniques for lossless compression. The process of compression can be broken down into modeling and coding. We provide an extensive discussion of coding techniques, and then introduce methods of modeling that are appropriate for text and images. Standard methods used in popular utilities (in the case of text) and international standards (in the case of images) are described. Keywords: Text compression, ima...
Multialphabet Arithmetic Coding at 16 MBytes/sec
 Proc. Data Compression Conference, 30 Mar 93, Snowbird, UT
, 1993
Abstract

Cited by 4 (0 self)
We present the design and performance of a non-adaptive hardware system for data compression by arithmetic coding. The alphabet of the data source is the full 256-symbol ASCII character set, plus a non-ASCII end-of-file symbol. The key ideas of our system are (i) the non-arithmetic representation of the current interval width, which yields improved coding efficiency in the interval-width update, and (ii) a retimed circuit for the code point update, which removes this step from the critical path of the system's operation. Through a further retiming, the lower bound on this circuit's clock period can be reduced to a constant, independent of its width in bits. We have implemented and tested the system on a reconfigurable coprocessor, which is constructed from commercially available field-programmable gate arrays and static RAM. This implementation compresses its input stream at better than 16 MBytes/sec. 1 Introduction. Arithmetic coding is a well-known method for lossless data compressi...