Results 1–10 of 31
Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard. IEEE Transactions on Circuits and Systems for Video Technology
Abstract

Cited by 110 (6 self)
(CABAC) as a normative part of the new ITU-T/ISO/IEC standard H.264/AVC for video compression is presented. By combining an adaptive binary arithmetic coding technique with context modeling, a high degree of adaptation and redundancy reduction is achieved. The CABAC framework also includes a novel low-complexity method for binary arithmetic coding and probability estimation that is well suited for efficient hardware and software implementations. CABAC significantly outperforms the baseline entropy coding method of H.264/AVC for the typical area of envisaged target applications. For a set of test sequences representing typical material used in broadcast applications and for a range of acceptable video quality of about 30 to 38 dB, average bit-rate savings of 9%–14% are achieved. Index Terms—Binary arithmetic coding, CABAC, context modeling, entropy coding, H.264, MPEG-4 AVC.
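CABAC's actual probability estimator is a table-driven finite-state machine defined by the standard; the sketch below is a simplification that replaces it with plain counts, only to illustrate the abstract's central idea that per-context adaptive estimates reduce the ideal (entropy) code length compared with a single shared model. All names here are illustrative, not from the standard.

```python
import math

class ContextModel:
    """Adaptive estimate of P(bin = 1) for one context, via counts.
    (A stand-in for CABAC's finite-state estimator.)"""
    def __init__(self):
        self.ones = 1    # Laplace-style initial counts
        self.zeros = 1

    def p_one(self):
        return self.ones / (self.ones + self.zeros)

    def update(self, bit):
        if bit:
            self.ones += 1
        else:
            self.zeros += 1

def ideal_code_length(bins, context_of):
    """Sum of -log2 p(bin | context): the length an ideal binary
    arithmetic coder would approach with these adaptive estimates."""
    ctx = {}
    total = 0.0
    for i, b in enumerate(bins):
        model = ctx.setdefault(context_of(bins, i), ContextModel())
        p = model.p_one() if b else 1.0 - model.p_one()
        total += -math.log2(p)
        model.update(b)
    return total

# Context = previous bin value (order-1), vs. one shared context.
bins = [1, 1, 1, 0, 1, 1, 1, 0] * 32
with_ctx = ideal_code_length(bins, lambda s, i: s[i - 1] if i else 0)
no_ctx = ideal_code_length(bins, lambda s, i: 0)
print(with_ctx < no_ctx)   # context modeling reduces the code length
```

On this periodic input the order-1 contexts separate "bin after a 1" from "bin after a 0", each of which is more predictable than the pooled stream, so the context-modeled code length comes out smaller.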
Analysis of Arithmetic Coding for Data Compression
 INFORMATION PROCESSING AND MANAGEMENT
, 1992
Abstract

Cited by 36 (6 self)
Arithmetic coding, in conjunction with a suitable probabilistic model, can provide nearly optimal data compression. In this article we analyze the effect that the model and the particular implementation of arithmetic coding have on the code length obtained. Periodic scaling is often used in arithmetic coding implementations to reduce time and storage requirements; it also introduces a recency effect which can further affect compression. Our main contribution is introducing the concept of weighted entropy and using it to characterize in an elegant way the effect that periodic scaling has on the code length. We explain why and by how much scaling increases the code length for files with a homogeneous distribution of symbols, and we characterize the reduction in code length due to scaling for files exhibiting locality of reference. We also give a rigorous proof that the coding effects of rounding scaled weights, using integer arithmetic, and encoding end-of-file are negligible.
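The two effects the abstract describes can be reproduced with a toy experiment. The sketch below (assumed names, not the paper's code) computes the ideal code length of an adaptive order-0 model with and without periodic count halving: scaling helps a file with locality of reference and hurts a homogeneous one.

```python
import math

def adaptive_code_length(data, scale_limit=None):
    """Ideal code length (bits) of an adaptive order-0 model,
    optionally halving all counts whenever their total exceeds
    scale_limit -- the periodic scaling analyzed in the paper."""
    counts = {s: 1 for s in set(data)}   # Laplace-initialized counts
    total_bits = 0.0
    for s in data:
        n = sum(counts.values())
        total_bits += -math.log2(counts[s] / n)
        counts[s] += 1
        if scale_limit and sum(counts.values()) > scale_limit:
            # Halving keeps counts >= 1 and weights recent symbols
            # more heavily (the recency effect).
            counts = {k: max(1, v // 2) for k, v in counts.items()}
    return total_bits

local = "a" * 500 + "b" * 500     # locality of reference
uniform = "ab" * 500              # homogeneous distribution
print(adaptive_code_length(local, 64) < adaptive_code_length(local))
print(adaptive_code_length(uniform, 64) > adaptive_code_length(uniform))
```

For the local file, halving lets the model forget the obsolete first-half statistics quickly; for the homogeneous file, halving only adds estimation noise, so the code length grows slightly.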
A Natural Law of Succession
, 1995
Abstract

Cited by 35 (3 self)
Consider the following problem. You are given an alphabet of k distinct symbols and are told that the i-th symbol occurred exactly n_i times in the past. On the basis of this information alone, you must now estimate the conditional probability that the next symbol will be i. In this report, we present a new solution to this fundamental problem in statistics and demonstrate that our solution outperforms standard approaches, both in theory and in practice.
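The "standard approaches" the abstract compares against include the classical add-constant estimators; the paper's own estimator is different and is not reproduced here. As a baseline, two well-known rules:

```python
def laplace(counts, i):
    """Laplace's rule of succession: (n_i + 1) / (n + k)."""
    n, k = sum(counts), len(counts)
    return (counts[i] + 1) / (n + k)

def krichevsky_trofimov(counts, i):
    """Add-1/2 (Krichevsky-Trofimov) estimate: (n_i + 1/2) / (n + k/2)."""
    n, k = sum(counts), len(counts)
    return (counts[i] + 0.5) / (n + k / 2)

counts = [3, 1, 0]   # symbol 0 seen 3 times, symbol 1 once, symbol 2 never
print(laplace(counts, 2))              # never-seen symbols keep some mass
print(krichevsky_trofimov(counts, 2))  # smaller mass for unseen symbols
```

Both rules avoid assigning zero probability to unseen symbols, which would make them uncodable under arithmetic coding; they differ in how much probability mass they reserve for novelty.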
Design and Analysis of Fast Text Compression Based on Quasi-Arithmetic Coding
 IN PROC. DATA COMPRESSION CONFERENCE
, 1994
Abstract

Cited by 21 (5 self)
We give a detailed algorithm for fast text compression. Our algorithm, related to the PPM method, simplifies the modeling phase by eliminating the escape mechanism and speeds up coding by using a combination of quasi-arithmetic coding and Rice coding. We provide details of the use of quasi-arithmetic code tables, and analyze their compression performance. Our Fast PPM method is shown experimentally to be almost twice as fast as the PPMC method, while giving comparable compression.
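Rice coding, one of the two coders the abstract combines, is simple enough to sketch in full: a nonnegative integer n is split by a parameter k into a quotient (sent in unary) and a k-bit remainder. This is the standard textbook scheme, not the paper's specific table layout.

```python
def rice_encode(n, k):
    """Rice code of nonnegative n with parameter k: quotient n >> k
    in unary (q ones then a zero), then the remainder in k plain bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def rice_decode(bits, k):
    """Inverse of rice_encode for a single codeword."""
    q = bits.index("0")                              # unary quotient
    r = int(bits[q + 1 : q + 1 + k], 2) if k else 0  # k-bit remainder
    return (q << k) | r

print(rice_encode(9, 2))           # 9 = 2*4 + 1 -> "110" + "01"
print(rice_decode("11001", 2))     # recovers 9
```

Rice codes are attractive in a fast coder precisely because, unlike general arithmetic coding, encoding and decoding reduce to shifts and masks with no multiplications.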
Random Access Decompression using Binary Arithmetic Coding
 Data Compression Conference
, 1999
Abstract

Cited by 17 (3 self)
We present an algorithm based on arithmetic coding that allows decompression to start at any point in the compressed file. This random access requirement poses some restrictions on the implementation of arithmetic coding and on the model used. Our main application area is executable code compression for computer systems where machine instructions are decompressed on-the-fly before execution. We focus on the decompression side of arithmetic coding and we propose a fast decoding scheme based on finite state machines. Furthermore, we present a method to decode multiple bits per cycle, while keeping the size of the decoder small.
On-Line Stochastic Processes in Data Compression
, 1996
Abstract

Cited by 15 (6 self)
The ability to predict the future based upon the past in finite-alphabet sequences has many applications, including communications, data security, pattern recognition, and natural language processing. By Shannon's theory and the breakthrough development of arithmetic coding, any sequence a_1 a_2 ... a_n can be encoded in a number of bits that is essentially equal to the minimal information-lossless code length, sum_i −log2 p(a_i | a_1 ... a_{i−1}). The goal of universal on-line modeling, and therefore of universal data compression, is to deduce the model of the input sequence a_1 a_2 ... a_n that can estimate each p(a_i | a_1 ... a_{i−1}) knowing only a_1 a_2 ... a_{i−1} so that the ex...
2D-Pattern Matching Image and Video Compression: Preliminary Results
 IEEE Trans. Image Processing
, 1998
Abstract

Cited by 12 (2 self)
In this paper, we present a lossy data compression scheme based on an approximate two-dimensional pattern matching (2D-PMIC) extension of the Lempel-Ziv lossless scheme. We apply the scheme to image and video compression and report on our theoretical and experimental results. Theoretically, we show that the so-called fixed database model leads to suboptimal compression. Furthermore, the compression ratio of this model is as low as the generalized entropy that we define in the paper. We use this model for our video compression scheme and present experimental results. For image compression we use a growing database model for which we provide an approximate analysis (which will be presented in the final version of the paper). The implementation of 2D-PMIC is a challenging problem from the algorithmic point of view. We use a range of novel techniques and data structures such as k-d trees, generalized run-length coding, adaptive arithmetic coding, and a variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25–0.5 bpp for high quality images and data rates in the range of 0.15–0.4 Mbps for video compression.
Parallel lossless image compression using Huffman and arithmetic coding
 In Proc. Data Compression Conf. DCC–92, Snowbird
, 1992
Abstract

Cited by 10 (0 self)
We show that high-resolution images can be encoded and decoded efficiently in parallel. We present an algorithm based on the hierarchical MLP method, used either with Huffman coding or with a new variant of arithmetic coding called quasi-arithmetic coding. The coding step can be parallelized, even though the codes for different pixels are of different lengths; parallelization of the prediction and error modeling components is straightforward.
Efficient Sensor Network Reprogramming through Compression of Executable Modules
Abstract

Cited by 8 (0 self)
Abstract—Software in deployed sensor networks needs to be updated to introduce new functionality or to fix bugs. Reducing dissemination time is important because the dissemination disturbs the regular operation of the network. We present a method for reducing the dissemination time and energy consumption based on compression of native code modules. Code compression reduces the size of the software update, but the decompression on the sensor nodes requires processing time and energy. We quantify these tradeoffs for seven different compression algorithms. Our results show that GZIP has the most favorable tradeoffs, saving on average 67% of the dissemination time and 69% of the energy in a multi-hop wireless sensor network.
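The size side of the tradeoff the abstract quantifies can be measured directly with DEFLATE, the algorithm inside GZIP, via Python's zlib. The payload and function names below are illustrative stand-ins for a native code module, not the paper's setup.

```python
import time
import zlib

def compression_tradeoff(payload, level=9):
    """Measure DEFLATE's size/time tradeoff on a code-like payload:
    returns (compressed size / original size, compress time,
    decompress time). Decompression cost is what a sensor node pays."""
    t0 = time.perf_counter()
    packed = zlib.compress(payload, level)
    t_comp = time.perf_counter() - t0
    t0 = time.perf_counter()
    restored = zlib.decompress(packed)
    t_dec = time.perf_counter() - t0
    assert restored == payload        # lossless round trip
    return len(packed) / len(payload), t_comp, t_dec

# Repetitive assembly-like text stands in for a native code module.
payload = b"mov r0, r1; ldr r2, [r3]; add r0, r0, r2; " * 400
ratio, t_c, t_d = compression_tradeoff(payload)
print(ratio < 0.5)   # far fewer bytes to disseminate over the radio
```

A smaller image means fewer radio transmissions during dissemination; whether that outweighs the nodes' decompression cost is exactly the tradeoff the paper measures across seven algorithms.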