Results 1–10 of 161
Compressed Bloom Filters
, 2001
"... A Bloom filter is a simple spaceefficient randomized data structure for representing a set in order to support membership queries. Although Bloom filters allow false positives, for many applications the space savings outweigh this drawback when the probability of an error is sufficiently low. We in ..."
Abstract

Cited by 247 (10 self)
 Add to MetaCart
A Bloom filter is a simple space-efficient randomized data structure for representing a set in order to support membership queries. Although Bloom filters allow false positives, for many applications the space savings outweigh this drawback when the probability of an error is sufficiently low. We introduce compressed Bloom filters, which improve performance when the Bloom filter is passed as a message, and its transmission size is a limiting factor. For example, Bloom filters have been suggested as a means for sharing Web cache information. In this setting, proxies do not share the exact contents of their caches, but instead periodically broadcast Bloom filters representing their cache. By using compressed Bloom filters, proxies can reduce the number of bits broadcast, the false positive rate, and/or the amount of computation per lookup. The cost is the processing time for compression and decompression, which can use simple arithmetic coding, and more memory use at the proxies, which utilize the larger uncompressed form of the Bloom filter.
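As an illustration of the data structure this abstract describes, a minimal Bloom filter sketch in Python. The bit-array size `m`, hash count `k`, and the SHA-256-based index derivation are illustrative choices, not the paper's:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: m bits, k hash positions per item.

    Queries may return false positives but never false negatives.
    k must be <= 8 for this digest-slicing scheme (32-byte SHA-256).
    """

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per bit, for clarity

    def _indexes(self, item):
        # Derive k bit positions from one SHA-256 digest of the item.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[4 * i: 4 * i + 4], "big") % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))
```

The paper's point can be read off this sketch: when the filter itself is transmitted, a larger but sparser bit array compresses well (e.g. with arithmetic coding), so the transmitted size can beat a filter that was tuned to be optimal uncompressed.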
Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard. Circuits and Systems for Video Technology, IEEE Transactions on
"... (CABAC) as a normative part of the new ITUT/ISO/IEC standard H.264/AVC for video compression is presented. By combining an adaptive binary arithmetic coding technique with context modeling, a high degree of adaptation and redundancy reduction is achieved. The CABAC framework also includes a novel l ..."
Abstract

Cited by 189 (12 self)
 Add to MetaCart
(Show Context)
Context-based adaptive binary arithmetic coding (CABAC) as a normative part of the new ITU-T/ISO/IEC standard H.264/AVC for video compression is presented. By combining an adaptive binary arithmetic coding technique with context modeling, a high degree of adaptation and redundancy reduction is achieved. The CABAC framework also includes a novel low-complexity method for binary arithmetic coding and probability estimation that is well suited for efficient hardware and software implementations. CABAC significantly outperforms the baseline entropy coding method of H.264/AVC for the typical area of envisaged target applications. For a set of test sequences representing typical material used in broadcast applications and for a range of acceptable video quality of about 30 to 38 dB, average bit-rate savings of 9%–14% are achieved. Index Terms—Binary arithmetic coding, CABAC, context modeling, entropy coding, H.264, MPEG-4 AVC.
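The coupling of context modeling with adaptive probability estimation can be conveyed with a toy estimator: Laplace-smoothed counts per context, updated after every coded bit. This is only a sketch of the idea; CABAC's real estimator is a finite-state machine over a fixed set of probability states, and its coder is multiplication-free, none of which is modeled here:

```python
import math

class AdaptiveBinaryModel:
    """Toy per-context probability estimator for binary symbols."""

    def __init__(self):
        self.counts = {}  # context -> [zeros_seen, ones_seen]

    def p(self, ctx, bit):
        c = self.counts.get(ctx, [1, 1])  # Laplace prior: p starts at 0.5
        return c[bit] / (c[0] + c[1])

    def cost(self, ctx, bit):
        # Ideal arithmetic-coding cost of `bit`, in bits: -log2 p(bit).
        return -math.log2(self.p(ctx, bit))

    def update(self, ctx, bit):
        self.counts.setdefault(ctx, [1, 1])[bit] += 1

def ideal_code_length(bits, context_of):
    """Total ideal code length when each bit is predicted from its context."""
    model, total = AdaptiveBinaryModel(), 0.0
    for i, b in enumerate(bits):
        ctx = context_of(bits, i)
        total += model.cost(ctx, b)
        model.update(ctx, b)
    return total
```

With the previous bit as context, a strictly alternating stream of 100 bits codes in far fewer than 100 bits, while the same model with no context stays near 1 bit per symbol; this is the "adaptation and redundancy reduction" the abstract refers to.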
Compression and Explanation using Hierarchical Grammars
 Computer Journal
, 1997
"... This paper describes an algorithm, called SEQUITUR, that identifies hierarchical structure in ..."
Abstract

Cited by 96 (1 self)
 Add to MetaCart
This paper describes an algorithm, called SEQUITUR, that identifies hierarchical structure in ...
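SEQUITUR itself builds its grammar online, in linear time, by enforcing digram uniqueness and rule utility. The offline sketch below only conveys the flavor of hierarchical structure discovery, by repeatedly folding the most frequent repeated digram into a new rule (this is closer to a Re-Pair-style pass than to SEQUITUR proper):

```python
from collections import Counter

def digram_grammar(seq):
    """Repeatedly replace the most frequent repeated digram with a rule.

    Returns (compressed symbol list, rules dict). Offline illustration
    only; SEQUITUR achieves the same kind of hierarchy incrementally.
    """
    rules, symbols, next_id = {}, list(seq), 0
    while True:
        digrams = Counter(zip(symbols, symbols[1:]))
        if not digrams:
            return symbols, rules
        (a, b), n = digrams.most_common(1)[0]
        if n < 2:                       # nothing repeats: grammar is done
            return symbols, rules
        name = f"R{next_id}"
        next_id += 1
        rules[name] = (a, b)
        out, i = [], 0                  # left-to-right, non-overlapping fold
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == (a, b):
                out.append(name)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out
```

On "abababab" this yields the nested rules R0 → ab and R1 → R0 R0, i.e. the hierarchical structure the abstract alludes to.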
Compressing Integers for Fast File Access
 The Computer Journal
, 1999
"... this paper we show experimentally that, for large or small collections, storing integers in a compressed format reduces the time required for either sequential stream access or random access. We compare di#erent approaches to compressing integers, including the Elias gamma and delta codes, Golom ..."
Abstract

Cited by 79 (14 self)
 Add to MetaCart
(Show Context)
In this paper we show experimentally that, for large or small collections, storing integers in a compressed format reduces the time required for either sequential stream access or random access. We compare different approaches to compressing integers, including the Elias gamma and delta codes, Golomb coding, and a variable-byte integer scheme. As a conclusion, we recommend that, for fast access to integers, files be stored compressed.
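Two of the schemes the abstract compares can be sketched compactly: Elias gamma (shown as a bit string, defined for n ≥ 1) and a variable-byte code with 7 payload bits per byte and the high bit flagging the final byte. Variable-byte layouts differ between implementations, so treat this one as illustrative:

```python
def elias_gamma(n):
    """Elias gamma code of n >= 1 as a bit string:
    (b-1) zeros followed by the b-bit binary form of n."""
    assert n >= 1
    b = n.bit_length()
    return "0" * (b - 1) + format(n, "b")

def vbyte_encode(n):
    """Variable-byte encode n >= 0: 7 bits per byte, low groups first,
    high bit set on the terminating byte."""
    assert n >= 0
    out = bytearray()
    while n >= 128:
        out.append(n & 0x7F)
        n >>= 7
    out.append(n | 0x80)
    return bytes(out)

def vbyte_decode(data):
    """Decode a concatenation of variable-byte codes into a list."""
    n, shift, out = 0, 0, []
    for b in data:
        n |= (b & 0x7F) << shift
        if b & 0x80:                 # terminator: emit and reset
            out.append(n)
            n, shift = 0, 0
        else:
            shift += 7
    return out
```

The trade-off the paper measures is visible here: gamma codes pack small integers into very few bits, while variable-byte codes are byte-aligned and therefore cheaper to decode.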
Single resolution compression of arbitrary triangular meshes with properties
 In Data Compression Conference’99 Conference Proceedings
, 1999
"... Polygonal meshes have been used as the primary geometric model representation for networked gaming and for complex interactive design in manufacturing. Accurate polygonal mesh approximation of a surface with sharp features (holes, highly varying curvatures) requires extremely large number of triangl ..."
Abstract

Cited by 64 (3 self)
 Add to MetaCart
(Show Context)
Polygonal meshes have been used as the primary geometric model representation for networked gaming and for complex interactive design in manufacturing. Accurate polygonal mesh approximation of a surface with sharp features (holes, highly varying curvatures) requires an extremely large number of triangles. Transmission of such large triangle meshes is ...
Models of English text
, 1997
"... The problem of constructing models of English text is considered. A number of applications of such models including cryptology, spelling correction and speech recognition are reviewed. The best current models of English text have been the result of research into compression. Not only is this an impo ..."
Abstract

Cited by 51 (8 self)
 Add to MetaCart
The problem of constructing models of English text is considered. A number of applications of such models, including cryptology, spelling correction and speech recognition, are reviewed. The best current models of English text have been the result of research into compression. Not only is this an important application of such models, but the amount of compression provides a measure of how well such models perform. Three main classes of models are considered: character-based models, word-based models, and models which use auxiliary information in the form of parts of speech. These models are compared in terms of their memory usage and compression.
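The link between model quality and compression that the abstract invokes can be made concrete: a model's cross-entropy in bits per character is the size an ideal arithmetic coder would achieve with it. A sketch for order-0 (character frequencies) and order-1 (one character of context) models, measured against the text's own empirical statistics:

```python
import math
from collections import Counter

def order0_entropy(text):
    """Bits per character under an order-0 model: each character
    is predicted from the overall character frequencies alone."""
    counts, total = Counter(text), len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def order1_entropy(text):
    """Bits per character under an order-1 model: each character
    is predicted from the single character preceding it."""
    pairs = Counter(zip(text, text[1:]))      # (previous, current) counts
    ctx = Counter(text[:-1])                  # context occurrence counts
    total = len(text) - 1
    return -sum((n / total) * math.log2(n / ctx[a])
                for (a, b), n in pairs.items())
```

A perfectly predictable text like "abab..." costs 1 bit/character under the order-0 model but 0 bits under the order-1 model, which is the sense in which better models yield better compression.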
Extended Application of Suffix Trees to Data Compression
 In Data Compression Conference
, 1996
"... A practical scheme for maintaining an index for a sliding window in optimal time and space, by use of a suffix tree, is presented. The index supports location of the longest matching substring in time proportional to the length of the match. The total time for build and update operations is proporti ..."
Abstract

Cited by 41 (4 self)
 Add to MetaCart
(Show Context)
A practical scheme for maintaining an index for a sliding window in optimal time and space, by use of a suffix tree, is presented. The index supports location of the longest matching substring in time proportional to the length of the match. The total time for build and update operations is proportional to the size of the input. The algorithm, which is simple and straightforward, is presented in detail. The most prominent lossless data compression scheme, when considering compression performance, is prediction by partial matching with unbounded context lengths (PPM*). However, previously presented algorithms are hardly practical, considering their extensive use of computational resources. We show that our scheme can be applied to PPM*-style compression, obtaining an algorithm that runs in linear time, and in space bounded by an arbitrarily chosen window size. Application to Ziv-Lempel '77 compression methods is straightforward and the resulting algorithm runs in linear time.
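The query the index supports is longest-match lookup against a sliding window, the core step of LZ77-style parsing. A brute-force stand-in shows the interface; the paper's suffix-tree index answers the same query in time proportional to the match length and maintains the window in overall linear time:

```python
def longest_match(window, lookahead):
    """Find the longest prefix of `lookahead` occurring in `window`.

    Returns (position_in_window, match_length), or (-1, 0) if even the
    first character does not occur. Brute force for clarity only: a
    suffix tree over the window makes this cost proportional to the
    match length rather than to the window size.
    """
    for length in range(len(lookahead), 0, -1):
        pos = window.find(lookahead[:length])
        if pos != -1:
            return pos, length
    return -1, 0
```

An LZ77-style compressor would emit the (position, length) pair as a back-reference, slide the window forward by `length`, and repeat.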
Bicubic Subdivision-Surface Wavelets for Large-Scale Isosurface Representation and Visualization
, 2000
"... We introduce a new subdivisionsurface wavelet transform for arbitrary twomanifolds with boundary that is the first to use simple liftingstyle filtering operations with bicubic precision. We also describe a conversion process for remapping largescale isosurfaces to have subdivision connectivity ..."
Abstract

Cited by 36 (13 self)
 Add to MetaCart
We introduce a new subdivision-surface wavelet transform for arbitrary two-manifolds with boundary that is the first to use simple lifting-style filtering operations with bicubic precision. We also describe a conversion process for remapping large-scale isosurfaces to have subdivision connectivity and fair parameterizations so that the new wavelet transform can be used for compression and visualization. The main idea enabling our wavelet transform is the circular symmetrization of the filters in irregular neighborhoods, which replaces the traditional separation of filters into two 1D passes. Our wavelet transform uses polygonal base meshes to represent surface topology, from which a Catmull-Clark-style subdivision hierarchy is generated. The details between these levels of resolution are quickly computed and compactly stored as wavelet coefficients. The isosurface conversion process begins with a contour triangulation computed using conventional techniques, which we subsequently simplify with a variant edge-collapse procedure, followed by an edge-removal process. This provides a coarse initial base mesh, which is subsequently refined, relaxed and attracted in phases to converge to the contour. The conversion is designed to produce smooth, untangled and minimally skewed parameterizations, which improves the subsequent compression after applying the transform. We have demonstrated our conversion and transform for an isosurface obtained from a high-resolution turbulent-mixing hydrodynamics simulation, showing the potential for compression and level-of-detail visualization.
Text Image Compression Using Soft Pattern Matching
 Computer Journal
, 1997
"... this paper we describe a process which can be used for both lossless and lossy compression. For text documents at 200 dpi, our lossless compression ratios are between 20% and 65% better than those of the JBIG1 standard [1]. Our lossy compression ratios are between 2.0 and 4.6 times the lossless rat ..."
Abstract

Cited by 35 (9 self)
 Add to MetaCart
(Show Context)
In this paper we describe a process which can be used for both lossless and lossy compression. For text documents at 200 dpi, our lossless compression ratios are between 20% and 65% better than those of the JBIG1 standard [1]. Our lossy compression ratios are between 2.0 and 4.6 times the lossless ratios of JBIG1, with only barely perceptible changes from the original. The lossless algorithm is similar to the method described by Mohiuddin et al. [2]; we extend the method to allow lossy compression by preprocessing each character in a way that reduces the number of bits output without noticeably distorting the character.