Results 1–10 of 15
XGRIND: A Query-friendly XML Compressor
 In ICDE
, 2002
Abstract

Cited by 102 (0 self)
XML documents are extremely verbose since the "schema" is repeated for every "record" in the document. While a variety of compressors are available to address this problem, they are not designed to support direct querying of the compressed document, a useful feature from a database perspective. In this paper, we propose a new compression tool called XGrind that directly supports queries in the compressed domain. A special feature of XGrind is that the compressed document retains the structure of the original document, permitting reuse of standard XML techniques for processing the compressed document. Performance evaluation over a variety of XML documents and user queries indicates that XGrind simultaneously delivers improved query processing times and reasonable compression ratios.
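The query-in-the-compressed-domain idea above can be sketched as an illustrative toy: dictionary-code the repeated field names while keeping record structure, so a predicate can be evaluated without decompressing. The records, field names, and query below are made up for the example; XGrind's actual encoder also codes element content context-freely so that query constants can themselves be compressed.

```python
# Illustrative toy of query-friendly compression: replace each repeated
# field name with a short dictionary code while preserving record structure,
# so a predicate can be evaluated on the compressed form directly.

def build_tag_dict(tags):
    """Assign a short code Ti to each distinct tag/field name."""
    return {t: f"T{i}" for i, t in enumerate(sorted(set(tags)))}

def compress(records, tag_dict):
    """Replace every field name by its code; values are left as-is here."""
    return [{tag_dict[k]: v for k, v in rec.items()} for rec in records]

records = [{"name": "Alice", "city": "Oslo"},
           {"name": "Bob", "city": "Rome"}]
d = build_tag_dict(k for r in records for k in r)
compressed = compress(records, d)

# Evaluate city = 'Oslo' in the compressed domain: translate the tag once,
# then match against compressed records without decompressing any of them.
hits = [r for r in compressed if r.get(d["city"]) == "Oslo"]
```

Because the compressed records keep the original record structure, the same traversal code works on both forms; only the one-time tag translation differs.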
The Design and Analysis of Efficient Lossless Data Compression Systems
, 1993
Abstract

Cited by 53 (0 self)
Our thesis is that high compression efficiency for text and images can be obtained by using sophisticated statistical compression techniques, and that greatly increased speed can be achieved at only a small cost in compression efficiency. Our emphasis is on elegant design and mathematical as well as empirical analysis. We analyze arithmetic coding as it is commonly implemented and show rigorously that almost no compression is lost in the implementation. We show that high-efficiency lossless compression of both text and grayscale images can be obtained by using appropriate models in conjunction with arithmetic coding. We introduce a four-component paradigm for lossless image compression and present two methods that give state-of-the-art compression efficiency. In the text compression area, we give a small improvement on the preferred method in the literature. We show that we can often obtain significantly improved throughput at the cost of slightly reduced compression. The extra speed c...
Analysis of Arithmetic Coding for Data Compression
 In Information Processing and Management
, 1992
Abstract

Cited by 41 (6 self)
Arithmetic coding, in conjunction with a suitable probabilistic model, can provide nearly optimal data compression. In this article we analyze the effect that the model and the particular implementation of arithmetic coding have on the code length obtained. Periodic scaling is often used in arithmetic coding implementations to reduce time and storage requirements; it also introduces a recency effect which can further affect compression. Our main contribution is introducing the concept of weighted entropy and using it to characterize in an elegant way the effect that periodic scaling has on the code length. We explain why and by how much scaling increases the code length for files with a homogeneous distribution of symbols, and we characterize the reduction in code length due to scaling for files exhibiting locality of reference. We also give a rigorous proof that the coding effects of rounding scaled weights, using integer arithmetic, and encoding end-of-file are negligible.
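The periodic scaling the abstract analyzes can be illustrated with a toy adaptive frequency model. The halving rule and the tiny count limit below are illustrative assumptions, not the paper's implementation; halving discounts older observations, which is exactly the recency effect described above.

```python
# Toy adaptive model with periodic scaling: when total counts reach
# MAX_TOTAL, every count is integer-halved but kept >= 1. Older symbols are
# thereby discounted relative to recent ones (the recency effect).

MAX_TOTAL = 8   # tiny limit for illustration; real coders use e.g. 2**14

def update(counts, symbol):
    counts[symbol] = counts.get(symbol, 0) + 1
    if sum(counts.values()) >= MAX_TOTAL:
        for s in counts:
            counts[s] = max(1, counts[s] // 2)   # keep every count positive
    return counts

counts = {}
for sym in "aaaaaaaaaaabbbb":   # 11 a's followed by 4 b's
    update(counts, sym)

# After scaling, the 4 recent b's outweigh the 11 older a's: the model now
# assigns b the higher probability, illustrating locality of reference.
```

On a file with a homogeneous symbol distribution the same discounting only adds noise to the estimates, which is why scaling slightly increases code length there while helping on files with locality.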
Practical Implementations of Arithmetic Coding
 In Image and Text
, 1992
Abstract

Cited by 37 (6 self)
We provide a tutorial on arithmetic coding, showing how it provides nearly optimal data compression and how it can be matched with almost any probabilistic model. We indicate the main disadvantage of arithmetic coding, its slowness, and give the basis of a fast, space-efficient, approximate arithmetic coder with only minimal loss of compression efficiency. Our coder is based on the replacement of arithmetic by table lookups coupled with a new deterministic probability estimation scheme.
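As a back-of-the-envelope companion to the tutorial's claim of near-optimality: an ideal arithmetic coder spends close to -log2 P(message) bits, where P is the product of the model's per-symbol probabilities. The toy alphabet and probabilities below are assumptions for illustration, not from the paper.

```python
import math

# An ideal arithmetic coder needs about -log2 P(message) bits (within ~2
# bits), where P is the product of the model's per-symbol probabilities.

def ideal_code_length(message, probs):
    """Bits an ideal arithmetic coder would spend on `message`."""
    return -sum(math.log2(probs[s]) for s in message)

probs = {"a": 0.9, "b": 0.1}
bits = ideal_code_length("aaaaaaaaab", probs)   # 9 a's, 1 b

# About 4.7 bits for 10 symbols -- less than half the 10 bits that any
# one-bit-per-symbol code must spend on a binary alphabet.
```

This fractional-bit-per-symbol behavior is what no prefix code can match on skewed binary sources, and it is the efficiency the approximate table-lookup coder trades away only minimally.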
Set Redundancy, the Enhanced Compression Model, and Methods for Compressing Sets of Similar Images
, 1996
Abstract

Cited by 17 (2 self)
CHAPTER 1: INTRODUCTION
CHAPTER 2: IMAGE COMPRESSION
2.1 Introduction
2.2 Lossless Compression Methods
2.2.1 Huffman Coding
2.2.2 Arithmetic Coding
2.2.3 Lempel-Ziv Compression
2.2.4 Run Length Encoding ...
Cycon: “Efficient Pre-Coding Techniques for Wavelet-Based Image Compression”
 Proc. PCS ’97
, 1997
Abstract

Cited by 14 (8 self)
The principle of transform coding is a successfully established concept in image compression. In this paper we introduce a coding method using a fast wavelet transform and a uniform quantizer combined with a new framework of pre-coding techniques based on the concepts of partitioning, aggregation and conditional coding (PACC). Following these concepts, the data object emerging from the quantizer is first partitioned into different subsources. Parts of the correlations within and between different subsources are then captured by aggregating homogeneous elements into data structures such as run-length codes or zerotrees. By using models based on conditional probabilities, we are able to recover correlations between the structures constructed before, as well as cross-correlations between different subsources, which are utilized in a final arithmetic coding stage. Experimental results show that our proposed coding methods have a rate-distortion (R-D) performance comparable to or even better than the best zerotree-based still image coders in the published literature, with the advantage of a less demanding computational complexity. In addition, we propose and evaluate a wavelet-based video coding algorithm which outperforms the very efficient MPEG-4 Video Verification Model (VM 5.1) in both subjective and objective quality.
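The transform-then-quantize front end of the pipeline described above can be sketched with a one-level Haar wavelet transform and a uniform scalar quantizer. The Haar filter, step size, and signal are illustrative assumptions; the paper's actual filters and PACC pre-coding stages are not reproduced here.

```python
# Sketch of a transform-coding front end: one level of an (unnormalized)
# Haar wavelet transform, then a uniform scalar quantizer.

def haar_1d(x):
    """One Haar level: pairwise averages (low band) then details (high band).
    Assumes len(x) is even."""
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    det = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg + det

def uniform_quantize(coeffs, step):
    """Map each coefficient to the index of its nearest multiple of `step`."""
    return [round(c / step) for c in coeffs]

signal = [10, 12, 14, 20]
coeffs = haar_1d(signal)     # [11.0, 17.0, -1.0, -3.0]: smooth signals give
indices = uniform_quantize(coeffs, step=2)   # small, compressible details
```

The small detail coefficients are exactly what the subsequent partitioning and aggregation stages (run-lengths, zerotrees) exploit before the final arithmetic coder.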
Wavelet-based coding of three-dimensional oceanographic images around land masses
 In Proceedings of the IEEE International Conference on Image Processing (ICIP ’00)
, 2000
Abstract

Cited by 13 (8 self)
We describe an algorithm for the embedded coding of 3-D oceanographic images. These images differ from those arising in other applications in that valid data exists only at grid points corresponding to sea; grid points that cover land or lie beyond the bathymetry have no associated data. For these images, we employ a 3-D lifting wavelet transform tailored specifically to the potentially sparse nature of the data by processing only the valid sea data points in between land masses. In addition, we introduce successive-approximation run-length (SARL) coding, an embedded-coding procedure which adds successive-approximation properties to the well-known stack-run (SR) algorithm. SARL is a general technique applicable to the oceanographic images considered here as well as to other coding tasks in which embedded coding is desired but for which zerotree techniques are impractical.
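The land/sea masking idea can be sketched as follows; the helper name and mask values are assumptions for illustration. It shows how a 1-D transform pass along a grid line would visit only the runs of valid sea points between land masses.

```python
# A land/sea mask marks which grid points carry valid data; each grid line
# is split into runs of consecutive sea points so a 1-D transform pass can
# be applied to each run independently, skipping land entirely.

SEA, LAND = 1, 0

def valid_runs(mask_row):
    """Return [start, end) index pairs of consecutive sea points."""
    runs, start = [], None
    for i, m in enumerate(mask_row + [LAND]):   # sentinel flushes final run
        if m == SEA and start is None:
            start = i
        elif m == LAND and start is not None:
            runs.append((start, i))
            start = None
    return runs

mask = [SEA, SEA, LAND, LAND, SEA, SEA, SEA]
runs = valid_runs(mask)   # two sea segments separated by a land mass
```

Transforming each run separately avoids filtering across the land boundary, where the data is undefined and would otherwise contaminate the wavelet coefficients.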
Universal data compression based on the Burrows–Wheeler transformation: Theory and practice
 IEEE Transactions on Computers
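The transform named in the title can be sketched with the textbook construction (not the paper's implementation): the Burrows–Wheeler transform sorts all rotations of the sentinel-terminated input and keeps the last column, which clusters symbols with similar following contexts and is then highly compressible by simple local models.

```python
# Textbook Burrows-Wheeler transform: sort all rotations of the
# sentinel-terminated input and take the last column of the sorted table.

def bwt(s, sentinel="$"):
    assert sentinel not in s          # sentinel must be unique
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last, sentinel="$"):
    """Invert by repeatedly prepending the last column and re-sorting."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    row = next(r for r in table if r.endswith(sentinel))
    return row[:-1]                   # drop the sentinel

transformed = bwt("banana")           # like symbols end up grouped together
```

Practical BWT compressors follow the transform with move-to-front and an entropy coder, which is where the universality results of this line of work apply.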
CONTROLAB MUFA: A Multi-Level Fusion Architecture for Intelligent Navigation of a Telerobot
 In Proc. IEEE Int. Conf. Robot. Automat.
, 1999
Abstract

Cited by 6 (1 self)
This paper proposes a Multi-Level Fusion Architecture ...