Results 11 – 20 of 123
Lossless Compression of Volume Data
, 1994
Abstract
Cited by 31 (1 self)
Data in volume form consumes an extraordinary amount of storage space. For efficient storage and transmission of such data, compression algorithms are imperative. However, most volumetric datasets are used in biomedicine and other scientific applications where lossy compression is unacceptable. We present a lossless data-compression algorithm which, being oriented specifically for volume data, achieves greater compression performance than generic compression algorithms that are typically available on modern computer systems. Our algorithm is a combination of differential pulse-code modulation (DPCM) and Huffman coding and results in compression of around 50% for a set of volume data files. I. Introduction Compression for efficient storage and transmission of digital data has become routine as the application of such data has grown. Several common data-compression programs are readily available on many computers to fight the burgeoning demand for storage space. These programs are typica...
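The DPCM-plus-Huffman pipeline the abstract describes can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: `dpcm_residuals` and `huffman_code` are hypothetical helper names, and the toy 1-D sample list stands in for real volume data.

```python
import heapq
from collections import Counter

def dpcm_residuals(samples):
    """First-order DPCM: predict each sample by its predecessor."""
    prev, out = 0, []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def huffman_code(symbols):
    """Map each distinct symbol to a prefix-free bitstring."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate one-symbol input
        return {next(iter(freq)): "0"}
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)                          # unique tiebreaker for the heap
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        uid += 1
        heapq.heappush(heap, (n1 + n2, uid, merged))
    return heap[0][2]

voxels = [10, 12, 13, 13, 12, 10, 9, 9, 10, 12]   # toy stand-in for a volume
residuals = dpcm_residuals(voxels)
code = huffman_code(residuals)
bits = "".join(code[r] for r in residuals)
```

The point of the DPCM stage is that neighboring voxels are correlated, so the residual distribution is peaked near zero and Huffman coding compresses it better than the raw values.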
Efficient Decoding of Prefix Codes
 Communications of the ACM
, 1990
Abstract
Cited by 31 (0 self)
We discuss representations of prefix codes and the corresponding storage space and decoding time requirements. We assume that a dictionary of words to be encoded has been defined and that a prefix code appropriate to the dictionary has been constructed. The encoding operation becomes simple given these assumptions and an appropriate parsing strategy; we therefore concentrate on decoding. The application which led us to this work constrains the use of internal memory during the decode operation. As a result, we seek a method of decoding which has a small memory requirement. Introduction Data compression is an important and much-studied problem. Compressing data to be stored or transmitted can result in significant improvements in the use of computing resources. The degree of improvement that can be achieved depends not only on the selection of a data compression method, but also on the characteristics of the particular application. That is, no single data compression algorithm wi...
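A memory-conscious prefix-code decoder of the kind the abstract motivates can walk a flat-array code tree bit by bit. The representation below is an assumption, one reasonable choice rather than the paper's: `build_table` packs the tree into a list of `[left, right, symbol]` rows so the decoder needs no pointer structures.

```python
def build_table(code):
    """Pack the code tree into rows of [left_child, right_child, symbol];
    a flat array keeps the decoder's memory footprint small and predictable."""
    table = [[-1, -1, None]]
    for sym, bits in code.items():
        node = 0
        for b in bits:
            child = 0 if b == "0" else 1
            if table[node][child] == -1:
                table.append([-1, -1, None])
                table[node][child] = len(table) - 1
            node = table[node][child]
        table[node][2] = sym
    return table

def decode(bits, table):
    """Walk the table one bit at a time, emitting a symbol at each leaf."""
    out, node = [], 0
    for b in bits:
        node = table[node][0 if b == "0" else 1]
        if table[node][2] is not None:
            out.append(table[node][2])
            node = 0
    return out

code = {"a": "0", "b": "10", "c": "11"}   # an example prefix code
table = build_table(code)
```

Because the code is prefix-free, the walk resets to the root at every leaf and never needs lookahead.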
Test Data Compression and Test Resource Partitioning for System-on-a-Chip Using . . .
, 2003
Abstract
Cited by 30 (5 self)
Test data compression and test resource partitioning (TRP) are necessary to reduce the volume of test data for system-on-a-chip designs. We present a new class of variable-to-variable-length compression codes that are designed using distributions of the runs of 0s in typical test sequences. We refer to these as frequency-directed run-length (FDR) codes. We present experimental results for ISCAS 89 benchmark circuits and two IBM production circuits to show that FDR codes are extremely effective for test data compression and TRP. We derive upper and lower bounds on the compression expected for some generic parameters of the test sequences. These bounds are especially tight when the number of runs is small, thereby showing that FDR codes are robust, i.e., they are insensitive to variations in the input data stream. In order to highlight the inherent superiority of FDR codes, we present a probabilistic analysis of data compression for a memoryless data source. Finally, we derive entropy bounds for the benchmark test sets and show that the compression obtained using FDR codes is close to the entropy bounds.
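The codeword structure of FDR codes can be sketched as follows, under the usual presentation in which group k covers run lengths 2^k − 2 through 2^(k+1) − 3 and a codeword is a k-bit prefix (k − 1 ones then a 0) followed by a k-bit binary tail; the function names are illustrative, and the grouping is my reading of the scheme rather than a transcription of the paper.

```python
def fdr_encode_run(l):
    """Codeword for a run of l zeros terminated by a 1: group k gets a
    k-bit prefix of (k-1) ones then a 0, plus a k-bit binary tail."""
    k = 1
    while l > (1 << (k + 1)) - 3:          # group k covers 2^k-2 .. 2^(k+1)-3
        k += 1
    prefix = "1" * (k - 1) + "0"
    tail = format(l - ((1 << k) - 2), "0{}b".format(k))
    return prefix + tail

def fdr_encode(test_vector):
    """Encode a binary test vector (assumed to end in '1') run by run."""
    out, run = [], 0
    for bit in test_vector:
        if bit == "0":
            run += 1
        else:
            out.append(fdr_encode_run(run))
            run = 0
    return "".join(out)
```

Short runs get short codewords (runs 0 and 1 cost two bits), which is what makes the code effective on test sets dominated by long runs of 0s with a skewed run-length distribution.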
Structures of String Matching and Data Compression
, 1999
Abstract
Cited by 29 (0 self)
This doctoral dissertation presents a range of results concerning efficient algorithms and data structures for string processing, including several schemes contributing to sequential data compression. It comprises both theoretic results and practical implementations. We study the suffix tree data structure, presenting an efficient representation and several generalizations. This includes augmenting the suffix tree to fully support sliding window indexing (including a practical implementation) in linear time. Furthermore, we consider a variant that indexes naturally word-partitioned data, and present a linear-time construction algorithm for a tree that represents only suffixes starting at word boundaries, requiring space linear in the number of words. By applying our sliding window indexing techniques, we achieve an efficient implementation for dictionary-based compression based on the LZ77 algorithm. Furthermore, considering predictive source...
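The LZ77 parsing that the dissertation's sliding-window suffix tree accelerates can be illustrated with a deliberately naive quadratic scan. The triple format and function names are illustrative assumptions; a real implementation would find the longest match via the indexed window, not by brute force.

```python
def lz77_factorize(text, window=1024):
    """Greedy LZ77 parse into (offset, length, next_char) triples.
    Quadratic scan for clarity; a suffix tree over the window makes
    the longest-match query fast."""
    i, out = 0, []
    while i < len(text):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            # matches may run into the lookahead (overlapping copies are fine)
            while i + l < len(text) - 1 and text[j + l] == text[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        out.append((best_off, best_len, text[i + best_len]))
        i += best_len + 1
    return out

def lz77_expand(factors):
    """Invert the parse by copying byte-by-byte, then appending the literal."""
    s = ""
    for off, length, ch in factors:
        for _ in range(length):
            s += s[-off]
        s += ch
    return s
```

Copying one character at a time is what lets a match overlap its own output (e.g. a run of identical characters encoded with offset 1).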
'Computing' as information compression by multiple alignment, unification and search
 Journal of Universal Computer Science
, 1999
Abstract
Cited by 28 (14 self)
This paper argues that the operations of a `Universal Turing Machine' (UTM) and equivalent mechanisms such as the `Post Canonical System' (PCS), which are widely accepted as definitions of the concept of `computing', may be interpreted as information compression by multiple alignment, unification and search (ICMAUS). The motivation for this interpretation is that it suggests ways in which the UTM/PCS model may be augmented in a proposed new computing system designed to exploit the ICMAUS principles as fully as possible. The provision of a relatively sophisticated search mechanism in the proposed `SP' system appears to open the door to the integration and simplification of a range of functions including unsupervised inductive learning, best-match pattern recognition and information retrieval, probabilistic reasoning, planning and problem solving, and others. Detailed consideration of how the ICMAUS principles may be applied to these functions is outside the scope of this article, but relevant sources are cited.
Rotation of Periodic Strings and Short Superstrings
, 1996
Abstract
Cited by 26 (0 self)
This paper presents two simple approximation algorithms for the shortest superstring problem, with approximation ratios 2 2/3 (≈ 2.67) and 2 25/42 (≈ 2.596), improving on the best previously published approximation ratio of 2 3/4. The framework of our improved algorithms is similar to that of previous algorithms in the sense that they construct a superstring by computing some optimal cycle covers on the distance graph of the given strings, and then break and merge the cycles to finally obtain a Hamiltonian path, but we make use of new bounds on the overlap between two strings. We prove that for each periodic semi-infinite string α = a1 a2 ... of period q, there exists an integer k, such that for any (finite) string s of period p which is inequivalent to α, the overlap between s and the rotation α[k] = ak ak+1 ... is at most p + q/2. Moreover, if p ≤ q, then the overlap between s and α[k] is not larger than (2/3)(p + q). In the previous shortes...
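The notions of overlap and merge that drive algorithms in this framework can be illustrated with the classic greedy superstring heuristic. This is background to the setting, not the paper's improved cycle-cover algorithm; the function names are illustrative.

```python
def overlap(s, t):
    """Length of the longest suffix of s that is also a prefix of t."""
    for k in range(min(len(s), len(t)), 0, -1):
        if s[-k:] == t[:k]:
            return k
    return 0

def greedy_superstring(strings):
    """Repeatedly merge the ordered pair of strings with maximum overlap
    (the classic greedy heuristic, conjectured to be a 2-approximation)."""
    strs = list(strings)
    while len(strs) > 1:
        best = (-1, 0, 1)                # (overlap, i, j)
        for i in range(len(strs)):
            for j in range(len(strs)):
                if i != j:
                    ov = overlap(strs[i], strs[j])
                    if ov > best[0]:
                        best = (ov, i, j)
        ov, i, j = best
        merged = strs[i] + strs[j][ov:]
        strs = [s for k, s in enumerate(strs) if k not in (i, j)] + [merged]
    return strs[0]
```

Bounds on how much two inequivalent periodic strings can overlap, like the p + q/2 bound above, are exactly what limits the length lost when such merges are analyzed.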
Offline compression by greedy textual substitution
 PROC. IEEE
, 2000
Abstract
Cited by 25 (1 self)
Greedy offline textual substitution refers to the following approach to compression or structural inference. Given a long text string x, a substring w is identified such that replacing all instances of w in x except one by a suitable pair of pointers yields the highest possible contraction of x; the process is then repeated on the contracted text string until substrings capable of producing contractions can no longer be found. This paper examines computational issues arising in the implementation of this paradigm and describes some applications and experiments.
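A toy rendition of this paradigm, under a simplified cost model that is my assumption (each occurrence is replaced by a one-symbol pointer and each rule is charged the length of its right-hand side, rather than the paper's pointer-pair accounting):

```python
def best_substring(text, min_len=2):
    """Substring whose replacement saves the most under the toy cost model:
    count occurrences, charge one token per pointer plus len(w) for the rule."""
    best, best_gain = None, 0
    seen = set()
    for l in range(min_len, len(text) // 2 + 1):
        for i in range(len(text) - l + 1):
            w = text[i:i + l]
            if w in seen:
                continue
            seen.add(w)
            count = text.count(w)        # non-overlapping occurrences
            gain = count * (l - 1) - l
            if gain > best_gain:
                best, best_gain = w, gain
    return best, best_gain

def greedy_substitute(text):
    rules = []                           # (pointer symbol, replaced substring)
    next_cp = 0x2460                     # circled digits as fresh symbols (toy)
    while True:
        w, gain = best_substring(text)
        if gain <= 0:
            break
        sym = chr(next_cp)
        next_cp += 1
        rules.append((sym, w))
        text = text.replace(w, sym)
    return text, rules

def expand(text, rules):
    for sym, w in reversed(rules):       # undo substitutions, newest first
        text = text.replace(sym, w)
    return text
```

The computational issues the paper studies arise precisely here: recomputing the best substring after every contraction is expensive if done naively, as in this sketch.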
Compression of Correlated Bit-Vectors
 Information Systems
, 1990
Abstract
Cited by 25 (2 self)
Bitmaps are data structures occurring often in information retrieval. They are useful; they are also large and expensive to store. For this reason, considerable effort has been devoted to finding techniques for compressing them. These techniques are most effective for sparse bitmaps. We propose a preprocessing stage, in which bitmaps are first clustered and the clusters used to transform their member bitmaps into sparser ones that can be more effectively compressed. The clustering method efficiently generates a graph structure on the bitmaps. In some situations, it is desired to impose restrictions on the graph; finding the optimal graph satisfying these restrictions is shown to be NP-complete. The results of applying our algorithm to the Bible are presented: for some sets of bitmaps, our method almost doubled the compression savings. 1. Introduction Textual Information Retrieval Systems (IRS) are voracious consumers of computer storage resources. Most conspicuous, of course, is the...
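The preprocessing idea can be sketched with a bitwise-majority representative per cluster. The majority choice and the list-of-ints representation are illustrative assumptions, not necessarily the paper's transform; the point is only that XOR against a nearby representative turns correlated bitmaps into sparse ones, losslessly.

```python
def majority(bitmaps):
    """Cluster representative: bitwise majority vote over the members."""
    n = len(bitmaps)
    return [1 if 2 * sum(col) > n else 0 for col in zip(*bitmaps)]

def xor_transform(bitmaps, rep):
    """XOR each bitmap against the representative. Members similar to the
    representative become sparse; XOR-ing again restores them exactly."""
    return [[a ^ b for a, b in zip(bm, rep)] for bm in bitmaps]

# a small cluster of similar bitmaps (illustrative data)
cluster = [
    [1, 1, 0, 1, 0, 0, 1, 1],
    [1, 1, 0, 1, 0, 1, 1, 1],
    [1, 1, 1, 1, 0, 0, 1, 1],
]
rep = majority(cluster)
sparse = xor_transform(cluster, rep)
```

On this cluster the transform reduces 17 set bits to 2, which is the regime where sparse-bitmap compressors do well.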
Text Compression Using Antidictionaries
 In 26th International Colloquium on Automata, Languages and Programming (ICALP)
, 1998
Abstract
Cited by 24 (5 self)
We give a new text compression scheme based on Forbidden Words ("antidictionary"). We prove that our algorithms attain the entropy for equilibrated binary sources. One of the main advantages of this approach is that it produces very fast decompressors. A second advantage is a synchronization property that is helpful for searching compressed data and for parallelizing the compressor. Our algorithms can also be presented as "compilers" that create compressors dedicated to any previously fixed source. The techniques used in this paper are from Information Theory and Finite Automata; as a consequence, this paper shows that Formal Language Theory (in particular Finite Automata Theory) can be useful in Data Compression. Keywords: data compression, information theory, finite automaton, forbidden word, pattern matching. 1 Introduction We present a simple text compression method called DCA (Data Compression with Antidictionaries) that uses some "negative" information about the text, which is describe...
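A minimal sketch of the DCA idea: whenever the current prefix ends so that one of the two possible next bits would create a forbidden word, that bit is forced and need not be transmitted. This is my reconstruction of the principle for illustration; the function names and the convention of passing the original length to the decoder are assumptions (the paper builds the predictor as a finite automaton instead of rescanning suffixes).

```python
def forced_bit(prefix, antidict, maxlen):
    """Return the bit forced after `prefix`, or None if both bits are legal.
    A bit is forced when the alternative would create a forbidden factor."""
    forced = None
    for b in "01":
        limit = min(maxlen, len(prefix) + 1)
        if any((prefix + b)[-k:] in antidict for k in range(1, limit + 1)):
            forced = "1" if b == "0" else "0"
    return forced

def dca_encode(text, antidict):
    maxlen = max(map(len, antidict))
    out = []
    for i, bit in enumerate(text):
        if forced_bit(text[:i], antidict, maxlen) is None:
            out.append(bit)            # unpredictable bit: transmit it
    return "".join(out)                # the decoder also needs len(text)

def dca_decode(code, n, antidict):
    maxlen = max(map(len, antidict))
    out, j = "", 0
    while len(out) < n:
        f = forced_bit(out, antidict, maxlen)
        if f is not None:
            out += f                   # predictable bit, reconstructed free
        else:
            out += code[j]
            j += 1
    return out
```

With the antidictionary {"00"} (the factor 00 never occurs), every bit following a 0 is forced to 1 and is dropped from the stream, which is where the compression comes from.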
The Smallest Grammar Problem
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 2005
Abstract
Cited by 24 (0 self)
This paper addresses the smallest grammar problem: What is the smallest context-free grammar that generates exactly one given string σ? This is a natural question about a fundamental object connected to many fields, including data compression, Kolmogorov complexity, pattern identification, and addition chains. Due to the problem’s inherent complexity, our objective is to find an approximation algorithm which finds a small grammar for the input string. We focus attention on the approximation ratio of the algorithm (and implicitly, worst-case behavior) to establish provable performance guarantees and to address shortcomings in the classical measure of redundancy in the literature. Our first results are a variety of hardness results, most notably that every efficient algorithm for the smallest grammar problem has approximation ratio at least 8569/8568 unless P = NP. We then bound approximation ratios for several of the best-known grammar-based compression algorithms, including LZ78, BISECTION, SEQUENTIAL, LONGEST MATCH, GREEDY, and RE-PAIR. Among these, the best upper bound we show is O(n^(1/2)). We finish by presenting two novel algorithms with exponentially better ratios of O(log^3 n) and O(log(n/m*)), where m* is the size of the smallest grammar for that input. The latter highlights a connection between grammar-based compression and LZ77.
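A tiny RE-PAIR-style procedure, one of the grammar-based compressors the paper analyzes, shows what "a grammar generating exactly one string" looks like: repeatedly replace the most frequent adjacent pair of symbols with a fresh nonterminal. This is a minimal sketch of the technique, not the analyzed implementation.

```python
from collections import Counter

def repair(text):
    """RE-PAIR-style grammar construction: replace the most frequent
    adjacent pair with a fresh nonterminal until no pair repeats."""
    seq = list(text)
    rules = {}
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), n = pairs.most_common(1)[0]
        if n < 2:
            break
        nt = ("N", next_id)
        next_id += 1
        rules[nt] = (a, b)
        out, i = [], 0
        while i < len(seq):              # left-to-right, non-overlapping
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand_symbol(sym, rules):
    """Derive the terminal string generated by one grammar symbol."""
    if sym in rules:
        a, b = rules[sym]
        return expand_symbol(a, rules) + expand_symbol(b, rules)
    return sym
```

The start rule is the final sequence `seq`; grammar size is the total length of all right-hand sides, and the paper's question is how far this greedy size can be from the optimum.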