Results 1-4 of 4
Data Compression
ACM Computing Surveys, 1987
Abstract

Cited by 87 (3 self)
This paper surveys a variety of data compression methods spanning almost forty years of research, from the work of Shannon, Fano, and Huffman in the late 1940s to a technique developed in 1986. The aim of data compression is to reduce redundancy in stored or communicated data, thus increasing effective data density. Data compression has important applications in the areas of file storage and distributed systems. Concepts from information theory, as they relate to the goals and evaluation of data compression methods, are discussed briefly. A framework for the evaluation and comparison of methods is constructed and applied to the algorithms presented. Both theoretical and empirical comparisons are reported, and possibilities for future research are suggested.
INTRODUCTION
Data compression is often referred to as coding, where coding is a very general term encompassing any special representation of data which satisfies a given need. Information theory is defined to be the study of eff...
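The Huffman method named in the abstract can be sketched in a few lines by repeatedly merging the two lowest-weight subtrees. This is a minimal illustration (the function name and sample text are ours, not from the survey):

```python
# Minimal Huffman code construction: repeatedly merge the two
# lowest-weight subtrees, prefixing "0"/"1" to the codes on each side.
import heapq
from collections import Counter

def build_huffman(text):
    """Return a {symbol: bitstring} prefix code from symbol frequencies."""
    freq = Counter(text)
    # Heap entries: (weight, tiebreak, {symbol: code-so-far});
    # the tiebreak keeps the (uncomparable) dicts out of comparisons.
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate one-symbol alphabet
        return {s: "0" for s in heap[0][2]}
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = build_huffman("abracadabra")
encoded = "".join(codes[s] for s in "abracadabra")
```

Whatever the tie-breaking, the total encoded length equals the sum of the merged weights; for "abracadabra" that is 2 + 4 + 6 + 11 = 23 bits, versus 33 bits at a fixed 3 bits per symbol.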
Efficient Decoding of Prefix Codes
Communications of the ACM, 1990
Abstract

Cited by 31 (0 self)
We discuss representations of prefix codes and the corresponding storage space and decoding time requirements. We assume that a dictionary of words to be encoded has been defined and that a prefix code appropriate to the dictionary has been constructed. The encoding operation is simple given these assumptions and an appropriate parsing strategy; we therefore concentrate on decoding. The application which led us to this work constrains the use of internal memory during the decode operation. As a result, we seek a method of decoding which has a small memory requirement.
Introduction
Data compression is an important and much-studied problem. Compressing data to be stored or transmitted can result in significant improvements in the use of computing resources. The degree of improvement that can be achieved depends not only on the selection of a data compression method, but also on the characteristics of the particular application. That is, no single data compression algorithm wi...
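The simplest prefix-code representation the abstract alludes to is a binary code tree walked one bit at a time; its only memory requirement is the tree itself. A minimal sketch, with an illustrative three-symbol code rather than the paper's dictionary:

```python
# Bit-by-bit prefix-code decoding over a nested-dict code tree.
def make_tree(codes):
    """Build a binary tree (nested dicts) from a {symbol: bitstring} code."""
    root = {}
    for sym, bits in codes.items():
        node = root
        for b in bits[:-1]:
            node = node.setdefault(b, {})
        node[bits[-1]] = sym  # leaf holds the decoded symbol
    return root

def decode(bitstream, root):
    """Follow one edge per input bit; emit a symbol whenever a leaf is hit."""
    out, node = [], root
    for b in bitstream:
        node = node[b]
        if not isinstance(node, dict):  # leaf reached
            out.append(node)
            node = root
    return "".join(out)

codes = {"a": "0", "b": "10", "c": "11"}  # an example prefix code
tree = make_tree(codes)
decoded = decode("01011", tree)  # "abc"
```

Because a prefix code puts every codeword at a leaf, the decoder never needs lookahead; the paper's contribution concerns more compact representations of this tree, which the sketch above does not attempt.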
Low-Complexity Algorithms in Digital Receivers
University of Technology, 1996
Abstract

Cited by 4 (0 self)
This thesis addresses low-complexity algorithms in digital receivers, including algorithms for estimation, detection, and source coding.
Sublinear Decoding of Huffman Codes Almost In-Place
1998
Abstract

Cited by 1 (0 self)
We present a succinct data structure storing the Huffman encoding that permits decoding sublinear in the number of transmitted bits. The size of the extra storage for the new data structure, excluding the storage of the symbols in the alphabet, is O(l log N) bits, where l is the length of the longest Huffman code and N is the number of symbols in the alphabet. We present a solution that typically decodes texts of sizes ranging from a few hundred up to 68 000 symbols with only one third to one fifth of the number of memory accesses of regular Huffman implementations. In our solution, the overhead structure, to which all but one memory access is made, is never more than 342 bytes. With very high probability this structure resides in cache, which means that the actual decoding time compares even better.
1 Introduction
If you have an alphabet of N symbols that you would like to encode, the typical solution would be to use ⌈log N⌉ bits to encode N different numbers, each number corresponding to a symbol. This ...
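A concrete way to see how a Huffman decoder's overhead can stay within O(l log N) bits is canonical Huffman decoding, which needs only one first-code and one first-index entry per code length, i.e. O(l) entries of O(log N) bits each. This sketch is ours, in the spirit of the abstract, and is not the paper's actual data structure:

```python
# Canonical-Huffman decoding from per-length tables only.
# Codes of each length L are consecutive integers starting at first_code[L];
# the symbols, sorted by (length, symbol), are indexed via first_index[L].
def canonical_decode(bits, lengths):
    """Decode a bitstring given {symbol: code length} for a canonical code."""
    max_len = max(lengths.values())
    count = [0] * (max_len + 1)        # codes per length
    for L in lengths.values():
        count[L] += 1
    symbols = sorted(lengths, key=lambda s: (lengths[s], s))
    first_code = [0] * (max_len + 1)
    first_index = [0] * (max_len + 1)
    code = index = 0
    for L in range(1, max_len + 1):
        first_code[L], first_index[L] = code, index
        code = (code + count[L]) << 1  # standard canonical-code recurrence
        index += count[L]
    out, code, L = [], 0, 0
    for b in bits:                     # accumulate bits until a valid code
        code = (code << 1) | int(b)
        L += 1
        if count[L] and first_code[L] <= code < first_code[L] + count[L]:
            out.append(symbols[first_index[L] + code - first_code[L]])
            code, L = 0, 0
    return "".join(out)

# Code lengths {a:1, b:2, c:2} give the canonical code a=0, b=10, c=11.
text = canonical_decode("01011", {"a": 1, "b": 2, "c": 2})  # "abc"
```

Note that the tables depend only on the length distribution, not on the tree shape, which is what keeps them small; the paper's structure additionally targets sublinear decoding, which this per-bit loop does not attempt.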