Results 1–10 of 20
Design and analysis of dynamic Huffman codes
 Journal of the ACM
, 1987
Abstract

Cited by 87 (3 self)
Abstract. A new one-pass algorithm for constructing dynamic Huffman codes is introduced and analyzed. We also analyze the one-pass algorithm due to Faller, Gallager, and Knuth. In each algorithm, both the sender and the receiver maintain equivalent dynamically varying Huffman trees, and the coding is done in real time. We show that the number of bits used by the new algorithm to encode a message containing t letters is < t bits more than that used by the conventional two-pass Huffman scheme, independent of the alphabet size. This is best possible in the worst case, for any one-pass Huffman method. Tight upper and lower bounds are derived. Empirical tests show that the encodings produced by the new algorithm are shorter than those of the other one-pass algorithm and, except for long messages, are shorter than those of the two-pass method. The new algorithm is well suited for online encoding/decoding in data networks and for file compression.
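As a hedged illustration of the sender/receiver invariant described above, here is a toy Python sketch in which the coder rebuilds a Huffman tree from the counts of the symbols seen so far before coding each letter. The FGK and Vitter algorithms instead update the tree incrementally in one pass; the function names and the 1-initialised count model below are assumptions of this sketch, not the paper's method.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman code (dict symbol -> bitstring) from a frequency map.
    Assumes at least two distinct symbols; ties broken deterministically."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def adaptive_encode(message):
    """Toy 'dynamic' coder: recompute the Huffman code from the counts seen
    so far before coding each letter. The real one-pass algorithms update
    the tree incrementally, but the sender/receiver invariant is the same."""
    counts = Counter({s: 1 for s in set(message)})  # 1-initialised model (an assumption)
    out = []
    for s in message:
        out.append(huffman_code(counts)[s])
        counts[s] += 1  # the decoder performs the identical update
    return "".join(out)
```

The decoder mirrors the same count update after each decoded symbol, so the two trees never diverge.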
Data Compression
 ACM Computing Surveys
, 1987
Abstract

Cited by 85 (3 self)
This paper surveys a variety of data compression methods spanning almost forty years of research, from the work of Shannon, Fano and Huffman in the late 40's to a technique developed in 1986. The aim of data compression is to reduce redundancy in stored or communicated data, thus increasing effective data density. Data compression has important application in the areas of file storage and distributed systems. Concepts from information theory, as they relate to the goals and evaluation of data compression methods, are discussed briefly. A framework for evaluation and comparison of methods is constructed and applied to the algorithms presented. Comparisons of both theoretical and empirical natures are reported and possibilities for future research are suggested. INTRODUCTION Data compression is often referred to as coding, where coding is a very general term encompassing any special representation of data which satisfies a given need. Information theory is defined to be the study of eff...
Improved Bounds on the Inefficiency of Length-Restricted Prefix Codes
 Departamento de Informática, PUC-RJ, Rio de Janeiro
, 1997
Abstract

Cited by 14 (5 self)
Consider an alphabet Σ = {a_1, …, a_n} with corresponding symbol probabilities p_1, …, p_n. The L-restricted prefix code is a prefix code where all the code lengths are not greater than L. The value L is a given integer such that L ≥ ⌈log n⌉. Define the average code length difference by ε = ∑_{i=1}^n p_i·l′_i − ∑_{i=1}^n p_i·l_i, where l′_1, …, l′_n are the code lengths of the optimal L-restricted prefix code for Σ and l_1, …, l_n are the code lengths of the optimal prefix code for Σ. Let φ be the golden ratio 1.618. In this paper, we show that ε < 1/φ^(L−⌈log(n+⌈log n⌉−L)⌉−1) when L > ⌈log n⌉. We also prove the sharp bound ε < ⌈log n⌉ − 1 when L = ⌈log n⌉. By showing the lower bound 1/φ^(L−⌈log n⌉+2+⌈log(n/(n−L))⌉−1) on the maximum value of ε, we guarantee that our bound is asymptotically tight in the range ⌈log n⌉ < L ≤ n/2. Furthermore, we present an O(n) time and space 1/φ^(L−⌈lo...
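The average code length difference defined above (call it ε) is straightforward to compute once the two sets of code lengths are known. A minimal sketch, assuming both length vectors are supplied by the caller; the function names are invented for illustration, and finding the optimal L-restricted lengths themselves is the hard part (e.g. via the Package-Merge algorithm):

```python
def kraft_sum(lengths):
    """Kraft sum of a set of code lengths; a binary prefix code with these
    lengths exists iff the sum is at most 1."""
    return sum(2.0 ** -l for l in lengths)

def avg_length_difference(probs, restricted_lengths, unrestricted_lengths):
    """epsilon = sum_i p_i * l'_i - sum_i p_i * l_i, following the notation
    of the abstract above (l' = optimal L-restricted code, l = optimal
    prefix code). Both length vectors are assumed given."""
    restricted_avg = sum(p * l for p, l in zip(probs, restricted_lengths))
    unrestricted_avg = sum(p * l for p, l in zip(probs, unrestricted_lengths))
    return restricted_avg - unrestricted_avg
```

For example, with probabilities (1/2, 1/4, 1/8, 1/8) the optimal prefix code has lengths (1, 2, 3, 3); forcing L = 2 gives lengths (2, 2, 2, 2) and ε = 2.0 − 1.75 = 0.25.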
The WARMUP Algorithm: A Lagrangean Construction of Length-Restricted Huffman Codes
 Departamento de Informática, PUC-RJ, Rio de Janeiro
, 1996
Abstract

Cited by 13 (8 self)
Given an alphabet {a_1, …, a_n} with a corresponding set of weights {w_1, …, w_n}, and a number L ≥ ⌈log n⌉, we introduce an O(n log n + n log w) algorithm for constructing a suboptimal prefix code with restricted maximal length L, where w is the highest presented weight. The number of additional bits per symbol generated by our code is not greater than 1/φ^(L−⌈log(n+⌈log n⌉−L)⌉−2) when L > ⌈log n⌉ + 1, where φ is the golden ratio 1.618. An important feature of the proposed algorithm is its implementation simplicity. The algorithm is basically a selected sequence of Huffman tree constructions for modified weights. Keywords: Prefix codes, Huffman Trees, Lagrangean Duality. Abstract (translated from the Portuguese): Given an alphabet {a_1, …, a_n} with corresponding weights {w_1, …, w_n} and a number L ≥ ⌈log n⌉, we present an algorithm of complexity O(n log n + n log w) to construct suboptimal prefix codes with length restriction L, where w is the largest weight of the given...
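A hedged sketch of the "Huffman trees for modified weights" idea mentioned above. The weight-modification rule below (raising small weights to a growing floor until the tree height fits) is a simplification for illustration only; the paper selects the modified weights via Lagrangean duality, and all names here are invented.

```python
import heapq

def huffman_lengths(weights):
    """Code lengths (leaf depths) of a Huffman code for positive weights."""
    heap = [(w, i, [i]) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    depth = [0] * len(weights)
    nxt = len(weights)
    while len(heap) > 1:
        w1, _, s1 = heapq.heappop(heap)
        w2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:          # every leaf under the merged node gets deeper
            depth[i] += 1
        heapq.heappush(heap, (w1 + w2, nxt, s1 + s2))
        nxt += 1
    return depth

def restricted_lengths(weights, L):
    """Raise small weights to a growing floor until the Huffman tree is no
    deeper than L (assumes L >= ceil(log2 n), so equal weights terminate
    the loop). Illustrative rule only, not the paper's selection scheme."""
    floor = 1
    while True:
        lengths = huffman_lengths([max(w, floor) for w in weights])
        if max(lengths) <= L:
            return lengths
        floor *= 2
```

For weights (1, 1, 2, 4, 8, 16) the unrestricted Huffman tree has height 5; with L = 3 the sketch returns lengths that are all at most 3 and still satisfy the Kraft inequality.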
Dynamic Shannon Coding
, 2005
Abstract

Cited by 9 (7 self)
We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
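For context, the static Shannon code assigns symbol i a codeword of length ⌈log2(1/p_i)⌉ taken from the binary expansion of the cumulative probability, with symbols sorted by decreasing probability; the paper maintains such a code dynamically. A sketch of the static construction only (names are illustrative):

```python
from math import ceil, log2

def shannon_code(probs):
    """Static Shannon code: symbol i gets length max(1, ceil(log2(1/p_i)))
    and its codeword is the first l_i bits of the binary expansion of the
    cumulative probability F_i, symbols taken in decreasing-probability
    order. This is the classical static construction, not the paper's
    dynamic algorithm."""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    code = {}
    cumulative = 0.0
    for i in order:
        length = max(1, ceil(log2(1.0 / probs[i])))
        bits, x = [], cumulative
        for _ in range(length):       # peel off bits of the binary expansion
            x *= 2
            bits.append("1" if x >= 1 else "0")
            x -= int(x)
        code[i] = "".join(bits)
        cumulative += probs[i]
    return code
```

For the dyadic distribution (1/2, 1/4, 1/8, 1/8) this yields the codewords 0, 10, 110, 111, which here coincide with a Huffman code.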
A general framework for codes involving redundancy minimization
 IEEE Transactions on Information Theory
, 2006
Abstract

Cited by 9 (6 self)
Abstract — A framework with two scalar parameters is introduced for various problems of finding a prefix code minimizing a coding penalty function. The framework involves a two-parameter class encompassing problems previously proposed by Huffman [1], Campbell [2], Nath [3], and Drmota and Szpankowski [4]. It sheds light on the relationships among these problems. In particular, Nath's problem can be seen as bridging that of Huffman with that of Drmota and Szpankowski. This leads to a linear-time algorithm for the last of these with a solution that solves a range of Nath subproblems. We find simple bounds and linear-time Huffman-like optimization algorithms for all nontrivial problems within the class.
A Relational Approach To Optimization Problems
, 1996
Abstract

Cited by 6 (0 self)
The main contribution of this thesis is a study of the dynamic programming and greedy strategies for solving combinatorial optimization problems. The study is carried out in the context of a calculus of relations, and generalises previous work by using a loop operator in the imperative programming style for generating feasible solutions, rather than the fold and unfold operators of the functional programming style. The relationship between fold operators and loop operators is explored, and it is shown how to convert from the former to the latter. This fresh approach provides additional insights into the relationship between dynamic programming and greedy algorithms, and helps to unify previously distinct approaches to solving combinatorial optimization problems. Some of the solutions discovered are new and solve problems which had previously proved difficult. The material is illustrated with a selection of problems and solutions that is a mixture of old and new. Another contribution is the invention of a new calculus, called the graph calculus, which is a useful tool for reasoning in the relational calculus and other non-relational calculi. The graph
Efficient Implementation of the WARMUP Algorithm for the Construction of Length-Restricted Prefix Codes
 in Proceedings of the ALENEX
, 1999
Abstract

Cited by 5 (0 self)
Given an alphabet Σ = {a_1, …, a_n} with a corresponding list of positive weights {w_1, …, w_n} and a length restriction L, the length-restricted prefix code problem is to find a prefix code that minimizes ∑_{i=1}^n w_i·l_i, where l_i, the length of the codeword assigned to a_i, cannot be greater than L, for i = 1, …, n. In this paper, we present an efficient implementation of the WARMUP algorithm, an approximative method for this problem. The worst-case time complexity of WARMUP is O(n log n + n log w_n), where w_n is the greatest weight. However, some experiments with a previous implementation of WARMUP show that it runs in linear time for several practical cases, if the input weights are already sorted. In addition, it often produces optimal codes. The proposed implementation combines two new enhancements to reduce the space usage of WARMUP and to improve its execution time. As a result, it is about ten times faster than the previous implementat...
Practical Use of the WARMUP Algorithm on Length-Restricted Coding
 the Proceedings of the Fourth Latin American Workshop on String Processing
, 1997
Abstract

Cited by 4 (4 self)
In this paper we present an efficient implementation of the WARMUP algorithm for the construction of length-restricted prefix codes. This algorithm has O(n log n + n log w_n) worst-case time complexity, where n is the number of symbols of the source alphabet and w_n is the largest weight of the alphabet. An important feature of the proposed algorithm is its implementation simplicity. The algorithm is basically a selected sequence of Huffman tree constructions for modified weights. The proposed implementation has the same time complexity, but requires only additional O(1) space. We also report some empirical experiments showing that this algorithm provides good compression and speed performance.
1 Introduction
An important problem in the field of Coding and Information Theory is the Binary Prefix Code Problem. Given an alphabet Σ = {a_1, …, a_n} and a corresponding set of positive weights {w_1, …, w_n}, the problem is to find a prefix code for Σ that mi...