Results 1–10 of 13
Near-optimal routing lookups with bounded worst-case performance
In IEEE INFOCOM '00, 2000
Abstract

Cited by 29 (0 self)
The problem of route address lookup has received much attention recently and several algorithms and data structures for performing address lookups at high speeds have been proposed. In this paper we consider one such data structure: a binary search tree built on the intervals created by the routing table prefixes. We wish to exploit the difference in the probabilities with which the various leaves of the tree (where the intervals are stored) are accessed by incoming packets in order to speed up the lookup process. More precisely, we seek an answer to the question "How can the search tree be drawn so as to minimize the average packet lookup time while keeping the worst-case lookup time within a fixed bound?" We use ideas from information theory to derive efficient algorithms for computing near-optimal routing lookup trees. Finally, we consider the practicality of our algorithms through analysis and simulation.
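As a side note, the interval-based lookup idea in the abstract above can be sketched in a few lines; the 8-bit address width, tuple-based prefix format, and linear scan for the longest match are illustrative assumptions, not the paper's construction.

```python
# Sketch: turn routing-table prefixes into disjoint intervals and answer
# lookups by binary search over the interval endpoints (assumed details).
import bisect

def prefix_to_range(prefix, plen, width=8):
    """[lo, hi] address range covered by prefix/plen in a width-bit space."""
    lo = prefix << (width - plen)
    hi = lo | ((1 << (width - plen)) - 1)
    return lo, hi

def build_intervals(prefixes, width=8):
    """Cut the address space at every prefix boundary; each resulting
    interval is answered by exactly one (longest matching) prefix."""
    points = {0, 1 << width}
    for pf, pl in prefixes:
        lo, hi = prefix_to_range(pf, pl, width)
        points.update((lo, hi + 1))
    cuts = sorted(points)
    intervals = []
    for lo, hi in zip(cuts, cuts[1:]):
        # longest prefix covering this whole interval (linear scan for clarity)
        best = max((pl for pf, pl in prefixes
                    if prefix_to_range(pf, pl, width)[0] <= lo
                    and hi - 1 <= prefix_to_range(pf, pl, width)[1]),
                   default=None)
        intervals.append((lo, hi - 1, best))
    return intervals

def lookup(intervals, addr):
    """Binary search for the interval containing addr; returns the length
    of the matched prefix, or None if no prefix covers addr."""
    i = bisect.bisect_right([lo for lo, _, _ in intervals], addr) - 1
    return intervals[i][2]
```

Weighting the leaves by access probability and reshaping the search tree under a depth bound is then exactly the optimization the paper studies.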
Improved Bounds on the Inefficiency of Length-Restricted Prefix Codes
Departamento de Informática, PUC-RJ, Rio de Janeiro, 1997
Abstract

Cited by 14 (5 self)
Consider an alphabet $\Sigma = \{a_1, \ldots, a_n\}$ with corresponding symbol probabilities $p_1, \ldots, p_n$. An $L$-restricted prefix code is a prefix code in which all the code lengths are not greater than $L$. The value $L$ is a given integer such that $L \geq \lceil \log n \rceil$. Define the average code length difference by $\epsilon = \sum_{i=1}^{n} p_i l'_i - \sum_{i=1}^{n} p_i l_i$, where $l'_1, \ldots, l'_n$ are the code lengths of the optimal $L$-restricted prefix code for $\Sigma$ and $l_1, \ldots, l_n$ are the code lengths of the optimal prefix code for $\Sigma$. Let $\phi$ be the golden ratio $1.618\ldots$ In this paper, we show that $\epsilon < 1/\phi^{L - \lceil \log(n + \lceil \log n \rceil - L) \rceil - 1}$ when $L > \lceil \log n \rceil$. We also prove the sharp bound $\epsilon < \lceil \log n \rceil - 1$ when $L = \lceil \log n \rceil$. By showing the lower bound $1/\phi^{L - \lceil \log n \rceil + 2 + \lceil \log \frac{n}{n-L} \rceil - 1}$ on the maximum value of $\epsilon$, we guarantee that our bound is asymptotically tight in the range $\lceil \log n \rceil < L \leq n/2$. Furthermore, we present an $O(n)$ time and space $1/\phi^{L-\ldots}$ ...
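For intuition, the upper bound above is easy to evaluate numerically; the helper name and sample parameters below are illustrative.

```python
# Sketch: evaluate the stated upper bound on the inefficiency eps of an
# optimal L-restricted prefix code, valid for L > ceil(lg n).
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio, about 1.618

def eps_upper_bound(n, L):
    """Bound: eps < 1 / PHI ** (L - ceil(lg(n + ceil(lg n) - L)) - 1)."""
    lg_n = math.ceil(math.log2(n))
    assert L > lg_n, "the bound is stated for L > ceil(lg n)"
    exponent = L - math.ceil(math.log2(n + lg_n - L)) - 1
    return 1 / PHI ** exponent
```

With n = 1024 and L = 20 the bound is already below 0.02, and each extra allowed bit shrinks it by roughly a factor of $\phi$.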
Efficient Implementation of the WARMUP Algorithm for the Construction of Length-Restricted Prefix Codes
In Proceedings of the ALENEX, 1999
Abstract

Cited by 5 (0 self)
Given an alphabet $\Sigma = \{a_1, \ldots, a_n\}$ with a corresponding list of positive weights $\{w_1, \ldots, w_n\}$ and a length restriction $L$, the length-restricted prefix code problem is to find a prefix code that minimizes $\sum_{i=1}^{n} w_i l_i$, where $l_i$, the length of the codeword assigned to $a_i$, cannot be greater than $L$, for $i = 1, \ldots, n$. In this paper, we present an efficient implementation of the WARMUP algorithm, an approximative method for this problem. The worst-case time complexity of WARMUP is $O(n \log n + n \log w_n)$, where $w_n$ is the greatest weight. However, experiments with a previous implementation of WARMUP show that it runs in linear time in several practical cases, if the input weights are already sorted. In addition, it often produces optimal codes. The proposed implementation combines two new enhancements to reduce the space usage of WARMUP and to improve its execution time. As a result, it is about ten times faster than the previous implementation...
Practical Use of the WARMUP Algorithm on Length-Restricted Coding
In the Proceedings of the Fourth Latin American Workshop on String Processing, 1997
Abstract

Cited by 4 (4 self)
In this paper we present an efficient implementation of the WARMUP algorithm for the construction of length-restricted prefix codes. This algorithm has $O(n \log n + n \log w_n)$ worst-case time complexity, where $n$ is the number of symbols of the source alphabet and $w_n$ is the largest weight of the alphabet. An important feature of the proposed algorithm is its implementation simplicity: it is basically a selected sequence of Huffman tree constructions for modified weights. The proposed implementation has the same time complexity, but requires only $O(1)$ additional space. We also report some empirical experiments showing that this algorithm provides good compression and speed performance. 1 Introduction An important problem in the field of Coding and Information Theory is the Binary Prefix Code Problem. Given an alphabet $\Sigma = \{a_1, \ldots, a_n\}$ and a corresponding set of positive weights $\{w_1, \ldots, w_n\}$, the problem is to find a prefix code for $\Sigma$ that minimizes ...
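The "selected sequence of Huffman tree constructions for modified weights" can be illustrated with a rough sketch; the doubling threshold schedule below is an assumption for illustration, not the actual WARMUP weight-modification rule.

```python
# Sketch: build Huffman codes over "warmed-up" weights (small weights
# raised to a threshold) until the longest codeword fits the bound L.
import heapq

def huffman_lengths(weights):
    """Codeword lengths of a Huffman code for a list of positive weights."""
    if len(weights) == 1:
        return [1]
    heap = [(w, [i]) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    depth = [0] * len(weights)
    while len(heap) > 1:
        w1, s1 = heapq.heappop(heap)
        w2, s2 = heapq.heappop(heap)
        for i in s1 + s2:  # every symbol under the merged node sinks one level
            depth[i] += 1
        heapq.heappush(heap, (w1 + w2, s1 + s2))
    return depth

def warmup_lengths(weights, L):
    """Approximate L-restricted code; terminates whenever L >= ceil(lg n),
    since fully equalized weights yield a balanced tree of that depth."""
    t = 1
    while True:
        lengths = huffman_lengths([max(w, t) for w in weights])
        if max(lengths) <= L:
            return lengths
        t *= 2  # warm up the small weights further and rebuild
```

Each iteration is one plain Huffman construction, which matches the simplicity the abstract emphasizes.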
Dynamic Length-Restricted Coding
2003
Abstract

Cited by 3 (2 self)
Suppose that $S$ is a string of length $m$ drawn from an alphabet of $n$ characters, $d$ of which occur in $S$. Let $P$ be the relative frequency distribution of characters in $S$. We present a new algorithm for dynamic coding that uses at most $\lceil \lg n \rceil + 1$ bits to encode each character in $S$.
In-place Length-Restricted Prefix Coding
In String Processing and Information Retrieval, 1998
Abstract

Cited by 2 (1 self)
Huffman codes, combined with word-based models, are considered efficient compression schemes for full-text retrieval systems. The decoding rate for these schemes can be substantially improved if the maximum length of the codewords is not greater than the machine word size L. However, if the vocabulary is large, simple methods for generating optimal length-restricted codes are either too slow or require a significant amount of memory. In this paper we present an in-place, simple and fast implementation of the BRCI (initials of Build, Remove, Condense and Insert) algorithm, an approximative method for length-restricted coding. It overwrites a sorted input list of n weights with the corresponding codeword lengths in O(n) time. In addition, the worst-case compression loss introduced by BRCI codes with respect to unrestricted Huffman codes is proved to be negligible for all practical values of both L and n. 1 Introduction Zobel and Moffat [18] have proposed an innovative compression...
A Space-Economical Algorithm for Minimum-Redundancy Coding
Departamento de Informática, PUC-RJ, Rio de Janeiro, 1998
Abstract

Cited by 2 (2 self)
The minimum-redundancy prefix-free code problem is to determine, for a given set $W = \{w_1, \ldots, w_n\}$ of $n$ positive weights, a set $\{l_1, \ldots, l_n\}$ of $n$ integer codeword lengths such that $\sum_{i=1}^{n} 2^{-l_i} \leq 1$ and $\sum_{i=1}^{n} w_i l_i$ is minimized. In this technical report, we consider the case where $W$ is already sorted and must remain unchanged after the codeword lengths are calculated. The algorithm proposed in this technical report solves this problem in $O(n)$ time using just $O(\min\{l_1^2, n\})$ additional space, where $l_1$ is the greatest codeword length. Keywords: Prefix Codes, Huffman Trees, Data Compression
Two SpaceEconomical Algorithms for Calculating Minimum Redundancy Prefix Codes (Extended Abstract)
In Proceedings of the DCC, 1999
Abstract

Cited by 1 (1 self)
The minimum redundancy prefix code problem is to determine, for a given list $W = \{w_1, \ldots, w_n\}$ of $n$ positive symbol weights, a list $L = \{\ell_1, \ldots, \ell_n\}$ of $n$ corresponding integer codeword lengths such that $\sum_{i=1}^{n} 2^{-\ell_i} \leq 1$ and $\sum_{i=1}^{n} w_i \ell_i$ is minimized. Let us consider the case where $W$ is already sorted. In this case, the output list $L$ can be represented by a list $M = \{m_1, \ldots, m_H\}$, where $m_\ell$, for $\ell = 1, \ldots, H$, denotes the multiplicity of the codeword length $\ell$ in $L$ and $H$ is the length of the greatest codeword. Fortunately, $H$ is proved to be $O(\min\{\log(1/p_1), n\})$, where $p_1$ is the smallest symbol probability, given by $w_1 / \sum_{i=1}^{n} w_i$. In this paper, we present the FLazyHuff and the ELazyHuff algorithms. FLazyHuff runs in $O(n)$ time but requires $O(\min\{H^2, n\})$ additional space. On the other hand, ELazyHuff runs in $O(n \log(n/H))$ time, requiring only $O(H)$ additional space. Finally, since our two algorithms have the advantage of not writing to the input buffer during the code calculation, we discuss some applications where this feature is very useful.
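The multiplicity representation $M$ described above is compact and easy to work with; the helpers below are hypothetical illustrations, not the paper's FLazyHuff/ELazyHuff algorithms.

```python
# Sketch: convert codeword lengths to the multiplicity list M = (m_1..m_H)
# and check the Kraft inequality directly on M.
from collections import Counter

def to_multiplicities(lengths):
    """m_l = number of codewords of length l, for l = 1..H."""
    H = max(lengths)
    counts = Counter(lengths)
    return [counts[l] for l in range(1, H + 1)]

def kraft_sum(multiplicities):
    """Sum of m_l * 2^-l over l = 1..H; at most 1 for any prefix code."""
    return sum(m * 2.0 ** -(l + 1) for l, m in enumerate(multiplicities))
```

Since $H$ is typically far smaller than $n$, storing $M$ instead of all $n$ lengths is what enables the $O(H)$ extra space of ELazyHuff.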
Practical Constructions of L-restricted Alphabetic Prefix Codes
In String Processing and Information Retrieval, 1999
Abstract

Cited by 1 (0 self)
In this paper, we presented a simple technique to generate L-restricted alphabetic prefix codes. Furthermore, we proved that the inefficiency of L-restricted alphabetic prefix codes with respect to Huffman codes is bounded above by $1 + 1/\phi$ ...
Improved Analysis of FGK Algorithm
1997
Abstract
An important issue related to coding schemes is their compression loss. A simple measure $\epsilon$ of the compression loss due to a coding scheme different from Huffman coding is defined by $\epsilon = A_C - A_H$, where $A_H$ is the average code length of a static Huffman encoding and $A_C$ is the average code length of an encoding based on the compression scheme $C$. When the scheme $C$ is the FGK algorithm, Vitter [12] conjectured that $\epsilon \leq K$ for some real constant $K$. Here, we use an amortized analysis to prove this conjecture. We show that $\epsilon < 2$. Furthermore, we show through an example that our bound is asymptotically tight. This result explains the good performance of FGK that many authors have observed in practical experiments. Keywords: Dynamic Huffman Codes, Amortized Analysis, Data Compression