## Worst-Case Optimal Adaptive Prefix Coding (2009)

Venue: Proceedings of the Algorithms and Data Structures Symposium (WADS), 2009

Citations: 4 (4 self)

### BibTeX

```bibtex
@inproceedings{Gagie09worst-caseoptimal,
  author    = {Travis Gagie and Yakov Nekrich},
  title     = {Worst-Case Optimal Adaptive Prefix Coding},
  booktitle = {Proceedings of the Algorithms and Data Structures Symposium (WADS)},
  year      = {2009},
  pages     = {315--326}
}
```

### Abstract

A common complaint about adaptive prefix coding is that it is much slower than static prefix coding. Karpinski and Nekrich recently took an important step towards resolving this: they gave an adaptive Shannon coding algorithm that encodes each character in O(1) amortized time and decodes it in O(log H) amortized time, where H is the empirical entropy of the input string s. For comparison, Gagie’s adaptive Shannon coder and both Knuth’s and Vitter’s adaptive Huffman coders all use Θ(H) amortized time for each character. In this paper we give an adaptive Shannon coder that both encodes and decodes each character in O(1) worst-case time. As with both previous adaptive Shannon coders, we store s in at most (H + 1)|s| + o(|s|) bits. We also show that this encoding length is worst-case optimal up to the lower-order term.
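To make the mechanism concrete, here is a minimal Python sketch of plain adaptive Shannon coding, the baseline the paper improves on (not the authors' O(1)-time algorithm; all function names are illustrative). Before each character, both sides build a canonical code whose codeword lengths are ⌈log₂(total/count)⌉ over the prefix seen so far, so encoder and decoder stay synchronized. Rebuilding the code at every step costs O(n log n) per character; the paper's contribution is doing the equivalent work in O(1) worst-case time.

```python
import math
from collections import Counter

def shannon_lengths(counts, total):
    # Shannon coding: symbol c gets a codeword of length ceil(log2(total/count_c));
    # these lengths satisfy Kraft's inequality, so a prefix code exists.
    return {c: max(1, math.ceil(math.log2(total / k))) for c, k in counts.items()}

def canonical_code(lengths):
    # Canonical assignment: sort by (length, symbol); each code is the
    # previous code + 1, shifted left whenever the length grows.
    code, val, prev = {}, 0, 0
    for sym in sorted(lengths, key=lambda s: (lengths[s], s)):
        val <<= lengths[sym] - prev
        code[sym] = format(val, '0%db' % lengths[sym])
        val, prev = val + 1, lengths[sym]
    return code

def adaptive_shannon_encode(s, alphabet):
    counts = Counter({a: 1 for a in alphabet})  # pseudo-count 1 keeps every symbol codable
    out = []
    for ch in s:
        out.append(canonical_code(shannon_lengths(counts, sum(counts.values())))[ch])
        counts[ch] += 1                         # update after coding, as the decoder will
    return ''.join(out)

def adaptive_shannon_decode(bits, n, alphabet):
    counts = Counter({a: 1 for a in alphabet})
    out, pos = [], 0
    for _ in range(n):
        decode = {w: c for c, w in
                  canonical_code(shannon_lengths(counts, sum(counts.values()))).items()}
        j = pos + 1
        while bits[pos:j] not in decode:        # extend until a codeword matches
            j += 1
        out.append(decode[bits[pos:j]])
        counts[out[-1]] += 1
        pos = j
    return ''.join(out)
```

Because the decoder performs exactly the same count updates as the encoder, the round trip is lossless: `adaptive_shannon_decode(adaptive_shannon_encode(s, A), len(s), A) == s`.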

### Citations

6494 citations
The mathematical theory of communication
- Shannon
- 1948

Citation Context: …string s of length m over an alphabet of size n. For static prefix coding, we are allowed to make two passes over s but, after the first pass, we must build a single prefix code, such as a Shannon code [16] or Huffman code [9], and use it to encode every character. Since a Huffman code minimizes the expected codeword length, static Huffman coding is optimal (ignoring the asymptotically negligible O(n log n)…

995 citations
A method for the construction of minimum-redundancy codes
- Huffman
- 1952

Citation Context: …over an alphabet of size n. For static prefix coding, we are allowed to make two passes over s but, after the first pass, we must build a single prefix code, such as a Shannon code [16] or Huffman code [9], and use it to encode every character. Since a Huffman code minimizes the expected codeword length, static Huffman coding is optimal (ignoring the asymptotically negligible O(n log n) bits needed to…
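As a reference point for the static case the context describes, here is a compact Python construction of a static Huffman code (a standard textbook sketch, not code from the paper): repeatedly merge the two least-frequent subtrees, prepending a bit to every codeword on each side of the merge.

```python
import heapq

def huffman_code(freqs):
    # One leaf "subtree" per symbol; each dict maps symbol -> partial codeword.
    # The index i is a tie-breaker so the heap never compares dicts.
    heap = [(f, i, {sym: ''}) for i, (sym, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate one-symbol alphabet
        return {sym: '0' for sym in freqs}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # two least-frequent subtrees
        f2, i2, c2 = heapq.heappop(heap)
        merged = {s: '0' + w for s, w in c1.items()}
        merged.update({s: '1' + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, i2, merged))
    return heap[0][2]
```

For example, with frequencies {a: 5, b: 2, c: 1, d: 1} the resulting code has lengths 1, 2, 3, 3, which is the minimum expected codeword length for this distribution.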

147 citations
Surpassing the information theoretic bound with fusion trees
- Fredman, Willard
- 1993

Citation Context: …search tree we use as D is optimal instead of balanced then, by Jensen’s Inequality, we decode each character in O(log H) amortized time. Even better, if we use a data structure by Fredman and Willard [4], then we can decode each character in O(1) worst-case time. Lemma 1 (Fredman and Willard, 1993). Given O(log^(1/6) m) keys, in O(log^(2/3) m) worst-case time we can build a data structure that stores tho…

121 citations
Variations on a theme by Huffman
- Gallager
- 1978

Citation Context: …Discrete Structures, with Applications to Bioinformatics”. …length of the encoding produced, taking advantage of a property of Huffman codes discovered by Faller [3] and Gallager [7]. Shortly thereafter, Vitter [18] gave another adaptive Huffman coder that also uses time proportional to the encoding’s length; he proved his coder stores s in fewer than m more bits than static Huffman coding…

100 citations
Dynamic Huffman coding
- Knuth
- 1985

Citation Context: …deterministically from the prefix of s already encoded, we can later decode s symmetrically. The most intuitive solution is to encode each character using a Huffman code for the prefix already encoded; Knuth [11] showed how to do this in time proportional to the… (Footnote: This paper was written while the second author was at the University of Eastern Piedmont, Italy, supported by Italy-Israel FIRB Project “Pattern Di…)

91 citations
Design and analysis of dynamic Huffman codes
- Vitter
- 1987

Citation Context: …Applications to Bioinformatics”. …length of the encoding produced, taking advantage of a property of Huffman codes discovered by Faller [3] and Gallager [7]. Shortly thereafter, Vitter [18] gave another adaptive Huffman coder that also uses time proportional to the encoding’s length; he proved his coder stores s in fewer than m more bits than static Huffman coding, and that this is optimal…

47 citations
Variable-length binary encodings
- Gilbert, Moore
- 1959

Citation Context: …every comparison must have the character we read most recently as one of its two arguments. Gagie [6] noted that, by replacing Shannon’s construction by a modified construction due to Gilbert and Moore [8], his coder can be used to sort s using (H + 2)m + o(m) comparisons and O(log n) worst-case time for each comparison. We can use our results to speed up Gagie’s sorter when n = o(√(m / log m)). Whereas Sha…

45 citations
On the implementation of minimum redundancy prefix codes
- Moffat, Turpin
- 1997

Citation Context: …We can also decode an arbitrary prefix code in O(1) time using a look-up table, but the space usage and initialization time for such a table can be prohibitively high, up to O(m). Moffat and Turpin [14] described a practical algorithm for decoding prefix codes in O(1) time; their algorithm works for a special class of prefix codes, the canonical codes introduced by Schwartz and Kallick [15]. While all adaptive coding methods described above…
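To give the flavor of why canonical codes decode quickly (a simplified sketch, not Moffat and Turpin's actual algorithm): in a canonical code the codewords of each length form one contiguous range of integers, so a few small per-length tables (first code value, first symbol index, count) replace a full look-up table.

```python
from collections import Counter

def canonical_decode(bits, lengths):
    # lengths: symbol -> codeword length of a canonical code
    # (codewords assigned in (length, symbol) order, each = previous + 1,
    #  shifted left whenever the length increases).
    syms = sorted(lengths, key=lambda s: (lengths[s], s))
    count = Counter(lengths.values())
    first_code, first_idx = {}, {}       # per-length tables: O(max length) space
    code, idx, prev = 0, 0, 0
    for l in sorted(count):
        code <<= l - prev
        first_code[l], first_idx[l] = code, idx
        code += count[l]
        idx += count[l]
        prev = l
    out, v, l, pos = [], 0, 0, 0
    while pos < len(bits):
        # Accumulate the next bit; v is the integer value of the l bits read.
        v, l, pos = (v << 1) | int(bits[pos]), l + 1, pos + 1
        # v is a codeword iff it falls in the contiguous range for length l.
        if l in count and 0 <= v - first_code[l] < count[l]:
            out.append(syms[first_idx[l] + (v - first_code[l])])
            v, l = 0, 0
    return ''.join(out)
```

For the canonical code with lengths a:1, b:2, c:3, d:3 (codewords 0, 10, 110, 111), the bit string `010110` decodes to `abc`; each emitted symbol costs one range test per bit read rather than a table of size proportional to the input.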

42 citations
An Adaptive System for Data Compression
- Faller
- 1973

Citation Context: …Algorithms in Discrete Structures, with Applications to Bioinformatics”. …length of the encoding produced, taking advantage of a property of Huffman codes discovered by Faller [3] and Gallager [7]. Shortly thereafter, Vitter [18] gave another adaptive Huffman coder that also uses time proportional to the encoding’s length; he proved his coder stores s in fewer than m more bits…

37 citations
Generating a Canonical Prefix Encoding
- Schwartz, Kallick
- 1964

Citation Context: …Moffat and Turpin [14] described a practical algorithm for decoding prefix codes in O(1) time; their algorithm works for a special class of prefix codes, the canonical codes introduced by Schwartz and Kallick [15]. While all adaptive coding methods described above maintain the optimal Huffman code, Gagie [6] described an adaptive prefix coder that is based on sub-optimal Shannon coding; his method also needs O…

27 citations
Dynamic ordered sets with exponential search trees
- Andersson, Thorup
- 2007

Citation Context: …to use this approach to obtain constant worst-case time per symbol. In this paper a different approach is used. Symbols s[i+1], s[i+2], …, s[i+d] are encoded with a Shannon code for the prefix s[1]s[2]…s[i−d] of the input string. Recall that in a traditional adaptive code the symbol s[i+1] is encoded with a code for s[1]…s[i]. While symbols s[i+1]…s[i+d] are encoded, we build…
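The lag described in this context can be simulated with a toy scheme (illustrative only; all names are hypothetical, and the real algorithm uses a Shannon code and builds the next one incrementally in the background): give shorter codewords from a fixed prefix-free codebook to symbols that were more frequent in the prefix ending d characters back. Because the decoder can reconstruct that lagged prefix on its own, both sides always agree on the current code.

```python
from collections import Counter

CODEBOOK = ['0', '10', '110', '1110', '1111']   # fixed prefix-free set, up to 5 symbols

def ranked_code(counts, alphabet):
    # Symbols more frequent in the lagged prefix get shorter codewords.
    order = sorted(alphabet, key=lambda a: (-counts[a], a))
    return {sym: CODEBOOK[r] for r, sym in enumerate(order)}

def encode_lagged(s, alphabet, d):
    counts = Counter()                  # counts of s[0 .. i-d-1] only
    out = []
    for i, ch in enumerate(s):
        out.append(ranked_code(counts, alphabet)[ch])
        if i >= d:
            counts[s[i - d]] += 1       # fold in s[i-d], d steps late
    return ''.join(out)

def decode_lagged(bits, n, alphabet, d):
    counts = Counter()
    out, pos = [], 0
    for i in range(n):
        decode = {w: sym for sym, w in ranked_code(counts, alphabet).items()}
        j = pos + 1
        while bits[pos:j] not in decode:
            j += 1
        out.append(decode[bits[pos:j]])
        pos = j
        if i >= d:
            counts[out[i - d]] += 1     # decoder already knows s[i-d] by now
    return ''.join(out)
```

The point of the delay is exactly what the context exploits: the code used for position i never depends on the d most recent characters, so the expensive work of building the next code can be spread over those d steps.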

24 citations
Channels which transmit letters of unequal duration
- Krause

Citation Context: …encodes and decodes each symbol of a string s in O(1) and O(min(√(log n), log log m)) time respectively. If n = o(√(m/log m)), the encoding length is (H + 2)m + o(m). Coding with unequal letter costs. Krause [12] showed how to modify Shannon’s construction for the case in which code letters have different costs, e.g., the different durations of dots and dashes in Morse code. Consider a binary channel and supp…

15 citations
Fusion trees can be implemented with AC0 instructions only
- Andersson, Miltersen, et al.
- 1999

Citation Context: …m) time and implement queries in O(1) time. In Lemma 1 and Corollary 1 we assume that multiplication and finding the most significant bit of an integer can be performed in constant time. As shown in [1], we can implement the data structure of Lemma 1 using AC0 operations only. We can restrict the set of elementary operations to bit operations and table look-ups by increasing the space usage and pre…

9 citations
Dynamic Shannon coding
- Gagie
- 2004

Citation Context: …another and we want to minimize the average cost of a codeword. All of the above problems were studied in the static scenario. Adaptive prefix coding algorithms for those problems were considered in [5]. In this section we show that the good upper bounds on the length of the encoding can be achieved by algorithms that encode in O(1) worst-case time. The main idea of our improvements is that we encod…

4 citations
A fast algorithm for adaptive prefix coding (Algorithmica)
- Karpinski, Nekrich

Citation Context: …Huffman code in the static scenario, it achieves an (H + 1)m + o(m) upper bound on the encoding length, which is better than the best known upper bounds for adaptive Huffman algorithms. Karpinski and Nekrich [10] recently reduced the gap between static and adaptive prefix coding by using quantized canonical coding to speed up an adaptive Shannon coder of Gagie [6]: their coder uses O(1) amortized time to enco…

4 citations
Bounding the compression loss of the FGK algorithm
- Milidiú, Laber, et al.
- 1999

Citation Context: …length; he proved his coder stores s in fewer than m more bits than static Huffman coding, and that this is optimal for any adaptive Huffman coder. With a similar analysis, Milidiú, Laber and Pessoa [13] later proved Knuth’s coder uses fewer than 2m more bits than static Huffman coding. In other words, Knuth’s and Vitter’s coders store s in at most (H + 2 + h)m + o(m) and (H + 1 + h)m + o(m) bits, respectively…

2 citations
On-line adaptive canonical prefix coding with bounded compression loss
- Turpin, Moffat
- 2001

Citation Context: …character a in s, and h ∈ [0, 1) is the redundancy of a Huffman code for s; therefore, both adaptive Huffman coders use Θ(H) amortized time to encode and decode each character of s. Turpin and Moffat [17] gave an adaptive prefix coder that uses canonical codes, and showed it achieves nearly the same compression as adaptive Huffman coding but runs much faster in practice. Their upper bound was still O…