Results 1–10 of 10
Worst-Case Optimal Adaptive Prefix Coding
 In: Proceedings of the Algorithms and Data Structures Symposium (WADS)
, 2009
Abstract
Cited by 6 (6 self)
A common complaint about adaptive prefix coding is that it is much slower than static prefix coding. Karpinski and Nekrich recently took an important step towards resolving this: they gave an adaptive Shannon coding algorithm that encodes each character in O(1) amortized time and decodes it in O(log H) amortized time, where H is the empirical entropy of the input string s. For comparison, Gagie's adaptive Shannon coder and both Knuth's and Vitter's adaptive Huffman coders all use Θ(H) amortized time for each character. In this paper we give an adaptive Shannon coder that both encodes and decodes each character in O(1) worst-case time. As with both previous adaptive Shannon coders, we store s in at most (H + 1)|s| + o(|s|) bits. We also show that this encoding length is worst-case optimal up to the lower-order term.
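The (H + 1)|s| bound stems from a basic property of Shannon coding: each symbol with empirical probability p gets a codeword of length ⌈log₂(1/p)⌉, which is less than log₂(1/p) + 1, and the Kraft inequality guarantees a prefix code with those lengths exists. The following is a minimal sketch of the static length assignment only (the adaptive, O(1) worst-case coder in the paper is far more involved); the function name is illustrative:

```python
import math
from collections import Counter

def shannon_code_lengths(s):
    """Assign each symbol a codeword length ceil(log2(1/p)), where p
    is the symbol's empirical probability in s."""
    freq = Counter(s)
    n = len(s)
    return {c: math.ceil(math.log2(n / f)) for c, f in freq.items()}

lengths = shannon_code_lengths("abracadabra")
# The Kraft inequality holds, so a prefix code with these lengths exists.
assert sum(2 ** -l for l in lengths.values()) <= 1
```

Since each length is under log₂(1/p) + 1, the total encoding is under (H + 1)|s| bits, matching the shape of the bound above.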
Dynamic Asymmetric Communication
, 2005
Abstract
Cited by 5 (4 self)
In Adler and Maggs' asymmetric communication problem, a server with high bandwidth tries to help clients with low bandwidth send it messages. We give four new asymmetric communication protocols and show they are robust with respect to changes in the messages' distribution.
Minimax Trees in Linear Time with Applications
Abstract
Cited by 5 (0 self)
A minimax tree is similar to a Huffman tree except that, instead of minimizing the weighted average of the leaves' depths, it minimizes the maximum of any leaf's weight plus its depth. Golumbic (1976) introduced minimax trees and gave a Huffman-like, O(n log n)-time algorithm for building them. Drmota and Szpankowski (2002) gave another O(n log n)-time algorithm, which takes linear time when the weights are already sorted by their fractional parts. In this paper we give the first linear-time algorithm for building minimax trees for unsorted real weights.
Low-Memory Adaptive Prefix Coding
 In: Data Compression Conference
, 2009
Abstract
Cited by 1 (1 self)
In this paper we study the adaptive prefix coding problem in cases where the size of the input alphabet is large. We present an online prefix coding algorithm that uses O(σ^(1/λ+ε)) bits of space for any constants ε > 0, λ > 1, and encodes the string of symbols in O(log log σ) time per symbol in the worst case, where σ is the size of the alphabet. The upper bound on the encoding length is λnH(s) + (λ/ln 2 + 2 + ε)n + O(σ^(1/λ) log² σ) bits.
An Efficient Compression Scheme for Data Communication Which Uses a New Family of Self-Organizing Binary Search Trees
Abstract
In this paper, we demonstrate that we can effectively use results from the field of adaptive self-organizing data structures in enhancing compression schemes. Unlike adaptive lists, which have already been used in compression, to the best of our knowledge, adaptive self-organizing trees have not been used in this regard. To achieve this, we introduce a new data structure, the Partitioning Binary Search Tree (PBST) which, although based on the well-known Binary Search Tree (BST), also appropriately partitions the data elements into mutually exclusive sets. When used in conjunction with Fano encoding, the PBST leads to the so-called Fano Binary Search Tree (FBST), which, indeed, incorporates the required Fano coding (nearly-equal-probability) property into the BST. We demonstrate how both the PBST and FBST can be maintained adaptively and in a self-organizing manner. The updating procedure that converts a PBST into an FBST, and the corresponding new tree-based operators, namely the Shift-To-Left (STL) and the Shift-To-Right (STR) operators, are explicitly presented. The encoding and decoding procedures that also update the FBST have been implemented and rigorously tested. Our empirical results on files of the well-known benchmark, the Canterbury corpus, show that the adaptive Fano coding using FBSTs, the Huffman, and the greedy adaptive Fano coding achieve similar compression ratios. However, in terms of encoding/decoding speed, the new scheme is much faster than the latter two in the encoding phase, and they achieve approximately the same speed in the decoding phase. We believe that the same philosophy, namely that of using an adaptive self-organizing BST to maintain the frequencies, can also be utilized for other data encoding mechanisms, even as the Fenwick scheme has been used in arithmetic coding.
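The nearly-equal-probability property that the FBST enforces is the heart of static Fano coding: symbols are repeatedly split into two groups of nearly equal total weight, with one group extending its codewords by '0' and the other by '1'. A plain recursive sketch of that split (not the paper's adaptive, tree-based PBST/FBST machinery, and with illustrative names) might look like:

```python
def fano_codes(weights):
    """Recursively split symbols (sorted by weight, descending) into two
    groups of nearly equal total weight; the left group's codewords get
    '0' appended, the right group's get '1'."""
    symbols = sorted(weights, key=weights.get, reverse=True)
    codes = {}

    def split(group, prefix):
        if len(group) == 1:
            codes[group[0]] = prefix or "0"
            return
        total, running, cut = sum(weights[s] for s in group), 0, 1
        best_diff = float("inf")
        # choose the cut that best balances the two halves' total weight
        for i in range(1, len(group)):
            running += weights[group[i - 1]]
            diff = abs(total - 2 * running)
            if diff < best_diff:
                best_diff, cut = diff, i
        split(group[:cut], prefix + "0")
        split(group[cut:], prefix + "1")

    split(symbols, "")
    return codes
```

For example, with weights {"a": 5, "b": 2, "r": 2, "c": 1, "d": 1} the heaviest symbol "a" is split off first and receives the one-bit codeword "0", and the resulting code is prefix-free.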
Minimax Trees in Linear Time
, 2009
Abstract
A minimax tree is similar to a Huffman tree except that, instead of minimizing the weighted average of the leaves' depths, it minimizes the maximum of any leaf's weight plus its depth. Golumbic (1976) introduced minimax trees and gave a Huffman-like, O(n log n)-time algorithm for building them. Drmota and Szpankowski (2002) gave another O(n log n)-time algorithm, which checks the Kraft Inequality in each step of a binary search. In this paper we show how Drmota and Szpankowski's algorithm can be made to run in linear time on a word RAM with Ω(log n)-bit words. We also discuss how our solution applies to problems in data compression, group testing and circuit design.
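The binary-search-plus-Kraft-check idea mentioned above can be sketched for integer weights: a cost M is achievable iff assigning each leaf its deepest allowed depth M − w_i keeps the Kraft sum at most 1, and feasibility is monotone in M. This toy version (illustrative name, not the paper's linear-time word-RAM algorithm, which also handles real weights) is:

```python
import math

def minimax_cost(weights):
    """Smallest integer M for which depths l_i exist with
    max(w_i + l_i) <= M and sum(2^-l_i) <= 1 (integer weights only).
    Sketch of the binary-search + Kraft-check approach."""
    def feasible(M):
        # deepest allowed depth per leaf is M - w; check the Kraft sum
        return all(M - w >= 1 for w in weights) and \
               sum(2.0 ** -(M - w) for w in weights) <= 1.0

    lo = max(weights) + 1                                   # depths >= 1
    hi = max(weights) + math.ceil(math.log2(len(weights))) + 1  # always feasible
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For weights [3, 3, 2, 1], cost 4 fails the Kraft check (sum 1.375) but cost 5 passes (sum 0.6875), so the minimax cost is 5.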
Research Proposal: A data structure to support the simulation of random events
, 2005
Sorting a Low-Entropy Sequence (Student Paper)
, 2005
Abstract
We give the first sorting algorithm with bounds in terms of higher-order entropies: let S be a sequence of length m containing n distinct elements and let Hℓ(S) be the ℓth-order empirical entropy of S, with n^(ℓ+1) log n ∈ O(m); our algorithm sorts S using (Hℓ(S) + O(1))m comparisons.
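The quantity Hℓ(S) in the bound above is, under the standard definition, the frequency-weighted average of the zeroth-order entropies of the symbols that follow each length-ℓ context. A small sketch of that definition (assuming the standard context-based formulation; function names are illustrative):

```python
import math
from collections import Counter, defaultdict

def empirical_entropy(counts, total):
    """Zeroth-order empirical entropy H0 from symbol counts."""
    return sum(c / total * math.log2(total / c) for c in counts.values())

def h_order(s, ell):
    """ell-th-order empirical entropy: average H0 of the symbols that
    follow each length-ell context, weighted by context frequency."""
    if ell == 0:
        return empirical_entropy(Counter(s), len(s))
    followers = defaultdict(Counter)
    for i in range(len(s) - ell):
        followers[s[i:i + ell]][s[i + ell]] += 1
    m = len(s)
    return sum(sum(c.values()) * empirical_entropy(c, sum(c.values()))
               for c in followers.values()) / m
```

For instance, "ababab" has first-order entropy 0 (each symbol is determined by the one before it) even though its zeroth-order entropy is 1 bit per symbol, which is why higher-order bounds can beat comparison sorting's usual m log n.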
Sorting a Low-Entropy Sequence
, 2005
Abstract
We give the first sorting algorithm with bounds in terms of higher-order entropies: let S be a sequence of length m containing n distinct elements and let Hℓ(S) be the ℓth-order empirical entropy of S, with n^(ℓ+1) log n ∈ O(m); our algorithm sorts S using (Hℓ(S) + O(1))m comparisons.