Results 1 – 7 of 7
Optimal prefix codes for infinite alphabets with nonlinear costs. IEEE Trans. Inf. Theory, 2008.
"... Abstract — Let P = {p(i)} be a measure of strictly positive probabilities on the set of nonnegative integers. Although the countable number of inputs prevents usage of the Huffman algorithm, there are nontrivial P for which known methods find a source code that is optimal in the sense of minimizing ..."
Cited by 4 (3 self)
Abstract — Let P = {p(i)} be a measure of strictly positive probabilities on the set of nonnegative integers. Although the countable number of inputs prevents usage of the Huffman algorithm, there are nontrivial P for which known methods find a source code that is optimal in the sense of minimizing expected codeword length. For some applications, however, a source code should instead minimize one of a family of nonlinear objective functions, β-exponential means, those of the form log_a ∑_i p(i)a^{n(i)}, where n(i) is the length of the ith codeword and a is a positive constant. Applications of such minimizations include a novel problem of maximizing the chance of message receipt in single-shot communications (a < 1) and a previously known problem of minimizing the chance of buffer overflow in a queueing system (a > 1). This paper introduces methods for finding codes optimal for such exponential means. One method applies to geometric distributions, while another applies to distributions with lighter tails. The latter algorithm is applied to Poisson distributions and both are extended to alphabetic codes, as well as to minimizing maximum pointwise redundancy. The aforementioned application of minimizing the chance of buffer overflow is also considered. Index Terms — Communication networks, generalized entropies, generalized means, Golomb codes, Huffman algorithm,
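For a finite alphabet, the generalized Huffman procedure this line of work builds on is, per the literature, a one-line change to the usual merge step: the two smallest weights w1, w2 are combined into a·(w1 + w2) instead of w1 + w2. A minimal Python sketch of that finite-alphabet merge (function name and tie-breaking scheme are ours; the infinite-alphabet case needs the paper's own methods):

```python
import heapq

def exponential_huffman(probs, a):
    """Sketch of exponential (generalized) Huffman coding.

    Targets the exponential mean log_a sum_i p(i) * a**n(i): the usual
    Huffman merge w1 + w2 is replaced by a * (w1 + w2). Finite alphabets
    only; this is an illustration, not the paper's algorithm.
    """
    # Heap entries: (weight, tie-break counter, subtree).
    heap = [(p, i, i) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    counter = len(probs)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)  # two smallest weights
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (a * (w1 + w2), counter, (t1, t2)))
        counter += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):          # internal node
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                                # leaf: symbol index
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes
```

With a = 2 (the buffer-overflow regime) on p = (0.5, 0.3, 0.2) this yields codeword lengths (1, 2, 2); as a → 1 the merge reduces to ordinary Huffman coding.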
Prefix Coding under Siege. 2006.
"... Abstract — A novel lossless source coding paradigm applies to problems of unreliable lossless channels with low bitrate, in which a vital message needs to be transmitted prior to termination of communications. This paradigm can be applied to Alfréd Rényi’s secondhand account of an ancient siege in w ..."
Abstract — A novel lossless source coding paradigm applies to problems of unreliable lossless channels with low bitrate, in which a vital message needs to be transmitted prior to termination of communications. This paradigm can be applied to Alfréd Rényi’s secondhand account of an ancient siege in which a spy was sent to scout the enemy but was captured. After escaping, the spy returned to his base in no condition to speak and unable to write. His commander asked him questions that he could answer by nodding or shaking his head, and the fortress was defended with this information. Rényi told this story with reference to traditional lossless prefix coding, in which the objective is minimization of expected codeword length. The goal of maximizing probability of survival in the siege scenario, however, is distinct from yet related to this traditional objective. Rather than finding a code minimizing expected codeword length ∑_{i=1}^{n} p(i)l(i), this variant involves maximizing ∑_{i=1}^{n} p(i)θ^{l(i)} for a known θ ∈ (0,1). When there are no restrictions on codewords, this problem can be solved using a known generalization of Huffman coding. The optimal solution has coding bounds which are functions of Rényi entropy; in addition to known bounds, new bounds are derived here. The alphabetically constrained version of this problem has applications in search trees and diagnostic testing. A novel dynamic programming algorithm — based upon the oldest known algorithm for the traditional alphabetic problem — optimizes this problem in O(n^3) time and O(n^2) space, whereas two novel approximation algorithms can find a suboptimal solution faster: one in linear time, the other in O(n log n) time. Coding bounds for the alphabetic version of this problem are also presented.
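To see concretely how the survival objective ∑ p(i)θ^{l(i)} diverges from expected codeword length, consider a hypothetical near-uniform four-symbol source (the numbers are ours, not the paper's): the balanced code minimizes expected length, yet for a small θ a skewed code gives a higher chance the message gets through. A quick arithmetic check:

```python
# Hypothetical four-symbol source; both length vectors satisfy Kraft's
# inequality, so each corresponds to a valid binary prefix code.
probs = [0.26, 0.26, 0.24, 0.24]
theta = 0.2                      # survival probability per bit, theta in (0,1)
balanced = [2, 2, 2, 2]          # the expected-length optimum for this source
skewed = [1, 2, 3, 3]            # Kraft sum: 1/2 + 1/4 + 1/8 + 1/8 = 1

def expected_length(probs, lengths):
    # Traditional objective: sum_i p(i) * l(i).
    return sum(p * l for p, l in zip(probs, lengths))

def survival(probs, lengths, theta):
    # Siege objective: probability the whole codeword survives,
    # sum_i p(i) * theta**l(i).
    return sum(p * theta ** l for p, l in zip(probs, lengths))

print(expected_length(probs, balanced), expected_length(probs, skewed))  # ~2.0 vs ~2.22
print(survival(probs, balanced, theta), survival(probs, skewed, theta))  # ~0.040 vs ~0.066
```

The balanced code wins on expected length but the skewed code roughly doubles the survival probability, which is why the θ-optimal code must be found by a different (generalized Huffman) procedure.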
Infinite-Alphabet Prefix Codes Optimal for β-Exponential Penalties. 2007.
"... Abstract — Let P = {p(i)} be a measure of strictly positive probabilities on the set of nonnegative integers. Although the countable number of inputs prevents usage of the Huffman algorithm, there are nontrivial P for which known methods find a source code that is optimal in the sense of minimizing ..."
Abstract — Let P = {p(i)} be a measure of strictly positive probabilities on the set of nonnegative integers. Although the countable number of inputs prevents usage of the Huffman algorithm, there are nontrivial P for which known methods find a source code that is optimal in the sense of minimizing expected codeword length. For some applications, however, a source code should instead minimize one of a family of nonlinear objective functions, β-exponential means, those of the form log_a ∑_i p(i)a^{n(i)}, where n(i) is the length of the ith codeword and a is a positive constant. Applications of such minimizations include a problem of maximizing the chance of message receipt in single-shot communications (a < 1) and a problem of minimizing the chance of buffer overflow in a queueing system (a > 1). This paper introduces methods for finding codes optimal for such exponential means. One method applies to geometric distributions, while another applies to distributions with lighter tails. The latter algorithm is applied to Poisson distributions. Both are extended to minimizing maximum pointwise redundancy.
Redundancy-Related Bounds for Generalized Huffman Codes. 2009.
"... This paper presents new lower and upper bounds for the compression rate of optimal binary prefix codes on memoryless sources according to various nonlinear codeword length objectives. Like the most wellknown redundancy bounds for minimum (arithmetic) average redundancy coding — Huffman coding — t ..."
This paper presents new lower and upper bounds for the compression rate of optimal binary prefix codes on memoryless sources according to various nonlinear codeword length objectives. Like the most well-known redundancy bounds for minimum (arithmetic) average redundancy coding — Huffman coding — these are in terms of a form of entropy and/or the probability of the most probable input symbol. The bounds here improve on known bounds of the form L ∈ [H, H + 1), where H is some form of entropy in bits (or, in the case of redundancy measurements, 0) and L is the length objective, also in bits. The objectives explored here include exponential-average length, maximum pointwise redundancy, and exponential-average pointwise redundancy (also called dth exponential redundancy). These relate to queueing and single-shot communications, Shannon coding and universal modeling (worst-case minimax redundancy), and bridging the maximum pointwise redundancy problem with Huffman coding, respectively. A generalized form of Huffman coding known to find optimal codes for these objectives helps yield these bounds, some of which are tight. Related properties to such bounds, also explored here, are the necessary and sufficient conditions for the shortest codeword being a specific length.
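For the exponential-average objective specifically, the classical [H, H + 1) bound being improved here is Campbell's: writing the exponent base as a = 2^β, the optimal exponential-average length L = log_a ∑_i p(i)a^{l(i)} satisfies H_α(P) ≤ L < H_α(P) + 1, where H_α is the Rényi entropy of order α = 1/(1 + β). A small numerical sanity check on a dyadic toy source (the source and lengths are our illustration, not the paper's):

```python
import math

def renyi_entropy(probs, alpha):
    # Rényi entropy of order alpha (alpha != 1), in bits.
    return math.log2(sum(p ** alpha for p in probs)) / (1.0 - alpha)

def exp_avg_length(probs, lengths, a):
    # Exponential-average length: log_a sum_i p(i) * a**l(i).
    return math.log(sum(p * a ** l for p, l in zip(probs, lengths)), a)

probs = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]               # lengths the generalized merge yields here
a = 2.0                              # so beta = log2(a) = 1
alpha = 1.0 / (1.0 + math.log2(a))   # Campbell's entropy order: 1/2
H = renyi_entropy(probs, alpha)      # ~1.874 bits
L = exp_avg_length(probs, lengths, a)
assert H <= L < H + 1                # the classical [H, H+1) window
```

On this source L comes out to exactly 2 bits, about 0.13 bits above H_{1/2}; the bounds in the paper narrow this window further using the probability of the most probable symbol.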