Results 11-20 of 44
Tight bounds on minimum maximum pointwise redundancy
In Proceedings of the International Symposium on Information Theory, 2008
Cited by 2 (0 self)
Abstract — This paper presents new lower and upper bounds for the optimal compression of binary prefix codes in terms of the most probable input symbol, where compression efficiency is determined by the nonlinear codeword length objective of minimizing maximum pointwise redundancy. This objective relates to both universal modeling and Shannon coding, and these bounds are tight throughout the interval. The upper bounds also apply to a related objective, that of dth exponential redundancy.
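The objective described in the abstract above can be illustrated numerically: for a code with lengths l_i and probabilities p_i, the pointwise redundancy of symbol i is l_i + log2(p_i). A minimal sketch (the helper name is ours, not the paper's; the bounds themselves are not reproduced) shows that Shannon code lengths keep the maximum below one bit:

```python
import math

def max_pointwise_redundancy(probs, lengths):
    # Pointwise redundancy of symbol i is l_i + log2(p_i); the objective
    # discussed above minimizes the maximum of these over all symbols.
    return max(l + math.log2(p) for p, l in zip(probs, lengths))

# Shannon code lengths l_i = ceil(-log2 p_i) keep every pointwise
# redundancy in [0, 1), so the maximum stays below one bit.
probs = [0.5, 0.3, 0.2]
shannon_lengths = [math.ceil(-math.log2(p)) for p in probs]
```

For the probabilities above, the Shannon lengths are [1, 2, 3] and the maximum pointwise redundancy is about 0.68 bits, attained by the least probable symbol.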
Twenty (or so) questions: bounded-length Huffman coding, 2006
Cited by 2 (0 self)
The game of Twenty Questions has long been used to illustrate binary source coding. Recently, a physical device has been developed which mimics the process of playing Twenty Questions, with the device supplying the questions and the user providing the answers. However, this game differs from Twenty Questions in two ways: answers need not be only "yes" and "no," and the device continues to ask questions beyond the traditional twenty; typically, at least 20 and at most 25 questions are asked. The nonbinary variation on source coding is one that is well known and understood, but not with such bounds on length. An O(n(lmax − lmin))-time, O(n)-space Package-Merge-based algorithm is presented here for binary and nonbinary source coding with codeword lengths (numbers of questions) bounded to be within a certain interval, one that minimizes average codeword length or, more generally, any other quasi-arithmetic convex coding penalty. In the case of minimizing average codeword length, both time and space complexity can be improved via an alternative reduction. This has, as a special case, a method for nonbinary length-limited Huffman coding, which was previously solved via dynamic programming with O(n² lmax log D) time and space.
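The Package-Merge technique named in the abstract above can be sketched in its standard binary, maximum-length-only form (this is the classical Larmore-Hirschberg variant, not the interval-bounded or nonbinary generalization the paper develops):

```python
def package_merge(weights, max_len):
    """Optimal codeword lengths with every length <= max_len (binary case)."""
    n = len(weights)
    if n > 2 ** max_len:
        raise ValueError("max_len too small for this alphabet")
    # Base items: one per symbol, tagged with the symbols they contain.
    items = sorted((w, (i,)) for i, w in enumerate(weights))
    lst = list(items)                      # list for depth max_len
    for _ in range(max_len - 1):
        # Package adjacent pairs (an odd leftover entry is discarded) ...
        packages = [(lst[k][0] + lst[k + 1][0], lst[k][1] + lst[k + 1][1])
                    for k in range(0, len(lst) - 1, 2)]
        # ... then merge the packages with a fresh copy of the base items.
        lst = sorted(items + packages)
    # The cheapest 2(n-1) entries of the final list determine the depths:
    # a symbol's codeword length is the number of entries containing it.
    lengths = [0] * n
    for _, syms in lst[:2 * n - 2]:
        for i in syms:
            lengths[i] += 1
    return lengths
```

For example, weights [1, 1, 2, 4] with max_len 3 recover the unconstrained Huffman lengths [3, 3, 2, 1], while tightening to max_len 2 forces [2, 2, 2, 2].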
Second-order properties of lossy likelihoods and the MLE/MDL dichotomy in lossy compression
"... lossy compression ..."
Reserved-length prefix coding
In Proceedings of the 2008 IEEE International Symposium on Information Theory, 2008
Cited by 2 (0 self)
Abstract — Huffman coding finds an optimal prefix code for a given probability mass function. Consider situations in which one wishes to find an optimal code with the restriction that all codewords have lengths that lie in a user-specified set of lengths (or, equivalently, no codewords have lengths that lie in a complementary set). This paper introduces a polynomial-time dynamic programming algorithm that finds optimal codes for this reserved-length prefix coding problem. This has applications to quickly encoding and decoding lossless codes. In addition, one modification of the approach solves any quasi-arithmetic prefix coding problem, while another finds optimal codes restricted to the set of codes with g codeword lengths for user-specified g (e.g., g = 2).
A Nonlinear Dynamical Systems' Proof of the Kraft-McMillan Inequality and its Converse, 2007
Cited by 1 (0 self)
In this short paper, we shall provide a dynamical systems' proof of the famous Kraft-McMillan inequality and its converse. The Kraft-McMillan inequality is a basic result in information theory which gives a necessary and sufficient condition for the lengths of the codewords of a code to be uniquely decodable [1, 2, 3].

1 Kraft-McMillan Inequality

Given a binary prefix code set C for an alphabet set A, the codewords c1, c2, ..., cN with lengths l1, l2, ..., lN necessarily satisfy

    ∑_{i=1}^{N} 2^(−l_i) ≤ 1        (1)

where N = |A|, the cardinality of set A. A binary prefix code C is a set of binary codewords such that no codeword is a prefix of another. Prefix codes are known to be uniquely decodable and easy to decode. A famous example of prefix codes are the Huffman codes, which have minimum redundancy.

The binary map: Consider the binary map (Fig. 1) T: [0, 1) → [0, 1), x ↦ 2x for 0 ≤ x < 1/2
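Inequality (1) is easy to check numerically; a minimal sketch (the helper name is ours, not the paper's):

```python
def kraft_sum(lengths, radix=2):
    # Left-hand side of the Kraft-McMillan inequality (1) above.
    return sum(radix ** -l for l in lengths)

# A Huffman code for weights 1, 1, 2, 4 has lengths 3, 3, 2, 1 and meets
# the inequality with equality, as any full binary prefix code does.
full_code = kraft_sum([3, 3, 2, 1])

# Lengths 1, 1, 1 cannot belong to any binary prefix code: the sum exceeds 1.
impossible = kraft_sum([1, 1, 1])
```

The converse direction, also covered by the paper, says any lengths satisfying (1) admit a prefix code with those lengths.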
D-ary Bounded-Length Huffman Coding, 2007
Cited by 1 (0 self)
Abstract — Efficient optimal prefix coding has long been accomplished via the Huffman algorithm. However, there is still room for improvement and exploration regarding variants of the Huffman problem. Length-limited Huffman coding, useful for many practical applications, is one such variant, in which codes are restricted to the set of codes in which none of the n codewords is longer than a given length, lmax. Binary length-limited coding can be done in O(n lmax) time and O(n) space using the widely used Package-Merge algorithm. In this paper the Package-Merge approach is generalized in order to introduce a minimum codeword length, lmin, to allow for objective functions other than the minimization of expected codeword length, and to be applicable to both binary and nonbinary codes, the latter of which was previously addressed using a slower dynamic programming approach. These extensions have various applications, including faster decompression, and can be used to solve the problem of finding an optimal code with bounded fringe, that is, finding the best code among codes with a maximum difference between the longest and shortest codewords. The previously proposed method for solving this problem was non-polynomial-time, whereas the novel algorithm requires only O(n(lmax − lmin)²) time and O(n) space.
Normalized Compression Distance of Multisets with Applications, 2013
Cited by 1 (1 self)
Normalized compression distance (NCD) is a parameter-free, feature-free, alignment-free similarity measure between a pair of finite objects based on compression. However, it is not sufficient for all applications. We propose an NCD of finite nonempty multisets (a.k.a. multiples) of finite objects that is also a metric. Previously, attempts to obtain such an NCD failed. We cover the entire trajectory from theoretical underpinning to feasible practice. The new NCD for multisets is applied to retinal progenitor cell classification questions and to related synthetically generated data that were earlier treated with the pairwise NCD. With the new method we achieved significantly better results. Similarly for questions about axonal organelle transport. We also applied the new NCD to handwritten digit recognition and improved classification accuracy significantly over that of pairwise NCD by incorporating both the pairwise and NCD for multisets. In the analysis we use the incomputable Kolmogorov complexity that for practical purposes is approximated from above by the length of the compressed version of the file involved, using a real-world compression program.
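The pairwise NCD that the multiset measure above generalizes can be sketched with a general-purpose compressor standing in for Kolmogorov complexity (zlib is our choice of stand-in here; the paper's experiments and its multiset extension are not reproduced):

```python
import zlib

def compressed_len(data):
    # Approximate Kolmogorov complexity from above by compressed length,
    # as described in the abstract; level 9 is maximum zlib compression.
    return len(zlib.compress(data, 9))

def ncd(x, y):
    # Standard pairwise normalized compression distance:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
    cx, cy, cxy = compressed_len(x), compressed_len(y), compressed_len(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

near = b"abracadabra" * 50                      # two similar strings ...
near2 = b"abracadabra" * 49 + b"xyzzyxyzzyx"
far = bytes(range(256)) * 3                     # ... and an unrelated one
```

With real compressors the value can slightly exceed 1, but similar inputs reliably score lower than dissimilar ones.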
On the within-family Kullback-Leibler risk in Gaussian predictive models, 2012
Cited by 1 (1 self)
We consider estimating the predictive density under Kullback-Leibler loss in a high-dimensional Gaussian model. Decision-theoretic properties of the within-family prediction error, the minimal risk among estimates in the class G of all Gaussian densities, are discussed. We show that in sparse models, the class G is minimax suboptimal. We produce asymptotically sharp upper and lower bounds on the within-family prediction errors for various subfamilies of G. Under mild regularity conditions, in the subfamily where the covariance structure is represented by a single data-dependent parameter Σ = d · I, the Kullback-Leibler risk has a tractable decomposition which can be subsequently minimized to yield optimally flattened predictive density estimates. The optimal predictive risk can be explicitly expressed in terms of the corresponding mean square error of the location estimate, and so, the role of shrinkage in the predictive regime can be determined based on point estimation theory results. Our results demonstrate that some of the decision-theoretic parallels between predictive density estimation and point estimation regimes can be explained by second-moment-based concentration properties of the quadratic loss.
Extensions of Linear Independent Component Analysis: Neural and Information-Theoretic Methods, 1998
The Appeal of Information Transactions, 2012
Abstract: An information transaction entails the purchase of information. Formally, it consists of an information structure together with a price. We develop an index of the appeal of information transactions, which is derived as a dual to the agent's preferences for information. The index of information transactions has a simple analytic characterization in terms of the relative entropy from priors to posteriors, and it also connects naturally with a recent index of riskiness.
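The relative entropy appearing in the characterization above is the standard Kullback-Leibler divergence; a minimal sketch (the two-posterior example is ours, not the paper's):

```python
import math

def relative_entropy(p, q):
    # D(p || q) = sum_i p_i * log(p_i / q_i), in nats; assumes q_i > 0
    # wherever p_i > 0.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

prior = [0.5, 0.5]
informative = [0.9, 0.1]      # posterior after a strong signal
uninformative = [0.55, 0.45]  # posterior after a weak signal
```

A more informative structure moves posteriors further from the prior, so its relative entropy from prior to posterior is larger.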