Results 1–10 of 27
Universal compression of memoryless sources over unknown alphabets
IEEE Transactions on Information Theory, 2004
Cited by 59 (22 self)
Abstract:
It has long been known that the compression redundancy of independent and identically distributed (i.i.d.) strings increases to infinity as the alphabet size grows. It is also apparent that any string can be described by separately conveying its symbols and its pattern, the order in which the symbols appear. Concentrating on the latter, we show that the patterns of i.i.d. strings over all alphabets, including infinite and even unknown ones, can be compressed with diminishing redundancy, both blockwise and sequentially, and that the compression can be performed in linear time. To establish these results, we show that the number of patterns is the Bell number, that the number of patterns with a given number of symbols is the Stirling number of the second kind, and that the redundancy of patterns can be bounded using results of Hardy and Ramanujan on the number of integer partitions. The results also imply an asymptotically optimal solution for the Good-Turing probability-estimation problem.
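The counting claims in this abstract are easy to illustrate with a short sketch (the helper names `pattern`, `stirling2`, and `bell` are mine, not the paper's): the pattern of a string replaces each symbol by the rank of its first appearance, the number of length-n patterns with k distinct symbols is the Stirling number of the second kind S(n, k), and summing over k gives the Bell number B(n).

```python
from functools import lru_cache

def pattern(s):
    """Pattern of a string: each symbol is replaced by the rank
    (1, 2, 3, ...) of its first appearance."""
    first_seen = {}
    return tuple(first_seen.setdefault(c, len(first_seen) + 1) for c in s)

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: partitions of n items into
    k non-empty blocks, i.e. length-n patterns with k distinct symbols."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell(n):
    """Bell number: the total number of length-n patterns."""
    return sum(stirling2(n, k) for k in range(n + 1))

# The pattern of "abracadabra" (a=1, b=2, r=3, c=4, d=5) is
# (1, 2, 3, 1, 4, 1, 5, 1, 2, 3, 1).
```

The pattern discards the symbol identities, which is what allows its redundancy to stay sublinear even over unknown alphabets.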
A lower bound on compression of unknown alphabets
Theoretical Computer Science, 2005
Cited by 12 (4 self)
Abstract:
Many applications call for universal compression of strings over large, possibly infinite, alphabets. However, it has long been known that the resulting redundancy is infinite even for i.i.d. distributions. It was recently shown that the redundancy of the strings' patterns, which abstract the values of the symbols while retaining only their relative precedence, is sublinear in the block length n; hence the per-symbol redundancy diminishes to zero. In this paper we show that pattern redundancy is at least (1.5 log2 e) n^{1/3} bits. To do so, we construct a generating function whose coefficients lower-bound the redundancy, and use Hayman's saddle-point approximation technique to determine the coefficients' asymptotic behavior.
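As a quick numerical check of the bound quoted in this abstract (the function name is mine), the total lower bound (1.5 log2 e) n^{1/3} grows without limit, while the per-symbol value obtained by dividing by n vanishes, consistent with sublinear pattern redundancy:

```python
import math

def pattern_redundancy_lower_bound(n):
    """The (1.5 * log2(e)) * n**(1/3) lower bound, in bits, on the
    pattern redundancy of length-n i.i.d. strings, as quoted above."""
    return 1.5 * math.log2(math.e) * n ** (1 / 3)

# Total bound grows like n**(1/3); per-symbol bound shrinks like n**(-2/3).
for n in (10**3, 10**6, 10**9):
    total = pattern_redundancy_lower_bound(n)
    print(f"n={n:>10}  bound={total:12.1f} bits  per symbol={total / n:.2e}")
```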
Analytic Variations on Redundancy Rates of Renewal Processes
IEEE Transactions on Information Theory, 2002
Cited by 8 (5 self)
Abstract:
Csiszár and Shields have recently proved that the minimax redundancy for a class of (stationary) renewal processes is Θ(√n), where n is the block length. This interesting result provides a first nontrivial bound on redundancy for a nonparametric family of processes. The present paper gives a precise estimate of the redundancy rate for such (nonstationary) renewal sources, namely (2/log 2)√(cn) + O(log n), where c = π²/6 − 1. This asymptotic expansion is derived by complex-analytic methods that include generating-function representations, Mellin transforms, singularity analysis, and saddle-point estimates. This work places itself within the framework of analytic information theory.
Tight Bounds on Profile Redundancy and Distinguishability
Cited by 7 (3 self)
Abstract:
The minimax KL divergence of any distribution from all distributions in a collection P has several practical implications. In compression, it is called redundancy and represents the least additional number of bits over the entropy needed to encode the output of any distribution in P. In online estimation and learning, it is the lowest expected log-loss regret when guessing a sequence of random values generated by a distribution in P. In hypothesis testing, it upper-bounds the largest number of distinguishable distributions in P. Motivated by problems ranging from population estimation to text classification and speech recognition, several machine-learning and information-theory researchers have recently considered label-invariant observations and properties induced by i.i.d. distributions. A sufficient statistic for all these properties is the data's profile, the multiset of the number of times each data element appears. Improving on a sequence of previous works, we show that the redundancy of the collection of distributions induced over profiles by length-n i.i.d. sequences is between 0.3·n^{1/3} and n^{1/3}·log2 n, in particular establishing its exact growth power.
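A minimal sketch of the profile statistic described in this abstract (the helper name is mine): the profile keeps only the multiset of symbol multiplicities, so any relabeling of the symbols leaves it unchanged.

```python
from collections import Counter

def profile(seq):
    """Profile of a sequence: the multiset of symbol multiplicities,
    returned here as a tuple sorted in decreasing order."""
    return tuple(sorted(Counter(seq).values(), reverse=True))

# "abracadabra" has a x5, b x2, r x2, c x1, d x1 -> profile (5, 2, 2, 1, 1).
# Label invariance: "aab" and "xxy" share the profile (2, 1).
```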
A comparison of automatic histogram constructions
2008
Cited by 6 (2 self)
Abstract:
Even for a well-trained statistician, the construction of a histogram for a given real-valued data set is a difficult problem. It is even more difficult to construct a fully automatic procedure which specifies the number and widths of the bins in a satisfactory manner for a wide range of data sets. In this paper we compare several histogram construction procedures by means of a simulation study. The study includes plug-in methods, cross-validation, penalized maximum likelihood, and the taut string procedure. Their performance on different test beds is measured by their ability to identify the peaks of an underlying density as well as by Hellinger distance.
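For concreteness, the Hellinger distance used as a performance measure in this study can be computed for two binned densities as follows (a standard formula; the function and variable names are mine):

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions given as
    equal-length probability vectors, e.g. the bin probabilities of a
    histogram estimate versus the true density integrated over the bins."""
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

# Identical distributions are at distance 0; disjoint supports at distance 1.
```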
Tight bounds for universal compression of large alphabets
In ISIT, 2013
Cited by 4 (2 self)
Abstract:
Over the past decade, several papers, e.g., [1-7] and references therein, have considered universal compression of sources over large alphabets, often using patterns to avoid infinite redundancy. Improving on previous results, we prove tight bounds on expected and worst-case pattern redundancy, in particular closing a decade-long gap and showing that the worst-case pattern redundancy of i.i.d. distributions is Θ̃(n^{1/3}).
Minimax Pointwise Redundancy for Memoryless Models over Large Alphabets
Cited by 4 (0 self)
Abstract:
We study the minimax pointwise redundancy of universal coding for memoryless models over large alphabets and present two main results. We first complete studies initiated in Orlitsky and Santhanam [15], deriving precise asymptotics of the minimax pointwise redundancy for all ranges of the alphabet size relative to the sequence length. Second, we consider the pointwise minimax redundancy for a family of models in which some symbol probabilities are fixed. The latter problem leads to a binomial sum for functions with super-polynomial growth. Our findings can be used to approximate numerically the minimax pointwise redundancy for various ranges of the sequence length and the alphabet size. These results are obtained by analytic techniques such as tree-like generating functions and the saddle-point method.
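For tiny sequence lengths and alphabets, the minimax pointwise redundancy can be evaluated directly from its standard definition as the log of the Shtarkov sum, the sum over all strings of the maximized i.i.d. likelihood (a brute-force sketch; the function names are mine, not the paper's):

```python
import math
from itertools import product

def shtarkov_sum(n, m):
    """Sum over all length-n strings over an m-ary alphabet of the
    maximum-likelihood i.i.d. probability prod_a (k_a/n)**k_a, where
    k_a is the number of occurrences of symbol a."""
    total = 0.0
    for x in product(range(m), repeat=n):
        counts = (x.count(a) for a in range(m))
        total += math.prod((k / n) ** k for k in counts if k > 0)
    return total

def minimax_pointwise_redundancy(n, m):
    """Minimax pointwise (worst-case) redundancy in bits: log2 of the
    Shtarkov sum.  Brute force, so only feasible for tiny n and m."""
    return math.log2(shtarkov_sum(n, m))
```

For n = 1 every string has maximized probability 1, so the redundancy is log2 m; for n = 2, m = 2 the four strings contribute 1 + 0.25 + 0.25 + 1 = 2.5. The analytic techniques in the paper replace this exponential enumeration for large n and m.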
Universal compression of Markov and related sources over arbitrary alphabets
IEEE Transactions on Information Theory, 2006
Cited by 4 (2 self)
Abstract:
Recent work has considered encoding a string by separately conveying its symbols and its pattern, the order in which the symbols appear. It was shown that the patterns of i.i.d. strings can be losslessly compressed with diminishing per-symbol redundancy. In this paper the pattern redundancy of distributions with memory is considered. Close lower and upper bounds are established on the pattern redundancy of strings generated by hidden Markov models with a small number of states, showing in particular that their per-symbol pattern redundancy diminishes with increasing string length. The upper bounds are obtained by analyzing the growth rate of the number of multidimensional integer partitions, and the lower bounds by using Hayman's theorem.
Constructing a regular histogram: a comparison of methods
2007
Cited by 2 (0 self)
Abstract:
Even for a well-trained statistician, the construction of a histogram for a given real-valued data set is a difficult problem. It is even more difficult to construct a fully automatic procedure which specifies the number and widths of the bins in a satisfactory manner for a wide range of data sets. In this paper we compare several histogram construction methods by means of a simulation study. The study includes plug-in methods, cross-validation, penalized maximum likelihood, and the taut string procedure. Their performance on different test beds is measured by the Hellinger distance and the ability to identify the modes of the underlying density.
Uniform asymptotics of some Abel sums arising in coding theory
Cited by 2 (0 self)
Abstract:
We derive uniform asymptotic expressions for some Abel sums appearing in problems in coding theory and indicate the usefulness of these sums in other fields, such as empirical processes, machine maintenance, analysis of algorithms, probabilistic number theory, and queuing models. Key words: Abel sums, coding theory, Mellin transforms, W-function, uniform asymptotics.
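The paper's specific sums are not reproduced in the listing, but as a small illustration of the Abel-sum family it studies, the classical Abel binomial identity sum_{k=0}^{n} C(n,k) x (x+k)^{k-1} (y-k)^{n-k} = (x+y)^n can be checked exactly with rational arithmetic (a generic textbook example, not the paper's sums; the function name is mine):

```python
from fractions import Fraction
from math import comb

def abel_lhs(x, y, n):
    """Left-hand side of Abel's binomial identity, evaluated exactly
    with Fractions.  x must be nonzero because the k = 0 term contains
    the factor (x + 0)**(-1)."""
    x, y = Fraction(x), Fraction(y)
    return sum(comb(n, k) * x * (x + k) ** (k - 1) * (y - k) ** (n - k)
               for k in range(n + 1))

# Classically abel_lhs(x, y, n) == (x + y)**n for any nonzero x.
```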