Results 1–10 of 12
Universal compression of memoryless sources over unknown alphabets
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 2004
"... It has long been known that the compression redundancy of independent and identically distributed (i.i.d.) strings increases to infinity as the alphabet size grows. It is also apparent that any string can be described by separately conveying its symbols, and its pattern—the order in which the symbol ..."
Abstract

Cited by 52 (19 self)
 Add to MetaCart
(Show Context)
It has long been known that the compression redundancy of independent and identically distributed (i.i.d.) strings increases to infinity as the alphabet size grows. It is also apparent that any string can be described by separately conveying its symbols, and its pattern—the order in which the symbols appear. Concentrating on the latter, we show that the patterns of i.i.d. strings over all, including infinite and even unknown, alphabets, can be compressed with diminishing redundancy, both in block and sequentially, and that the compression can be performed in linear time. To establish these results, we show that the number of patterns is the Bell number, that the number of patterns with a given number of symbols is the Stirling number of the second kind, and that the redundancy of patterns can be bounded using results of Hardy and Ramanujan on the number of integer partitions. The results also imply an asymptotically optimal solution for the Good–Turing probability-estimation problem.
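The combinatorial objects named in this abstract are easy to compute directly. The sketch below (function names are my own, not the paper's) extracts the pattern of a string and counts patterns via the standard Stirling/Bell recurrences.

```python
from functools import lru_cache

def pattern(s):
    """Map each symbol to the order of its first occurrence (1-indexed)."""
    seen = {}
    return [seen.setdefault(c, len(seen) + 1) for c in s]

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: patterns of length n with exactly k symbols."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell(n):
    """Bell number: total number of patterns of length n."""
    return sum(stirling2(n, k) for k in range(n + 1))

print(pattern("abracadabra"))  # [1, 2, 3, 1, 4, 1, 5, 1, 2, 3, 1]
print(bell(4))                 # 15
```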
Universal lossless compression with unknown alphabets: The average case
, 2006
"... Universal compression of patterns of sequences generated by independently identically distributed (i.i.d.) sources with unknown, possibly large, alphabets is investigated. A pattern is a sequence of indices that contains all consecutive indices in increasing order of first occurrence. If the alphabe ..."
Abstract

Cited by 20 (4 self)
 Add to MetaCart
(Show Context)
Universal compression of patterns of sequences generated by independently identically distributed (i.i.d.) sources with unknown, possibly large, alphabets is investigated. A pattern is a sequence of indices that contains all consecutive indices in increasing order of first occurrence. If the alphabet of a source that generated a sequence is unknown, the inevitable cost of coding the unknown alphabet symbols can be exploited to create the pattern of the sequence. This pattern can in turn be compressed by itself. It is shown that if the alphabet size k is essentially small, then the average minimax and maximin redundancies, as well as the redundancy of every code for almost every source, when compressing a pattern, consist of at least 0.5 log(n/k^3) bits per each unknown probability parameter, and if all alphabet letters are likely to occur, there exist codes whose redundancy is at most 0.5 log(n/k^2) bits per each unknown probability parameter, where n is the length of the data sequences. Otherwise, if the alphabet is large, these redundancies are essentially at least O(n^(-2/3)) bits per symbol, and there exist codes that achieve redundancy of essentially O(n^(-1/2)) bits per symbol. Two suboptimal low-complexity sequential algorithms for compression of patterns are presented and their description lengths
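As a quick numeric illustration of the small-alphabet bounds quoted above, the snippet below evaluates both per-parameter expressions; the particular values of n and k are arbitrary choices of mine, not from the paper.

```python
import math

def redundancy_bounds(n, k):
    """Per-parameter pattern redundancy bounds in the small-k regime quoted in
    the abstract: lower bound 0.5*log2(n/k^3), upper bound 0.5*log2(n/k^2) bits."""
    lower = 0.5 * math.log2(n / k**3)
    upper = 0.5 * math.log2(n / k**2)
    return lower, upper

lo, hi = redundancy_bounds(n=10**6, k=10)  # roughly 4.98 and 6.64 bits per parameter
```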
On the entropy rate of pattern processes
 Proceedings of the 2005 Data Compression Conference, Snowbird
, 2005
"... We study the entropy rate of pattern sequences of stochastic processes, and its relationship to the entropy rate of the original process. We give a complete characterization of this relationship for i.i.d. processes over arbitrary alphabets, stationary ergodic processes over discrete alphabets, and ..."
Abstract

Cited by 10 (0 self)
 Add to MetaCart
(Show Context)
We study the entropy rate of pattern sequences of stochastic processes, and its relationship to the entropy rate of the original process. We give a complete characterization of this relationship for i.i.d. processes over arbitrary alphabets, stationary ergodic processes over discrete alphabets, and a broad family of stationary ergodic processes over uncountable alphabets. For cases where the entropy rate of the pattern process is infinite, we characterize the possible growth rate of the block entropy.
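One elementary facet of this relationship is easy to see empirically: for a sequence over a finite alphabet, the pattern merely relabels symbols, so the plug-in (single-symbol) entropy estimates of a sequence and of its pattern coincide. A minimal sketch, with names of my own choosing:

```python
import math
import random
from collections import Counter

def empirical_entropy(seq):
    """Plug-in entropy estimate in bits per symbol."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def pattern(seq):
    """Replace each symbol by the order of its first occurrence."""
    seen = {}
    return [seen.setdefault(x, len(seen) + 1) for x in seq]

random.seed(0)
seq = [random.randrange(4) for _ in range(10000)]  # i.i.d. uniform over 4 symbols
h_seq = empirical_entropy(seq)
h_pat = empirical_entropy(pattern(seq))  # identical: the pattern is a relabeling
```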
Universal Coding on Infinite Alphabets: Exponentially Decreasing Envelopes
, 2008
"... This paper deals with the problem of universal lossless coding on a countable infinite alphabet. It focuses on some classes of sources defined by an envelope condition on the marginal distribution, namely exponentially decreasing envelope classes with exponent α. The minimax redundancy of exponentia ..."
Abstract

Cited by 6 (2 self)
 Add to MetaCart
This paper deals with the problem of universal lossless coding on a countably infinite alphabet. It focuses on some classes of sources defined by an envelope condition on the marginal distribution, namely exponentially decreasing envelope classes with exponent α. The minimax redundancy of exponentially decreasing envelope classes is proved to be equivalent to (1/(4α)) log e · log² n. Then a coding strategy is proposed, with a Bayes redundancy equivalent to the maximin redundancy. Finally, an adaptive algorithm is provided, whose redundancy is equivalent to the minimax redundancy.
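The asymptotic expression above is straightforward to evaluate. In the sketch below I assume the inner log is natural and the factor log e converts to bits; the function name is my own.

```python
import math

def envelope_minimax_redundancy(alpha, n):
    """Evaluate the asymptotic minimax redundancy (in bits) quoted in the
    abstract for the exponentially decreasing envelope class with exponent
    alpha: (1 / (4 * alpha)) * log2(e) * (ln n) ** 2."""
    return (1.0 / (4.0 * alpha)) * math.log2(math.e) * math.log(n) ** 2

r = envelope_minimax_redundancy(alpha=1.0, n=10**6)  # around 69 bits total
```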
Universal Source Coding for Monotonic and Fast Decaying Monotonic Distributions
, 2007
"... We study universal compression of sequences generated by monotonic distributions. We show that for a monotonic distribution over an alphabet of size k, each probability parameter costs essentially 0.5 log(n/k 3) bits, where n is the coded sequence length, as long as k = o(n 1/3). Otherwise, for k = ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
(Show Context)
We study universal compression of sequences generated by monotonic distributions. We show that for a monotonic distribution over an alphabet of size k, each probability parameter costs essentially 0.5 log(n/k^3) bits, where n is the coded sequence length, as long as k = o(n^(1/3)). Otherwise, for k = O(n), the total average sequence redundancy is O(n^(1/3+ε)) bits overall. We then show that there exists a subclass of monotonic distributions over infinite alphabets for which redundancy of O(n^(1/3+ε)) bits overall is still achievable. This class contains fast-decaying distributions, including many distributions over the integers and geometric distributions. For some slower decays, including other distributions over the integers, redundancy of o(n) bits overall is achievable, and a method to compute specific redundancy rates for such distributions is derived. The results hold specifically for finite-entropy monotonic distributions. Finally, we study individual-sequence redundancy behavior assuming a sequence is governed by a monotonic distribution. We show that for sequences whose empirical distributions are monotonic, individual redundancy bounds similar to those in the average case can be obtained. However, even if the monotonicity of the empirical distribution is violated, diminishing per-symbol individual-sequence redundancies with respect to the monotonic maximum-likelihood description length may still be achievable.
Universal compression of Markov and related sources over arbitrary alphabets
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 2006
"... Recent work has considered encoding a string by separately conveying its symbols and its pattern—the order in which the symbols appear. It was shown that the patterns of i.i.d. strings can be losslessly compressed with diminishing persymbol redundancy. In this paper the pattern redundancy of distri ..."
Abstract

Cited by 4 (2 self)
 Add to MetaCart
Recent work has considered encoding a string by separately conveying its symbols and its pattern—the order in which the symbols appear. It was shown that the patterns of i.i.d. strings can be losslessly compressed with diminishing per-symbol redundancy. In this paper the pattern redundancy of distributions with memory is considered. Close lower and upper bounds are established on the pattern redundancy of strings generated by Hidden Markov Models with a small number of states, showing in particular that their per-symbol pattern redundancy diminishes with increasing string length. The upper bounds are obtained by analyzing the growth rate of the number of multidimensional integer partitions, and the lower bounds, using Hayman’s Theorem.
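The one-dimensional integer-partition count underlying such growth-rate arguments (the quantity in the Hardy–Ramanujan asymptotics) can be computed exactly by a standard recurrence; this sketch is a textbook method, not the paper's multidimensional analysis.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, max_part=None):
    """Number of integer partitions of n with parts of size at most max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    if n < 0 or max_part == 0:
        return 0
    # either use one part of size max_part, or use no part of that size
    return partitions(n - max_part, max_part) + partitions(n, max_part - 1)

print(partitions(10))  # 42
```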
Adaptive Coding and Prediction of Sources With Large and Infinite Alphabets
"... Abstract—The problem of predicting a sequence x;x;...generated by a discrete source with unknown statistics is considered. Each letter x is predicted using the information on the word x x 111x only. This problem is of great importance for data compression, because of its use to estimate probability ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
(Show Context)
Abstract—The problem of predicting a sequence x₁, x₂, … generated by a discrete source with unknown statistics is considered. Each letter xₜ₊₁ is predicted using the information in the word x₁x₂ ⋯ xₜ only. This problem is of great importance for data compression, because such predictions are used to estimate probability distributions for PPM algorithms and other adaptive codes. On the other hand, such prediction is a classical problem which has received much attention. Its history can be traced back to Laplace. We address the case where the sequence is generated by an independent and identically distributed (i.i.d.) source with some large (or even infinite) alphabet and suggest a class of new methods of prediction. Index Terms—Adaptive coding, Laplace problem of succession, lossless data compression, prediction of random processes, Shannon entropy, source coding.
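The classical baseline this line of work departs from is Laplace's rule of succession; the sketch below implements that textbook estimator (the paper's new methods are not reproduced here).

```python
from collections import Counter

def laplace_predict(history, alphabet_size):
    """Laplace (add-one) rule of succession: estimate the probability that
    each symbol of a finite alphabet appears next, given the observed prefix."""
    counts = Counter(history)
    n = len(history)
    return {a: (counts.get(a, 0) + 1) / (n + alphabet_size)
            for a in range(alphabet_size)}

probs = laplace_predict([0, 1, 0, 0], alphabet_size=3)
# symbol 0 seen 3 times -> (3+1)/(4+3) = 4/7; unseen symbol 2 -> 1/7
```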
Patterns and exchangeability
 In Proceedings of the IEEE Symposium on Information Theory
, 2010
"... Abstract—In statistics and theoretical computer science, the notion of exchangeability provides a framework for the study of large alphabet scenarios. This idea has been developed in an important line of work starting with Kingman’s study of population genetics, and leading on to the paintbox proce ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
(Show Context)
Abstract—In statistics and theoretical computer science, the notion of exchangeability provides a framework for the study of large alphabet scenarios. This idea has been developed in an important line of work starting with Kingman’s study of population genetics, and leading on to the paintbox processes of Kingman, the Chinese restaurant processes and their generalizations. In information theory, the notion of the pattern of a sequence provides a framework for the study of large alphabet scenarios, as developed in work of Orlitsky and collaborators. The pattern is a statistic that captures all the information present in the data, and yet is universally compressible regardless of the alphabet size. In this note, connections are made between these two lines of work – specifically, patterns are examined in the context of exchangeability. After observing the relationship between patterns and Kingman’s paintbox processes, and discussing the redundancy of a class of mixture codes for patterns, alternate representations of patterns in terms of graph limits are discussed.
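The Chinese restaurant process mentioned above makes the pattern connection concrete: its seating sequence is itself a pattern (table labels appear in order of first occurrence). Below is a standard sampler sketch with parameter and function names of my own choosing.

```python
import random

def chinese_restaurant_process(n, theta, rng=None):
    """Sample a seating sequence for n customers: customer i+1 joins an
    existing table with probability proportional to its size, or opens a new
    table with probability proportional to the concentration parameter theta."""
    rng = rng or random.Random(0)
    tables = []   # current table sizes
    seating = []  # table label chosen by each customer, in arrival order
    for i in range(n):
        r = rng.uniform(0, i + theta)
        if r < theta or not tables:
            tables.append(1)
            seating.append(len(tables))  # new tables get the next label
        else:
            r -= theta
            for t, size in enumerate(tables):  # pick a table with prob. size/(i+theta)
                if r < size:
                    tables[t] += 1
                    seating.append(t + 1)
                    break
                r -= size
            else:
                # floating-point edge case: assign to the last table
                tables[-1] += 1
                seating.append(len(tables))
    return seating

seating = chinese_restaurant_process(50, theta=1.0)
```

Because new tables always receive the next unused label, the output is a valid pattern sequence regardless of theta.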
A Universal Compression Perspective of Smoothing
"... We analyze smoothing algorithms from a universalcompression perspective. Instead of evaluating their performance on an empirical sample, we analyze their performance on the most inconvenient sample possible. Consequently the performance of the algorithm can be guaranteed even on unseen data. We sho ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
(Show Context)
We analyze smoothing algorithms from a universal-compression perspective. Instead of evaluating their performance on an empirical sample, we analyze their performance on the most inconvenient sample possible. Consequently, the performance of the algorithm can be guaranteed even on unseen data. We show that universal-compression bounds can explain the empirical performance of several smoothing methods. We also describe a new interpolated additive-smoothing algorithm, and show that it has lower training complexity and better compression performance than existing smoothing techniques. Key words: Language modeling, universal compression, smoothing
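The abstract does not spell out the new algorithm, so the sketch below shows only the textbook add-α baseline it builds on; note that add-α smoothing is itself an interpolation of the maximum-likelihood estimate with the uniform distribution, with mixing weight n/(n + αV).

```python
from collections import Counter

def additive_smoothing(sample, vocab_size, alpha=1.0):
    """Add-alpha smoothing over a vocabulary of vocab_size words: every word,
    seen or unseen, receives pseudo-count alpha on top of its observed count.
    This is a baseline sketch, not the paper's interpolated algorithm."""
    counts = Counter(sample)
    denom = len(sample) + alpha * vocab_size
    return lambda w: (counts.get(w, 0) + alpha) / denom

p = additive_smoothing("the cat sat on the mat".split(), vocab_size=10, alpha=0.5)
# p("the") = (2 + 0.5) / (6 + 0.5 * 10) = 2.5 / 11; p("dog") = 0.5 / 11
```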
HAYMAN ADMISSIBLE FUNCTIONS IN SEVERAL VARIABLES
"... Abstract. An alternative generalisation of Hayman’s admissible functions ([17]) to functions in several variables is developed and a multivariate asymptotic expansion for the coefficients is proved. In contrast to existing generalisations of Hayman admissibility ([7]), most of the closure properties ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
(Show Context)
Abstract. An alternative generalisation of Hayman’s admissible functions ([17]) to functions in several variables is developed and a multivariate asymptotic expansion for the coefficients is proved. In contrast to existing generalisations of Hayman admissibility ([7]), most of the closure properties which are satisfied by Hayman’s admissible functions can be shown to hold for this class of functions as well.