Results 1 – 8 of 8
Precise Minimax Redundancy and Regret
IEEE Trans. Information Theory, 2004
Abstract

Cited by 46 (15 self)
Recent years have seen a resurgence of interest in redundancy of lossless coding. The redundancy (regret) of universal fixed-to-variable length coding for a class of sources determines by how much the actual code length exceeds the optimal (ideal over the class) code length. In a minimax scenario one finds the best code for the worst source either in the worst case (also called maximal minimax) or on average. We first study the worst case minimax redundancy over a class of stationary ergodic sources and replace Shtarkov's bound by an exact formula. Among others, we prove that a generalized Shannon code minimizes the worst case redundancy, derive asymptotically its redundancy, and establish some general properties. This allows us to obtain precise redundancy rates for memoryless, Markov and renewal sources. For example, we derive the exact constant of the redundancy rate for memoryless and Markov sources by showing that the integer nature of coding contributes log(log m/(m − 1))/log m + o(1), where m is the size of the alphabet. Then we deal with the average minimax redundancy and regret. Our approach …
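The worst-case minimax redundancy discussed in this abstract is governed by Shtarkov's sum, the total maximum likelihood over all sequences. A minimal sketch, assuming the class of binary memoryless (Bernoulli) sources over sequences of length n (the function name `shtarkov_regret` is mine, not from the paper):

```python
import math

def shtarkov_regret(n):
    """Worst-case minimax regret (in bits) for binary memoryless sources
    over sequences of length n: log2 of Shtarkov's sum
    sum_x sup_p P_p(x).  For a sequence with k ones the supremum over p
    is attained at the maximum-likelihood estimate p = k/n."""
    total = 0.0
    for k in range(n + 1):
        if 0 < k < n:
            p = k / n
            ml = (p ** k) * ((1 - p) ** (n - k))
        else:
            # All-zeros or all-ones sequence: sup_p P_p(x) = 1.
            ml = 1.0
        total += math.comb(n, k) * ml  # comb(n, k) sequences share this k
    return math.log2(total)

print(shtarkov_regret(100))
```

The returned value grows like (1/2) log₂ n plus a constant, matching the leading term of the precise asymptotics the paper derives for alphabet size m = 2.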
Universal Coding on Infinite Alphabets: Exponentially Decreasing Envelopes
2008
Abstract

Cited by 8 (2 self)
This paper deals with the problem of universal lossless coding on a countably infinite alphabet. It focuses on some classes of sources defined by an envelope condition on the marginal distribution, namely exponentially decreasing envelope classes with exponent α. The minimax redundancy of exponentially decreasing envelope classes is proved to be equivalent to (1/(4α)) log e · log² n. Then a coding strategy is proposed, with a Bayes redundancy equivalent to the maximin redundancy. Finally, an adaptive algorithm is provided, whose redundancy is equivalent to the minimax redundancy.
Minimax Pointwise Redundancy for Memoryless Models over Large Alphabets
Abstract

Cited by 4 (0 self)
We study the minimax pointwise redundancy of universal coding for memoryless models over large alphabets and present two main results: We first complete studies initiated in Orlitsky and Santhanam [15] deriving precise asymptotics of the minimax pointwise redundancy for all ranges of the alphabet size relative to the sequence length. Second, we consider the pointwise minimax redundancy for a family of models in which some symbol probabilities are fixed. The latter problem leads to a binomial sum for functions with superpolynomial growth. Our findings can be used to approximate numerically the minimax pointwise redundancy for various ranges of the sequence length and the alphabet size. These results are obtained by analytic techniques such as treelike generating functions and the saddle point method.
Average redundancy for known sources: ubiquitous trees in source coding
Proceedings, Fifth Colloquium on Mathematics and Computer Science (Blaubeuren, 2008), Discrete Math. Theor. Comput. Sci. Proc. AI, 2008
Abstract

Cited by 3 (0 self)
Analytic information theory aims at studying problems of information theory using analytic techniques of computer science and combinatorics. Following Hadamard’s precept, these problems are tackled by complex analysis methods such as generating functions, Mellin transform, Fourier series, saddle point method, analytic poissonization and depoissonization, and singularity analysis. This approach lies at the crossroads of computer science and information theory. In this survey we concentrate on one facet of information theory (i.e., source coding, better known as data compression), namely the redundancy rate problem. The redundancy rate problem determines by how much the actual code length exceeds the optimal code length. We further restrict our interest to the average redundancy for known sources, that is, when statistics of information sources are known. We present precise analyses of three types of lossless data compression schemes, namely fixed-to-variable (FV) length codes, variable-to-fixed (VF) length codes, and variable-to-variable (VV) length codes. In particular, we investigate average redundancy of Huffman, Tunstall, and Khodak codes. These codes have succinct representations as trees, either as coding or parsing trees, and we analyze here some of their parameters (e.g., the average path from the root to a leaf).
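The average redundancy studied in this survey is the gap between a code's expected length and the source entropy. A minimal sketch for the FV case, assuming a binary Huffman code built with the standard heap-based merge (the helper name `huffman_lengths` is mine, not from the survey):

```python
import heapq
import math

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code, built by repeatedly
    merging the two least probable subtrees."""
    # Heap entries: (probability, unique tiebreak id, {symbol: depth}).
    heap = [(p, i, {i: 0}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    tiebreak = len(probs)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        # Every symbol in the two merged subtrees gets one level deeper.
        merged = {s: d + 1 for s, d in d1.items()}
        merged.update({s: d + 1 for s, d in d2.items()})
        heapq.heappush(heap, (p1 + p2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

probs = [0.4, 0.3, 0.2, 0.1]
lengths = huffman_lengths(probs)
avg_len = sum(probs[s] * lengths[s] for s in lengths)
entropy = -sum(p * math.log2(p) for p in probs)
# Average redundancy of the code: expected length minus entropy (>= 0).
print(avg_len - entropy)
```

For this source the code lengths are (1, 2, 3, 3), the expected length is 1.9 bits, and the redundancy is the small positive gap to the entropy; the survey's results characterize how such gaps behave as the block length grows.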
Algorithms, Combinatorics, Information, and Beyond
2012
Abstract
Shannon information theory aims at finding fundamental limits for storage and communication, including rates of convergence to these limits. Indeed, many interesting information theoretic phenomena seem to appear in the second order asymptotics. So we first discuss precise analysis of the minimax redundancy that can be viewed as a measure of learnable or useful information. Then we highlight Markov types, unveiling some interesting connections to combinatorics of graphical enumeration and linear Diophantine equations. Next we turn our attention to structural compression of graphical objects, proposing a compression algorithm achieving the lower bound represented by the structural entropy. These results are obtained using tools of analytic combinatorics and analysis of algorithms, also known as analytic information theory. Finally, we argue that perhaps information theory needs to be broadened if it is to meet today’s challenges beyond its original goals (of traditional communication) in biology, economics, modern communication, and knowledge extraction. One of the essential components of this perspective is to continue building foundations in better understanding of temporal, spatial, structural and semantic information in dynamic networks with limited resources. Recently, the National Science Foundation has established the first Science and Technology Center on Science of Information (CSoI) to address these challenges and develop tools to move beyond our current understanding of information flow in communication and storage systems.
Analytic Information Theory and the Redundancy Rate Problem
2000
Abstract
called the Bernoulli model.) M2. A Markov model assumes an underlying finite set of states with transition probability p_{i,j} between states i and j and a mapping from states to letters. As discovered by Shannon around 1949, information is measured by entropy. The entropy of a probability distribution P = {p_s}_{s∈S} over any finite set S is defined as H(P) := −∑_{s∈S} p_s lg p_s, where lg x = log₂ x. (Roughly, the definition extends the fact that an element in a set of cardinality m needs to be encoded by about lg m bits in order to be distinguished from its companion elements.) Most "reasonable
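The entropy definition above is easy to check numerically. A minimal sketch (the function name `entropy` is mine, not from the paper):

```python
import math

def entropy(P):
    """Entropy H(P) = -sum_s p_s * lg p_s in bits, where lg = log base 2.
    Terms with p_s = 0 contribute nothing, by the convention 0 lg 0 = 0."""
    return -sum(p * math.log2(p) for p in P if p > 0)

# A uniform distribution over a set of cardinality m = 8 needs
# about lg 8 = 3 bits per element, as the parenthetical remark says.
print(entropy([1/8] * 8))   # → 3.0
# A biased source carries less than 1 bit per symbol.
print(entropy([0.9, 0.1]))
```
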
President’s Column
Abstract
The writing of this column has been marked by many different emotions. When I began composing my message, I was thinking about continuing our reflection on our Society in the context of our IEEE review and of the upcoming ISIT. Having attended the TAB meeting in February, where I attended our Transactions' glowing review, and being in the midst of preparing for ISIT in Cambridge, I was trying to distill for this column the promises and challenges that lie before us. Before I was able to commit my thoughts to text, the untimely death of our colleague …