Results 1–10 of 32
Arithmetic coding
 IBM J. Res. Develop
, 1979
"... Arithmetic coding is a data compression technique that encodes data (the data string) by creating a code string which represents a fractional value on the number line between 0 and 1. The coding algorithm is symbolwise recursive; i.e., it operates upon and encodes (decodes) one data symbol per itera ..."
Abstract

Cited by 197 (0 self)
 Add to MetaCart
Arithmetic coding is a data compression technique that encodes data (the data string) by creating a code string which represents a fractional value on the number line between 0 and 1. The coding algorithm is symbolwise recursive; i.e., it operates upon and encodes (decodes) one data symbol per iteration or recursion. On each recursion, the algorithm successively partitions an interval
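The interval-partitioning recursion described above can be sketched in a few lines. This is a minimal illustration only, using exact `Fraction` arithmetic instead of the finite-precision registers a practical coder needs; the two-symbol model and its probabilities are hypothetical:

```python
from fractions import Fraction

# Hypothetical binary source model: P(a) = 3/4, P(b) = 1/4,
# stored as (cumulative probability, probability).
MODEL = {"a": (Fraction(0), Fraction(3, 4)),
         "b": (Fraction(3, 4), Fraction(1, 4))}

def encode(data):
    """Shrink [low, low + width) once per symbol; any point inside
    the final interval identifies the data string."""
    low, width = Fraction(0), Fraction(1)
    for sym in data:
        cum, p = MODEL[sym]
        low += cum * width      # move to the symbol's subinterval
        width *= p              # shrink to its probability mass
    return low, width

def decode(point, n):
    """Recover n symbols from a fractional value in [0, 1)."""
    out = []
    for _ in range(n):
        for sym, (cum, p) in MODEL.items():
            if cum <= point < cum + p:
                out.append(sym)
                point = (point - cum) / p   # rescale and recurse
                break
    return "".join(out)

low, width = encode("abba")
assert decode(low, 4) == "abba"
```

The final interval width equals the probability of the whole string, which is why the code length approaches the string's self-information.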
To Code, or Not to Code: Lossy Source-Channel Communication Revisited
 IEEE TRANS. INFORM. THEORY
, 2003
"... What makes a sourcechannel communication system optimal? It is shown that in order to achieve an optimal costdistortion tradeoff, the source and the channel have to be matched in a probabilistic sense. The match (or lack of it) involves the source distribution, the distortion measure, the channel ..."
Abstract

Cited by 89 (5 self)
 Add to MetaCart
What makes a source-channel communication system optimal? It is shown that in order to achieve an optimal cost-distortion tradeoff, the source and the channel have to be matched in a probabilistic sense. The match (or lack of it) involves the source distribution, the distortion measure, the channel conditional distribution, and the channel input cost function. Closed-form necessary and sufficient expressions relating the above entities are given. This generalizes both the separation-based approach as well as the two well-known examples of optimal uncoded communication. The condition of
The Context Tree Weighting Method: Basic Properties
 IEEE Transactions on Information Theory
, 1995
"... We describe a sequential universal data compression procedure for binary tree sources that performs the "double mixture". Using a context tree, this method weights in an efficient recursive way the coding distributions corresponding to all bounded memory tree sources, and achieves a desirable coding ..."
Abstract

Cited by 79 (1 self)
 Add to MetaCart
We describe a sequential universal data compression procedure for binary tree sources that performs the "double mixture". Using a context tree, this method weights in an efficient recursive way the coding distributions corresponding to all bounded memory tree sources, and achieves a desirable coding distribution for tree sources with an unknown model and unknown parameters. Computational and storage complexity of the proposed procedure are both linear in the source sequence length. We derive a natural upper bound on the cumulative redundancy of our method for individual sequences. The three terms in this bound can be identified as coding, parameter and model redundancy. The bound holds for all source sequence lengths, not only for asymptotically large lengths. The analysis that leads to this bound is based on standard techniques and turns out to be extremely simple. Our upper bound on the redundancy shows that the proposed context tree weighting procedure is optimal in the sense that i...
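The "double mixture" can be sketched for the binary case: each context-tree node keeps a Krichevsky–Trofimov (KT) estimate, and the weighted probability mixes that estimate with the product of the children's weighted probabilities. A minimal sketch, assuming an all-zero past and a hypothetical input sequence (depth, sequence, and class layout are illustrative, not the paper's notation):

```python
from fractions import Fraction

class Node:
    def __init__(self):
        self.a = self.b = 0          # counts of 0s and 1s in this context
        self.pe = Fraction(1)        # sequential KT estimate
        self.children = {}           # deeper contexts: bit -> Node

def update(root, context, bit, depth):
    """Update the KT estimates along the context path for one symbol."""
    node, path = root, [root]
    for c in context[:depth]:
        node = node.children.setdefault(c, Node())
        path.append(node)
    for node in path:
        n0, n1 = node.a, node.b
        # sequential KT update: P(next = bit | counts) = (count + 1/2)/(n + 1)
        node.pe *= Fraction(((n1 if bit else n0) * 2 + 1),
                            (n0 + n1 + 1) * 2)
        if bit:
            node.b += 1
        else:
            node.a += 1

def weighted(node, depth):
    """P_w = pe at the leaves; (pe + P_w(child 0) * P_w(child 1)) / 2 inside."""
    if node is None:
        return Fraction(1)           # empty subtree: probability 1
    if depth == 0:
        return node.pe
    pw = Fraction(1)
    for c in (0, 1):
        pw *= weighted(node.children.get(c), depth - 1)
    return (node.pe + pw) / 2

D = 3
root = Node()
seq = [0, 1, 0, 0, 1, 0, 0, 1]
past = [0] * D                        # assumed all-zero past
for i, bit in enumerate(seq):
    ctx = (past + seq[:i])[::-1][:D]  # most recent symbol first
    update(root, ctx, bit, D)
pw = weighted(root, D)
# -log2(pw) is the CTW code length for seq
```

Both complexity claims in the abstract are visible here: each symbol touches one root-to-leaf path (fixed work per letter), and storage grows with the number of distinct contexts.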
Lossy Source Coding
 IEEE Trans. Inform. Theory
, 1998
"... Lossy coding of speech, highquality audio, still images, and video is commonplace today. However, in 1948, few lossy compression systems were in service. Shannon introduced and developed the theory of source coding with a fidelity criterion, also called ratedistortion theory. For the first 25 year ..."
Abstract

Cited by 72 (1 self)
 Add to MetaCart
Lossy coding of speech, high-quality audio, still images, and video is commonplace today. However, in 1948, few lossy compression systems were in service. Shannon introduced and developed the theory of source coding with a fidelity criterion, also called rate-distortion theory. For the first 25 years of its existence, rate-distortion theory had relatively little impact on the methods and systems actually used to compress real sources. Today, however, rate-distortion theoretic concepts are an important component of many lossy compression techniques and standards. We chronicle the development of rate-distortion theory and provide an overview of its influence on the practice of lossy source coding. Index Terms—Data compression, image coding, speech coding, rate-distortion theory, signal coding, source coding with a fidelity criterion, video coding.
On the construction of some capacity-approaching coding schemes
, 2000
"... This thesis proposes two constructive methods of approaching the Shannon limit very closely. Interestingly, these two methods operate in opposite regions, one has a block length of one and the other has a block length approaching infinity. The first approach is based on novel memoryless joint source ..."
Abstract

Cited by 57 (2 self)
 Add to MetaCart
This thesis proposes two constructive methods of approaching the Shannon limit very closely. Interestingly, these two methods operate in opposite regions: one has a block length of one and the other has a block length approaching infinity. The first approach is based on novel memoryless joint source-channel coding schemes. We first show some examples of sources and channels where no coding is optimal for all values of the signal-to-noise ratio (SNR). When the source bandwidth is greater than the channel bandwidth, joint coding schemes based on space-filling curves and other families of curves are proposed. For uniform sources and modulo channels, our coding scheme based on space-filling curves operates within 1.1 dB of Shannon’s rate-distortion bound. For Gaussian sources and additive white Gaussian noise (AWGN) channels, we can achieve within 0.9 dB of the rate-distortion bound. The second scheme is based on low-density parity-check (LDPC) codes. We first demonstrate that we can translate threshold values of an LDPC code between channels accurately using a simple mapping. We develop some models for density evolution
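The thesis's analog curves are not reproduced here, but the reason space-filling curves suit 2:1 bandwidth compression can be illustrated with the standard discrete Hilbert-curve index (the classic `xy2d`/`d2xy` construction, not the thesis's scheme): two quantized source samples (x, y) map to one curve position d, and adjacent positions are adjacent grid points, so small channel noise on d decodes to a nearby sample pair. The grid size `n` is an assumption for illustration:

```python
def rot(n, x, y, rx, ry):
    """Rotate/flip a quadrant so the sub-curve orientation matches."""
    if ry == 0:
        if rx == 1:
            x, y = n - 1 - x, n - 1 - y
        x, y = y, x
    return x, y

def xy2d(n, x, y):
    """Encoder: map a sample pair (x, y) on an n-by-n grid to its
    position d along the Hilbert curve (one 'channel' value)."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = rot(n, x, y, rx, ry)
        s //= 2
    return d

def d2xy(n, d):
    """Decoder: recover the sample pair from the curve position."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        x, y = rot(s, x, y, rx, ry)
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

# Adjacent curve positions d and d+1 are adjacent grid points, so a
# small perturbation of d causes only a small error in the decoded pair.
```

This locality is exactly what makes the mapping robust under a continuous-valued noisy channel; the analog curves in the thesis refine the same idea.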
A Convergent Gambling Estimate of the Entropy of English
 IEEE Transactions on Information Theory
, 1978
"... AbstmctIn his original paper on the subject, Shannon found upper which follow using the boundedness and continuity of and lower bounds for the entropy of printed English based on the number h(p) =p logp (1p) log (1p). In addition, if English of trials required for a subject to guess subsequent ..."
Abstract

Cited by 53 (1 self)
 Add to MetaCart
Abstract—In his original paper on the subject, Shannon found upper and lower bounds for the entropy of printed English based on the number of trials required for a subject to guess subsequent symbols in a given text. The guessing approach precludes asymptotic consistency of either the upper or lower bounds except for degenerate ergodic processes. Shannon’s technique of guessing the next symbol is altered by having the subject place sequential bets on the next symbol of text. If S_n denotes the subject’s capital after n bets at 27 for 1 odds, and if it is assumed that the subject knows the underlying probability distribution for the process X, then the entropy estimate is Ĥ_n(X) = (1 − (1/n) log_27 S_n) log_2 27 bits/symbol. If the subject does not know the true probability distribution for the stochastic process, then Ĥ_n(X) is an asymptotic upper bound for the true entropy. If X is stationary, E Ĥ_n(X) → H(X), H(X) being the true entropy of the process; this follows using the boundedness and continuity of h(p) = −p log p − (1 − p) log (1 − p). In addition, if English is an ergodic process, then the Shannon–McMillan–Breiman theorem states that −(1/n) log_2 p(X_1, …, X_n) → H(X) a.e., so if printed English is indeed an ergodic process, then for sufficiently large n a good estimate of H(X) can be obtained.
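The estimate Ĥ_n(X) = (1 − (1/n) log_27 S_n) log_2 27 is a one-line computation. A minimal sketch; the capital values below are hypothetical, not data from the paper:

```python
from math import log

def entropy_estimate(capital, n, alphabet=27):
    """Gambling entropy estimate: (1 - (1/n) log_27 S_n) * log2(27)
    bits/symbol, for a gambler starting with capital 1 and betting
    at 27-for-1 odds on each of n symbols."""
    return (1 - log(capital, alphabet) / n) * log(alphabet, 2)

# A gambler whose capital never grows (S_n = 1) learns nothing: the
# estimate is the maximum, log2(27) ~ 4.75 bits/symbol.
# A hypothetical gambler whose capital reaches 27**(n/2) halves it:
# the estimate becomes log2(27)/2.
```

Faster capital growth means the subject's bets track the source better, so the entropy estimate falls, matching the intuition that predictable text is compressible.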
Joint Source-Channel Coding With Variable Length Codes
, 2000
"... We address the problem of joint sourcechannel coding when variable length codes are used for information transmission over a discrete memoryless channel. The data transmitted over the channel are interpreted as pairs (m k ; t k ); where m k is the message generated by the source and t k is the time ..."
Abstract

Cited by 43 (1 self)
 Add to MetaCart
We address the problem of joint source-channel coding when variable length codes are used for information transmission over a discrete memoryless channel. The data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is the message generated by the source and t_k is the time instant when the transmission of the kth codeword begins. The decoder constructs an estimate of the transmitted sequence of pairs, and the kth decoding error is introduced as the event that the pair (m_k, t_k) does not belong to this sequence. We describe the maximum-likelihood decoding algorithm and prove a lower bound on the exponent of the decoding error probability. For a subclass of discrete memoryless sources and discrete memoryless channels this bound is asymptotically tight.
Low Complexity Sequential Lossless Coding for Piecewise Stationary Memoryless Sources
 IEEE Transactions on Information Theory
, 1999
"... Abstract — Three strongly sequential, lossless compression schemes, one with linearly growing perletter computational complexity, and two with fixed perletter complexity, are presented and analyzed for memoryless sources with abruptly changing statistics. The first method, which improves on Willem ..."
Abstract

Cited by 24 (2 self)
 Add to MetaCart
Abstract—Three strongly sequential, lossless compression schemes, one with linearly growing per-letter computational complexity and two with fixed per-letter complexity, are presented and analyzed for memoryless sources with abruptly changing statistics. The first method, which improves on Willems’ weighting approach, asymptotically achieves a lower bound on the redundancy, and hence is optimal. The second scheme achieves redundancy of O(log N/N) when the transitions in the statistics are large, and O(log log N / log N) otherwise. The third approach always achieves redundancy of O(√(log N/N)). Obviously, the two fixed-complexity approaches can be easily combined to achieve the better redundancy of the two. Simulation results support the analytical bounds derived for all the coding schemes. Index Terms—Change detection, ideal code length, minimum description length, piecewise-stationary memoryless source, redundancy, segmentation, sequential coding, source block code, strongly sequential coding, transition path, universal coding, weighting.
Joint source-channel coding error exponent for discrete communication systems with Markovian memory
 IEEE Trans. Info. Theory
, 2007
"... Abstract—We investigate the computation of Csiszár’s bounds for the joint source–channel coding (JSCC) error exponent of a communication system consisting of a discrete memoryless source and a discrete memoryless channel. We provide equivalent expressions for these bounds and derive explicit formula ..."
Abstract

Cited by 23 (9 self)
 Add to MetaCart
Abstract—We investigate the computation of Csiszár’s bounds for the joint source–channel coding (JSCC) error exponent E_J of a communication system consisting of a discrete memoryless source and a discrete memoryless channel. We provide equivalent expressions for these bounds and derive explicit formulas for the rates where the bounds are attained. These equivalent representations can be readily computed for arbitrary source–channel pairs via Arimoto’s algorithm. When the channel’s distribution satisfies a symmetry property, the bounds admit closed-form parametric expressions. We then use our results to provide a systematic comparison between the JSCC error exponent E_J and the tandem coding error exponent E_T, which applies if the source and channel are separately coded. It is shown that E_J ≤ 2E_T. We establish conditions for which E_J > E_T and for which E_J = 2E_T. Numerical examples indicate that E_J is close to 2E_T for many source–channel pairs. This gain translates into a power saving larger than 2 dB for a binary source transmitted over additive white Gaussian noise (AWGN) channels and Rayleigh-fading channels with finite output quantization. Finally, we study the computation of the lossy JSCC error exponent under the Hamming distortion measure. Index Terms—Discrete memoryless sources and channels, error exponent, Fenchel’s duality, Hamming distortion measure, joint source–channel coding, random-coding exponent, reliability function, sphere-packing exponent, symmetric channels, tandem source and channel coding.
On universal types
 PROC. ISIT 2004
, 2004
"... We define the universal type class of a sequence x n, in analogy to the notion used in the classical method of types. Two sequences of the same length are said to be of the same universal (LZ) type if and only if they yield the same set of phrases in the incremental parsing of Ziv and Lempel (1978 ..."
Abstract

Cited by 22 (6 self)
 Add to MetaCart
We define the universal type class of a sequence x^n, in analogy to the notion used in the classical method of types. Two sequences of the same length are said to be of the same universal (LZ) type if and only if they yield the same set of phrases in the incremental parsing of Ziv and Lempel (1978). We show that the empirical probability distributions of any finite order of two sequences of the same universal type converge, in the variational sense, as the sequence length increases. Consequently, the normalized logarithms of the probabilities assigned by any kth-order probability assignment to two sequences of the same universal type, as well as the kth-order empirical entropies of the sequences, converge for all k. We study the size of a universal type class, and show that its asymptotic behavior parallels that of the conventional counterpart, with the LZ78 code length playing the role of the empirical entropy. We also estimate the number of universal types for sequences of length n, and show that it is of the form exp((1 + o(1)) γn/log n) for a well-characterized constant γ. We describe algorithms for enumerating the sequences in a universal type class, and for drawing a sequence from the class with uniform probability. As an application, we consider the problem of universal simulation of individual sequences. A sequence drawn with uniform probability from the universal type class of x^n is an optimal simulation of x^n in a well-defined mathematical sense.
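The incremental (LZ78) parsing that defines the universal type is short enough to state directly. A minimal sketch; the example strings are hypothetical:

```python
def lz78_phrases(seq):
    """Incremental parsing of Ziv and Lempel (1978): each phrase is the
    shortest prefix of the remaining input not already in the dictionary.
    Two equal-length sequences are of the same universal type iff they
    yield the same *set* of phrases."""
    phrases, current = set(), ""
    for sym in seq:
        current += sym
        if current not in phrases:   # phrase complete: record and restart
            phrases.add(current)
            current = ""
    return phrases                   # any trailing partial phrase is dropped

# "ababbb" parses as a|b|ab|bb and "babbab" as b|a|bb|ab: the phrase
# sets coincide, so the two sequences share a universal type class.
```

Note that the phrase *set*, not the parsing order, is what matters: reordering the phrases produces a different sequence in the same universal type class.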