Results 1–10 of 25
Packet Loss Correlation in the MBone Multicast Network
, 1996
Abstract

Cited by 225 (17 self)
The recent success of multicast applications such as Internet teleconferencing illustrates the tremendous potential of applications built upon wide-area multicast communication services. A critical issue for such multicast applications and the higher-layer protocols required to support them is the manner in which packet losses occur within the multicast network. In this paper we present and analyze packet loss data collected on multicast-capable hosts at 17 geographically distinct locations in Europe and the US, connected via the MBone. We experimentally and quantitatively examine the spatial and temporal correlation in packet loss among participants in a multicast session. Our results show that there is some spatial correlation in loss among the multicast sites. However, the shared loss in the backbone of the MBone is, for the most part, low. We find a fairly significant amount of burst loss (consecutive losses) at most sites. In every dataset, at least one receiver experienced ...
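The per-receiver loss rates, burst losses, and shared (spatially correlated) losses examined above can be computed from binary loss traces. The following sketch is illustrative only; the trace data and function names are invented, not taken from the paper.

```python
# Hypothetical sketch: loss-rate, burst-loss, and shared-loss statistics
# from 0/1 loss traces (1 = packet lost), one trace per receiver.

def loss_rate(trace):
    """Fraction of packets lost in a 0/1 loss trace."""
    return sum(trace) / len(trace)

def burst_lengths(trace):
    """Lengths of runs of consecutive losses (burst losses)."""
    bursts, run = [], 0
    for lost in trace:
        if lost:
            run += 1
        elif run:
            bursts.append(run)
            run = 0
    if run:
        bursts.append(run)
    return bursts

def shared_loss_fraction(trace_a, trace_b):
    """Fraction of packets lost simultaneously at two receivers,
    a crude indicator of spatially correlated (shared) loss."""
    shared = sum(a and b for a, b in zip(trace_a, trace_b))
    return shared / len(trace_a)

# Two toy receiver traces.
r1 = [0, 1, 1, 0, 0, 1, 0, 0]
r2 = [0, 1, 0, 0, 0, 1, 0, 1]
print(loss_rate(r1))                 # 0.375
print(burst_lengths(r1))             # [2, 1]
print(shared_loss_fraction(r1, r2))  # 0.25
```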
The Method of Types
, 1998
Abstract

Cited by 126 (0 self)
The method of types is one of the key technical tools in Shannon theory, and this tool is valuable also in other fields. In this paper, some key applications will be presented in sufficient detail to enable an interested nonspecialist to gain a working knowledge of the method, and a wide selection of further applications will be surveyed. These range from hypothesis testing and large deviations theory through error exponents for discrete memoryless channels and capacity of arbitrarily varying channels to multiuser problems. While the method of types is suitable primarily for discrete memoryless models, its extensions to certain models with memory will also be discussed. Index Terms—Arbitrarily varying channels, choice of decoder, counting approach, error exponents, extended type concepts, hypothesis testing, large deviations, multiuser problems, universal coding.
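The counting fact at the heart of the method of types can be checked numerically: the number of length-n sequences with a given empirical distribution P is a multinomial coefficient, bounded above by 2^{nH(P)}. This small illustration is my own, not from the paper.

```python
# Illustrative check of the basic type-counting bound |T(P)| <= 2^{n H(P)}.
import math

def type_class_size(counts):
    """Number of length-n sequences whose symbol counts equal `counts`."""
    n = sum(counts)
    size = math.factorial(n)
    for c in counts:
        size //= math.factorial(c)
    return size

def entropy_bits(counts):
    """Entropy H(P) in bits of the empirical distribution given by counts."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

counts = [6, 2, 2]                       # a type over a 3-letter alphabet, n = 10
n = sum(counts)
size = type_class_size(counts)
bound = 2 ** (n * entropy_bits(counts))
assert size <= bound                     # |T(P)| <= 2^{n H(P)}
print(size)                              # 1260
```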
The consistency of the BIC Markov order estimator.
Abstract

Cited by 64 (3 self)
The Bayesian Information Criterion (BIC) estimates the order of a Markov chain (with finite alphabet A) from observation of a sample path x_1, x_2, ..., x_n, as the value k̂ that minimizes the sum of the negative logarithm of the k-th order maximum likelihood and the penalty term |A|^k (|A|−1)/2 · log n. We show that k̂ equals the correct order of the chain, eventually almost surely as n → ∞, thereby strengthening earlier consistency results that assumed an a priori bound on the order. A key tool is a strong ratio-typicality result for Markov sample paths. We also show that the Bayesian estimator, or minimum description length estimator, of which the BIC estimator is an approximation, fails to be consistent for the uniformly distributed i.i.d. process. AMS 1991 subject classification: Primary 62F12, 62M05; Secondary 62F13, 60J10. Key words and phrases: Bayesian Information Criterion, order estimation, ratio-typicality, Markov chains.
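The criterion described above is directly computable: for each candidate order k, sum the log-likelihood of the observed k-th order transition counts and add the penalty |A|^k (|A|−1)/2 · log n. A minimal sketch (my own illustration, with an invented function name):

```python
# Hypothetical sketch of the BIC Markov order estimator: minimize
# -log ML_k(x) + |A|^k (|A|-1)/2 * log n over candidate orders k.
import math
from collections import Counter

def bic_order(x, alphabet_size, max_order):
    """Return the order k minimizing the BIC score for sample path x."""
    n = len(x)
    best_k, best_score = None, float("inf")
    for k in range(max_order + 1):
        # Count k-th order transitions: (context of length k) -> next symbol.
        trans, ctx_tot = Counter(), Counter()
        for i in range(k, n):
            ctx = tuple(x[i - k:i])
            trans[(ctx, x[i])] += 1
            ctx_tot[ctx] += 1
        # Maximum log-likelihood under the k-th order model.
        log_ml = sum(c * math.log(c / ctx_tot[ctx])
                     for (ctx, _), c in trans.items())
        penalty = alphabet_size ** k * (alphabet_size - 1) / 2 * math.log(n)
        score = -log_ml + penalty
        if score < best_score:
            best_k, best_score = k, score
    return best_k

# A deterministic order-1 chain over {0, 1}: 0 -> 1 -> 0 -> 1 ...
x = [i % 2 for i in range(400)]
print(bic_order(x, alphabet_size=2, max_order=3))  # 1
```

Higher orders fit the alternating path equally well (zero extra log-likelihood) but pay an exponentially larger penalty, so the estimator settles on k = 1.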
Joint sourcechannel coding error exponent for discrete communication systems with Markovian memory
 IEEE Trans. Info. Theory
, 2007
Abstract

Cited by 29 (11 self)
Abstract—We investigate the computation of Csiszár's bounds for the joint source–channel coding (JSCC) error exponent of a communication system consisting of a discrete memoryless source and a discrete memoryless channel. We provide equivalent expressions for these bounds and derive explicit formulas for the rates where the bounds are attained. These equivalent representations can be readily computed for arbitrary source–channel pairs via Arimoto's algorithm. When the channel's distribution satisfies a symmetry property, the bounds admit closed-form parametric expressions. We then use our results to provide a systematic comparison between the JSCC error exponent and the tandem coding error exponent, which applies if the source and channel are separately coded. It is shown that the JSCC error exponent is at most twice the tandem coding error exponent. We establish conditions under which it strictly exceeds the tandem exponent and conditions under which it equals twice the tandem exponent. Numerical examples indicate that the JSCC error exponent is close to twice the tandem exponent for many source–channel pairs. This gain translates into a power saving larger than 2 dB for a binary source transmitted over additive white Gaussian noise (AWGN) channels and Rayleigh-fading channels with finite output quantization. Finally, we study the computation of the lossy JSCC error exponent under the Hamming distortion measure. Index Terms—Discrete memoryless sources and channels, error exponent, Fenchel's duality, Hamming distortion measure, joint source–channel coding, random-coding exponent, reliability function, sphere-packing exponent, symmetric channels, tandem source and channel coding.
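The abstract notes that its equivalent representations can be computed via Arimoto's algorithm. As a related minimal illustration (not the paper's exponent computation), here is the classical Blahut–Arimoto iteration for the capacity of a discrete memoryless channel:

```python
# Blahut-Arimoto iteration for channel capacity, as a sketch of the
# alternating-optimization style of computation mentioned above.
import math

def blahut_arimoto(W, iters=200):
    """Capacity (nats) of a DMC with W[x][y] = P(y|x)."""
    nx, ny = len(W), len(W[0])
    p = [1.0 / nx] * nx                         # input distribution
    for _ in range(iters):
        # Output distribution induced by p.
        q = [sum(p[x] * W[x][y] for x in range(nx)) for y in range(ny)]
        # c[x] = exp( D( W(.|x) || q ) )
        c = [math.exp(sum(W[x][y] * math.log(W[x][y] / q[y])
                          for y in range(ny) if W[x][y] > 0))
             for x in range(nx)]
        z = sum(p[x] * c[x] for x in range(nx))
        p = [p[x] * c[x] / z for x in range(nx)]
    return math.log(z)

# Binary symmetric channel, crossover 0.1: C = log 2 - H(0.1) nats.
bsc = [[0.9, 0.1], [0.1, 0.9]]
h = -(0.9 * math.log(0.9) + 0.1 * math.log(0.1))
print(abs(blahut_arimoto(bsc) - (math.log(2) - h)) < 1e-6)  # True
```

For this symmetric channel the uniform input is already optimal, so the iteration is at its fixed point from the start.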
Optimal error exponents in hidden Markov models order estimation
 IEEE Trans. Inf. Theory
, 2003
Abstract

Cited by 24 (5 self)
Abstract—We consider the estimation of the number of hidden states (the order) of a discrete-time finite-alphabet hidden Markov model (HMM). The estimators we investigate are related to code-based order estimators: penalized maximum-likelihood (ML) estimators and penalized versions of the mixture estimator introduced by Liu and Narayan. We prove strong consistency of those estimators without assuming any a priori upper bound on the order, and with smaller penalties than in previous works. We prove a version of Stein's lemma for HMM order estimation and derive an upper bound on underestimation exponents. Then we prove that this upper bound can be achieved by the penalized ML estimator and by the penalized mixture estimator. The proof of the latter result gets around the elusive nature of the ML in HMMs by resorting to large-deviation techniques for empirical processes. Finally, we prove that for any consistent HMM order estimator, for most HMMs, the overestimation exponent is null. Index Terms—Composite hypothesis testing, error exponents, generalized likelihood ratio testing, hidden Markov model (HMM), large deviations, order estimation, Stein's lemma.
Learning HighDimensional Markov Forest Distributions: Analysis of Error Rates
, 2011
Abstract

Cited by 10 (6 self)
The problem of learning forest-structured discrete graphical models from i.i.d. samples is considered. An algorithm based on pruning of the Chow–Liu tree through adaptive thresholding is proposed. It is shown that this algorithm is both structurally consistent and risk consistent, and the error probability of structure learning decays faster than any polynomial in the number of samples under fixed model size. For the high-dimensional scenario where the size of the model d and the number of edges k scale with the number of samples n, sufficient conditions on (n, d, k) are given for the algorithm to satisfy structural and risk consistencies. In addition, the extremal structures for learning are identified; we prove that the independent (resp. tree) model is the hardest (resp. easiest) to learn using the proposed algorithm in terms of error rates for structure learning.
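The pipeline described above (Chow–Liu tree plus pruning of weak edges) can be sketched with empirical mutual information and Kruskal's algorithm. This is my own illustrative version with a fixed threshold eps, not the paper's adaptive thresholding rule.

```python
# Illustrative forest learning: estimate pairwise empirical mutual
# informations, run Kruskal for a max-weight spanning tree, and keep only
# edges whose weight exceeds a threshold eps (simplified pruning).
import math
import random
from collections import Counter
from itertools import combinations

def empirical_mi(samples, i, j):
    """Empirical mutual information (nats) between coordinates i and j."""
    n = len(samples)
    pij = Counter((s[i], s[j]) for s in samples)
    pi = Counter(s[i] for s in samples)
    pj = Counter(s[j] for s in samples)
    return sum(c / n * math.log(c * n / (pi[a] * pj[b]))
               for (a, b), c in pij.items())

def chow_liu_forest(samples, d, eps):
    """Max-weight spanning tree via Kruskal, pruned at threshold eps."""
    edges = sorted(((empirical_mi(samples, i, j), i, j)
                    for i, j in combinations(range(d), 2)), reverse=True)
    parent = list(range(d))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    forest = []
    for w, i, j in edges:
        if w <= eps:
            break                      # pruning: drop weak edges
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            forest.append((i, j))
    return forest

# Toy data: X0 = X1 (strongly dependent), X2 an independent coin flip.
random.seed(0)
samples = [(b, b, random.randint(0, 1))
           for b in (random.randint(0, 1) for _ in range(500))]
print(chow_liu_forest(samples, d=3, eps=0.05))  # [(0, 1)]
```

Only the genuinely dependent pair survives the pruning; the near-zero empirical mutual informations involving X2 fall below eps, so the learned structure is a forest rather than a full tree.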
Stochastic chains with memory of variable length
 Festschrift for Jorma Rissanen (Grünwald et al., eds.), TICSP Series 38:117–133
, 2008
Abstract

Cited by 8 (0 self)
Dedicated to Jorma Rissanen on his 75th birthday. Stochastic chains with memory of variable length constitute an interesting family of stochastic chains of infinite order on a finite alphabet. The idea is that for each past, only a finite suffix of the past, called the context, is enough to predict the next symbol. These models were first introduced in the information theory literature by Rissanen (1983) as a universal tool to perform data compression. Recently, they have been used to model scientific data in areas as different as biology, linguistics and music. This paper presents a personal introductory guide to this class of models, focusing on the algorithm Context and its rate of convergence.
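The defining idea (the next-symbol distribution depends only on a finite suffix of the past, the context) can be made concrete with a toy context tree. The context set and probabilities below are invented for illustration; this is not Rissanen's algorithm Context itself.

```python
# Toy variable-length-memory chain: look up the shortest suffix of the
# past that appears in a fixed context set, then read off the
# next-symbol distribution. Contexts are stored most-recent-symbol first.
CONTEXTS = {
    ("a",): {"a": 0.2, "b": 0.8},
    ("b", "a"): {"a": 0.7, "b": 0.3},
    ("b", "b"): {"a": 0.5, "b": 0.5},
}

def context_of(past):
    """Return the context of `past` (a list of symbols, oldest first)."""
    recent = past[::-1]                     # most recent symbol first
    for length in range(1, len(recent) + 1):
        ctx = tuple(recent[:length])
        if ctx in CONTEXTS:
            return ctx
    raise ValueError("no context matches this past")

def next_symbol_dist(past):
    """Predictive distribution of the next symbol given the past."""
    return CONTEXTS[context_of(past)]

print(context_of(list("bbba")))   # ('a',)
print(context_of(list("abab")))   # ('b', 'a')
```

A past ending in "a" needs only one symbol of memory, while a past ending in "b" needs two: the memory length varies with the past, which is exactly what distinguishes these chains from fixed-order Markov chains.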
Estimating and testing the order of a model
, 2002
Abstract

Cited by 8 (1 self)
This paper deals with order identification for nested models in the i.i.d. framework. We study the asymptotic efficiency of two generalized likelihood ratio tests of the order. They are based on two estimators which are proved to be strongly consistent. A version of Stein's lemma yields an optimal underestimation error exponent. The lemma also implies that the overestimation error exponent is necessarily trivial. Our tests admit nontrivial underestimation error exponents. The optimal underestimation error exponent is achieved in some situations. The overestimation error can decay exponentially with respect to a positive power of the number of observations. These results are proved under mild assumptions by relating the underestimation (resp. overestimation) error to large (resp. moderate) deviations of the log-likelihood process. In particular, it is not necessary that the classical Cramér condition be satisfied; namely, the log-densities are not required to admit every exponential moment. Three benchmark examples with specific difficulties (location mixtures of normal distributions, abrupt changes and various regressions) are detailed so as to illustrate the generality of our results.
Order Estimation for a Special Class of Hidden Markov Sources and Binary Renewal Processes
 IEEE Trans. Inform. Theory
, 2002
Abstract

Cited by 8 (0 self)
We consider the estimation of the order, i.e., the number of hidden states, of a special class of discrete-time finite-alphabet hidden Markov sources. This class can be characterized in terms of equivalent renewal processes. No a priori bound is assumed on the maximum permissible order. An order estimator based on renewal types is constructed and shown to be strongly consistent by computing the precise asymptotics of the probability of estimation error. The probability of underestimating the true order decays exponentially in the number of observations, while the probability of overestimation goes to zero sufficiently fast. It is further shown that this estimator has the best possible error exponent in a large class of estimators. Our results are also valid for the general class of binary independent-renewal processes with finite mean renewal times.