Results 1–8 of 8
Optimal error exponents in hidden Markov models order estimation
IEEE Trans. Inf. Theory, 2003
Cited by 28 (6 self)
Abstract—We consider the estimation of the number of hidden states (the order) of a discrete-time finite-alphabet hidden Markov model (HMM). The estimators we investigate are related to code-based order estimators: penalized maximum-likelihood (ML) estimators and penalized versions of the mixture estimator introduced by Liu and Narayan. We prove strong consistency of those estimators without assuming any a priori upper bound on the order, and with smaller penalties than in previous works. We prove a version of Stein's lemma for HMM order estimation and derive an upper bound on underestimation exponents. Then we prove that this upper bound can be achieved by the penalized ML estimator and by the penalized mixture estimator. The proof of the latter result gets around the elusive nature of the ML in HMMs by resorting to large-deviation techniques for empirical processes. Finally, we prove that for any consistent HMM order estimator, for most HMMs, the overestimation exponent is null. Index Terms—Composite hypothesis testing, error exponents, generalized likelihood ratio testing, hidden Markov model (HMM), large deviations, order estimation, Stein's lemma.
NUMBER OF HIDDEN STATES AND MEMORY: A JOINT ORDER ESTIMATION PROBLEM FOR MARKOV CHAINS WITH MARKOV REGIME
Cited by 5 (4 self)
Abstract. This paper deals with order identification for Markov chains with Markov regime (MCMR) in the context of finite alphabets. We define the joint order of an MCMR process in terms of the number k of states of the hidden Markov chain and the memory m of the conditional Markov chain. We study the properties of penalized maximum-likelihood estimators for the unknown order (k, m) of an observed MCMR process, relying on information-theoretic arguments. The novelty of our work lies in the joint estimation of two structural parameters. Furthermore, the different models in competition are not nested. In an asymptotic framework, we prove that a penalized maximum-likelihood estimator is strongly consistent without prior bounds on k and m. We complement our theoretical work with a simulation study of its behaviour. We also study numerically the behaviour of the BIC criterion. A theoretical proof of its consistency seems to us presently out of reach for MCMR, as such a result does not yet exist in the simpler case where m = 0 (that is, for hidden Markov models).
Order Estimation and Model Selection, 2003
Cited by 2 (0 self)
reason why source coding concepts and techniques have become a standard tool in the area. This chapter presents four kinds of results: a first, very general consistency result in a Bayesian setting provides hints about the ideal penalties that could be used in penalized maximum-likelihood order estimation. Then we provide a general construction for strongly consistent order estimators based on universal coding arguments. The third main result reports a recent tour de force by Csiszár and Shields (2000), who show that the Bayesian Information Criterion provides a strongly consistent Markov order estimator. We conclude by presenting a general framework for analyzing the Bahadur efficiency of order estimation procedures, following the line of Gassiat and Boucheron (to appear). LRI UMR 8623 CNRS, Université Paris-Sud; Mathématiques, Université Paris-Sud. 2.1 Model Order Identification: what is it about? In the preceding chapters, we have been concerned with inference problems in HMMs where th
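The BIC Markov order estimator discussed above (the Csiszár–Shields result) can be illustrated with a small sketch: for each candidate order m, maximize the log-likelihood of observed transitions and subtract the penalty (|A|^m (|A|-1)/2) log n. This is a toy illustration, not the chapter's own code; the function name and the `max_order` cutoff are our assumptions.

```python
import math
from collections import Counter

def markov_bic_order(seq, alphabet, max_order=3):
    """Toy BIC order selector for a Markov chain over a finite alphabet.

    For each candidate order m, compute the maximized log-likelihood of
    the observed transitions and subtract the BIC penalty
    (k^m * (k - 1) / 2) * log(n), where k = |alphabet|. The cutoff
    max_order is a practical assumption; the consistency theory needs
    no prior bound on the order.
    """
    n = len(seq)
    k = len(alphabet)
    best = None
    for m in range(max_order + 1):
        counts = Counter()      # (context, next symbol) -> count
        ctx_counts = Counter()  # context -> count
        for i in range(m, n):
            ctx = tuple(seq[i - m:i])
            counts[(ctx, seq[i])] += 1
            ctx_counts[ctx] += 1
        # Maximized log-likelihood: sum N(c,a) * log(N(c,a) / N(c))
        ll = sum(c * math.log(c / ctx_counts[ctx])
                 for (ctx, _a), c in counts.items())
        penalty = 0.5 * (k ** m) * (k - 1) * math.log(n)
        score = ll - penalty
        if best is None or score > best[1]:
            best = (m, score)
    return best[0]
```

For a strictly alternating sequence ("abab..."), order 1 fits perfectly and wins despite its larger penalty; for a constant sequence, order 0 already achieves maximal likelihood, so the smallest penalty wins.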
Asymptotic Optimality of Empirical Likelihood for Selecting Moment Restrictions, Working paper, 2005
Cited by 1 (0 self)
This paper proposes large deviation optimality properties of the empirical likelihood testing (ELT) moment selection procedures for moment restriction models. Since the parameter space of the moment selection problem is discrete, the conventional Pitman-type local alternative approach is not very helpful. By applying the theory of large deviations, we analyze convergence rates of the error probabilities under some fixed distribution. We propose three optimality results for the ELT procedures: (i) the generalized Neyman–Pearson optimality under fixed critical values, (ii) a modified version of the generalized Neyman–Pearson optimality under decreasing critical values, and (iii) the minimax misclassification error optimality. By comparing the convergence rates of the error probabilities, we can evaluate moment selection procedures beyond consistency.
A. Nested Composite Hypothesis Testing
Abstract—This paper is concerned with error exponents in testing problems raised by autoregressive (AR) modeling. The tests to be considered are variants of generalized likelihood ratio testing corresponding to traditional approaches to autoregressive moving-average (ARMA) modeling estimation. In several related problems, such as Markov order or hidden Markov model order estimation, optimal error exponents have been determined thanks to large deviations theory. AR order testing is especially challenging since the natural tests rely on quadratic forms of Gaussian processes. In sharp contrast with empirical measures of Markov chains, the large deviation principles (LDPs) satisfied by Gaussian quadratic forms do not always admit an information-theoretic representation. Despite this impediment, we prove the existence of nontrivial error exponents for Gaussian AR order testing, and furthermore we exhibit situations where the exponents are optimal. These results are obtained by showing that the log-likelihood process indexed by AR models of a given order satisfies an LDP upper bound with a weakened information-theoretic representation. Index Terms—Error exponents, Gaussian processes, large deviations, Levinson–Durbin, order, test, time series.
ON UNIVERSAL ESTIMATES FOR BINARY RENEWAL PROCESSES
A binary renewal process is a stochastic process {Xn} taking values in {0, 1} where the lengths of the runs of 1's between successive zeros are independent. After observing X0, X1, ..., Xn one would like to predict the future behavior, and the problem of universal estimation is to do so without any prior knowledge of the distribution. We prove a variety of results of this type, including universal estimates for the expected time to renewal as well as estimates for the conditional distribution of the time to renewal. Some of our results require a moment condition on the time to renewal, and we show by an explicit construction how some moment condition is necessary.
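The empirical quantity underlying the universal estimates above is easy to sketch: collect the run lengths of 1's between successive zeros and use their average as a plug-in estimate of the expected time to renewal. This is an illustrative toy, not the paper's estimator (which must cope with unbounded renewal times and distribution-free guarantees); the function name is ours.

```python
def renewal_gap_estimates(bits):
    """Toy plug-in estimate for a binary renewal process.

    Collects the lengths of the runs of 1's between successive zeros
    (the renewal gaps) and returns them together with their mean, a
    naive empirical estimate of the expected time to renewal.
    """
    gaps = []
    run = 0          # length of the current run of 1's
    started = False  # have we seen the first zero yet?
    for b in bits:
        if b == 0:
            if started:
                gaps.append(run)  # run of 1's between two zeros
            run = 0
            started = True
        else:
            run += 1
    mean = sum(gaps) / len(gaps) if gaps else None
    return gaps, mean
```

For example, the sequence 0 1 1 0 1 0 0 1 1 1 0 has gaps 2, 1, 0, 3, giving a mean estimate of 1.5.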
Error exponents for AR order testing
Abstract—This paper is concerned with error exponents in testing problems raised by autoregressive (AR) modeling. The tests to be considered are variants of generalized likelihood ratio testing corresponding to traditional approaches to autoregressive moving-average (ARMA) modeling estimation. In several related problems, such as Markov order or hidden Markov model order estimation, optimal error exponents have been determined thanks to large deviations theory. AR order testing is especially challenging since the natural tests rely on quadratic forms of Gaussian processes. In sharp contrast with empirical measures of Markov chains, the large deviation principles satisfied by Gaussian quadratic forms do not always admit an information-theoretic representation. Despite this impediment, we prove the existence of nontrivial error exponents for Gaussian AR order testing, and furthermore we exhibit situations where the exponents are optimal. These results are obtained by showing that the log-likelihood process indexed by AR models of a given order satisfies a large deviation principle upper bound with a weakened information-theoretic representation.