Results 1–5 of 5
NUMBER OF HIDDEN STATES AND MEMORY: A JOINT ORDER ESTIMATION PROBLEM FOR MARKOV CHAINS WITH MARKOV REGIME
Cited by 5 (4 self)
Abstract. This paper deals with order identification for Markov chains with Markov regime (MCMR) in the context of finite alphabets. We define the joint order of an MCMR process in terms of the number k of states of the hidden Markov chain and the memory m of the conditional Markov chain. We study the properties of penalized maximum likelihood estimators for the unknown order (k, m) of an observed MCMR process, relying on information-theoretic arguments. The novelty of our work lies in the joint estimation of two structural parameters. Furthermore, the different models in competition are not nested. In an asymptotic framework, we prove that a penalized maximum likelihood estimator is strongly consistent without prior bounds on k and m. We complement our theoretical work with a simulation study of its behaviour. We also study numerically the behaviour of the BIC criterion. A theoretical proof of its consistency seems to us presently out of reach for MCMR, as such a result does not yet exist in the simpler case where m = 0 (that is, for hidden Markov models).

Résumé (translated from French). This work deals with order identification for a Markov chain with Markov regime (MCMR) over a finite alphabet. The order of an MCMR is defined as the pair (k, m), where k is the number of states of the hidden chain and m is the memory of the conditional Markov chain. We study penalized maximum likelihood estimators using techniques from …
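As a schematic illustration of the kind of selection rule studied in this paper (not the authors' actual estimator), a penalized-likelihood choice of the joint order (k, m) can be sketched as follows. Here `loglik(k, m)` is a hypothetical routine returning the maximized log-likelihood of an MCMR fit with k hidden states and conditional memory m (e.g. from an EM fit, not implemented here), and the BIC-style penalty of half the parameter count times log n is an assumption for illustration; the paper analyses heavier penalties for which strong consistency holds without prior bounds on k and m.

```python
import math

def select_joint_order(loglik, n, k_max, m_max, alphabet_size):
    """Schematic penalized-likelihood selection of the joint order (k, m).

    loglik(k, m) is assumed to return the maximized log-likelihood of an
    MCMR model with k hidden states and conditional memory m.  The penalty
    charges half the number of free parameters times log n (BIC-style).
    """
    def n_params(k, m):
        # k*(k-1) free hidden-transition parameters, plus one conditional
        # distribution over the alphabet per (hidden state, length-m context).
        return k * (k - 1) + k * (alphabet_size ** m) * (alphabet_size - 1)

    best, best_score = (1, 0), -math.inf
    for k in range(1, k_max + 1):
        for m in range(m_max + 1):
            score = loglik(k, m) - 0.5 * n_params(k, m) * math.log(n)
            if score > best_score:
                best, best_score = (k, m), score
    return best
```

Because models with different (k, m) are not nested, the grid search compares every candidate pair directly rather than stepping through a nested chain of models.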
Order Estimation and Model Selection
, 2003
Cited by 2 (0 self)
reason why source coding concepts and techniques have become a standard tool in the area. This chapter presents four kinds of results: a first, very general consistency result in a Bayesian setting provides hints about the ideal penalties that could be used in penalized maximum likelihood order estimation. Then we provide a general construction for strongly consistent order estimators based on universal coding arguments. The third main result reports a recent tour de force by Csiszár and Shields (2000), who show that the Bayesian Information Criterion provides a strongly consistent Markov order estimator. We conclude by presenting a general framework for analyzing the Bahadur efficiency of order estimation procedures, following the line of Gassiat and Boucheron (to appear). LRI, UMR 8623 CNRS, Université Paris-Sud; Mathématiques, Université Paris-Sud. 2.1 Model Order Identification: what is it about? In the preceding chapters, we have been concerned with inference problems in HMMs where th…
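The Csiszár–Shields result mentioned above concerns the BIC Markov order estimator. A minimal sketch of that criterion, assuming a finite alphabet encoded as integers 0..A-1 (the function name is ours, not from the chapter):

```python
import math
from collections import Counter

def bic_markov_order(x, alphabet_size, max_order):
    """Estimate the order of a finite-alphabet Markov chain with BIC.

    For each candidate order m, the maximized log-likelihood is
    sum_{s,a} N(s,a) * log(N(s,a) / N(s)) over observed contexts s and
    symbols a, penalized by (|A|^m * (|A|-1) / 2) * log(n).
    """
    n = len(x)
    best_m, best_score = 0, -math.inf
    for m in range(max_order + 1):
        trans = Counter()  # N(s, a): symbol a observed after context s
        ctx = Counter()    # N(s): occurrences of context s
        for t in range(m, n):
            s = tuple(x[t - m:t])
            trans[(s, x[t])] += 1
            ctx[s] += 1
        loglik = sum(c * math.log(c / ctx[s]) for (s, a), c in trans.items())
        penalty = 0.5 * (alphabet_size ** m) * (alphabet_size - 1) * math.log(n)
        score = loglik - penalty
        if score > best_score:
            best_m, best_score = m, score
    return best_m
```

The notable point of the Csiszár–Shields theorem is that this estimator is strongly consistent even though no prior bound on the order is imposed; the sketch above caps the search at `max_order` purely for computational convenience.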
ON UNIVERSAL ESTIMATES FOR BINARY RENEWAL PROCESSES
, 811
A binary renewal process is a stochastic process {Xn} taking values in {0,1} where the lengths of the runs of 1's between successive zeros are independent. After observing X0, X1, ..., Xn one would like to predict the future behavior, and the problem of universal estimators is to do so without any prior knowledge of the distribution. We prove a variety of results of this type, including universal estimates for the expected time to renewal as well as estimates for the conditional distribution of the time to renewal. Some of our results require a moment condition on the time to renewal, and we show by an explicit construction that some moment condition is necessary. 1. Introduction. The …
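As a toy illustration of the naive plug-in (non-universal) counterpart of such estimates, one can read off the complete runs of 1's between successive zeros and average them; only runs bounded by a zero on each side are kept, since a run touching the boundary of the sample may be truncated. Function names here are ours, not the paper's.

```python
def run_lengths_of_ones(x):
    """Lengths of maximal runs of 1's lying between successive zeros.

    Runs touching either end of the sample are discarded as possibly
    truncated; consecutive zeros contribute runs of length 0.
    """
    runs, current, seen_zero = [], 0, False
    for b in x:
        if b == 1:
            current += 1
        else:
            if seen_zero:  # run is bounded by zeros on both sides
                runs.append(current)
            current, seen_zero = 0, True
    return runs

def empirical_mean_run(x):
    """Plug-in estimate of the expected run length between renewals."""
    runs = run_lengths_of_ones(x)
    return sum(runs) / len(runs) if runs else None
```

The universal estimators studied in the paper are more delicate precisely because this plug-in average may behave badly when the time to renewal has heavy tails, which is where the moment conditions enter.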
Error exponents for AR order testing
Error exponents for AR order testing. Abstract — This paper is concerned with error exponents in testing problems raised by autoregressive (AR) modeling. The tests to be considered are variants of generalized likelihood ratio testing corresponding to traditional approaches to autoregressive moving-average (ARMA) model estimation. In several related problems, like Markov order or hidden Markov model order estimation, optimal error exponents have been determined thanks to large deviations theory. AR order testing is especially challenging since the natural tests rely on quadratic forms of Gaussian processes. In sharp contrast with empirical measures of Markov chains, the large deviation principles satisfied by Gaussian quadratic forms do not always admit an information-theoretic representation. Despite this impediment, we prove the existence of nontrivial error exponents for Gaussian AR order testing. Furthermore, we exhibit situations where the exponents are optimal. These results are obtained by showing that the log-likelihood process indexed by AR models of a given order satisfies a large deviation principle upper bound with a weakened information-theoretic representation.
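A minimal sketch of the kind of generalized likelihood ratio statistic involved, assuming a mean-zero Gaussian AR model fitted by ordinary least squares (NumPy is assumed available; this is an illustration of the test statistic, not the paper's construction): under Gaussian innovations the maximized log-likelihood of an AR(p) fit is, up to constants, -(n/2)·log of the residual variance, so 2·log LR between orders p0 < p1 is n·log(var_p0 / var_p1).

```python
import math
import numpy as np

def ar_fit_residual_var(x, p):
    """Least-squares AR(p) fit; returns the residual variance.

    For p = 0 the model is pure noise, so the residual variance is the
    empirical second moment (a mean-zero process is assumed).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if p == 0:
        return float(np.mean(x ** 2))
    # Design matrix of lagged values: row for time t holds x[t-1..t-p].
    X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return float(np.mean(resid ** 2))

def glr_statistic(x, p0, p1):
    """GLR statistic (2 log LR) for H0: AR(p0) against H1: AR(p1), p0 < p1."""
    v0 = ar_fit_residual_var(x, p0)
    v1 = ar_fit_residual_var(x, p1)
    return len(x) * math.log(v0 / v1)
```

The paper's point is precisely that the large-deviation behavior of such quadratic-form statistics is harder to characterize than that of the empirical-measure statistics arising in Markov order testing.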