Hidden Markov processes
IEEE Trans. Inform. Theory, 2002
Abstract

Cited by 185 (4 self)
Abstract—An overview of statistical and information-theoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discrete-time finite-state homogeneous Markov chain observed through a discrete-time memoryless invariant channel. In recent years, the work of Baum and Petrie on finite-state finite-alphabet HMPs was expanded to HMPs with finite as well as continuous state spaces and a general alphabet. In particular, statistical properties and ergodic theorems for relative entropy densities of HMPs were developed. Consistency and asymptotic normality of the maximum-likelihood (ML) parameter estimator were proved under some mild conditions. Similar results were established for switching autoregressive processes, which generalize HMPs. New algorithms were developed for estimating the state, parameter, and order of an HMP, for universal coding and classification of HMPs, and for universal decoding of hidden Markov channels. These and other related topics are reviewed in this paper. Index Terms—Baum–Petrie algorithm, entropy ergodic theorems, finite-state channels, hidden Markov models, identifiability, Kalman filter, maximum-likelihood (ML) estimation, order estimation, recursive parameter estimation, switching autoregressive processes, Ziv inequality.
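The definition in the abstract — a finite-state Markov chain observed through a memoryless channel — can be illustrated by a short simulation. This is a minimal sketch with made-up parameters (a two-state chain and a Gaussian observation channel), not anything taken from the surveyed paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state HMP: a homogeneous Markov chain observed through
# a memoryless Gaussian channel. All parameter values are hypothetical.
P = np.array([[0.9, 0.1],      # row-stochastic transition matrix
              [0.2, 0.8]])
means = np.array([-1.0, 1.0])  # channel output mean per hidden state
sigma = 0.5                    # channel noise level

def simulate_hmp(T):
    """Draw (hidden states, observations) of length T from the HMP."""
    x = np.zeros(T, dtype=int)
    for t in range(1, T):
        x[t] = rng.choice(2, p=P[x[t - 1]])
    y = means[x] + sigma * rng.normal(size=T)  # memoryless: y[t] depends on x[t] only
    return x, y

x, y = simulate_hmp(200)
```

Because the channel is memoryless and time-invariant, each observation depends only on the current state, which is exactly the structure the survey's estimation and coding results exploit.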
Recursive Monte Carlo filters: Algorithms and theoretical analysis
, 2003
Abstract

Cited by 47 (0 self)
powerful tool to perform computations in general state space models. We discuss and compare the accept–reject version with the more common sampling importance resampling version of the algorithm. In particular, we show how auxiliary variable methods and stratification can be used in the accept–reject version, and we compare different resampling techniques. In a second part, we show laws of large numbers and a central limit theorem for these Monte Carlo filters by simple induction arguments that need only weak conditions. We also show that, under stronger conditions, the required sample size is independent of the length of the observed series.

1. State space and hidden Markov models. A general state space or hidden Markov model consists of an unobserved state sequence (Xt) and an observation sequence (Yt) with the following properties. State evolution: X0, X1, X2, ... is a Markov chain with X0 ∼ a0(x) dµ(x) and Xt | Xt−1 = xt−1 ∼ at(xt−1, x) dµ(x). Generation of observations: conditionally on (Xt), the Yt's are independent and Yt depends on Xt only, with Yt | Xt = xt ∼ bt(xt, y) dν(y). These models occur in a variety of applications. Linear state space models are equivalent to ARMA models (see, e.g., [16]) and have become popular. Received January 2003; revised August 2004. AMS 2000 subject classifications: primary 62M09; secondary 60G35, 60J22, 65C05. Key words and phrases: state space models, hidden Markov models, filtering and smoothing, particle filters, auxiliary variables, sampling importance resampling, central limit theorem.
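The sampling importance resampling version of the Monte Carlo filter discussed above can be sketched in a few lines. The following is a minimal bootstrap filter for a hypothetical linear-Gaussian model (parameters phi, sig_x, sig_y are illustrative), using plain multinomial resampling; it is not the paper's accept–reject or stratified variant.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear-Gaussian state space model:
#   X_t = phi * X_{t-1} + sig_x * eps_t,   Y_t = X_t + sig_y * eta_t
phi, sig_x, sig_y = 0.9, 1.0, 0.5

def bootstrap_filter(y, N=500):
    """Estimate the filtering means E[X_t | Y_1..Y_t] with N particles."""
    T = len(y)
    particles = rng.normal(0.0, sig_x, size=N)      # sample from the initial law
    means = np.empty(T)
    for t in range(T):
        # propagate each particle through the state transition a_t
        particles = phi * particles + sig_x * rng.normal(size=N)
        # importance weights from the observation density b_t(x, y_t)
        logw = -0.5 * ((y[t] - particles) / sig_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * particles)
        # multinomial resampling (stratified schemes would reduce variance)
        particles = particles[rng.choice(N, size=N, p=w)]
    return means

# simulate data, then filter it
T = 100
x = np.zeros(T); y = np.zeros(T)
for t in range(T):
    x[t] = phi * x[t - 1] + sig_x * rng.normal() if t else rng.normal()
    y[t] = x[t] + sig_y * rng.normal()
est = bootstrap_filter(y)
```

The propagate/weight/resample loop is the structure whose laws of large numbers and central limit theorem the paper establishes by induction over t.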
Recursive algorithms for estimation of hidden Markov models and autoregressive models with Markov regime
IEEE Trans. Inform. Theory, 2002
Abstract

Cited by 15 (3 self)
Abstract—This paper is concerned with recursive algorithms for the estimation of hidden Markov models (HMMs) and autoregressive (AR) models under Markov regime. Convergence and rate-of-convergence results are derived. Acceleration of convergence by averaging of the iterates and the observations is treated. Finally, constant step-size tracking algorithms are presented and examined. Index Terms—Convergence, hidden Markov estimation, rate of convergence, recursive estimation.
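A toy version of the recursive-estimation-with-averaging idea can be shown on a plain AR(1) model: a constant step-size stochastic-gradient update tracks the coefficient, and averaging the iterates smooths the estimate. This is an illustrative sketch only, not the paper's algorithm for HMMs or Markov-regime AR models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an AR(1) series with a known coefficient to recover.
phi_true = 0.7
T = 5000
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.normal()

phi, phi_bar = 0.0, 0.0
gamma = 0.01   # constant step size: enables tracking if phi_true drifts slowly
for t in range(1, T):
    err = y[t] - phi * y[t - 1]      # one-step prediction error
    phi += gamma * err * y[t - 1]    # recursive stochastic-gradient update
    phi_bar += (phi - phi_bar) / t   # running average of the iterates
```

The constant step size gamma trades asymptotic accuracy for tracking ability; the averaged iterate phi_bar recovers much of the lost accuracy, which is the role averaging plays in the paper's acceleration results.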
Sequential Monte Carlo smoothing with application to parameter estimation in nonlinear state space models
Bernoulli, 2008
Abstract

Cited by 13 (2 self)
Abstract. This paper concerns the use of Sequential Monte Carlo (SMC) methods for smoothing in general state space models. A well-known problem when applying the standard SMC technique in the smoothing mode is that the resampling mechanism introduces degeneracy of the approximation in the path space. However, when performing maximum likelihood estimation via the EM algorithm, all involved functionals are of additive form for a large subclass of models. To cope with the problem in this case, a modification of the standard method, relying on forgetting properties of the filtering dynamics, is proposed. In this setting, the quality of the produced estimates is investigated both theoretically and through simulations.
An Autoregressive Model with Time-Varying Coefficients for Wind Fields
2005
Abstract

Cited by 10 (5 self)
In this paper, an original Markov-switching autoregressive model is proposed to describe the space-time evolution of wind fields. At first, a non-observable process is introduced in order to model the motion of the meteorological structures. Then,
Efficient likelihood estimation in state space models
Ann. Statist.
Abstract

Cited by 6 (0 self)
Motivated by studying asymptotic properties of the maximum likelihood estimator (MLE) in stochastic volatility (SV) models, in this paper we investigate likelihood estimation in state space models. We first prove that, under some regularity conditions, there is a consistent sequence of roots of the likelihood equation that is asymptotically normal with the inverse of the Fisher information as its variance. With the extra assumption that the likelihood equation has a unique root for each n, there is a consistent sequence of estimators of the unknown parameters. If, in addition, the supremum of the log-likelihood function is integrable, the MLE exists and is strongly consistent. An Edgeworth expansion of the approximate solution of the likelihood equation is also established. Several examples, including Markov switching models, ARMA models, (G)ARCH models, and stochastic volatility (SV) models, are given for illustration.
Parameter estimation and asymptotic stability in stochastic filtering
Stochastic Process. Appl.
Abstract

Cited by 5 (1 self)
In this paper, we study the problem of estimating a Markov chain X (the signal) from its noisy partial information Y, when the transition probability kernel depends on some unknown parameters. Our goal is to compute the conditional distribution process P{Xn | Yn, ..., Y1}, referred to hereafter as the optimal filter. Following a standard Bayesian technique, we treat the parameters as a non-dynamic component of the Markov chain. As a result, the new Markov chain is not going to be mixing, even if the original one is. We show that, under certain conditions, the optimal filters are still going to be asymptotically stable with respect to the initial conditions. Thus, by computing the optimal filter of the new system, we can estimate the signal adaptively. Key words: nonlinear filtering, asymptotic stability, ergodic decomposition, Bayesian estimators.
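The Bayesian augmentation step described above — treating the unknown parameter as a static component of the chain — can be made concrete in a discrete toy example. Here the unknown switching probability p of a two-state chain is restricted to a hypothetical finite grid, so the filter over the augmented pair (p, X) can be computed exactly; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

grid = np.linspace(0.1, 0.9, 9)          # candidate values for the parameter p
p_true = 0.3                             # unknown switching probability
means, sigma = np.array([-1.0, 1.0]), 0.7

# Simulate the signal X (symmetric 2-state chain) and noisy observations Y.
T = 300
x = np.zeros(T, dtype=int)
for t in range(1, T):
    x[t] = x[t - 1] if rng.random() > p_true else 1 - x[t - 1]
y = means[x] + sigma * rng.normal(size=T)

# Exact filter on the augmented chain (p, X): the parameter component never moves.
post = np.full((len(grid), 2), 1.0 / (2 * len(grid)))   # joint P(p, X_0)
for t in range(T):
    if t > 0:
        for i, p in enumerate(grid):     # predict: X evolves, p stays fixed
            K = np.array([[1 - p, p], [p, 1 - p]])
            post[i] = post[i] @ K
    lik = np.exp(-0.5 * ((y[t] - means) / sigma) ** 2)
    post *= lik                          # update with the new observation
    post /= post.sum()

p_hat = grid[post.sum(axis=1).argmax()]  # MAP estimate of the parameter
```

The parameter block of the augmented chain has no dynamics, so the chain cannot be mixing; the point of the paper is that the filter can nonetheless remain stable in its initial condition, which is what makes this adaptive scheme usable.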
NUMBER OF HIDDEN STATES AND MEMORY: A JOINT ORDER ESTIMATION PROBLEM FOR MARKOV CHAINS WITH MARKOV REGIME
Abstract

Cited by 5 (4 self)
Abstract. This paper deals with order identification for Markov chains with Markov regime (MCMR) in the context of finite alphabets. We define the joint order of an MCMR process in terms of the number k of states of the hidden Markov chain and the memory m of the conditional Markov chain. We study the properties of penalized maximum likelihood estimators for the unknown order (k, m) of an observed MCMR process, relying on information-theoretic arguments. The novelty of our work lies in the joint estimation of two structural parameters. Furthermore, the different models in competition are not nested. In an asymptotic framework, we prove that a penalized maximum likelihood estimator is strongly consistent without prior bounds on k and m. We complement our theoretical work with a simulation study of its behaviour. We also study numerically the behaviour of the BIC criterion. A theoretical proof of its consistency seems to us presently out of reach for MCMR, as such a result does not yet exist in the simpler case where m = 0 (that is, for hidden Markov models).
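The penalized-likelihood idea can be illustrated on the memory component alone: selecting the order m of a fully observed finite-alphabet Markov chain with a BIC-style penalty. This is a deliberate simplification of the paper's joint (k, m) problem — the hidden-regime part is left out, and the penalty is the generic half-log-n-per-parameter form, not the paper's.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)

A = 2          # alphabet size
m_true = 2
# Simulate an order-2 binary chain: P(next symbol = 1) depends on the last two.
probs = {ctx: rng.uniform(0.2, 0.8) for ctx in product(range(A), repeat=m_true)}
x = [0, 0]
for _ in range(5000):
    x.append(int(rng.random() < probs[tuple(x[-m_true:])]))
x = np.array(x)

def penalized_ll(x, m):
    """Maximized log-likelihood of an order-m chain minus a BIC penalty."""
    n = len(x)
    counts = {}
    for t in range(m, n):
        ctx, sym = tuple(x[t - m:t]), x[t]
        counts.setdefault(ctx, np.zeros(A))[sym] += 1
    ll = 0.0
    for c in counts.values():
        p = c / c.sum()                      # empirical transition probabilities
        ll += np.sum(c[p > 0] * np.log(p[p > 0]))
    return ll - 0.5 * (A ** m) * (A - 1) * np.log(n)   # penalize A^m contexts

m_hat = max(range(4), key=lambda m: penalized_ll(x, m))
```

Candidate orders are not nested in parameter count in any simple way once the hidden state count k varies too, which is what makes the joint problem in the paper harder than this sketch.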
Markov-switching autoregressive models for wind time series
Environmental Modelling and Software; Journal of the Royal Statistical Society, Series C (Applied Statistics)
Abstract

Cited by 5 (1 self)
In this paper, two original Markov-switching autoregressive models are proposed for modeling wind time series. We give some theoretical results concerning the asymptotic properties of the maximum likelihood estimates and the stability of these models. They are then validated on real wind time series. In particular, we show that they successfully reproduce some nonlinearities which cannot be captured by models based on Gaussian processes.
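The class of models in question — a hidden regime chain driving the AR dynamics — is easy to simulate. The regimes and coefficients below are made up for the sketch (a calm, weakly persistent regime and a stormy, volatile one); they are not the fitted wind models from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative two-regime Markov-switching AR(1).
P = np.array([[0.95, 0.05],    # regime transition probabilities
              [0.10, 0.90]])
phi = np.array([0.5, 0.9])     # AR coefficient in each regime
sig = np.array([1.0, 2.0])     # innovation scale in each regime

def simulate_msar(T):
    s = np.zeros(T, dtype=int)  # hidden regime sequence
    y = np.zeros(T)             # observed series
    for t in range(1, T):
        s[t] = rng.choice(2, p=P[s[t - 1]])
        y[t] = phi[s[t]] * y[t - 1] + sig[s[t]] * rng.normal()
    return s, y

s, y = simulate_msar(1000)
```

Because both the AR coefficient and the innovation scale switch with the regime, the marginal series mixes periods of different volatility and persistence — the kind of non-Gaussian behaviour a single Gaussian AR model cannot capture.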
THEORY AND INFERENCE FOR A MARKOV SWITCHING GARCH MODEL
2007
Abstract

Cited by 4 (2 self)
We develop a Markov-switching GARCH model (MS-GARCH) wherein the conditional mean and variance switch in time from one GARCH process to another. The switching is governed by a hidden Markov chain. We provide sufficient conditions for geometric ergodicity and existence of moments of the process. Because of path dependence, maximum likelihood estimation is not feasible. By enlarging the parameter space to include the state variables, Bayesian estimation using a Gibbs sampling algorithm is feasible. We illustrate the model on S&P 500 daily returns. Keywords: GARCH, Markov-switching, Bayesian inference.
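The path dependence mentioned in the abstract is visible in a simulation: the conditional variance h carries over across regime switches, so the likelihood at time t depends on the entire regime path. The following sketch simulates such a process with invented parameters (a calm and a turbulent regime); it is not the paper's estimated model or its Gibbs sampler.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative two-regime Markov-switching GARCH(1,1), zero conditional mean.
P = np.array([[0.98, 0.02],
              [0.05, 0.95]])
omega = np.array([0.05, 0.20])
alpha = np.array([0.05, 0.10])
beta = np.array([0.90, 0.80])

def simulate_msgarch(T):
    s = 0
    h = omega[0] / (1 - alpha[0] - beta[0])  # start at regime-0 stationary variance
    r = np.zeros(T)
    h_path = np.zeros(T)
    for t in range(T):
        s = rng.choice(2, p=P[s])
        if t > 0:
            # h depends on the past h, so the whole regime path matters
            h = omega[s] + alpha[s] * r[t - 1] ** 2 + beta[s] * h
        h_path[t] = h
        r[t] = np.sqrt(h) * rng.normal()
    return r, h_path

r, h = simulate_msgarch(2000)
```

Summing the likelihood over all regime paths is exponential in T, which is why the paper augments the parameter space with the state variables and samples them in a Gibbs scheme instead.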