Results 1–10 of 13
Asymptotic properties of the maximum likelihood estimator in autoregressive models with Markov regime
Ann. Statist., 2004
Abstract

Cited by 36 (6 self)
An autoregressive process with Markov regime is an autoregressive process for which the regression function at each time point is given by a nonobservable Markov chain. In this paper we consider the asymptotic properties of the maximum likelihood estimator in a possibly nonstationary process of this kind for which the hidden state space is compact but not necessarily finite. Consistency and asymptotic normality are shown to follow from uniform exponential forgetting of the initial distribution for the hidden Markov chain conditional on the observations.
Network inference from co-occurrences
, 2006
Abstract

Cited by 11 (1 self)
Abstract—The discovery of networks is a fundamental problem
Quasi-Monte Carlo sampling to improve the efficiency of Monte Carlo EM
Computational Statistics and Data Analysis, 2005
Abstract

Cited by 7 (3 self)
In this paper we investigate an efficient implementation of the Monte Carlo EM algorithm based on quasi-Monte Carlo sampling. The Monte Carlo EM algorithm is a stochastic version of the deterministic EM (Expectation-Maximization) algorithm in which an intractable E-step is replaced by a Monte Carlo approximation. Quasi-Monte Carlo methods produce deterministic sequences of points that can significantly improve the accuracy of Monte Carlo approximations over purely random sampling. One drawback of deterministic quasi-Monte Carlo methods is that it is generally difficult to determine the magnitude of the approximation error. However, in order to implement the Monte Carlo EM algorithm in an automated way, the ability to measure this error is fundamental. Recent developments in randomized quasi-Monte Carlo methods can overcome this drawback. We investigate the implementation of an automated, data-driven Monte Carlo EM algorithm based on randomized quasi-Monte Carlo methods. We apply this algorithm to a geostatistical model of online purchases and find that it can significantly decrease the total simulation effort, thus showing great potential for improving upon the efficiency of the classical Monte Carlo EM algorithm. Key words and phrases: Monte Carlo error; low-discrepancy sequence; Halton sequence; EM algorithm; geostatistical model.
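A minimal sketch of the idea, using SciPy's scrambled Halton sequence in place of plain random draws for a toy E-step-style expectation (the integrand and all settings are illustrative, not the paper's geostatistical model):

```python
import numpy as np
from scipy.stats import norm, qmc

# Compare plain Monte Carlo with a randomized (scrambled) Halton
# sequence for a toy expectation: E[b^2] with b ~ N(0, 1), whose true
# value is 1. Scrambling keeps the estimate unbiased and restores an
# error estimate, the property needed for automation.
rng = np.random.default_rng(0)
n = 4096

# Plain Monte Carlo.
mc_draws = rng.standard_normal(n)
mc_estimate = np.mean(mc_draws ** 2)

# Randomized quasi-Monte Carlo: scrambled Halton points in (0, 1),
# mapped to normal draws through the inverse CDF.
halton = qmc.Halton(d=1, scramble=True, seed=0)
qmc_draws = norm.ppf(halton.random(n))
qmc_estimate = np.mean(qmc_draws ** 2)

print(mc_estimate, qmc_estimate)  # both close to 1
```

Repeating the scrambled-Halton run with different seeds yields independent replicates whose spread estimates the quasi-Monte Carlo error.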
Ascent-based Monte Carlo EM
, 2004
Abstract

Cited by 6 (2 self)
The EM algorithm is a popular tool for maximizing likelihood functions in the presence of missing data. Unfortunately, EM often requires the evaluation of analytically intractable and high-dimensional integrals. The Monte Carlo EM (MCEM) algorithm is the natural extension of EM that employs Monte Carlo methods to estimate the relevant integrals. Typically, a very large Monte Carlo sample size is required to estimate these integrals within an acceptable tolerance when the algorithm is near convergence. Even if this sample size were known at the onset of implementation of MCEM, its use throughout all iterations is wasteful, especially when accurate starting values are not available. We propose a data-driven strategy for controlling Monte Carlo resources in MCEM. The proposed algorithm improves on similar existing methods by: (i) recovering EM's ascent (i.e., likelihood-increasing) property with high probability, (ii) being more robust to the impact of user-defined inputs, and (iii) handling classical Monte Carlo and Markov chain Monte Carlo within a common framework. Because of (i) we refer to the algorithm as "Ascent-based MCEM". We apply Ascent-based MCEM to a variety of examples, including one where it is used to dramatically accelerate the convergence of deterministic EM.
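The basic MCEM loop can be sketched on a toy model with right-censored exponential data, where the E-step expectation is approximated by simulation; the simple geometric growth rule for the Monte Carlo sample size below is an illustrative stand-in for the paper's data-driven ascent-based rule:

```python
import numpy as np

# Toy MCEM: exponential data right-censored at c. The complete-data
# M-step needs E[X | X > c], which we approximate by simulation in the
# E-step. The sample-size growth rule is illustrative only.
rng = np.random.default_rng(1)
true_rate, c = 2.0, 0.5
x = rng.exponential(1 / true_rate, size=2000)
observed = x[x <= c]
n_censored = int(np.sum(x > c))

rate, m = 1.0, 200
for _ in range(30):
    # Monte Carlo E-step: by memorylessness, X | X > c ~ c + Exp(rate).
    e_mean = c + rng.exponential(1 / rate, size=m).mean()
    # M-step: complete-data MLE with censored values imputed in mean.
    rate = (observed.size + n_censored) / (observed.sum() + n_censored * e_mean)
    m = int(m * 1.2)  # spend more simulation effort near convergence
print(rate)  # close to the true rate 2.0
```

An ascent-based scheme would instead test, at each iteration, whether the estimated increase in the Q-function is statistically significant before accepting the update, and raise `m` only when it is not.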
The EM Algorithm, Its Stochastic Implementation and Global Optimization: Some Challenges and Opportunities for OR
, 2006
Abstract

Cited by 4 (2 self)
The EM algorithm is a very powerful optimization method and has gained popularity in many fields. Unfortunately, EM is only a local optimization method and can get stuck in suboptimal solutions. While more and more contemporary data/model combinations yield more than one optimum, there have been only a few attempts at making EM suitable for global optimization. In this paper we review the basic EM algorithm, its properties, and its challenges, and we focus in particular on its stochastic implementation. The stochastic EM implementation promises relief from some of the contemporary data/model challenges, and it is particularly well-suited for a marriage with global optimization ideas, since most global optimization paradigms are also based on principles of stochasticity. We review some of the challenges of the stochastic EM implementation and propose a new algorithm that combines the principles of EM with those of the genetic algorithm. While this new algorithm shows some promising results for clustering of an online auction database of functional objects, the primary goal of this work is to bridge the gap between the field of statistics, which is home to extensive research on the EM algorithm, and the field of operations research, in which work on global optimization thrives, and to spur new ideas for joint research between the two.
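A stochastic EM (SEM) step can be illustrated on a two-component Gaussian mixture with known unit variances: the E-step samples hard labels from the posterior responsibilities instead of averaging over them, injecting the randomness that global-optimization hybrids exploit (the setup is illustrative and unrelated to the auction data):

```python
import numpy as np

# Stochastic EM (SEM) for a two-component Gaussian mixture with unit
# variances: the E-step draws hard component labels from the posterior
# responsibilities instead of keeping soft weights, so each iteration
# works with a simulated completed data set.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])
mu = np.array([-0.5, 0.5])   # deliberately poor starting means
pi = np.array([0.5, 0.5])

for _ in range(100):
    # Posterior responsibility of component 1 for each observation.
    d0 = pi[0] * np.exp(-0.5 * (x - mu[0]) ** 2)
    d1 = pi[1] * np.exp(-0.5 * (x - mu[1]) ** 2)
    r1 = d1 / (d0 + d1)
    # Stochastic E-step: simulate labels rather than keep soft weights.
    z = rng.random(x.size) < r1
    # M-step on the completed data.
    mu = np.array([x[~z].mean(), x[z].mean()])
    pi = np.array([(~z).mean(), z.mean()])
print(sorted(mu))  # means near -2 and 3
```

Unlike deterministic EM, the SEM iterates fluctuate around the solution rather than converge exactly; that fluctuation is what lets the chain jump out of poor local configurations.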
An EM and a stochastic version of the EM algorithm for nonparametric hidden semi-Markov models
, 2011
Abstract
Hidden semi-Markov models (HSMMs) were introduced to overcome the constraint of a geometric sojourn-time distribution for the different hidden states in classical hidden Markov models. Several variations of HSMMs have been proposed that model the sojourn times by a parametric or a nonparametric family of distributions. In this article, we concentrate on the nonparametric case, where the duration distributions are attached to transitions, and not to states as in most of the published papers on HSMMs. It is therefore worth noting that here we treat the underlying hidden semi-Markov chain in its general probabilistic structure. For that case, Barbu and Limnios (2008) proposed an Expectation-Maximization (EM) algorithm to estimate the semi-Markov kernel and the emission probabilities that characterize the dynamics of the model. In this paper, we consider an improved version of Barbu and Limnios' EM algorithm which is faster than the original one. Moreover, we propose a stochastic version of the EM algorithm that achieves estimates comparable with those of the EM algorithm in less execution time. Some numerical examples are provided which illustrate the efficient performance of the proposed algorithms.
Monte Carlo Methods for Channel, Phase Noise and Frequency Offset Estimation with Unknown Noise Variances in OFDM Systems
, 2013
On a Hybrid Data Cloning Method and Its Application in Generalized Linear Mixed Models
Abstract
Data cloning is a new computational tool for computing maximum likelihood estimates in complex statistical models such as mixed models. Here the data cloning method is combined with integrated nested Laplace approximation to compute maximum likelihood estimates efficiently via a fast implementation in generalized linear mixed models. Asymptotic normality of the hybrid data-cloning-based distribution is established with the aid of a modification of Stein's identity. The results are illustrated through a series of well-known examples. It is shown that the proposed method, as well as the normal approximation, performs very well and supports the theory.
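The data cloning idea can be sketched on a conjugate Bernoulli-Beta model, where the posterior after K clones is available in closed form: the posterior mean approaches the MLE, and K times the posterior variance approaches the inverse Fisher information (the model and numbers are illustrative; the paper works with generalized linear mixed models and INLA):

```python
# Data cloning on a conjugate Bernoulli-Beta model: replicate the data
# K times, form the Bayesian posterior, and read off the MLE from the
# posterior mean and the asymptotic variance from K times the posterior
# variance. No MCMC is needed here because the Beta(1, 1) prior keeps
# everything in closed form.
n, successes, K = 50, 30, 200

a = 1 + K * successes          # Beta posterior parameters after cloning
b = 1 + K * (n - successes)
posterior_mean = a / (a + b)
posterior_var = a * b / ((a + b) ** 2 * (a + b + 1))

mle = successes / n                    # direct MLE: 0.6
fisher_inv = mle * (1 - mle) / n       # inverse Fisher information
print(posterior_mean, K * posterior_var, fisher_inv)
```

In non-conjugate models the same recipe applies, with the cloned-data posterior explored by MCMC or, as here, approximated by INLA.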
Efficient Simulated Maximum Likelihood with an Application to Online Retailing
Abstract
Simulated maximum likelihood estimates an analytically intractable likelihood function with an empirical average based on data simulated from a suitable importance sampling distribution. In order to use simulated maximum likelihood efficiently, the choice of the importance sampling distribution as well as the mechanism used to generate the simulated data are crucial. In this paper we develop an automated, multi-stage implementation of simulated maximum likelihood which, by adaptively updating the importance sampler, approximates the optimal importance sampling distribution. The proposed method also allows for a convenient incorporation of quasi-Monte Carlo methods. Quasi-Monte Carlo methods produce simulated data which can significantly increase the accuracy of the likelihood estimate over regular Monte Carlo methods. Several examples provide evidence for the potential efficiency gain of this new method. We apply the method to a computationally challenging geostatistical model of online retailing.
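A minimal simulated-maximum-likelihood sketch for a one-observation random-effects model, where the exact marginal likelihood is available as a check; the fixed importance density below is an illustrative choice, not the adaptive multi-stage sampler of the paper:

```python
import numpy as np
from scipy.stats import norm

# Simulated likelihood for one observation of y = b + e with
# b ~ N(0, tau^2) and e ~ N(0, 1): the likelihood integral
# L(tau) = \int N(y | b, 1) N(b | 0, tau^2) db is replaced by an
# importance sampling average. The exact marginal N(y | 0, tau^2 + 1)
# serves as a check. The importance density N(y/2, 1) is an
# illustrative, non-adaptive choice.
rng = np.random.default_rng(3)
y, tau = 1.5, 2.0

b = rng.normal(y / 2, 1.0, size=100_000)          # importance draws
weights = norm.pdf(y, b, 1) * norm.pdf(b, 0, tau) / norm.pdf(b, y / 2, 1)
sml_likelihood = weights.mean()

exact = norm.pdf(y, 0, np.sqrt(tau ** 2 + 1))
print(sml_likelihood, exact)  # nearly equal
```

In an adaptive scheme, the weights from one stage would be used to refit the importance density before the next, driving it toward the optimal sampler.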