On the ergodicity properties of some adaptive MCMC algorithms
 Annals of Applied Probability
Abstract

Cited by 56 (7 self)
In this paper we study the ergodicity properties of some adaptive Markov chain Monte Carlo (MCMC) algorithms that have recently been proposed in the literature. We prove that, under a set of verifiable conditions, ergodic averages calculated from the output of a so-called adaptive MCMC sampler converge to the required value and can even, under more stringent assumptions, satisfy a central limit theorem. We prove that the required conditions are satisfied for the Independent Metropolis-Hastings algorithm and the Random Walk Metropolis algorithm with symmetric increments. Finally we propose an application of these results to the case where the proposal distribution of the Metropolis-Hastings update is a mixture of distributions from a curved exponential family.
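The Random Walk Metropolis algorithm with symmetric increments analysed above can be sketched in a few lines. This is a minimal illustration, not the paper's adaptive construction; the function and parameter names (`random_walk_metropolis`, `step_size`) are my own choices.

```python
import math
import random

def random_walk_metropolis(log_target, x0, n_steps, step_size=1.0, seed=0):
    """Random Walk Metropolis with symmetric Gaussian increments.

    `log_target` is the log of an unnormalized target density.
    Illustrative sketch; names and defaults are not from the paper.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        y = x + step_size * rng.gauss(0.0, 1.0)  # symmetric proposal increment
        # Accept with probability min(1, pi(y)/pi(x)), computed on the log scale.
        if math.log(rng.random()) < log_target(y) - log_target(x):
            x = y
        samples.append(x)
    return samples

# Ergodic averages of the output converge to expectations under the target:
# for a standard normal target, the mean estimate should approach 0.
chain = random_walk_metropolis(lambda z: -0.5 * z * z, x0=3.0, n_steps=20000)
mean_estimate = sum(chain) / len(chain)
```

The convergence of ergodic averages established in the paper is exactly what justifies reading `mean_estimate` off the correlated chain.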
Convergence of slice sampler Markov chains
, 1998
Abstract

Cited by 55 (10 self)
In this paper, we analyse theoretical properties of the slice sampler. We find that the algorithm has extremely robust geometric ergodicity properties. For the case of just one auxiliary variable, we demonstrate that the algorithm is stochastically monotone, and deduce analytic bounds on the total variation distance from stationarity of the method using Foster-Lyapunov drift condition methodology.
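For intuition, here is a one-auxiliary-variable slice sampler specialised to a standard normal target, where the level sets {x : f(x) > u} of f(x) = exp(-x²/2) are intervals available in closed form. This toy case is my illustration, not the paper's general construction.

```python
import math
import random

def slice_sampler_normal(n_steps, x0=0.0, seed=0):
    """One-auxiliary-variable slice sampler for the unnormalized standard
    normal target f(x) = exp(-x^2 / 2).  Illustrative sketch only."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        # 1. Given x, draw the auxiliary variable u uniformly on (0, f(x)).
        u = rng.random() * math.exp(-0.5 * x * x)
        # 2. Given u, draw x uniformly from the slice {x : f(x) > u},
        #    which here is the interval (-w, w) with w = sqrt(-2 log u).
        w = math.sqrt(-2.0 * math.log(u))
        x = rng.uniform(-w, w)
        samples.append(x)
    return samples

chain = slice_sampler_normal(10000)
```

The alternation between the auxiliary draw and the uniform draw on the slice is the structure whose monotonicity the paper exploits.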
Likelihood Ratio Gradient Estimation For Stochastic Recursions
 Communications of the ACM
, 1995
Abstract

Cited by 54 (7 self)
In this paper, we develop mathematical machinery for verifying that a broad class of general state space Markov chains reacts smoothly to certain types of perturbations in the underlying transition structure. Our main result provides conditions under which the stationary probability measure of an ergodic Harris recurrent Markov chain is differentiable in a certain strong sense. The approach is based on likelihood ratio "change-of-measure" arguments, and leads directly to a "likelihood ratio gradient estimator" that can be computed numerically.

Keywords: Harris recurrent Markov chain, likelihood ratio, gradient estimation, regeneration.

1. The research of this author was supported by the U.S. Army Research Office under Contract No. DAAL0391G 0101 and by the National Science Foundation under Contract No. DDM9101580. 2. This author's research was supported by NSERC-Canada grant No. OGP0110050 and FCAR-Québec grant No. 93ER1654.
Fixed-width output analysis for Markov chain Monte Carlo
 Journal of the American Statistical Association
, 2006
Abstract

Cited by 48 (17 self)
Markov chain Monte Carlo is a method of producing a correlated sample in order to estimate features of a target distribution via ergodic averages. A fundamental question is: when should sampling stop? That is, when are the ergodic averages good estimates of the desired quantities? We consider a method that stops the simulation when the width of a confidence interval based on an ergodic average is less than a user-specified value. Hence calculating a Monte Carlo standard error is a critical step in assessing the simulation output. We consider the regenerative simulation and batch means methods of estimating the variance of the asymptotic normal distribution. We give sufficient conditions for the strong consistency of both methods and investigate their finite-sample properties in a variety of examples.
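A batch means stopping rule of the kind described can be sketched as follows. The function names, the batch count of 30, and the 1.96 normal quantile are illustrative defaults of mine, not the paper's recommendations.

```python
import math
import random

def batch_means_halfwidth(samples, n_batches=30, z=1.96):
    """Confidence-interval half-width for an ergodic average via batch means:
    split the chain into batches and treat the batch means as approximately
    independent and normal."""
    batch_size = len(samples) // n_batches
    means = [
        sum(samples[i * batch_size:(i + 1) * batch_size]) / batch_size
        for i in range(n_batches)
    ]
    grand_mean = sum(means) / n_batches
    # The sample variance of the batch means estimates the asymptotic
    # variance divided by the batch size.
    var_of_means = sum((m - grand_mean) ** 2 for m in means) / (n_batches - 1)
    return z * math.sqrt(var_of_means / n_batches)

def fixed_width_stop(samples, eps):
    """Fixed-width rule: stop once the interval half-width drops below eps."""
    return batch_means_halfwidth(samples) < eps

# Demonstration on seeded pseudo-random draws standing in for MCMC output
# (hypothetical data, for illustration only).
rng = random.Random(1)
samples = [rng.gauss(0.0, 1.0) for _ in range(3000)]
halfwidth = batch_means_halfwidth(samples)
```

The consistency conditions the paper establishes are what guarantee that such a half-width shrinks to zero, so the stopping rule eventually fires.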
On the Markov chain central limit theorem
 Probability Surveys
, 2004
Abstract

Cited by 46 (11 self)
The goal of this mainly expository paper is to describe conditions which guarantee a central limit theorem for functionals of general state space Markov chains, with a view towards Markov chain Monte Carlo settings. Thus the focus is on the connections between drift and mixing conditions and their implications. In particular, we consider three commonly cited central limit theorems and discuss their relationship to classical results for mixing processes. Several motivating examples are given which range from toy one-dimensional settings to complicated settings encountered in Markov chain Monte Carlo.
A Direct Approach to Conformational Dynamics based on Hybrid Monte Carlo
, 1999
Abstract

Cited by 44 (14 self)
Recently, a novel concept for the computation of essential features of the dynamics of Hamiltonian systems (such as molecular dynamics) has been proposed [1]. The realization of this concept had been based on subdivision techniques applied to the Frobenius-Perron operator for the dynamical system. The present paper suggests an alternative but related concept that merges the conceptual advantages of the dynamical systems approach with the appropriate statistical physics framework. This approach allows one to define the phrase "conformation" in terms of the dynamical behavior of the molecular system and to characterize the dynamical stability of conformations. In a first step, the frequency of conformational changes is characterized in statistical terms, leading to the definition of a Markov operator T that describes the corresponding transition probabilities within the canonical ensemble. In a second step, a discretization of T via specific hybrid Monte Carlo techniques is shown ...
Convergence rates of posterior distributions
 Annals of Statistics
, 2000
Abstract

Cited by 43 (11 self)
We consider the asymptotic behavior of posterior distributions and Bayes estimators for infinite-dimensional statistical models. We give general results on the rate of convergence of the posterior measure. These are applied to several examples, including priors on finite sieves, log-spline models, Dirichlet processes and interval censoring.
Polynomial convergence rates of Markov chains
 Annals of Applied Probability
, 2000
Abstract

Cited by 42 (12 self)
In this paper we consider Foster-Lyapunov type drift conditions for Markov chains which imply polynomial rates of convergence to stationarity in appropriate V-norms. We also show how these results can be used to prove central limit theorems for functions of the Markov chain. The examples considered include random walks on the half line and the independence sampler.
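The independence sampler named as an example above proposes from a fixed distribution, independent of the current state. A minimal sketch follows; the names and the target/proposal pair are my illustrative choices, and the bounded-weight remark in the comment is the standard sufficient condition for uniform ergodicity, not a claim about this paper's analysis.

```python
import math
import random

def independence_sampler(log_target, log_proposal, draw_proposal, n_steps, x0, seed=0):
    """Independence (independent Metropolis-Hastings) sampler: proposals come
    from a fixed distribution q, independently of the current state.
    Illustrative sketch; names are not from the paper."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        y = draw_proposal(rng)
        # The acceptance ratio is a ratio of importance weights w = pi / q.
        log_alpha = (log_target(y) - log_proposal(y)) - (log_target(x) - log_proposal(x))
        if math.log(rng.random()) < log_alpha:
            x = y
        samples.append(x)
    return samples

# Target Exp(2), proposal Exp(1): the proposal's heavier tail keeps the
# weights w(z) = 2 exp(-z) bounded.  The target mean is 0.5.
chain = independence_sampler(
    log_target=lambda z: math.log(2.0) - 2.0 * z,
    log_proposal=lambda z: -z,
    draw_proposal=lambda rng: rng.expovariate(1.0),
    n_steps=20000,
    x0=1.0,
)
mean_estimate = sum(chain) / len(chain)
```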
Evolving Aspirations and Cooperation
 Journal of Economic Theory
, 1998
Abstract

Cited by 42 (2 self)
This paper therefore builds on [3], in which a model of consistent aspirations-based learning was introduced
Renewal theory and computable convergence rates for geometrically ergodic Markov chains
, 2003
Abstract

Cited by 41 (0 self)
We give computable bounds on the rate of convergence of the transition probabilities to the stationary distribution for a certain class of geometrically ergodic Markov chains. Our results are different from earlier estimates of Meyn and Tweedie, and from estimates using coupling, although we start from essentially the same assumptions of a drift condition toward a “small set.” The estimates show a noticeable improvement on existing results if the Markov chain is reversible with respect to its stationary distribution, and especially so if the chain is also positive. The method of proof uses the first-entrance-last-exit decomposition, together with new quantitative versions of a result of Kendall from discrete renewal theory.