Results 1 – 10 of 93
Fixed-width output analysis for Markov chain Monte Carlo
 Journal of the American Statistical Association
, 2006
Cited by 48 (17 self)
Markov chain Monte Carlo is a method of producing a correlated sample in order to estimate features of a target distribution via ergodic averages. A fundamental question is when should sampling stop? That is, when are the ergodic averages good estimates of the desired quantities? We consider a method that stops the simulation when the width of a confidence interval based on an ergodic average is less than a user-specified value. Hence calculating a Monte Carlo standard error is a critical step in assessing the simulation output. We consider the regenerative simulation and batch means methods of estimating the variance of the asymptotic normal distribution. We give sufficient conditions for the strong consistency of both methods and investigate their finite sample properties in a variety of examples.
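The fixed-width stopping rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the batch-means estimator with a fixed number of batches, the sample-doubling schedule, and the AR(1) "chain" are all assumptions made for the example.

```python
import numpy as np

def batch_means_se(x, n_batches=30):
    """Monte Carlo standard error of an ergodic average via non-overlapping batch means."""
    n = len(x) // n_batches * n_batches          # drop the remainder
    batches = np.asarray(x[:n]).reshape(n_batches, -1).mean(axis=1)
    return batches.std(ddof=1) / np.sqrt(n_batches)

def run_until_fixed_width(draw, half_width, min_n=1_000, max_n=200_000):
    """Sample until the 95% confidence-interval half-width for the mean falls below a target."""
    x = list(draw(min_n))
    while 1.96 * batch_means_se(x) >= half_width and len(x) < max_n:
        x.extend(draw(len(x)))                   # double the sample size each round
    return float(np.mean(x)), 1.96 * batch_means_se(x), len(x)

def ar1_draw_factory(rho=0.5, seed=0):
    """A toy 'chain': an AR(1) process whose stationary marginal is N(0, 1)."""
    rng = np.random.default_rng(seed)
    state = {"x": 0.0}
    def draw(n):
        out = np.empty(n)
        for i in range(n):
            state["x"] = rho * state["x"] + np.sqrt(1 - rho**2) * rng.standard_normal()
            out[i] = state["x"]
        return out
    return draw

est, hw, n = run_until_fixed_width(ar1_draw_factory(), half_width=0.05)
```

Because the AR(1) draws are positively correlated, the stopping rule requires roughly three times as many samples as an i.i.d. calculation would suggest, which is exactly the effect a Monte Carlo standard error is meant to capture.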
On the Markov chain central limit theorem
 Probability Surveys
, 2004
Cited by 46 (11 self)
The goal of this mainly expository paper is to describe conditions which guarantee a central limit theorem for functionals of general state space Markov chains with a view towards Markov chain Monte Carlo settings. Thus the focus is on the connections between drift and mixing conditions and their implications. In particular, we consider three commonly cited central limit theorems and discuss their relationship to classical results for mixing processes. Several motivating examples are given which range from toy one-dimensional settings to complicated settings encountered in Markov chain Monte Carlo.
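A Markov chain CLT of the kind surveyed above can be checked empirically on a toy chain (an illustrative example, not one of the paper's): for the stationary AR(1) chain X_{t+1} = rho X_t + eps_t with eps_t ~ N(0, 1 - rho^2), the marginal is N(0, 1) and sqrt(n) Xbar_n converges to N(0, sigma^2) with sigma^2 = (1 + rho) / (1 - rho).

```python
import numpy as np

# Empirical check of a Markov chain CLT on a stationary AR(1) chain:
#   sqrt(n) * Xbar_n  -->  N(0, sigma^2),   sigma^2 = (1 + rho) / (1 - rho).
rho, n, reps = 0.6, 5_000, 400
sigma2_theory = (1 + rho) / (1 - rho)            # = 4.0 for rho = 0.6

rng = np.random.default_rng(1)
xt = rng.standard_normal(reps)                   # start every replicate in stationarity
acc = np.zeros(reps)
for _ in range(n):
    xt = rho * xt + np.sqrt(1 - rho**2) * rng.standard_normal(reps)
    acc += xt
scaled_means = acc / np.sqrt(n)                  # sqrt(n) * Xbar_n, one value per replicate

sigma2_hat = scaled_means.var(ddof=1)            # should be close to sigma2_theory
```

The asymptotic variance is four times the marginal variance here, which is why naive i.i.d.-style error bars for correlated chains can be badly overconfident.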
Markov chain Monte Carlo: Can we trust the third significant figure?
 University of Minnesota, School of Statistics
, 2007
Cited by 32 (14 self)
Current reporting of results based on Markov chain Monte Carlo computations could be improved. In particular, a measure of the accuracy of the resulting estimates is rarely reported, so we have little ability to objectively assess the quality of the reported estimates. We address this issue by discussing why Monte Carlo standard errors are important, how they can be easily calculated in Markov chain Monte Carlo, and how they can be used to decide when to stop the simulation. We compare their use to a popular alternative in the context of two examples.
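The link between a Monte Carlo standard error and the number of trustworthy significant figures can be sketched as follows. The digit-counting rule is a rough heuristic assumed for this example, not a result from the paper.

```python
import math

import numpy as np

def mcse_batch_means(x, n_batches=30):
    """Monte Carlo standard error of an ergodic average via batch means."""
    n = len(x) // n_batches * n_batches          # drop the remainder
    b = np.asarray(x[:n]).reshape(n_batches, -1).mean(axis=1)
    return b.std(ddof=1) / np.sqrt(n_batches)

def reliable_digits(est, se):
    """Rough heuristic: leading significant figures not swamped by Monte Carlo error."""
    return max(0, math.floor(math.log10(abs(est))) - math.floor(math.log10(2 * se)))

# With i.i.d. N(0, 1) "output" the MCSE should be close to 1/sqrt(n).
rng = np.random.default_rng(7)
se = mcse_batch_means(rng.standard_normal(12_000))
```

For example, an estimate of 1.2345 with an MCSE of 0.003 supports about three significant figures, so reporting a fourth digit would overstate the precision of the simulation.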
Stochastic Approximation in Monte Carlo Computation
, 2006
Cited by 23 (13 self)
The Wang-Landau algorithm is an adaptive Markov chain Monte Carlo algorithm to calculate the spectral density for a physical system. A remarkable feature of the algorithm is that it is not trapped by local energy minima, which is very important for systems with rugged energy landscapes. This feature has led to many successful applications of the algorithm in statistical physics and biophysics. However, no rigorous theory exists to support its convergence, and the estimates produced by the algorithm can reach only a limited statistical accuracy. In this paper, we propose the stochastic approximation Monte Carlo (SAMC) algorithm, which overcomes the shortcomings of the Wang-Landau algorithm. We establish a theorem concerning its convergence. The estimates produced by SAMC can be improved continuously as the simulation goes on. SAMC also extends applications of the Wang-Landau algorithm to continuum systems. The potential uses of SAMC in statistics are discussed through two classes of applications, importance sampling and model selection. The results show that SAMC can work as a general importance sampling method.
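The core SAMC idea, a flat-histogram sampler whose log-weights are tuned by stochastic approximation with a diminishing gain, can be sketched on a hypothetical 10-state toy landscape (not one of the paper's experiments): with energy E(i) = i and trial density psi(i) proportional to exp(-E(i)), adapting theta so that p_theta(i) ∝ psi(i) exp(-theta_i) visits every state equally forces theta_i - theta_0 toward -E(i).

```python
import numpy as np

# SAMC-style sketch on a 10-state toy landscape: theta recovers the log-density.
rng = np.random.default_rng(2)
m = 10
E = np.arange(m, dtype=float)
theta = np.zeros(m)
eye = np.eye(m)
x = 0
t0 = 1_000.0                                   # gain sequence: gamma_t = t0 / max(t0, t)
n_iter = 200_000
steps = rng.choice([-1, 1], size=n_iter)       # symmetric random-walk proposals
logu = np.log(rng.random(n_iter))

for t in range(1, n_iter + 1):
    y = x + steps[t - 1]
    if 0 <= y < m:
        # Metropolis ratio for p_theta(i) proportional to exp(-E[i] - theta[i]).
        if logu[t - 1] < (-E[y] - theta[y]) - (-E[x] - theta[x]):
            x = y
    gamma = t0 / max(t0, t)                    # diminishing stochastic-approximation gain
    theta += gamma * (eye[x] - 1.0 / m)        # push visited level down, others up

rel = theta - theta[0]                          # should be close to -E
```

Even the rare high-energy states (here psi is about e^-9 times smaller at i = 9 than at i = 0) are visited regularly, which is the self-adjusting behavior that lets the method escape local energy minima.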
Weak convergence of Metropolis algorithms for non-i.i.d. target distributions
, 2007
Cited by 21 (6 self)
In this paper, we shall optimize the efficiency of Metropolis algorithms for multidimensional target distributions with scaling terms possibly depending on the dimension. We propose a method to determine the appropriate form for the scaling of the proposal distribution as a function of the dimension, which leads to the proof of an asymptotic diffusion theorem. We show that when there does not exist any component with a scaling term significantly smaller than the others, the asymptotically optimal acceptance rate is the well-known 0.234.
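The 0.234 rule can be checked empirically (an illustrative sketch; the function name and settings are assumptions, not the paper's): random-walk Metropolis on a d-dimensional standard normal target with the classic proposal scale 2.38 / sqrt(d) should accept roughly 23% of proposals once d is moderately large.

```python
import numpy as np

def rwm_acceptance_rate(d, n_iter=20_000, seed=3):
    """Acceptance rate of random-walk Metropolis on N(0, I_d) with scale 2.38/sqrt(d)."""
    rng = np.random.default_rng(seed)
    scale = 2.38 / np.sqrt(d)
    x = rng.standard_normal(d)
    log_pi = -0.5 * x @ x                       # log-density of N(0, I_d), up to a constant
    accepts = 0
    for _ in range(n_iter):
        y = x + scale * rng.standard_normal(d)
        log_pi_y = -0.5 * y @ y
        if np.log(rng.random()) < log_pi_y - log_pi:
            x, log_pi = y, log_pi_y
            accepts += 1
    return accepts / n_iter

rate = rwm_acceptance_rate(d=50)                # should land near 0.234
```

The 2.38 / sqrt(d) scaling is the i.i.d.-target benchmark; the point of the paper is precisely that heterogeneous, dimension-dependent scaling terms change the appropriate proposal scaling.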
A theoretical comparison of the data augmentation, marginal augmentation and PX-DA algorithms
 The Annals of Statistics
, 2008
Cited by 20 (10 self)
The data augmentation (DA) algorithm is a widely used Markov chain Monte Carlo (MCMC) algorithm that is based on a Markov transition density of the form p(x | x′) = ∫_Y f_{X|Y}(x | y) f_{Y|X}(y | x′) dy, where f_{X|Y} and f_{Y|X} are conditional densities. The PX-DA and marginal augmentation algorithms of Liu and Wu [J. Amer. Statist. Assoc. 94 (1999) 1264–1274] and Meng and van Dyk [Biometrika 86 (1999) 301–320] are alternatives to DA that often converge much faster and are only slightly more computationally demanding. The transition densities of these alternative algorithms can be written in the form p_R(x | x′) = ∫_Y ∫_Y f_{X|Y}(x | y′) R(y, dy′) f_{Y|X}(y | x′) dy, where R is a Markov transition function on Y. We prove that when R satisfies ...
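A concrete DA chain of exactly this two-conditional form (a classic textbook instance, not the paper's notation): the Student-t density with nu degrees of freedom is the x-marginal of x | y ~ N(0, 1/y) with y ~ Gamma(nu/2, rate = nu/2), so the DA algorithm alternates the two full conditionals.

```python
import numpy as np

# DA chain for a Student-t target via the normal scale-mixture representation:
#   y | x ~ Gamma((nu + 1)/2, rate = (nu + x^2)/2),   x | y ~ N(0, 1/y).
def da_student_t(nu, n_iter, seed=4):
    rng = np.random.default_rng(seed)
    x = 0.0
    draws = np.empty(n_iter)
    for t in range(n_iter):
        y = rng.gamma((nu + 1) / 2, 2.0 / (nu + x * x))   # NumPy takes scale = 1/rate
        x = rng.normal(0.0, 1.0 / np.sqrt(y))
        draws[t] = x
    return draws

xs = da_student_t(nu=5, n_iter=50_000)          # marginal variance should approach nu/(nu-2)
```

In the paper's notation, x is the quantity of interest, y the augmented variable, and plain DA corresponds to R(y, dy′) being a point mass at y; PX-DA and marginal augmentation insert a nontrivial extra move R between the two conditional draws.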
Limit theorems for some adaptive MCMC algorithms with subgeometric kernels. Part II
, 2009
Cited by 18 (2 self)
We prove a central limit theorem for a general class of adaptive Markov chain Monte Carlo algorithms driven by sub-geometrically ergodic Markov kernels. We discuss in detail the special case of stochastic approximation. We use the result to analyze the asymptotic behavior of an adaptive version of the Metropolis-adjusted Langevin algorithm with a heavy-tailed target density.
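The stochastic-approximation flavor of adaptation can be sketched with a simpler stand-in for the paper's adaptive MALA (all settings here are assumptions for illustration): an adaptive random-walk Metropolis sampler whose log proposal scale is updated with diminishing gains so the acceptance rate is steered toward 0.44, the usual one-dimensional target.

```python
import numpy as np

# Adaptive random-walk Metropolis on a standard normal target, log pi(x) = -x^2/2.
rng = np.random.default_rng(5)
target_rate = 0.44
log_scale = 0.0
x, log_pi = 0.0, 0.0
alphas = []
for t in range(1, 50_001):
    y = x + np.exp(log_scale) * rng.standard_normal()
    log_pi_y = -0.5 * y * y
    alpha = min(1.0, np.exp(log_pi_y - log_pi))
    if rng.random() < alpha:
        x, log_pi = y, log_pi_y
    # Robbins-Monro update; gamma_t = t^(-0.7) keeps the adaptation diminishing.
    log_scale += (alpha - target_rate) / t**0.7
    alphas.append(alpha)

final_scale = float(np.exp(log_scale))          # should settle near 2.4 for this target
```

The diminishing gain sequence is what makes limit theorems possible: the adaptation vanishes asymptotically, so the chain behaves more and more like a fixed-kernel sampler.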
Harris Recurrence of Metropolis-within-Gibbs and Trans-dimensional MCMC Algorithms
, 2007
Cited by 10 (1 self)
A φ-irreducible and aperiodic Markov chain with stationary probability distribution will converge to its stationary distribution from almost all starting points. The property of Harris recurrence allows us to replace “almost all” by “all”, which is potentially important when running Markov chain Monte Carlo algorithms. Full-dimensional Metropolis–Hastings algorithms are known to be Harris recurrent. In this paper, we consider conditions under which Metropolis-within-Gibbs and trans-dimensional Markov chains are or are not Harris recurrent. We present a simple but natural two-dimensional counterexample showing how Harris recurrence can fail, and also a variety of positive results which guarantee Harris recurrence. We also present some open problems. We close with a discussion of the practical implications for MCMC algorithms.
Error bounds for computing the expectation by Markov chain Monte Carlo
 preprint, arXiv:0906.2359 (cf. C. Villani, Topics in Optimal Transportation, Graduate Studies in Mathematics 58, American Mathematical Society, 2003)
, 2009
Cited by 9 (0 self)
We study the error of reversible Markov chain Monte Carlo methods for approximating the expectation of a function. Explicit error bounds with respect to the l2-, l4- and l∞-norms of the function are proven. The bounds attain the well-known asymptotic limit of the error, i.e. there is no gap between the estimate and the asymptotic behavior. We discuss the dependence of the error on the burn-in of the Markov chain. Furthermore, we suggest and justify a specific burn-in for optimizing the algorithm.