Results 1–10 of 186
Error bounds for computing the expectation by Markov chain Monte Carlo
, 2009
"... We study the error of reversible Markov chain Monte Carlo methods for approximating the expectation of a function. Explicit error bounds with respect to the l2, l4 and l∞norm of the function are proven. By the estimation the well known asymptotical limit of the error is attained, i.e. there is n ..."
Abstract

Cited by 116 (2 self)
 Add to MetaCart
(Show Context)
We study the error of reversible Markov chain Monte Carlo methods for approximating the expectation of a function. Explicit error bounds with respect to the l2, l4 and l∞ norms of the function are proven. The estimate attains the well-known asymptotic limit of the error, i.e. there is no gap between the estimate and the asymptotic behavior. We discuss the dependence of the error on the burn-in of the Markov chain. Furthermore, we suggest and justify a specific burn-in for optimizing the algorithm.
Fixed-Width Output Analysis for Markov Chain Monte Carlo
, 2005
"... Markov chain Monte Carlo is a method of producing a correlated sample in order to estimate features of a target distribution via ergodic averages. A fundamental question is when should sampling stop? That is, when are the ergodic averages good estimates of the desired quantities? We consider a metho ..."
Abstract

Cited by 98 (31 self)
 Add to MetaCart
Markov chain Monte Carlo is a method of producing a correlated sample in order to estimate features of a target distribution via ergodic averages. A fundamental question is: when should sampling stop? That is, when are the ergodic averages good estimates of the desired quantities? We consider a method that stops the simulation when the width of a confidence interval based on an ergodic average is less than a user-specified value. Hence calculating a Monte Carlo standard error is a critical step in assessing the simulation output. We consider the regenerative simulation and batch means methods of estimating the variance of the asymptotic normal distribution. We give sufficient conditions for the strong consistency of both methods and investigate their finite-sample properties in a variety of examples.
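The batch means construction and the fixed-width stopping rule described in this abstract can be sketched in a few lines. This is a minimal illustration of the standard textbook construction, not the authors' implementation; the number of batches, the tolerance, and the toy AR(1) chain are all assumptions.

```python
import numpy as np

def batch_means_se(chain, n_batches=30):
    """Monte Carlo standard error of the ergodic average, estimated by
    non-overlapping batch means (a minimal sketch of the standard method)."""
    chain = np.asarray(chain, dtype=float)
    batch_size = len(chain) // n_batches
    n = batch_size * n_batches               # trim to a multiple of n_batches
    means = chain[:n].reshape(n_batches, batch_size).mean(axis=1)
    # batch_size * var(batch means) estimates the asymptotic variance sigma^2
    var_asym = batch_size * means.var(ddof=1)
    return float(np.sqrt(var_asym / n))

# Fixed-width stopping rule: keep simulating until the half-width of an
# approximate 95% confidence interval for the ergodic average is below tol.
rng = np.random.default_rng(0)
x, chain, tol = 0.0, [], 0.05
while True:
    x = 0.5 * x + rng.normal()               # toy AR(1) stand-in for MCMC output
    chain.append(x)
    if len(chain) >= 3000 and 1.96 * batch_means_se(chain) < tol:
        break
```

For this AR(1) chain the asymptotic variance is 4, so the rule stops after roughly (1.96 · 2 / 0.05)² ≈ 6000 iterations.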
On the Markov chain central limit theorem. Probability Surveys
, 2004
"... The goal of this mainly expository paper is to describe conditions which guarantee a central limit theorem for functionals of general state space Markov chains with a view towards Markov chain Monte Carlo settings. Thus the focus is on the connections between drift and mixing conditions and their im ..."
Abstract

Cited by 86 (15 self)
 Add to MetaCart
(Show Context)
The goal of this mainly expository paper is to describe conditions which guarantee a central limit theorem for functionals of general state space Markov chains, with a view towards Markov chain Monte Carlo settings. Thus the focus is on the connections between drift and mixing conditions and their implications. In particular, we consider three commonly cited central limit theorems and discuss their relationship to classical results for mixing processes. Several motivating examples are given, ranging from toy one-dimensional settings to complicated settings encountered in Markov chain Monte Carlo.
Markov chain Monte Carlo: Can we trust the third significant figure?
 University of Minnesota, School of Statistics
, 2007
"... Abstract. Current reporting of results based on Markov chain Monte Carlo computations could be improved. In particular, a measure of the accuracy of the resulting estimates is rarely reported. Thus we have little ability to objectively assess the quality of the reported estimates. We address this is ..."
Abstract

Cited by 68 (25 self)
 Add to MetaCart
Current reporting of results based on Markov chain Monte Carlo computations could be improved. In particular, a measure of the accuracy of the resulting estimates is rarely reported. Thus we have little ability to objectively assess the quality of the reported estimates. We address this issue by discussing why Monte Carlo standard errors are important, how they can be easily calculated in Markov chain Monte Carlo, and how they can be used to decide when to stop the simulation. We compare their use to a popular alternative in the context of two examples.
Stochastic Approximation in Monte Carlo Computation
, 2006
"... The WangLandau algorithm is an adaptive Markov chain Monte Carlo algorithm to calculate the spectral density for a physical system. A remarkable feature of the algorithm is that it is not trapped by local energy minima, which is very important for systems with rugged energy landscapes. This feature ..."
Abstract

Cited by 42 (15 self)
 Add to MetaCart
The Wang-Landau algorithm is an adaptive Markov chain Monte Carlo algorithm for calculating the spectral density of a physical system. A remarkable feature of the algorithm is that it is not trapped by local energy minima, which is very important for systems with rugged energy landscapes. This feature has led to many successful applications of the algorithm in statistical physics and biophysics. However, no rigorous theory exists to support its convergence, and the estimates produced by the algorithm can only reach a limited statistical accuracy. In this paper, we propose the stochastic approximation Monte Carlo (SAMC) algorithm, which overcomes the shortcomings of the Wang-Landau algorithm. We establish a theorem concerning its convergence. The estimates produced by SAMC can be improved continuously as the simulation goes on. SAMC also extends applications of the Wang-Landau algorithm to continuum systems. The potential uses of SAMC in statistics are discussed through two classes of applications, importance sampling and model selection. The results show that SAMC can work as a general importance sampling method.
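For reference, the Wang-Landau algorithm that SAMC improves upon can be sketched on a toy discrete system. The flatness criterion, the halving schedule for the modification factor, and the two-level toy system below are illustrative assumptions following the standard recipe; this is not the SAMC algorithm itself.

```python
import numpy as np

def wang_landau(energies, flat_tol=0.2, log_f_final=1e-4, seed=3):
    """Wang-Landau estimate of the log spectral density log g(E) for a toy
    system whose states 0..n-1 carry the given energies. Standard recipe:
    flatten the energy histogram, halving the modification factor each stage."""
    rng = np.random.default_rng(seed)
    energies = np.asarray(energies)
    levels, e_of = np.unique(energies, return_inverse=True)
    log_g = np.zeros(len(levels))        # running log density-of-states estimate
    log_f = 1.0                          # log modification factor
    state = 0
    while log_f > log_f_final:
        hist = np.zeros(len(levels))
        for _ in range(200_000):         # per-stage safety cap (sketch only)
            prop = rng.integers(len(energies))
            # accept with min(1, g(E_current)/g(E_proposed)) to flatten visits
            if np.log(rng.random()) < log_g[e_of[state]] - log_g[e_of[prop]]:
                state = prop
            log_g[e_of[state]] += log_f
            hist[e_of[state]] += 1
            if hist.min() > (1 - flat_tol) * hist.mean():
                break                    # histogram "flat enough": next stage
        log_f /= 2.0
    return log_g - log_g.max()

# Two-level toy system: three states at energy 0, one at energy 1,
# so log g(0) - log g(1) should approach log 3.
log_g = wang_landau([0.0, 0.0, 0.0, 1.0])
```

Because the modification factor is fixed within each stage, the accuracy of log_g is bounded below by the final value of log_f; SAMC's decreasing gain sequence is what removes this limitation.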
Weak convergence of Metropolis algorithms for non-i.i.d. target distributions
, 2007
"... In this paper, we shall optimize the efficiency of Metropolis algorithms for multidimensional target distributions with scaling terms possibly depending on the dimension. We propose a method to determine the appropriate form for the scaling of the proposal distribution as a function of the dimension ..."
Abstract

Cited by 39 (6 self)
 Add to MetaCart
(Show Context)
In this paper, we optimize the efficiency of Metropolis algorithms for multidimensional target distributions with scaling terms possibly depending on the dimension. We propose a method to determine the appropriate form for the scaling of the proposal distribution as a function of the dimension, which leads to the proof of an asymptotic diffusion theorem. We show that when no component has a scaling term significantly smaller than the others, the asymptotically optimal acceptance rate is the well-known 0.234.
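The 0.234 guideline can be checked empirically with a plain random-walk Metropolis sampler. The standard Gaussian target, the proposal scale 2.38/√d, and all settings below are illustrative assumptions, not the paper's framework:

```python
import numpy as np

def rwm_acceptance_rate(dim, scale, n_steps=20_000, seed=1):
    """Empirical acceptance rate of random-walk Metropolis with an
    isotropic Gaussian proposal on a standard Gaussian target."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    log_pi = lambda z: -0.5 * float(z @ z)   # log target density, up to a constant
    accepted = 0
    for _ in range(n_steps):
        prop = x + scale * rng.normal(size=dim)
        # Metropolis accept/reject step
        if np.log(rng.random()) < log_pi(prop) - log_pi(x):
            x = prop
            accepted += 1
    return accepted / n_steps

# Classical guideline: proposal scale ~ 2.38/sqrt(d) yields an
# acceptance rate near 0.234 as the dimension d grows.
d = 50
rate = rwm_acceptance_rate(d, 2.38 / np.sqrt(d))
```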
Curvature, concentration, and error estimates for Markov chain Monte Carlo
"... We provide explicit nonasymptotic estimates for the rate of convergence of empirical means of Markov chains, together with a Gaussian or exponential control on the deviations of empirical means. These estimates hold under a “positive curvature ” assumption expressing a kind of metric ergodicity, wh ..."
Abstract

Cited by 30 (3 self)
 Add to MetaCart
We provide explicit non-asymptotic estimates for the rate of convergence of empirical means of Markov chains, together with a Gaussian or exponential control on the deviations of empirical means. These estimates hold under a "positive curvature" assumption expressing a kind of metric ergodicity, which generalizes the Ricci curvature from differential geometry and, on finite graphs, amounts to contraction under path coupling. The goal of the Markov chain Monte Carlo method is to provide an efficient way to approximate the integral π(f) := ∫ f(x) π(dx) of a function f under a finite measure π on some space X. This approach, which has been very successful, consists in constructing a hopefully easy-to-simulate Markov chain (X1, X2, ..., Xk, ...) on X with stationary distribution π, waiting for a time T0 (the burn-in) so that the chain gets close to its stationary distribution, and then estimating π(f) by the empirical mean over the next T steps of the trajectory, with T large enough.
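The burn-in-then-average recipe described in this abstract is easy to sketch. The AR(1) chain, the burn-in length, and the sample size below are assumptions for illustration:

```python
import numpy as np

def mcmc_mean(step, x0, burn_in, n_samples, f, seed=2):
    """Estimate pi(f) by the empirical mean of f along n_samples steps of a
    Markov chain, after discarding the first burn_in steps. `step` is any
    transition x -> x' with the desired stationary distribution."""
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(burn_in):                # let the chain approach stationarity
        x = step(x, rng)
    total = 0.0
    for _ in range(n_samples):
        x = step(x, rng)
        total += f(x)
    return total / n_samples

# Toy AR(1) chain with stationary distribution N(0, 1/(1 - 0.9**2)),
# so pi(f) = 0 for f(x) = x, even starting far away at x0 = 10.
ar1 = lambda x, rng: 0.9 * x + rng.normal()
est = mcmc_mean(ar1, x0=10.0, burn_in=500, n_samples=50_000, f=lambda x: x)
```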
A theoretical comparison of the data augmentation, marginal augmentation and PX-DA algorithms
 The Annals of Statistics
, 2008
"... The data augmentation (DA) algorithm is a widely used Markov chain Monte Carlo (MCMC) algorithm that is based on a Markov transition density of the form p(xx ′ ) = ∫ Y fXY (xy)fY X(yx ′)dy, and fY X are conditional densities. The PXDA and where fXY marginal augmentation algorithms of Liu an ..."
Abstract

Cited by 30 (15 self)
 Add to MetaCart
(Show Context)
The data augmentation (DA) algorithm is a widely used Markov chain Monte Carlo (MCMC) algorithm based on a Markov transition density of the form p(x | x′) = ∫_Y f_{X|Y}(x | y) f_{Y|X}(y | x′) dy, where f_{X|Y} and f_{Y|X} are conditional densities. The PX-DA and marginal augmentation algorithms of Liu and Wu [J. Amer. Statist. Assoc. 94 (1999) 1264–1274] and Meng and van Dyk [Biometrika 86 (1999) 301–320] are alternatives to DA that often converge much faster and are only slightly more computationally demanding. The transition densities of these alternative algorithms can be written in the form p_R(x | x′) = ∫_Y ∫_Y f_{X|Y}(x | y′) R(y, dy′) f_{Y|X}(y | x′) dy, where R is a Markov transition function on Y. We prove that when R satisfies ...
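The DA transition p(x | x′) is realized by alternating draws from the two conditionals. A toy Gaussian instance is sketched below; the specific model is an assumption chosen so that both conditionals are available in closed form, not a model from the paper:

```python
import numpy as np

def da_sampler(n_steps, x0=0.0, seed=4):
    """Data augmentation for the toy model Y ~ N(0,1), X | Y ~ N(Y,1),
    so marginally X ~ N(0,2) and Y | X ~ N(X/2, 1/2). One DA step draws
    y ~ f_{Y|X}(. | x) then x ~ f_{X|Y}(. | y), which realizes the
    transition density p(x | x') from the abstract."""
    rng = np.random.default_rng(seed)
    xs = np.empty(n_steps)
    x = x0
    for i in range(n_steps):
        y = rng.normal(x / 2.0, np.sqrt(0.5))   # y ~ f_{Y|X}(. | x)
        x = rng.normal(y, 1.0)                  # x ~ f_{X|Y}(. | y)
        xs[i] = x
    return xs

xs = da_sampler(100_000)   # draws should match the N(0, 2) marginal of X
```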
Limit theorems for some adaptive MCMC algorithms with subgeometric kernels. Part II
, 2009
"... We prove a central limit theorem for a general class of adaptive Markov Chain Monte Carlo algorithms driven by subgeometrically ergodic Markov kernels. We discuss in detail the special case of stochastic approximation. We use the result to analyze the asymptotic behavior of an adaptive version of ..."
Abstract

Cited by 29 (4 self)
 Add to MetaCart
(Show Context)
We prove a central limit theorem for a general class of adaptive Markov chain Monte Carlo algorithms driven by subgeometrically ergodic Markov kernels. We discuss in detail the special case of stochastic approximation. We use the result to analyze the asymptotic behavior of an adaptive version of the Metropolis-adjusted Langevin algorithm with a heavy-tailed target density.