Results 1–10 of 68
General state space Markov chains and MCMC algorithms
 PROBABILITY SURVEYS
, 2004
Abstract

Cited by 114 (27 self)
This paper surveys various results about Markov chains on general (non-countable) state spaces. It begins with an introduction to Markov chain Monte Carlo (MCMC) algorithms, which provide the motivation and context for the theory which follows. Then, sufficient conditions for geometric and uniform ergodicity are presented, along with quantitative bounds on the rate of convergence to stationarity. Many of these results are proved using direct coupling constructions based on minorisation and drift conditions. Necessary and sufficient conditions for Central Limit Theorems (CLTs) are also presented, in some cases proved via the Poisson Equation or direct regeneration constructions. Finally, optimal scaling and weak convergence results for Metropolis-Hastings algorithms are discussed. None of the results presented is new, though many of the proofs are. We also describe some Open Problems.
Geometric ergodicity of Metropolis algorithms
 STOCHASTIC PROCESSES AND THEIR APPLICATIONS
, 1998
Abstract

Cited by 57 (2 self)
In this paper we derive conditions for geometric ergodicity of the random-walk-based Metropolis algorithm on R^k. We show that at least exponentially light tails of the target density is a necessity. This extends the one-dimensional result of Mengersen and Tweedie (1996). For subexponential target densities we characterize the geometrically ergodic algorithms and we derive a practical sufficient condition which is stable under addition and multiplication. This condition is especially satisfied for the class of densities considered in Roberts and Tweedie (1996).
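The random-walk Metropolis algorithm analyzed in this paper can be sketched in a few lines. This is a minimal illustrative implementation, not code from the paper; the function name and the standard-normal example target (which has exponentially light tails, so the sufficient condition for geometric ergodicity applies) are my own choices:

```python
import numpy as np

def rw_metropolis(log_target, x0, n_iter, step=1.0, rng=None):
    """Random-walk Metropolis sampler on R^k (illustrative sketch).

    log_target: log of an unnormalized target density on R^k.
    """
    rng = np.random.default_rng(rng)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    chain = np.empty((n_iter, x.size))
    lp = log_target(x)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal(x.size)  # symmetric Gaussian proposal
        lp_prop = log_target(prop)
        # Accept with probability min(1, pi(prop)/pi(x))
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Example: standard normal target on R^1 (exponentially light tails).
chain = rw_metropolis(lambda x: -0.5 * np.sum(x**2), x0=0.0,
                      n_iter=5000, step=2.0, rng=1)
```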
Fixed-width output analysis for Markov chain Monte Carlo
 JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
, 2006
Abstract

Cited by 48 (17 self)
Markov chain Monte Carlo is a method of producing a correlated sample in order to estimate features of a target distribution via ergodic averages. A fundamental question is: when should sampling stop? That is, when are the ergodic averages good estimates of the desired quantities? We consider a method that stops the simulation when the width of a confidence interval based on an ergodic average is less than a user-specified value. Hence calculating a Monte Carlo standard error is a critical step in assessing the simulation output. We consider the regenerative simulation and batch means methods of estimating the variance of the asymptotic normal distribution. We give sufficient conditions for the strong consistency of both methods and investigate their finite-sample properties in a variety of examples.
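The batch means method and fixed-width stopping rule described in this abstract can be sketched as follows. This is a minimal illustration only: the batch count floor(sqrt(n)) is an assumed common choice, the tolerance 0.05 is arbitrary, and iid normal draws stand in for actual MCMC output:

```python
import numpy as np

def batch_means_se(x, n_batches=None):
    """Batch-means estimate of the Monte Carlo standard error of mean(x)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    a = n_batches or int(np.sqrt(n))     # number of batches
    b = n // a                           # batch length
    means = x[: a * b].reshape(a, b).mean(axis=1)
    # Sample variance of the batch means, scaled by the batch length,
    # estimates the asymptotic variance in the Markov chain CLT.
    var_hat = b * means.var(ddof=1)
    return np.sqrt(var_hat / n)

# Fixed-width rule: stop when the half-width of a 95% interval
# around the ergodic average falls below a user-specified eps.
rng = np.random.default_rng(0)
x = rng.standard_normal(40000)           # stand-in for MCMC output
half_width = 1.96 * batch_means_se(x)
stop = half_width < 0.05                 # eps = 0.05
```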
On the Markov chain central limit theorem
 Probability Surveys
, 2004
Abstract

Cited by 46 (11 self)
The goal of this mainly expository paper is to describe conditions which guarantee a central limit theorem for functionals of general state space Markov chains, with a view towards Markov chain Monte Carlo settings. Thus the focus is on the connections between drift and mixing conditions and their implications. In particular, we consider three commonly cited central limit theorems and discuss their relationship to classical results for mixing processes. Several motivating examples are given which range from toy one-dimensional settings to complicated settings encountered in Markov chain Monte Carlo.
Renewal theory and computable convergence rates for geometrically ergodic Markov chains
, 2003
Abstract

Cited by 41 (0 self)
We give computable bounds on the rate of convergence of the transition probabilities to the stationary distribution for a certain class of geometrically ergodic Markov chains. Our results are different from earlier estimates of Meyn and Tweedie, and from estimates using coupling, although we start from essentially the same assumptions of a drift condition toward a "small set." The estimates show a noticeable improvement on existing results if the Markov chain is reversible with respect to its stationary distribution, and especially so if the chain is also positive. The method of proof uses the first-entrance–last-exit decomposition, together with new quantitative versions of a result of Kendall from discrete renewal theory.
Markov Chain Decomposition for Convergence Rate Analysis
Abstract

Cited by 40 (8 self)
In this paper we develop tools for analyzing the rate at which a reversible Markov chain converges to stationarity. Our techniques are useful when the Markov chain can be decomposed into pieces which are themselves easier to analyze. The main theorems relate the spectral gap of the original Markov chain to the spectral gaps of the pieces. In the first case the pieces are restrictions of the Markov chain to subsets of the state space; the second case treats a Metropolis-Hastings chain whose equilibrium distribution is a weighted average of the equilibrium distributions of other Metropolis-Hastings chains on the same state space.
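The quantity being related here, the spectral gap, can be computed directly for a small reversible chain. A toy sketch (the two-state example and function name are my own, not taken from the paper):

```python
import numpy as np

def spectral_gap(P):
    """Spectral gap 1 - max_{i>=2} |lambda_i| of a transition matrix P.

    For a reversible chain the eigenvalues are real; we sort the moduli
    and drop the leading eigenvalue 1. Illustrative sketch only.
    """
    eig = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 - eig[1]

# Two-state reversible chain: stays with prob 1-p, flips with prob p.
# Its eigenvalues are 1 and 1 - 2p, so the gap is 2p.
p = 0.3
P = np.array([[1 - p, p], [p, 1 - p]])
gap = spectral_gap(P)
```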
On the Applicability of Regenerative Simulation in Markov Chain Monte Carlo
, 2001
Abstract

Cited by 35 (24 self)
We consider the central limit theorem and the calculation of asymptotic standard errors for the ergodic averages constructed in Markov chain Monte Carlo. Chan & Geyer (1994) established a central limit theorem for ergodic averages by assuming that the underlying Markov chain is geometrically ergodic and that a simple moment condition is satisfied. While it is relatively straightforward to check Chan and Geyer's conditions, their theorem does not lead to a consistent and easily computed estimate of the variance of the asymptotic normal distribution. Conversely, Mykland, Tierney & Yu (1995) discuss the use of regeneration to establish an alternative central limit theorem with the advantage that a simple, consistent estimate of the asymptotic variance is readily available. However, their result assumes a pair of unwieldy moment conditions whose verification is difficult in practice. In this paper, we show that the conditions of Chan and Geyer's theorem are sufficient to establish Mykland, Tierney, and Yu's central limit theorem. This result, in conjunction with other recent developments, should pave the way for more widespread use of the regenerative method in Markov chain Monte Carlo. Our results are applied to the slice sampler for illustration.
Geometric Ergodicity of Gibbs and Block Gibbs Samplers for a Hierarchical Random Effects Model
, 1998
Abstract

Cited by 33 (8 self)
We consider fixed-scan Gibbs and block Gibbs samplers for a Bayesian hierarchical random effects model with proper conjugate priors. A drift condition given in Meyn and Tweedie (1993, Chapter 15) is used to show that these Markov chains are geometrically ergodic. Showing that a Gibbs sampler is geometrically ergodic is the first step towards establishing central limit theorems, which can be used to approximate the error associated with Monte Carlo estimates of posterior quantities of interest. Thus, our results will be of practical interest to researchers using these Gibbs samplers for Bayesian data analysis.
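A fixed-scan Gibbs sampler of the kind studied here can be illustrated on a toy target. The example below is a stand-in of my own choosing (a bivariate normal with correlation rho, whose full conditionals are X|Y=y ~ N(rho*y, 1-rho^2) and symmetrically), not the paper's hierarchical random effects model:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter, rng=None):
    """Fixed-scan two-component Gibbs sampler for a bivariate normal
    with unit marginal variances and correlation rho (toy example)."""
    rng = np.random.default_rng(rng)
    x = y = 0.0
    out = np.empty((n_iter, 2))
    s = np.sqrt(1 - rho**2)  # conditional standard deviation
    for i in range(n_iter):
        x = rho * y + s * rng.standard_normal()  # draw X | Y = y
        y = rho * x + s * rng.standard_normal()  # draw Y | X = x
        out[i] = x, y
    return out

draws = gibbs_bivariate_normal(rho=0.9, n_iter=20000, rng=0)
```

Each update is a linear autoregression, which is why a drift condition of the Meyn–Tweedie type is easy to exhibit for such toy chains.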
Ordering Monte Carlo Markov Chains
 School of Statistics, University of Minnesota
, 1999
Abstract

Cited by 21 (5 self)
Markov chains having the same stationary distribution π can be partially ordered by performance in the central limit theorem. We say that one chain is at least as good as another in the efficiency partial ordering if the variance in the central limit theorem is at least as small for every L^2(π) functional of the chain. Peskun partial ordering implies efficiency partial ordering [25, 30]. Here we show that Peskun partial ordering implies, for finite state spaces, ordering of all the eigenvalues of the transition matrices, and, for general state spaces, ordering of the suprema of the spectra of the transition operators. We also define a covariance partial ordering based on lag-one autocovariances and show that it is equivalent to the efficiency partial ordering when restricted to reversible Markov chains. Similar but weaker results are provided for nonreversible Markov chains.
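The eigenvalue ordering claimed for finite state spaces can be checked numerically on a toy example of my own (not from the paper): on a two-state space with uniform stationary distribution, Q Peskun-dominates P when its off-diagonal entries are larger, and every eigenvalue of Q should then lie below the corresponding eigenvalue of P:

```python
import numpy as np

# Two reversible chains with the same (uniform) stationary distribution.
P = np.array([[0.7, 0.3], [0.3, 0.7]])   # eigenvalues 1 and 0.4
Q = np.array([[0.5, 0.5], [0.5, 0.5]])   # eigenvalues 1 and 0.0
# Both matrices are symmetric, so eigvalsh applies and eigenvalues are real.
eig_P = np.sort(np.linalg.eigvalsh(P))[::-1]
eig_Q = np.sort(np.linalg.eigvalsh(Q))[::-1]
# Peskun dominance of Q over P should order the sorted spectra elementwise.
ordered = np.all(eig_Q <= eig_P + 1e-12)
```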
A theoretical comparison of the data augmentation, marginal augmentation and PX-DA algorithms
 The Annals of Statistics
, 2008
Abstract

Cited by 20 (10 self)
The data augmentation (DA) algorithm is a widely used Markov chain Monte Carlo (MCMC) algorithm that is based on a Markov transition density of the form p(x|x′) = ∫_Y f_{X|Y}(x|y) f_{Y|X}(y|x′) dy, where f_{X|Y} and f_{Y|X} are conditional densities. The PX-DA and marginal augmentation algorithms of Liu and Wu [J. Amer. Statist. Assoc. 94 (1999) 1264–1274] and Meng and van Dyk [Biometrika 86 (1999) 301–320] are alternatives to DA that often converge much faster and are only slightly more computationally demanding. The transition densities of these alternative algorithms can be written in the form p_R(x|x′) = ∫_Y ∫_Y f_{X|Y}(x|y′) R(y, dy′) f_{Y|X}(y|x′) dy, where R is a Markov transition function on Y. We prove that when R satisfies
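The DA transition above, draw y from f_{Y|X}(·|x′) then x from f_{X|Y}(·|y), can be sketched for one concrete target. The example is the classical normal/gamma augmentation of a Student-t density (an illustration chosen here, not an example from the paper): with nu degrees of freedom, Y|X=x ~ Gamma((nu+1)/2, rate (nu+x²)/2) and X|Y=y ~ N(0, 1/y):

```python
import numpy as np

def da_student_t(nu, n_iter, x0=0.0, rng=None):
    """DA sampler whose stationary distribution is Student-t with nu df
    (classical normal/gamma augmentation; illustrative sketch)."""
    rng = np.random.default_rng(rng)
    x = x0
    out = np.empty(n_iter)
    for i in range(n_iter):
        # Step 1: y ~ f_{Y|X}(.|x); numpy's gamma takes (shape, scale),
        # so scale = 1/rate = 2/(nu + x^2).
        y = rng.gamma((nu + 1) / 2, 2.0 / (nu + x**2))
        # Step 2: x ~ f_{X|Y}(.|y) = N(0, 1/y).
        x = rng.standard_normal() / np.sqrt(y)
        out[i] = x
    return out

draws = da_student_t(nu=5, n_iter=30000, rng=0)
```

Marginally over y the joint density integrates to the t_5 density, so the chain's ergodic averages estimate t_5 moments (variance nu/(nu-2) = 5/3).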