Results 1 – 9 of 9
An Introduction to MCMC for Machine Learning
, 2003
"... This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of ..."
Abstract

Cited by 235 (2 self)
The purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing an introduction to the remaining papers of this special issue. Lastly, it discusses interesting new research horizons.
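The basic Markov chain Monte Carlo building block this survey introduces can be sketched as a random-walk Metropolis sampler. The code below is an illustrative sketch, not taken from the paper; the function name, target density, and tuning values are assumptions for the example:

```python
import numpy as np

def metropolis_hastings(log_density, x0, n_samples, step=1.0, rng=None):
    """Random-walk Metropolis sampler targeting exp(log_density) (sketch)."""
    rng = np.random.default_rng(rng)
    x = float(x0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        # Symmetric Gaussian proposal centered at the current state.
        proposal = x + step * rng.standard_normal()
        # Accept with probability min(1, pi(proposal) / pi(x)).
        if np.log(rng.uniform()) < log_density(proposal) - log_density(x):
            x = proposal
        samples[i] = x
    return samples

# Example target: standard normal, log pi(x) = -x^2/2 up to a constant.
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0,
                            n_samples=20000, step=2.0, rng=0)
```

Because the proposal is symmetric, the Hastings correction cancels and only the density ratio appears in the acceptance test.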
On the use of auxiliary variables in Markov chain Monte Carlo sampling
 Scandinavian Journal of Statistics
, 1997
"... We study the slice sampler, a method of constructing a reversible Markov chain with a specified invariant distribution. Given an independence MetropolisHastings algorithm it is always possible to construct a slice sampler that dominates it in the Peskun sense. This means that the resulting Mark ..."
Abstract

Cited by 16 (1 self)
We study the slice sampler, a method of constructing a reversible Markov chain with a specified invariant distribution. Given an independence Metropolis-Hastings algorithm, it is always possible to construct a slice sampler that dominates it in the Peskun sense. This means that the resulting Markov chain produces estimates with a smaller asymptotic variance. Furthermore, the slice sampler has a smaller second-largest eigenvalue than the corresponding independence Metropolis-Hastings algorithm. This ensures faster convergence to the distribution of interest. A sufficient condition for uniform ergodicity of the slice sampler is given, and an upper bound for the rate of convergence to stationarity is provided.
Keywords: Auxiliary variables, Slice sampler, Peskun ordering, Metropolis-Hastings algorithm, Uniform ergodicity.
1 Introduction The slice sampler is a method of constructing a reversible Markov transition kernel with a given invariant distribution. Auxiliary variables ar...
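The slice sampler studied here augments the target with an auxiliary "height" variable and samples uniformly under the density. A common one-dimensional realization uses stepping-out and shrinkage; the sketch below follows that standard scheme, with function names and tuning values of my own choosing rather than anything from the paper:

```python
import numpy as np

def slice_sample(log_density, x0, n_samples, w=1.0, rng=None):
    """1-D slice sampler with stepping-out and shrinkage (sketch)."""
    rng = np.random.default_rng(rng)
    x = float(x0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        # Auxiliary variable: a uniform height under the density at x.
        log_u = log_density(x) + np.log(rng.uniform())
        # Step out: grow an interval [lo, hi] until both ends leave the slice.
        lo = x - w * rng.uniform()
        hi = lo + w
        while log_density(lo) > log_u:
            lo -= w
        while log_density(hi) > log_u:
            hi += w
        # Shrinkage: sample uniformly in [lo, hi], shrinking toward x on misses.
        while True:
            x_new = rng.uniform(lo, hi)
            if log_density(x_new) > log_u:
                x = x_new
                break
            if x_new < x:
                lo = x_new
            else:
                hi = x_new
        samples[i] = x
    return samples

# Example target: standard normal.
draws = slice_sample(lambda x: -0.5 * x * x, x0=0.0, n_samples=5000, rng=0)
```

Every proposed point is ultimately accepted, which is one intuition for why the slice sampler can dominate an independence Metropolis-Hastings chain in asymptotic variance.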
On Metropolis-Hastings algorithms with delayed rejection
 Metron
, 2001
"... this paper is part of my dissertation completed under his precious and careful guidance at School of Statistics, University of Minnesota ..."
Abstract

Cited by 12 (1 self)
This paper is part of my dissertation, completed under his precious and careful guidance at the School of Statistics, University of Minnesota.
Applications of geometric bounds to the convergence rate of Markov chains on R^n
, 2001
"... Quantitative geometric rates of convergence for reversible Markov chains are closely related to the spectral gap of the corresponding operator, which is hard to calculate for general state spaces. This thesis describes a geometric argument to give different types of bounds for spectral gaps of Marko ..."
Abstract

Cited by 6 (1 self)
Quantitative geometric rates of convergence for reversible Markov chains are closely related to the spectral gap of the corresponding operator, which is hard to calculate for general state spaces. This thesis describes a geometric argument that gives different types of bounds for spectral gaps of Markov chains on bounded subsets of R^n and compares the rates of convergence of different Markov chains. We also extend the discrete-time results to homogeneous continuous-time reversible Markov processes. The limit path bounds and the limit Cheeger's bounds are introduced. Two quantitative examples of one-dimensional diffusions are studied for the limit Cheeger's bounds, and an n-dimensional diffusion is studied for the limit path bounds.
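On a finite state space the spectral gap that controls these convergence rates can be computed directly from the transition matrix, which helps build intuition for the general-state-space bounds the thesis develops. The chain below is a toy example of my own, not one from the thesis:

```python
import numpy as np

# Transition matrix of a reversible birth-death chain on {0, 1, 2}.
# Birth-death chains are always reversible w.r.t. their stationary law.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# Eigenvalue moduli, sorted descending; the largest is always 1.
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
gap = 1.0 - eigvals[1]  # spectral gap = 0.5 for this chain

# Distance to stationarity decays geometrically, on the order of (1 - gap)^t.
print("spectral gap:", gap)
```

For general (continuous) state spaces this eigenvalue computation is unavailable, which is exactly why geometric tools such as path and Cheeger-type bounds are needed.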
Graphical comparison of MCMC performance
 IN PREPARATION
, 2010
"... This paper presents a graphical method for comparing performance of Markov Chain Monte Carlo methods. Most researchers present comparisons of MCMC methods using tables of figures of merit; this paper presents a graphical alternative. It first discusses the computation of autocorrelation time, then u ..."
Abstract

Cited by 2 (1 self)
This paper presents a graphical method for comparing the performance of Markov chain Monte Carlo methods. Most researchers present comparisons of MCMC methods using tables of figures of merit; this paper presents a graphical alternative. It first discusses the computation of autocorrelation time, then uses this to construct a figure of merit: log-density function evaluations per independent observation. It then demonstrates how one can plot this figure of merit against a tuning parameter in a grid of plots in which columns represent sampling methods and rows represent distributions. This type of visualization conveys a greater depth of information without overwhelming the reader with numbers, allowing researchers to put their contributions into a broader context than is possible with a textual presentation.
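The autocorrelation time underlying such figures of merit is the factor by which correlated MCMC draws are less informative than independent ones. A minimal estimator, truncating the autocorrelation sum at the first non-positive term, can be sketched as follows; the function name and the truncation rule are my own simple choices, not the paper's method:

```python
import numpy as np

def autocorr_time(x, max_lag=None):
    """Estimate the integrated autocorrelation time
    tau = 1 + 2 * sum_k rho_k, truncating the sum at the
    first non-positive autocorrelation estimate (sketch)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    var = np.dot(x, x) / n
    max_lag = max_lag or n // 2
    tau = 1.0
    for k in range(1, max_lag):
        rho = np.dot(x[:-k], x[k:]) / (n * var)
        if rho <= 0.0:  # crude initial-positive truncation
            break
        tau += 2.0 * rho
    return tau

# Check on an AR(1) chain x_t = phi * x_{t-1} + noise,
# whose theoretical autocorrelation time is (1 + phi) / (1 - phi).
rng = np.random.default_rng(0)
phi = 0.9
x = np.empty(100_000)
x[0] = 0.0
for t in range(1, len(x)):
    x[t] = phi * x[t - 1] + rng.standard_normal()
tau_hat = autocorr_time(x)  # theory: (1 + 0.9) / (1 - 0.9) = 19
```

Dividing the chain length by tau gives the effective number of independent observations, from which a cost-per-independent-observation figure of merit follows directly.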
LISA source confusion: identification and characterization of signals
 Class. Quantum Grav. 22 (2005) S901–S911
, 2005
"... doi:10.1088/02649381/22/18/S04 LISA source confusion: identification and characterization of signals ..."
Abstract
 Add to MetaCart
doi:10.1088/0264-9381/22/18/S04 LISA source confusion: identification and characterization of signals
Delayed Rejection in Reversible Jump Metropolis-Hastings
 Biometrika
, 1999
"... In a MetropolisHastings algorithm, rejection of proposed moves is an intrinsic part of ensuring that the chain converges to the intended target distribution. However, persistent rejection, perhaps in particular parts of the state space, may indicate that locally the proposal distribution is badly c ..."
Abstract
 Add to MetaCart
In a Metropolis-Hastings algorithm, rejection of proposed moves is an intrinsic part of ensuring that the chain converges to the intended target distribution. However, persistent rejection, perhaps in particular parts of the state space, may indicate that locally the proposal distribution is badly calibrated to the target. As an alternative to careful off-line tuning of state-dependent proposals, the basic algorithm can be modified so that, on rejection, a second attempt to move is made. A different proposal can be generated from a new distribution that is allowed to depend on the previously rejected proposal. We generalise this idea of delaying the rejection and adapting the proposal distribution, due to Tierney and Mira (1999), to generate a more flexible class of methods that, in particular, applies to a variable-dimension setting. The approach is illustrated by two pedagogical examples and a more realistic application to a change-point analysis for point processes.
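In the fixed-dimension case, a two-stage delayed-rejection sampler of the kind this line of work builds on can be sketched with symmetric Gaussian proposals at both stages; the second-stage acceptance ratio corrects for the rejected first-stage move so that detailed balance is preserved. The scales and function names below are illustrative assumptions, not from the paper:

```python
import numpy as np

def dr_metropolis(log_density, x0, n_samples, s1=2.0, s2=0.5, rng=None):
    """Two-stage delayed-rejection Metropolis with symmetric Gaussian
    proposals: a bold move at scale s1, then a cautious retry at s2 (sketch)."""
    rng = np.random.default_rng(rng)
    x = float(x0)
    lp_x = log_density(x)
    out = np.empty(n_samples)
    for i in range(n_samples):
        # Stage 1: ordinary random-walk Metropolis step.
        y1 = x + s1 * rng.standard_normal()
        lp_y1 = log_density(y1)
        a1 = min(1.0, np.exp(lp_y1 - lp_x))
        if rng.uniform() < a1:
            x, lp_x = y1, lp_y1
        else:
            # Stage 2: retry at a different scale; the acceptance probability
            # accounts for the path that would reach x from y2 via y1.
            y2 = x + s2 * rng.standard_normal()
            lp_y2 = log_density(y2)
            a1_rev = min(1.0, np.exp(lp_y1 - lp_y2))  # alpha1(y2 -> y1)
            # log ratio of stage-1 proposal densities q1(y2->y1) / q1(x->y1)
            lq1 = ((y1 - x) ** 2 - (y1 - y2) ** 2) / (2.0 * s1 ** 2)
            num = np.exp(lp_y2 - lp_x + lq1) * (1.0 - a1_rev)
            den = 1.0 - a1
            if den > 0.0 and rng.uniform() < min(1.0, num / den):
                x, lp_x = y2, lp_y2
        out[i] = x
    return out

# Example target: standard normal.
draws = dr_metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000, rng=0)
```

Using a smaller scale at the second stage is one common choice: after a bold move is rejected, a cautious local move still has a good chance of being accepted.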