Results 1–10 of 16
Efficient Sampling using Metropolis Algorithms: Applications of Optimal Scaling Results
, 2006
Abstract

Cited by 8 (5 self)
We recently considered the optimal scaling problem of Metropolis algorithms for multidimensional target distributions with non-IID components. The results that were proven have wide applications, and the aim of this paper is to show how practitioners can take advantage of them. In particular, we illustrate with several examples the case where the asymptotically optimal acceptance rate is the usual 0.234, and also the latest developments where smaller acceptance rates should be adopted for optimal sampling from the target distributions involved. We study the impact of the proposal scaling on the performance of the algorithm, and finally perform simulation studies exploring the efficiency of the algorithm when sampling from some popular statistical models.
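The 0.234 rule referenced throughout this listing can be checked numerically. The sketch below (illustrative only; the function names and the Gaussian target are our assumptions, not this paper's code) runs a random-walk Metropolis sampler on a d-dimensional IID standard normal target with the theoretically optimal proposal scale 2.38/√d and reports the empirical acceptance rate, which should sit near 0.234 for moderately large d.

```python
import numpy as np

def rwm_acceptance(logpi, x0, scale, n_iter, rng):
    """Random-walk Metropolis with isotropic Gaussian proposals;
    returns the empirical acceptance rate."""
    d = len(x0)
    x, lp = np.array(x0, float), logpi(x0)
    accepts = 0
    for _ in range(n_iter):
        y = x + scale * rng.standard_normal(d)   # propose a jump
        lpy = logpi(y)
        if np.log(rng.uniform()) < lpy - lp:     # Metropolis accept/reject
            x, lp = y, lpy
            accepts += 1
    return accepts / n_iter

# IID standard normal target; optimal-scaling theory suggests
# scale ~ 2.38 / sqrt(d), giving acceptance near 0.234 for large d.
rng = np.random.default_rng(0)
d = 50
logpi = lambda x: -0.5 * np.dot(x, x)
rate = rwm_acceptance(logpi, np.zeros(d), 2.38 / np.sqrt(d), 20_000, rng)
```

For finite d the observed rate is typically slightly above the 0.234 limit.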
On the Robustness of Optimal Scaling for Random Walk Metropolis Algorithms
, 2006
Abstract

Cited by 7 (2 self)
In this thesis, we study the optimal scaling problem for sampling from a target distribution of interest using a random walk Metropolis (RWM) algorithm. In order to implement this method, the selection of a proposal distribution is required, which is assumed to be a multivariate normal distribution with independent components. We investigate how the proposal scaling (i.e. the variance of the normal distribution) should be selected for best performance of the algorithm. The d-dimensional target distribution we consider is formed of independent components, each of which has its own scaling term θ_j^{-2}(d) (j = 1, ..., d). This constitutes an extension of the d-dimensional IID target considered by Roberts, Gelman & Gilks (1997), who showed that for large d, the acceptance rate should be tuned to 0.234 for optimal performance of the algorithm. In a similar fashion, we show that for the aforementioned framework, the relative efficiency of the algorithm can be characterized by its overall acceptance rate.
Accelerating Markov chain Monte Carlo simulation using self-adaptive differential evolution
, 2008
VRUGT ET AL.: TREATMENT OF FORCING DATA ERROR USING MCMC SAMPLING
Abstract

Cited by 5 (2 self)
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show
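The core idea behind DREAM, proposing jumps along the difference of two other chains in a population, can be illustrated with a minimal DE-MC update. This is a deliberately simplified sketch (plain DE-MC, not the full DREAM algorithm, which adds randomized subspace sampling and outlier handling); all function names and the Gaussian target are our assumptions.

```python
import numpy as np

def de_mc_sweep(chains, logpi, gamma, eps_scale, rng):
    """One sweep of a minimal DE-MC update: each chain proposes a jump
    along the difference of two other randomly chosen chains, which
    automatically adapts the scale and orientation of the moves."""
    n, d = chains.shape
    for i in range(n):
        r1, r2 = rng.choice([j for j in range(n) if j != i],
                            size=2, replace=False)
        prop = (chains[i] + gamma * (chains[r1] - chains[r2])
                + eps_scale * rng.standard_normal(d))
        if np.log(rng.uniform()) < logpi(prop) - logpi(chains[i]):
            chains[i] = prop
    return chains

rng = np.random.default_rng(1)
logpi = lambda x: -0.5 * np.sum(x**2)        # standard normal target
chains = rng.standard_normal((10, 2)) * 5.0  # over-dispersed start
d = 2
gamma = 2.38 / np.sqrt(2 * d)                # common DE-MC jump factor
for _ in range(2000):
    chains = de_mc_sweep(chains, logpi, gamma, 1e-4, rng)
```

After the sweeps, the population of chains should be distributed approximately as the target.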
Bayesian Inference for Irreducible Diffusion Processes Using the Pseudo-Marginal Approach
, 2010
Abstract

Cited by 2 (0 self)
In this article we examine two relatively new MCMC methods which allow for Bayesian inference in diffusion models. First, the Monte Carlo within Metropolis (MCWM) algorithm (O'Neill, Balding, Becker, Eerola and Mollison, 2000) uses an importance sampling approximation for the likelihood and yields a limiting stationary distribution that can be made arbitrarily "close" to the posterior distribution (MCWM is not a standard Metropolis-Hastings algorithm, however). The second method, described in Beaumont (2003) and generalized in Andrieu and Roberts (2009), introduces auxiliary variables and utilizes a standard Metropolis-Hastings algorithm on the enlarged space; this method preserves the original posterior distribution. When applied to diffusion models, this approach can be viewed as a generalization of the popular data augmentation schemes that sample jointly from the missing paths and the parameters of the diffusion volatility. We show that increasing the number of auxiliary variables dramatically increases the acceptance rates in the MCMC algorithm (compared to basic data augmentation schemes), allowing for rapid convergence and mixing. The efficacy of these methods is demonstrated in a simulation study of the Cox-Ingersoll-Ross (CIR) model and an analysis of a real-world dataset.
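The pseudo-marginal mechanism described above can be sketched in a few lines. The key point is that the noisy likelihood estimate at the current state is recycled rather than recomputed, which is what leaves the exact posterior invariant. This is a toy illustration under our own assumptions (a standard normal target and a synthetic unbiased lognormal-weight estimator standing in for an importance-sampling likelihood estimate), not the diffusion setting of the paper.

```python
import numpy as np

def noisy_logpi(x, n_aux, rng):
    """Log of an unbiased, positive estimate of pi(x): the exact density
    times the mean of n_aux lognormal weights with expectation 1 (a toy
    stand-in for an importance-sampling likelihood estimator)."""
    sigma2 = 1.0
    w = np.exp(rng.normal(-sigma2 / 2, np.sqrt(sigma2), n_aux)).mean()
    return -0.5 * x**2 + np.log(w)

def pseudo_marginal_mh(n_iter, n_aux, rng):
    x, lp = 0.0, noisy_logpi(0.0, n_aux, rng)
    out = np.empty(n_iter)
    for t in range(n_iter):
        y = x + rng.normal(0.0, 1.0)
        lpy = noisy_logpi(y, n_aux, rng)
        # Crucially, lp at the current point is recycled, not recomputed;
        # this is what preserves the exact target distribution.
        if np.log(rng.uniform()) < lpy - lp:
            x, lp = y, lpy
        out[t] = x
    return out

rng = np.random.default_rng(2)
draws = pseudo_marginal_mh(20_000, n_aux=20, rng=rng)
```

Despite the noisy acceptance ratio, the marginal distribution of the draws is exactly the N(0, 1) target; increasing `n_aux` reduces the estimator variance and improves acceptance rates, mirroring the paper's observation about auxiliary variables.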
Adaptive Independence Samplers
, 2007
Abstract

Cited by 2 (0 self)
Markov chain Monte Carlo (MCMC) is an important computational technique for generating samples from non-standard probability distributions. A major challenge in the design of practical MCMC samplers is to achieve efficient convergence and mixing properties. One way to accelerate convergence and mixing is to adapt the proposal distribution in light of previously sampled points, thus increasing the probability of acceptance. In this paper, we propose two new adaptive MCMC algorithms based on the Independent Metropolis-Hastings algorithm. In the first, we adjust the proposal to minimize an estimate of the cross-entropy between the target and proposal distributions, using the experience of pre-runs. This approach provides a general technique for deriving natural adaptive formulae. The second approach uses multiple parallel chains, and involves updating chains individually, then updating a proposal density by fitting a Bayesian model to the population. An important feature of this approach is that adapting the proposal does not change the limiting distributions of the chains. Consequently, the adaptive phase of the sampler can be continued indefinitely. We include results of numerical experiments indicating that the new algorithms compete well with traditional Metropolis-Hastings algorithms. We also demonstrate the method for a realistic problem arising in Comparative Genomics.
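The flavor of the first approach can be conveyed with a simple sketch: run an independence Metropolis-Hastings sampler whose Gaussian proposal has been fitted to draws from a pre-run. Fitting moments of the pre-run is our simplified stand-in for the paper's cross-entropy minimization, and all names and the one-dimensional Gaussian target are illustrative assumptions.

```python
import numpy as np

def independence_mh(logpi, prop_mean, prop_std, x0, n_iter, rng):
    """Independence Metropolis-Hastings with a fixed Gaussian proposal."""
    def logq(x):
        return -0.5 * ((x - prop_mean) / prop_std) ** 2 - np.log(prop_std)
    x = x0
    out = np.empty(n_iter)
    for t in range(n_iter):
        y = rng.normal(prop_mean, prop_std)
        # Independence-sampler ratio: pi(y) q(x) / (pi(x) q(y))
        if np.log(rng.uniform()) < (logpi(y) - logpi(x)) + (logq(x) - logq(y)):
            x = y
        out[t] = x
    return out

# Target: N(3, 2^2). "Adapting" here means fitting the proposal to the
# moments of a pre-run -- a crude proxy for cross-entropy minimization.
rng = np.random.default_rng(3)
logpi = lambda x: -0.5 * ((x - 3.0) / 2.0) ** 2
pre_run = 3.0 + 2.0 * rng.standard_normal(500)   # pretend pre-run draws
draws = independence_mh(logpi, pre_run.mean(), pre_run.std(),
                        0.0, 10_000, rng)
```

When the fitted proposal is close to the target, acceptance is near one and the draws are nearly independent, which is exactly why adapting the proposal pays off.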
Optimal Proposal Distributions and Adaptive MCMC
, 2008
Abstract

Cited by 2 (0 self)
We review recent work concerning optimal proposal scalings for Metropolis-Hastings MCMC algorithms, and adaptive MCMC algorithms, which attempt to improve the algorithm on the fly.
Slice Sampling with Multivariate Steps
, 2011
Abstract

Cited by 1 (0 self)
Markov chain Monte Carlo (MCMC) allows statisticians to sample from a wide variety of multidimensional probability distributions. Unfortunately, MCMC is often difficult to use when components of the target distribution are highly correlated or have disparate variances. This thesis presents three results that attempt to address this problem. First, it demonstrates a means for graphical comparison of MCMC methods, which allows researchers to compare the behavior of a variety of samplers on a variety of distributions. Second, it presents a collection of new slice-sampling MCMC methods. These methods either adapt globally or use the adaptive crumb framework for sampling with multivariate steps. They perform well with minimal tuning on distributions where popular methods do not. Methods in the first group learn an approximation to the covariance of the target distribution and use its eigendecomposition to take non-axis-aligned steps. Methods in the second group use the gradients at rejected proposed moves to approximate the local shape of the target distribution so that subsequent proposals move more efficiently through the state space. Finally, this thesis explores the scaling of slice sampling with multivariate steps with respect to dimension, resulting in a formula for optimally choosing
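As background for the multivariate extensions above, the basic univariate slice sampler with stepping-out (in the style of Neal, 2003) can be written compactly. This sketch is our own illustration on an assumed standard normal target, not code from the thesis.

```python
import numpy as np

def slice_sample_1d(logpi, x0, w, n_iter, rng):
    """Univariate slice sampling with stepping-out and shrinkage."""
    x = x0
    out = np.empty(n_iter)
    for t in range(n_iter):
        # Auxiliary "height" defining the horizontal slice under the density.
        log_y = logpi(x) + np.log(rng.uniform())
        # Step out an interval of width w until it brackets the slice.
        left = x - w * rng.uniform()
        right = left + w
        while logpi(left) > log_y:
            left -= w
        while logpi(right) > log_y:
            right += w
        # Shrink the interval toward x until a point lands on the slice.
        while True:
            x1 = rng.uniform(left, right)
            if logpi(x1) > log_y:
                x = x1
                break
            if x1 < x:
                left = x1
            else:
                right = x1
        out[t] = x
    return out

rng = np.random.default_rng(4)
logpi = lambda x: -0.5 * x**2          # standard normal target
draws = slice_sample_1d(logpi, 0.0, 1.0, 5000, rng)
```

The multivariate methods in the thesis replace the axis-aligned interval with adaptively oriented steps, which is what makes them robust to correlated targets.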
Optimal scaling of Metropolis algorithms: is 0.234 as robust as believed?
, 2007
Abstract

Cited by 1 (0 self)
The Metropolis algorithm with Gaussian proposal distribution is a popular sampling method; it is versatile and easy to implement. Optimal scaling theory aims to improve the speed of convergence of this algorithm to its stationary distribution by carefully selecting its tuning parameter. This paper is an overview of existing optimal scaling results and addresses in more depth the case of high-dimensional target distributions formed of independent, but not identically distributed, components. It attempts to give an intuitive explanation as to when the previously derived optimal acceptance rate of 0.234 is indeed optimal, and when it is unsuitable. In the latter case, it also explains how to find the correct asymptotically optimal acceptance rate, and why we sometimes have to turn to inhomogeneous proposal variances in order to obtain an efficient algorithm. This is all illustrated with a simple example.
unknown title
, 2006
Abstract
Selection of a MCMC simulation strategy via an entropy convergence criterion