Results 1 - 9 of 9
Markov Chain Monte Carlo Convergence Diagnostics
JASA, 1996
Cited by 232 (6 self)
A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including applying diagnostic procedures to a small number of parallel chains, monitoring autocorrelations and cross-correlations, and modifying parameterizations or sampling algorithms appropriately. We emphasize, however, that it is not possible to say with certainty that a finite sample from an MCMC algorithm is representative of an underlying stationary distribution.
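The recommended strategy of applying diagnostics to a small number of parallel chains is what the Gelman-Rubin potential scale reduction factor, one of the diagnostics of this kind, operationalizes. A minimal sketch, with illustrative simulated chains standing in for MCMC output:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for parallel chains.

    chains: array of shape (m, n) -- m parallel chains of length n,
    all tracking a single scalar quantity of interest.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    # Between-chain variance (variance of the chain means, scaled by n).
    B = n * chain_means.var(ddof=1)
    # Within-chain variance, averaged over the chains.
    W = chains.var(axis=1, ddof=1).mean()
    # Pooled estimate of the marginal posterior variance.
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
# Chains all drawing from the same N(0, 1): R-hat should be near 1.
mixed = rng.normal(size=(4, 2000))
# Chains stuck around different means: R-hat far above 1.
stuck = rng.normal(size=(4, 2000)) + np.array([0.0, 0.0, 3.0, 3.0])[:, None]
```

Values near 1 suggest the chains have mixed; as the review stresses, such a diagnostic can still miss convergence failures it was not designed to detect.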
Slice sampling
Annals of Statistics, 2000
Cited by 152 (5 self)
Abstract. Markov chain sampling methods that automatically adapt to characteristics of the distribution being sampled can be constructed by exploiting the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal ‘slice’ defined by the current vertical position, or more generally, with some update that leaves the uniform distribution over this slice invariant. Variations on such ‘slice sampling’ methods are easily implemented for univariate distributions, and can be used to sample from a multivariate distribution by updating each variable in turn. This approach is often easier to implement than Gibbs sampling, and more efficient than simple Metropolis updates, due to the ability of slice sampling to adaptively choose the magnitude of changes made. It is therefore attractive for routine and automated use. Slice sampling methods that update all variables simultaneously are also possible. These methods can adaptively choose the magnitudes of changes made to each variable, based on the local properties of the density function. More ambitiously, such methods could potentially allow the sampling to adapt to dependencies between variables by constructing local quadratic approximations. Another approach is to improve sampling efficiency by suppressing random walks. This can be done using ‘overrelaxed’ versions of univariate slice sampling procedures, or by using ‘reflective’ multivariate slice sampling methods, which bounce off the edges of the slice.
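The univariate procedure described above, alternating a uniform vertical draw with a uniform draw from the horizontal slice, can be sketched using the stepping-out and shrinkage scheme associated with this approach (a toy sketch; the standard normal target and the width `w` are illustrative choices):

```python
import math
import random

def slice_sample(logf, x0, w=1.0, n=1000, seed=0):
    """Univariate slice sampler with stepping-out and shrinkage.

    logf: log of the (possibly unnormalized) target density;
    x0: starting point; w: initial estimate of the slice width.
    """
    rng = random.Random(seed)
    samples, x = [], x0
    for _ in range(n):
        # Vertical move: auxiliary height uniform under the density
        # (log(U) for U ~ Uniform(0, 1) is minus an Exponential(1) draw).
        logy = logf(x) - rng.expovariate(1.0)
        # Step out: grow [l, r] in steps of w until both ends leave the slice.
        l = x - w * rng.random()
        r = l + w
        while logf(l) > logy:
            l -= w
        while logf(r) > logy:
            r += w
        # Shrinkage: propose uniformly in [l, r], shrinking toward x on rejection.
        while True:
            x1 = l + rng.random() * (r - l)
            if logf(x1) > logy:
                x = x1
                break
            if x1 < x:
                l = x1
            else:
                r = x1
        samples.append(x)
    return samples

# Standard normal target, known only up to a constant.
xs = slice_sample(lambda x: -0.5 * x * x, x0=0.0, n=5000)
```

The adaptive interval is what lets the method choose the magnitude of its moves from the density itself, rather than from a hand-tuned proposal scale.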
Auxiliary Variable Methods for Markov Chain Monte Carlo with Applications
Journal of the American Statistical Association, 1997
Cited by 64 (1 self)
Suppose one wishes to sample from the density π(x) using Markov chain Monte Carlo (MCMC). An auxiliary variable u and its conditional distribution π(u|x) can be defined, giving the joint distribution π(x, u) = π(x)π(u|x). An MCMC scheme which samples over this joint distribution can lead to substantial gains in efficiency compared to standard approaches. The revolutionary algorithm of Swendsen and Wang (1987) is one such example. In addition to reviewing the Swendsen-Wang algorithm and its generalizations, this paper introduces a new auxiliary variable method called partial decoupling. Two applications in Bayesian image analysis are considered. The first is a binary classification problem in which partial decoupling outperforms SW and single-site Metropolis. The second is a PET reconstruction which uses the gray level prior of Geman and McClure (1987). A generalized Swendsen-Wang algorithm is developed for this problem, which reduces the computing time to the point that MCMC is a viable ...
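The Swendsen-Wang algorithm reviewed here is the canonical instance of this construction: for an Ising model, the bond variables are the auxiliary u, and conditioning on them decouples the spins into clusters that can be flipped freely. A minimal sketch for a small ferromagnetic Ising model (lattice size, temperature, and data structures are illustrative):

```python
import math
import random

def swendsen_wang_step(spins, L, beta, rng):
    """One Swendsen-Wang update on an L x L torus. Bonds between equal-spin
    neighbours open with probability 1 - exp(-2*beta); the resulting clusters
    are then flipped independently with probability 1/2."""
    parent = {s: s for s in spins}

    def find(a):  # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    p_bond = 1.0 - math.exp(-2.0 * beta)
    for i in range(L):
        for j in range(L):
            # Right and down edges, so each lattice edge is visited once.
            for nb in ((i + 1) % L, j), (i, (j + 1) % L):
                if spins[i, j] == spins[nb] and rng.random() < p_bond:
                    parent[find((i, j))] = find(nb)

    flip = {}
    for s in spins:
        root = find(s)
        if root not in flip:
            flip[root] = rng.random() < 0.5
        if flip[root]:
            spins[s] = -spins[s]

rng = random.Random(1)
L = 8
spins = {(i, j): rng.choice((-1, 1)) for i in range(L) for j in range(L)}
for _ in range(100):
    swendsen_wang_step(spins, L, beta=0.3, rng=rng)
```

Because whole clusters move at once, the chain avoids the slow single-site dynamics that plague Metropolis updates near the critical temperature.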
Markov Chain Monte Carlo Methods Based on 'Slicing' the Density Function
1997
Cited by 46 (0 self)
One way to sample from a distribution is to sample uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal 'slice' defined by the current vertical position. Variations on such 'slice sampling' methods can easily be implemented for univariate distributions, and can be used to sample from a multivariate distribution by updating each variable in turn. This approach is often easier to implement than Gibbs sampling, and may be more efficient than easily-constructed versions of the Metropolis algorithm. Slice sampling is therefore attractive in routine Markov chain Monte Carlo applications, and for use by software that automatically generates a Markov chain sampler from a model specification. One can also easily devise overrelaxed versions of slice sampling, which sometimes greatly improve sampling efficiency ...
Estimating Normalizing Constants and Reweighting Mixtures in Markov Chain Monte Carlo
1994
Cited by 40 (0 self)
Markov chain Monte Carlo (the Metropolis-Hastings algorithm and the Gibbs sampler) is a general multivariate simulation method that permits sampling from any stochastic process whose density is known up to a constant of proportionality. It has recently received much attention as a method of carrying out Bayesian, likelihood, and frequentist inference in analytically intractable problems. Although many applications of Markov chain Monte Carlo do not need estimation of normalizing constants, three do: calculation of Bayes factors, calculation of likelihoods in the presence of missing data, and importance sampling from mixtures. Here reverse logistic regression is proposed as a solution to the problem of estimating normalizing constants, and convergence and asymptotic normality of the estimates are proved under very weak regularity conditions. Markov chain Monte Carlo is most useful when combined with importance reweighting so that a Monte Carlo sample from one distribution can be used for ...
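The basic importance-reweighting identity behind normalizing-constant estimation can be illustrated with a toy example (the two Gaussian targets, the sample size, and exact sampling in place of MCMC are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unnormalized densities: q0 is N(0, 1) and q1 is N(0, sigma^2), sigma = 2,
# so the true ratio of normalizing constants Z0/Z1 is 1/sigma = 0.5.
sigma = 2.0

def log_q0(x):
    return -0.5 * x**2

def log_q1(x):
    return -0.5 * x**2 / sigma**2

# Sample from the broader distribution (exactly here; via MCMC in practice)
# and reweight: Z0/Z1 = E_{p1}[ q0(x) / q1(x) ].
x = rng.normal(scale=sigma, size=200_000)
ratio = np.exp(log_q0(x) - log_q1(x)).mean()
```

Sampling from the broader distribution keeps the weights bounded; reweighting in the opposite direction here would give an infinite-variance estimator, which is one motivation for more robust approaches such as the reverse logistic regression proposed in this paper.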
Bayesian Variable Selection and the Swendsen-Wang Algorithm
Cited by 16 (0 self)
The need to explore model uncertainty in linear regression models with many predictors has motivated improvements in Markov chain Monte Carlo sampling algorithms for Bayesian variable selection. Currently used sampling algorithms for Bayesian variable selection may perform poorly when there are severe multicollinearities among the predictors. This article describes a new sampling method based on an analogy with the Swendsen-Wang algorithm for the Ising model, which can give substantial improvements over alternative sampling schemes in the presence of multicollinearity. In linear regression with a given set of potential predictors we can index possible models by a binary parameter vector that indicates which of the predictors are included or excluded. By thinking of the posterior distribution of this parameter as a binary spatial field, we can use auxiliary variable methods inspired by the Swendsen-Wang algorithm for the Ising model to sample from the posterior where dependence among parameters is reduced by conditioning on auxiliary variables. Performance of the method is described for both simulated and real data.
Incorporating Toxicity Grade Information in the Continual Reassessment Method
1996
The Continual Reassessment Method (CRM) is a Bayesian method for estimating the Maximum Tolerated Dose (MTD) in Phase I cancer clinical trials. In the standard CRM a parametric model is assumed for the dose-toxicity relationship and prior distributions are chosen for the model parameter(s). Parameter estimates are updated sequentially after each patient using the dichotomized toxicity information obtained at that patient's treatment dose. Subsequent patients are then treated at the estimated MTD from the updated model. This approach to dose escalation has been shown to have significant advantages over standard dose-escalation procedures in that fewer patients are required and fewer patients are given doses that are likely to be ineffective. However, this procedure has been recently criticized because doses that are too toxic may also be recommended. We present a modification to the standard CRM in which ordinal toxicity grade information is incorporated into the estimation of the MTD u...
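The standard dichotomous-toxicity CRM update that this modification builds on can be sketched with a common one-parameter power model, p(toxicity at dose i) = skeleton_i^exp(a), and a grid approximation to the posterior. The skeleton, prior, and grid below are illustrative assumptions, and the sketch omits the ordinal-grade extension proposed in the paper:

```python
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])  # prior toxicity guesses
target = 0.25                                         # target toxicity rate
grid = np.linspace(-4.0, 4.0, 801)                    # grid over parameter a
log_prior = -0.5 * grid**2                            # a ~ N(0, 1), up to a constant

def next_dose(doses, toxicities):
    """doses: indices of doses given so far; toxicities: 0/1 outcomes."""
    log_post = log_prior.copy()
    for d, y in zip(doses, toxicities):
        p = skeleton[d] ** np.exp(grid)  # toxicity prob at dose d, for each a
        log_post += np.log(p) if y else np.log1p(-p)
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    # Posterior-mean toxicity at each dose; recommend the dose closest to target.
    p_hat = np.array([(skeleton[d] ** np.exp(grid) * w).sum()
                      for d in range(len(skeleton))])
    return int(np.abs(p_hat - target).argmin())
```

Each new patient's outcome is appended and the recommendation recomputed; observed toxicities shift the posterior toward higher estimated toxicity at every dose, pulling the recommended dose downward.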
Markov Chain Monte Carlo Convergence Diagnostics: A Comparative Review
A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including applying diagnostic procedures to a small number of parallel chains, monitoring autocorrelations and cross-correlations, and modifying parameterizations or sampling algorithms appropriately. We emphasize, however, that it is not possible to say with certainty that a finite sample from an MCMC algorithm is representative of an underlying stationary distribution.
Payphones, Parking-meters, Vending Machines and Bayesian Prediction of Fill-Times
Payphones, parking meters and vending machines illustrate the modern business practice of substituting manpower with machinery. These do not eliminate manual labor completely, for it is still needed to replace full coin boxes and to stock vending machines. This makes fill-time prediction of such coin boxes an important problem. There are risks to both under- and over-estimation. This paper suggests methods to predict, with desired accuracy, the fill-time for such coin boxes. The methodology uses collection history and specifies common prior distributions over average daily fill-rate and standard deviation at each box. The methodology is implemented and tested on data collected on 11,308 payphones over a large geographical region. When the desired accuracy is 95%, our methods provide better fill-rate predictions than current methods in at least 69.9% of the cases. This translates into an average potential collection cost reduction of at least 21%.
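A much simplified version of such a prediction can be sketched as follows. This is not the paper's hierarchical model: it assumes a conjugate normal model for a single box's daily fill-rate with known daily variance, and all prior numbers are illustrative:

```python
import math
from statistics import NormalDist

def predict_fill_time(daily_fills, capacity, accuracy=0.95,
                      prior_mean=10.0, prior_var=25.0, noise_var=4.0):
    """Conservative days-until-full estimate for one coin box.

    Normal-normal update for the mean daily fill-rate (known noise variance),
    then an upper quantile of the rate so the box is collected early enough
    with the desired probability."""
    n = len(daily_fills)
    post_prec = 1.0 / prior_var + n / noise_var
    post_mean = (prior_mean / prior_var + sum(daily_fills) / noise_var) / post_prec
    post_sd = math.sqrt(1.0 / post_prec)
    rate_hi = post_mean + NormalDist().inv_cdf(accuracy) * post_sd
    return capacity / rate_hi

# Four days of observed deposits for a box with capacity 300 coins.
days = predict_fill_time([9.5, 11.0, 10.2, 9.8], capacity=300.0)
```

Using an upper quantile of the posterior fill-rate rather than its mean trades slightly earlier collections for a controlled probability of overflow, which is the asymmetry between under- and over-estimation that the paper highlights.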