Results 1–10 of 179
Exact Sampling with Coupled Markov Chains and Applications to Statistical Mechanics, 1996
Cited by 548 (13 self)
Abstract: For many applications it is useful to sample from a finite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible, aperiodic) Markov chain whose stationary distribution is the desired distribution on this set; after the Markov chain has run for M steps, with M sufficiently large, the distribution governing the state of the chain approximates the desired distribution. Unfortunately, it can be difficult to determine how large M needs to be. We describe a simple variant of this method that determines on its own when to stop, and that outputs samples in exact accordance with the desired distribution. The method uses couplings, which have also played a role in other sampling schemes; however, rather than running the coupled chains from the present into the future, one runs from a distant point in the past up until the present, where the distance into the past that one needs to go is determined during the running of the al...
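The coupling-from-the-past construction summarized above can be sketched on a toy monotone chain, a reflecting random walk whose stationary distribution is uniform. The chain and all names below are illustrative choices, not taken from the paper:

```python
import random

def cftp_sample(n_states=5, seed=0):
    """Coupling from the past on a reflecting walk over {0,...,n_states-1}.

    Runs coupled chains from the bottom (0) and top (n_states-1) states,
    starting ever further in the past while reusing the same randomness,
    until both chains coalesce at time 0. The coalesced state is an exact
    draw from the stationary distribution (uniform for this chain).
    """
    rng = random.Random(seed)
    updates = []            # updates[t] drives the transition at time -(t+1)

    def step(x, u):
        # Monotone update: both chains see the same u, so their order
        # is preserved; the walk reflects (clamps) at the boundaries.
        move = 1 if u < 0.5 else -1
        return min(max(x + move, 0), n_states - 1)

    T = 1
    while True:
        while len(updates) < T:
            updates.append(rng.random())
        lo, hi = 0, n_states - 1
        for t in range(T - 1, -1, -1):   # simulate times -T, ..., -1
            lo, hi = step(lo, updates[t]), step(hi, updates[t])
        if lo == hi:
            return lo                    # exact sample at time 0
        T *= 2                           # go further into the past and retry
```

The key points the sketch preserves are that the randomness for each past time slot is fixed once and reused as the start time recedes, and that coalescence of the extreme chains pins down the state of every intermediate chain.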
Nonlinear Image Recovery with Half-Quadratic Regularization, 1993
Cited by 207 (0 self)
Abstract: One popular method for the recovery of an ideal intensity image from corrupted or indirect measurements is regularization: minimize an objective function which enforces a roughness penalty in addition to coherence with the data. Linear estimates are relatively easy to compute but generally introduce systematic errors; for example, they are incapable of recovering discontinuities and other important image attributes. In contrast, nonlinear estimates are more accurate, but often far less accessible. This is particularly true when the objective function is nonconvex and the distribution of each data component depends on many image components through a linear operator with broad support. Our approach is based on an auxiliary array and an extended objective function in which the original variables appear quadratically and the auxiliary variables are decoupled. Minimizing over the auxiliary array alone yields the original function, so the original image estimate can be obtained by joint min...
Error bounds for computing the expectation by Markov chain Monte Carlo, 2009
Cited by 116 (2 self)
Abstract: We study the error of reversible Markov chain Monte Carlo methods for approximating the expectation of a function. Explicit error bounds with respect to the l2-, l4- and l∞-norms of the function are proven. The estimates attain the well-known asymptotic limit of the error, i.e. there is no gap between the estimate and the asymptotic behavior. We discuss the dependence of the error on the burn-in of the Markov chain. Furthermore, we suggest and justify a specific burn-in for optimizing the algorithm.
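The quantity studied in this paper, the error of an MCMC estimate of an expectation after a burn-in, can be illustrated with a minimal random-walk Metropolis chain on a three-point target. The target, chain length, and burn-in length are illustrative choices, not from the paper:

```python
import random

def mcmc_expectation(weights, n_steps, burn_in, seed=0):
    """Estimate E[f(X)] with f(x) = x under pi(x) proportional to weights
    on {0,...,len(weights)-1}, using a symmetric +/-1 random-walk
    Metropolis chain and discarding the first `burn_in` states."""
    rng = random.Random(seed)
    n = len(weights)
    x, total, kept = 0, 0.0, 0
    for step in range(n_steps):
        y = x + rng.choice((-1, 1))
        # Proposals outside {0,...,n-1} are rejected (the chain stays put),
        # which keeps the proposal symmetric and pi stationary.
        if 0 <= y < n and rng.random() < min(1.0, weights[y] / weights[x]):
            x = y
        if step >= burn_in:
            total += x
            kept += 1
    return total / kept

# pi proportional to (1, 2, 3) on {0, 1, 2}; the exact expectation is 8/6.
est = mcmc_expectation((1.0, 2.0, 3.0), n_steps=50000, burn_in=1000, seed=1)
```

The estimate converges at the usual Monte Carlo rate, with the burn-in controlling the bias contributed by the non-stationary initial segment of the chain.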
Auxiliary Variable Methods for Markov Chain Monte Carlo with Applications
Journal of the American Statistical Association, 1997
Cited by 85 (1 self)
Abstract: Suppose one wishes to sample from the density π(x) using Markov chain Monte Carlo (MCMC). An auxiliary variable u and its conditional distribution π(u|x) can be defined, giving the joint distribution π(x, u) = π(x)π(u|x). An MCMC scheme which samples over this joint distribution can lead to substantial gains in efficiency compared to standard approaches. The revolutionary algorithm of Swendsen and Wang (1987) is one such example. In addition to reviewing the Swendsen-Wang algorithm and its generalizations, this paper introduces a new auxiliary variable method called partial decoupling. Two applications in Bayesian image analysis are considered. The first is a binary classification problem in which partial decoupling outperforms SW and single-site Metropolis. The second is a PET reconstruction which uses the gray-level prior of Geman and McClure (1987). A generalized Swendsen-Wang algorithm is developed for this problem, which reduces the computing time to the point that MCMC is a viabl...
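The generic construction π(x, u) = π(x)π(u|x) can be made concrete with the simplest auxiliary-variable scheme, a slice sampler, in which u|x is uniform on (0, π(x)) and x|u is uniform on the slice {x : π(x) > u}. The discrete target below is an illustrative stand-in, not an example from the paper:

```python
import random

def slice_sampler(weights, n_steps, seed=0):
    """Auxiliary-variable (slice) sampler for pi(x) proportional to
    `weights` on {0,...,len(weights)-1}: a Gibbs sampler on the joint
    distribution that alternately draws u | x ~ Uniform(0, w[x]) and
    x | u uniform on the slice {k : w[k] > u}."""
    rng = random.Random(seed)
    x = 0
    samples = []
    for _ in range(n_steps):
        u = rng.uniform(0.0, weights[x])                    # u | x
        slice_set = [k for k, w in enumerate(weights) if w > u]
        x = rng.choice(slice_set)                           # x | u
        samples.append(x)
    return samples
```

Marginally over u, the chain leaves π invariant, which is the efficiency-relevant property the joint construction is designed to preserve.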
Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models
PROC. IEEE, 2008
Cited by 84 (5 self)
Abstract: Inference for Dirichlet process hierarchical models is typically performed using Markov chain Monte Carlo methods, which can be roughly categorised into marginal and conditional methods. The former analytically integrate out the infinite-dimensional component of the hierarchical model and sample from the marginal distribution of the remaining variables using the Gibbs sampler. Conditional methods impute the Dirichlet process and update it as a component of the Gibbs sampler. Since this requires imputation of an infinite-dimensional process, implementation of the conditional method has relied on finite approximations. In this paper we show how to avoid such approximations by designing two novel Markov chain Monte Carlo algorithms which sample from the exact posterior distribution of quantities of interest. The approximations are avoided by the new technique of retrospective sampling. We also show how the algorithms can obtain samples from functionals of the Dirichlet process. The marginal and the conditional methods are compared, and a careful simulation study is included, which involves a nonconjugate model, different datasets and prior specifications.
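The retrospective idea, simulating the infinite-dimensional object only as far as a given decision requires, can be sketched for drawing a component label from Dirichlet-process stick-breaking weights. The function and parameter names are illustrative, not the paper's:

```python
import random

def retrospective_label(alpha, rng):
    """Draw a component index from the stick-breaking weights of a DP(alpha)
    without truncation: sticks V_k ~ Beta(1, alpha) are generated lazily,
    only until the cumulative weight covers the uniform draw u."""
    u = rng.random()
    cum, remaining, k = 0.0, 1.0, 0
    while cum <= u:
        v = rng.betavariate(1.0, alpha)
        cum += v * remaining      # weight w_k = V_k * prod_{j<k} (1 - V_j)
        remaining *= 1.0 - v
        k += 1
    return k                      # 1-based label; only k sticks were needed
```

Because the weights sum to one almost surely, the loop terminates after finitely many sticks, so no finite approximation of the infinite weight sequence is ever imposed.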
Convergence rates of Markov chains, 1995
Cited by 76 (4 self)
Abstract: In this paper, we attempt to describe various mathematical techniques which have been used to bound such rates of convergence. In particular, we describe eigenvalue analysis, random walks on groups, coupling, and minorization conditions. Connections are made to modern areas of research wherever possible. Elements of linear algebra, probability theory, group theory, and measure theory are used, but efforts are made to keep the presentation elementary and accessible.
BUGS for a Bayesian Analysis of Stochastic Volatility Models, 2000
Cited by 58 (17 self)
Abstract: This paper reviews the general Bayesian approach to parameter estimation in stochastic volatility models with posterior computations performed by Gibbs sampling. The main purpose is to illustrate the ease with which the Bayesian stochastic volatility model can now be studied routinely via BUGS (Bayesian Inference Using Gibbs Sampling), a recently developed, user-friendly, and freely available software package. It is an ideal software tool for the exploratory phase of model building, as any modifications of a model, including changes of priors and sampling error distributions, are readily realized with only minor changes of the code. However, due to the single-move Gibbs sampler, convergence can be slow. BUGS automates the calculation of the full conditional posterior distributions using a model representation by directed acyclic graphs. It contains an expert system for choosing an effective sampling method for each full conditional. Furthermore, software for convergence diagnostics and statistical summaries is available for the BUGS output.
Some Adaptive Monte Carlo Methods for Bayesian Inference
Statistics in Medicine
Cited by 49 (6 self)
Abstract: This paper outlines some of the issues in developing adaptive methods and presents some preliminary results.
Slice Sampling Mixture Models
Cited by 42 (1 self)
Abstract: We propose a more efficient version of the slice sampler for Dirichlet process mixture models described by Walker (2007). This sampler allows the fitting of infinite mixture models with a wide range of prior specifications. To illustrate this flexibility we develop a new nonparametric prior for mixture models by normalizing an infinite sequence of independent positive random variables, and show how the slice sampler can be applied to make inference in this model. Two submodels are studied in detail. The first assumes that the positive random variables are Gamma distributed and the second assumes that they are inverse-Gaussian distributed. Both priors have two hyperparameters, and we consider their effect on the prior distribution of the number of occupied clusters in a sample. Extensive computational comparisons with alternative "conditional" simulation techniques for mixture models, using the standard Dirichlet process prior and our new prior, are made. The properties of the new prior are illustrated on a density estimation problem.
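The slice idea for mixture models, conditioning on an auxiliary level u so that only finitely many components can be active, can be sketched for a single allocation draw under a standard Dirichlet process prior. All names and the flat-likelihood usage below are illustrative, and the paper's more efficient sampler and new normalized prior are not reproduced here:

```python
import math
import random

def slice_allocation(u, alpha, loglik, rng):
    """One slice-style allocation draw for a DP mixture. Given slice level u
    (drawn in the full sampler as u ~ Uniform(0, w_{z_i}), so the slice is
    nonempty), the components with w_k > u are finitely many because the
    weights sum to 1. Stick-breaking weights are generated lazily: once the
    remaining mass falls below u, no later weight can exceed u."""
    weights, remaining = [], 1.0
    while remaining > u:
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    active = [k for k, w in enumerate(weights) if w > u]
    # Sample a label among the active components, with probability
    # proportional to the data likelihood of each component.
    probs = [math.exp(loglik(k)) for k in active]
    return rng.choices(active, weights=probs)[0]
```

The point of the construction is that the auxiliary level turns an infinite mixture into a finite one at every sweep, without a fixed truncation level.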