Results 1–10 of 45
Computational and Inferential Difficulties With Mixture Posterior Distributions
Journal of the American Statistical Association, 1999
Cited by 113 (12 self)
This paper deals with both exploration and interpretation problems related to posterior distributions for mixture models. The specification of mixture posterior distributions means that the presence of k! modes is known immediately. Standard Markov chain Monte Carlo techniques usually have difficulties with well-separated modes such as occur here; the Markov chain Monte Carlo sampler stays within a neighbourhood of a local mode and fails to visit other equally important modes. We show that exploration of these modes can be imposed on the Markov chain Monte Carlo sampler using tempered transitions based on Langevin algorithms. However, as the prior distribution does not distinguish between the different components, the posterior mixture distribution is symmetric and thus standard estimators such as posterior means cannot be used. Since this is also true for most non-symmetric priors, we propose alternatives for Bayesian inference for permutation-invariant posteriors, including a cluster...
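The label-switching symmetry described in this abstract is easy to see in a toy example. The sketch below (an illustration only, not the paper's proposed permutation-invariant estimators; the numbers are fabricated stand-ins for MCMC output) fakes draws of the two component means of a 2-component mixture whose chain visits both of the 2! symmetric modes, and contrasts the naive posterior mean with a simple order-constraint relabeling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MCMC output: draws of the two component means of a
# 2-component mixture whose chain visits both of the 2! symmetric modes,
# so each draw is either (mu1, mu2) near (-3, 3) or the permuted (3, -3).
draws = rng.normal([-3.0, 3.0], 0.1, size=(5000, 2))
flip = rng.random(5000) < 0.5
draws[flip] = draws[flip, ::-1]

# Naive posterior means are useless: the symmetry averages the two modes.
print(draws.mean(axis=0))        # both coordinates near 0, not near -3 and 3

# A simple (imperfect) remedy: impose an identifiability constraint by
# relabeling each draw so that mu1 < mu2.
relabeled = np.sort(draws, axis=1)
print(relabeled.mean(axis=0))    # near (-3, 3)
```

The ordering constraint is the crudest fix; the paper argues for alternatives (loss functions and clustering on the permutation-invariant posterior) precisely because such constraints can distort inference when components overlap.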
Controlled MCMC for Optimal Sampling
, 2001
Cited by 37 (6 self)
In this paper we develop an original and general framework for automatically optimizing the statistical properties of Markov chain Monte Carlo (MCMC) samples, which are typically used to evaluate complex integrals. The Metropolis–Hastings algorithm is the basic building block of classical MCMC methods and requires the choice of a proposal distribution, which usually belongs to a parametric family. The correlation properties together with the exploratory ability of the Markov chain heavily depend on the choice of the proposal distribution. By monitoring the simulated path, our approach allows us to learn "on the fly" the optimal parameters of the proposal distribution for several statistical criteria.

Keywords: Monte Carlo, adaptive MCMC, calibration, stochastic approximation, gradient method, optimal scaling, random walk, Langevin, Gibbs, controlled Markov chain, learning algorithm, reversible jump MCMC.

1. Motivation
1.1. Introduction
Markov chain Monte Carlo (MCMC) is a general strategy for generating samples $x_i$ ($i = 0, 1, \ldots$) from complex high-dimensional distributions $\pi$, say defined on a space $\mathcal{X} \subseteq \mathbb{R}^{n_x}$, from which integrals of the type $I(f) = \int_{\mathcal{X}} f(x)\,\pi(x)\,dx$ can be calculated using the estimator $\hat{I}_N(f) = \frac{1}{N+1} \sum_{i=0}^{N} f(x_i)$, provided that the Markov chain produced is ergodic. The main building block of this class of algorithms is the Metropolis–Hastings (MH) algorithm. It requires the definition of a proposal distribution $q$ whose role is to generate possible transitions for the Markov chain, say from $x$ to $y$, which are then accepted or rejected according to the probability $\alpha(x, y) = \min\left\{1, \frac{\pi(y)\, q(y, x)}{\pi(x)\, q(x, y)}\right\}$. The simplicity and universality of this algorithm are both its strength and weakness. The choice of ...
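The Metropolis–Hastings recipe summarized in this extract fits in a few lines of code. The following random-walk sampler is an illustrative baseline (not the paper's controlled-MCMC framework; the function name and step size are ours): with a symmetric proposal $q(x, y) = q(y, x)$, the acceptance probability $\alpha(x, y)$ reduces to $\min\{1, \pi(y)/\pi(x)\}$.

```python
import numpy as np

def metropolis_hastings(log_pi, x0, n_steps, step=1.0, seed=None):
    """Random-walk Metropolis sampler for an unnormalized log-density log_pi.

    With a symmetric proposal q(x, y) = q(y, x), the acceptance probability
    alpha(x, y) = min{1, pi(y) q(y, x) / (pi(x) q(x, y))} reduces to
    min{1, pi(y) / pi(x)}, so only log_pi is needed.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_steps + 1,) + x.shape)
    chain[0] = x
    lp = log_pi(x)
    for i in range(1, n_steps + 1):
        y = x + step * rng.standard_normal(x.shape)  # symmetric proposal
        ly = log_pi(y)
        if np.log(rng.random()) < ly - lp:           # accept w.p. alpha(x, y)
            x, lp = y, ly
        chain[i] = x
    return chain

# Estimate I(f) for f(x) = x and f(x) = x**2 under a standard normal target.
chain = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0,
                            n_steps=50000, step=2.4, seed=0)
print(chain[1000:].mean(), (chain[1000:] ** 2).mean())  # approx 0 and 1
```

The hard-coded `step=2.4` is exactly the kind of tuning constant the paper's framework learns "on the fly": too small a step yields high correlation, too large a step yields high rejection, and the optimum depends on the target.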
Learn From Thy Neighbor: ParallelChain and Regional Adaptive MCMC
, 2009
Cited by 17 (12 self)
Starting with the seminal paper of Haario, Saksman and Tamminen (Haario et al. (2001)), a substantial amount of work has been done to validate adaptive Markov chain Monte Carlo algorithms. In this paper we focus on two practical aspects of adaptive Metropolis samplers. First, we draw attention to the deficient performance of standard adaptation when the target distribution is multimodal. We propose a parallel chain adaptation strategy that incorporates multiple Markov chains which are run in parallel. Second, we note that the current adaptive MCMC paradigm implicitly assumes that the adaptation is uniformly efficient on all regions of the state space. However, in many practical instances, different “optimal” kernels are needed in different regions of the state space. We propose here a regional adaptation algorithm in which we account for possible errors made in defining the adaptation regions. This corresponds to the more realistic case in which one does not know exactly the optimal regions for adaptation. The methods focus on the random walk Metropolis sampling algorithm but their scope is much wider. We provide theoretical justification for the two adaptive approaches using the existing theory built for adaptive Markov chain Monte Carlo. We illustrate the performance of the methods using simulations and analyze a mixture model for real data using an algorithm that combines the two approaches.
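For context, here is a minimal sketch of the single-chain, global adaptation baseline this paper starts from, in the style of Haario et al. (2001): the Gaussian proposal covariance is the empirical covariance of the chain's own history, scaled by the classic $2.38^2/d$ factor. This is the baseline whose deficiencies on multimodal and regionally heterogeneous targets motivate the parallel-chain and regional algorithms; it is not the paper's method, and the naive covariance recomputation below replaces the recursive update used in practice.

```python
import numpy as np

def adaptive_metropolis(log_pi, x0, n_steps, eps=1e-6, seed=0):
    """Haario-style adaptive Metropolis sketch: after a short non-adaptive
    phase, the Gaussian proposal covariance is the empirical covariance of
    the chain so far, scaled by the classic 2.38**2 / d factor.  (Real
    implementations update the covariance recursively rather than
    recomputing it, and must control adaptation to preserve ergodicity.)"""
    rng = np.random.default_rng(seed)
    d = len(x0)
    scale = 2.38**2 / d
    chain = np.empty((n_steps + 1, d))
    chain[0] = x0
    lp = log_pi(chain[0])
    for i in range(1, n_steps + 1):
        if i <= 2 * d + 1:                           # initial fixed proposal
            cov = np.eye(d)
        else:                                        # adapt from the history
            cov = scale * np.cov(chain[:i].T) + eps * np.eye(d)
        y = rng.multivariate_normal(chain[i - 1], cov)
        ly = log_pi(y)
        if np.log(rng.random()) < ly - lp:           # Metropolis accept step
            chain[i], lp = y, ly
        else:
            chain[i] = chain[i - 1]
    return chain

# Strongly correlated 2-D Gaussian target: adaptation learns the correlation.
prec = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
chain = adaptive_metropolis(lambda x: -0.5 * x @ prec @ x, np.zeros(2), 5000)
print(chain[2000:].mean(axis=0))   # approx (0, 0)
```

Because the covariance here is a single global object, a chain stuck in one mode adapts only to that mode, which is precisely the failure the parallel-chain and regional strategies above address.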
Synthesizing open worlds with constraints using locally annealed reversible jump mcmc
ACM Transactions on Graphics (TOG), 2012
Cited by 6 (5 self)
Figure 1: The table-chair sets, arm chairs, plants, shelves, and floor lamps in this coffee shop were arranged using our locally annealed reversible jump MCMC sampling method. Users do not need to specify the number of objects beforehand. We present a novel Markov chain Monte Carlo (MCMC) algorithm that generates samples from trans-dimensional distributions encoding complex constraints. We use factor graphs, a type of graphical model, to encode constraints as factors. Our proposed MCMC method, called locally annealed reversible jump MCMC, exploits knowledge of how dimension changes affect the structure of the factor graph. We employ a sequence of annealed distributions during the sampling process, allowing us to explore the state space across different dimensionalities more freely. This approach is motivated by the application of layout synthesis where relationships between objects are characterized as constraints. In particular, our method addresses the challenge of synthesizing open world layouts where the number of objects is not fixed and optimal configurations for different numbers of objects may be drastically different. We demonstrate the applicability of our approach on two open world layout synthesis problems: coffee shops and golf courses.
Bayesian CART – Prior Specification and Posterior Simulation –
, 2006
Cited by 5 (0 self)
We present advances in Bayesian modeling and computation for CART (classification and regression tree) models. The modeling innovations include a formal prior distributional structure for tree generation – the pinball prior – that combines an explicit specification of a distribution for the tree size with one for the tree shape. The core computational innovations involve a novel Metropolis–Hastings method that can dramatically improve the convergence and mixing properties of MCMC methods of Bayesian CART analysis. Earlier MCMC methods have simulated Bayesian CART models using very local MCMC moves, proposing only small changes to a “current” CART model. Our new Metropolis–Hastings move makes large changes in the CART tree, but is at the same time local in that it leaves unchanged the partition of observations into terminal nodes. We evaluate the effectiveness of the proposed algorithm in two examples, one with a constructed data set and one concerning analysis of a published breast cancer data set.
A Theory for Dynamic Weighting in Monte Carlo Computation
, 2001
Cited by 3 (0 self)
This article provides a first theoretical analysis of a new Monte Carlo approach, the dynamic weighting algorithm, proposed recently by Wong and Liang. In dynamic weighting Monte Carlo, one augments the original state space of interest by a weighting factor, which allows the resulting Markov chain to move more freely and to escape from local modes. It uses a new invariance principle to guide the construction of transition rules. We analyze the behavior of the weights resulting from such a process and provide detailed recommendations on how to use these weights properly. Our recommendations are supported by a renewal-theory-type analysis. Our theoretical investigations are further demonstrated by a simulation study and applications in neural network training and Ising model simulations.
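A minimal sketch of one commonly cited dynamic-weighting update, the R-type move, may make the abstract concrete (this is our reading of the Wong–Liang scheme with a parameter θ > 0, stated here as an assumption; the precise transition rules and weight-handling recommendations are in the article itself). The state is augmented with a weight w, and the two branches of the move are balanced so that E[w·f(x)] stays proportional to the target expectation, the invariance principle mentioned above.

```python
import numpy as np

def r_move(x, w, log_pi, step, theta, rng):
    """One R-type dynamic-weighting move (assumed form; theta > 0).

    With a symmetric random-walk proposal the Metropolis ratio reduces to
    r = pi(y) / pi(x).  The two branches are balanced so that, given (x, w),
    E[w' f(x')] = w r f(y) + w f(x), preserving the invariance principle
    E[w f(x)] proportional to E_pi[f(x)].
    """
    y = x + step * rng.standard_normal()
    r = np.exp(log_pi(y) - log_pi(x))          # Metropolis ratio
    a = w * r / (w * r + theta)                # acceptance probability
    if rng.random() < a:
        return y, w * r + theta                # jump: new weight = w * r / a
    return x, w * (w * r + theta) / theta      # stay: new weight = w / (1 - a)

rng = np.random.default_rng(1)
log_pi = lambda z: -0.5 * z**2                 # standard normal target
x, w = 0.0, 1.0
xs, ws = [], []
for _ in range(5000):
    x, w = r_move(x, w, log_pi, step=1.0, theta=1.0, rng=rng)
    xs.append(x)
    ws.append(w)
xs, ws = np.array(xs), np.array(ws)
# Weighted estimate of E[x] (target mean is 0).  The weights are
# heavy-tailed, which is exactly why the article's recommendations on
# using them properly matter.
print(np.average(xs, weights=ws))
```

Note that, unlike Metropolis–Hastings, a rejection here still changes the state: the weight is inflated, recording the "debt" incurred by staying put, and estimates must be weight-averaged rather than simply averaged.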
Markov chain Monte Carlo and related topics
Proceedings of the IX General Assembly, pp. 451–454, 1999