Results 1–10 of 40
Simulating Normalizing Constants: From Importance Sampling to Bridge Sampling to Path Sampling
, 1998
Abstract

Cited by 183 (4 self)
Computing (ratios of) normalizing constants of probability models is a fundamental computational problem for many statistical and scientific studies. Monte Carlo simulation is an effective technique, especially with complex and high-dimensional models. This paper aims to bring to the attention of general statistical audiences some effective methods originating from theoretical physics and, at the same time, to explore these methods from a more statistical perspective, through establishing theoretical connections and illustrating their uses with statistical problems. We show that the acceptance ratio method and thermodynamic integration are natural generalizations of importance sampling, which is most familiar to statistical audiences. The former generalizes importance sampling through the use of a single "bridge" density and is thus a case of bridge sampling in the sense of Meng and Wong. Thermodynamic integration, which is also known in the numerical analysis literature as Ogata's method for high-dimensional integration, corresponds to the use of infinitely many and continuously connected bridges (and thus a "path"). Our path sampling formulation offers more flexibility, and thus potential efficiency, to thermodynamic integration, and the search for optimal paths turns out to have close connections with the Jeffreys prior density and the Rao and Hellinger distances between two densities. We provide an informative theoretical example as well as two empirical examples (involving 17- to 70-dimensional integrations) to illustrate the potential and implementation of path sampling. We also discuss some open problems.
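The thermodynamic-integration idea described in this abstract can be checked in a few lines. The toy below is an illustration, not one of the paper's examples: it connects two mean-zero Gaussian kernels q_t(w) = exp(-w^2 / (2 v_t)) along the linear variance path v_t = (1-t) v0 + t v1, so the path-sampling identity log(c1/c0) = integral over t of E_t[d/dt log q_t(w)] has the closed-form answer (1/2) log(v1/v0). Exact draws stand in for the MCMC output one would use in practice.

```python
import math
import random

def path_sampling_log_ratio(v0, v1, n_grid=50, n_draws=2000, seed=1):
    """Estimate log(c1/c0) for q_t(w) = exp(-w^2 / (2 v_t)),
    v_t = (1-t) v0 + t v1, by thermodynamic integration over t."""
    rng = random.Random(seed)
    ts = [i / (n_grid - 1) for i in range(n_grid)]
    means = []
    for t in ts:
        v_t = (1 - t) * v0 + t * v1
        acc = 0.0
        for _ in range(n_draws):
            # Exact draws from p_t stand in for MCMC output.
            w = rng.gauss(0.0, math.sqrt(v_t))
            # d/dt log q_t(w) = w^2 (v1 - v0) / (2 v_t^2)
            acc += w * w * (v1 - v0) / (2 * v_t * v_t)
        means.append(acc / n_draws)
    # Trapezoidal rule over the path parameter t.
    h = ts[1] - ts[0]
    return h * (sum(means) - 0.5 * (means[0] + means[-1]))

est = path_sampling_log_ratio(1.0, 4.0)
exact = 0.5 * math.log(4.0)   # since c_t = sqrt(2 * pi * v_t)
```

The grid over t plays the role of the "infinitely many bridges": refining n_grid shrinks the discretization bias, while n_draws controls the Monte Carlo error at each rung.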
Simulating ratios of normalizing constants via a simple identity: A theoretical exploration
 Statistica Sinica
, 1996
Abstract

Cited by 153 (3 self)
Abstract: Let p_i(w), i = 1, 2, be two densities with common support where each density is known up to a normalizing constant: p_i(w) = q_i(w)/c_i. We have draws from each density (e.g., via Markov chain Monte Carlo), and we want to use these draws to simulate the ratio of the normalizing constants, c1/c2. Such a computational problem is often encountered in likelihood and Bayesian inference, and arises in fields such as physics and genetics. Many methods proposed in the statistical and other literature (e.g., computational physics) for dealing with this problem are based on various special cases of the following simple identity: c1/c2 = E2[q1(w)α(w)] / E1[q2(w)α(w)]. Here E_i denotes the expectation with respect to p_i (i = 1, 2), and α is an arbitrary function such that the denominator is nonzero. A main purpose of this paper is to provide a theoretical study of the usefulness of this identity, with focus on (asymptotically) optimal and practical choices of α. Using a simple but informative example, we demonstrate that with sensible (not necessarily optimal) choices of α, we can reduce the simulation error by orders of magnitude when compared to the conventional importance sampling method, which corresponds to α = 1/q2. We also introduce several generalizations of this identity for handling more complicated settings (e.g., estimating several ratios simultaneously) and pose several open problems that appear to have practical as well as theoretical value. Furthermore, we discuss related theoretical and empirical work.
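The identity is easy to exercise numerically. In the sketch below (an illustrative toy, not taken from the paper) q1 and q2 are unnormalized N(0,1) and N(0,4) kernels, so the true ratio is c1/c2 = sqrt(2π)/sqrt(8π) = 0.5, and α is the geometric bridge 1/sqrt(q1 q2), one sensible but not optimal choice.

```python
import math
import random

def q1(w): return math.exp(-w * w / 2.0)   # N(0,1) kernel, c1 = sqrt(2*pi)
def q2(w): return math.exp(-w * w / 8.0)   # N(0,4) kernel, c2 = sqrt(8*pi)

def alpha(w):
    # Geometric bridge: a sensible (not optimal) choice of alpha.
    return 1.0 / math.sqrt(q1(w) * q2(w))

rng = random.Random(0)
n = 20000
draws1 = [rng.gauss(0.0, 1.0) for _ in range(n)]   # draws from p1
draws2 = [rng.gauss(0.0, 2.0) for _ in range(n)]   # draws from p2

num = sum(q1(w) * alpha(w) for w in draws2) / n    # estimates E2[q1 * alpha]
den = sum(q2(w) * alpha(w) for w in draws1) / n    # estimates E1[q2 * alpha]
ratio = num / den                                  # estimates c1/c2 = 0.5
```

Setting alpha to 1/q2 instead recovers plain importance sampling, which is the comparison point the abstract refers to.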
An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants
 Biometrika
, 2006
Abstract

Cited by 81 (2 self)
Maximum likelihood parameter estimation and sampling from Bayesian posterior distributions are problematic when the probability density for the parameter of interest involves an intractable normalising constant which is also a function of that parameter. In this paper, an auxiliary variable method is presented which requires only that independent samples can be drawn from the unnormalised density at any particular parameter value. The proposal distribution is constructed so that the normalising constant cancels from the Metropolis–Hastings ratio. The method is illustrated by producing posterior samples for parameters of the Ising model given a particular lattice realisation.
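The cancellation can be seen in a runnable toy. The sketch below is not the paper's exact construction: it uses the closely related exchange-algorithm form of the auxiliary-variable idea (Murray et al.), and substitutes a Poisson model for the Ising model. The likelihood kernel θ^y / y! is treated as if its per-observation normalizing constant exp(θ) were intractable; an auxiliary data set drawn from the model at the proposed parameter makes that constant cancel from the Metropolis–Hastings ratio.

```python
import math
import random

rng = random.Random(42)

def sample_poisson(lam):
    # Knuth's method. Exact sampling from the model is all the
    # algorithm needs; the constant never enters the MH ratio.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

y = [3, 5, 4, 6, 2]                        # observed data, sum = 20
n = len(y)

def log_q(data, theta):
    # Unnormalized likelihood kernel prod theta^{d}/d!; the factor
    # exp(-n*theta) plays the "intractable" normalizing constant.
    return sum(d * math.log(theta) - math.lgamma(d + 1) for d in data)

def log_prior(theta):                      # Exponential(1) prior
    return -theta

theta, chain = 1.0, []
for it in range(20000):
    prop = theta + rng.gauss(0.0, 0.5)     # random-walk proposal
    if prop > 0:
        x = [sample_poisson(prop) for _ in range(n)]   # auxiliary data
        log_r = (log_q(y, prop) + log_prior(prop) + log_q(x, theta)
                 - log_q(y, theta) - log_prior(theta) - log_q(x, prop))
        if math.log(rng.random()) < log_r:
            theta = prop
    chain.append(theta)

post_mean = sum(chain[4000:]) / len(chain[4000:])
# Conjugate check: the posterior is Gamma(21, 6), mean 3.5.
```

The conjugate Gamma posterior is available here only because the toy is deliberately simple; for a genuine Ising lattice, exact model simulation (e.g., by perfect sampling) is the hard requirement the abstract mentions.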
Monte Carlo Estimation of Bayesian Credible and HPD Intervals
 Journal of Computational and Graphical Statistics
, 1998
Abstract

Cited by 42 (4 self)
This paper considers how to estimate Bayesian credible and highest probability density (HPD) intervals for parameters of interest and provides a simple Monte Carlo approach to approximate these Bayesian intervals when a sample of the relevant parameters can be generated from their respective marginal posterior distribution using a Markov chain Monte Carlo (MCMC) sampling algorithm. We also develop a Monte Carlo method to compute HPD intervals for the parameters of interest from the desired posterior distribution using a sample from an importance sampling distribution. We apply our methodology to a Bayesian hierarchical model that has a posterior density containing analytically intractable integrals that depend on the (hyper) parameters. We further show that our methods are useful not only for calculating the HPD intervals for the parameters of interest but also for computing the HPD intervals for functions of the parameters. Necessary theory is developed and illustrative examples including a simulation study are given. Key Words: Bayesian computation; Markov chain Monte Carlo; Monte Carlo methods; Posterior distribution; Simulation.
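A minimal version of the MCMC-sample recipe can be sketched as follows; this illustrates the general sorted-sample idea, not the paper's full algorithm. Sort the draws and take the shortest window containing the desired fraction of them. The exponential target is an assumption chosen because its HPD interval visibly differs from the equal-tail credible interval and has a known answer.

```python
import random

def hpd_interval(sample, cred=0.95):
    """Shortest interval containing a `cred` fraction of the draws:
    sort, slide a window of m = floor(cred * n) draws, keep the shortest."""
    s = sorted(sample)
    n = len(s)
    m = int(cred * n)
    best = min(range(n - m), key=lambda j: s[j + m] - s[j])
    return s[best], s[best + m]

rng = random.Random(0)
# A right-skewed "posterior" sample (Exp(1)), where the HPD interval
# differs from the equal-tail interval (which would cut off both tails).
draws = [rng.expovariate(1.0) for _ in range(20000)]
lo, hi = hpd_interval(draws, 0.95)
# For Exp(1) the 95% HPD interval is [0, -log(0.05)], about [0, 3.0].
```

The same windowing applies unchanged to draws of any scalar function of the parameters, which is the "functions of the parameters" use case mentioned above.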
Semiparametric Bayesian Analysis Of Survival Data
 Journal of the American Statistical Association
, 1996
Abstract

Cited by 35 (1 self)
this paper are motivated and aimed at analyzing some common types of survival data from different medical studies. We will center our attention on the following topics.
Bayesian Variable Selection for Proportional Hazards Models
, 1996
Abstract

Cited by 17 (1 self)
The authors consider the problem of Bayesian variable selection for proportional hazards regression models with right-censored data. They propose a semiparametric approach in which a nonparametric prior is specified for the baseline hazard rate and a fully parametric prior is specified for the regression coefficients. For the baseline hazard, they use a discrete gamma process prior, and for the regression coefficients and the model space, they propose a semiautomatic parametric informative prior specification that focuses on the observables rather than the parameters. To implement the methodology, they propose a Markov chain Monte Carlo method to compute the posterior model probabilities. Examples using simulated and real data are given to demonstrate the methodology.
Estimating Ratios of Normalizing Constants for Densities With Different Dimensions
 Statistica Sinica
, 1997
Abstract

Cited by 15 (2 self)
Abstract: In Bayesian inference, a Bayes factor is defined as the ratio of posterior odds to prior odds, where the posterior odds is simply a ratio of the normalizing constants of two posterior densities. In many practical problems, the two posteriors have different dimensions. For such cases, current Monte Carlo methods such as the bridge sampling method (Meng and Wong (1996)), the path sampling method (Gelman and Meng (1994)), and the ratio importance sampling method (Chen and Shao (1997)) cannot be directly applied. In this article, we extend importance sampling, bridge sampling, and ratio importance sampling to problems of different dimensions. Then we find globally optimal importance sampling, bridge sampling, and ratio importance sampling in the sense of minimizing asymptotic relative mean-square errors of the estimators. Implementation algorithms, which can asymptotically achieve the optimal simulation errors, are developed and two illustrative examples are also provided.
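Ratio importance sampling in its equal-dimension form can be sketched briefly; the paper's contribution, the extension to differing dimensions, is not attempted here. A single importance density π with heavier tails than both kernels gives c1/c2 = E_π[q1/π] / E_π[q2/π]. The Gaussian kernels below are an illustrative assumption with known constants.

```python
import math
import random

def q1(w): return math.exp(-w * w / 2.0)   # c1 = sqrt(2*pi)
def q2(w): return math.exp(-w * w / 8.0)   # c2 = sqrt(8*pi), so c1/c2 = 0.5

def pi_pdf(w):
    # Importance density N(0, 9): heavier-tailed than both targets.
    return math.exp(-w * w / 18.0) / math.sqrt(18.0 * math.pi)

rng = random.Random(3)
draws = [rng.gauss(0.0, 3.0) for _ in range(50000)]   # one sampler serves both
num = sum(q1(w) / pi_pdf(w) for w in draws) / len(draws)   # estimates c1
den = sum(q2(w) / pi_pdf(w) for w in draws) / len(draws)   # estimates c2
ratio = num / den                                          # estimates c1/c2
```

Because both expectations share the same draws, their errors are positively correlated and partially cancel in the ratio, which is one reason this estimator can beat two independent importance-sampling runs.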
Contemplating evidence: properties, extensions of, and alternatives to nested sampling
, 2007
Abstract

Cited by 14 (12 self)
Nested sampling is a novel simulation method for approximating marginal likelihoods, proposed by Skilling (2007a,b). We establish that nested sampling leads to an error that vanishes at the standard Monte Carlo rate N^(-1/2), where N is a tuning parameter proportional to the computational effort, and that this error is asymptotically Gaussian. We show that the corresponding asymptotic variance typically grows linearly with the dimension of the parameter. We use these results to discuss the applicability and efficiency of nested sampling in realistic problems, including posterior distributions for mixtures. We propose an extension of nested sampling that makes it possible to avoid resorting to MCMC to obtain the simulated points. We also study two alternative methods for computing the marginal likelihood which, in contrast with nested sampling, are based on draws from the posterior distribution, and we conduct a comparison with nested sampling on several realistic examples.
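The nested sampling recursion itself is short. The one-dimensional toy below is an assumption for illustration (real applications need constrained MCMC, which is exactly the step the abstract's extension seeks to avoid; plain rejection suffices here): it integrates a Gaussian likelihood bump against a Uniform(0,1) prior, using the deterministic shrinkage approximation X_i ≈ exp(-i/N) for the enclosed prior mass, and compares the evidence estimate with the near-exact value 0.1·sqrt(2π) ≈ 0.2507.

```python
import math
import random

rng = random.Random(7)

def loglike(t):
    # Gaussian likelihood bump inside a Uniform(0,1) prior.
    return -((t - 0.5) ** 2) / (2 * 0.1 ** 2)

N = 200                                    # number of live points
live = [rng.random() for _ in range(N)]
logL = [loglike(t) for t in live]

Z, X_prev = 0.0, 1.0
for i in range(1, 1501):
    j = min(range(N), key=lambda k: logL[k])   # worst live point
    X = math.exp(-i / N)                   # deterministic shrinkage approx.
    Z += math.exp(logL[j]) * (X_prev - X)  # weight = prior-mass shell
    X_prev = X
    # Replace the worst point by a prior draw inside the likelihood
    # constraint (plain rejection; fine for this one-dimensional toy).
    while True:
        t = rng.random()
        if loglike(t) > logL[j]:
            live[j], logL[j] = t, loglike(t)
            break

# Add the contribution of the remaining live points.
Z += X_prev * sum(math.exp(l) for l in logL) / N
# Truth: integral of exp(loglike) over (0,1) is about 0.2507.
```

The N^(-1/2) error rate discussed in the abstract shows up here as the spread of Z across seeds shrinking as the number of live points N grows.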
Computational advances for and from Bayesian analysis
 STATIST. SCI
, 2004
Abstract

Cited by 13 (0 self)
The emergence in recent years of Bayesian analysis, in many methodological and applied fields, as a solution for modeling complex problems cannot be dissociated from major changes in its computational implementation. We show in this review how advances in Bayesian analysis and statistical computation are intermingled.