Results 1–10 of 30
Simulating Normalizing Constants: From Importance Sampling to Bridge Sampling to Path Sampling
Statistical Science, 13, 163–185, 1998
Cited by 146 (4 self)

Simulating ratios of normalizing constants via a simple identity: A theoretical exploration
Statistica Sinica, 1996
Cited by 109 (4 self)
Abstract: Let pi(w),i =1, 2, be two densities with common support where each density is known up to a normalizing constant: pi(w) =qi(w)/ci. We have draws from each density (e.g., via Markov chain Monte Carlo), and we want to use these draws to simulate the ratio of the normalizing constants, c1/c2. Such a computational problem is often encountered in likelihood and Bayesian inference, and arises in fields such as physics and genetics. Many methods proposed in statistical and other literature (e.g., computational physics) for dealing with this problem are based on various special cases of the following simple identity: c1 c2 = E2[q1(w)α(w)] E1[q2(w)α(w)]. Here Ei denotes the expectation with respect to pi (i =1, 2), and α is an arbitrary function such that the denominator is nonzero. A main purpose of this paper is to provide a theoretical study of the usefulness of this identity, with focus on (asymptotically) optimal and practical choices of α. Using a simple but informative example, we demonstrate that with sensible (not necessarily optimal) choices of α, we can reduce the simulation error by orders of magnitude when compared to the conventional importance sampling method, which corresponds to α =1/q2. We also introduce several generalizations of this identity for handling more complicated settings (e.g., estimating several ratios simultaneously) and pose several open problems that appear to have practical as well as theoretical value. Furthermore, we discuss related theoretical and empirical work.

Blocking Gibbs Sampling for Linkage Analysis in Large Pedigrees with Many Loops
American Journal of Human Genetics, 1996
Cited by 24 (2 self)
We apply the method of blocking Gibbs sampling to a problem of great importance and complexity: linkage analysis. Blocking Gibbs combines exact local computations with Gibbs sampling in a way that complements the strengths of both. The method is able to handle problems of very high complexity, such as linkage analysis in large pedigrees with many loops, a task that no other known method can handle. New developments of the method are outlined, and it is applied to a highly complex linkage problem.

Fully Bayesian Estimation of Gibbs Hyperparameters for Emission Computed Tomography Data
IEEE Transactions on Medical Imaging, 1997
Cited by 20 (3 self)
In recent years, many investigators have proposed Gibbs prior models to regularize images reconstructed from emission computed tomography data. Unfortunately, hyperparameters used to specify Gibbs priors can greatly influence the degree of regularity imposed by such priors, and as a result, numerous procedures have been proposed to estimate hyperparameter values from observed image data. Many of these procedures attempt to maximize the joint posterior distribution on the image scene. To implement these methods, approximations to the joint posterior densities are required, because the dependence of the Gibbs partition function on the hyperparameter values is unknown. In this paper, we use recent results in Markov Chain Monte Carlo sampling to estimate the relative values of Gibbs partition functions, and using these values, sample from joint posterior distributions on image scenes. This allows for a fully Bayesian procedure which does not fix the hyperparameters at some estimated or spe...

Stochastic approximation algorithms for partition function estimation of Gibbs random fields
IEEE Trans. Inform. Theory, 1997
Cited by 16 (0 self)
We present an analysis of recently proposed Monte Carlo algorithms for estimating the partition function of a Gibbs random field. We show that this problem reduces to estimating one or more expectations of suitable functionals of the Gibbs states with respect to properly chosen Gibbs distributions. As expected, the resulting estimators are consistent. Certain generalizations are also provided. We study computational complexity with respect to grid size and show that Monte Carlo partition function estimation algorithms can be classified into two categories: E-type algorithms, which are of exponential complexity, and P-type algorithms, which are of polynomial complexity, Turing-reducible to the problem of sampling from the Gibbs distribution. E-type algorithms require estimating a single expectation, whereas P-type algorithms require estimating a number of expectations with respect to Gibbs distributions chosen to be sufficiently "close" to each other. In the latter case, the required number of expectations is of polynomial order with respect to grid size. We compare computational complexity using both theoretical results and simulation experiments. We determine the most efficient E-type and P-type algorithms and conclude that P-type algorithms are more appropriate for partition function estimation. We finally suggest a practical and efficient P-type algorithm for this task. Index Terms: computational complexity, Gibbs random fields, importance sampling, Monte Carlo simulations, partition function estimation, stochastic approximation.
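To illustrate the P-type idea of telescoping through "close" Gibbs distributions, here is a toy sketch (an illustrative assumption, not one of the paper's algorithms): the partition function of a tiny 3x3 Ising model is built as a product of ratios Z(b_k)/Z(b_{k-1}), each ratio an expectation under the previous Gibbs distribution, and checked against brute-force enumeration:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

# Tiny 3x3 Ising model (free boundaries): small enough that the exact
# partition function is available by enumerating all 2^9 states.
L = 3
def energy(s):
    g = s.reshape(L, L)
    return -(np.sum(g[:, :-1] * g[:, 1:]) + np.sum(g[:-1, :] * g[1:, :]))

states = np.array(list(product([-1, 1], repeat=L * L)))
E = np.array([energy(s) for s in states], dtype=float)

beta = 0.4
exact_Z = np.sum(np.exp(-beta * E))

# P-type telescoping: Z(b_k)/Z(b_{k-1}) = E_{b_{k-1}}[exp(-(b_k - b_{k-1}) * H)],
# starting from the known Z(0) = 2^(L*L). Exact Gibbs samples are drawn by
# enumeration here; on realistic grid sizes one would use MCMC instead.
betas = np.linspace(0.0, beta, 9)
log_Z = (L * L) * np.log(2.0)
n = 5000
for b_prev, b in zip(betas[:-1], betas[1:]):
    p = np.exp(-b_prev * E)
    p /= p.sum()
    idx = rng.choice(len(E), size=n, p=p)
    log_Z += np.log(np.mean(np.exp(-(b - b_prev) * E[idx])))

print(np.exp(log_Z), exact_Z)
```

Because adjacent temperatures differ by only 0.05, each factor has low-variance weights; taking a single giant step from b = 0 to b = 0.4 would be the E-type analogue and would need far more samples.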

Estimating ratios of normalizing constants using linked importance sampling
2005
Cited by 9 (0 self)
Abstract. Ratios of normalizing constants for two distributions are needed in both Bayesian statistics, where they are used to compare models, and in statistical physics, where they correspond to differences in free energy. Two approaches have long been used to estimate ratios of normalizing constants. The ‘simple importance sampling ’ (SIS) or ‘free energy perturbation ’ method uses a sample drawn from just one of the two distributions. The ‘bridge sampling ’ or ‘acceptance ratio ’ estimate can be viewed as the ratio of two SIS estimates involving a bridge distribution. For both methods, difficult problems must be handled by introducing a sequence of intermediate distributions linking the two distributions of interest, with the final ratio of normalizing constants being estimated by the product of estimates of ratios for adjacent distributions in this sequence. Recently, work by Jarzynski, and independently by Neal, has shown how one can view such a product of estimates, each based on simple importance sampling using a single point, as an SIS estimate on an extended state space. This ‘Annealed Importance Sampling ’ (AIS) method produces an exactly unbiased estimate for the ratio of normalizing constants even when the Markov transitions used do not reach equilibrium. In this paper, I show how a corresponding ‘Linked Importance Sampling ’ (LIS) method can be constructed in which the estimates for individual ratios are similar to bridge sampling estimates. As a further

Nonequilibrium fluctuations in small systems: From physics to biology
Advances in Chemical Physics, 2006
Cited by 9 (1 self)
In this paper I present an overview of several topics related to nonequilibrium fluctuations in small systems. I start with a general discussion of fluctuation theorems and applications to examples drawn from physics and biology: a bead in an optical trap and single-molecule force experiments. Next I present a general discussion of path thermodynamics and consider distributions of work/heat fluctuations as large deviation functions. Then I address the topic of glassy dynamics from the perspective of nonequilibrium fluctuations due to small cooperatively rearranging regions. Finally, I ...

Posterior Distributions on Normalizing Constants
1999
Cited by 5 (0 self)
This article describes a procedure for defining a posterior distribution on the value of a normalizing constant, or a ratio of normalizing constants, using output from Monte Carlo simulation experiments. The resulting posterior distribution provides a simple diagnostic for assessing the adequacy of a simulation experiment for estimating these quantities, and it is particularly useful in cases where standard estimators perform poorly, since in such situations the asymptotic properties of standard diagnostics are unlikely to hold. Keywords: marginal likelihood, partition function, Markov chain Monte Carlo, Ising model, coupling. 1. Introduction. This article describes a simulation-based method for computing a posterior distribution on either a single normalizing constant or a ratio of normalizing constants. The method relies on a coupling argument to define two sequences of Bernoulli random variables whose success probabilities, given the true values of the normalizing constants, are ...

Nested sampling for Potts models
Advances in Neural Information Processing Systems, 2006
Cited by 2 (0 self)
Nested sampling is a new Monte Carlo method by Skilling [1] intended for general Bayesian computation. Nested sampling provides a robust alternative to annealing-based methods for computing normalizing constants. It can also generate estimates of other quantities such as posterior expectations. The key technical requirement is the ability to draw samples uniformly from the prior, subject to a constraint on the likelihood. We provide a demonstration with the Potts model, an undirected graphical model.
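A bare-bones illustration of the nested sampling recursion (the one-dimensional toy problem and rejection-based constrained prior sampling are illustrative assumptions; Skilling's method and the Potts application are far more elaborate):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy problem: prior Uniform(-5, 5), likelihood L(x) = exp(-x^2/2).
# Evidence Z = (1/10) * integral of exp(-x^2/2) over [-5, 5],
# which is essentially sqrt(2*pi)/10.
loglike = lambda x: -x**2 / 2

n_live, n_iter = 500, 2500
live = rng.uniform(-5, 5, n_live)
logL = loglike(live)

z, x_prev = 0.0, 1.0  # accumulated evidence; remaining prior mass
for i in range(n_iter):
    worst = np.argmin(logL)
    x_i = np.exp(-(i + 1) / n_live)          # expected prior-mass shrinkage
    z += np.exp(logL[worst]) * (x_prev - x_i)
    x_prev = x_i
    # Replace the worst point by a prior draw constrained to higher
    # likelihood (plain rejection sampling -- adequate for this toy).
    l_min = logL[worst]
    while True:
        cand = rng.uniform(-5, 5)
        if loglike(cand) > l_min:
            live[worst], logL[worst] = cand, loglike(cand)
            break

z += np.mean(np.exp(logL)) * x_prev          # contribution of final live set
print(z, np.sqrt(2 * np.pi) / 10)
```

The constrained-prior draw in the inner loop is exactly the "key technical requirement" the abstract names; for Potts models it is the hard part, and rejection sampling would be hopeless there.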