Results 1 - 5 of 5
Simulating Normalizing Constants: From Importance Sampling to Bridge Sampling to Path Sampling
Statistical Science, 13, 163–185.
, 1998
"... Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at ..."
Abstract

Cited by 146 (4 self)
The Art of Data Augmentation
, 2001
"... The term data augmentation refers to methods for constructing iterative optimization or sampling algorithms via the introduction of unobserved data or latent variables. For deterministic algorithms,the method was popularizedin the general statistical community by the seminal article by Dempster, Lai ..."
Abstract

Cited by 22 (3 self)
The term data augmentation refers to methods for constructing iterative optimization or sampling algorithms via the introduction of unobserved data or latent variables. For deterministic algorithms, the method was popularized in the general statistical community by the seminal article by Dempster, Laird, and Rubin on the EM algorithm for maximizing a likelihood function or, more generally, a posterior density. For stochastic algorithms, the method was popularized in the statistical literature by Tanner and Wong's Data Augmentation algorithm for posterior sampling and in the physics literature by Swendsen and Wang's algorithm for sampling from the Ising and Potts models and their generalizations; in the physics literature, the method of data augmentation is referred to as the method of auxiliary variables. Data augmentation schemes were used by Tanner and Wong to make simulation feasible and simple, while auxiliary variables were adopted by Swendsen and Wang to improve the speed of iterative simulation. In general, however, constructing data augmentation schemes that result in both simple and fast algorithms is a matter of art in that successful strategies vary greatly with the (observed-data) models being considered. After an overview of data augmentation/auxiliary variables and some recent developments in methods for constructing such ...
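To make the Tanner-Wong style of data augmentation concrete, here is a minimal sketch (not taken from the article; the mixture model, priors, and variable names are illustrative assumptions) of a Gibbs sampler that alternates between imputing latent component labels and drawing the component means of a two-component normal mixture:

import numpy as np

rng = np.random.default_rng(1)

# Simulated observed data: an equal-weight mixture of N(-2, 1) and N(3, 1).
y = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])

mu = np.array([0.0, 0.0])    # current draw of the two component means
prior_var = 100.0            # N(0, 10^2) prior on each mean
draws = []

for _ in range(2000):
    # I-step (imputation): sample each latent label given the current means.
    p0 = np.exp(-0.5 * (y - mu[0]) ** 2)
    p1 = np.exp(-0.5 * (y - mu[1]) ** 2)
    z = rng.random(y.size) < p1 / (p0 + p1)   # True -> component 1

    # P-step (posterior): draw each mean given the imputed labels.
    for k, mask in enumerate([~z, z]):
        n_k = mask.sum()
        post_var = 1.0 / (n_k + 1.0 / prior_var)
        post_mean = post_var * y[mask].sum()
        mu[k] = rng.normal(post_mean, np.sqrt(post_var))
    draws.append(mu.copy())

# Posterior means of the two components, close to {-2, 3} up to label order.
print(np.mean(draws[500:], axis=0))

The same two-step structure, impute the latent data and then update the parameters given the completed data, is what the EM algorithm performs deterministically, with an expectation and a maximization in place of the two random draws.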
Warp bridge sampling
 J. Comp. Graph. Statist.
, 2002
"... Bridge sampling, a general formulation of the acceptance ratio method in physics for computing freeenergy difference, is an effective Monte Carlo method for computing normalizingconstantsof probabilitymodels. The method was originallyproposedfor cases where the probabilitymodels have overlappingsup ..."
Abstract

Cited by 16 (1 self)
Bridge sampling, a general formulation of the acceptance ratio method in physics for computing free-energy difference, is an effective Monte Carlo method for computing normalizing constants of probability models. The method was originally proposed for cases where the probability models have overlapping support. Voter proposed the idea of shifting physical systems before applying the acceptance ratio method to calculate free-energy differences between systems that are highly separated in a configuration space. The purpose of this article is to push Voter's idea further by applying more general transformations, including stochastic transformations resulting from mixing over transformation groups, to the underlying variables before performing bridge sampling. We term such methods warp bridge sampling to highlight the fact that in addition to location shifting (i.e., centering) one can further reduce the difference/distance between two densities by warping their shapes without changing the normalizing constants. Real-data-based empirical studies using the full-information item factor model and a nonlinear mixed model are provided to demonstrate the potentially substantial gains in Monte Carlo efficiency by going beyond centering and by using efficient bridge sampling estimators. Our general method is also applicable to a couple of recent proposals for computing marginal likelihoods and Bayes factors because these methods turn out to be covered by the general bridge sampling framework.
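As a rough illustration of the centering ("warp-I") idea described above, the following sketch (illustrative only; the densities, sample sizes, and the geometric choice of bridge function are assumptions, not the authors' setup) estimates a known ratio of normalizing constants with and without recentering the draws before bridge sampling:

import numpy as np

rng = np.random.default_rng(0)

# Unnormalized densities: q1 is 3 times a N(5, 1) kernel, q2 is a N(0, 1)
# kernel, so the true ratio of normalizing constants is c1 / c2 = 3.
q1 = lambda x: 3.0 * np.exp(-0.5 * (x - 5.0) ** 2)
q2 = lambda x: np.exp(-0.5 * x ** 2)

x1 = rng.normal(5.0, 1.0, 20_000)   # draws from p1 = q1 / c1
x2 = rng.normal(0.0, 1.0, 20_000)   # draws from p2 = q2 / c2

def bridge_ratio(f1, f2, d1, d2):
    # Geometric bridge alpha = 1 / sqrt(f1 * f2); the estimator is
    # mean[f1(d2) * alpha(d2)] / mean[f2(d1) * alpha(d1)].
    a1 = 1.0 / np.sqrt(f1(d1) * f2(d1))
    a2 = 1.0 / np.sqrt(f1(d2) * f2(d2))
    return np.mean(f1(d2) * a2) / np.mean(f2(d1) * a1)

# Naive bridge: the two densities barely overlap, so the estimate is noisy.
r_naive = bridge_ratio(q1, q2, x1, x2)

# Warp-I: recenter each density (and its draws) at the sample mean; the
# shift changes neither normalizing constant but makes the densities overlap.
m1, m2 = x1.mean(), x2.mean()
q1c = lambda x: q1(x + m1)
q2c = lambda x: q2(x + m2)
r_warp = bridge_ratio(q1c, q2c, x1 - m1, x2 - m2)

print(r_naive, r_warp)   # both target c1 / c2 = 3; the warped one is far more stable

Shifting each set of draws by its sample mean leaves both normalizing constants unchanged but moves the two densities on top of each other, which is exactly the overlap the bridge sampling estimator needs.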
NORMALIZING CONSTANTS
"... Abstract. Computing (ratios of) normalizing constants of probability models is a fundamental computational problem for many statistical and scientific studies. Monte Carlo simulation is an effective technique, especially with complex and highdimensional models. This paper aims to bring to the atten ..."
Abstract
Computing (ratios of) normalizing constants of probability models is a fundamental computational problem for many statistical and scientific studies. Monte Carlo simulation is an effective technique, especially with complex and high-dimensional models. This paper aims to bring to the attention of general statistical audiences some effective methods originating from theoretical physics, and at the same time to explore these methods from a more statistical perspective, through establishing theoretical connections and illustrating their uses with statistical problems. We show that the acceptance ratio method and thermodynamic integration are natural generalizations of importance sampling, which is most familiar to statistical audiences. The former generalizes importance sampling through the use of a single "bridge" density and is thus a case of bridge sampling in the sense of Meng and Wong. Thermodynamic integration, which is also known in the numerical analysis literature as Ogata's method for high-dimensional integration, corresponds to the use of infinitely many and continuously connected bridges (and thus a "path"). Our path sampling formulation offers more flexibility, and thus potential efficiency, to thermodynamic integration, and the search for optimal paths turns out to have close connections with the Jeffreys prior density and the Rao and Hellinger distances between two densities. We provide an informative theoretical example as well as two empirical examples (involving 17- to 70-dimensional integrations) to illustrate the potential and implementation of path sampling. We also discuss some open problems.
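Written out under one set of conventions (unnormalized densities q1 and q2 with normalizing constants c1 and c2, and pi = qi/ci; the notation is an expository assumption, not copied from the paper), the progression the abstract describes corresponds to these standard identities, in LaTeX:

% Importance sampling: draws from p_2 alone.
\frac{c_1}{c_2} = E_{p_2}\!\left[\frac{q_1(\omega)}{q_2(\omega)}\right]

% Bridge sampling: draws from both p_1 and p_2, linked through a single
% bridge function \alpha with sufficient support overlap.
\frac{c_1}{c_2} = \frac{E_{p_2}\!\left[q_1(\omega)\,\alpha(\omega)\right]}{E_{p_1}\!\left[q_2(\omega)\,\alpha(\omega)\right]}

% Path sampling / thermodynamic integration: a continuous path of densities
% q(\omega \mid t), t \in [0,1], with q(\cdot \mid 0) = q_2 and q(\cdot \mid 1) = q_1.
\log\frac{c_1}{c_2} = \int_0^1 E_{t}\!\left[\frac{\partial}{\partial t}\log q(\omega \mid t)\right] dt

Here E_t denotes expectation over omega drawn from the normalized density at parameter t; the regularity conditions, optimal choices of alpha and of the path, and the resulting estimators are developed in the paper itself.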
Estimation of IRT Graded Response Models: Limited Versus Full Information Methods
 DOI: 10.1037/a0015825
"... The performance of parameter estimates and standard errors in estimating F. Samejima’s graded response model was examined across 324 conditions. Full information maximum likelihood (FIML) was compared with a 3stage estimator for categorical item factor analysis (CIFA) when the unweighted least squa ..."
Abstract
The performance of parameter estimates and standard errors in estimating F. Samejima's graded response model was examined across 324 conditions. Full information maximum likelihood (FIML) was compared with a 3-stage estimator for categorical item factor analysis (CIFA) when the unweighted least squares method was used in CIFA's third stage. CIFA is much faster in estimating multidimensional models, particularly with correlated dimensions. Overall, CIFA yields slightly more accurate parameter estimates, and FIML yields slightly more accurate standard errors. Yet, across most conditions, differences between methods are negligible. FIML is the best choice in small sample sizes (200 observations). CIFA is the best choice in larger samples (on computational grounds). Both methods failed in a number of conditions, most of which involved 200 observations, few indicators per dimension, highly skewed items, or low factor loadings. These conditions are to be avoided in applications.