Results 1–10 of 29
Simulating Normalizing Constants: From Importance Sampling to Bridge Sampling to Path Sampling
 Statistical Science, 13, 163–185.
, 1998
Abstract

Cited by 145 (4 self)
Bayesian Model Assessment In Factor Analysis
, 2004
Abstract

Cited by 56 (8 self)
Factor analysis has been one of the most powerful and flexible tools for assessment of multivariate dependence and codependence. Loosely speaking, it could be argued that the origin of its success rests in its very exploratory nature, where various kinds of data relationships amongst the variables under study can be iteratively verified and/or refuted. Bayesian inference in factor analytic models has received renewed attention in recent years, partly due to computational advances but also partly to applied focuses generating factor structures as exemplified by recent work in financial time series modeling. The focus of our current work is on exploring questions of uncertainty about the number of latent factors in a multivariate factor model, combined with methodological and computational issues of model specification and model fitting. We explore reversible jump MCMC methods that build on sets of parallel Gibbs sampling-based analyses to generate suitable empirical proposal distributions and that address the challenging problem of finding efficient proposals in high-dimensional models. Alternative MCMC methods based on bridge sampling are discussed, and these fully Bayesian MCMC approaches are compared with a collection of popular model selection methods in empirical studies.
Inference for Deterministic Simulation Models: The Bayesian Melding Approach
 Journal of the American Statistical Association
, 2000
Abstract

Cited by 25 (4 self)
Deterministic simulation models are used in many areas of science, engineering and policymaking. Typically, they are complex models that attempt to capture underlying mechanisms in considerable detail, and they have many user-specified inputs. The inputs are often specified by some form of trial-and-error approach in which plausible values are postulated, the corresponding outputs inspected, and the inputs modified until plausible outputs are obtained. Here we address the issue of more formal inference for such models. Raftery et al. (1995a) proposed the Bayesian synthesis approach in which the available information about both inputs and outputs was encoded in a probability distribution and inference was made by restricting this distribution to the submanifold specified by the model. Wolpert (1995) showed that this is subject to the Borel paradox, according to which the results can depend on the parameterization of the model. We show that this dependence is due to the presence of a prior on the outputs. We propose a modified approach, called Bayesian melding, which takes full account of information and uncertainty about both inputs and outputs to the model, while avoiding the Borel paradox. This is done by recognizing the existence of two priors, one implicit and one explicit, on each input and output; these are combined via logarithmic pooling. Bayesian melding is then
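The logarithmic pooling step this abstract mentions can be illustrated with a small sketch. This is a hypothetical toy example, not the paper's implementation: for two Gaussian densities, the log-pooled density proportional to q1^alpha * q2^(1-alpha) is again Gaussian, with precision-weighted mean and summed (weighted) precisions, which the code checks numerically.

```python
import numpy as np

def log_pool_gaussians(m1, s1, m2, s2, alpha):
    """Logarithmic pool of N(m1, s1^2) and N(m2, s2^2): the pooled
    density is proportional to q1**alpha * q2**(1 - alpha), which is
    again Gaussian with precision-weighted mean and precision."""
    prec = alpha / s1**2 + (1 - alpha) / s2**2
    mean = (alpha * m1 / s1**2 + (1 - alpha) * m2 / s2**2) / prec
    return mean, np.sqrt(1.0 / prec)

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Check the closed form against a brute-force pool on a fine grid.
m1, s1, m2, s2, alpha = 0.0, 1.0, 2.0, 0.5, 0.3
x = np.linspace(-6.0, 6.0, 20001)
dx = x[1] - x[0]
pooled = normal_pdf(x, m1, s1) ** alpha * normal_pdf(x, m2, s2) ** (1 - alpha)
pooled /= pooled.sum() * dx                      # renormalize numerically
num_mean = (x * pooled).sum() * dx
num_sd = np.sqrt(((x - num_mean) ** 2 * pooled).sum() * dx)
an_mean, an_sd = log_pool_gaussians(m1, s1, m2, s2, alpha)
```

The Gaussian case is convenient because the pool stays in the same family; for general densities the pooled form must be renormalized, as the grid version does.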
Estimating the integrated likelihood via posterior simulation using the harmonic mean identity
 Bayesian Statistics
, 2007
Abstract

Cited by 23 (2 self)
The integrated likelihood (also called the marginal likelihood or the normalizing constant) is a central quantity in Bayesian model selection and model averaging. It is defined as the integral over the parameter space of the likelihood times the prior density. The Bayes factor for model comparison and Bayesian testing is a ratio of integrated likelihoods, and the model weights in Bayesian model averaging are proportional to the integrated likelihoods. We consider the estimation of the integrated likelihood from posterior simulation output, aiming at a generic method that uses only the likelihoods from the posterior simulation iterations. The key is the harmonic mean identity, which says that the reciprocal of the integrated likelihood is equal to the posterior harmonic mean of the likelihood. The simplest estimator based on the identity is thus the harmonic mean of the likelihoods. While this is an unbiased and simulation-consistent estimator, its reciprocal can have infinite variance and so it is unstable in general. We describe two methods for stabilizing the harmonic mean estimator. In the first one, the parameter space is reduced in such a way that the modified estimator involves a harmonic mean of heavier-tailed densities, thus resulting in a finite variance estimator. The resulting
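The harmonic mean identity itself is easy to demonstrate on a conjugate toy model where the integrated likelihood is known in closed form. This sketch (my own illustration, not the paper's stabilized estimators) uses a prior chosen so that the raw estimator happens to have finite variance; as the abstract notes, that is not true in general.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model: y ~ N(theta, 1), prior theta ~ N(0, 0.5).
# The integrated likelihood is available in closed form:
# y ~ N(0, 1.5), so the estimator can be checked exactly.
y = 1.0
true_ml = np.exp(-y**2 / 3) / np.sqrt(2 * np.pi * 1.5)

# Exact posterior is N(y/3, 1/3); draw from it directly
# (in practice these draws would come from MCMC output).
theta = rng.normal(y / 3, np.sqrt(1 / 3), size=200_000)

# Harmonic mean identity: 1/p(y) = E_post[1 / L(theta)].
# Average the reciprocal likelihoods in log space for stability.
loglik = -0.5 * np.log(2 * np.pi) - 0.5 * (y - theta) ** 2
log_inv_ml = np.logaddexp.reduce(-loglik) - np.log(loglik.size)
hm_estimate = np.exp(-log_inv_ml)
```

Working in log space with `np.logaddexp.reduce` avoids overflow when the reciprocal likelihoods span many orders of magnitude, which is the typical situation in real models.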
Inference in model-based cluster analysis
, 1995
Abstract

Cited by 22 (7 self)
A new approach to cluster analysis has been introduced based on parsimonious geometric modelling of the within-group covariance matrices in a mixture of multivariate normal distributions, using hierarchical agglomeration and iterative relocation. It works well and is widely used via the MCLUST software available in S-PLUS and StatLib. However, it has several limitations: there is no assessment of the uncertainty about the classification, the partition can be suboptimal, parameter estimates are biased, the shape matrix has to be specified by the user, prior group probabilities are assumed to be equal, the method for choosing the number of groups is based on a crude approximation, and no formal way of choosing between the various possible models is included. Here, we propose a new approach which overcomes all these difficulties. It consists of exact Bayesian inference via Gibbs sampling, and the calculation of Bayes factors (for choosing the model and the number of groups) from the output using the Laplace-Metropolis estimator. It works well in several real and simulated examples.
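The Laplace-Metropolis estimator mentioned above plugs posterior mode and covariance estimates taken from the simulation output into the Laplace approximation, log p(y) ≈ (d/2) log 2π + (1/2) log|Σ| + log L(θ*) + log π(θ*). A minimal sketch on a conjugate toy model (my own illustration, not the clustering model of the paper), where the Gaussian posterior makes the approximation exact up to Monte Carlo error:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: y ~ N(theta, 1), prior theta ~ N(0, 1).
# True log integrated likelihood: y ~ N(0, 2).
y = 1.0
log_true = -0.5 * np.log(4 * np.pi) - y**2 / 4

# Posterior is N(y/2, 1/2); these draws stand in for MCMC output.
draws = rng.normal(y / 2, np.sqrt(0.5), size=50_000)

def log_lik(t):    # log N(y; t, 1)
    return -0.5 * np.log(2 * np.pi) - 0.5 * (y - t) ** 2

def log_prior(t):  # log N(t; 0, 1)
    return -0.5 * np.log(2 * np.pi) - 0.5 * t**2

# Laplace-Metropolis: estimate the posterior mode and covariance
# from the draws and plug them into the Laplace approximation.
d = 1
theta_hat = np.mean(draws)      # mode estimate (posterior is symmetric)
sigma2_hat = np.var(draws)      # |Sigma| estimate for d = 1
log_ml = (d / 2) * np.log(2 * np.pi) + 0.5 * np.log(sigma2_hat) \
         + log_lik(theta_hat) + log_prior(theta_hat)
```

For a non-Gaussian posterior the estimator is only approximate, and robust location and scale estimates of the mode and covariance are usually preferred over the raw mean and variance.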
Behavior in a Dynamic Decision Problem: An Analysis of Experimental Evidence Using a Bayesian Type Classification Algorithm
 Econometrica
, 2004
Abstract

Cited by 17 (3 self)
Different people may use different strategies, or decision rules, when solving complex decision problems. We provide a new Bayesian procedure for drawing inferences about the nature and number of decision rules present in a population, and use it to analyze the behaviors of laboratory subjects confronted with a difficult dynamic stochastic decision problem. Subjects practiced before playing for money. Based on money round decisions, our procedure classifies subjects into three types, which we label “Near Rational,” “Fatalist,” and “Confused.” There is clear evidence of continuity in subjects’ behaviors between the practice and money rounds: types who performed best in practice also tended to perform best when playing for money. However, the agreement between practice and money play is far from perfect. The divergences appear to be well explained by a combination of type switching (due to learning and/or increased effort in money play) and errors in our probabilistic type assignments.
Computing Normalizing Constants for Finite Mixture Models via Incremental Mixture Importance Sampling (IMIS)
, 2003
Abstract

Cited by 14 (5 self)
We propose a method for approximating integrated likelihoods in finite mixture models. We formulate the model in terms of the unobserved group memberships, z, and make them the variables of integration. The integral is then evaluated using importance sampling over the z. We propose an adaptive importance sampling function which is itself a mixture, with two types of component distributions, one concentrated and one diffuse. The more concentrated type of component serves the usual purpose of an importance sampling function, sampling mostly group assignments of high posterior probability. The less concentrated type of component allows for the importance sampling function to explore the space in a controlled way to find other, unvisited assignments with high posterior probability. Components are added adaptively, one at a time, to cover areas of high posterior probability not well covered by the current importance sampling function. The method is called Incremental Mixture Importance Sampling (IMIS). IMIS is easy to implement and to monitor for convergence. It scales easily for higher-dimensional
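The incremental-mixture idea can be sketched generically: start from a diffuse proposal, then repeatedly add a concentrated component centered where the current importance weights are largest. The following is a hypothetical one-dimensional toy (a continuous target, not the group-membership integral of the paper), estimating a known normalizing constant:

```python
import numpy as np

rng = np.random.default_rng(2)

# Unnormalized bimodal target; the true normalizing constant is
# Z = 2 * sqrt(2*pi), since each Gaussian bump integrates to sqrt(2*pi).
def p_unnorm(x):
    return np.exp(-0.5 * (x - 3) ** 2) + np.exp(-0.5 * (x + 3) ** 2)

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Start from one diffuse component; incrementally add a concentrated
# component at the point with the largest importance weight, i.e. the
# region worst covered by the current mixture.
components = [(0.0, 5.0)]            # (mean, sd), equal mixture weights

def q(x):
    return sum(normal_pdf(x, m, s) for m, s in components) / len(components)

for _ in range(4):
    x = np.concatenate([rng.normal(m, s, 2000) for m, s in components])
    w = p_unnorm(x) / q(x)
    components.append((x[np.argmax(w)], 1.0))

# Final estimate: the mean importance weight is unbiased for Z.
x = np.concatenate([rng.normal(m, s, 4000) for m, s in components])
z_hat = np.mean(p_unnorm(x) / q(x))
```

Each added component flattens the largest weights, so the weight distribution, and hence the variance of the estimate, improves as the mixture grows.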
Bayes Factors and BIC: Comment on “A Critique of the Bayesian Information Criterion for Model Selection”
, 1999
Abstract

Cited by 12 (5 self)
I would like to thank David L. Weakliem (1999 [this issue]) for a thought-provoking discussion of the basis of the Bayesian information criterion (BIC). We may be in closer agreement than one might think from reading his article. When writing about Bayesian model selection for social researchers, I focused on the BIC approximation on the grounds that it is easily implemented and often reasonable, and simplifies the exposition of an already technical topic. As Weakliem says, BIC corresponds to one of many possible priors, although I will argue that this prior is such as to make BIC appropriate for baseline reference use and reporting, albeit not necessarily always appropriate for drawing final conclusions. When writing about the same subject for statistical journals, however, I have paid considerable attention to the choice of priors for Bayes factors. I thank Weakliem for bringing this subtle but important topic to the attention of sociologists. In 1986, I proposed replacing P values by Bayes factors as the basis for hypothesis testing and model selection in social research, and I suggested BIC as a simple and convenient, albeit crude, approximation. Since then, a great deal has been learned about Bayes factors in general, and about BIC in particular. Weakliem seems to agree that the Bayes factor framework is a useful one for hypothesis testing and model selection; his concern is with how the Bayes factors are to be evaluated. Weakliem makes two main points about the BIC approximation. The first is that BIC yields an approximation to Bayes factors that corresponds closely to a particular prior (the unit information prior) on
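The BIC approximation under discussion is BIC = d log n − 2 log L̂, with twice the log Bayes factor of M1 over M0 approximated by BIC0 − BIC1. A minimal sketch (my own hypothetical example, not from the comment) comparing a fixed-mean and a fitted-mean Gaussian model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data with a genuine mean shift, so M1 should win.
n = 200
y = rng.normal(0.5, 1.0, n)

def max_loglik(y, mean):
    """Gaussian log likelihood at the MLE of sigma^2, for a given mean."""
    s2 = np.mean((y - mean) ** 2)
    return -0.5 * y.size * (np.log(2 * np.pi * s2) + 1.0)

# M0: mean fixed at 0 (one free parameter, sigma^2).
# M1: mean estimated   (two free parameters, mu and sigma^2).
bic0 = 1 * np.log(n) - 2 * max_loglik(y, 0.0)
bic1 = 2 * np.log(n) - 2 * max_loglik(y, np.mean(y))

# BIC approximation to twice the log Bayes factor of M1 over M0.
approx_2logBF = bic0 - bic1
```

The extra `log n` penalty per parameter is exactly where the unit information prior enters, which is the point Weakliem's critique and this comment debate.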