Results 11 – 20 of 37
A Bayesian Approach to Diffusion Models of Decision-Making and Response Time, NIPS
, 2006
Abstract

Cited by 7 (2 self)
We present a computational Bayesian approach for Wiener diffusion models, which are prominent accounts of response time distributions in decision-making. We first develop a general closed-form analytic approximation to the response time distributions for one-dimensional diffusion processes, and derive the required Wiener diffusion as a special case. We use this result to undertake Bayesian modeling of benchmark data, using posterior sampling to draw inferences about the psychologically interesting parameters. With the aid of the benchmark data, we show the Bayesian account has several advantages, including dealing naturally with the parameter variation needed to account for some key features of the data, and providing quantitative measures to guide decisions about model construction.
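The Wiener diffusion process behind this model can be illustrated with a minimal Euler-Maruyama simulation (a sketch with illustrative parameter names, not the paper's closed-form approximation or code): evidence drifts between two absorbing boundaries, and the first-passage time is the response time.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(v=0.3, a=1.0, z=0.5, dt=1e-3, n=2000, t_max=10.0):
    """Euler-Maruyama simulation of a Wiener diffusion decision process:
    evidence starts at z, drifts at rate v with unit diffusion, and a
    response occurs when it first crosses 0 (lower) or a (upper).
    Walks still active at t_max are truncated (a simplification)."""
    x = np.full(n, z)
    t = np.zeros(n)
    active = np.ones(n, dtype=bool)
    while active.any():
        idx = np.flatnonzero(active)
        x[idx] += v * dt + np.sqrt(dt) * rng.normal(size=idx.size)
        t[idx] += dt
        active[idx] = (x[idx] > 0.0) & (x[idx] < a) & (t[idx] < t_max)
    return t, (x >= a).astype(int)

rts, choices = simulate_ddm()

# Analytic check: probability of absorbing at the upper boundary is
# P(upper) = (1 - exp(-2*v*z)) / (1 - exp(-2*v*a)).
p_upper = (1 - np.exp(-2 * 0.3 * 0.5)) / (1 - np.exp(-2 * 0.3 * 1.0))
```

The empirical choice proportion should sit near the analytic `p_upper`, and the `rts` array is a sample from the (discretized) response time distribution the paper approximates analytically.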
Easy Estimation of Normalizing Constants and Bayes Factors from Posterior Simulation: Stabilizing the Harmonic Mean Estimator
, 2000
Abstract

Cited by 5 (0 self)
The Bayes factor is a useful summary for model selection. Calculation of this measure involves evaluating the integrated likelihood (or prior predictive density), which can be estimated from the output of MCMC and other posterior simulation methods using the harmonic mean estimator. While this is a simulation-consistent estimator, it can have infinite variance. In this article we describe a method to stabilize the harmonic mean estimator. Under this approach, the parameter space is reduced such that the modified estimator involves a harmonic mean of heavier-tailed densities, thus resulting in a finite-variance estimator. We discuss general conditions under which this reduction is applicable and illustrate the proposed method through several examples. Keywords: Bayes factor, Beta-binomial, Integrated likelihood, Poisson-Gamma distribution, Statistical genetics, Variance reduction.
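The basic (unstabilized) harmonic mean estimator the paper starts from can be sketched on a toy conjugate model where the integrated likelihood is known exactly (illustrative code, not the paper's stabilization method):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy conjugate model: y_i ~ N(theta, 1), theta ~ N(0, 1), where both the
# posterior and the integrated likelihood p(y) are available in closed form.
y = rng.normal(1.0, 1.0, size=20)
n = len(y)
post_var = 1.0 / (n + 1.0)
post_mean = post_var * y.sum()

# Posterior samples and the log-likelihood at each draw.
theta = rng.normal(post_mean, np.sqrt(post_var), size=50_000)
loglik = (-0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - theta[:, None]) ** 2).sum(axis=1)

# Harmonic mean estimator: p(y) ~ 1 / mean(1 / L(theta_s)), computed on the
# log scale via log-sum-exp. The weights 1/L(theta_s) are heavy tailed,
# which is exactly the infinite-variance problem the article addresses.
m = (-loglik).max()
log_p_hme = -(m + np.log(np.exp(-loglik - m).mean()))

# Exact log integrated likelihood: y ~ N_n(0, I + J), with J the all-ones
# matrix, so det(I + J) = n + 1 and (I + J)^{-1} = I - J/(n + 1).
log_p_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1.0)
               - 0.5 * (np.sum(y ** 2) - y.sum() ** 2 / (n + 1.0)))
```

Even in this one-dimensional example the raw estimator is noticeably unstable across seeds, which motivates the stabilized variant the paper proposes.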
An evaluation of a Markov chain Monte Carlo method for the Rasch model
, 1998
Abstract

Cited by 4 (0 self)
The accuracy of the Gibbs sampling Markov chain Monte Carlo procedure was examined for estimating item and person (θ) parameters in the one-parameter logistic model. Four datasets were analyzed using the Gibbs sampling method, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood. Maximum likelihood and expected a posteriori θ estimation methods were used with marginal maximum likelihood estimation of item parameters. Item parameter estimates from the four methods were almost identical; θ estimates from Gibbs sampling were similar to those obtained from the expected a posteriori method. Index terms: Bayesian inference, conditional maximum likelihood, Gibbs sampling, item response theory, joint maximum likelihood
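An MCMC sampler for the Rasch (one-parameter logistic) model of the kind evaluated here can be sketched as follows. This is a single-site Metropolis-within-Gibbs sampler with N(0, 1) priors on abilities and difficulties (an illustrative scheme, not necessarily the augmentation used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate Rasch data: P(X_pi = 1) = logistic(theta_p - b_i).
P, I = 60, 10
theta_true = rng.normal(0, 1, P)
b_true = rng.normal(0, 1, I)
prob = 1.0 / (1.0 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = (rng.random((P, I)) < prob).astype(int)

def log_post_theta(t, b, x_row):
    # Bernoulli log-likelihood of one person's responses plus a N(0,1) prior.
    eta = t - b
    return (x_row * eta - np.log1p(np.exp(eta))).sum() - 0.5 * t ** 2

def log_post_b(bi, theta, x_col):
    eta = theta - bi
    return (x_col * eta - np.log1p(np.exp(eta))).sum() - 0.5 * bi ** 2

theta = np.zeros(P)
b = np.zeros(I)
draws = []
for it in range(1500):
    # Single-site random-walk Metropolis updates within each Gibbs sweep.
    for p in range(P):
        prop = theta[p] + rng.normal(0.0, 0.5)
        if np.log(rng.random()) < (log_post_theta(prop, b, X[p])
                                   - log_post_theta(theta[p], b, X[p])):
            theta[p] = prop
    for i in range(I):
        prop = b[i] + rng.normal(0.0, 0.5)
        if np.log(rng.random()) < (log_post_b(prop, theta, X[:, i])
                                   - log_post_b(b[i], theta, X[:, i])):
            b[i] = prop
    if it >= 500:  # discard burn-in
        draws.append(theta.copy())

theta_hat = np.mean(draws, axis=0)  # EAP-style posterior mean of abilities
```

Averaging the retained θ draws gives posterior-mean ability estimates directly comparable to the expected a posteriori estimates the study benchmarks against.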
Bayesian Estimation and Model Choice in Item Response Models
, 1999
Abstract

Cited by 4 (1 self)
Item response models are essential tools for analyzing results from many placement tests. Such models are used to quantify the probability of correct response as a function of unobserved examinee ability and other parameters explaining the difficulty and the discriminatory power of the questions in the test. Some of these models also incorporate a threshold parameter for the probability of the correct response to eliminate the effect of guessing the correct answer in multiple-choice tests. In this article we consider fitting these models using the Gibbs sampler. A data augmentation method to analyze a normal-ogive model incorporating a threshold guessing parameter is introduced and compared with a Metropolis-Hastings sampling method. The proposed method is an order of magnitude better than the existing method. Another objective of this paper is to develop Bayesian model choice techniques for model discrimination. A predictive approach based on a variant of the Bayes factor is ...
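The data augmentation idea for normal-ogive models can be sketched in its basic form, without the guessing threshold this paper adds: introduce a latent Gaussian variable per response whose sign determines the observed answer, after which all conditional updates are conjugate (an Albert-style sketch under those assumptions, not the paper's extended sampler):

```python
import numpy as np

rng = np.random.default_rng(2)
P, I = 40, 8

# Simulate normal-ogive (probit) responses via the latent-variable form:
# X_pi = 1  iff  theta_p - b_i + eps_pi > 0, with eps ~ N(0, 1).
theta_t = rng.normal(0, 1, P)
b_t = rng.normal(0, 1, I)
X = ((theta_t[:, None] - b_t[None, :] + rng.normal(size=(P, I))) > 0).astype(int)

def sample_trunc(mu, positive):
    """Exact rejection sampling of Z ~ N(mu, 1) truncated to Z > 0 where
    `positive` is True, and to Z <= 0 where it is False."""
    z = rng.normal(mu)
    bad = (z > 0) != positive
    while bad.any():
        z[bad] = rng.normal(mu[bad])
        bad = (z > 0) != positive
    return z

theta = np.zeros(P)
b = np.zeros(I)
keep = []
for it in range(600):
    # 1. Data augmentation: draw the latent Z given X and the parameters.
    Z = sample_trunc(theta[:, None] - b[None, :], X == 1)
    # 2. Conjugate ability update: theta_p | Z, b ~ N(mean, 1/(I+1)),
    #    combining I pseudo-observations Z_pi + b_i with a N(0, 1) prior.
    theta = ((Z + b[None, :]).sum(axis=1) / (I + 1)
             + rng.normal(0, 1 / np.sqrt(I + 1), P))
    # 3. Conjugate difficulty update: b_i | Z, theta ~ N(mean, 1/(P+1)).
    b = ((theta[:, None] - Z).sum(axis=0) / (P + 1)
         + rng.normal(0, 1 / np.sqrt(P + 1), I))
    if it >= 300:  # discard burn-in
        keep.append(b.copy())

b_hat = np.mean(keep, axis=0)  # posterior-mean item difficulties
```

Because every full conditional is sampled exactly, no Metropolis-Hastings tuning is needed, which is the appeal of augmentation schemes of this kind.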
Easy Computation of Bayes Factors and Normalizing Constants for Mixture Models via Mixture Importance Sampling
, 2001
Abstract

Cited by 3 (0 self)
We propose a method for approximating integrated likelihoods, or posterior normalizing constants, in finite mixture models, for which analytic approximations such as the Laplace method are invalid. Integrated likelihoods are key components of Bayes factors and of the posterior model probabilities used in Bayesian model averaging. The method starts by formulating the model in terms of the unobserved group memberships, Z, and making these, rather than the model parameters, the variables of integration. The integral is then evaluated using importance sampling over the Z. The tricky part is choosing the importance sampling function, and we study the use of mixtures as importance sampling functions. We propose two forms of this: defensive mixture importance sampling (DMIS), and Z-distance importance sampling. We choose the parameters of the mixture adaptively, and we show how this can be done so as to approximately minimize the variance of the approximation to the integral.
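The defensive-mixture idea can be sketched on a simpler conjugate model where the answer is known (the paper itself integrates over the group memberships Z of a mixture model; this illustrative code only shows why mixing a prior component into the importance density bounds the weights):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy conjugate model: y_i ~ N(theta, 1), theta ~ N(0, 1), so the exact
# integrated likelihood is known and the estimator can be checked.
y = rng.normal(1.0, 1.0, size=20)
n = len(y)
post_var = 1.0 / (n + 1.0)
post_mean = post_var * y.sum()

def log_norm(x, m, s):
    return -0.5 * np.log(2 * np.pi) - np.log(s) - 0.5 * ((x - m) / s) ** 2

# Defensive mixture importance sampling density:
# q = alpha * prior + (1 - alpha) * (approximate posterior).
# Since q >= alpha * prior, each weight L*prior/q is at most max(L)/alpha,
# so the estimator has finite variance.
alpha, S = 0.5, 100_000
from_prior = rng.random(S) < alpha
draws = np.where(from_prior,
                 rng.normal(0.0, 1.0, S),
                 rng.normal(post_mean, np.sqrt(post_var), S))
log_q = np.logaddexp(np.log(alpha) + log_norm(draws, 0.0, 1.0),
                     np.log(1 - alpha) + log_norm(draws, post_mean, np.sqrt(post_var)))
loglik = log_norm(y[None, :], draws[:, None], 1.0).sum(axis=1)
log_w = loglik + log_norm(draws, 0.0, 1.0) - log_q
mmax = log_w.max()
log_p_hat = mmax + np.log(np.exp(log_w - mmax).mean())

# Exact log integrated likelihood: y ~ N_n(0, I + J), J the all-ones matrix.
log_p_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1.0)
               - 0.5 * (np.sum(y ** 2) - y.sum() ** 2 / (n + 1.0)))
```

Unlike the harmonic mean estimator, the bounded weights here give a tightly concentrated estimate of the integrated likelihood.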
Bayesian finite mixtures: a note on prior specification and posterior computation
, 2005
Abstract

Cited by 3 (1 self)
A new method for the computation of the posterior distribution of the number k of components in a finite mixture is presented. Two aspects of prior specification are also studied: an argument is made for the use of a Poi(1) distribution as the prior for k; and methods are given for the selection of hyperparameter values in the mixture of normals model, with natural conjugate priors on the component parameters.
Delivery: An Open-Source Model-Based Bayesian Seismic Inversion Program
, 2003
Abstract

Cited by 2 (1 self)
We introduce a new open-source toolkit for model-based Bayesian seismic inversion called Delivery. The prior model in Delivery is a trace-local layer stack, with rock physics information taken from log analysis and layer times initialised from picks. We allow for uncertainty in both the fluid type and saturation in reservoir layers: variations in seismic responses due to fluid effects are taken into account via Gassmann's equation. Multiple stacks are supported, so the software implicitly performs a full AVO inversion using approximate Zoeppritz equations. The likelihood function is formed from a convolutional model with specified wavelet(s) and noise level(s). Uncertainties and irresolvabilities in the inverted models are captured by the generation of multiple stochastic models from the Bayesian posterior, all of which acceptably match the seismic data, log data, and rough initial picks of the horizons. Post-inversion analysis of the inverted stochastic models then facilitates the answering of commercially useful questions, e.g. the probability of hydrocarbons, the expected reservoir volume and its uncertainty, and the distribution of net sand. Delivery is written in Java, and thus platform independent, but the SU data backbone makes the inversion particularly suited to Unix/Linux environments and cluster systems.
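The convolutional likelihood mentioned above can be illustrated in miniature (a generic sketch with illustrative values, not Delivery's Java implementation or its actual wavelets): a synthetic trace is a wavelet convolved with a reflectivity series plus Gaussian noise, and the log-likelihood of any candidate model follows from the residual.

```python
import numpy as np

rng = np.random.default_rng(4)

def ricker(f, dt, half_n):
    """Ricker wavelet with peak frequency f (Hz), sampled every dt seconds."""
    t = np.arange(-half_n, half_n + 1) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# A sparse reflectivity series (layer interfaces) and the forward model:
# synthetic trace = wavelet (*) reflectivity + Gaussian noise.
n = 200
refl = np.zeros(n)
refl[[50, 90, 140]] = [0.3, -0.2, 0.25]
w = ricker(30.0, 0.002, 25)
noise_sd = 0.01
trace = np.convolve(refl, w, mode="same") + rng.normal(0, noise_sd, n)

def log_likelihood(refl_model):
    """Gaussian log-likelihood of the observed trace under a candidate
    reflectivity, as in a convolutional forward model."""
    resid = trace - np.convolve(refl_model, w, mode="same")
    return (-0.5 * n * np.log(2 * np.pi * noise_sd ** 2)
            - 0.5 * (resid ** 2).sum() / noise_sd ** 2)

ll_true = log_likelihood(refl)        # correct interface positions
wrong = np.zeros(n)
wrong[[60, 100]] = [0.3, -0.2]
ll_wrong = log_likelihood(wrong)      # shifted/missing interfaces fit worse
```

In a sampler of the kind the abstract describes, this log-likelihood is combined with the layer-stack prior and stochastic models are drawn from the resulting posterior.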
Determining the Number of Colors or Gray Levels in an Image Using Approximate Bayes Factors: The Pseudolikelihood Information Criterion (PLIC)
IEEE Transactions on Pattern Analysis and Machine Intelligence 24
, 2001
Abstract

Cited by 1 (0 self)
We propose a method for choosing the number of colors, or true gray levels, in an image. This is motivated by medical and satellite image segmentation, and may also be useful for color and gray-scale image quantization, the display and storage of computer-generated holograms, and the use of co-occurrence matrices for assessing texture in images. Our underlying probability model is a hidden Markov random field. Each number of colors considered is viewed as corresponding to a statistical model for the image, and the resulting models are compared via approximate Bayes factors. The Bayes factors are approximated using BIC, where the required maximized likelihood is approximated by the Qian-Titterington pseudolikelihood. We call the resulting criterion PLIC (Pseudolikelihood Information Criterion). We also discuss a simpler approximation, MMIC (Marginal Mixture Information Criterion), which is based only on the marginal distribution of pixel values. This turns out to be useful for initialization, and also to have moderately good, albeit suboptimal, performance in its own right. We apply PLIC to three examples: a simulated two-band image, a medical segmentation problem, and a satellite image; in each case it gives good results in practice. Keywords: BIC; Color image quantization; Co-occurrence matrix; Hologram; ICM algorithm; Image segmentation; Markov random field; Medical image; Mixture model; Posterior model probability; Pseudolikelihood; Satellite image.
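The simpler MMIC-style criterion can be sketched directly: fit Gaussian mixtures of increasing order to the marginal pixel values by EM and compare BIC = 2·logL − d·log n (an illustrative sketch with simulated data, not the paper's PLIC, which also uses the Markov random field pseudolikelihood):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated marginal "pixel values" from an image with two true gray levels.
x = np.concatenate([rng.normal(0.2, 0.05, 400), rng.normal(0.8, 0.05, 600)])
n = len(x)

def mixture_loglik(x, pi, mu, sd):
    comp = (np.log(pi) - np.log(sd) - 0.5 * np.log(2 * np.pi)
            - 0.5 * ((x[:, None] - mu) / sd) ** 2)
    m = comp.max(axis=1, keepdims=True)
    return (m.ravel() + np.log(np.exp(comp - m).sum(axis=1))).sum()

def em_gmm(x, k, iters=200):
    """Fit a k-component 1-d Gaussian mixture by EM; return the maximized
    log-likelihood."""
    mu = np.linspace(x.min(), x.max(), k)
    sd = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        comp = (np.log(pi) - np.log(sd) - 0.5 * np.log(2 * np.pi)
                - 0.5 * ((x[:, None] - mu) / sd) ** 2)
        m = comp.max(axis=1, keepdims=True)
        r = np.exp(comp - m)
        r /= r.sum(axis=1, keepdims=True)   # E-step: responsibilities
        nk = r.sum(axis=0)                  # M-step: weighted moments
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.maximum(np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk),
                        1e-2)               # variance floor vs. degeneracy
    return mixture_loglik(x, pi, mu, sd)

def bic(x, k):
    # 2*loglik - d*log(n), with d = (k-1) weights + k means + k variances.
    return 2.0 * em_gmm(x, k) - (3 * k - 1) * np.log(n)

best_k = max(range(1, 4), key=lambda k: bic(x, k))
```

Here BIC should select the true number of gray levels; in the full PLIC criterion the maximized likelihood term is replaced by the Qian-Titterington pseudolikelihood of the hidden Markov random field.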
Heterogeneity and model uncertainty in Bayesian regression models
, 1999
Abstract

Cited by 1 (0 self)
Data heterogeneity appears when the sample comes from at least two different populations. We analyze three types of situations. In the first and simplest case, the majority of the data come from a central model and a few isolated observations come from a contaminating distribution. The data from the contaminating distribution are called outliers, and they have been studied in depth in the statistical literature. In the second case we still have a central model, but the heterogeneous data may appear in clusters of outliers which mask each other. This is the multiple-outlier problem, which is much more difficult to handle and has been analyzed and understood only in the last few years. The few Bayesian contributions to this problem are presented. In the third case we do not have a central model; instead, different groups of data have been generated by different models. For multivariate normal data this problem has been analyzed by mixture models under the name of cluster analysis, but a challenging area of research is to develop a general methodology for applying this multiple-model approach to other statistical problems. Heterogeneity in general implies an increase in the uncertainty of predictions, and we present in this paper a procedure to measure this effect.