Results 11–20 of 59
A Bayesian Approach to Diffusion Models of Decision-Making and Response Time
 NIPS
, 2006
Abstract

Cited by 13 (2 self)
We present a computational Bayesian approach for Wiener diffusion models, which are prominent accounts of response time distributions in decision-making. We first develop a general closed-form analytic approximation to the response time distributions for one-dimensional diffusion processes, and derive the required Wiener diffusion as a special case. We use this result to undertake Bayesian modeling of benchmark data, using posterior sampling to draw inferences about the interesting psychological parameters. With the aid of the benchmark data, we show the Bayesian account has several advantages, including dealing naturally with the parameter variation needed to account for some key features of the data, and providing quantitative measures to guide decisions about model construction.
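The two-boundary Wiener diffusion process behind these response-time distributions can be sketched with a minimal Euler–Maruyama simulation. This is an illustrative sketch, not the authors' code; the drift, boundary, and step-size values are arbitrary assumptions:

```python
import random

def simulate_wiener_trial(drift=0.3, boundary=1.0, dt=0.001, sigma=1.0, seed=None):
    """Simulate one trial of a two-boundary Wiener diffusion process.

    Evidence starts at 0 and accumulates with the given drift plus Gaussian
    noise until it crosses +boundary (choice 1) or -boundary (choice 0).
    Returns (choice, response_time).
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= boundary else 0), t
```

Running many such trials yields the joint distribution of choices and response times that the paper's analytic approximation describes in closed form.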
Learning hybrid Bayesian networks from data
, 1998
Abstract

Cited by 11 (1 self)
We illustrate two different methodologies for learning hybrid Bayesian networks, that is, Bayesian networks containing both continuous and discrete variables, from data. The two methodologies differ in how they handle continuous data when learning the Bayesian network structure. The first methodology uses discretized data to learn the Bayesian network structure, and the original non-discretized data for the parameterization of the learned structure. The second methodology uses non-discretized data both to learn the Bayesian network structure and its parameterization. For the direct handling of continuous data, we propose the use of artificial neural networks as probability estimators, to be used as an integral part of the scoring metric defined to search the space of Bayesian network structures. With both methodologies, we assume the availability of a complete dataset, with no missing values or hidden variables. We report experimental results comparing the two methodologies. These results provide evidence that learning with discretized data offers advantages in both efficiency and accuracy of the learned models over the alternative of using non-discretized data.
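The discretization step of the first methodology can be illustrated with a simple equal-width binning function. This is a hypothetical sketch; the abstract does not specify which discretization scheme the authors use:

```python
def discretize_equal_width(values, bins=3):
    """Map a continuous variable onto `bins` equal-width integer labels.

    The labels could feed a discrete structure-learning score, while the
    original continuous values remain available for parameterization.
    """
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0] * len(values)
    width = (hi - lo) / bins
    # Clamp the maximum value into the last bin.
    return [min(int((v - lo) / width), bins - 1) for v in values]
```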
Bridge Estimation of the Probability Density At a Point
 Statistica Sinica
, 2000
Abstract

Cited by 7 (1 self)
Bridge estimation, as described by Meng and Wong in 1996, is used to estimate the value taken by a probability density at a point in the state space. When the normalisation of the prior density is known, this value may be used to estimate a Bayes factor. It is shown that the multi-block Metropolis-Hastings estimators of Chib and Jeliazkov (2001) are bridge estimators. This identification leads to more efficient estimators for the quantity of interest.
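The core idea of bridge estimation, estimating a ratio of normalising constants from draws of both densities, can be sketched with the geometric bridge function alpha(x) = 1/sqrt(q1(x) q2(x)). This is a toy sketch, not the paper's estimator:

```python
import math

def bridge_ratio(q1, q2, draws1, draws2):
    """Geometric-bridge estimate of c1/c2 for unnormalised densities q1, q2.

    With alpha(x) = 1/sqrt(q1(x) q2(x)) the bridge identity reduces to
    r_hat = mean_{x ~ p2}[sqrt(q1(x)/q2(x))] / mean_{x ~ p1}[sqrt(q2(x)/q1(x))].
    """
    num = sum(math.sqrt(q1(x) / q2(x)) for x in draws2) / len(draws2)
    den = sum(math.sqrt(q2(x) / q1(x)) for x in draws1) / len(draws1)
    return num / den
```

With q1(x) = exp(-x^2/2) and q2(x) = exp(-x^2/8), whose normalising constants are sqrt(2*pi) and 2*sqrt(2*pi), the estimator should return roughly 0.5.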
Delivery: An Open-Source Model-Based Bayesian Seismic Inversion Program
, 2003
Abstract

Cited by 7 (3 self)
We introduce a new open-source toolkit for model-based Bayesian seismic inversion called Delivery. The prior model in Delivery is a trace-local layer stack, with rock physics information taken from log analysis and layer times initialised from picks. We allow for uncertainty in both the fluid type and saturation in reservoir layers: variation in seismic response due to fluid effects is taken into account via Gassmann's equation. Multiple stacks are supported, so the software implicitly performs a full AVO inversion using approximate Zoeppritz equations. The likelihood function is formed from a convolutional model with specified wavelet(s) and noise level(s). Uncertainties and irresolvabilities in the inverted models are captured by the generation of multiple stochastic models from the Bayesian posterior, all of which acceptably match the seismic data, log data, and rough initial picks of the horizons. Post-inversion analysis of the inverted stochastic models then facilitates the answering of commercially useful questions, e.g. the probability of hydrocarbons, the expected reservoir volume and its uncertainty, and the distribution of net sand. Delivery is written in Java, and thus platform independent, but the SU data backbone makes the inversion particularly suited to Unix/Linux environments and cluster systems.
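The convolutional model used to form the likelihood can be sketched in a few lines: a synthetic trace is the reflectivity series convolved with a wavelet, to be compared with the observed seismic under the specified noise level. A minimal hypothetical sketch, not Delivery's implementation:

```python
def convolve(reflectivity, wavelet):
    """Convolutional forward model: synthetic trace = reflectivity * wavelet.

    Plain full convolution of two sequences; the residual between this
    synthetic and the observed trace would enter a Gaussian likelihood.
    """
    n = len(reflectivity) + len(wavelet) - 1
    out = [0.0] * n
    for i, r in enumerate(reflectivity):
        for j, w in enumerate(wavelet):
            out[i + j] += r * w
    return out
```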
Performance of Bayesian model selection criteria for Gaussian mixture models. In: Frontiers of Statistical Decision Making and Bayesian
, 2010
Bayesian Estimation and Model Choice in Item Response Models
, 1999
Abstract

Cited by 6 (1 self)
Item response models are essential tools for analyzing results from many placement tests. Such models are used to quantify the probability of correct response as a function of unobserved examinee ability and other parameters explaining the difficulty and the discriminatory power of the questions in the test. Some of these models also incorporate a threshold parameter for the probability of correct response to eliminate the effect of guessing the correct answer in multiple-choice tests. In this article we consider fitting these models using the Gibbs sampler. A data augmentation method to analyze a normal-ogive model incorporating a threshold guessing parameter is introduced and compared with a Metropolis-Hastings sampling method. The proposed method is an order of magnitude better than the existing method. Another objective of this paper is to develop Bayesian model choice techniques for model discrimination. A predictive approach based on a variant of the Bayes factor is ...
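The item response function with a threshold (guessing) parameter described above takes the three-parameter normal-ogive form P(correct) = c + (1 - c) * Phi(a * (theta - b)). A minimal sketch; the parameter names follow convention and are not taken from the paper:

```python
import math

def normal_ogive_3pl(theta, a, b, c):
    """P(correct) under a normal-ogive model with guessing threshold c.

    theta: examinee ability; a: item discrimination; b: item difficulty;
    c: lower asymptote (probability of a correct guess).
    """
    # Standard normal CDF via the error function.
    phi = 0.5 * (1.0 + math.erf(a * (theta - b) / math.sqrt(2.0)))
    return c + (1.0 - c) * phi
```

Even an examinee with very low ability answers correctly with probability near c, which is the guessing effect the threshold parameter is meant to absorb.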
An evaluation of a Markov chain Monte Carlo method for the Rasch model
 Applied Psychological Measurement
Easy Estimation of Normalizing Constants and Bayes Factors from Posterior Simulation: Stabilizing the Harmonic Mean Estimator
, 2000
Abstract

Cited by 4 (0 self)
The Bayes factor is a useful summary for model selection. Calculation of this measure involves evaluating the integrated likelihood (or prior predictive density), which can be estimated from the output of MCMC and other posterior simulation methods using the harmonic mean estimator. While this is a simulation-consistent estimator, it can have infinite variance. In this article we describe a method to stabilize the harmonic mean estimator. Under this approach, the parameter space is reduced such that the modified estimator involves a harmonic mean of heavier-tailed densities, thus resulting in a finite-variance estimator. We discuss general conditions under which this reduction is applicable and illustrate the proposed method through several examples. Keywords: Bayes factor, Beta-binomial, Integrated likelihood, Poisson-Gamma distribution, Statistical genetics, Variance reduction.
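The plain (unstabilized) harmonic mean estimator is easy to sketch in a conjugate normal model where the integrated likelihood is available in closed form for comparison. This is an illustrative sketch, not the paper's stabilized estimator; the model y_i ~ N(theta, 1) with theta ~ N(0, 1) is a hypothetical choice:

```python
import math

def log_marginal_true(y, prior_var=1.0):
    """Closed-form log integrated likelihood for y_i ~ N(theta,1), theta ~ N(0, prior_var)."""
    n = len(y)
    s1 = sum(y)
    s2 = sum(v * v for v in y)
    # y is jointly normal with covariance I + prior_var * J.
    quad = s2 - prior_var * s1 * s1 / (1.0 + n * prior_var)
    return -0.5 * (n * math.log(2 * math.pi) + math.log(1.0 + n * prior_var) + quad)

def log_harmonic_mean(y, draws):
    """Harmonic mean estimate of the log integrated likelihood from posterior draws."""
    def loglik(theta):
        return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (v - theta) ** 2 for v in y)
    neg_lls = [-loglik(t) for t in draws]
    # log-mean-exp of the inverse likelihoods, computed stably.
    m = max(neg_lls)
    log_mean_inv = m + math.log(sum(math.exp(v - m) for v in neg_lls) / len(neg_lls))
    return -log_mean_inv
```

Because the inverse likelihood has heavy tails under the posterior, repeated runs of this estimator fluctuate far more than its simulation-consistency suggests, which is exactly the instability the paper's reduction is designed to remove.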
Bayesian finite mixtures: a note on prior specification and posterior computation
, 2005
Abstract

Cited by 4 (1 self)
A new method for the computation of the posterior distribution of the number k of components in a finite mixture is presented. Two aspects of prior specification are also studied: an argument is made for the use of a Poi(1) distribution as the prior for k, and methods are given for the selection of hyperparameter values in the mixture-of-normals model, with natural conjugate priors on the component parameters.
Detecting Mines in Minefields with Linear Characteristics
, 1999
Abstract

Cited by 3 (0 self)
We consider the problem of detecting minefields using aerial images. A first stage of image processing has reduced the image to a set of points, each one representing a possible mine. Our task is to decide which ones are actual mines. We assume that the minefield consists of approximately parallel rows of mines laid out according to a probability distribution that encourages evenly spaced, linear patterns. The noise points are assumed to be distributed as a Poisson process. We construct a Markov chain Monte Carlo algorithm to estimate the model and obtain posterior probabilities for each point being a mine. The algorithm performs well on