Results 11–20 of 126
Bayesian Monte Carlo
Abstract

Cited by 31 (4 self)
We investigate Bayesian alternatives to classical Monte Carlo methods for evaluating integrals. Bayesian Monte Carlo (BMC) allows the incorporation of prior knowledge, such as smoothness of the integrand, into the estimation. In a simple problem we show that this outperforms any classical importance sampling method. We also attempt more challenging multidimensional integrals involved in computing marginal likelihoods of statistical models (a.k.a. partition functions and model evidences). We find that Bayesian Monte Carlo outperformed Annealed Importance Sampling, although for very high dimensional problems or problems with massive multimodality BMC may be less adequate. One advantage of the Bayesian approach to Monte Carlo is that samples can be drawn from any distribution. This allows for the possibility of active design of sample points so as to maximise information gain.
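As a concrete illustration of the idea (a minimal one-dimensional sketch, not the paper's implementation): with a squared-exponential kernel on the integrand and a Gaussian input density, the kernel integrals z_i = ∫ k(x, x_i) p(x) dx have a closed form, and the BMC estimate of E_p[f] is z^T K^{-1} f evaluated at the design points. The lengthscale, jitter, and grid below are arbitrary illustrative choices.

```python
import numpy as np

def bmc_estimate(f, x, ell=0.8, sigma=1.0, jitter=1e-8):
    """Bayesian Monte Carlo estimate of E_p[f(x)] for p = N(0, sigma^2),
    using a GP prior on f with squared-exponential kernel of lengthscale ell.

    The estimate is z^T K^{-1} f(x), where z_i = int k(x, x_i) p(x) dx
    has a closed form for this kernel/density pair."""
    X = np.asarray(x, dtype=float)
    K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ell ** 2)
    K += jitter * np.eye(len(X))                    # numerical stabilisation
    s2 = ell ** 2 + sigma ** 2
    z = ell / np.sqrt(s2) * np.exp(-0.5 * X ** 2 / s2)
    return z @ np.linalg.solve(K, f(X))

# Estimate E[x^2] under N(0, 1); the exact value is 1.
x = np.linspace(-4.0, 4.0, 15)
est = bmc_estimate(lambda t: t ** 2, x)
```

Note that the design points here form a grid rather than draws from p, which is exactly the "samples can be drawn from any distribution" point made in the abstract.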
Blocking Gibbs Sampling for Linkage Analysis in Large Pedigrees with Many Loops
 AMERICAN JOURNAL OF HUMAN GENETICS, 1996
Abstract

Cited by 24 (2 self)
We will apply the method of blocking Gibbs sampling to a problem of great importance and complexity: linkage analysis. Blocking Gibbs combines exact local computations with Gibbs sampling in a way that complements the strengths of both. The method is able to handle problems with very high complexity, such as linkage analysis in large pedigrees with many loops, a task that no other known method is able to handle. New developments of the method are outlined, and it is applied to a highly complex linkage problem.
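The core move — drawing a whole block of variables jointly from an exactly computed conditional, rather than one variable at a time — can be sketched on a toy trivariate Gaussian (the target and blocking below are illustrative; in the paper the exact conditionals come from local pedigree computations rather than Gaussian algebra):

```python
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.5, 0.5],
                  [0.5, 1.0, 0.5],
                  [0.5, 0.5, 1.0]])

# Exact conditional of the block (x1, x2) given x3.
S_bb, S_b3, s33 = Sigma[:2, :2], Sigma[:2, 2], Sigma[2, 2]
cov_block = S_bb - np.outer(S_b3, S_b3) / s33   # Cov[(x1,x2) | x3]
L_block = np.linalg.cholesky(cov_block)
A = S_b3 / s33                                  # E[(x1,x2) | x3] = A * x3

# Exact conditional of x3 given the block.
coef3 = Sigma[2, :2] @ np.linalg.inv(S_bb)      # E[x3 | x1,x2] = coef3 @ (x1,x2)
sd3 = np.sqrt(s33 - coef3 @ Sigma[2, :2])

x = np.zeros(3)
draws = np.empty((20000, 3))
for t in range(20000):
    x[:2] = A * x[2] + L_block @ rng.standard_normal(2)   # joint block update
    x[2] = coef3 @ x[:2] + sd3 * rng.standard_normal()
    draws[t] = x
```

Sampling the correlated pair jointly removes the slow single-site random walk between them, which is the same benefit blocking provides in looped pedigrees.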
Learning domain structures
 In Proceedings of the 26th Annual Conference of the Cognitive Science Society, 2004
Abstract

Cited by 22 (13 self)
How do people acquire and use knowledge about domain structures, such as the tree-structured taxonomy of folk biology? These structures are typically seen either as consequences of innate domain-specific knowledge or as epiphenomena of domain-general associative learning. We present an alternative: a framework for statistical inference that discovers the structural principles that best account for different domains of objects and their properties. Our approach infers that a tree structure is best for a biological dataset, and a linear structure (“left”–“right”) is best for a dataset of people and their political views. We compare our proposal with unstructured associative learning and argue that our structured approach gives the better account of inductive ...
Fully Bayesian Estimation of Gibbs Hyperparameters for Emission Computed Tomography Data
 IEEE Transactions on Medical Imaging, 1997
Abstract

Cited by 20 (3 self)
In recent years, many investigators have proposed Gibbs prior models to regularize images reconstructed from emission computed tomography data. Unfortunately, hyperparameters used to specify Gibbs priors can greatly influence the degree of regularity imposed by such priors, and as a result, numerous procedures have been proposed to estimate hyperparameter values from observed image data. Many of these procedures attempt to maximize the joint posterior distribution on the image scene. To implement these methods, approximations to the joint posterior densities are required, because the dependence of the Gibbs partition function on the hyperparameter values is unknown. In this paper, we use recent results in Markov Chain Monte Carlo sampling to estimate the relative values of Gibbs partition functions, and using these values, sample from joint posterior distributions on image scenes. This allows for a fully Bayesian procedure which does not fix the hyperparameters at some estimated or spe...
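The key computational step — estimating relative values of partition functions from Monte Carlo samples — can be illustrated on a toy Gibbs density exp(-βx²/2), whose partition function sqrt(2π/β) is known in closed form. This simple perturbation-style estimator is a stand-in for the paper's MCMC scheme, not a reproduction of it:

```python
import numpy as np

rng = np.random.default_rng(1)

def partition_ratio(beta0, beta1, n=100_000):
    """Estimate Z(beta1)/Z(beta0) for p_beta(x) ∝ exp(-beta x^2 / 2)
    from exact samples at beta0:
        Z1/Z0 = E_beta0[ exp(-(beta1 - beta0) x^2 / 2) ]."""
    x = rng.normal(0.0, 1.0 / np.sqrt(beta0), size=n)   # draws from p_beta0
    return np.mean(np.exp(-0.5 * (beta1 - beta0) * x ** 2))

ratio = partition_ratio(1.0, 2.0)
exact = np.sqrt(1.0 / 2.0)   # Z(beta) = sqrt(2*pi/beta), so Z(2)/Z(1) = sqrt(1/2)
```

In real imaging models the samples at β₀ would come from an MCMC sampler rather than exact draws, but the ratio identity is the same.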
Distance dependent Chinese restaurant processes
Abstract

Cited by 18 (3 self)
We develop the distance dependent Chinese restaurant process (CRP), a flexible class of distributions over partitions that allows for non-exchangeability. This class can be used to model dependencies between data in infinite clustering models, including dependencies across time or space. We examine the properties of the distance dependent CRP, discuss its connections to Bayesian nonparametric mixture models, and derive a Gibbs sampler for both observed and mixture settings. We study its performance with time-dependent models and three text corpora. We show that relaxing the assumption of exchangeability with distance dependent CRPs can provide a better fit to sequential data. We also show that its alternative formulation of the traditional CRP leads to a faster-mixing Gibbs sampling algorithm than the one based on the original formulation.
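A minimal sketch of the ddCRP prior itself (the exponential decay function and the distances are illustrative choices): each customer links to another customer with probability proportional to a decay of their distance, or to itself with probability proportional to α, and clusters are the connected components of the resulting link graph.

```python
import numpy as np

def ddcrp_partition(coords, alpha=1.0, scale=1.0, rng=None):
    """Draw one partition from a distance dependent CRP prior.

    coords : (n,) array of locations (e.g. time stamps); closer customers
    are more likely to link, which induces non-exchangeable clusterings."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(coords)
    links = np.empty(n, dtype=int)
    for i in range(n):
        w = np.exp(-np.abs(coords - coords[i]) / scale)  # decay function f(d)
        w[i] = alpha                                     # self-link mass
        links[i] = rng.choice(n, p=w / w.sum())
    # Clusters = connected components of the undirected link graph (union-find).
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(n):
        parent[find(i)] = find(links[i])
    roots = np.array([find(i) for i in range(n)])
    return [np.flatnonzero(roots == r).tolist() for r in sorted(set(roots))]

clusters = ddcrp_partition(np.arange(10.0), alpha=2.0,
                           rng=np.random.default_rng(3))
```

With a small `scale`, only near neighbours end up at the same table; a constant decay function recovers an exchangeable CRP-style prior, matching the "alternative formulation" point in the abstract.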
Warp bridge sampling
 J. Comp. Graph. Statist., 2002
Abstract

Cited by 16 (1 self)
Bridge sampling, a general formulation of the acceptance ratio method in physics for computing free-energy differences, is an effective Monte Carlo method for computing normalizing constants of probability models. The method was originally proposed for cases where the probability models have overlapping support. Voter proposed the idea of shifting physical systems before applying the acceptance ratio method to calculate free-energy differences between systems that are highly separated in a configuration space. The purpose of this article is to push Voter’s idea further by applying more general transformations, including stochastic transformations resulting from mixing over transformation groups, to the underlying variables before performing bridge sampling. We term such methods warp bridge sampling to highlight the fact that in addition to location shifting (i.e., centering) one can further reduce the difference/distance between two densities by warping their shapes without changing the normalizing constants. Real-data-based empirical studies using the full-information item factor model and a nonlinear mixed model are provided to demonstrate the potentially substantial gains in Monte Carlo efficiency by going beyond centering and by using efficient bridge sampling estimators. Our general method is also applicable to a couple of recent proposals for computing marginal likelihoods and Bayes factors because these methods turn out to be covered by the general bridge sampling framework.
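The centering idea can be shown on a toy pair of densities (example mine, not from the article): q1 is an unnormalized Gaussian centred at 3 and p2 a standard normal, so the true ratio of normalizing constants is sqrt(2π). Shifting ("warping") q1 onto p2 before applying the geometric bridge sqrt(q1·q2) leaves the normalizing constant unchanged but makes the two densities overlap.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

q1 = lambda x: np.exp(-0.5 * (x - 3.0) ** 2)               # unnormalized, Z1 = sqrt(2*pi)
p2 = lambda x: np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)  # normalized, Z2 = 1

x1 = rng.normal(3.0, 1.0, n)   # samples from q1 / Z1
x2 = rng.normal(0.0, 1.0, n)   # samples from p2

def bridge_ratio(qa, qb, xa, xb):
    """Geometric-bridge estimate of Za/Zb:
    r = E_qb[sqrt(qa/qb)] / E_qa[sqrt(qb/qa)]."""
    return np.mean(np.sqrt(qa(xb) / qb(xb))) / np.mean(np.sqrt(qb(xa) / qa(xa)))

r_plain = bridge_ratio(q1, p2, x1, x2)

# Warp: shift q1 so its mode coincides with p2's; Z is shift-invariant.
q1_warp = lambda x: q1(x + 3.0)
r_warp = bridge_ratio(q1_warp, p2, x1 - 3.0, x2)
```

Here the warped bridge integrand is exactly constant, so `r_warp` has zero Monte Carlo variance, while `r_plain` must estimate two expectations dominated by the thin overlap region between the separated densities.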
Bayesian Variable Selection for Proportional Hazards Models
1996
Abstract

Cited by 15 (1 self)
The authors consider the problem of Bayesian variable selection for proportional hazards regression models with right censored data. They propose a semiparametric approach in which a nonparametric prior is specified for the baseline hazard rate and a fully parametric prior is specified for the regression coefficients. For the baseline hazard, they use a discrete gamma process prior, and for the regression coefficients and the model space, they propose a semiautomatic parametric informative prior specification that focuses on the observables rather than the parameters. To implement the methodology, they propose a Markov chain Monte Carlo method to compute the posterior model probabilities. Examples using simulated and real data are given to demonstrate the methodology.
Sequential Monte Carlo for Bayesian Computation
Abstract

Cited by 15 (2 self)
Sequential Monte Carlo (SMC) methods are a class of importance sampling and resampling techniques designed to simulate from a sequence of probability distributions. These approaches have become very popular over the last few years to solve sequential Bayesian inference problems (e.g. Doucet et al. 2001). However, in comparison to Markov chain Monte Carlo (MCMC), the application of SMC remains limited when, in fact, such methods are also appropriate in such contexts (e.g. Chopin (2002); Del Moral et al. (2006)). In this paper, we present a simple unifying framework which allows us to extend both the SMC methodology and its range of applications. Additionally, reinterpreting SMC algorithms as an approximation of nonlinear MCMC kernels, we present alternative SMC and iterative self-interacting approximation (Del Moral & Miclo 2004; 2006) schemes. We demonstrate the performance of the SMC methodology on static and sequential Bayesian inference problems.
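A minimal tempered SMC sampler in the spirit described (all settings illustrative): particles move from an easy distribution to the target through a geometric sequence of intermediate distributions, with importance reweighting, resampling, and an MCMC move at each step; the product of the average incremental weights estimates the target's normalizing constant.

```python
import numpy as np

rng = np.random.default_rng(7)

log_prior = lambda x: -0.5 * x**2 / 16 - 0.5 * np.log(32 * np.pi)  # N(0, 4^2), normalized
log_target = lambda x: -0.5 * x**2                                  # unnormalized, Z = sqrt(2*pi)

n, betas = 2000, np.linspace(0.0, 1.0, 11)
x = rng.normal(0.0, 4.0, n)
log_Z = 0.0
for b0, b1 in zip(betas[:-1], betas[1:]):
    # Incremental importance weights between annealed distributions.
    log_w = (b1 - b0) * (log_target(x) - log_prior(x))
    m = log_w.max()
    log_Z += m + np.log(np.mean(np.exp(log_w - m)))
    # Multinomial resampling.
    p = np.exp(log_w - m)
    x = x[rng.choice(n, size=n, p=p / p.sum())]
    # One random-walk Metropolis move targeting the annealed density pi_b1.
    log_pi = lambda y: (1 - b1) * log_prior(y) + b1 * log_target(y)
    prop = x + rng.normal(0.0, 1.0, n)
    accept = np.log(rng.uniform(size=n)) < log_pi(prop) - log_pi(x)
    x = np.where(accept, prop, x)

Z_hat = np.exp(log_Z)   # estimates sqrt(2*pi)
```

The same loop with a sequence of posteriors in place of the tempered path gives the sequential Bayesian inference setting discussed in the abstract.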
Computing Normalizing Constants for Finite Mixture Models via Incremental Mixture Importance Sampling (IMIS)
2003
Abstract

Cited by 14 (5 self)
We propose a method for approximating integrated likelihoods in finite mixture models. We formulate the model in terms of the unobserved group memberships, z, and make them the variables of integration. The integral is then evaluated using importance sampling over the z. We propose an adaptive importance sampling function which is itself a mixture, with two types of component distributions, one concentrated and one diffuse. The more concentrated type of component serves the usual purpose of an importance sampling function, sampling mostly group assignments of high posterior probability. The less concentrated type of component allows for the importance sampling function to explore the space in a controlled way to find other, unvisited assignments with high posterior probability. Components are added adaptively, one at a time, to cover areas of high posterior probability not well covered by the current importance sampling function. The method is called Incremental Mixture Importance Sampling (IMIS). IMIS is easy to implement and to monitor for convergence. It scales easily for higher dimensional ...
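The adaptive loop can be sketched on a continuous toy target (a two-mode unnormalized density standing in for the posterior over group memberships z; all settings illustrative): at each stage the highest-weight point seeds a new concentrated component, which is mixed with the diffuse starting proposal.

```python
import numpy as np

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Toy unnormalized target with two separated modes; Z = 2*sqrt(2*pi).
target = lambda x: np.exp(-0.5 * (x - 3.0) ** 2) + np.exp(-0.5 * (x + 3.0) ** 2)

rng = np.random.default_rng(11)
n_per = 2000
components = [(0.0, 5.0)]              # one diffuse component to start (mean, sd)
xs = rng.normal(0.0, 5.0, n_per)

for _ in range(3):
    # Equal-count samples per component => the proposal density is the
    # equal-weight mixture over the current components.
    mix = np.mean([norm_pdf(xs, m, s) for (m, s) in components], axis=0)
    w = target(xs) / mix
    # Seed a concentrated component at the current highest-weight point.
    mu_new = xs[np.argmax(w)]
    components.append((mu_new, 1.0))
    xs = np.concatenate([xs, rng.normal(mu_new, 1.0, n_per)])

mix = np.mean([norm_pdf(xs, m, s) for (m, s) in components], axis=0)
Z_hat = np.mean(target(xs) / mix)      # estimate of the normalizing constant
```

After the first concentrated component captures one mode, its samples get low weight there, so the next highest-weight point sits near the other mode — the controlled exploration the abstract describes. (IMIS proper works over discrete assignments z; this continuous analogue only shows the adaptation mechanism.)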