Results 1–10 of 31
Bayesian Model Assessment and Comparison Using Cross-Validation Predictive Densities
Neural Computation, 2002
Cited by 26 (10 self)
In this work, we discuss practical methods for the assessment, comparison, and selection of complex hierarchical Bayesian models. A natural way to assess the goodness of the model is to estimate its future predictive capability by estimating expected utilities. Instead of just making a point estimate, it is important to obtain the distribution of the expected utility estimate, as it describes the uncertainty in the estimate. The distributions of the expected utility estimates can also be used to compare models, for example, by computing the probability of one model having a better expected utility than some other model. We propose an approach using cross-validation predictive densities to obtain expected utility estimates and the Bayesian bootstrap to obtain samples from their distributions. We also discuss the probabilistic assumptions made and properties of two practical cross-validation methods, importance sampling and k-fold cross-validation. As illustrative examples, we use MLP neural networks and Gaussian Processes (GP) with Markov chain Monte Carlo sampling in one toy problem and two challenging real-world problems.
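The two ingredients described in this abstract, an expected-utility point estimate from cross-validated log predictive densities and a Bayesian bootstrap to quantify its uncertainty, can be sketched numerically. The per-observation log predictive densities below are fabricated for illustration; in the paper's setting they would come from MCMC predictive densities of an MLP or GP model evaluated on held-out folds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated per-observation log predictive densities from k-fold CV.
n = 200
log_pred = rng.normal(-1.0, 0.3, size=n)   # log p(y_i | data excluding i's fold)

# Point estimate of the expected utility (mean log predictive density).
u_hat = log_pred.mean()

# Bayesian bootstrap: draw uniform Dirichlet weights and form weighted
# means, giving samples from the distribution of the utility estimate.
B = 4000
w = rng.dirichlet(np.ones(n), size=B)      # (B, n) weight vectors
u_samples = w @ log_pred                   # B draws of the expected utility

lo, hi = np.percentile(u_samples, [2.5, 97.5])
```

The spread of `u_samples` describes the uncertainty in the estimate; comparing two models then amounts to computing the fraction of draws in which one model's utility samples exceed the other's.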
Estimating the integrated likelihood via posterior simulation using the harmonic mean identity
Bayesian Statistics, 2007
Cited by 23 (2 self)
The integrated likelihood (also called the marginal likelihood or the normalizing constant) is a central quantity in Bayesian model selection and model averaging. It is defined as the integral over the parameter space of the likelihood times the prior density. The Bayes factor for model comparison and Bayesian testing is a ratio of integrated likelihoods, and the model weights in Bayesian model averaging are proportional to the integrated likelihoods. We consider the estimation of the integrated likelihood from posterior simulation output, aiming at a generic method that uses only the likelihoods from the posterior simulation iterations. The key is the harmonic mean identity, which says that the reciprocal of the integrated likelihood is equal to the posterior harmonic mean of the likelihood. The simplest estimator based on the identity is thus the harmonic mean of the likelihoods. While this is an unbiased and simulation-consistent estimator, its reciprocal can have infinite variance and so it is unstable in general. We describe two methods for stabilizing the harmonic mean estimator. In the first one, the parameter space is reduced in such a way that the modified estimator involves a harmonic mean of heavier-tailed densities, thus resulting in a finite-variance estimator.
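The raw harmonic mean estimator described above is easy to demonstrate on a conjugate toy model where the integrated likelihood has a closed form. This is an illustrative sketch, not the paper's stabilized estimator; the model and all numbers are fabricated for checking.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(1)

# Conjugate toy model y_i ~ N(theta, 1), theta ~ N(0, 1), where the
# integrated likelihood p(y) has a closed form to compare against.
y = rng.normal(0.5, 1.0, size=10)
n, S = len(y), 20000

# Exact posterior draws of theta: theta | y ~ N(n*ybar/(n+1), 1/(n+1)).
post_var = 1.0 / (n + 1.0)
theta = rng.normal(n * y.mean() * post_var, np.sqrt(post_var), size=S)

# Log-likelihood of the data at each posterior draw.
loglik = norm.logpdf(y[:, None], loc=theta, scale=1.0).sum(axis=0)

# Harmonic mean identity: 1/p(y) = E_post[1/p(y|theta)], so p(y) is
# estimated by the harmonic mean of the likelihoods (done in log space).
# As the abstract notes, this raw estimator can be very unstable.
log_phat = -(logsumexp(-loglik) - np.log(S))

# Closed-form integrated likelihood for comparison.
log_p_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
               - 0.5 * (np.sum(y**2) - np.sum(y)**2 / (n + 1)))
```

The harmonic mean always lies between the smallest and largest likelihood in the sample, but its heavy-tailed reciprocal is what motivates the stabilized variants the paper develops.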
Variational Approximations in Bayesian Model Selection for Finite Mixture Distributions
Computational Statistics and Data Analysis, 2006
Cited by 12 (3 self)
Variational methods for model comparison have become popular in the neural computing/machine learning literature. In this paper we explore their application to the Bayesian analysis of mixtures of Gaussians. We also consider how the Deviance Information Criterion, or DIC, devised by Spiegelhalter et al. (2002), can be extended to these types of model by exploiting the use of variational approximations. We illustrate the results of using variational methods for model selection and the calculation of a DIC using real and simulated data. Using the variational approximation, one can simultaneously estimate component parameters and the model complexity. It turns out that, if one starts off with a large number of components, superfluous components are eliminated as the method converges to a solution, thereby leading to an automatic choice of model complexity, the appropriateness of which is reflected in the DIC values.
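The DIC referred to above follows a simple recipe: the posterior mean deviance plus an effective-number-of-parameters penalty. A minimal sketch, using a one-parameter normal model as a stand-in for the paper's mixture setting (all data and the approximate posterior are fabricated):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# One-parameter normal model: y_i ~ N(theta, 1).
y = rng.normal(1.0, 1.0, size=100)
n = len(y)

# Approximate posterior draws of theta (flat prior: theta | y ~ N(ybar, 1/n)).
theta = rng.normal(y.mean(), 1.0 / np.sqrt(n), size=5000)

def deviance(th):
    # D(theta) = -2 log p(y | theta)
    return -2.0 * norm.logpdf(y[:, None], loc=th, scale=1.0).sum(axis=0)

D_draws = deviance(theta)
D_bar = D_draws.mean()                        # posterior mean deviance
D_at_mean = deviance(np.array([theta.mean()]))[0]
p_D = D_bar - D_at_mean                       # effective number of parameters
DIC = D_bar + p_D                             # Spiegelhalter et al. (2002)
```

In the paper's setting the posterior draws would be replaced by the variational approximation to the mixture posterior; here `p_D` comes out close to one, matching the single free parameter.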
Extending Conventional Priors for Testing General Hypotheses
Biometrika, 2007
Cited by 7 (3 self)
In this paper, we consider that observations Y come from a general normal linear model and that it is desired to test a simplifying (null) hypothesis about the parameters. We approach this problem from an objective Bayesian, model selection perspective. Crucial ingredients for this approach are ‘proper objective priors’ to be used for deriving the Bayes factors. Jeffreys-Zellner-Siow priors have been shown to have good properties for testing null hypotheses defined by specific values of the parameters in full rank linear models. We extend these priors to deal with general hypotheses in general linear models, not necessarily full rank. The resulting priors, which we call ‘conventional priors’, are expressed as a generalization of recently introduced ‘partially informative distributions’. The corresponding Bayes factors are fully automatic, easy to compute and very reasonable. The methodology is illustrated for two popular problems: the change point problem and the equality of treatment effects problem. We compare the conventional priors derived for these problems with other objective Bayesian proposals like the intrinsic priors. It is concluded that both priors behave similarly although interesting subtle differences arise. Finally, we accommodate the conventional priors to deal with non-nested model selection as well as multiple model comparison.
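The Bayes factor computation underlying this approach can be illustrated in the simplest scalar case with a Jeffreys-Zellner-Siow-style Cauchy prior. This is only a sketch of the general idea, not the paper's rank-deficient linear-model setting; the data and prior scale are assumptions for illustration.

```python
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(3)

# Test H0: mu = 0 against H1: mu ~ Cauchy(0, 1), with y_i ~ N(mu, 1).
y = rng.normal(0.3, 1.0, size=30)

ll_ref = stats.norm.logpdf(y, y.mean(), 1.0).sum()   # scaling reference

def integrand(mu):
    # Likelihood (scaled for numerical stability) times the Cauchy prior.
    return np.exp(stats.norm.logpdf(y, mu, 1.0).sum() - ll_ref) * stats.cauchy.pdf(mu)

# Integrated likelihood under H1 = integral of likelihood times prior.
val, _ = integrate.quad(integrand, -10, 10)
log_m1 = np.log(val) + ll_ref
log_m0 = stats.norm.logpdf(y, 0.0, 1.0).sum()   # H0 has no free parameters
B01 = np.exp(log_m0 - log_m1)                   # Bayes factor for H0 vs H1
```

Under the paper's conventional priors the one-dimensional quadrature is replaced by closed-form or low-dimensional expressions, which is what makes the resulting Bayes factors fully automatic.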
Bayesian Validation of a Computer Model for Vehicle Collision
Cited by 4 (2 self)
A key question in the evaluation of computer models is: does the computer model adequately represent reality? A complete Bayesian approach to answering this question is developed for the challenging practical context in which the computer model (and reality) produce functional data. The methodology is particularly suited to treating the major issues associated with the validation process: quantifying multiple sources of error and uncertainty in computer models; combining multiple sources of information; and being able to adapt to different – but related – scenarios through hierarchical modeling. It is also shown how one can formally test whether the computer model reproduces reality. The approach is illustrated through study of a computer model developed to assess vehicle crashworthiness.
Estimating and Projecting Trends in HIV/AIDS Generalized Epidemics Using Incremental Mixture Importance Sampling
Cited by 3 (2 self)
The Joint United Nations Programme on HIV/AIDS (UNAIDS) has decided to use Bayesian melding as the basis for its probabilistic projections of HIV prevalence in countries with generalized epidemics. This combines a mechanistic epidemiological model, prevalence data and expert opinion. Initially, the posterior distribution was approximated by sampling-importance-resampling, which is simple to implement, easy to interpret, transparent to users and gave acceptable results for most countries. For some countries, however, this is not computationally efficient because the posterior distribution tends to be concentrated around nonlinear ridges and can also be multimodal. We propose instead Incremental Mixture Importance Sampling (IMIS), which iteratively builds up a better importance sampling function. This retains the simplicity and transparency of sampling-importance-resampling, but is much more efficient computationally. It also leads to a simple estimator of the integrated likelihood that is the basis for Bayesian model comparison and model averaging. In simulation experiments and on real data it outperformed both sampling-importance-resampling and three publicly available generic Markov chain Monte Carlo algorithms for this problem.
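The IMIS idea described above, iteratively enriching the importance sampling function with Gaussian components centred where the current weights are largest, can be sketched in one dimension. This is a deliberately simplified toy (scalar parameter, a crude weighted-spread scale in place of the weighted covariance of nearest neighbours used in full IMIS), with a Gaussian target chosen so the answer is known.

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(4)

# Toy problem: prior N(0, 5^2), likelihood N(2, 0.5^2); the exact
# posterior is then approximately N(1.98, 0.50^2) for checking.
prior = stats.norm(0.0, 5.0)
def loglik(pts):
    return stats.norm.logpdf(pts, loc=2.0, scale=0.5)

comps = []                                    # Gaussian components added so far

def log_q(pts):
    # Importance density: equal-weight mixture of prior and added components.
    parts = np.vstack([prior.logpdf(pts)] +
                      [stats.norm.logpdf(pts, m, s) for (m, s) in comps])
    return logsumexp(parts, axis=0) - np.log(parts.shape[0])

x = prior.rvs(size=1000, random_state=rng)
for _ in range(10):
    logw = prior.logpdf(x) + loglik(x) - log_q(x)
    w = np.exp(logw - logsumexp(logw))
    # New component centred at the highest-weight point, scaled by the
    # weighted spread of the current sample.
    c = x[np.argmax(w)]
    s = max(np.sqrt(np.average((x - c) ** 2, weights=w)), 0.1)
    comps.append((c, s))
    x = np.concatenate([x, rng.normal(c, s, size=500)])

# Final weights, integrated likelihood estimate, and a posterior resample.
logw = prior.logpdf(x) + loglik(x) - log_q(x)
log_ev = logsumexp(logw) - np.log(len(x))     # integrated likelihood estimate
w = np.exp(logw - logsumexp(logw))
post = rng.choice(x, size=2000, replace=True, p=w / w.sum())
```

The final unnormalized weights give the integrated likelihood estimate mentioned in the abstract essentially for free, which is what makes IMIS convenient for model comparison and averaging.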
Particle filtering and parameter learning, 2007
Cited by 3 (0 self)
This paper provides a new approach for sequentially learning parameters and states in a wide class of state space models using particle filters. Our approach generates direct i.i.d. samples from a particle approximation to the joint posterior distribution of both parameters and latent states, avoiding the use of, and the degeneracies inherent in, sequential importance sampling. We illustrate the efficiency of our approach by sequentially learning parameters and filtering states in two models: a log-stochastic volatility model and a robust version of the Kalman filter model with t-errors in both the observation and state equations. In both cases, we show using simulated data that our approach efficiently learns the parameters and states sequentially, generating higher effective sample sizes than existing algorithms. We use the approach for two real data examples, sequentially learning in a stochastic volatility model of Nasdaq stock returns and about predictable components in a model of core inflation.
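For context, the baseline that such methods improve on is the standard bootstrap particle filter with parameters held fixed. The sketch below filters a linear-Gaussian state space model; it is not the paper's joint parameter-and-state sampling scheme, and the model and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear-Gaussian state space model (parameters treated as known):
#   x_t = 0.9 x_{t-1} + w_t,  w_t ~ N(0, 0.5^2)   (state equation)
#   y_t = x_t + v_t,          v_t ~ N(0, 1^2)     (observation equation)
T, N = 100, 2000
phi, sw, sv = 0.9, 0.5, 1.0

# Simulate a state path and observations.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = phi * x_true[t - 1] + rng.normal(0.0, sw)
y = x_true + rng.normal(0.0, sv, size=T)

# Bootstrap particle filter: propagate, weight, resample at each step.
particles = rng.normal(0.0, 1.0, size=N)
est = np.zeros(T)
for t in range(T):
    particles = phi * particles + rng.normal(0.0, sw, size=N)   # propagate
    logw = -0.5 * ((y[t] - particles) / sv) ** 2                # obs. likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = rng.choice(particles, size=N, replace=True, p=w)  # resample
    est[t] = particles.mean()                                   # filtered mean

rmse = np.sqrt(np.mean((est - x_true) ** 2))
```

Extending this loop to also carry parameter draws is where sequential importance sampling degenerates, which is the problem the paper's i.i.d.-sampling construction is designed to avoid.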