Results 11–20 of 60
An empirical comparison of methods for computing Bayes factors in generalized linear mixed models
 Journal of Computational and Graphical Statistics
, 2005
Abstract

Cited by 7 (0 self)
Generalized linear mixed models (GLMMs) are used in situations where a number of characteristics (covariates) affect a nonnormal response variable and the responses are correlated due to the existence of clusters or groups. For example, the responses in biological applications may be correlated due to common genetic or environmental factors. The clustering or grouping is addressed by introducing cluster effects to the model; the associated parameters are often treated as random effects parameters. In many applications, the magnitudes of the variance components corresponding to one or more of the sets of random effects parameters are of interest, especially the point null hypothesis that one or more of the variance components is zero. A Bayesian approach to testing this hypothesis is to use Bayes factors comparing the models with and without the random effects in question; this work reviews a number of approaches for estimating the Bayes factor. We perform a comparative study of the different approaches to computing Bayes factors for GLMMs by applying them to two different datasets. The first example employs a probit regression model with a single variance component, applied to data from a natural selection study on turtles. The second example uses a disease mapping model from epidemiology: a Poisson regression model with two variance components. Bridge sampling and a recent improvement known as warp bridge sampling, importance sampling, and Chib's marginal likelihood calculation are all found to be effective. The relative advantages of the different approaches are discussed.
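The estimators compared in this paper all target the same marginal likelihood. A minimal, hedged sketch of the importance-sampling route on a toy conjugate normal model (not the paper's GLMMs), chosen so the Monte Carlo estimate can be checked against the closed form:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy conjugate stand-in for the paper's GLMMs: y_i ~ N(mu, 1), prior
# mu ~ N(0, 1). The marginal likelihood has a closed form, which lets us
# check the Monte Carlo estimate. All values here are illustrative.
y = rng.normal(0.5, 1.0, size=20)
n, S, Q = len(y), y.sum(), (y ** 2).sum()

def log_lik(mu):
    # log p(y | mu) for a vector of mu values
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * ((y[None, :] - mu[:, None]) ** 2).sum(axis=1))

# Importance sampling with the prior as proposal: p(y) = E_prior[p(y | mu)].
draws = rng.normal(0.0, 1.0, size=200_000)
log_ml_is = np.logaddexp.reduce(log_lik(draws)) - np.log(draws.size)

# Closed-form log marginal likelihood for this conjugate pair.
log_ml_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
                - 0.5 * (Q - S ** 2 / (n + 1)))

# Bayes factor against the point-null model mu = 0 (the analogue of
# dropping a random effect).
log_ml_null = -0.5 * n * np.log(2 * np.pi) - 0.5 * Q
log_bf = log_ml_is - log_ml_null
```

Bridge and warp-bridge sampling refine this idea by combining draws from two distributions, while Chib's method instead evaluates Bayes' theorem at a single high-posterior point.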
Bayesian Validation of a Computer Model for Vehicle Collision
Abstract

Cited by 6 (3 self)
A key question in the evaluation of computer models is "Does the computer model adequately represent reality?" A complete Bayesian approach to answering this question is developed for the challenging practical context in which the computer model (and reality) produce functional data. The methodology is particularly suited to treating the major issues associated with the validation process: quantifying multiple sources of error and uncertainty in computer models; combining multiple sources of information; and being able to adapt to different, but related, scenarios through hierarchical modeling. It is also shown how one can formally test whether the computer model reproduces reality. The approach is illustrated through the study of a computer model developed to model vehicle crashworthiness.
Variational Bayesian Analysis for Hidden Markov Models
Abstract

Cited by 6 (1 self)
The variational approach to Bayesian inference enables simultaneous estimation of model parameters and model complexity. An interesting feature of this approach is that it also appears to lead to an automatic choice of model complexity. Empirical results from the analysis of hidden Markov models with Gaussian observation densities illustrate this. If the variational algorithm is initialised with a large number of hidden states, redundant states are eliminated as the method converges to a solution, thereby leading to an automatic selection of the number of hidden states. In addition, through the use of a variational approximation, the Deviance Information Criterion (DIC) for Bayesian model selection can be extended to the hidden Markov model framework. Calculation of the DIC provides a further tool for model selection which can be used in conjunction with the variational approach.
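The DIC mentioned at the end is computable from posterior draws alone: the posterior mean deviance plus the effective number of parameters p_D. A minimal sketch on a toy conjugate normal model (not an HMM; model and values are illustrative), where p_D should land near n/(n+1) ≈ 1 for the single well-identified parameter:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model in place of the HMM: y_i ~ N(mu, 1) with prior mu ~ N(0, 1).
y = rng.normal(1.0, 1.0, size=20)
n, S = len(y), y.sum()

# Exact posterior for this conjugate pair: mu | y ~ N(S/(n+1), 1/(n+1)).
post_mean, post_sd = S / (n + 1), np.sqrt(1.0 / (n + 1))
draws = rng.normal(post_mean, post_sd, size=100_000)

def deviance(mu):
    # D(mu) = -2 log p(y | mu)
    mu = np.atleast_1d(mu)
    return n * np.log(2 * np.pi) + ((y[None, :] - mu[:, None]) ** 2).sum(axis=1)

mean_dev = deviance(draws).mean()      # posterior mean deviance
dev_at_mean = deviance(post_mean)[0]   # deviance at the posterior mean
p_d = mean_dev - dev_at_mean           # effective number of parameters
dic = mean_dev + p_d
```

In the variational HMM setting, the same two deviance evaluations are taken under the variational posterior rather than exact posterior draws.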
Particle Learning for General Mixtures
Abstract

Cited by 5 (2 self)
This paper develops efficient sequential learning methods for the estimation of general mixture models. The approach is distinguished from alternative particle filtering methods in two major ways. First, each iteration begins by resampling particles according to posterior predictive probability, leading to a more efficient set for propagation. Second, each particle tracks only the sufficient information for the latent mixture components, thus leading to reduced-dimensional inference. In addition, we describe how the approach applies to more general mixture models of current interest in the literature; it is hoped that this will inspire a greater number of researchers to adopt sequential Monte Carlo methods for fitting their sophisticated mixture-based models. Finally, we show that this particle learning approach leads to straightforward tools for marginal likelihood calculation and posterior cluster allocation. Specific versions of the algorithm are derived for standard density estimation applications based on both finite mixture models and Dirichlet process mixture models, as well as for the less common settings of latent feature selection through an Indian Buffet process and dependent distribution tracking through a probit stick-breaking model. Three simulation examples are presented: density estimation and model selection for a finite mixture model; a simulation study for Dirichlet process density estimation with as many as 12,500 observations of 25-dimensional data; and an example of nonparametric mixture regression that requires learning truncated approximations to the infinite random mixing distribution.
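The resample-then-propagate pattern is easiest to see outside the mixture setting. Below is a hedged sketch on a linear-Gaussian local-level model (a stand-in, not the paper's mixture models; all parameter values are illustrative), where both the predictive resampling weights and the propagation density are available in closed form, so the result can be checked against the exact Kalman filter:

```python
import numpy as np

rng = np.random.default_rng(3)

# Local-level model: x_t = x_{t-1} + N(0, q), y_t = x_t + N(0, r), x_0 ~ N(0, 1)
q, r, T, N = 0.1, 0.5, 100, 5000
states = np.zeros(T)
prev = rng.normal(0.0, 1.0)            # x_0
for t in range(T):
    prev = prev + rng.normal(0.0, np.sqrt(q))
    states[t] = prev
y = states + rng.normal(0.0, np.sqrt(r), T)

particles = rng.normal(0.0, 1.0, N)    # draws of x_0
pf_means = []
for t in range(T):
    # 1) resample by the posterior predictive: p(y_t | x_{t-1}) = N(y_t; x_{t-1}, q+r)
    log_w = -0.5 * (y[t] - particles) ** 2 / (q + r)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]
    # 2) propagate from p(x_t | x_{t-1}, y_t), which is Gaussian here
    m = particles + (q / (q + r)) * (y[t] - particles)
    particles = m + rng.normal(0.0, np.sqrt(q * r / (q + r)), N)
    pf_means.append(particles.mean())

# Kalman filter reference (exact for this model).
km, kp, kf_means = 0.0, 1.0, []
for t in range(T):
    kp += q
    gain = kp / (kp + r)
    km = km + gain * (y[t] - km)
    kp *= (1.0 - gain)
    kf_means.append(km)
```

In the paper's mixture setting, the Gaussian state above is replaced by per-particle sufficient statistics for the mixture components, and the predictive weight becomes the mixture's posterior predictive density of the new observation.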
Particle filtering and parameter learning
, 2007
Abstract

Cited by 4 (0 self)
This paper provides a new approach for sequentially learning parameters and states in a wide class of state space models using particle filters. Our approach generates direct i.i.d. samples from a particle approximation to the joint posterior distribution of both parameters and latent states, avoiding the use of, and the degeneracies inherent in, sequential importance sampling. We illustrate the efficiency of our approach by sequentially learning parameters and filtering states in two models: a log-stochastic volatility model and a robust version of the Kalman filter model with t-errors in both the observation and state equations. In both cases, we show using simulated data that our approach efficiently learns the parameters and states sequentially, generating higher effective sample sizes than existing algorithms. We apply the approach to two real data examples: sequential learning in a stochastic volatility model of Nasdaq stock returns, and learning about predictable components in a model of core inflation.
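For contrast with the direct-sampling approach described here, a baseline bootstrap (propagate-then-weight) particle filter with known parameters takes only a few lines; tracking the effective sample size makes visible the weight imbalance that becomes fatal once static parameters are appended to the state. The model and parameter values below are illustrative assumptions, not the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(4)

# Local-level model with KNOWN parameters (illustrative values):
#   x_t = x_{t-1} + N(0, q),  y_t = x_t + N(0, r),  x_0 ~ N(0, 1)
q, r, T, N = 0.1, 0.5, 100, 5000
states = np.zeros(T)
prev = rng.normal(0.0, 1.0)
for t in range(T):
    prev = prev + rng.normal(0.0, np.sqrt(q))
    states[t] = prev
y = states + rng.normal(0.0, np.sqrt(r), T)

particles = rng.normal(0.0, 1.0, N)    # draws of x_0
pf_means, ess_history = [], []
for t in range(T):
    # propagate from the transition, then weight by the likelihood
    particles = particles + rng.normal(0.0, np.sqrt(q), N)
    log_w = -0.5 * (y[t] - particles) ** 2 / r
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    pf_means.append(np.sum(w * particles))
    ess_history.append(1.0 / np.sum(w ** 2))   # effective sample size
    # multinomial resampling; with static parameters in the state, this
    # step is what progressively depletes the particle set
    particles = particles[rng.choice(N, size=N, p=w)]
```

With the transition noise regenerating state diversity each step, the ESS stays healthy here; a static parameter has no such regeneration, which is the degeneracy the paper's direct i.i.d. sampling avoids.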
Hyper-g priors for generalized linear models
 Bayesian Analysis
, 2011
Abstract

Cited by 4 (0 self)
We develop an extension of the classical Zellner's g-prior to generalized linear models. The prior on the hyperparameter g is handled in a flexible way, so that any continuous proper hyperprior f(g) can be used, giving rise to a large class of hyper-g priors. Connections with the literature are described in detail. A fast and accurate integrated Laplace approximation of the marginal likelihood makes inference in large model spaces feasible. For posterior parameter estimation we propose an efficient and tuning-free Metropolis-Hastings sampler. The methodology is illustrated with variable selection and automatic covariate transformation in the Pima Indians diabetes data set.
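In the classical linear-model case that this paper generalizes, the effect of Zellner's g-prior is pure shrinkage: conditional on g and the error variance, the posterior mean of the coefficients is the OLS estimate scaled by g/(1+g). A small sketch verifying that identity numerically (the design, coefficients, and g value are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Classical linear-model case of Zellner's g-prior (the paper extends this
# to GLMs): beta ~ N(0, g * sigma^2 * (X'X)^{-1}).
n, p, g = 100, 3, 50.0
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(size=n)

XtX = X.T @ X
beta_ols = np.linalg.solve(XtX, X.T @ y)

# Conditional posterior mean under the g-prior: a ridge-type formula ...
beta_post = np.linalg.solve(XtX + XtX / g, X.T @ y)

# ... which collapses to simple shrinkage of OLS by g / (1 + g).
shrunk = (g / (1 + g)) * beta_ols
```

Placing a hyperprior f(g) on g, as the paper does, mixes over these shrinkage factors rather than fixing one.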
α-Stable Limit Laws for Harmonic Mean Estimators of Marginal Likelihoods
, 2010
Abstract

Cited by 4 (2 self)
The task of calculating marginal likelihoods arises in a wide array of statistical inference problems, including the evaluation of Bayes factors for model selection and hypothesis testing. Although Markov chain Monte Carlo methods have simplified many posterior calculations needed for practical Bayesian analysis, the evaluation of marginal likelihoods remains difficult. We consider the behavior of the well-known harmonic mean estimator (Newton and Raftery, 1994) of the marginal likelihood, which converges almost surely but may have infinite variance and so may not obey a central limit theorem.
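The estimator under study is simple to write down: since E_posterior[1/p(y|θ)] = 1/p(y), one averages inverse likelihoods over posterior draws. A hedged sketch on a toy conjugate normal model (illustrative, not from the paper) where the exact answer is known; the heavy right tail of 1/p(y|θ) under the posterior is exactly what makes single runs unreliable:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy conjugate model: y_i ~ N(mu, 1), prior mu ~ N(0, 1), so the exact
# log marginal likelihood is available for comparison.
y = rng.normal(0.5, 1.0, size=20)
n, S, Q = len(y), y.sum(), (y ** 2).sum()

log_ml_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
                - 0.5 * (Q - S ** 2 / (n + 1)))

# Harmonic mean estimator: 1 / p(y) = E_posterior[ 1 / p(y | mu) ].
post_mean, post_sd = S / (n + 1), np.sqrt(1.0 / (n + 1))
draws = rng.normal(post_mean, post_sd, size=100_000)
neg_log_lik = (0.5 * n * np.log(2 * np.pi)
               + 0.5 * ((y[None, :] - draws[:, None]) ** 2).sum(axis=1))
log_inv_ml = np.logaddexp.reduce(neg_log_lik) - np.log(draws.size)
log_ml_hme = -log_inv_ml

# The estimator is consistent, but 1/p(y|mu) has a heavy right tail under
# the posterior, so its variance can be infinite: typical runs miss the
# tail and overestimate the marginal likelihood, while rare tail draws
# cause large downward jumps. This instability is the paper's subject.
```

Comparing `log_ml_hme` to `log_ml_exact` across seeds illustrates the slow, α-stable-type convergence the paper characterizes.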
Monte Carlo methods for adaptive sparse approximations of time-series
, 2007
Bayesian Model Comparison with the g-Prior
Abstract

Cited by 3 (1 self)
Abstract—Model comparison and selection is an important problem in many model-based signal processing applications. Often, very simple information criteria such as the Akaike information criterion or the Bayesian information criterion are used despite their shortcomings. Compared to these methods, Djuric's asymptotic MAP rule was an improvement, and in this paper we extend the work by Djuric in several ways. Specifically, we consider the elicitation of proper prior distributions, treat the case of real- and complex-valued data simultaneously in a Bayesian framework similar to that considered by Djuric, and develop new model selection rules for a regression model containing both linear and nonlinear parameters. Moreover, we use this framework to give a new interpretation of the popular information criteria and relate their performance to the signal-to-noise ratio of the data. By use of simulations, we also demonstrate that our proposed model comparison and selection rules outperform the traditional information criteria both in terms of detecting the true model and in terms of predicting unobserved data. The simulation code is available online. Index Terms—Bayesian model comparison, Zellner's g-prior, AIC, BIC, asymptotic MAP.
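The information criteria this paper takes as its baseline reduce, for Gaussian regression, to a penalized log residual variance. A minimal sketch comparing polynomial orders with AIC and BIC on synthetic quadratic data (all data-generating choices here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data from a quadratic; AIC/BIC should favor order 2 or above,
# never an underfit order.
n = 200
x = np.linspace(-2, 2, n)
y = 1.0 + 2.0 * x + 3.0 * x ** 2 + rng.normal(0, 0.5, n)

aic, bic = {}, {}
for order in range(6):
    coeffs = np.polyfit(x, y, order)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = order + 1                         # number of fitted coefficients
    # Gaussian-likelihood form, dropping constants common to all models:
    aic[order] = n * np.log(rss / n) + 2 * k
    bic[order] = n * np.log(rss / n) + k * np.log(n)

best_aic = min(aic, key=aic.get)
best_bic = min(bic, key=bic.get)
```

The paper's point is that the prior-based rules it develops calibrate the penalty to the data's signal-to-noise ratio, rather than fixing it at 2k or k·log(n).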
A Bayesian Joinpoint Regression model with an unknown number of break points. http://www.uv.es/mamtnez/preprints/joinpoint.pdf
, 2011
Abstract

Cited by 3 (0 self)
Abstract: Joinpoint regression is used to determine the number of segments needed to adequately explain the relationship between two variables. This methodology can be widely applied to real problems, but we focus on epidemiological data, the main goal being to uncover changes in the mortality time trend of a specific disease under study. Traditionally, joinpoint regression problems have paid little or no attention to the quantification of uncertainty in the estimation of the number of change points. In this context, we found a satisfactory way to handle the problem within the Bayesian methodology. Nevertheless, this novel approach involves significant difficulties (both theoretical and practical), since it implicitly entails a model selection (or testing) problem. In this study, we face these challenges through i) a novel reparameterization of the model; ii) a careful definition of the prior distributions used; and iii) an encompassing approach which allows the use of MCMC simulation-based techniques to derive the results. The resulting methodology is flexible enough to allow mortality counts (for epidemiological applications) to be modeled as Poisson variables. The methodology is applied to the study of annual breast cancer mortality during the period 1980–2007 in Castellón, a province in Spain.
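With the number of joinpoints fixed at one, the non-Bayesian core of the problem is a profiled least-squares fit over candidate break locations; the paper's contribution is to place priors over both the locations and their number. A hedged sketch of the fixed-dimension version (Gaussian errors rather than the paper's Poisson counts; all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)

# One-joinpoint data: slope changes from 0.5 to 2.5 at x = 5.
n = 200
x = np.linspace(0, 10, n)
true_break = 5.0
y = (1.0 + 0.5 * x + 2.0 * np.clip(x - true_break, 0, None)
     + rng.normal(0, 0.3, n))

# Profile the break location over a grid; for each candidate, fit a
# continuous piecewise-linear model by least squares.
best_rss, best_break = np.inf, None
for c in np.linspace(1, 9, 161):
    Z = np.column_stack([np.ones(n), x, np.clip(x - c, 0, None)])
    beta = np.linalg.lstsq(Z, y, rcond=None)[0]
    rss = np.sum((y - Z @ beta) ** 2)
    if rss < best_rss:
        best_rss, best_break = rss, c
```

The Bayesian treatment replaces the grid minimum with a posterior over break locations, and its model-selection layer quantifies uncertainty in the number of breaks, which this sketch holds fixed.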