Results 1–9 of 9
Implementing approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations: A manual for the inla program
, 2008
"... Structured additive regression models are perhaps the most commonly used class of models in statistical applications. It includes, among others, (generalised) linear models, (generalised) additive models, smoothingspline models, statespace models, semiparametric regression, spatial and spatiotemp ..."
Abstract

Cited by 91 (17 self)
Structured additive regression models are perhaps the most commonly used class of models in statistical applications. The class includes, among others, (generalised) linear models, (generalised) additive models, smoothing-spline models, state-space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes, geostatistical and geoadditive models. In this paper we consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form due to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, both in terms of convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations ...
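The building block the abstract refers to can be illustrated with a toy Laplace approximation: a single latent variable with a Gaussian prior and a non-Gaussian (Poisson) likelihood, where the posterior is approximated by a Gaussian centred at its mode. This is a minimal sketch of the idea only, not the nested scheme of the paper; the model and function name are illustrative.

```python
import math

def laplace_approx(y, prior_var=1.0, iters=50):
    """Laplace approximation to p(x | y) for y ~ Poisson(exp(x)), x ~ N(0, prior_var).

    Finds the posterior mode by Newton's method and returns (mode, variance)
    of the Gaussian approximation N(mode, -1/hessian_at_mode).
    """
    x = 0.0
    for _ in range(iters):
        grad = y - math.exp(x) - x / prior_var   # d/dx log p(x | y)
        hess = -math.exp(x) - 1.0 / prior_var    # d^2/dx^2 log p(x | y), always < 0
        x -= grad / hess                          # Newton step toward the mode
    return x, -1.0 / hess

mode, var = laplace_approx(y=7)  # approximate posterior of the log-rate given count 7
```

Because the log-posterior here is strictly concave, the Newton iteration converges quickly and the Gaussian approximation is typically very accurate, which is the effect INLA exploits at scale.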
Fast Bayesian Inference in Dirichlet Process Mixture Models
, 2008
"... There has been increasing interest in applying Bayesian nonparametric methods in large samples and high dimensions. As Markov chain Monte Carlo (MCMC) algorithms are often infeasible, there is a pressing need for much faster algorithms. This article proposes a fast approach for inference in Dirichle ..."
Abstract

Cited by 5 (0 self)
There has been increasing interest in applying Bayesian nonparametric methods in large samples and high dimensions. As Markov chain Monte Carlo (MCMC) algorithms are often infeasible, there is a pressing need for much faster algorithms. This article proposes a fast approach for inference in Dirichlet process mixture (DPM) models. Viewing the partitioning of subjects into clusters as a model selection problem, we propose a sequential greedy search algorithm for selecting the partition. Then, when conjugate priors are chosen, the resulting posterior conditional on the selected partition is available in closed form. This approach allows testing of parametric models versus nonparametric alternatives based on Bayes factors. We evaluate the approach using simulation studies and compare it with four other fast nonparametric methods in the literature. We apply the proposed approach to three datasets, including one from a large epidemiologic study. Matlab code for the simulation and data analyses using the proposed approach is available online in the supplemental materials.
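The sequential greedy idea can be sketched for the simplest conjugate case: Normal observations with known variance and a Normal prior on each cluster mean, where each point joins whichever existing cluster (or a new one) maximizes a CRP-weighted predictive score. This is a toy sketch under those assumptions, not the authors' implementation; the parameter names are illustrative.

```python
import math

def log_norm_pdf(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def greedy_dpm_partition(data, alpha=1.0, sigma2=1.0, tau2=1.0):
    """One greedy pass assigning each point to a cluster in a DPM of Normals.

    Likelihood: x ~ N(mu_k, sigma2); conjugate prior: mu_k ~ N(0, tau2).
    Existing cluster k scores log(n_k) + log predictive; a new cluster
    scores log(alpha) + log prior predictive (Chinese-restaurant weights).
    """
    clusters = []  # each cluster is a list of its points
    for x in data:
        best, best_score = None, None
        for k, members in enumerate(clusters):
            n, s = len(members), sum(members)
            post_var = 1.0 / (1.0 / tau2 + n / sigma2)   # posterior var of mu_k
            post_mean = post_var * (s / sigma2)           # posterior mean of mu_k
            score = math.log(n) + log_norm_pdf(x, post_mean, post_var + sigma2)
            if best_score is None or score > best_score:
                best, best_score = k, score
        new_score = math.log(alpha) + log_norm_pdf(x, 0.0, tau2 + sigma2)
        if best_score is None or new_score > best_score:
            clusters.append([x])     # open a new cluster
        else:
            clusters[best].append(x)
    return clusters

parts = greedy_dpm_partition([0.1, -0.2, 5.0, 5.3, 0.0])  # two well-separated groups
```

Because every score is available in closed form under conjugacy, the whole pass is a single sweep over the data, which is what makes this orders of magnitude faster than MCMC over partitions.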
Residuals and Outliers in Bayesian Random Effects Models
, 1994
"... Common repeated measures random effects models contain two random components, a random person effect and timevarying errors. An observation can be an outlier due to either an extreme person effect or an extreme time varying error. Outlier statistics are presented that can distinguish between these ..."
Abstract

Cited by 2 (0 self)
Common repeated measures random effects models contain two random components, a random person effect and time-varying errors. An observation can be an outlier due to either an extreme person effect or an extreme time-varying error. Outlier statistics are presented that can distinguish between these types of outliers. For each person there is one statistic per observation, plus one statistic per random effect. Methodology is developed to reduce the explosion of statistics to two summary outlier statistics per person: one for the random effects and one for the time-varying errors. If either of these screening statistics is large, then individual statistics for each observation or random effect can be inspected. Multivariate, targeted outlier statistics and goodness-of-fit tests are also developed. Distribution theory is given, along with some geometric intuition. Key Words: Bayesian Data Analysis, Goodness-of-Fit, Hierarchical Models, Observed Errors, Repeated Measures.
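The two-summary-statistics-per-person idea can be illustrated for the simplest model y_ij = b_i + e_ij: one standardized statistic for the person effect and one chi-square-type statistic pooling the standardized time-varying residuals. This is an illustrative sketch of the screening scheme, not the paper's exact statistics.

```python
import math

def person_outlier_stats(y, person_effect, effect_var, error_var):
    """Two per-person screening statistics for y_ij = b_i + e_ij (toy version).

    effect_stat: standardized person effect; large |value| flags an outlying person.
    error_stat:  sum of squared standardized residuals; roughly chi-square with
                 len(y) degrees of freedom when no observation is an outlier.
    """
    effect_stat = person_effect / math.sqrt(effect_var)
    residuals = [(obs - person_effect) / math.sqrt(error_var) for obs in y]
    error_stat = sum(r * r for r in residuals)
    return effect_stat, error_stat

# A person whose observations sit close to their own effect: small error_stat.
stats = person_outlier_stats([1.0, 1.2, 0.8], person_effect=1.0,
                             effect_var=1.0, error_var=0.04)
```

If either summary statistic is large, one would then drill down to the individual per-observation statistics, as the abstract describes.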
Pediatric Pain, Predictive Inference and Sensitivity Analysis
 Evaluation Review
, 1994
"... The understanding, prevention and treatment of pain is of great importance to medical science. Children were asked to immerse their hands in cold water until they were unable to tolerate the pain of the cold. The length of time that they kept their hands immersed is a measure of pain tolerance. Two ..."
Abstract

Cited by 2 (2 self)
The understanding, prevention and treatment of pain is of great importance to medical science. Children were asked to immerse their hands in cold water until they were unable to tolerate the pain of the cold. The length of time that they kept their hands immersed is a measure of pain tolerance. Two factors were studied. One factor is a child's Coping Style (CS) with the pain (ATTENDERS pay attention to the pain, DISTRACTERS think of other things) and was assessed at a baseline trial. The other factor is Treatment (T), one of three counseling interventions (a NULL intervention, counseling to ATTEND, or counseling to DISTRACT), randomly applied prior to the response. The covariate is a baseline measure of pain tolerance prior to the intervention. Distracters taught to distract tolerated the pain much better than any other group. No strategy improved attenders' pain tolerance. This paper analyzes these data from a predictive Bayesian viewpoint. The assumption of constant variance ...
Measures of Surprise in Bayesian Analysis
 Duke University
, 1997
"... Measures of surprise refer to quantifications of the degree of incompatibility of data with some hypothesized model H 0 without any reference to alternative models. Traditional measures of surprise have been the pvalues, which are however known to grossly overestimate the evidence against H 0 . Str ..."
Abstract

Cited by 2 (2 self)
Measures of surprise refer to quantifications of the degree of incompatibility of data with some hypothesized model H0, without any reference to alternative models. Traditional measures of surprise have been the p-values, which are however known to grossly overestimate the evidence against H0. Strict Bayesian analysis calls for an explicit specification of all possible alternatives to H0, so Bayesians have not made routine use of measures of surprise. In this report we critically review the proposals that have been made in this regard. We propose new modifications, stress the connections with robust Bayesian analysis, and discuss the choice of suitable predictive distributions which allow surprise measures to play their intended role in the presence of nuisance parameters. We recommend either the use of appropriate likelihood-ratio type measures or else the careful calibration of p-values so that they are closer to Bayesian answers. Key words and phrases: Bayes factors; Bayesian p-values; Bayesian robustness; Conditioning; Model checking; Predictive distributions.
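One widely cited calibration in the spirit of this line of work maps a p-value to a lower bound on the Bayes factor in favour of H0, B(p) = -e p log p for p < 1/e, and hence to a minimum posterior probability of H0 under equal prior odds. The sketch below illustrates that calibration; it is not claimed to be the paper's own proposal.

```python
import math

def pvalue_calibration(p):
    """Calibrate a p-value to a lower bound on the Bayes factor for H0.

    B(p) = -e * p * log(p), valid for 0 < p < 1/e, and the corresponding
    posterior probability of H0 assuming prior odds of 1.
    """
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("calibration is valid only for 0 < p < 1/e")
    bayes_factor = -math.e * p * math.log(p)            # lower bound on B_01
    post_prob_h0 = bayes_factor / (1.0 + bayes_factor)  # equal prior odds
    return bayes_factor, post_prob_h0

bf, post = pvalue_calibration(0.05)
# A p-value of 0.05 corresponds to P(H0 | data) of roughly 0.29 at best,
# illustrating how p-values overstate the evidence against H0.
```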
Heterogeneity and model uncertainty in Bayesian regression models
, 1999
"... Data heterogeneity appears when the sample comes from at least two different populations. We analyze three types of situations. In the first and simplest case the majority of the data come from a central model and a few isolated observations come from a contaminating distribution. The data from the ..."
Abstract

Cited by 1 (0 self)
Data heterogeneity appears when the sample comes from at least two different populations. We analyze three types of situations. In the first and simplest case, the majority of the data come from a central model and a few isolated observations come from a contaminating distribution. The data from the contaminating distribution are called outliers and have been studied in depth in the statistical literature. In the second case we still have a central model, but the heterogeneous data may appear in clusters of outliers which mask each other. This is the multiple outlier problem, which is much more difficult to handle and has been analyzed and understood only in the last few years. The few Bayesian contributions to this problem are presented. In the third case we do not have a central model; instead, different groups of data have been generated by different models. For multivariate normal data this problem has been analyzed by mixture models under the name of cluster analysis, but a challenging area of research is to develop a general methodology for applying this multiple-model approach to other statistical problems. Heterogeneity in general implies an increase in the uncertainty of predictions, and we present a procedure to measure this effect.
Investigating posterior contour probabilities using INLA: A case study on recurrence of bladder tumours
, 2012
Abstract
, 2007
"... Structured additive regression models are perhaps the most commonly used class of models in statistical applications. It includes, among others, (generalised) linear models, (generalised) additive models, smoothingspline models, statespace models, semiparametric regression, spatial and spatiotemp ..."
Abstract
Structured additive regression models are perhaps the most commonly used class of models in statistical applications. The class includes, among others, (generalised) linear models, (generalised) additive models, smoothing-spline models, state-space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes, and geostatistical models. In this paper we consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form due to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, both in terms of convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where MCMC algorithms need hours and days to run, our approximations ...