Results 1–10 of 57
Implementing approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations: A manual for the inla program
, 2008
Cited by 79 (16 self)
Structured additive regression models are perhaps the most commonly used class of models in statistical applications. The class includes, among others, (generalised) linear models, (generalised) additive models, smoothing-spline models, state-space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes, and geostatistical and geoadditive models. In this paper we consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters, and the response variables are non-Gaussian. The posterior marginals are not available in closed form due to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, both in terms of convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations ...
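As a rough illustration of the building block named in the abstract above, here is a minimal one-dimensional Laplace approximation: a Gaussian matched to the mode and curvature of a log-posterior, which INLA nests over hyperparameter values. This is a toy sketch, not the inla program's algorithm; the function name and the finite-difference Newton scheme are our own illustrative choices.

```python
# Toy Laplace approximation of a 1-D posterior (illustrative only):
# find the mode of log_post by Newton iteration with finite differences,
# then report the Gaussian approximation N(mode, -1/H) where H is the
# curvature (second derivative) of log_post at the mode.
def laplace_approx(log_post, x0=0.0, steps=50, h=1e-5):
    x = x0
    for _ in range(steps):
        g = (log_post(x + h) - log_post(x - h)) / (2 * h)        # gradient
        H = (log_post(x + h) - 2 * log_post(x) + log_post(x - h)) / h**2
        x -= g / H                                               # Newton step
    H = (log_post(x + h) - 2 * log_post(x) + log_post(x - h)) / h**2
    return x, -1.0 / H  # approximate posterior mean and variance
```

For a Gaussian log-density the approximation is exact, which gives a quick sanity check of an implementation.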
Model Choice: A Minimum Posterior Predictive Loss Approach
, 1998
Cited by 61 (10 self)
Model choice is a fundamental and much discussed activity in the analysis of data sets. Hierarchical models introducing random effects cannot be handled by classical methods. Bayesian approaches using predictive distributions can, though the formal solution, which includes Bayes factors as a special case, can be criticized. We propose a predictive criterion where the goal is good prediction of a replicate of the observed data, tempered by fidelity to the observed values. We obtain this criterion by minimizing posterior loss for a given model and then, among the models under consideration, select the one which minimizes this criterion. For a broad range of losses, the criterion emerges approximately as a form partitioned into a goodness-of-fit term and a penalty term. In the context of generalized linear mixed effects models we obtain a penalized deviance criterion comprising a Bayesian deviance measure and a penalty for model complexity. We illustrate ...
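The "goodness-of-fit term plus penalty term" partition described above can be sketched for squared-error loss, where the criterion reduces to the familiar D = G + P form (the limiting case of the Gelfand–Ghosh criterion). This toy function assumes posterior predictive replicates are already available; the names are illustrative, not from the paper.

```python
# Illustrative sketch of the minimum posterior predictive loss criterion
# under squared-error loss: D = G + P, where G sums squared deviations of
# the predictive means from the data (fit) and P sums the predictive
# variances (complexity penalty). Smaller D is preferred across models.
def posterior_predictive_loss(y_obs, y_rep):
    """y_obs: list of n observations.
    y_rep: list of S posterior predictive replicates, each of length n."""
    n, S = len(y_obs), len(y_rep)
    G = P = 0.0
    for i in range(n):
        draws = [rep[i] for rep in y_rep]
        mu = sum(draws) / S                           # predictive mean
        var = sum((d - mu) ** 2 for d in draws) / S   # predictive variance
        G += (mu - y_obs[i]) ** 2
        P += var
    return G + P
```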
Bayesian Treed Gaussian Process Models with an Application to Computer Modeling
 Journal of the American Statistical Association
, 2007
Cited by 44 (15 self)
This paper explores nonparametric and semiparametric nonstationary modeling methodologies that couple stationary Gaussian processes and (limiting) linear models with treed partitioning. Partitioning is a simple but effective method for dealing with nonstationarity. Mixing between full Gaussian processes and simple linear models can yield a more parsimonious spatial model while significantly reducing computational effort. The methodological developments and statistical computing details which make this approach efficient are described in detail. Illustrations of our model are given for both synthetic and real datasets.
Key words: recursive partitioning, nonstationary spatial model, nonparametric regression, Bayesian model averaging
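The partitioning idea in the abstract above can be shown in miniature: split the input space once and fit an independent model in each leaf. Here ordinary least-squares lines stand in for the paper's per-leaf choice between a full Gaussian process and a limiting linear model; all names, and the fixed split point, are our own illustrative simplifications.

```python
# Toy treed model: one split of a 1-D input space, a least-squares line
# fit independently in each leaf. This captures why partitioning handles
# nonstationarity: each region gets its own locally stationary model.
def fit_leaf(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) or 1e-12
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return (my - b * mx, b)  # (intercept, slope)

def treed_fit(xs, ys, split):
    left = [(x, y) for x, y in zip(xs, ys) if x <= split]
    right = [(x, y) for x, y in zip(xs, ys) if x > split]
    return {"split": split,
            "left": fit_leaf(*zip(*left)),
            "right": fit_leaf(*zip(*right))}

def treed_predict(model, x):
    a, b = model["left"] if x <= model["split"] else model["right"]
    return a + b * x
```

The real method additionally treats the tree itself as random and averages over trees (hence "Bayesian model averaging" in the key words).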
Bayesian Model Assessment and Comparison Using Cross-Validation Predictive Densities
 Neural Computation
, 2002
Cited by 27 (11 self)
In this work, we discuss practical methods for the assessment, comparison, and selection of complex hierarchical Bayesian models. A natural way to assess the goodness of a model is to estimate its future predictive capability by estimating expected utilities. Instead of just making a point estimate, it is important to obtain the distribution of the expected utility estimate, as it describes the uncertainty in the estimate. The distributions of the expected utility estimates can also be used to compare models, for example, by computing the probability of one model having a better expected utility than some other model. We propose an approach using cross-validation predictive densities to obtain expected utility estimates and the Bayesian bootstrap to obtain samples from their distributions. We also discuss the probabilistic assumptions made and the properties of two practical cross-validation methods, importance sampling and k-fold cross-validation. As illustrative examples, we use MLP neural networks and Gaussian processes (GP) with Markov chain Monte Carlo sampling in one toy problem and two challenging real-world problems.
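The two ingredients named in the abstract above can be sketched generically: k-fold cross-validation to collect per-observation utilities (here, any user-supplied fit and log-predictive functions stand in for the paper's MCMC-fitted models), and the Bayesian bootstrap, which resamples Dirichlet(1, ..., 1) weights over those utilities to get draws of the expected utility. All names are illustrative.

```python
import random

# k-fold CV: fit on k-1 folds, score the utility (e.g. log predictive
# density) of each held-out point under the fitted model.
def kfold_utilities(data, fit, log_pred, k=5):
    idx = list(range(len(data)))
    folds = [idx[i::k] for i in range(k)]
    utilities = [0.0] * len(data)
    for fold in folds:
        train = [data[i] for i in idx if i not in fold]
        model = fit(train)
        for i in fold:
            utilities[i] = log_pred(model, data[i])
    return utilities

# Bayesian bootstrap: each draw re-weights the observed utilities by
# normalized Exponential(1) variates, i.e. flat-Dirichlet weights.
def bayesian_bootstrap(utilities, draws=1000, seed=0):
    rng = random.Random(seed)
    n, samples = len(utilities), []
    for _ in range(draws):
        w = [rng.expovariate(1.0) for _ in range(n)]
        tot = sum(w)
        samples.append(sum(wi / tot * u for wi, u in zip(w, utilities)))
    return samples  # draws from the expected-utility distribution
```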
Bayesian inference for generalized linear mixed models of portfolio credit risk
 Journal of Empirical Finance
, 2007
Cited by 26 (2 self)
The aims of this paper are threefold. First, we highlight the usefulness of generalized linear mixed models (GLMMs) in the modelling of portfolio credit default risk. The GLMM setting allows for a flexible specification of the systematic portfolio risk in terms of observed fixed effects and unobserved random effects, in order to explain the phenomena of default dependence and time-inhomogeneity in empirical default data. Second, we show that computational Bayesian techniques such as the Gibbs sampler can be successfully applied to fit models with serially correlated random effects, which are special instances of state space models. Third, we provide an empirical study using Standard &amp; Poor’s data on US firms. A model incorporating rating category and sector effects and a macroeconomic proxy variable for the state of the economy suggests the presence of a residual, cyclical, latent component in the systematic risk.
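The model class in the abstract above can be illustrated from the generative side: a probit GLMM in which each period's default probability depends on a fixed intercept plus a serially correlated (AR(1)) latent random effect, which induces both default dependence within a period and persistence across periods. All parameter values here are hypothetical, chosen only to make the simulation run.

```python
import math
import random

# Illustrative simulation of portfolio default counts under a probit GLMM
# with an AR(1) latent systematic risk factor b_t (hypothetical parameters):
#   b_t = phi * b_{t-1} + eps_t,  eps_t ~ N(0, sigma^2)
#   p_t = Phi(intercept + b_t)    conditional one-period default probability
# Defaults are conditionally independent Bernoulli(p_t) across firms.
def simulate_defaults(n_years=10, n_firms=1000, intercept=-2.5,
                      phi=0.8, sigma=0.3, seed=1):
    rng = random.Random(seed)
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    b, counts = 0.0, []
    for _ in range(n_years):
        b = phi * b + rng.gauss(0.0, sigma)   # serially correlated effect
        p = Phi(intercept + b)
        counts.append(sum(rng.random() < p for _ in range(n_firms)))
    return counts
```

Fitting the model (the paper's second aim) would run a Gibbs sampler over the latent path b_1, ..., b_T and the parameters, which this sketch does not attempt.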
Bayesian variable selection in multinomial models with application to spectral data and DNA microarrays
, 2002
Cited by 26 (10 self)
Summary. Here we focus on discrimination problems where the number of predictors substantially exceeds the sample size and we propose a Bayesian variable selection approach to multinomial probit models. Our method makes use of mixture priors and Markov chain Monte Carlo techniques to select sets of variables that differ among the classes. We apply our methodology to a problem in functional genomics using gene expression profiling data. The aim of the analysis is to identify molecular signatures that characterize two different stages of rheumatoid arthritis.
Bayesian Approach for Neural Networks: Review and Case Studies
 Neural Networks
, 2001
Cited by 18 (9 self)
We give a short review of the Bayesian approach for neural network learning and demonstrate the advantages of the approach in three real applications. We discuss the Bayesian approach with emphasis on the role of prior knowledge in Bayesian models and in classical error minimization approaches. The generalization capability of a statistical model, classical or Bayesian, is ultimately based on the prior assumptions. The Bayesian approach permits propagation of uncertainty in quantities which are unknown to other assumptions in the model, which may be more generally valid or easier to guess in the problem. The case problems studied in this paper include a regression problem, a classification problem, and an inverse problem. In the most thoroughly analyzed regression problem, the best models were those with less restrictive priors. This emphasizes the major advantage of the Bayesian approach: we are not forced to guess attributes that are unknown, such as the number of degrees of freedom in the model, the nonlinearity of the model with respect to each input variable, or the exact form of the distribution of the model residuals.
Diagnostic Checks for Discrete-Data Regression Models Using Posterior Predictive Simulations
, 1997
Cited by 12 (8 self)
Model checking with discrete-data regressions can be difficult because the usual methods, such as residual plots, have complicated reference distributions that depend on the parameters in the model. Posterior predictive checks have been proposed as a Bayesian way to average the results of goodness-of-fit tests in the presence of uncertainty in the estimation of the parameters. We try this approach using a variety of discrepancy variables for generalized linear models fit to a historical data set on behavioral learning. We then discuss the general applicability of our findings in the context of a recent applied example on which we have worked. We find that the following discrepancy variables work well, in the sense of being easy to interpret and sensitive to important model failures: (a) structured displays of the entire data set, (b) general discrepancy variables based on plots of binned or smoothed residuals versus predictors, and (c) specific discrepancy variables created based on the particul...
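Item (b) in the abstract's list can be sketched concretely: sort observations by fitted value, cut them into bins, and report the mean residual per bin. Bin means far from zero flag systematic misfit; in a posterior predictive check the same statistic would be recomputed on replicated data to calibrate what "far from zero" means. The function and its defaults are illustrative, not the paper's code.

```python
# Illustrative binned-residual discrepancy for a discrete-data regression:
# group observations by fitted probability and average the raw residuals
# (observed - fitted) within each bin. Returns (mean fitted, mean residual)
# per bin, the quantities one would plot against each other.
def binned_residuals(fitted, observed, n_bins=5):
    pairs = sorted(zip(fitted, observed))       # sort by fitted value
    size = max(1, len(pairs) // n_bins)
    bins = []
    for start in range(0, len(pairs), size):
        chunk = pairs[start:start + size]
        mean_fit = sum(f for f, _ in chunk) / len(chunk)
        mean_res = sum(o - f for f, o in chunk) / len(chunk)
        bins.append((mean_fit, mean_res))
    return bins
```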
Inflation-gap persistence in the U.S.
 American Economic Journal: Macroeconomics
, 2010
Cited by 11 (0 self)
We use Bayesian methods to estimate two models of post-WWII U.S. inflation rates with drifting stochastic volatility and drifting coefficients. One model is univariate, the other a multivariate autoregression. We define the inflation gap as the deviation of inflation from a pure random walk component of inflation and use both models to study changes over time in the persistence of the inflation gap, measured in terms of short- to medium-term predictability. We present evidence that our measure of inflation-gap persistence increased until Volcker brought mean inflation down in the early 1980s and that it then fell during the chairmanships of Volcker and Greenspan. Stronger evidence for movements in inflation-gap persistence emerges from the VAR than from the univariate model. We interpret these changes in terms of a simple dynamic new Keynesian model that allows us to distinguish altered monetary policy rules from altered private sector parameters.