Results 1 to 10 of 14
Bayesian measures of model complexity and fit
Journal of the Royal Statistical Society, Series B, 2002
Abstract

Cited by 435 (4 self)
[Read before The Royal Statistical Society at a meeting organized by the Research
A Monte Carlo approach to nonnormal and nonlinear state-space modeling
Journal of the American Statistical Association, 1992
Inference for nonconjugate Bayesian models using the Gibbs sampler
Canadian Journal of Statistics, 1991
Cited by 52 (13 self)
Bayesian Deviance, the Effective Number of Parameters, and the Comparison of Arbitrarily Complex Models
1998
Abstract

Cited by 51 (8 self)
We consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined. We follow Dempster in examining the posterior distribution of the log-likelihood under each model, from which we derive measures of fit and complexity (the effective number of parameters). These may be combined into a Deviance Information Criterion (DIC), which is shown to have an approximate decision-theoretic justification. Analytic and asymptotic identities reveal the measure of complexity to be a generalisation of a wide range of previous suggestions, with particular reference to the neural network literature. The contributions of individual observations to fit and complexity can give rise to a diagnostic plot of deviance residuals against leverages. The procedure is illustrated in a number of examples, and throughout it is emphasised that the required quantities are trivial to compute in a Markov chain Monte Carlo analysis, and require no analytic work for new...
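The quantities this abstract calls trivial to compute can be sketched as follows. This is our own minimal illustration, not code from the paper: the function name, the scalar-parameter simplification, and the user-supplied deviance function D(θ) = −2 log L(θ) are all assumptions for the sake of the example.

```python
def dic_from_draws(theta_draws, deviance):
    """Deviance Information Criterion from posterior draws.

    DIC = Dbar + pD, where Dbar is the posterior mean deviance (fit)
    and pD = Dbar - D(posterior mean of theta) is the effective
    number of parameters (complexity).
    """
    d_bar = sum(deviance(t) for t in theta_draws) / len(theta_draws)  # mean deviance
    theta_bar = sum(theta_draws) / len(theta_draws)                   # posterior mean
    p_d = d_bar - deviance(theta_bar)                                 # effective params
    return d_bar + p_d, p_d

# Toy check: draws {-1, 1} with D(theta) = theta^2 give pD = 1, DIC = 2.
dic, p_d = dic_from_draws([-1.0, 1.0], lambda t: t * t)
```

Both quantities use only the per-draw deviances plus one extra deviance evaluation at the posterior mean, which is why no analytic work is needed once an MCMC sample is available.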
Estimating the integrated likelihood via posterior simulation using the harmonic mean identity
Bayesian Statistics, 2007
Abstract

Cited by 49 (2 self)
The integrated likelihood (also called the marginal likelihood or the normalizing constant) is a central quantity in Bayesian model selection and model averaging. It is defined as the integral over the parameter space of the likelihood times the prior density. The Bayes factor for model comparison and Bayesian testing is a ratio of integrated likelihoods, and the model weights in Bayesian model averaging are proportional to the integrated likelihoods. We consider the estimation of the integrated likelihood from posterior simulation output, aiming at a generic method that uses only the likelihoods from the posterior simulation iterations. The key is the harmonic mean identity, which says that the reciprocal of the integrated likelihood is equal to the posterior harmonic mean of the likelihood. The simplest estimator based on the identity is thus the harmonic mean of the likelihoods. While this is an unbiased and simulation-consistent estimator, its reciprocal can have infinite variance and so it is unstable in general. We describe two methods for stabilizing the harmonic mean estimator. In the first one, the parameter space is reduced in such a way that the modified estimator involves a harmonic mean of heavier-tailed densities, thus resulting in a finite variance estimator. The resulting
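As a toy illustration of the harmonic mean identity described above, the sketch below (ours, not the paper's; the function and variable names are hypothetical) computes the raw harmonic mean estimate of the log integrated likelihood from per-iteration log-likelihoods, working in log space to avoid underflow:

```python
import math

def log_marginal_harmonic(log_liks):
    """Raw harmonic mean estimate of log p(y).

    Identity: 1/p(y) = E_posterior[1/p(y | theta)], so
    p(y) ~= S / sum_s exp(-log_lik_s).  Computed with the
    log-sum-exp trick for numerical stability.  As the abstract
    warns, this raw estimator can have infinite variance.
    """
    neg = [-ll for ll in log_liks]              # log of 1/likelihood per draw
    m = max(neg)
    # log of the mean of exp(neg), shifted by m to avoid overflow
    log_mean_inv_lik = m + math.log(sum(math.exp(v - m) for v in neg) / len(neg))
    return -log_mean_inv_lik

# Toy check: likelihoods 1.0 and 0.5 have harmonic mean 2/3.
est = log_marginal_harmonic([0.0, math.log(0.5)])
```

The instability the authors address arises because draws with very small likelihood dominate the sum of reciprocals; their stabilization methods modify which densities enter the harmonic mean rather than the identity itself.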
On MCMC Sampling in Hierarchical Longitudinal Models
Statistics and Computing, 1998
Abstract

Cited by 42 (4 self)
In this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest. We also investigate the value of blocking in non-Gaussian mixed models, as well as in a class of binary response data longitudinal models. We illustrate the approaches in detail with three real-data examples.
A Representation of the Posterior Mean for a Location Model
1991
Abstract

Cited by 8 (7 self)
Masreliez's theorem. Directions for future development are indicated. Some key words: Bayesian inference; Conditional inference; Robustness; Score function. 1. INTRODUCTION An exact representation for the posterior mean, E(θ | y), is given, where y is a 1 × n vector of observations from a location model, f(x − θ), and θ has a prior density, p(θ), that is a normal scale mixture. Let L(θ) denote the likelihood function and let y = (θ̂, a), where θ̂ is the maximum likelihood estimator and a is the maximal ancillary. The representation makes use of two results: the conditional distribution of the maximum likelihood estimator, p(θ̂ | θ, a) (Barndorff-Nielsen, 1983), and a result of Masreliez (1975). It is shown that, under a normal prior, E(θ | y) can be represented as a linear transformation of the score function of p(θ̂ | a), where p(θ̂ | a) = ∫ p(θ̂ | θ, a) p(θ) dθ. The representation can be viewed as a generalization of Masreliez's result, which deals with the model X = θ + ε, θ ~ N(m, τ²), and represents the posterior m
Some Bayesian perspectives on statistical modelling
1988
Abstract

Cited by 6 (2 self)
I would like to thank my supervisor, Professor A. F. M. Smith, for all his advice and encourage
Easy Estimation of Normalizing Constants and Bayes Factors from Posterior Simulation: Stabilizing the Harmonic Mean Estimator
2000
Abstract

Cited by 6 (0 self)
The Bayes factor is a useful summary for model selection. Calculation of this measure involves evaluating the integrated likelihood (or prior predictive density), which can be estimated from the output of MCMC and other posterior simulation methods using the harmonic mean estimator. While this is a simulation-consistent estimator, it can have infinite variance. In this article we describe a method to stabilize the harmonic mean estimator. Under this approach, the parameter space is reduced such that the modified estimator involves a harmonic mean of heavier-tailed densities, thus resulting in a finite variance estimator. We discuss general conditions under which this reduction is applicable and illustrate the proposed method through several examples. Keywords: Bayes factor, Beta-binomial, Integrated likelihood, Poisson-Gamma distribution, Statistical genetics, Variance reduction.
Dynamic Generalized Linear Models
Abstract

Cited by 4 (0 self)
Dynamic Generalized Linear Models are generalizations of the Generalized Linear Models in which the observations are time series and the parameters are allowed to vary over time. They have been increasingly used in different areas such as epidemiology, econometrics and marketing. Here we give an overview of the different statistical methodologies that have been proposed to deal with these models from the Bayesian viewpoint. Also, we present some of the challenges involved in the estimation process. Finally, two applications in epidemiology are presented showing the power of MCMC-based methodologies. 1 Introduction The real world often leads to the necessity of non-normal data analysis. This issue came to the fore with the introduction of generalized linear models (GLM), clever extensions of linear regressions, by Nelder and Wedderburn (1972), and the Bayesian point of view on this subject can be found in chapter 1. As pointed out there, the observations are distributed in the expo...