Results 1–10 of 12
Bayesian measures of model complexity and fit
 Journal of the Royal Statistical Society, Series B
, 2002
"... [Read before The Royal Statistical Society at a meeting organized by the Research ..."
Abstract

Cited by 132 (2 self)
[Read before The Royal Statistical Society at a meeting organized by the Research
A Monte Carlo Approach to Nonnormal and Nonlinear State-Space Modeling
, 1992
"... this article then is to develop methodology for modeling the nonnormality of the ut, the vt, or both. A second departure from the model specification ( 1 ) is to allow for unknown variances in the state or observational equation, as well as for unknown parameters in the transition matrices Ft and Ht ..."
Abstract

Cited by 126 (14 self)
this article then is to develop methodology for modeling the nonnormality of the u_t, the v_t, or both. A second departure from the model specification (1) is to allow for unknown variances in the state or observational equation, as well as for unknown parameters in the transition matrices F_t and H_t. As a third generalization we allow for nonlinear model structures; that is, X_t = f_t(X_{t-1}) + u_t, and Y_t = h_t(X_t) + v_t, t = 1, ..., n, (2) where f_t(·) and h_t(·) are given, but perhaps also depend on some unknown parameters. The experimenter may wish to entertain a variety of error distributions. Our goal throughout the article is an analysis for general state-space models that does not resort to convenient assumptions at the expense of model adequacy
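Model (2) is straightforward to simulate. The sketch below is our own illustration, not code from the article: the transition f_t, observation map h_t, and the choice of heavy-tailed (Student-t) state noise are all illustrative assumptions standing in for "a variety of error distributions".

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Illustrative state transition f_t (an AR(1)-style map; our choice)
    return 0.9 * x

def h(x):
    # Illustrative nonlinear observation map h_t (our choice)
    return x**2 / 20.0

n = 100
x = np.zeros(n)
y = np.zeros(n)
x_prev = 0.0
for t in range(n):
    # Nonnormal state noise u_t: heavy-tailed Student-t with 3 degrees of freedom
    x[t] = f(x_prev) + rng.standard_t(df=3)
    # Gaussian observation noise v_t
    y[t] = h(x[t]) + rng.normal(scale=1.0)
    x_prev = x[t]
```

Replacing the t-distributed draw with any other noise distribution changes the error model without touching the state-space structure, which is the flexibility the abstract is describing.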
Bayesian Deviance, the Effective Number of Parameters, and the Comparison of Arbitrarily Complex Models
, 1998
"... We consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined. We follow Dempster in examining the posterior distribution of the loglikelihood under each model, from which we derive measures of fit and complexity (the effective number of p ..."
Abstract

Cited by 28 (7 self)
We consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined. We follow Dempster in examining the posterior distribution of the log-likelihood under each model, from which we derive measures of fit and complexity (the effective number of parameters). These may be combined into a Deviance Information Criterion (DIC), which is shown to have an approximate decision-theoretic justification. Analytic and asymptotic identities reveal the measure of complexity to be a generalisation of a wide range of previous suggestions, with particular reference to the neural network literature. The contributions of individual observations to fit and complexity can give rise to a diagnostic plot of deviance residuals against leverages. The procedure is illustrated in a number of examples, and throughout it is emphasised that the required quantities are trivial to compute in a Markov chain Monte Carlo analysis, and require no analytic work for new...
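The quantities the abstract calls "trivial to compute" from MCMC output are the posterior mean deviance, the deviance at the posterior mean, their difference p_D (the effective number of parameters), and DIC itself. A minimal sketch for a normal mean model with known variance — the model, the simulated data, and the stand-in posterior draws are all illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0
y = rng.normal(5.0, sigma, size=50)  # simulated "observed" data

# Stand-in for MCMC output: posterior draws of the mean (sigma known),
# drawn from the exact conjugate posterior under a flat prior.
theta_draws = rng.normal(y.mean(), sigma / np.sqrt(len(y)), size=4000)

def deviance(theta):
    # D(theta) = -2 * log-likelihood under N(theta, sigma^2)
    loglik = np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                    - (y - theta)**2 / (2 * sigma**2))
    return -2 * loglik

D_bar = np.mean([deviance(th) for th in theta_draws])  # posterior mean deviance (fit)
D_hat = deviance(theta_draws.mean())                   # deviance at posterior mean
p_D = D_bar - D_hat                                    # effective number of parameters
DIC = D_bar + p_D                                      # equivalently D_hat + 2 * p_D
```

With one free parameter, p_D comes out close to 1 here, matching the intuition that the "effective number of parameters" counts how much the data constrain the model.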
Estimating the integrated likelihood via posterior simulation using the harmonic mean identity
 Bayesian Statistics
, 2007
"... The integrated likelihood (also called the marginal likelihood or the normalizing constant) is a central quantity in Bayesian model selection and model averaging. It is defined as the integral over the parameter space of the likelihood times the prior density. The Bayes factor for model comparison a ..."
Abstract

Cited by 24 (2 self)
The integrated likelihood (also called the marginal likelihood or the normalizing constant) is a central quantity in Bayesian model selection and model averaging. It is defined as the integral over the parameter space of the likelihood times the prior density. The Bayes factor for model comparison and Bayesian testing is a ratio of integrated likelihoods, and the model weights in Bayesian model averaging are proportional to the integrated likelihoods. We consider the estimation of the integrated likelihood from posterior simulation output, aiming at a generic method that uses only the likelihoods from the posterior simulation iterations. The key is the harmonic mean identity, which says that the reciprocal of the integrated likelihood is equal to the posterior harmonic mean of the likelihood. The simplest estimator based on the identity is thus the harmonic mean of the likelihoods. While this is an unbiased and simulation-consistent estimator, its reciprocal can have infinite variance and so it is unstable in general. We describe two methods for stabilizing the harmonic mean estimator. In the first one, the parameter space is reduced in such a way that the modified estimator involves a harmonic mean of heavier-tailed densities, thus resulting in a finite variance estimator. The resulting...
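The harmonic mean identity itself is easy to demonstrate in a conjugate model where the integrated likelihood has a closed form. The sketch below is our illustration under assumed choices (normal likelihood with known variance, N(0, tau^2) prior, fixed seed), not the authors' example; the log-sum-exp step is just numerical bookkeeping for averaging reciprocal likelihoods.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, tau, n = 1.0, 2.0, 30
y = rng.normal(1.0, sigma, size=n)

# Exact conjugate posterior for theta under prior N(0, tau^2)
v_n = 1.0 / (n / sigma**2 + 1 / tau**2)
m_n = v_n * y.sum() / sigma**2
theta = rng.normal(m_n, np.sqrt(v_n), size=10_000)  # posterior draws

# Log-likelihood of the data at each posterior draw
loglik = np.array([np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                          - (y - t)**2 / (2 * sigma**2)) for t in theta])

# Harmonic mean identity: 1/m = E_post[1/L(theta)].
# Average exp(-loglik) via a stable log-sum-exp.
a = -loglik
log_inv_m = np.logaddexp.reduce(a - a.max()) + a.max() - np.log(len(a))
log_marginal_hme = -log_inv_m

# Closed-form integrated likelihood for this conjugate model, for comparison
log_marginal_exact = (-0.5 * n * np.log(2 * np.pi * sigma**2)
                      + 0.5 * np.log(v_n / tau**2)
                      - np.sum(y**2) / (2 * sigma**2)
                      + m_n**2 / (2 * v_n))
```

Even in this one-dimensional example the reciprocal-likelihood average has heavy tails, so `log_marginal_hme` wanders around the exact value from run to run — the instability the stabilization methods are designed to remove.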
On MCMC Sampling in Hierarchical Longitudinal Models
 Statistics and Computing
, 1998
"... this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameter ..."
Abstract

Cited by 14 (2 self)
this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest. We also investigate the value of blocking in non-Gaussian mixed models, as well as in a class of binary response data longitudinal models. We illustrate the approaches in detail with three real-data examples.
A Representation of the Posterior Mean for a Location Model
, 1991
"... liez's theorem. Directions for future development are indicated. Some key words: Bayesian inference; Conditional inference; Robustness; Score function. 1. INTRODUCTION An exact representation for the posterior mean, E(Oly), is given where y is a 1 x n vector of observations from a location model, ..."
Abstract

Cited by 7 (7 self)
Masreliez's theorem. Directions for future development are indicated. Some key words: Bayesian inference; Conditional inference; Robustness; Score function. 1. INTRODUCTION An exact representation for the posterior mean, E(θ|y), is given where y is a 1 × n vector of observations from a location model, f(x − θ), and θ has a prior density, p(θ), that is a normal scale mixture. Let L(θ) denote the likelihood function and let y = (θ̂, a) where θ̂ is the maximum likelihood estimator and a is the maximal ancillary. The representation makes use of two results: the conditional distribution of the maximum likelihood estimator, p(θ̂ | θ, a) (Barndorff-Nielsen, 1983), and a result of Masreliez (1975). It is shown that, under a normal prior, E(θ|y) can be represented as a linear transformation of the score function of p(θ̂|a), where p(θ̂|a) = ∫ p(θ̂ | θ, a) p(θ) dθ. The representation can be viewed as a generalization of Masreliez's result that deals with the model, X = θ + e, θ ~ N(m, τ²), and represents the posterior m...
Easy Estimation of Normalizing Constants and Bayes Factors from Posterior Simulation: Stabilizing the Harmonic Mean Estimator
, 2000
"... The Bayes factor is a useful summary for model selection. Calculation of this measure involves evaluating the integrated likelihood (or prior predictive density), which can be estimated from the output of MCMC and other posterior simulation methods using the harmonic mean estimator. While this is a ..."
Abstract

Cited by 5 (0 self)
The Bayes factor is a useful summary for model selection. Calculation of this measure involves evaluating the integrated likelihood (or prior predictive density), which can be estimated from the output of MCMC and other posterior simulation methods using the harmonic mean estimator. While this is a simulation-consistent estimator, it can have infinite variance. In this article we describe a method to stabilize the harmonic mean estimator. Under this approach, the parameter space is reduced such that the modified estimator involves a harmonic mean of heavier-tailed densities, thus resulting in a finite variance estimator. We discuss general conditions under which this reduction is applicable and illustrate the proposed method through several examples. Keywords: Bayes factor, Beta-binomial, Integrated likelihood, Poisson-Gamma distribution, Statistical genetics, Variance reduction. Contents: 1. Introduction; 2. Stabilizing the Harmonic Mean Estimator; 3. Statistical Genetics; 4. Beta-Binom...
Some Bayesian perspectives on statistical modelling
, 1988
"... I would like to thank my supervisor, Professor A. F. M. Smith, for all his advice and encourage ..."
Abstract

Cited by 3 (2 self)
I would like to thank my supervisor, Professor A. F. M. Smith, for all his advice and encourage
Dynamic Generalized Linear Models
"... Dynamic Generalized Linear Models are generalizations of the Generalized Linear Models when the observations are time series and the parameters are allowed to vary through the time. They have been increasingly used in different areas such as epidemiology, econometrics and marketing. Here we make an ..."
Abstract

Cited by 2 (0 self)
Dynamic Generalized Linear Models are generalizations of Generalized Linear Models in which the observations are time series and the parameters are allowed to vary through time. They have been increasingly used in different areas such as epidemiology, econometrics and marketing. Here we give an overview of the different statistical methodologies that have been proposed to deal with these models from the Bayesian viewpoint. We also present some of the challenges involved in the estimation process. Finally, two applications in epidemiology are presented showing the power of MCMC-based methodologies. 1 Introduction The real world often leads to the necessity of non-normal data analysis. This issue was brought to prominence with the introduction of generalized linear models (GLM), clever extensions of linear regression, by Nelder and Wedderburn (1972), and the Bayesian point of view on this subject can be found in chapter 1. As pointed out there, the observations are distributed in the expo...
A Finite-Sample Hierarchical Analysis of Wage Variation Across Public High Schools: Evidence From the NLSY and High School and Beyond
, 2002
"... Using data from both the National Longitudinal Survey of Youth (NLSY) and High School and Beyond (HSB), we investigate if public high schools differ in the “production ” of earnings and if rates of return to future education vary with public high school attended. Given evidence of such variation, we ..."
Abstract
Using data from both the National Longitudinal Survey of Youth (NLSY) and High School and Beyond (HSB), we investigate if public high schools differ in the “production” of earnings and if rates of return to future education vary with public high school attended. Given evidence of such variation, we seek to explain why schools differ by proposing that standard measures of school “quality” as well as proxies for community characteristics can explain the observed parameter variation across high schools. Since analysis of widely used data sets such as the NLSY and HSB necessarily involves observing only a few students per high school, we employ an exact finite sample estimation approach. We find evidence that schools differ and that most proxies for high school quality play modest roles in explaining the variation in outcomes across public high schools. We do find evidence that the education of the teachers in the high school as well as the average family income associated with students in the school play a small part in explaining variation at the school level.