Results 1–10 of 87
Bayesian measures of model complexity and fit
Journal of the Royal Statistical Society, Series B, 2002
"... [Read before The Royal Statistical Society at a meeting organized by the Research ..."
Abstract

Cited by 132 (2 self)
 Add to MetaCart
[Read before The Royal Statistical Society at a meeting organized by the Research
On the Relationship Between Markov Chain Monte Carlo Methods for Model Uncertainty
Journal of Computational and Graphical Statistics, 2001
"... This article considers Markov chain computational methods for incorporating uncertainty about the dimension of a parameter when performing inference within a Bayesian setting. A general class of methods is proposed for performing such computations, based upon a product space representation of the ..."
Abstract

Cited by 31 (3 self)
 Add to MetaCart
This article considers Markov chain computational methods for incorporating uncertainty about the dimension of a parameter when performing inference within a Bayesian setting. A general class of methods is proposed for performing such computations, based upon a product space representation of the problem which is similar to that of Carlin and Chib. It is shown that all of the existing algorithms for incorporation of model uncertainty into Markov chain Monte Carlo (MCMC) can be derived as special cases of this general class of methods. In particular, we show that the popular reversible jump method is obtained when a special form of MetropolisHastings (MH) algorithm is applied to the product space. Furthermore, the Gibbs sampling method and the variable selection method are shown to derive straightforwardly from the general framework. We believe that these new relationships between methods, which were until now seen as diverse procedures, are an important aid to the understanding of MCMC model selection procedures and may assist in the future development of improved procedures. Our discussion also sheds some light upon the important issues of "pseudoprior" selection in the case of the Carlin and Chib sampler and choice of proposal distribution in the case of reversible jump. Finally, we propose efficient reversible jump proposal schemes that take advantage of any analytic structure that may be present in the model. These proposal schemes are compared with a standard reversible jump scheme for the problem of model order uncertainty in autoregressive time series, demonstrating the improvements which can be achieved through careful choice of proposals
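The product-space idea behind the Carlin and Chib sampler can be illustrated with a toy two-model example. The sketch below is our own illustrative construction, not the paper's code: one observation y ~ N(theta, 1), model M1 fixing theta = 0 and model M2 giving theta a N(0, tau2) prior. The chain's state is (k, theta); when it visits M1, theta is refreshed from a pseudoprior, which affects only mixing, not the stationary model probabilities.

```python
import math, random

random.seed(1)

# Toy product-space (Carlin & Chib style) sampler -- an illustrative
# sketch under assumed models, not the authors' implementation.
y, tau2 = 0.5, 4.0

def npdf(x, mean, var):
    # Normal density, used for likelihood, prior, and pseudoprior terms.
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

# Conditional posterior of theta under M2 is N(m_post, v_post).
v_post = 1.0 / (1.0 / tau2 + 1.0)
m_post = v_post * y
# Pseudoprior: here chosen as M2's exact conditional posterior, the
# efficient choice discussed in this literature.
pseudo_mean, pseudo_var = m_post, v_post

k, theta = 2, 0.0
visits = {1: 0, 2: 0}
for _ in range(20_000):
    # 1. Update theta given the model indicator k.
    if k == 2:
        theta = random.gauss(m_post, math.sqrt(v_post))
    else:
        theta = random.gauss(pseudo_mean, math.sqrt(pseudo_var))
    # 2. Update k given theta (equal prior model probabilities):
    #    each weight is likelihood x (prior or pseudoprior) for theta.
    w1 = npdf(y, 0.0, 1.0) * npdf(theta, pseudo_mean, pseudo_var)
    w2 = npdf(y, theta, 1.0) * npdf(theta, 0.0, tau2)
    k = 1 if random.random() < w1 / (w1 + w2) else 2
    visits[k] += 1

p_m1 = visits[1] / sum(visits.values())  # estimate of P(M1 | y)
```

With this pseudoprior the indicator draws become independent, and the visit frequency converges to the analytic posterior probability m1/(m1 + m2), about 0.67 for these assumed values.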
MCMC Methods for Computing Bayes Factors: A Comparative Review
Journal of the American Statistical Association, 2000
"... this paper we review several of these methods, and subsequently compare them in the context of two examples, the first a simple regression example, and the second a much more challenging hierarchical longitudinal model of the kind often encountered in biostatistical practice. We find that the joint ..."
Abstract

Cited by 31 (1 self)
 Add to MetaCart
this paper we review several of these methods, and subsequently compare them in the context of two examples, the first a simple regression example, and the second a much more challenging hierarchical longitudinal model of the kind often encountered in biostatistical practice. We find that the joint modelparameter space search methods perform adequately but can be difficult to program and tune, while the marginal likelihood methods are often less troublesome and require less in the way of additional coding. Our results suggest that the latter methods may be most appropriate for practitioners working in many standard model choice settings, while the former remain important for comparing large numbers of models, or models whose parameters cannot be easily updated in relatively few blocks. We caution however that all of the methods we compare require significant human and computer effort, suggesting that less formal Bayesian model choice methods may offer a more realistic alternative in many cases.
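The quantity all of these methods target is the Bayes factor, the ratio of marginal likelihoods of two models. A minimal sketch, using an assumed toy pair of models where the marginal likelihoods are available in closed form (so no MCMC is needed):

```python
import math

# Toy illustration (not from the paper): one observation y ~ N(theta, 1).
#   M1: theta = 0 exactly      -> marginal likelihood m1 = N(y; 0, 1)
#   M2: theta ~ N(0, tau2)     -> marginal likelihood m2 = N(y; 0, 1 + tau2)
def normal_pdf(y, mean, var):
    return math.exp(-0.5 * (y - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

y, tau2 = 0.5, 4.0
m1 = normal_pdf(y, 0.0, 1.0)         # marginal likelihood under M1
m2 = normal_pdf(y, 0.0, 1.0 + tau2)  # marginal likelihood under M2
bf12 = m1 / m2                       # Bayes factor in favour of M1
```

For these assumed values the observation sits close to zero, so the simpler model is favoured (bf12 > 1); the MCMC methods reviewed in the paper estimate m1 and m2 (or their ratio) when no closed form exists.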
Bayesian Deviance, the Effective Number of Parameters, and the Comparison of Arbitrarily Complex Models
1998
"... We consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined. We follow Dempster in examining the posterior distribution of the loglikelihood under each model, from which we derive measures of fit and complexity (the effective number of p ..."
Abstract

Cited by 28 (7 self)
 Add to MetaCart
We consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined. We follow Dempster in examining the posterior distribution of the loglikelihood under each model, from which we derive measures of fit and complexity (the effective number of parameters). These may be combined into a Deviance Information Criterion (DIC), which is shown to have an approximate decisiontheoretic justification. Analytic and asymptotic identities reveal the measure of complexity to be a generalisation of a wide range of previous suggestions, with particular reference to the neural network literature. The contributions of individual observations to fit and complexity can give rise to a diagnostic plot of deviance residuals against leverages. The procedure is illustrated in a number of examples, and throughout it is emphasised that the required quantities are trivial to compute in a Markov chain Monte Carlo analysis, and require no analytic work for new...
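The DIC quantities really are trivial to compute from MCMC output: the effective number of parameters is the posterior mean deviance minus the deviance at the posterior mean, and DIC adds that penalty to the mean deviance. A minimal sketch, under an assumed conjugate normal toy model (our example, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: y_i ~ N(theta, 1) with a flat prior, so the
# posterior of theta is N(ybar, 1/n); we draw from it directly in place
# of an MCMC sample.
y = rng.normal(1.0, 1.0, size=50)
n, ybar = len(y), y.mean()
theta_draws = rng.normal(ybar, 1.0 / np.sqrt(n), size=10_000)

def deviance(theta):
    # D(theta) = -2 * log-likelihood under the N(theta, 1) model
    return np.sum((y - theta) ** 2) + n * np.log(2 * np.pi)

d_bar = np.mean([deviance(t) for t in theta_draws])  # posterior mean deviance
d_hat = deviance(theta_draws.mean())                 # deviance at posterior mean
p_d = d_bar - d_hat                                  # effective number of parameters
dic = d_bar + p_d                                    # equivalently d_hat + 2 * p_d
```

Here the model has one free parameter, and p_d comes out close to 1, matching the "effective number of parameters" interpretation.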
Smoothing spline ANOVA for multivariate Bernoulli observations, with applications to ophthalmology data, with discussion
2001
"... We combine a smoothing spline analysis of variance (SSANOVA) model and a loglinear model to build a partly � exible model for multivariate Bernoulli data. The joint distribution conditioning on the predictor variables is estimated. The log odds ratio is used to measure the association between outc ..."
Abstract

Cited by 26 (20 self)
 Add to MetaCart
We combine a smoothing spline analysis of variance (SSANOVA) model and a loglinear model to build a partly � exible model for multivariate Bernoulli data. The joint distribution conditioning on the predictor variables is estimated. The log odds ratio is used to measure the association between outcome variables. A numerical scheme based on the block onestep successive over relaxation SOR–NewtonRalphson algorithm is proposed to obtain an approximate solution for the variational problem. We extend the generalized approximate cross validation (GACV) and the randomized GACV for choosing smoothing parameters to the case of multivariate Bernoulli responses. The randomized version is fast and stable to compute and is used to adaptively select smoothing parameters in each block onestep SOR iteration. Approximate Bayesian con � dence intervals are obtained for the � exible estimates of the conditional logit functions. Simulation studies are conducted to check the performance of the proposed method, using the comparative Kullback–Leibler distance as a yardstick. Finally, the model is applied to twoeye observational data from the Beaver Dam Eye Study, to examine the association of pigmentary abnormalities and various covariates.
Mplus: Statistical Analysis with Latent Variables (Version 4.21) [Computer software]
2007
"... Chapter 3: Regression and path analysis 19 Chapter 4: Exploratory factor analysis 43 ..."
Abstract

Cited by 23 (0 self)
 Add to MetaCart
Chapter 3: Regression and path analysis 19 Chapter 4: Exploratory factor analysis 43
Socioeconomic status, health and lifestyle, working paper, York Seminar in Health Econometrics
2001
"... The role of lifestyle in mediating the relationship between socioeconomic characteristics and health has been discussed extensively in the epidemiological and economic literatures. Previous analyses have not considered a formal framework incorporating unobservable heterogeneity. In this paper we de ..."
Abstract

Cited by 19 (0 self)
 Add to MetaCart
The role of lifestyle in mediating the relationship between socioeconomic characteristics and health has been discussed extensively in the epidemiological and economic literatures. Previous analyses have not considered a formal framework incorporating unobservable heterogeneity. In this paper we develop a simple economic model in which health is determined(partially) by lifestyle, which depends on preferences, budget and time constraints and unobservable characteristics. We estimate a recursive empirical speci cation consisting of a health production function and reduced forms for the lifestyle equations using Maximum Simulated Likelihood for a multivariate probit model with discrete indicators of lifestyle choices and selfassessed health(SAH) on British panel data from the 1984 and 1991 Health and Lifestyle Survey. We nd that prudent drinking and not smoking in 1984 have dramatic positive e ects on the probability of reporting excellent or good SAH in 1991. The failure of epidemiological analyses to account for unobserved heterogeneity can explain their low estimates of the relevance of lifestyle in the socioeconomic statushealth relationship. Accounting for unobserved heterogeneity also leads us to conclude that indicators for sleep, exercise, and breakfast in 1984 are unimportant for SAH in 1991. JEL codes I1 C0
On MCMC Sampling in Hierarchical Longitudinal Models
Statistics and Computing, 1998
"... this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameter ..."
Abstract

Cited by 14 (2 self)
 Add to MetaCart
this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest. We also investigate the value of blocking in nonGaussian mixed models, as well as in a class of binary response data longitudinal models. We illustrate the approaches in detail with three realdata examples.
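The single-block update exploits the fact that in a Gaussian linear model the full conditional of the whole coefficient vector is multivariate normal, so it can be sampled in one step rather than coordinate by coordinate. A minimal sketch under an assumed ridge-style prior (our toy setup, not the paper's longitudinal models):

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed model: y = X beta + e, e ~ N(0, sigma2 I), prior beta ~ N(0, tau2 I).
n, p, sigma2, tau2 = 200, 4, 1.0, 10.0
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

def draw_beta_block(X, y, sigma2, tau2, rng):
    # Full conditional of beta | y, sigma2 is N(mean, cov):
    # posterior precision combines likelihood and prior precisions.
    prec = X.T @ X / sigma2 + np.eye(X.shape[1]) / tau2
    cov = np.linalg.inv(prec)
    mean = cov @ (X.T @ y) / sigma2
    # One blocked draw of the entire coefficient vector.
    return rng.multivariate_normal(mean, cov)

draws = np.array([draw_beta_block(X, y, sigma2, tau2, rng) for _ in range(2_000)])
post_mean = draws.mean(axis=0)
```

Because each call is an exact draw from the joint conditional, successive draws of beta are uncorrelated given the other parameters, which is precisely the convergence benefit of blocking over single-site updating.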
Bayesian correlation estimation
2004
"... We propose prior probability models for variancecovariance matrices in order to address two important issues. First, the models allow a researcher to represent substantive prior information about the strength of correlations among a set of variables. Secondly, even in the absence of such informatio ..."
Abstract

Cited by 12 (0 self)
 Add to MetaCart
We propose prior probability models for variancecovariance matrices in order to address two important issues. First, the models allow a researcher to represent substantive prior information about the strength of correlations among a set of variables. Secondly, even in the absence of such information, the increased flexibility of the models mitigates dependence on strict parametric assumptions in standard prior models. For example, the model allows a posteriori different levels of uncertainty about correlations among different subsets of variables. We achieve this by including a clustering mechanism in the prior probability model. Clustering is with respect to variables and pairs of variables. Our approach leads to shrinkage towards a mixture structure implied by the clustering. We discuss appropriate posterior simulation schemes to implement posterior inference in the proposed models, including the evaluation of normalising constants that are functions of parameters of interest. The normalising constants result from the restriction that the correlation matrix be positive definite. We discuss examples based on simulated data, a stock return dataset and a population genetics dataset.
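The positive-definiteness restriction mentioned above is easy to state concretely: a symmetric matrix with unit diagonal is a valid correlation matrix only if all its eigenvalues are positive, and it is this joint constraint on the pairwise correlations that makes the prior's normalising constant depend on the parameters. A minimal check (our own illustration, not the paper's model):

```python
import numpy as np

def is_valid_correlation(R, tol=1e-10):
    # Valid correlation matrix: symmetric, unit diagonal, positive definite.
    R = np.asarray(R, dtype=float)
    if not np.allclose(R, R.T) or not np.allclose(np.diag(R), 1.0):
        return False
    return bool(np.linalg.eigvalsh(R).min() > tol)

ok = is_valid_correlation([[1.0, 0.5, 0.3],
                           [0.5, 1.0, 0.2],
                           [0.3, 0.2, 1.0]])

# Pairwise correlations can each lie in (-1, 1) and still be jointly
# impossible: three mutual correlations of -0.9 give a negative eigenvalue.
bad = is_valid_correlation([[1.0, -0.9, -0.9],
                            [-0.9, 1.0, -0.9],
                            [-0.9, -0.9, 1.0]])
```

Rejecting candidates that fail this check is the simplest way a posterior simulation scheme can respect the constraint.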