Results 1–10 of 19
Markov Chain Monte Carlo Convergence Diagnostics: A Comparative Review
 Journal of the American Statistical Association
, 1996
"... A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise ..."
Abstract

Cited by 223 (6 self)
 Add to MetaCart
A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler conver...
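The within- versus between-chain comparison underlying several output-based diagnostics can be sketched with a minimal Gelman-Rubin-style potential scale reduction factor. This is an illustrative sketch only (the function name and toy chains are my own, not code or data from the review); it also illustrates the review's caution, since the statistic flags a failure only when the chains actually land in different regions:

```python
import random
import statistics

def potential_scale_reduction(chains):
    """Gelman-Rubin R-hat from m chains of equal length n: compares
    between-chain variance B to within-chain variance W."""
    m = len(chains)
    n = len(chains[0])
    chain_means = [statistics.fmean(c) for c in chains]
    grand_mean = statistics.fmean(chain_means)
    B = n * sum((mu - grand_mean) ** 2 for mu in chain_means) / (m - 1)
    W = statistics.fmean(statistics.variance(c) for c in chains)
    var_plus = (n - 1) / n * W + B / n  # pooled variance estimate
    return (var_plus / W) ** 0.5        # near 1 suggests convergence

random.seed(0)
# Four chains drawn from the same distribution: R-hat should be near 1.
mixed = [[random.gauss(0, 1) for _ in range(1000)] for _ in range(4)]
# Two chains stuck in well-separated modes: R-hat should be well above 1.
stuck = [[random.gauss(mu, 1) for _ in range(1000)] for mu in (0.0, 5.0)]
print(potential_scale_reduction(mixed))  # close to 1
print(potential_scale_reduction(stuck))  # far above the usual 1.1 cutoff
```

If all chains were stuck in the *same* wrong mode, R-hat would still be near 1, which is exactly the kind of undetected convergence failure the review warns about.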
Two-Step Estimation of Functional Linear Models with Applications to Longitudinal Data
 Journal of the Royal Statistical Society, Series B
, 2000
"... Functional linear models are useful in longitudinal data analysis. They include many classical and recently proposed statistical models for longitudinal data and other functional data. Recently, smoothing spline and kernel methods have been proposed for estimating their coefficient functions nonpara ..."
Abstract

Cited by 41 (5 self)
 Add to MetaCart
Functional linear models are useful in longitudinal data analysis. They include many classical and recently proposed statistical models for longitudinal data and other functional data. Recently, smoothing spline and kernel methods have been proposed for estimating their coefficient functions nonparametrically but these methods are either intensive in computation or inefficient in performance. Toovercome these drawbacks, in this paper, a simple and powerful twostep alternativeis proposed. In particular, the implementation of the proposed approach via local polynomial smoothing is discussed. Methods for estimating standard deviations of estimated coefficient functions are also proposed. Some asymptotic results for the local polynomial estimators are established. Two longitudinal data sets, one of which involves timedependent covariates, are used to demonstrate the proposed approach. Simulation studies show that our twostep approach improves the kernel method proposed in Hoover, et al...
The Art of Data Augmentation
, 2001
"... The term data augmentation refers to methods for constructing iterative optimization or sampling algorithms via the introduction of unobserved data or latent variables. For deterministic algorithms,the method was popularizedin the general statistical community by the seminal article by Dempster, Lai ..."
Abstract

Cited by 22 (3 self)
 Add to MetaCart
The term data augmentation refers to methods for constructing iterative optimization or sampling algorithms via the introduction of unobserved data or latent variables. For deterministic algorithms,the method was popularizedin the general statistical community by the seminal article by Dempster, Laird, and Rubin on the EM algorithm for maximizing a likelihood function or, more generally, a posterior density. For stochastic algorithms, the method was popularized in the statistical literature by Tanner and Wong’s Data Augmentation algorithm for posteriorsampling and in the physics literatureby Swendsen and Wang’s algorithm for sampling from the Ising and Potts models and their generalizations; in the physics literature,the method of data augmentationis referred to as the method of auxiliary variables. Data augmentationschemes were used by Tanner and Wong to make simulation feasible and simple, while auxiliary variables were adopted by Swendsen and Wang to improve the speed of iterative simulation. In general,however, constructing data augmentation schemes that result in both simple and fast algorithms is a matter of art in that successful strategiesvary greatlywith the (observeddata) models being considered.After an overview of data augmentation/auxiliary variables and some recent developments in methods for constructing such
On MCMC Sampling in Hierarchical Longitudinal Models
 Statistics and Computing
, 1998
"... this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameter ..."
Abstract

Cited by 14 (2 self)
 Add to MetaCart
this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest. We also investigate the value of blocking in nonGaussian mixed models, as well as in a class of binary response data longitudinal models. We illustrate the approaches in detail with three realdata examples.
Gibbs Sampling
 Journal of the American Statistical Association
, 1995
"... 8> R f(`)d`. To marginalize, say for ` i ; requires h(` i ) = R f(`)d` (i) = R f(`)d` where ` (i) denotes all components of ` save ` i : To obtain Eg(` i ) requires similar integration; to obtain the marginal distribution of say g(`) or its expectation requires similar integration. When p is l ..."
Abstract

Cited by 10 (0 self)
 Add to MetaCart
8> R f(`)d`. To marginalize, say for ` i ; requires h(` i ) = R f(`)d` (i) = R f(`)d` where ` (i) denotes all components of ` save ` i : To obtain Eg(` i ) requires similar integration; to obtain the marginal distribution of say g(`) or its expectation requires similar integration. When p is large (as it will be in the applications we envision) such integration is analytically infeasible (the socalled curse of dimensionality*). Gibbs sampling provides a Monte Carlo approach for carrying out such integrations. In what sorts of settings would we have need to mar
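As a hedged sketch of the idea (a toy bivariate normal of my own choosing, not an example from the article): each Gibbs step draws one component from its full conditional given the others, and averaging over the resulting draws approximates the intractable marginalization integrals:

```python
import math
import random

# Toy target: bivariate normal, zero means, unit variances, correlation rho.
# Full conditionals: x | y ~ N(rho*y, 1 - rho^2), and symmetrically for y.
rho = 0.8
cond_sd = math.sqrt(1 - rho ** 2)

random.seed(0)
x, y = 0.0, 0.0
draws = []
for i in range(20000):
    x = random.gauss(rho * y, cond_sd)  # sample x from p(x | y)
    y = random.gauss(rho * x, cond_sd)  # sample y from p(y | x)
    if i >= 1000:                       # discard burn-in
        draws.append(x)

# Monte Carlo estimate of E[x] = ∫∫ x f(x, y) dx dy, which is 0 here.
est = sum(draws) / len(draws)
print(est)  # close to 0
```

The same averaging over draws of g(θ_i) estimates E g(θ_i) without ever computing the p-dimensional integral directly.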
A stochastic model for analysis of longitudinal AIDS data
 Journal of the American Statistical Association
, 1994
"... Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at ..."
Abstract

Cited by 9 (2 self)
 Add to MetaCart
Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at
Markov Chain Monte Carlo Methods in Biostatistics
 Statistical Methods in Medical Research, 5:339–355
, 1996
"... this article, we review some important general methods for Markov chain Monte Carlo ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
this article, we review some important general methods for Markov chain Monte Carlo
Residuals and Outliers in Repeated Measures Random Effects Models
, 1995
"... An approach for developing Bayesian outlier and goodness of fit statistics is presented for the linear model and extended to a hierarchical random effects model for repeated measures data. Diagnostics for univariate outliers, missing covariates, multivariate outliers and global goodness of fit are d ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
An approach for developing Bayesian outlier and goodness of fit statistics is presented for the linear model and extended to a hierarchical random effects model for repeated measures data. Diagnostics for univariate outliers, missing covariates, multivariate outliers and global goodness of fit are developed. Distribution theory for the posterior of the residuals is worked out. A local approach is used to show how omitted covariates and fixed and random effects affect residual summaries. Standard plots are interpreted in light of these understandings. Key Words: Bayesian Data Analysis, GoodnessofFit, Hierarchical Models, Longitudinal Data, Outlier, Philosophy of Statistics, Shrinkage. 1 Introduction. This paper develops a Bayesian approach to residual analysis and extends the approach to the random effects model (REM) used to model repeated Robert E. Weiss is Assistant Professor, Department of Biostatistics, Box 177220; UCLA School of Public Health; Los Angeles CA 900951772 U.S....
Does the Covariance Structure Matter in Longitudinal Modelling for the Prediction of Future CD4 Counts?
"... We investigate the importance of the assumed covariance structure for longitudinal modelling of CD4 counts. We examine how individual predictions of future CD4 counts are affected by the covariance structure. We consider four covariance structures: one based on an integrated OrnsteinUhlenbeck st ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
We investigate the importance of the assumed covariance structure for longitudinal modelling of CD4 counts. We examine how individual predictions of future CD4 counts are affected by the covariance structure. We consider four covariance structures: one based on an integrated OrnsteinUhlenbeck stochastic process, one based on Brownian motion and two derived from standard linear and quadratic random e#ects models. Using data from the Multicenter AIDS Cohort Study and from a simulation study,we show that there is a noticeable deterioration in the coverage rate of confidence intervals if we assume the wrong covariance. There is also a loss in e#ciency. The quadratic random effects model is found to be the best in terms of correctly calibrated prediction intervals, but is substantially less efficient than the others. Incorrectly specifying the covariance structure as linear random effects gives too narrow prediction intervals with poor coverage rates. Fitting using the model based on th...
Residuals and Outliers in Bayesian Random Effects Models
, 1994
"... Common repeated measures random effects models contain two random components, a random person effect and timevarying errors. An observation can be an outlier due to either an extreme person effect or an extreme time varying error. Outlier statistics are presented that can distinguish between these ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Common repeated measures random effects models contain two random components, a random person effect and timevarying errors. An observation can be an outlier due to either an extreme person effect or an extreme time varying error. Outlier statistics are presented that can distinguish between these types of outliers. For each person there is one statistic per observation, plus one statistic per random effect. Methodology is developed to reduce the explosion of statistics to two summary outlier statistics per person; one for the random effects and one for the time varying errors. If either of these screening statistics are large, then individual statistics for each observation or random effect can be inspected. Multivariate, targeted outlier statistics and goodnessoffit tests are also developed. Distribution theory is given, along with some geometric intuition. Key Words: Bayesian Data Analysis, GoodnessofFit, Hierarchical Models, Observed Errors, Repeated Measures. 1 Introduction...