Results 1-10 of 10
Markov Chain Monte Carlo Convergence Diagnostics: A Comparative Review
JASA, 1996
Cited by 274 (6 self)
A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including applying diagnostic procedures to a small number of parallel chains, monitoring autocorrelations and cross-correlations, and modifying parameterizations or sampling algorithms appropriately. We emphasize, however, that it is not possible to say with certainty that a finite sample from an MCMC algorithm is representative of an underlying stationary distribution.
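The paper's recommended combination, running a small number of overdispersed parallel chains and monitoring autocorrelations, can be sketched with a Gelman-Rubin-style potential scale reduction factor. The AR(1) sampler below is a hypothetical stand-in for a real slow-mixing MCMC run, not any of the thirteen diagnostics' target models:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) from parallel chains.

    chains: array of shape (m, n) -- m chains of n draws each."""
    n = chains.shape[1]
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_plus / W)

def autocorr(x, lag):
    """Sample autocorrelation of one chain at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(0)

def ar1_chain(start, n, rho=0.9):
    """A hypothetical slow sampler: an AR(1) chain targeting N(0, 1)."""
    out = np.empty(n)
    out[0] = start
    for t in range(1, n):
        out[t] = rho * out[t - 1] + rng.normal(0.0, np.sqrt(1 - rho**2))
    return out

# Three chains started overdispersed, per the parallel-chain strategy.
chains = np.stack([ar1_chain(s, 2000) for s in (-10.0, 0.0, 10.0)])
print("R-hat, full run: ", gelman_rubin(chains))       # near 1
print("R-hat, first 50: ", gelman_rubin(chains[:, :50]))  # well above 1
print("lag-1 autocorr:  ", autocorr(chains[0], 1))
```

As the abstract warns, an R-hat near 1 is evidence of mixing, not a guarantee that the finite sample is representative of the stationary distribution.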
Spatial modelling using a new class of nonstationary covariance functions
Environmetrics, 2006
Cited by 28 (0 self)
We introduce a new class of nonstationary covariance functions for spatial modelling. Nonstationary covariance functions allow the model to adapt to spatial surfaces whose variability changes with location. The class includes a nonstationary version of the Matérn stationary covariance, in which the differentiability of the spatial surface is controlled by a parameter, freeing one from fixing the differentiability in advance. The class allows one to knit together local covariance parameters into a valid global nonstationary covariance, regardless of how the local covariance structure is estimated. We employ this new nonstationary covariance in a fully Bayesian model in which the unknown spatial process has a Gaussian process (GP) distribution with a nonstationary covariance function from the class. We model the nonstationary structure in a computationally efficient way that creates nearly stationary local behavior and for which stationarity is a special case. We also suggest non-Bayesian approaches to nonstationary kriging. To assess the method, we compare the Bayesian nonstationary GP model with a Bayesian stationary GP model, various standard spatial smoothing approaches, and nonstationary models that can adapt to function heterogeneity. In simulations, the nonstationary GP model adapts to function heterogeneity, unlike the stationary models, and also outperforms the other nonstationary models. On a real dataset, GP models outperform the competitors, but while the nonstationary GP gives qualitatively more sensible results, it fails to outperform the stationary GP on held-out data, illustrating the difficulty in fitting complex spatial functions with relatively few observations. The nonstationary covariance model could also be used for non-Gaussian data and embedded in additive models as well as in more complicated, hierarchical spatial or spatiotemporal models. More complicated models may require simpler parameterizations for computational efficiency.
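The idea of knitting local covariance parameters into a valid global nonstationary covariance can be illustrated with the squared-exponential (infinitely differentiable) special case of this kernel class in one dimension; the length-scale field `ell_fun` below is a made-up example, not an estimate from data:

```python
import numpy as np

def ns_sqexp_cov(x, length_scale, sigma2=1.0):
    """1-D nonstationary squared-exponential covariance: local
    length-scales ell(x) are combined into one valid global kernel via
    C(xi,xj) = s2*sqrt(2*li*lj/(li^2+lj^2))*exp(-(xi-xj)^2/(li^2+lj^2))."""
    ell = length_scale(x)
    li, lj = np.meshgrid(ell, ell, indexing="ij")
    denom = li**2 + lj**2
    prefac = np.sqrt(2.0 * li * lj / denom)
    return sigma2 * prefac * np.exp(-((x[:, None] - x[None, :]) ** 2) / denom)

x = np.linspace(0.0, 10.0, 60)

# Hypothetical length-scale field: smooth on the left, rough on the right.
ell_fun = lambda x: 2.0 - 1.8 / (1.0 + np.exp(-(x - 5.0)))
K = ns_sqexp_cov(x, ell_fun)

# Validity check: symmetric and (numerically) positive semi-definite.
print("min eigenvalue:", np.linalg.eigvalsh(K).min())

# With a constant length-scale the kernel reduces to the stationary case.
K_stat = ns_sqexp_cov(x, lambda x: np.ones_like(x))
```

The key property is that any positive length-scale field yields a positive semi-definite matrix, so the local scales can be modeled however one likes.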
On MCMC Sampling in Hierarchical Longitudinal Models
Statistics and Computing, 1998
Cited by 21 (2 self)
In this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest. We also investigate the value of blocking in non-Gaussian mixed models, as well as in a class of binary response data longitudinal models. We illustrate the approaches in detail with three real-data examples.
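The benefit of blocking can be seen on a toy target: a bivariate normal posterior with strong correlation, standing in for the highly correlated parameter blocks of a mixed model. This is a sketch of the mechanism the paper exploits, not its mixed-model algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy posterior: bivariate normal, unit variances, correlation 0.99.
rho = 0.99
Sigma = np.array([[1.0, rho], [rho, 1.0]])

def gibbs_single_site(n):
    """One-at-a-time Gibbs: draw th1 | th2, then th2 | th1."""
    draws = np.empty((n, 2))
    th1 = th2 = 0.0
    sd = np.sqrt(1 - rho**2)
    for t in range(n):
        th1 = rng.normal(rho * th2, sd)
        th2 = rng.normal(rho * th1, sd)
        draws[t] = th1, th2
    return draws

def gibbs_blocked(n):
    """Blocked update: (th1, th2) drawn jointly -- i.i.d. draws here."""
    return rng.multivariate_normal([0.0, 0.0], Sigma, size=n)

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

single = gibbs_single_site(20000)
blocked = gibbs_blocked(20000)
print("lag-1 autocorr, single-site:", lag1_autocorr(single[:, 0]))
print("lag-1 autocorr, blocked:    ", lag1_autocorr(blocked[:, 0]))
```

For single-site Gibbs on this target the marginal chain is AR(1) with coefficient near rho^2 (about 0.98), while the blocked sampler produces essentially independent draws, which is the behavior the paper engineers for mixed models.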
Bilinear Mixed Effects Models for Dyadic Data
2003
Cited by 15 (3 self)
This article discusses the use of a symmetric multiplicative interaction effect to capture certain types of third-order dependence patterns often present in social networks and other dyadic datasets. Such an effect, along with standard linear fixed and random effects, is incorporated into a generalized linear model, and a Markov chain Monte Carlo algorithm is provided for Bayesian estimation and inference. In an example analysis of international relations data, accounting for such patterns improves model fit and predictive performance.
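A minimal sketch of a linear predictor combining additive row/column effects with a symmetric multiplicative term, eta_ij = mu + a_i + b_j + u_i' Lambda u_j; all quantities below (U, Lambda, a, b, mu) are hypothetical, not estimated from data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 8, 2   # 8 actors, rank-2 multiplicative effect

# Hypothetical latent positions, diagonal Lambda, and additive effects.
U = rng.normal(size=(n, k))
Lam = np.diag([1.5, -0.5])
a = rng.normal(scale=0.3, size=n)   # sender (row) effects
b = rng.normal(scale=0.3, size=n)   # receiver (column) effects
mu = -1.0

# Linear predictor for each ordered dyad (i, j):
#   eta_ij = mu + a_i + b_j + u_i' Lam u_j
M = U @ Lam @ U.T                    # symmetric multiplicative part
eta = mu + a[:, None] + b[None, :] + M

# The multiplicative term is symmetric in (i, j), so any asymmetry in
# eta comes only from the additive sender/receiver effects.
print(eta[0, 1] - eta[1, 0], (a[0] - a[1]) + (b[1] - b[0]))
```

Because `M` has the form U Lam U', triads of actors with similar rows of `U` get mutually elevated (or depressed) predictors, which is how the model captures transitivity-like third-order patterns.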
A General Framework for the Parametrization of Hierarchical Models
Statistical Science, 2007
To Center or Not To Center: That Is Not The Question
(in progress), 2009
Cited by 4 (0 self)
For a broad class of multilevel models, there exist two well-known competing parameterizations, the centered parametrization (CP) and the non-centered parametrization (NCP), for effective MCMC implementation. Much literature has been devoted to the questions of when to use which and how to compromise between them via partial CP/NCP. This paper introduces an alternative strategy for boosting MCMC efficiency via simply interweaving, rather than alternating, the two parameterizations. This strategy has the surprising property that failure of both the CP and NCP chains to converge geometrically does not prevent the interweaving algorithm from doing so. It achieves this seemingly magical property by taking advantage of the discordance of the two parameterizations, namely, the sufficiency of CP and the ancillarity of NCP, to substantially reduce the Markovian dependence, especially when the original CP and NCP form a “beauty and beast” pair (i.e., when one chain mixes far more rapidly than the other). The ancillarity-sufficiency reformulation of the CP-NCP dichotomy allows us to borrow insight from the well-known Basu’s theorem on the independence of (complete) sufficient and ancillary statistics, albeit a Bayesian version of Basu’s theorem.
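The interweaving mechanics can be sketched on a toy normal-normal model with known variances; this illustrates one CP-then-NCP sweep, not the paper's general algorithm, and all settings below are made up:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: y_i | th_i ~ N(th_i, 1), th_i | mu ~ N(mu, V), flat prior
# on mu; V is held fixed so the sweep stays short.
n, V = 50, 0.1
y = rng.normal(1.0, np.sqrt(V + 1.0), size=n)

def interweaving_sweep(mu):
    """One sweep: CP updates for (theta, mu), then mu redrawn under the
    NCP after transforming theta -> eta = theta - mu."""
    # CP step 1: theta | mu, y
    prec = 1.0 + 1.0 / V
    theta = rng.normal((y + mu / V) / prec, np.sqrt(1.0 / prec))
    # CP step 2: mu | theta  (sufficiency: depends on y only via theta)
    mu = rng.normal(theta.mean(), np.sqrt(V / n))
    # Interweave: re-express in the NCP and redraw mu there
    eta = theta - mu                      # ancillary under the NCP
    mu = rng.normal((y - eta).mean(), np.sqrt(1.0 / n))
    return mu

mu = 10.0                                  # deliberately bad start
draws = np.empty(2000)
for t in range(2000):
    mu = interweaving_sweep(mu)
    draws[t] = mu
print("posterior mean of mu ~", draws[500:].mean())
```

In this conjugate toy case the interweaved draw of mu is independent of the previous state, so the chain recovers from the bad start immediately, a small-scale version of the "seemingly magical" property described above.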
Prediction Using Orthogonalized Model Mixing
1996
Ph.D. dissertation by Heather Denise DeSimone, Institute of Statistics and Decision Sciences, Duke University; supervisors Merlise Clyde and Giovanni Parmigiani; committee Donald Berry, Victor Hasselblad, and Robert Wolpert. This dissertation investigates modeling strategies and numerical methods for prediction under model averaging in normal linear models and in Poisson regression, with extensions to other generalized linear models. The focus is on mixing over possible subsets of candidate predictors. For linear regression models, a sampling approach using an importance sampling technique is developed. This technique is based on an approximation of the posterior model probabilities using an or...
ON COMPUTATION USING GIBBS SAMPLING FOR MULTILEVEL MODELS
Multilevel models incorporating random effects at the various levels are enjoying increased popularity. An implicit problem with such models is identifiability. From a Bayesian perspective, formal identifiability is not an issue. Rather, when implementing iterative simulation-based model fitting, a poorly behaved Gibbs sampler frequently arises. The objective of this paper is to shed light on two computational issues in this regard. The first concerns autocorrelation in the sequence of iterates of the Markov chain. For estimable functions we clarify when, after convergence, autocorrelation will drop off to zero rapidly, enabling high effective sample size. The second concerns immediate convergence, i.e., when, at an arbitrary iteration, the simulated value of a variable is in fact an observation from the posterior distribution of the variable. Again, for estimable functions, we clarify when the chain will produce at each iteration a sample drawn essentially from the true posterior of the function. We provide both analytical and computational support for our conclusions, including exemplification for three multilevel models having normal, Poisson, and binary responses, respectively. Key words and phrases: Autocorrelation, estimable function, exact sampling, identifiability.
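The autocorrelation contrast described here can be reproduced in a one-way model: with a nearly flat random-effects prior, the Gibbs chain for mu alone mixes very slowly, while the estimable function mu + alpha_1 is essentially a fresh posterior draw at each iteration. All settings below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# One-way model y_ij = mu + alpha_i + eps_ij, eps ~ N(0, 1), flat prior
# on mu, alpha_i ~ N(0, tau2) with tau2 large (weak identification).
m, r, tau2 = 10, 5, 100.0
alpha_true = rng.normal(0, 1, size=m)
y = 2.0 + alpha_true[:, None] + rng.normal(size=(m, r))
ybar_i = y.mean(axis=1)

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

n_iter = 5000
mu_draws = np.empty(n_iter)
est_draws = np.empty(n_iter)          # estimable function mu + alpha_1
mu, alpha = 0.0, np.zeros(m)
prec = r + 1.0 / tau2
for t in range(n_iter):
    # mu | alpha, y  (flat prior)
    mu = rng.normal((y - alpha[:, None]).mean(), np.sqrt(1.0 / (m * r)))
    # alpha_i | mu, y
    alpha = rng.normal(r * (ybar_i - mu) / prec, np.sqrt(1.0 / prec))
    mu_draws[t] = mu
    est_draws[t] = mu + alpha[0]

print("lag-1 autocorr of mu:          ", lag1_autocorr(mu_draws))
print("lag-1 autocorr of mu + alpha_1:", lag1_autocorr(est_draws))
```

Because mu and the alpha_i trade off almost freely when tau2 is large, the mu chain is close to a random walk, yet the identified combination mu + alpha_i decorrelates almost immediately, which is exactly the estimable-function phenomenon the paper analyzes.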
Parametric Covariance Matrix Modeling in Bayesian Panel Regression
The full Bayesian treatment of error component models typically relies on data augmentation to produce the required inference. Data augmentation is never strictly necessary, however: a direct approach is always possible, though not necessarily practical. The mechanics of direct sampling are outlined, and a template for including model uncertainty is described. The needed tools, relying on various Markov chain Monte Carlo techniques, are developed, and direct sampling, with and without effect selection, is illustrated.