Results 1–10 of 67
Batch Means and Spectral Variance Estimation in Markov Chain Monte Carlo, 2009
Cited by 27 (9 self)
Calculating a Monte Carlo standard error (MCSE) is an important step in the statistical analysis of the simulation output obtained from a Markov chain Monte Carlo experiment. An MCSE is usually based on an estimate of the variance of the asymptotic normal distribution. We consider spectral and batch means methods for estimating this variance. In particular, we establish conditions which guarantee that these estimators are strongly consistent as the simulation effort increases. In addition, for the batch means and overlapping batch means methods we establish conditions ensuring consistency in the mean-square sense, which in turn allows us to calculate the optimal batch size up to a constant of proportionality. Finally, we examine the empirical finite-sample properties of spectral variance and batch means estimators and provide recommendations for practitioners.
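The batch means estimator the abstract describes is straightforward to implement. Below is a minimal sketch; the function name `batch_means_mcse` and the floor(sqrt(n)) default batch size are illustrative choices, not taken from the paper (which derives the optimal batch size only up to a proportionality constant).

```python
import math
import random

def batch_means_mcse(chain, batch_size=None):
    """Monte Carlo standard error of the sample mean via
    non-overlapping batch means. The batch size defaults to
    floor(sqrt(n)), a common heuristic."""
    n = len(chain)
    if batch_size is None:
        batch_size = int(math.floor(math.sqrt(n)))
    num_batches = n // batch_size
    # Discard any leftover draws so every batch has equal length.
    usable = num_batches * batch_size
    overall_mean = sum(chain[:usable]) / usable
    batch_means = [
        sum(chain[i * batch_size:(i + 1) * batch_size]) / batch_size
        for i in range(num_batches)
    ]
    # Batch-means estimate of the asymptotic variance in the CLT.
    var_hat = (batch_size / (num_batches - 1)) * sum(
        (bm - overall_mean) ** 2 for bm in batch_means
    )
    return math.sqrt(var_hat / n)

# Usage: MCSE for a toy AR(1) chain, whose draws are positively
# correlated the way MCMC output typically is.
random.seed(1)
x, chain = 0.0, []
for _ in range(10000):
    x = 0.5 * x + random.gauss(0.0, 1.0)
    chain.append(x)
print(batch_means_mcse(chain))
```

Because the AR(1) draws are correlated, this MCSE is larger than the naive i.i.d. standard error, which is exactly the correction the estimator is meant to provide.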
Improving the Convergence Properties of the Data Augmentation Algorithm with an Application to Bayesian Mixture Modelling, 2009
Cited by 9 (6 self)
Every reversible Markov chain defines an operator whose spectrum encodes the convergence properties of the chain. When the state space is finite, the spectrum is just the set of eigenvalues of the corresponding Markov transition matrix. However, when the state space is infinite, the spectrum may be uncountable, and is nearly always impossible to calculate. In most applications of the data augmentation (DA) algorithm, the state space of the DA Markov chain is infinite. However, we show that, under regularity conditions that include the finiteness of the augmented space, the operators defined by the DA chain and Hobert and Marchev's (2008) alternative chain are both compact, and the corresponding spectra are both finite subsets of [0, 1). Moreover, we prove that the spectrum of Hobert and Marchev's (2008) chain dominates the spectrum of the DA chain in the sense that the ordered elements of the former are all less than or equal to the corresponding elements of the latter. As a concrete example, we study a widely used DA algorithm for the exploration of posterior densities associated with Bayesian mixture models (Diebolt and Robert, 1994). In particular, we compare this mixture DA algorithm with an alternative algorithm proposed by Frühwirth-Schnatter (2001) that is based on random label switching.
Gibbs sampling for a Bayesian hierarchical general linear model
Electronic Journal of Statistics, 2008
Cited by 9 (2 self)
We consider two-component block Gibbs sampling for a Bayesian hierarchical version of the normal theory general linear model. This model is practically relevant in the sense that it is general enough to have many applications, yet it is not straightforward to sample directly from the corresponding posterior distribution. There are two possible orders in which to update the components of our block Gibbs sampler. For both update orders, drift and minorization conditions are constructed for the corresponding Markov chains. Most importantly, these results establish geometric ergodicity for the block Gibbs sampler. We also construct a general minorization condition and use it to investigate the applicability of regenerative simulation techniques for constructing valid Monte Carlo standard errors.
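To make the two-component structure and the two possible update orders concrete, here is a minimal sketch of a block Gibbs sampler for a toy conjugate normal model (a hypothetical special case chosen for illustration, not the paper's hierarchical general linear model): y_i | mu, tau ~ N(mu, 1/tau), mu ~ N(0, 100), tau ~ Gamma(a0, rate b0).

```python
import random

def block_gibbs(y, iters=2000, a0=2.0, b0=2.0, mu_first=True, seed=0):
    """Two-component block Gibbs sampler for a toy hierarchical
    normal model. The mu_first flag switches between the two
    possible update orders: (mu, tau) versus (tau, mu)."""
    rng = random.Random(seed)
    n, sy = len(y), sum(y)
    mu, tau = 0.0, 1.0
    draws = []
    for _ in range(iters):
        def update_mu():
            nonlocal mu
            v = 1.0 / (tau * n + 1.0 / 100.0)  # posterior variance
            m = v * tau * sy                   # posterior mean
            mu = rng.gauss(m, v ** 0.5)
        def update_tau():
            nonlocal tau
            ss = sum((yi - mu) ** 2 for yi in y)
            # gammavariate takes (shape, scale); the rate is b0 + ss/2.
            tau = rng.gammavariate(a0 + n / 2.0, 1.0 / (b0 + 0.5 * ss))
        if mu_first:
            update_mu(); update_tau()
        else:
            update_tau(); update_mu()
        draws.append((mu, tau))
    return draws
```

Either update order yields a Markov chain with the posterior as its stationary distribution; the paper's point is that drift and minorization arguments establishing geometric ergodicity must be constructed separately for each order.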
Convergence Analysis of the Gibbs Sampler for Bayesian General Linear Mixed Models with Improper Priors
Cited by 8 (6 self)
Bayesian analysis of data from the general linear mixed model is challenging because any nontrivial prior leads to an intractable posterior density. However, if a conditionally conjugate prior density is adopted, then there is a simple Gibbs sampler that can be employed to explore the posterior density. A popular default among the conditionally conjugate priors is an improper prior that takes a product form with a flat prior on the regression parameter, and so-called power priors on each of the variance components. In this paper, a convergence rate analysis of the corresponding Gibbs sampler is undertaken. The main result is a simple, easily checked sufficient condition for geometric ergodicity of the Gibbs–Markov chain. This result is close to the best possible result in the sense that the sufficient condition is only slightly stronger than what is required to ensure posterior propriety. The theory developed in this paper is extremely important from a practical standpoint because it guarantees the existence of central limit theorems that allow for the computation of valid asymptotic standard errors for the estimates computed using the Gibbs sampler.
On Monte Carlo methods for Bayesian multivariate regression models with heavy-tailed errors
Journal of Multivariate Analysis
Cited by 7 (3 self)
We consider Bayesian analysis of data from multivariate linear regression models whose errors have a distribution that is a scale mixture of normals. Such models are used to analyze data on financial returns, which are notoriously heavy-tailed. Let pi denote the intractable posterior density that results when this regression model is combined with the standard noninformative prior on the unknown regression coefficients and scale matrix of the errors. Roughly speaking, the posterior is proper if and only if n ≥ d + k, where n is the sample size, d is the dimension of the response, and k is the number of covariates. We provide a method of making exact draws from pi in the special case where n = d + k, and we study Markov chain Monte Carlo (MCMC) algorithms that can be used to explore pi when n > d + k. In particular, we show how the Haar PX-DA technology studied in Hobert and Marchev (2008) can be used to improve upon Liu's (1996) data augmentation (DA) algorithm. Indeed, the new algorithm that we introduce is theoretically superior to the DA algorithm, yet equivalent to DA in terms of computational complexity. Moreover, we analyze the convergence rates of these MCMC algorithms in the important special case where the regression errors have a Student's t distribution. We prove that, under conditions on n, d, k, and the degrees of freedom of the t distribution, both algorithms converge at a geometric rate. These convergence rate results are important from a practical standpoint because geometric ergodicity guarantees the existence of central limit theorems, which are essential for the calculation of valid asymptotic standard errors for MCMC-based estimates.
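The scale-mixture-of-normals representation that underlies the DA algorithm can be illustrated in a much simpler setting. The sketch below is a DA Gibbs sampler for a univariate Student-t location model, not the paper's multivariate regression: y_i = mu + e_i with e_i ~ t_nu is rewritten as y_i | lambda_i ~ N(mu, 1/lambda_i), lambda_i ~ Gamma(nu/2, rate nu/2), with a flat prior on mu.

```python
import random

def t_location_da(y, nu=4.0, iters=2000, seed=0):
    """Data augmentation Gibbs sampler for a toy Student-t
    location model, via the scale-mixture-of-normals trick.
    Returns the chain of mu draws."""
    rng = random.Random(seed)
    mu = sum(y) / len(y)
    draws = []
    for _ in range(iters):
        # I-step: impute the latent mixing weights; each lambda_i
        # has a Gamma((nu+1)/2, rate (nu + (y_i - mu)^2)/2) full
        # conditional. gammavariate takes (shape, scale).
        lam = [
            rng.gammavariate((nu + 1.0) / 2.0,
                             2.0 / (nu + (yi - mu) ** 2))
            for yi in y
        ]
        # P-step: mu given the weights is a precision-weighted
        # normal, so outliers (small lambda_i) are downweighted.
        s = sum(lam)
        m = sum(l * yi for l, yi in zip(lam, y)) / s
        mu = rng.gauss(m, (1.0 / s) ** 0.5)
        draws.append(mu)
    return draws
```

The paper's Haar PX-DA improvement adds an extra move on the latent scales between the I-step and the P-step; the two-step skeleton above is the baseline DA algorithm that the improvement dominates.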
Spatial Bayesian Variable Selection Models on Functional Magnetic Resonance Imaging Time-Series Data, 2011
Cited by 7 (2 self)
One of the major objectives of fMRI (functional magnetic resonance imaging) studies is to determine subject-specific areas of increased blood oxygenation level dependent (BOLD) signal contrast in response to a stimulus or task, and hence to infer regional neuronal activity. We posit and investigate a Bayesian approach that incorporates spatial dependence in the image and allows for the task-related change in the BOLD signal to change dynamically over the scanning session. In this way, our model accounts for potential learning effects, in addition to other mechanisms of temporal drift in task-related signals. However, using the posterior for inference requires Markov chain Monte Carlo (MCMC) methods. We study the properties of the model and the MCMC algorithms through their performance on simulated and real data sets.
The Polya-Gamma Gibbs Sampler for Bayesian Logistic Regression is Uniformly Ergodic, 2013
Cited by 6 (0 self)
One of the most widely used data augmentation algorithms is Albert and Chib's (1993) algorithm for Bayesian probit regression. Polson, Scott and Windle (2013) recently introduced an analogous algorithm for Bayesian logistic regression. The main difference between the two is that Albert and Chib's (1993) truncated normals are replaced by so-called Polya-Gamma random variables. In this note, we establish that the Markov chain underlying Polson et al.'s (2013) algorithm is uniformly ergodic. This theoretical result has important practical benefits. In particular, it guarantees the existence of central limit theorems that can be used to make an informed decision about how long the simulation should be run.
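The structure of the Polya-Gamma Gibbs sampler can be sketched for a one-predictor logistic regression. Two caveats: the PG(1, c) draw below uses a truncated version of the infinite-sum-of-gammas representation, which is only an approximation (Polson et al. use an exact accept-reject sampler), and the single-coefficient setup with prior beta ~ N(0, prior_var) is an illustrative simplification.

```python
import math
import random

def sample_pg1(c, rng, terms=200):
    """Approximate PG(1, c) draw via a truncated sum of
    independent Exp(1) = Gamma(1, 1) variables:
    PG(1, c) = (1/(2 pi^2)) * sum_k g_k / ((k - 1/2)^2
               + c^2 / (4 pi^2))."""
    total = 0.0
    for k in range(1, terms + 1):
        g = rng.expovariate(1.0)
        total += g / ((k - 0.5) ** 2 + c ** 2 / (4.0 * math.pi ** 2))
    return total / (2.0 * math.pi ** 2)

def pg_gibbs_logistic(x, y, iters=1000, prior_var=100.0, seed=0):
    """Polya-Gamma DA Gibbs sampler for one-predictor Bayesian
    logistic regression: P(y_i = 1) = 1 / (1 + exp(-x_i * beta))."""
    rng = random.Random(seed)
    beta, draws = 0.0, []
    kappa = [yi - 0.5 for yi in y]
    for _ in range(iters):
        # Latent step: omega_i | beta ~ PG(1, x_i * beta).
        omega = [sample_pg1(xi * beta, rng) for xi in x]
        # Conjugate normal update for beta given the omegas.
        v = 1.0 / (sum(w * xi * xi for w, xi in zip(omega, x))
                   + 1.0 / prior_var)
        m = v * sum(k * xi for k, xi in zip(kappa, x))
        beta = rng.gauss(m, v ** 0.5)
        draws.append(beta)
    return draws
```

Uniform ergodicity of this chain, the note's main result, means its convergence rate does not depend on the starting value of beta, which is what licenses honest run-length decisions.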
A New Multivariate Measurement Error Model with Zero-Inflated Dietary Data, and Its Application to Dietary Assessment
Cited by 5 (2 self)
In the United States the preferred method of obtaining dietary intake data is the 24-hour dietary recall, yet the measure of most interest is usual or long-term average daily intake, which is impossible to measure. Thus, usual dietary intake is assessed with considerable measurement error. Also, diet represents numerous foods, nutrients and other components, each of which has distinctive attributes. Sometimes, it is useful to examine intake of these components separately, but increasingly nutritionists are interested in exploring them collectively to capture overall dietary patterns. Consumption of these components varies widely: some are consumed daily by almost everyone, while others are episodically consumed, so that 24-hour recall data are zero-inflated. In addition, they are often correlated with each other. Finally, it is often preferable to analyze the amount of a dietary component relative to the amount of energy (calories) in a diet because dietary recommendations often vary with energy level. The quest to understand overall dietary patterns of usual intake has to this point reached a standstill. There are no statistical …
On automating Markov chain Monte Carlo for a class of spatial models
Cited by 4 (2 self)
Markov chain Monte Carlo (MCMC) algorithms provide a very general recipe for estimating properties of complicated distributions. While their use has become commonplace and there is a large literature on MCMC theory and practice, MCMC users still have to contend with several challenges with each implementation of the algorithm. These challenges include determining how to construct an efficient algorithm, finding reasonable starting values, deciding whether the sample-based estimates are accurate, and determining an appropriate length (stopping rule) for the Markov chain. We describe an approach for resolving these issues in a theoretically sound fashion in the context of spatial generalized linear models, an important class of models that result in challenging posterior distributions. Our approach combines analytical approximations for constructing provably fast mixing MCMC algorithms, and takes advantage of recent developments in MCMC theory. We apply our methods to real data examples, and find that our MCMC algorithm is automated and efficient. Furthermore, since starting values, rigorous error estimates and theoretically justified stopping rules for the sampling algorithm are all easily obtained for our examples, our MCMC-based estimation is practically as easy to perform as Monte Carlo estimation based on independent and identically distributed draws.