Results 1–10 of 46
Markov chains for exploring posterior distributions
Annals of Statistics, 1994
Cited by 753 (6 self)
Markov chain Monte Carlo convergence diagnostics
JASA, 1996
Cited by 232 (6 self)
Abstract
A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including applying diagnostic procedures to a small number of parallel chains, monitoring autocorrelations and cross-correlations, and modifying parameterizations or sampling algorithms appropriately. We emphasize, however, that it is not possible to say with certainty that a finite sample from an MCMC algorithm is representative of an underlying stationary distribution.
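As a concrete illustration of applying a diagnostic to a small number of parallel chains, here is a minimal sketch of one widely used diagnostic of this kind, the Gelman–Rubin potential scale reduction factor; the function name and the toy Gaussian targets are our own choices, not from the paper.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m parallel chains.

    chains: array of shape (m, n) -- m chains, n draws each.
    Values near 1 suggest (but, as the paper stresses, never prove) convergence.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(0)
# Two chains exploring the same N(0,1) target: R-hat close to 1.
mixed = rng.normal(0.0, 1.0, size=(2, 5000))
# Two chains stuck in well-separated modes: R-hat far above 1.
stuck = np.stack([rng.normal(-3, 1, 5000), rng.normal(3, 1, 5000)])
print(gelman_rubin(mixed))   # ~ 1.0
print(gelman_rubin(stuck))   # >> 1
```

The stuck-chains case shows why parallel chains help: a single stuck chain would look locally stationary, while the between-chain variance exposes the failure.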
Rates of convergence of the Hastings and Metropolis algorithms
Annals of Statistics, 1996
Cited by 157 (15 self)
Abstract
We apply recent results in Markov chain theory to Hastings and Metropolis algorithms with either independent or symmetric candidate distributions, and provide necessary and sufficient conditions for the algorithms to converge at a geometric rate to a prescribed distribution π. In the independence case (in ℝ^k) these indicate that geometric convergence essentially occurs if and only if the candidate density is bounded below by a multiple of π; in the symmetric case (in ℝ only) we show geometric convergence essentially occurs if and only if π has geometric tails. We also evaluate recently developed computable bounds on the rates of convergence in this context: examples show that these theoretical bounds can be inherently extremely conservative, although when the chain is stochastically monotone the bounds may well be effective.
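The independence case can be illustrated directly. Below is a minimal sketch (our own, not the authors' code) of an independence Hastings sampler for a standard normal target with a Student-t(3) candidate; because the heavy-tailed candidate density is bounded below by a multiple of the target, the stated condition for geometric convergence holds.

```python
import numpy as np

def independence_mh(logpi, logq, sample_q, n, rng):
    """Independence Metropolis-Hastings: candidates are drawn ignoring the
    current state, accepted with prob min(1, [pi(x')q(x)] / [pi(x)q(x')])."""
    x = sample_q(rng)
    out = np.empty(n)
    for i in range(n):
        xp = sample_q(rng)
        log_alpha = (logpi(xp) - logpi(x)) + (logq(x) - logq(xp))
        if np.log(rng.uniform()) < log_alpha:
            x = xp
        out[i] = x
    return out

rng = np.random.default_rng(1)
logpi = lambda x: -0.5 * x * x                 # N(0,1) target, unnormalised
# t(3) candidate has heavier tails than pi, so q >= c * pi for some c > 0.
logq = lambda x: -2.0 * np.log1p(x * x / 3.0)  # t(3) log density, unnormalised
draws = independence_mh(logpi, logq, lambda r: r.standard_t(3), 20000, rng)
print(draws.mean(), draws.std())   # near 0 and 1
```

Swapping in a candidate with lighter tails than the target (say N(0, 0.25)) breaks the bounded-below condition and the sampler can stall for long stretches in the tails.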
An exact likelihood analysis of the multinomial probit model
1994
Cited by 90 (4 self)
Abstract
We develop new methods for conducting a finite sample, likelihood-based analysis of the multinomial probit model. Using a variant of the Gibbs sampler, an algorithm is developed to draw from the exact posterior of the multinomial probit model with correlated errors. This approach avoids direct evaluation of the likelihood and, thus, avoids the problems associated with calculating choice probabilities which affect both the standard likelihood and method of simulated moments approaches. Both simulated and actual consumer panel data are used to fit six-dimensional choice models. We also develop methods for analyzing random coefficient and multiperiod probit models.
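The data-augmentation idea can be sketched in the simpler binary-probit setting (an Albert–Chib-style sampler, our own simplification of the paper's multinomial algorithm, with a flat prior on the coefficients): draw latent utilities from truncated normals, then draw the coefficients from an ordinary normal posterior, never evaluating a choice probability.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter, rng):
    """Data-augmentation Gibbs for binary probit with a flat prior on beta.

    Latent z_i ~ N(x_i' beta, 1), truncated to (0, inf) if y_i = 1 and to
    (-inf, 0) if y_i = 0; beta | z is then a standard normal linear-model draw.
    """
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    chol = np.linalg.cholesky(XtX_inv)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        mu = X @ beta
        # Truncation bounds in standardised units, vectorised over observations.
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        # beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) under the flat prior.
        beta = XtX_inv @ (X.T @ z) + chol @ rng.standard_normal(p)
        draws[t] = beta
    return draws

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
true_beta = np.array([0.5, 1.0])
y = (X @ true_beta + rng.normal(size=500) > 0).astype(int)
draws = probit_gibbs(X, y, 2000, rng)
print(draws[500:].mean(axis=0))   # roughly recovers true_beta
```

The multinomial case in the paper replaces the scalar latent utility with a correlated vector of utilities per choice occasion, but the alternating draw-latents / draw-coefficients structure is the same.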
Convergence rates of Markov chains
1995
Cited by 62 (4 self)
Abstract
In this paper, we attempt to describe various mathematical techniques which have been used to bound such rates of convergence. In particular, we describe eigenvalue analysis, random walks on groups, coupling, and minorization conditions. Connections are made to modern areas of research wherever possible. Elements of linear algebra, probability theory, group theory, and measure theory are used, but efforts are made to keep the presentation elementary and accessible.
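Eigenvalue analysis, the first technique listed, is easy to demonstrate on a finite reversible chain, where total-variation distance to stationarity decays geometrically at the second-largest eigenvalue modulus. The three-state birth-death chain below is our own toy example, not one from the paper.

```python
import numpy as np

# A reversible three-state birth-death chain with stationary distribution
# pi = (1/4, 1/2, 1/4); its eigenvalues are exactly 1, 1/2 and 0.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
slem = sorted(np.abs(np.linalg.eigvals(P)))[-2]   # second-largest eigval modulus

pi = np.array([0.25, 0.50, 0.25])
dist = np.array([1.0, 0.0, 0.0])                  # start deterministically at state 0
for n in range(1, 21):
    dist = dist @ P
    tv = 0.5 * np.abs(dist - pi).sum()            # total-variation distance
    assert tv <= 2 * slem ** n                    # geometric decay at rate slem
print(slem)   # ~ 0.5
```

For chains on continuous state spaces the spectrum is rarely computable, which is exactly why the coupling and minorization techniques surveyed in the paper matter.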
Computable bounds for geometric convergence rates of Markov chains
1994
Cited by 49 (6 self)
Abstract
Recent results for geometrically ergodic Markov chains show that there exist constants R < ∞, ρ < 1 such that

  sup_{|f| ≤ V} | ∫ P^n(x, dy) f(y) − ∫ π(dy) f(y) | ≤ R V(x) ρ^n

where π is the invariant probability measure and V is any solution of the drift inequalities

  ∫ P(x, dy) V(y) ≤ λ V(x) + b 1_C(x)

which are known to guarantee geometric convergence for λ < 1, b < ∞ and a suitable small set C. In this paper we identify for the first time computable bounds on R and ρ in terms of λ, b and the minorizing constants which guarantee the smallness of C. In the simplest case where C is an atom α with P(α, α) ≥ δ we can choose any ρ > ϑ, where

  [1 − ϑ]^{−1} = (1 / (1 − λ)^2) [ 1 − λ + b + b^2 + ζ_α (b(1 − λ) + b^2) ]

with

  ζ_α ≤ ((34 − 8δ^2) / δ^3) (b / (1 − λ))^2,

and we can then choose R ≤ ρ / [ρ − ϑ]. The bounds for general small sets C are similar but more complex. We apply these to simple queueing models and Markov chain Mo...
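A hedged numerical reading of the abstract's atomic-case bound, with λ and b the drift constants, δ = P(α, α), and ϑ the resulting rate threshold: plugging in sample values shows how conservative the bound can be in practice. The symbol decoding, the helper function, and the sample constants λ = 0.5, b = 1, δ = 0.5 are all our own.

```python
def geometric_bound(lam, b, delta):
    """Evaluate the atomic-case bound: returns (theta, rho, R) for the
    drift constants lam < 1, b < infinity and atom self-transition prob delta."""
    zeta = (34 - 8 * delta**2) / delta**3 * (b / (1 - lam))**2
    inv_one_minus_theta = (1 / (1 - lam)**2) * (
        1 - lam + b + b**2 + zeta * (b * (1 - lam) + b**2))
    theta = 1 - 1 / inv_one_minus_theta
    rho = (1 + theta) / 2           # any rho > theta is allowed; split the gap
    R = rho / (rho - theta)
    return theta, rho, R

theta, rho, R = geometric_bound(lam=0.5, b=1.0, delta=0.5)
print(theta, rho, R)   # theta sits just below 1: the guaranteed rate is
                       # valid but, as the abstract warns, very conservative
```

Even for these mild constants ϑ lands extremely close to 1, consistent with the previous entry's observation that such computable bounds "can be inherently extremely conservative."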
Rates of Convergence for Gibbs Sampling for Variance Component Models
Annals of Statistics, 1991
Cited by 36 (10 self)
Abstract
This paper analyzes the Gibbs sampler applied to a standard variance component model, and considers the question of how many iterations are required for convergence. It is proved that for K location parameters, with J observations each, the number of iterations required for convergence (for large K and J) is a constant times ...
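For intuition about the model class, here is a toy Gibbs sampler for K location parameters with J observations each; it is our own sketch, with known variances and a flat prior on the grand mean, far simpler than the model analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
K, J = 20, 5                      # K location parameters, J observations each
sig_th, sig_e = 1.0, 1.0          # variances treated as known in this sketch
theta_true = rng.normal(0.0, sig_th, K)
y = theta_true[:, None] + rng.normal(0.0, sig_e, (K, J))

# Gibbs sweep for y_kj = theta_k + eps_kj, theta_k ~ N(mu, sig_th^2), flat mu:
#   theta_k | mu, y ~ N(precision-weighted mean of mu and ybar_k, 1/prec)
#   mu | theta      ~ N(mean(theta), sig_th^2 / K)
ybar = y.mean(axis=1)
prec = 1.0 / sig_th**2 + J / sig_e**2
mu, mus = 0.0, []
for t in range(2000):
    mean_th = (mu / sig_th**2 + J * ybar / sig_e**2) / prec
    theta = mean_th + rng.normal(0.0, 1.0 / np.sqrt(prec), K)
    mu = rng.normal(theta.mean(), sig_th / np.sqrt(K))
    mus.append(mu)
print(np.mean(mus[500:]))   # near the grand mean of the data
```

The paper's contribution is a rigorous bound on how many such sweeps are needed as K and J grow; the sketch above only shows what one sweep looks like.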
Analysis of the Gibbs sampler for a model related to James–Stein estimators
1995
Cited by 34 (15 self)
Abstract
In this paper we investigate the convergence properties of the Gibbs sampler as applied to a particular hierarchical Bayes model. The model is related to James–Stein estimators (James and Stein, 1961; Efron and Morris, 1973, 1975; Morris, 1983). Briefly, James–Stein estimators may be defined as the mean of a certain empirical Bayes posterior distribution (as discussed in the next section). We consider the problem of using the Gibbs sampler as a way of sampling from a richer posterior distribution, as suggested by Jun Liu (personal communication). Such a technique would eliminate the need to estimate a certain parameter empirically and to provide a "guess" at another one, and would give additional information about the distribution of the parameters involved. We consider, in particular, the convergence properties of this Gibbs sampler. For a certain range of prior distributions, we establish (Section 3) rigorous, numerical, reasonable rates of convergence. The bounds are obtained using the methods of Rosenthal (1995b). We thus rigorously bound the running time for this Gibbs sampler to converge to the posterior distribution, within a specified accuracy (as measured by total variation distance). We provide a general formula for this bound, which is of reasonable size, in terms of the prior distribution and the data. This Gibbs sampler is perhaps the most complicated example to date for which reasonable quantitative convergence rates have been obtained. We apply our bounds to the numerical baseball data of Efron and Morris (1975) and Morris (1983), based on batting averages of baseball players, and show that approximately 140 iterations are sufficient to achieve convergence in this case. For a different range of prior distributions, we use the Submartingale Convergence Theo...
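The James–Stein estimator itself, described above as the mean of an empirical Bayes posterior, can be sketched in a few lines; the synthetic data and shrinkage toward zero below are our own illustrative choices, not the paper's baseball data.

```python
import numpy as np

def james_stein(z, sigma2=1.0):
    """Classical James-Stein estimator: shrink each coordinate toward 0.

    Equals the mean of the empirical-Bayes posterior, with the shrinkage
    factor B estimated from the data rather than supplied as a prior guess.
    """
    z = np.asarray(z, dtype=float)
    p = z.size
    B = (p - 2) * sigma2 / np.sum(z**2)   # estimated shrinkage factor
    return (1.0 - B) * z

rng = np.random.default_rng(4)
theta = rng.normal(0.0, 0.5, 50)          # true means, clustered near 0
z = theta + rng.normal(0.0, 1.0, 50)      # one noisy observation per mean
mse_mle = np.mean((z - theta)**2)
mse_js = np.mean((james_stein(z) - theta)**2)
print(mse_js < mse_mle)    # shrinkage beats the raw observations in total risk
```

The richer Gibbs-sampled posterior discussed in the abstract replaces the plug-in shrinkage factor B with a full prior, which is what makes the convergence-rate question nontrivial.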
Methods for Approximating Integrals in Statistics with Special Emphasis on Bayesian Integration Problems
Statistical Science
Cited by 33 (4 self)
Abstract
This paper is a survey of the major techniques and approaches available for the numerical approximation of integrals in statistics. We classify these into five broad categories; namely, asymptotic methods, importance sampling, adaptive importance sampling, multiple quadrature and Markov chain methods. Each method is discussed giving an outline of the basic supporting theory and particular features of the technique. Conclusions are drawn concerning the relative merits of the methods based on the discussion and their application to three examples. The following broad recommendations are made. Asymptotic methods should only be considered in contexts where the integrand has a dominant peak with approximate ellipsoidal symmetry. Importance sampling, and preferably adaptive importance sampling, based on a multivariate Student should be used instead of asymptotic methods in such a context. Multiple quadrature, and in particular subregion adaptive integration, are the algorithms of choice for...
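As an illustration of the survey's recommendation, here is a small self-normalised importance-sampling sketch with a heavy-tailed Student proposal for a one-dimensional Gaussian target; the target, proposal scale, and sample size are our own choices.

```python
import numpy as np

rng = np.random.default_rng(5)
nu, n = 4, 50000

def log_target(x):
    """Unnormalised log density of a N(1, 2^2) 'posterior'."""
    return -0.5 * ((x - 1.0) / 2.0)**2

# Student-t(nu) proposal scaled by 3: heavier tails than the target, so the
# importance weights stay well behaved.
x = rng.standard_t(nu, n) * 3.0
log_q = -0.5 * (nu + 1) * np.log1p((x / 3.0)**2 / nu) - np.log(3.0)

w = np.exp(log_target(x) - log_q)   # normalising constants cancel after
w /= w.sum()                        # self-normalisation of the weights
est_mean = np.sum(w * x)
print(est_mean)   # close to the true posterior mean, 1.0
```

An adaptive variant, as the survey prefers, would refit the proposal's location and scale to the weighted sample and iterate; the static version above already shows why heavy tails matter for weight stability.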