Results 1-10 of 12
Markov Chain Monte Carlo Convergence Diagnostics: A Comparative Review
 Journal of the American Statistical Association
, 1996
Abstract

Cited by 223 (6 self)
A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler conver...
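One of the simplest diagnostics of this output-based kind can be sketched in a few lines: a between- versus within-chain variance comparison in the spirit of the Gelman-Rubin diagnostic, run here on hypothetical simulated chains rather than real sampler output.

```python
import random
import statistics

def gelman_rubin(chains):
    """Potential scale reduction factor over m chains of length n.

    Compares between-chain to within-chain variance; values near 1
    suggest the chains have mixed, values well above 1 flag failure.
    """
    m, n = len(chains), len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    grand = statistics.fmean(means)
    b = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)      # between
    w = statistics.fmean(statistics.variance(c) for c in chains)  # within
    var_plus = (n - 1) / n * w + b / n     # pooled variance estimate
    return (var_plus / w) ** 0.5

random.seed(1)
# two well-mixed chains vs. two chains stuck in different modes
mixed = [[random.gauss(0, 1) for _ in range(2000)] for _ in range(2)]
stuck = [[random.gauss(mu, 1) for _ in range(2000)] for mu in (0.0, 5.0)]
print(gelman_rubin(mixed))   # close to 1
print(gelman_rubin(stuck))   # well above 1
```

As the abstract warns, a value near 1 is necessary but not sufficient evidence of convergence; the paper's point is precisely that every such diagnostic can be fooled.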
On the Convergence of Monte Carlo Maximum Likelihood Calculations
 Journal of the Royal Statistical Society B
, 1992
Abstract

Cited by 59 (3 self)
Monte Carlo maximum likelihood for normalized families of distributions (Geyer and Thompson, 1992) can be used for an extremely broad class of models. Given any family {h_θ : θ ∈ Θ} of nonnegative integrable functions, maximum likelihood estimates in the family obtained by normalizing the functions to integrate to one can be approximated by Monte Carlo, the only regularity conditions being a compactification of the parameter space such that the evaluation maps θ ↦ h_θ(x) remain continuous. Then with probability one the Monte Carlo approximant to the log likelihood hypoconverges to the exact log likelihood, its maximizer converges to the exact maximum likelihood estimate, approximations to profile likelihoods hypoconverge to the exact profiles, and level sets of the approximate likelihood (support regions) converge to the exact sets (in Painlevé-Kuratowski set convergence). The same results hold when there are missing data (Thompson and Guo, 1991; Gelfand and Carlin, 19...
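A toy illustration of the idea, under simplifying assumptions not taken from the paper: a one-parameter family h_θ(x) = exp(θx - x²/2), whose normalized version is N(θ, 1), so the exact MLE is the sample mean. The unknown normalizing-constant ratio c(θ)/c(ψ) is estimated by averaging h_θ/h_ψ over draws from the normalized f_ψ (with ψ chosen near a rough estimate, as the method advises), and the resulting Monte Carlo log likelihood is maximized by a crude grid search.

```python
import math
import random

random.seed(0)

def h(theta, x):
    # unnormalized family; normalizing gives the N(theta, 1) density
    return math.exp(theta * x - x * x / 2.0)

data = [random.gauss(1.5, 1.0) for _ in range(200)]   # observed sample
psi = 1.0                                             # pilot reference value
sims = [random.gauss(psi, 1.0) for _ in range(5000)]  # draws from f_psi

def mc_loglik(theta):
    # c(theta)/c(psi) estimated by averaging h_theta/h_psi over the draws
    c_ratio = sum(h(theta, x) / h(psi, x) for x in sims) / len(sims)
    return (sum(math.log(h(theta, y)) for y in data)
            - len(data) * math.log(c_ratio))

grid = [i / 50.0 for i in range(0, 151)]              # theta in [0, 3]
theta_hat = max(grid, key=mc_loglik)
print(theta_hat)   # should land near the exact MLE, the sample mean
```

The abstract's hypoconvergence results are what guarantee that this maximizer approaches the exact MLE as the number of simulation draws grows.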
Non- and Semi-Parametric Estimation of Interaction in Inhomogeneous Point Patterns
, 2000
Abstract

Cited by 43 (17 self)
We develop methods for analysing the 'interaction' or dependence between points in a spatial point pattern, when the pattern is spatially inhomogeneous. Completely nonparametric study of interactions is possible using an analogue of the K-function. Alternatively one may assume a semiparametric model in which a (parametrically specified) homogeneous Markov point process is subjected to (nonparametric) inhomogeneous independent thinning. The effectiveness of these approaches is tested on datasets representing the positions of trees in forests.
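A rough sketch of the nonparametric route, assuming a hypothetical known intensity surface for illustration (in practice the intensity would itself have to be estimated, and edge correction would be applied): each close pair of points is down-weighted by the intensity at both locations, so that for a pattern with no interaction the estimate tracks the Poisson benchmark πr².

```python
import math
import random

random.seed(5)

# Inhomogeneous pattern on the unit square via independent thinning of a
# homogeneous pattern; the intensity surface here is a hypothetical choice.
lam = lambda x, y: 500.0 + 500.0 * x
pts = [(random.random(), random.random()) for _ in range(1000)]
pts = [(x, y) for (x, y) in pts if random.random() < lam(x, y) / 1000.0]

def k_inhom(r):
    """Analogue of the K-function for inhomogeneous patterns: each ordered
    pair closer than r is weighted by 1/(lam(u)*lam(v)); edge correction is
    omitted to keep the sketch short (it biases the estimate down a bit)."""
    total = 0.0
    for i, (xi, yi) in enumerate(pts):
        for j, (xj, yj) in enumerate(pts):
            if i != j and math.hypot(xi - xj, yi - yj) <= r:
                total += 1.0 / (lam(xi, yi) * lam(xj, yj))
    return total          # window area is 1, so no further normalisation

# With no interaction the estimate sits near pi*r**2; clustering pushes it
# above that curve, inhibition below it.
k01 = k_inhom(0.1)
print(k01, math.pi * 0.1 ** 2)
```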
Gibbs Sampling
 Journal of the American Statistical Association
, 1995
Abstract

Cited by 10 (0 self)
... / ∫ f(θ) dθ. To marginalize, say for θ_i, requires h(θ_i) = ∫ f(θ) dθ_(i) / ∫ f(θ) dθ, where θ_(i) denotes all components of θ save θ_i. To obtain E g(θ_i) requires similar integration; to obtain the marginal distribution of, say, g(θ) or its expectation requires similar integration. When p is large (as it will be in the applications we envision) such integration is analytically infeasible (the so-called "curse of dimensionality"). Gibbs sampling provides a Monte Carlo approach for carrying out such integrations. In what sorts of settings would we have need to mar...
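The integrations described here are exactly what Gibbs sampling replaces with averages of draws. A minimal sketch on a toy target whose full conditionals are available in closed form (a standard bivariate normal with correlation 0.8, a hypothetical choice): alternately sampling each coordinate given the other yields draws whose averages approximate the otherwise intractable marginal quantities.

```python
import random

random.seed(42)

rho = 0.8                     # correlation of the bivariate normal target
sd = (1 - rho * rho) ** 0.5   # conditional standard deviation
x, y = 0.0, 0.0
xs = []
for t in range(20000):
    # alternately draw from the full conditionals X | Y and Y | X
    x = random.gauss(rho * y, sd)
    y = random.gauss(rho * x, sd)
    if t >= 1000:             # discard a burn-in period
        xs.append(x)

mean_x = sum(xs) / len(xs)
var_x = sum((v - mean_x) ** 2 for v in xs) / len(xs)
print(mean_x, var_x)   # marginal of X is N(0, 1): expect roughly 0 and 1
```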
MONTE CARLO LIKELIHOOD INFERENCE FOR MISSING DATA MODELS
Abstract

Cited by 4 (2 self)
We describe a Monte Carlo method to approximate the maximum likelihood estimate (MLE), when there are missing data and the observed data likelihood is not available in closed form. This method uses simulated missing data that are independent and identically distributed and independent of the observed data. Our Monte Carlo approximation to the MLE is a consistent and asymptotically normal estimate of the minimizer θ* of the Kullback-Leibler information, as both Monte Carlo and observed data sample sizes go to infinity simultaneously. Plug-in estimates of the asymptotic variance are provided for constructing confidence regions for θ*. We give Logit-Normal generalized linear mixed model examples, calculated using an R package. AMS 2000 subject classifications. Primary 62F12; secondary 65C05. Key words and phrases. Asymptotic theory, Monte Carlo, maximum likelihood, generalized ...
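A toy version of the scheme, under assumptions not taken from the paper (a simple normal latent-variable model and a grid search in place of a proper optimizer): the missing data are simulated i.i.d., independently of the observations, and each observed-data density is approximated by averaging the complete-data density over those simulations.

```python
import math
import random

random.seed(7)

def phi(u):
    # standard normal density
    return math.exp(-u * u / 2.0) / math.sqrt(2.0 * math.pi)

# Hypothetical missing-data model (not the paper's example):
#   latent Z ~ N(0, 1), observed X | Z = z ~ N(theta + z, 1),
# so integrating Z out gives the observed-data law X ~ N(theta, 2)
# and the exact MLE is the sample mean.
theta_true = 2.0
data = [random.gauss(theta_true, math.sqrt(2.0)) for _ in range(100)]
zs = [random.gauss(0.0, 1.0) for _ in range(300)]   # simulated missing data,
                                                    # iid, independent of data

def mc_loglik(theta):
    # Monte Carlo approximation of sum_i log f_theta(x_i)
    return sum(math.log(sum(phi(x - theta - z) for z in zs) / len(zs))
               for x in data)

grid = [i / 20.0 for i in range(0, 81)]             # theta in [0, 4]
theta_hat = max(grid, key=mc_loglik)
print(theta_hat)   # should land near the exact MLE, the sample mean
```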
Extrapolating and Interpolating Spatial Patterns
In Spatial Cluster Modelling, A.B. Lawson and D.G.T. Denison (eds.), Boca Raton: Chapman and Hall/CRC
, 2001
Abstract

Cited by 3 (2 self)
 Add to MetaCart
We discuss issues arising when a spatial pattern is observed within some bounded region of space, and one wishes to predict the process outside of this region (extrapolation) as well as to perform inference on features of the pattern that cannot be observed (interpolation). We focus on spatial cluster analysis. Here the interpolation arises from the fact that the centres of clustering are not observed. We take a Bayesian approach with a repulsive Markov prior, derive the posterior distribution of the complete data, i.e. cluster centres with associated offspring marks, and propose an adaptive coupling from the past algorithm to sample from this posterior. The approach is illustrated by means of the redwood data set (Ripley, 1977).
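The coupling-from-the-past ingredient can be sketched on a deliberately tiny monotone chain (a clamped random walk on {0,…,K}, nothing like the paper's point-process posterior): run coupled chains from the extremal states using shared randomness from further and further in the past, reusing the old randomness at each doubling, and return the common value once they coalesce. The output is an exact draw from the stationary law, here uniform.

```python
import random

K = 10  # state space {0, ..., K}

def step(s, u):
    # monotone update driven by a shared uniform: up if u < 0.5, else down,
    # clamped to the state space; the stationary law is uniform on {0,...,K}
    return min(K, s + 1) if u < 0.5 else max(0, s - 1)

def cftp():
    """Exact draw from the stationary law via coupling from the past."""
    us = []                              # randomness for times -T, ..., -1
    while True:
        # go twice as far into the past, REUSING the old randomness
        us = [random.random() for _ in range(len(us) or 1)] + us
        lo, hi = 0, K                    # extremal starting states
        for u in us:
            lo, hi = step(lo, u), step(hi, u)
        if lo == hi:                     # chains coalesced: exact sample
            return lo

random.seed(9)
draws = [cftp() for _ in range(2000)]
print(sum(draws) / len(draws))   # near K/2 = 5 for the uniform target
```

The reuse of old randomness at each doubling is essential; regenerating it would bias the output toward fast-coalescing paths.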
Method of Moments Using Monte Carlo Simulation
 Journal of Computational and Graphical Statistics
, 1995
Abstract

Cited by 2 (1 self)
 Add to MetaCart
We present a computational approach to the method of moments using Monte Carlo simulation. Simple algebraic identities are used so that all computations can be performed directly using simulation draws and computation of the derivative of the log-likelihood. We present a simple implementation using the Newton-Raphson algorithm, with the understanding that other optimization methods may be used in more complicated problems. The method can be applied to families of distributions with unknown normalizing constants and can be extended to least-squares fitting in the case that the number of moments observed exceeds the number of parameters in the model. The method can be further generalized to allow "moments" that are any function of data and parameters, including as a special case maximum likelihood for models with unknown normalizing constants or missing data. In addition to being used for estimation, our method may be useful for setting the parameters of a Bayes prior distribution by spe...
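A minimal sketch of the approach on a hypothetical one-parameter example (matching the first moment of an exponential model, which is not from the paper): common random numbers are reused across candidate parameter values, so the simulated moment is a smooth function of the parameter and Newton-Raphson on the moment equation behaves as in the deterministic case.

```python
import math
import random

random.seed(3)

# "Observed" data from an exponential distribution with rate 2; the method
# of moments matches the model mean, 1/theta, to the sample mean.
data = [random.expovariate(2.0) for _ in range(1000)]
target = sum(data) / len(data)

# common random numbers: fixed uniforms reused for every candidate theta
us = [random.random() for _ in range(5000)]

def sim_mean(theta):
    # inverse-CDF simulation of Exp(theta) draws from the fixed uniforms
    return sum(-math.log(u) / theta for u in us) / len(us)

theta, h = 1.0, 1e-6
for _ in range(20):          # Newton-Raphson on the moment equation
    f = sim_mean(theta) - target
    fprime = (sim_mean(theta + h) - sim_mean(theta - h)) / (2 * h)
    theta -= f / fprime
print(theta)   # close to the true rate, 2.0
```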
Contributions to the Statistical Modelling of Image Data and Spatial Point Patterns
Computer-Based Statistical Treatment in Models with Incidental Parameters Inspired by Car Crash Data
Abstract
... in recent years. We study computer-intensive methods that can be used in complex situations where it is not possible to express the likelihood estimates or the posterior analytically. The work is inspired by a set of car crash data from real traffic. We formulate and develop a model for car crash data that aims to estimate and compare the relative collision safety among different car models. This model works sufficiently well, although complications arise due to a growing vector of incidental parameters. The bootstrap is shown to be a useful tool for studying uncertainties of the estimates of the structural parameters. This model is further extended to include driver characteristics. In a Poisson model with similar, but simpler structure, estimates of the structural parameter in the presence of incidental parameters are studied. The profile likelihood, bootstrap and the delta method are compared for deterministic and random incidental parameters. The same asymptotic properties, up to first order, are seen for deterministic as well as random ...
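The bootstrap step can be sketched generically, with stand-in Gaussian data in place of the crash data and the sample mean in place of the structural-parameter estimator: resample with replacement, re-estimate, and read off the spread of the replicates as the uncertainty of the original estimate.

```python
import random
import statistics

random.seed(11)

# stand-in data and estimator; the thesis applies the same recipe to the
# structural (safety) parameters of the car crash model
data = [random.gauss(10.0, 2.0) for _ in range(200)]
estimate = statistics.fmean(data)

# nonparametric bootstrap: resample with replacement and re-estimate
replicates = []
for _ in range(2000):
    resample = random.choices(data, k=len(data))
    replicates.append(statistics.fmean(resample))
se_boot = statistics.stdev(replicates)
print(se_boot)   # close to the analytic standard error, 2/sqrt(200)
```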