Results 1-4 of 4
On the Convergence of Monte Carlo Maximum Likelihood Calculations
 Journal of the Royal Statistical Society B
, 1992
Abstract

Cited by 67 (5 self)
Monte Carlo maximum likelihood for normalized families of distributions (Geyer and Thompson, 1992) can be used for an extremely broad class of models. Given any family {h_θ : θ ∈ Θ} of nonnegative integrable functions, maximum likelihood estimates in the family obtained by normalizing the functions to integrate to one can be approximated by Monte Carlo, the only regularity condition being a compactification of the parameter space such that the evaluation maps θ ↦ h_θ(x) remain continuous. Then with probability one the Monte Carlo approximant to the log likelihood hypoconverges to the exact log likelihood, its maximizer converges to the exact maximum likelihood estimate, approximations to profile likelihoods hypoconverge to the exact profiles, and level sets of the approximate likelihood (support regions) converge to the exact sets (in Painlevé-Kuratowski set convergence). The same results hold when there are missing data (Thompson and Guo, 1991; Gelfand and Carlin, 19...
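The core idea in this abstract can be illustrated with a toy sketch (not the paper's own code): for an unnormalized family h_θ, the log likelihood at an observation x is log h_θ(x) − log c(θ), and the intractable ratio c(θ)/c(ψ) is estimated by averaging h_θ(X_i)/h_ψ(X_i) over draws X_i from the normalized reference member h_ψ. The family h_θ(x) = exp(θx − x²/2) below is a hypothetical example chosen so the exact MLE (θ = x) is known.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unnormalized family: h_theta(x) = exp(theta*x - x^2/2).
# Normalizing it gives N(theta, 1), so the exact MLE for one datum is theta = x.
def log_h(theta, x):
    return theta * x - 0.5 * x**2

x_obs = 1.3            # observed data point; exact MLE is theta = 1.3
psi = 0.0              # reference parameter: normalized h_psi is N(0, 1)
sample = rng.standard_normal(100_000)   # draws from the normalized h_psi

def mc_loglik(theta):
    # Monte Carlo log likelihood: log h_theta(x_obs) minus an importance-
    # sampling estimate of log c(theta)/c(psi).
    log_ratio = log_h(theta, sample) - log_h(psi, sample)
    log_c_ratio = np.log(np.mean(np.exp(log_ratio)))
    return log_h(theta, x_obs) - log_c_ratio

# Maximize over a grid; the approximant's maximizer converges to the exact MLE.
grid = np.linspace(-1.0, 3.0, 401)
theta_hat = grid[np.argmax([mc_loglik(t) for t in grid])]
print(theta_hat)   # close to x_obs = 1.3
```

This is only a one-parameter sketch of the convergence claim; the paper's result covers general families, profile likelihoods, and support regions.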
Estimating Normalizing Constants and Reweighting Mixtures in Markov Chain Monte Carlo
, 1994
Abstract

Cited by 45 (0 self)
Markov chain Monte Carlo (the Metropolis-Hastings algorithm and the Gibbs sampler) is a general multivariate simulation method that permits sampling from any stochastic process whose density is known up to a constant of proportionality. It has recently received much attention as a method of carrying out Bayesian, likelihood, and frequentist inference in analytically intractable problems. Although many applications of Markov chain Monte Carlo do not need estimation of normalizing constants, three do: calculation of Bayes factors, calculation of likelihoods in the presence of missing data, and importance sampling from mixtures. Here reverse logistic regression is proposed as a solution to the problem of estimating normalizing constants, and convergence and asymptotic normality of the estimates are proved under very weak regularity conditions. Markov chain Monte Carlo is most useful when combined with importance reweighting so that a Monte Carlo sample from one distribution can be used fo...
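The normalizing-constant problem the abstract poses can be seen in its simplest form with a plain importance-sampling estimate (a simpler stand-in for the paper's reverse logistic regression, which handles several chains at once): given exact draws from one normalized density, the ratio of normalizing constants c₁/c₀ is the sample mean of h₁(X)/h₀(X). The two densities below are hypothetical examples with known constants, so the estimate can be checked.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical unnormalized densities on the real line:
# h0(x) = exp(-x^2/2)       -> c0 = sqrt(2*pi)
# h1(x) = exp(-(x - 1)^2)   -> c1 = sqrt(pi)
def h0(x):
    return np.exp(-0.5 * x**2)

def h1(x):
    return np.exp(-(x - 1.0)**2)

x = rng.standard_normal(200_000)      # exact draws from the normalized h0

# Importance-sampling estimate of c1/c0 = E_0[h1(X)/h0(X)].
ratio_hat = np.mean(h1(x) / h0(x))

true_ratio = np.sqrt(np.pi) / np.sqrt(2 * np.pi)   # = 1/sqrt(2)
print(ratio_hat, true_ratio)
```

This works here because h₁ is lighter-tailed than the sampling density h₀; the paper's method addresses the harder setting of MCMC samples from several distributions whose constants are all unknown.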
Reweighting Monte Carlo Mixtures
 J. AMER. STATIST. ASSOC
, 1991
Abstract

Cited by 9 (1 self)
Markov chain Monte Carlo (e.g., the Metropolis algorithm, Hastings algorithm, and Gibbs sampler) is a general multivariate simulation method applicable to a wide range of problems. It permits sampling from any stochastic process whose density is known up to a constant of proportionality. The Gibbs sampler has recently received much attention as a method of simulating from posterior distributions in Bayesian inference, but Markov chain Monte Carlo is no less important in frequentist inference with applications in maximum likelihood, hypothesis testing, and the parametric bootstrap. It is most useful when combined with importance reweighting so that a Monte Carlo sample from one distribution can be used for inference about many distributions. In Bayesian inference, reweighting permits the calculation of posteriors corresponding to a range of priors using a Monte Carlo sample from just one posterior. In likelihood inference, reweighting permits the calculation of the whole likelihood function using a Monte Carlo sample from just one distribution in the model. Given this estimate of the likelihood, a parametric bootstrap calculation of the sampling distribution of the maximum likelihood estimate can be done using just one more Monte Carlo sample. Although reweighting can save much calculation, it does not work well unless the distribution being reweighted places appreciable mass in all regions of interest. Hence it is often not advisable to sample from a distribution in the model. Reweighting a mixture of distributions in the model may perform much better. But using such a mixture gives rise to another problem when the densities are known only up to constants of proportionality. These normalizing constants must be calculated to obtain the mixture density. Direct Monte Carl...
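The basic reweighting step the abstract describes, before the mixture refinement, can be sketched as follows: one sample from a single member of a family, combined with self-normalized importance weights, yields expectations under any other member, even when densities are known only up to proportionality. The exponential-tilting family below is a hypothetical example with known answers (the mean under h_θ is exactly θ), not the paper's own model.

```python
import numpy as np

rng = np.random.default_rng(2)

# One Monte Carlo sample from the theta = 0 member of the hypothetical
# family h_theta(x) = exp(theta*x - x^2/2), i.e. from N(0, 1).
x = rng.standard_normal(100_000)

def mean_under(theta):
    # Self-normalized importance weights: w_i proportional to
    # h_theta(x_i) / h_0(x_i) = exp(theta * x_i); the unknown
    # normalizing constants cancel in the normalization.
    w = np.exp(theta * x)
    w /= w.sum()
    return np.sum(w * x)    # estimates E_theta[X], which equals theta here

print(mean_under(0.5), mean_under(-1.0))
```

As the abstract warns, this works only for θ where N(0, 1) places appreciable mass; for distant θ the weights degenerate, which is exactly the motivation for reweighting a mixture instead of a single distribution.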
Discussion of the paper "Markov chains for exploring posterior distributions" by Luke Tierney
 Leipzig und Berlin
, 1994
Abstract

Cited by 4 (0 self)
this paper, which even before its appearance has done a valuable service in clarifying both theory and practice in this important area. For example, the discussion of combining strategies in Section 2.4 helped researchers break away from pure Gibbs sampling in 1991; it was, for example, part of the reasoning that led to the "Metropolis-coupled" scheme of Geyer (1991) mentioned at the end of Section 2.3.3.