Results 1–10 of 49
Regeneration in Markov Chain Samplers
, 1994
Abstract

Cited by 87 (5 self)
Markov chain sampling has received considerable attention in the recent literature, in particular in the context of Bayesian computation and maximum likelihood estimation. This paper discusses the use of Markov chain splitting, originally developed as a tool for the theoretical analysis of general state space Markov chains, to introduce regeneration times into Markov chain samplers. This allows the use of regenerative methods for analyzing the output of these samplers, and can also provide a useful diagnostic of the performance of the samplers. The general approach is applied to several different samplers and is illustrated in a number of examples.

1 Introduction

In Markov chain Monte Carlo, a distribution π is examined by obtaining sample paths from a Markov chain constructed to have equilibrium distribution π. This approach was introduced by Metropolis et al. (1953) and has recently received considerable attention as a method for examining posterior distributions in Bayesian infer...
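The regenerative output analysis the abstract refers to can be sketched as follows. This is a minimal illustration, assuming the regeneration times have already been identified (the paper's splitting construction is what supplies them); the function name and interface are hypothetical.

```python
import numpy as np

def regenerative_estimate(h_path, regen_times):
    """Regenerative output analysis for a Markov chain sampler.

    h_path      : values h(X_t) along the sample path
    regen_times : indices at which the chain regenerates, so the tours
                  h_path[T_i:T_{i+1}] are i.i.d. blocks

    Returns the ratio estimate of E_pi[h] and a standard error from the
    classical regenerative (ratio-estimator) variance approximation.
    """
    h_path = np.asarray(h_path, dtype=float)
    starts, ends = regen_times[:-1], regen_times[1:]
    Y = np.array([h_path[a:b].sum() for a, b in zip(starts, ends)])   # tour sums
    N = np.array([b - a for a, b in zip(starts, ends)], dtype=float)  # tour lengths
    mu = Y.sum() / N.sum()                  # ratio estimate of the mean
    Z = Y - mu * N                          # approximately mean-zero tour residuals
    se = np.sqrt(np.sum(Z ** 2)) / N.sum()  # ratio-estimator standard error
    return mu, se
```

In the degenerate case where the chain regenerates at every step, the tours are single i.i.d. draws and the estimate reduces to the ordinary sample mean and its standard error.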
ML parameter estimation for Markov random fields, with applications to Bayesian tomography
 IEEE Trans. on Image Processing
, 1998
Abstract

Cited by 49 (18 self)
Markov random fields (MRF) have been widely used to model images in Bayesian frameworks for image reconstruction and restoration. Typically, these MRF models have parameters that allow the prior model to be adjusted for best performance. However, optimal estimation of these parameters (sometimes referred to as hyperparameters) is difficult in practice for two reasons: 1) direct parameter estimation for MRFs is known to be mathematically and numerically challenging; 2) parameters cannot be directly estimated because the true image cross-section is unavailable. In this paper, we propose a computationally efficient scheme to address both these difficulties for a general class of MRF models, and we derive specific methods of parameter estimation for the MRF model known as a generalized Gaussian MRF (GGMRF). The first section of the paper derives methods of direct estimation of scale and shape parameters for a general continuously valued MRF. For the GGMRF case, we show that the ML estimate of the scale parameter, σ, has a simple closed-form solution, and we present an efficient scheme for computing the ML estimate of the shape parameter, p, by an offline numerical computation of the dependence of the partition function on p.
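The kind of closed form described for the scale parameter can be illustrated as follows. Assuming the prior has the scale-family form p(x|σ) ∝ σ^(−N) exp(−u(x)/σ^p) with energy u positively homogeneous of degree p (as in the GGMRF), setting the derivative of the log-likelihood to zero gives σ̂^p = (p/N)·u(x). The sketch below is a hypothetical illustration of that identity, not the paper's code.

```python
import numpy as np

def ggmrf_scale_mle(x, p, energy):
    """ML estimate of a scale parameter sigma, assuming the density is
    proportional to sigma^(-N) * exp(-energy(x) / sigma^p) with energy()
    positively homogeneous of degree p.  The log-likelihood is
    -N*log(sigma) - energy(x)/sigma^p + const; its stationary point is
    sigma^p = (p / N) * energy(x)."""
    N = x.size
    return (p * energy(x) / N) ** (1.0 / p)
```

As a sanity check, for p = 2 and energy u(x) = (1/2)Σx_i² this reduces to the familiar Gaussian result σ̂² = Σx_i²/N.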
Non- and Semi-Parametric Estimation of Interaction in Inhomogeneous Point Patterns
, 2000
Abstract

Cited by 43 (17 self)
We develop methods for analysing the 'interaction' or dependence between points in a spatial point pattern, when the pattern is spatially inhomogeneous. Completely non-parametric study of interactions is possible using an analogue of the K-function. Alternatively one may assume a semi-parametric model in which a (parametrically specified) homogeneous Markov point process is subjected to (non-parametric) inhomogeneous independent thinning. The effectiveness of these approaches is tested on datasets representing the positions of trees in forests.
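The inhomogeneous analogue of the K-function weights each pair of points by the reciprocal of the intensity at both locations, so that intensity variation does not masquerade as interaction. A minimal sketch (no edge correction; the intensity function is assumed known or estimated separately; names are illustrative):

```python
import numpy as np

def k_inhom(points, intensity, r, area):
    """Naive inhomogeneous K-function estimate at distance r: every
    ordered pair (i, j) with d(x_i, x_j) <= r contributes
    1 / (lambda(x_i) * lambda(x_j)), normalized by the window area.
    No edge correction is applied."""
    points = np.asarray(points, dtype=float)
    lam = np.array([intensity(p) for p in points])
    # pairwise distance matrix, with the diagonal (i == j) excluded
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pair = (d <= r) & ~np.eye(len(points), dtype=bool)
    w = np.outer(1.0 / lam, 1.0 / lam)
    return np.sum(w[pair]) / area
```

With a constant intensity this collapses to (a rescaling of) the ordinary empirical K-function, which is the consistency one would expect of the analogue.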
Inference in Curved Exponential Family Models for Networks
 Journal of Computational and Graphical Statistics
, 2006
Abstract

Cited by 42 (9 self)
Network data arise in a wide variety of applications. Although descriptive statistics for networks abound in the literature, the science of fitting statistical models to complex network data is still in its infancy. The models considered in this article are based on exponential families; therefore, we refer to them as exponential random graph models (ERGMs). Although ERGMs are easy to postulate, maximum likelihood estimation of parameters in these models is very difficult. In this article, we first review the method of maximum likelihood estimation using Markov chain Monte Carlo in the context of fitting linear ERGMs. We then extend this methodology to the situation where the model comes from a curved exponential family. The curved exponential family methodology is applied to new specifications of ERGMs, proposed by Snijders et al. (2004), having nonlinear parameters to represent structural properties of networks such as transitivity and heterogeneity of degrees. We review the difficult topic of implementing likelihood ratio tests for these models, then apply all these model-fitting and testing techniques to the estimation of linear and nonlinear parameters for a collaboration network between partners in a New England law firm.
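For the linear-ERGM case, MCMC maximum likelihood rests on the importance-sampling identity log L(θ) − log L(θ₀) = (θ − θ₀)ᵀs(y_obs) − log E_{θ₀}[exp((θ − θ₀)ᵀs(Y))], where s(·) is the vector of sufficient network statistics and the expectation is approximated from graphs simulated at θ₀. A hedged sketch of that identity (function and argument names are hypothetical):

```python
import numpy as np

def ergm_loglik_ratio(theta, theta0, s_obs, s_sim):
    """Monte Carlo approximation of log L(theta) - log L(theta0) for an
    exponential-family model p(y | theta) proportional to exp(theta . s(y)).
    s_sim holds rows s(y_i) for graphs y_i simulated from the model at
    theta0 (e.g. by MCMC); the log-mean-exp term estimates the log ratio
    of normalizing constants."""
    d = np.asarray(theta, float) - np.asarray(theta0, float)
    log_ratio_of_normalizers = np.log(np.mean(np.exp(s_sim @ d)))
    return d @ np.asarray(s_obs, float) - log_ratio_of_normalizers
```

Maximizing this surrogate over θ (with θ₀ held fixed, and refreshed when θ drifts far from it) gives the MCMC MLE; the curved-family extension in the article replaces θ by a nonlinear map of lower-dimensional parameters.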
Asymptotic properties of the maximum likelihood estimator in autoregressive models with Markov regime
 ANN. STATIST
, 2004
Abstract

Cited by 34 (6 self)
An autoregressive process with Markov regime is an autoregressive process for which the regression function at each time point is given by a nonobservable Markov chain. In this paper we consider the asymptotic properties of the maximum likelihood estimator in a possibly nonstationary process of this kind for which the hidden state space is compact but not necessarily finite. Consistency and asymptotic normality are shown to follow from uniform exponential forgetting of the initial distribution for the hidden Markov chain conditional on the observations.
Stochastic Approximation in Monte Carlo Computation
, 2006
Abstract

Cited by 23 (13 self)
The Wang-Landau algorithm is an adaptive Markov chain Monte Carlo algorithm for calculating the spectral density of a physical system. A remarkable feature of the algorithm is that it is not trapped by local energy minima, which is very important for systems with rugged energy landscapes. This feature has led to many successful applications of the algorithm in statistical physics and biophysics. However, no rigorous theory exists to support its convergence, and the estimates produced by the algorithm can only reach a limited statistical accuracy. In this paper, we propose the stochastic approximation Monte Carlo (SAMC) algorithm, which overcomes the shortcomings of the Wang-Landau algorithm. We establish a theorem concerning its convergence. The estimates produced by SAMC can be improved continuously as the simulation goes on. SAMC also extends applications of the Wang-Landau algorithm to continuum systems. The potential uses of SAMC in statistics are discussed through two classes of applications, importance sampling and model selection. The results show that SAMC can work as a general importance...
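The core of a SAMC-style sampler can be sketched as a Metropolis step targeting the reweighted density π(x)/exp(θ_{J(x)}) interleaved with a Robbins-Monro update of the log-weights θ, where J(x) is the index of the subregion containing x. The sketch below is illustrative only: the gain sequence, the uniform desired frequencies, and all names are assumptions, not the paper's implementation.

```python
import numpy as np

def samc(logpi, partition, propose, x0, m, n_iter, t0=10.0, seed=0):
    """Stochastic approximation Monte Carlo (sketch).
    partition(x) -> subregion index in 0..m-1; theta holds log-weights
    adapted so that each subregion is visited with equal frequency,
    which prevents the chain from being trapped in low-probability regions."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(m)               # log-weights, one per subregion
    pi_des = np.full(m, 1.0 / m)      # desired visiting frequencies
    x, jx, lpx = x0, partition(x0), logpi(x0)
    for t in range(1, n_iter + 1):
        y = propose(x, rng)
        jy, lpy = partition(y), logpi(y)
        # Metropolis ratio for the reweighted target pi(x) / exp(theta_J(x))
        if np.log(rng.random()) < lpy - lpx + theta[jx] - theta[jy]:
            x, jx, lpx = y, jy, lpy
        # Robbins-Monro update with a decreasing gain sequence
        gain = t0 / max(t, t0)
        theta += gain * ((np.arange(m) == jx) - pi_des)
    return theta - theta.max()        # weights identified up to a constant
```

After convergence, θ_i estimates (up to an additive constant) the log-probability of subregion i under π, which is the link to spectral-density estimation.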
Hyperparameter estimation for satellite image restoration using a MCMC Maximum Likelihood method
 Pattern Recognition
, 2000
Abstract

Cited by 22 (7 self)
The satellite image deconvolution problem is ill-posed and must be regularized. Herein, we use an edge-preserving regularization model based on a ϕ-function, involving two hyperparameters. Our goal is to estimate the optimal parameters in order to automatically reconstruct images. We propose to use the Maximum Likelihood Estimator (MLE), applied to the observed image, which requires sampling from the prior and posterior distributions. Since the convolution prevents the use of standard samplers, we have developed a modified Geman-Yang algorithm, using an auxiliary variable and a cosine transform. We present a Markov Chain Monte Carlo Maximum Likelihood (MCMC-ML) technique which is able to achieve the estimation and the reconstruction simultaneously.
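The MCMC-ML principle can be illustrated with the standard gradient identity: for a prior p(x|λ) = exp(−λU(x))/Z(λ), the gradient of the marginal log-likelihood of the observed image y is d/dλ log p(y|λ) = E_prior[U] − E_posterior[U], so each ascent step needs one sample average under the prior and one under the posterior. A hypothetical sketch of that step (in the paper, the modified Geman-Yang sampler is what would supply the samples):

```python
import numpy as np

def mle_gradient(u_prior, u_post):
    """d/dlambda of log p(y | lambda) for a prior exp(-lambda*U(x))/Z(lambda):
    E_prior[U(X)] - E_posterior[U(X) | y], approximated here by the sample
    means of U over draws from the two distributions."""
    return np.mean(u_prior) - np.mean(u_post)

def ascend(u_prior_fn, u_post_fn, lam0, step, n_iter):
    """Plain gradient ascent on the marginal log-likelihood; u_prior_fn(lam)
    and u_post_fn(lam) return arrays of U-values sampled at the current
    hyperparameter value."""
    lam = lam0
    for _ in range(n_iter):
        lam += step * mle_gradient(u_prior_fn(lam), u_post_fn(lam))
    return lam
```

At the MLE the two expectations balance, so the prior "predicts" the same average energy as the posterior; this moment-matching view is what makes the estimation and reconstruction fit together in one loop.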
Optimal Design via Curve Fitting of Monte Carlo Experiments
, 1996
Abstract

Cited by 18 (4 self)
This paper explores numerical methods for stochastic optimization, with special attention to Bayesian design problems. A common and challenging situation occurs when the objective function (in Bayesian applications the expected utility) is very expensive to evaluate, perhaps because it requires integration over a space of very large dimensionality. Our goal is to explore a class of optimization algorithms designed to gain efficiency in such situations, by exploiting smoothness of the expected utility surface and borrowing information from neighboring design points. The central idea is that of implementing stochastic optimization by curve fitting of Monte Carlo samples. This is done by simulating draws from the joint parameter/sample space and evaluating the observed utilities. Fitting a smooth surface through these simulated points serves as an estimate for the expected utility surface. The optimal design can then be found deterministically. In this paper we introduce a general algorithm for curve-fitting-based optimization, we discuss implementation options, and we present a consistency property for one particular implementation of the algorithm. To illustrate the advantages and limitations of curve-fitting-based optimization, and compare it with some of the alternatives, we consider in detail three important practical applications. The first is an information-theoretical stopping rule for a clinical trial. The objective function is based on the expected amount of information acquired about a subvector of parameters of interest. The second is concerned with the timing of examination for the early detection of breast cancer in mass screening programs. It involves a two-dimensional optimization and an objective function embodying a cost-benefit analysis. The third applicat...
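The central idea of the abstract can be sketched in a few lines: evaluate the noisy utility at a handful of designs, fit a smooth surface through the noisy means, and then maximize the fit deterministically. The polynomial surrogate and one-dimensional design space below are simplifying assumptions for illustration, not the paper's general algorithm.

```python
import numpy as np

def curve_fit_optimize(noisy_utility, designs, n_mc, degree=2, seed=0):
    """Curve-fitting-based stochastic optimization (sketch):
    1) average n_mc noisy evaluations of the utility at each design,
    2) fit a smooth (here polynomial) surface through the noisy means,
    3) maximize the fitted surface deterministically on a fine grid."""
    rng = np.random.default_rng(seed)
    designs = np.asarray(designs, dtype=float)
    u_hat = np.array([np.mean([noisy_utility(d, rng) for _ in range(n_mc)])
                      for d in designs])
    coef = np.polyfit(designs, u_hat, degree)
    grid = np.linspace(designs.min(), designs.max(), 1001)
    return grid[np.argmax(np.polyval(coef, grid))]
```

Because the fit borrows strength across neighboring design points, the optimum can be located accurately even when each individual Monte Carlo estimate of the utility is quite noisy.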
Stochastic approximation algorithms for partition function estimation of Gibbs random fields
 IEEE Trans. Inform. Theory
, 1997
Abstract

Cited by 16 (0 self)
We present an analysis of recently proposed Monte Carlo algorithms for estimating the partition function of a Gibbs random field. We show that this problem reduces to estimating one or more expectations of suitable functionals of the Gibbs states with respect to properly chosen Gibbs distributions. As expected, the resulting estimators are consistent. Certain generalizations are also provided. We study computational complexity with respect to grid size and show that Monte Carlo partition function estimation algorithms can be classified into two categories: E-Type algorithms, which are of exponential complexity, and P-Type algorithms, which are of polynomial complexity, Turing reducible to the problem of sampling from the Gibbs distribution. E-Type algorithms require estimating a single expectation, whereas P-Type algorithms require estimating a number of expectations with respect to Gibbs distributions which are chosen to be sufficiently “close” to each other. In the latter case, the required number of expectations is of polynomial order with respect to grid size. We compare computational complexity by using both theoretical results and simulation experiments. We determine the most efficient E-Type and P-Type algorithms and conclude that P-Type algorithms are more appropriate for partition function estimation. We finally suggest a practical and efficient P-Type algorithm for this task. Index Terms—Computational complexity, Gibbs random fields, importance sampling, Monte Carlo simulations, partition function estimation, stochastic approximation.
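A P-Type scheme of the kind described can be sketched with the standard importance-sampling identity Z(β')/Z(β) = E_β[exp(−(β' − β)·U(X))]: the log partition function is accumulated as a telescoping sum over a ladder of nearby inverse temperatures, each step estimated from Gibbs samples at the previous temperature. This is a generic sketch of the idea, not the specific algorithm the paper recommends.

```python
import numpy as np

def log_partition_ratio(betas, energy_samples):
    """Accumulate log Z(beta_K) - log Z(beta_0) over a ladder of nearby
    inverse temperatures, using Z(b')/Z(b) = E_b[exp(-(b' - b) * U(X))].
    energy_samples[k] holds energies U(x) of samples drawn from the Gibbs
    distribution at inverse temperature betas[k]."""
    total = 0.0
    for k in range(len(betas) - 1):
        dbeta = betas[k + 1] - betas[k]
        u = np.asarray(energy_samples[k], dtype=float)
        total += np.log(np.mean(np.exp(-dbeta * u)))
    return total
```

Keeping the rungs of the ladder close is exactly the "sufficiently close" condition in the abstract: it keeps the importance weights exp(−Δβ·U) well behaved, at the cost of a (polynomial) number of expectations to estimate.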
Adaptively scaling the Metropolis algorithm using expected squared jumped distance
, 2003
Abstract

Cited by 14 (0 self)
Using existing theory on efficient jumping rules and on adaptive MCMC, we construct and demonstrate the effectiveness of a workable scheme for improving the efficiency of Metropolis algorithms. A good choice of the proposal distribution is crucial for the rapid convergence of the Metropolis algorithm. In this paper, given a family of parametric Markovian kernels, we develop an algorithm for optimizing the kernel by maximizing the expected squared jumped distance, an objective function that characterizes the Markov chain under its d-dimensional stationary distribution. The algorithm uses the information accumulated by a single path and adapts the choice of the parametric kernel in the direction of the local maximum of the objective function using multiple importance sampling techniques. We follow a two-stage approach: a series of adaptive optimization steps followed by an MCMC run with fixed kernel. It is not necessary for the adaptation itself to converge. Using several examples, we demonstrate the effectiveness of our method, even for cases in which the Metropolis transition kernel is initialized at very poor values.
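The objective function can be made concrete with a small sketch: the expected squared jumped distance (ESJD) of a random-walk Metropolis chain is estimated in Rao-Blackwellized form as the average of α(x, y)·||y − x||², and the proposal scale with the largest ESJD is kept for the final fixed-kernel run. Note this pilot-run grid search is a simplified variant for illustration; the paper adapts along a single path via multiple importance sampling.

```python
import numpy as np

def esjd(logpi, scale, x0, n, rng):
    """Rao-Blackwellized expected squared jumping distance of a
    random-walk Metropolis chain with Gaussian proposal sd `scale`:
    each step contributes alpha(x, y) * ||y - x||^2."""
    x, lp = x0, logpi(x0)
    total = 0.0
    for _ in range(n):
        y = x + scale * rng.normal(size=x.shape)
        lpy = logpi(y)
        alpha = np.exp(min(0.0, lpy - lp))      # Metropolis acceptance probability
        total += alpha * float(np.sum((y - x) ** 2))
        if rng.random() < alpha:
            x, lp = y, lpy
    return total / n

def tune_scale(logpi, scales, x0, n, seed=0):
    """Two-stage sketch: score each candidate proposal scale by ESJD on a
    pilot run, then keep the best scale for the final fixed-kernel run."""
    rng = np.random.default_rng(seed)
    return max(scales, key=lambda s: esjd(logpi, s, x0.copy(), n, rng))
```

For a one-dimensional standard normal target, ESJD is small both for tiny scales (small moves) and for very large scales (rare acceptances), so the tuner picks an intermediate value, consistent with the classical guidance of a proposal sd near 2.4 in one dimension.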