Results 1–10 of 56
General state space Markov chains and MCMC algorithms
 Probability Surveys, 2004
Abstract

Cited by 114 (27 self)
This paper surveys various results about Markov chains on general (non-countable) state spaces. It begins with an introduction to Markov chain Monte Carlo (MCMC) algorithms, which provide the motivation and context for the theory which follows. Then, sufficient conditions for geometric and uniform ergodicity are presented, along with quantitative bounds on the rate of convergence to stationarity. Many of these results are proved using direct coupling constructions based on minorisation and drift conditions. Necessary and sufficient conditions for Central Limit Theorems (CLTs) are also presented, in some cases proved via the Poisson equation or direct regeneration constructions. Finally, optimal scaling and weak convergence results for Metropolis-Hastings algorithms are discussed. None of the results presented is new, though many of the proofs are. We also describe some open problems.
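The Metropolis-Hastings algorithm that motivates this survey can be sketched in a few lines. The following is an illustrative random-walk Metropolis sampler (function and variable names are ours, not from the paper), run on a standard normal target for concreteness:

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis: propose y = x + step * N(0, 1) and accept
    with probability min(1, pi(y) / pi(x)); log_target is the log of an
    unnormalised target density."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    lp = log_target(x)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        y = x + step * rng.standard_normal()
        lp_y = log_target(y)
        if np.log(rng.uniform()) < lp_y - lp:   # MH accept/reject step
            x, lp = y, lp_y
        samples[i] = x
    return samples

# Standard normal target: log density is -x^2/2 up to an additive constant.
draws = random_walk_metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000, step=2.4)
print(round(float(draws[5000:].mean()), 2), round(float(draws[5000:].std()), 2))
```

After discarding a burn-in, the empirical mean and standard deviation should be close to 0 and 1; the ergodicity conditions surveyed in the paper are what guarantee such averages converge.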
Optimal Scaling for Various Metropolis-Hastings Algorithms
, 2001
Abstract

Cited by 91 (25 self)
We review and extend results related to optimal scaling of Metropolis-Hastings algorithms. We present various theoretical results for the high-dimensional limit. We also present simulation studies which confirm the theoretical results in finite-dimensional contexts.
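The kind of simulation study mentioned here is easy to reproduce in a toy setting. The sketch below (names and settings are ours) runs random-walk Metropolis on a d-dimensional standard normal target with proposal standard deviation ℓ/√d; with ℓ ≈ 2.38 the observed acceptance rate should land near the asymptotically optimal 0.234:

```python
import numpy as np

def rwm_acceptance(d, ell, n_steps=20000, seed=1):
    """Empirical acceptance rate of random-walk Metropolis on a d-dimensional
    standard normal target, with proposal standard deviation ell / sqrt(d)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    lp = -0.5 * float(x @ x)
    accepted = 0
    for _ in range(n_steps):
        y = x + (ell / np.sqrt(d)) * rng.standard_normal(d)
        lp_y = -0.5 * float(y @ y)
        if np.log(rng.uniform()) < lp_y - lp:
            x, lp = y, lp_y
            accepted += 1
    return accepted / n_steps

rate = rwm_acceptance(d=50, ell=2.38)
print(round(rate, 2))   # typically near 0.234 for moderate-to-large d
```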
An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants
 Biometrika, 2006
Abstract

Cited by 51 (2 self)
Maximum likelihood parameter estimation and sampling from Bayesian posterior distributions are problematic when the probability density for the parameter of interest involves an intractable normalising constant which is also a function of that parameter. In this paper, an auxiliary variable method is presented which requires only that independent samples can be drawn from the unnormalised density at any particular parameter value. The proposal distribution is constructed so that the normalising constant cancels from the Metropolis–Hastings ratio. The method is illustrated by producing posterior samples for parameters of the Ising model given a particular lattice realisation.
Exponential Convergence of Langevin Diffusions and Their Discrete Approximations
 Bernoulli, 1997
Abstract

Cited by 42 (15 self)
In this paper we consider a continuous-time method of approximating a given distribution π using the Langevin diffusion dL_t = dW_t + (1/2) ∇log π(L_t) dt. We find conditions under which this diffusion converges exponentially quickly to π or does not: in one dimension, these are essentially that for distributions with exponential tails of the form π(x) ∝ exp(−γ|x|^β), 0 < β < ∞, exponential convergence occurs if and only if β ≥ 1. We then consider conditions under which the discrete approximations to the diffusion converge. We first show that even when the diffusion itself converges, naive discretisations need not do so. We then consider a "Metropolis-adjusted" version of the algorithm, and find conditions under which this also converges at an exponential rate: perhaps surprisingly, even the Metropolised version need not converge exponentially fast even if the diffusion does. We briefly discuss a truncated form of the algorithm which, in practice, should avoid the difficulties ...
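The Metropolis-adjusted discretisation discussed above can be sketched as follows for a one-dimensional target with an available gradient (all names and settings are our illustrative choices). Each Euler step of the diffusion is treated as a Gaussian proposal and corrected by a Metropolis-Hastings test:

```python
import numpy as np

def mala(log_pi, grad_log_pi, x0, n_steps, h=0.8, seed=3):
    """Metropolis-adjusted Langevin algorithm: Euler step of
    dL_t = dW_t + (1/2) grad log pi(L_t) dt, corrected by an MH test."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    out = np.empty(n_steps)
    for i in range(n_steps):
        mu_x = x + 0.5 * h * grad_log_pi(x)
        y = mu_x + np.sqrt(h) * rng.standard_normal()
        mu_y = y + 0.5 * h * grad_log_pi(y)
        # log of pi(y) q(y -> x) / (pi(x) q(x -> y)), Gaussian proposal densities
        log_a = (log_pi(y) - log_pi(x)
                 - (x - mu_y) ** 2 / (2 * h)
                 + (y - mu_x) ** 2 / (2 * h))
        if np.log(rng.uniform()) < log_a:
            x = y
        out[i] = x
    return out

# Standard normal target: log pi(x) = -x^2/2, gradient -x.
draws = mala(lambda x: -0.5 * x * x, lambda x: -x, x0=2.0, n_steps=20000)
print(round(float(draws[2000:].mean()), 2), round(float(draws[2000:].var()), 2))
```

Dropping the accept/reject test gives the naive (unadjusted) discretisation, which, as the abstract notes, can fail to converge to π even when the diffusion does.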
Controlled MCMC for Optimal Sampling
, 2001
Abstract

Cited by 39 (6 self)
In this paper we develop an original and general framework for automatically optimizing the statistical properties of Markov chain Monte Carlo (MCMC) samples, which are typically used to evaluate complex integrals. The Metropolis-Hastings algorithm is the basic building block of classical MCMC methods and requires the choice of a proposal distribution, which usually belongs to a parametric family. The correlation properties together with the exploratory ability of the Markov chain heavily depend on the choice of the proposal distribution. By monitoring the simulated path, our approach allows us to learn "on the fly" the optimal parameters of the proposal distribution for several statistical criteria.
Keywords: Monte Carlo, adaptive MCMC, calibration, stochastic approximation, gradient method, optimal scaling, random walk, Langevin, Gibbs, controlled Markov chain, learning algorithm, reversible jump MCMC.
1. Motivation
1.1. Introduction
Markov chain Monte Carlo (MCMC) is a general strategy for generating samples x_i (i = 0, 1, …) from complex high-dimensional distributions, say π defined on the space X ⊂ ℝ^{n_x}, from which integrals of the type I(f) = ∫_X f(x) π(x) dx can be calculated using the estimator Î_N(f) = (1/(N+1)) Σ_{i=0}^{N} f(x_i), provided that the Markov chain produced is ergodic. The main building block of this class of algorithms is the Metropolis-Hastings (MH) algorithm. It requires the definition of a proposal distribution q whose role is to generate possible transitions for the Markov chain, say from x to y, which are then accepted or rejected according to the probability α(x, y) = min{1, π(y) q(y, x) / (π(x) q(x, y))}. The simplicity and universality of this algorithm are both its strength and weakness. The choice of ...
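One concrete instance of this controlled-MCMC idea is tuning the random-walk step size by stochastic approximation toward a target acceptance rate. The sketch below is our own minimal illustration of that mechanism (a Robbins-Monro rule with decreasing gains), not the paper's exact scheme:

```python
import numpy as np

def controlled_rwm(log_pi, x0, n_steps, target_rate=0.234, seed=4):
    """Random-walk Metropolis whose log step size is adapted 'on the fly'
    toward a target acceptance rate by a stochastic-approximation update."""
    rng = np.random.default_rng(seed)
    x, log_step = float(x0), 0.0
    lp = log_pi(x)
    for i in range(1, n_steps + 1):
        y = x + np.exp(log_step) * rng.standard_normal()
        lp_y = log_pi(y)
        a = np.exp(min(0.0, lp_y - lp))           # acceptance probability
        if rng.uniform() < a:
            x, lp = y, lp_y
        # decreasing gains 1/sqrt(i): adaptation vanishes as the chain runs
        log_step += (a - target_rate) / np.sqrt(i)
    return np.exp(log_step)

step = controlled_rwm(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
print(round(float(step), 2))   # settles where acceptance is near target_rate
```

The decreasing gain sequence is what keeps the adaptation from destroying ergodicity, the central technical concern of adaptive MCMC.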
Efficient construction of reversible jump Markov chain Monte Carlo proposal distributions
 Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Abstract

Cited by 38 (2 self)
Summary. The major implementational problem for reversible jump Markov chain Monte Carlo methods is that there is commonly no natural way to choose jump proposals since there is no Euclidean structure in the parameter space to guide our choice. We consider mechanisms for guiding the choice of proposal. The first group of methods is based on an analysis of acceptance probabilities for jumps. Essentially, these methods involve a Taylor series expansion of the acceptance probability around certain canonical jumps and turn out to have close connections to Langevin algorithms. The second group of methods generalizes the reversible jump algorithm by using the so-called saturated space approach. These allow the chain to retain some degree of memory so that, when proposing to move from a smaller to a larger model, information is borrowed from the last time that the reverse move was performed. The main motivation for this paper is that, in complex problems, the probability that the Markov chain moves between such spaces may be prohibitively small, as the probability mass can be very thinly spread across the space. Therefore, finding reasonable jump proposals becomes extremely important. We illustrate the procedure by using several examples of reversible jump Markov chain Monte Carlo applications including the analysis of autoregressive time series, graphical Gaussian modelling and mixture modelling.
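A toy reversible jump sampler may clarify the setting, though it uses a simple independence "birth" proposal rather than the paper's guided constructions. The choice is between a zero-mean normal model and one with a free mean θ; equal model prior probabilities are assumed, and the Jacobian of the identity dimension-matching map is 1 (all names, priors, and settings are our illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)
y = rng.standard_normal(50) + 1.0          # synthetic data; true mean 1 favours model 1
n, ybar = y.size, y.mean()
tau2 = 10.0                                # prior variance of theta under model 1

def loglik(theta):
    return -0.5 * np.sum((y - theta) ** 2)

def log_prior(theta):                      # N(0, tau2) prior on theta (model 1 only)
    return -0.5 * theta ** 2 / tau2 - 0.5 * np.log(2 * np.pi * tau2)

prop_mu, prop_sd = ybar, 1.0 / np.sqrt(n)  # birth proposal centred on the MLE
def log_q(theta):
    return (-0.5 * ((theta - prop_mu) / prop_sd) ** 2
            - np.log(prop_sd) - 0.5 * np.log(2 * np.pi))

model, theta, visits1 = 0, 0.0, 0
for _ in range(20000):
    if rng.uniform() < 0.5:
        if model == 0:                     # birth: model 0 -> model 1, theta = u ~ q
            th = prop_mu + prop_sd * rng.standard_normal()
            log_a = loglik(th) + log_prior(th) - loglik(0.0) - log_q(th)
            if np.log(rng.uniform()) < log_a:
                model, theta = 1, th
        else:                              # death: the reverse of the birth move
            log_a = loglik(0.0) + log_q(theta) - loglik(theta) - log_prior(theta)
            if np.log(rng.uniform()) < log_a:
                model, theta = 0, 0.0
    elif model == 1:                       # within-model random-walk update of theta
        th = theta + 0.3 * rng.standard_normal()
        if np.log(rng.uniform()) < (loglik(th) + log_prior(th)
                                    - loglik(theta) - log_prior(theta)):
            theta = th
    visits1 += model

print(round(visits1 / 20000, 2))           # posterior probability of the larger model
```

With a poorly placed birth proposal the between-model acceptance probability collapses, which is exactly the problem the paper's Taylor-expansion and saturated-space constructions address.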
Riemann manifold Langevin and Hamiltonian Monte Carlo methods
 J. of the Royal Statistical Society, Series B (Methodological)
Abstract

Cited by 28 (1 self)
sampling methods defined on the Riemann manifold to resolve the shortcomings of existing Monte Carlo algorithms when sampling from target densities that may be high dimensional and exhibit strong correlations. The methods provide fully automated adaptation mechanisms that circumvent the costly pilot runs required to tune proposal densities for Metropolis-Hastings or indeed Hamiltonian Monte Carlo and Metropolis-adjusted Langevin algorithms. This allows for highly efficient sampling even in very high dimensions where different scalings may be required for the transient and stationary phases of the Markov chain. The proposed methodology exploits the Riemannian geometry of the parameter space of statistical models and thus automatically adapts to the local structure when simulating paths across this manifold, providing highly efficient convergence and exploration of the target density. The performance of these Riemannian Manifold Monte Carlo methods is rigorously assessed by performing inference on logistic regression models, log-Gaussian Cox point processes, stochastic volatility models, and Bayesian estimation of dynamical systems described by nonlinear differential equations. Substantial improvements in the time-normalised Effective Sample Size are reported when compared to alternative sampling approaches. Matlab code at
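For context, the flat-metric base case that the Riemann-manifold methods generalise is standard Hamiltonian Monte Carlo with an identity mass matrix. The sketch below (our own illustration, not the paper's code; RM-HMC would replace the identity metric with a position-dependent one) shows a single leapfrog-based HMC transition:

```python
import numpy as np

def hmc_step(log_pi, grad_log_pi, x, rng, eps=0.2, n_leap=10):
    """One HMC transition with identity mass matrix: resample momentum,
    integrate Hamiltonian dynamics by leapfrog, accept/reject on the
    change in total energy H = -log pi(x) + |p|^2 / 2."""
    p = rng.standard_normal(x.shape)
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_pi(x_new)        # initial half step
    for _ in range(n_leap - 1):
        x_new += eps * p_new
        p_new += eps * grad_log_pi(x_new)
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_pi(x_new)        # final half step
    log_a = (log_pi(x_new) - 0.5 * p_new @ p_new) - (log_pi(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_a else x

rng = np.random.default_rng(5)
x = np.zeros(2)                                    # 2-d standard normal target
samples = []
for _ in range(3000):
    x = hmc_step(lambda z: -0.5 * z @ z, lambda z: -z, x, rng)
    samples.append(x.copy())
samples = np.array(samples)
print(np.round(samples[500:].mean(axis=0), 2), round(float(samples[500:].var()), 2))
```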
Bayesian Prediction of Spatial Count Data Using Generalised Linear Mixed Models
, 2001
Abstract

Cited by 24 (3 self)
Introduction
Site-specific farming aims at targeting inputs of fertiliser, pesticide, and herbicide according to locally determined requirements. In connection with herbicide application on a field, it is important to map the weed intensity so that the dose of herbicide applied at any location can be adjusted to the amount of weed present at the location. In a Danish project on precision farming (Olesen, 1997) one objective was to investigate whether observations of soil properties could be used for prediction of weed intensity. In practice the farmer or his advisor should then establish a relation between soil properties and weed occurrence from extensive observations collected one year and use this for prediction of the weed intensity in subsequent years where only a limited number of weed count observations would be collected. Many soil properties are fairly constant over time so that observations of soil samples obtained the first year can also be used in subsequent ...
MCMC methods for continuous-time financial econometrics

, 2003
Abstract

Cited by 24 (1 self)
This chapter develops Markov Chain Monte Carlo (MCMC) methods for Bayesian inference in continuous-time asset pricing models. The Bayesian solution to the inference problem is the distribution of parameters and latent variables conditional on observed data, and MCMC methods provide a tool for exploring these high-dimensional, complex distributions. We first provide a description of the foundations and mechanics of MCMC algorithms. This includes a discussion of the Clifford-Hammersley theorem, the Gibbs sampler, the Metropolis-Hastings algorithm, and theoretical convergence properties of MCMC algorithms. We next provide a tutorial on building MCMC algorithms for a range of continuous-time asset pricing models. We include detailed examples for equity price models, option pricing models, term structure models, and regime-switching models. Finally, we discuss the issue of sequential Bayesian inference, both for parameters and state variables.
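The Gibbs sampler mentioned in this tutorial alternates draws from full conditional distributions. A minimal sketch (our own toy example, not from the chapter) on a bivariate normal with correlation ρ, where each full conditional x_i | x_j is N(ρ x_j, 1 − ρ²):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_steps, seed=6):
    """Gibbs sampler for a bivariate standard normal with correlation rho,
    alternating exact draws from the two full conditionals."""
    rng = np.random.default_rng(seed)
    x1 = x2 = 0.0
    out = np.empty((n_steps, 2))
    s = np.sqrt(1.0 - rho * rho)
    for i in range(n_steps):
        x1 = rho * x2 + s * rng.standard_normal()   # draw x1 | x2
        x2 = rho * x1 + s * rng.standard_normal()   # draw x2 | x1
        out[i] = (x1, x2)
    return out

draws = gibbs_bivariate_normal(rho=0.9, n_steps=20000)
emp_rho = float(np.corrcoef(draws[2000:].T)[0, 1])
print(round(emp_rho, 2))   # empirical correlation close to 0.9
```

The same alternating structure underlies the latent-variable samplers used for the asset pricing models, with each block drawn from its full conditional.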
Weak convergence of Metropolis algorithms for non-i.i.d. target distributions
, 2007
Abstract

Cited by 21 (6 self)
In this paper, we shall optimize the efficiency of Metropolis algorithms for multidimensional target distributions with scaling terms possibly depending on the dimension. We propose a method to determine the appropriate form for the scaling of the proposal distribution as a function of the dimension, which leads to the proof of an asymptotic diffusion theorem. We show that when there does not exist any component with a scaling term significantly smaller than the others, the asymptotically optimal acceptance rate is the well-known 0.234.
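The heterogeneous-scaling setting can be illustrated numerically. In this sketch (our own construction) the target has independent normal components with different standard deviations, none vastly smaller than the rest, and the proposal matches each component's scale shrunk by ℓ/√d; the observed acceptance rate should then sit near 0.234:

```python
import numpy as np

def rwm_rate_heterogeneous(scales, ell, n_steps=20000, seed=7):
    """Acceptance rate of random-walk Metropolis on an independent normal
    target with component standard deviations 'scales', using a proposal
    matched to each component's scale and shrunk by ell / sqrt(d)."""
    rng = np.random.default_rng(seed)
    d = scales.size
    x = np.zeros(d)
    lp = -0.5 * float(np.sum((x / scales) ** 2))
    accepted = 0
    for _ in range(n_steps):
        y = x + (ell / np.sqrt(d)) * scales * rng.standard_normal(d)
        lp_y = -0.5 * float(np.sum((y / scales) ** 2))
        if np.log(rng.uniform()) < lp_y - lp:
            x, lp = y, lp_y
            accepted += 1
    return accepted / n_steps

# Component scales spread over [0.5, 2.0]: no single component dominates.
rate = rwm_rate_heterogeneous(np.linspace(0.5, 2.0, 40), ell=2.38)
print(round(rate, 2))
```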