Results 1–10 of 25
An Adaptive Metropolis algorithm
 Bernoulli
, 1998
"... A proper choice of a proposal distribution for MCMC methods, e.g. for the MetropolisHastings algorithm, is well known to be a crucial factor for the convergence of the algorithm. In this paper we introduce an adaptive Metropolis Algorithm (AM), where the Gaussian proposal distribution is updated al ..."
Abstract

Cited by 95 (4 self)
A proper choice of a proposal distribution for MCMC methods, e.g. for the Metropolis-Hastings algorithm, is well known to be a crucial factor for the convergence of the algorithm. In this paper we introduce an adaptive Metropolis algorithm (AM), where the Gaussian proposal distribution is updated along the process using the full information cumulated so far. Due to the adaptive nature of the process, the AM algorithm is non-Markovian, but we establish here that it has the correct ergodic properties. We also include the results of our numerical tests, which indicate that the AM algorithm competes well with traditional Metropolis-Hastings algorithms, and demonstrate that AM provides an easy-to-use algorithm for practical computation. 1991 Mathematics Subject Classification: 65C05, 65U05. Keywords: adaptive MCMC, comparison, convergence, ergodicity, Markov chain Monte Carlo, Metropolis-Hastings algorithm. 1 Introduction It is generally acknowledged that the choice of an effective proposal...
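The update this abstract describes, re-estimating the Gaussian proposal covariance from the whole chain history, can be sketched as follows. This is a minimal illustration, not the authors' code; the scaling sd = 2.4²/d and the jitter term eps are the choices commonly reported in the adaptive-Metropolis literature, and the function and parameter names are assumptions of this sketch:

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter=2000, adapt_start=100,
                        sd=None, eps=1e-6, rng=None):
    """Sketch of an Adaptive Metropolis (AM) sampler: the Gaussian proposal
    covariance is re-estimated from the full chain history so far."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    d = len(x)
    sd = (2.4 ** 2) / d if sd is None else sd   # standard AM scaling factor
    chain = np.empty((n_iter, d))
    lp = log_target(x)
    cov = np.eye(d)                             # initial proposal covariance
    for i in range(n_iter):
        if i >= adapt_start:
            # update covariance from all samples so far, plus eps*I jitter
            cov = sd * (np.cov(chain[:i].T) + eps * np.eye(d))
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain
```

Run on a standard Gaussian target, the sketch produces a chain whose proposal covariance gradually adapts toward the target's shape.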
Adaptive Markov Chain Monte Carlo through Regeneration
, 1998
"... this paper is organized as follows. In Section 2 we introduce the concept of regeneration and adaptation at regeneration, and provide theoretical support. In Section 3, the splitting techniques required for adaptation are reviewed. Section 4 contains four illustrations of adaptive MCMC. Some of the ..."
Abstract

Cited by 73 (4 self)
this paper is organized as follows. In Section 2 we introduce the concept of regeneration and adaptation at regeneration, and provide theoretical support. In Section 3, the splitting techniques required for adaptation are reviewed. Section 4 contains four illustrations of adaptive MCMC. Some of the proofs from Sections 2 and 3 are placed in the Appendix. 2 Regeneration: A Framework for Adaptation
Adaptive proposal distribution for random walk Metropolis algorithm
, 1999
"... this paper we also present a comprehensive test procedure and systematic performance criteria for comparing Adaptive Proposal algorithm with more traditional Metropolis algorithms. Keywords: MCMC, Adaptive MCMC, MetropolisHastings algorithm, convergence, experimental design 2 1 Introduction ..."
Abstract

Cited by 32 (2 self)
this paper we also present a comprehensive test procedure and systematic performance criteria for comparing the Adaptive Proposal algorithm with more traditional Metropolis algorithms. Keywords: MCMC, adaptive MCMC, Metropolis-Hastings algorithm, convergence, experimental design. 1 Introduction
Evolutionary Monte Carlo: Applications to C_p Model Sampling and Change Point Problem
 STATISTICA SINICA
, 2000
"... Motivated by the success of genetic algorithms and simulated annealing in hard optimization problems, the authors propose a new Markov chain Monte Carlo (MCMC) algorithm so called an evolutionary Monte Carlo algorithm. This algorithm has incorporated several attractive features of genetic algorithms ..."
Abstract

Cited by 25 (5 self)
Motivated by the success of genetic algorithms and simulated annealing in hard optimization problems, the authors propose a new Markov chain Monte Carlo (MCMC) algorithm called evolutionary Monte Carlo. This algorithm incorporates several attractive features of genetic algorithms and simulated annealing into the framework of MCMC. It works by simulating a population of Markov chains in parallel, where each chain is attached to a different temperature. The population is updated by mutation (Metropolis update), crossover (partial state swapping) and exchange operators (full state swapping). The algorithm is illustrated through examples of Cp-based model selection and change-point identification. The numerical results and the extensive comparisons show that evolutionary Monte Carlo is a promising approach for simulation and optimization.
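The population update described above, mutation by a tempered Metropolis step and exchange by swapping full states between adjacent temperatures, can be sketched as follows. Crossover is omitted for brevity, and the step size, temperature ladder, and all names are illustrative assumptions of this sketch, not the authors' implementation:

```python
import numpy as np

def emc_step(states, log_target, temps, rng):
    """One sweep of a simplified evolutionary Monte Carlo population update:
    mutation = per-chain random-walk Metropolis at its own temperature,
    exchange = attempted swap of full states between an adjacent pair.
    (The crossover operator is omitted in this sketch.)"""
    n = len(states)
    # mutation: random-walk Metropolis for each tempered chain
    for k in range(n):
        prop = states[k] + 0.5 * rng.standard_normal(states[k].shape)
        if np.log(rng.uniform()) < (log_target(prop) - log_target(states[k])) / temps[k]:
            states[k] = prop
    # exchange: propose swapping the states of a random adjacent pair;
    # the acceptance ratio is the standard parallel-tempering swap ratio
    k = int(rng.integers(n - 1))
    a = (log_target(states[k + 1]) - log_target(states[k])) * (1.0 / temps[k] - 1.0 / temps[k + 1])
    if np.log(rng.uniform()) < a:
        states[k], states[k + 1] = states[k + 1], states[k]
    return states
```

Iterating this step over a geometric temperature ladder lets hot chains explore widely while the coldest chain samples the target.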
Ordering, Slicing And Splitting Monte Carlo Markov Chains
, 1998
"... Markov chain Monte Carlo is a method of approximating the integral of a function f with respect to a distribution ß. A Markov chain that has ß as its stationary distribution is simulated producing samples X 1 ; X 2 ; : : : . The integral is approximated by taking the average of f(X n ) over the sam ..."
Abstract

Cited by 9 (3 self)
Markov chain Monte Carlo is a method of approximating the integral of a function f with respect to a distribution π. A Markov chain that has π as its stationary distribution is simulated, producing samples X_1, X_2, .... The integral is approximated by taking the average of f(X_n) over the sample path. The standard way to construct such Markov chains is the Metropolis-Hastings algorithm. The class P of all Markov chains having π as their unique stationary distribution is very large, so it is important to have criteria telling when one chain performs better than another. The Peskun ordering is a partial ordering on P. If two Markov chains are Peskun ordered, then the better chain has smaller variance in the central limit theorem for every function f that has a variance. Peskun ordering is sufficient for this but not necessary. We study the implications of the Peskun ordering both in finite and general state spaces. Unfortunately there are many Metropolis-Hastings samplers that are...
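The variance ordering described above can be written out explicitly; the following is a standard formulation of Peskun's result, with notation assumed here rather than quoted from the paper:

```latex
% Peskun ordering: P_1 dominates P_2 off the diagonal,
P_1 \succeq P_2
\iff
P_1\big(x, A \setminus \{x\}\big) \ge P_2\big(x, A \setminus \{x\}\big)
\quad \text{for all } x \text{ and all measurable } A,
% and then the asymptotic variances in the CLT are ordered for every f
% with finite variance:
v(f, P_1) \le v(f, P_2),
\qquad
v(f, P) = \lim_{n \to \infty}
\operatorname{Var}\!\Big( n^{-1/2} \sum_{i=1}^{n} f(X_i) \Big).
```

Intuitively, a chain that moves off its current state more often mixes at least as well for every square-integrable function.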
Adaptive Chains
, 1998
"... Adaptive chains are chains that are able to learn from all previous elements in the chain. It is an extension of Markov chains. It is proved convergence of adaptive chains that satisfies a strong Doeblin condition (i.e., the transition density r from x i to y satisfies r(y; x 1 ; x 2 ; : : : ; x i ) ..."
Abstract

Cited by 7 (1 self)
Adaptive chains are chains that are able to learn from all previous elements in the chain; they extend Markov chains. Convergence is proved for adaptive chains that satisfy a strong Doeblin condition (i.e., the transition density r from x_i to y satisfies r(y; x_1, x_2, ..., x_i) ≥ a_i π(y) for all x_1, ..., x_i, y in the state space). By using the previous iterations of the adaptive chain, it is possible to increase a_i, which improves convergence compared with Markov chains. A rate of decrease of the covariance between elements x_i and x_{i+j} as j increases is also proved. The results may also be applied to regeneration chains where only the history before the last regeneration is used. Particularly interesting is the adaptive Metropolis-Hastings algorithm. Adaptive simulated annealing is also described, and convergence is proved when the temperature decreases proportionally to M/log i. The convergence is due to contraction properties of integral operators ...
Markov chain Monte Carlo and related topics
 Proceedings of the IX General Assembly
, 1999
"... This article provides a brief review of recent developments in Markov chain Monte Carlo methodology. The methods discussed include the standard MetropolisHastings algorithm, the Gibbs sampler, and various special cases of interest to practitioners. It also devotes a section on strategies for impro ..."
Abstract

Cited by 3 (0 self)
This article provides a brief review of recent developments in Markov chain Monte Carlo methodology. The methods discussed include the standard Metropolis-Hastings algorithm, the Gibbs sampler, and various special cases of interest to practitioners. It also devotes a section to strategies for improving the mixing rate of MCMC samplers, e.g., simulated tempering, parallel tempering, parameter expansion, dynamic weighting, and multigrid Monte Carlo with its generalizations. Other related topics are simulated annealing, the reversible jump method, and the multiple-try Metropolis rule. Theoretical issues such as bounding the mixing rate, diagnosing convergence, and conducting ...
Adaptive radial-based direction sampling: some flexible and robust Monte Carlo integration methods
 J. Econometrics
, 2004
"... Adaptive radialbased direction sampling (ARDS) algorithms are specified for Bayesian analysis of models with nonelliptical, possibly, multimodal target distributions. A key step is a radialbased transformation to directions and distances. After the transformations a MetropolisHastings method or, ..."
Abstract

Cited by 3 (1 self)
Adaptive radial-based direction sampling (ARDS) algorithms are specified for Bayesian analysis of models with non-elliptical, possibly multimodal target distributions. A key step is a radial-based transformation to directions and distances. After the transformations, a Metropolis-Hastings method or, alternatively, an importance sampling method is applied to evaluate generated directions. Next, distances are generated from the exact target distribution by means of the numerical inverse transformation method. An adaptive procedure is applied to update the initial location and covariance matrix in order to sample directions in an efficient way. Tested on a set of canonical mixture models that feature multimodality, strong correlation, and skewness, the ARDS algorithms compare favourably with the standard Metropolis-Hastings and importance samplers in terms of flexibility and robustness. The empirical examples include a regression model with scale contamination and a mixture model for economic growth of the USA.
Convergence of Adaptive Markov Chain Monte Carlo Algorithms
, 2009
"... In the thesis, we study ergodicity of adaptive Markov Chain Monte Carlo methods (MCMC) based on two conditions (Diminishing Adaptation and Containment which together imply ergodicity), explain the advantages of adaptive MCMC, and apply the theoretical result for some applications. First we give some ..."
Abstract

Cited by 2 (2 self)
In this thesis, we study ergodicity of adaptive Markov chain Monte Carlo (MCMC) methods based on two conditions (Diminishing Adaptation and Containment, which together imply ergodicity), explain the advantages of adaptive MCMC, and apply the theoretical results to some applications. First we give some examples to explain several facts: 1. Diminishing Adaptation alone may destroy ergodicity; 2. Containment is not necessary for ergodicity; 3. under some additional conditions, Containment is necessary for ergodicity. Since Diminishing Adaptation is relatively easy to check and Containment is abstract, we focus on sufficient conditions for Containment. In order to study Containment, we consider quantitative bounds on the distance between samplers and targets in total variation norm. From earlier results, the quantitative bounds are connected with nested drift conditions for polynomial rates of convergence. For ergodicity of adaptive MCMC, assuming that all samplers simultaneously satisfy nested polynomial drift conditions, we find that either when the number of nested ...
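The two conditions named in this abstract are usually stated as follows (a standard formulation in the adaptive-MCMC literature; the notation P_γ for the transition kernel with adaptation parameter γ, and Γ_n for the parameter in use at step n, is assumed here, not taken from the thesis):

```latex
% Diminishing Adaptation: the kernel changes less and less between steps,
D_n = \sup_{x} \big\| P_{\Gamma_{n+1}}(x,\cdot) - P_{\Gamma_n}(x,\cdot) \big\|_{TV}
      \longrightarrow 0 \quad \text{in probability.}
% Containment: the convergence times of the adapted kernels stay bounded.
% With the epsilon-convergence time
M_\epsilon(x,\gamma) = \inf\big\{ n \ge 1 :
  \| P_\gamma^{\,n}(x,\cdot) - \pi(\cdot) \|_{TV} \le \epsilon \big\},
% Containment requires that, for each epsilon > 0, the sequence
\{ M_\epsilon(X_n, \Gamma_n) \}_{n \ge 1} \quad \text{is bounded in probability.}
```

Diminishing Adaptation is typically enforced by design (e.g. vanishing adaptation step sizes), while Containment is the condition the thesis develops sufficient criteria for.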