Results 1-10 of 11
Markov chains for exploring posterior distributions
 Annals of Statistics
, 1994
Cited by 753 (6 self)
Using simulation methods for Bayesian econometric models: Inference, development and communication
 Econometric Reviews
, 1999
Abstract

Cited by 202 (15 self)
This paper surveys the fundamental principles of subjective Bayesian inference in econometrics and the implementation of those principles using posterior simulation methods. The emphasis is on the combination of models and the development of predictive distributions. Moving beyond conditioning on a fixed number of completely specified models, the paper introduces subjective Bayesian tools for formal comparison of these models with as yet incompletely specified models. The paper then shows how posterior simulators can facilitate communication between investigators (for example, econometricians) on the one hand and remote clients (for example, decision makers) on the other, enabling clients to vary the prior distributions and functions of interest employed by investigators. A theme of the paper is the practicality of subjective Bayesian methods. To this end, the paper describes publicly available software for Bayesian inference, model development, and communication and provides illustrations using two simple econometric models. *This paper was originally prepared for the Australasian meetings of the Econometric Society in Melbourne, Australia,
General state space Markov chains and MCMC algorithms
 PROBABILITY SURVEYS
, 2004
Abstract

Cited by 106 (26 self)
This paper surveys various results about Markov chains on general (non-countable) state spaces. It begins with an introduction to Markov chain Monte Carlo (MCMC) algorithms, which provide the motivation and context for the theory which follows. Then, sufficient conditions for geometric and uniform ergodicity are presented, along with quantitative bounds on the rate of convergence to stationarity. Many of these results are proved using direct coupling constructions based on minorisation and drift conditions. Necessary and sufficient conditions for Central Limit Theorems (CLTs) are also presented, in some cases proved via the Poisson Equation or direct regeneration constructions. Finally, optimal scaling and weak convergence results for Metropolis-Hastings algorithms are discussed. None of the results presented is new, though many of the proofs are. We also describe some open problems.
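The MCMC algorithms motivating the survey above can be illustrated in a few lines. The following is a minimal random-walk Metropolis sampler (a sketch, not code from the paper; the standard-normal target and step size are illustrative assumptions):

```python
import math
import random

def rw_metropolis(log_target, x0, n_steps, step=1.0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and accept
    with probability min(1, pi(x')/pi(x)); the symmetric proposal
    density cancels in the acceptance ratio."""
    x = x0
    chain = [x]
    for _ in range(n_steps):
        prop = x + random.gauss(0.0, step)
        # log acceptance ratio for a symmetric proposal
        if math.log(random.random()) < log_target(prop) - log_target(x):
            x = prop
        chain.append(x)
    return chain

# Toy run: standard normal target, log pi(x) = -x^2 / 2 up to a constant
random.seed(0)
chain = rw_metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=5000)
```

Geometric or uniform ergodicity of exactly such kernels is what the minorisation and drift conditions in the survey are designed to verify.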
On the Markov chain central limit theorem
 Probability Surveys
, 2004
Abstract

Cited by 40 (9 self)
The goal of this mainly expository paper is to describe conditions which guarantee a central limit theorem for functionals of general state space Markov chains, with a view towards Markov chain Monte Carlo settings. Thus the focus is on the connections between drift and mixing conditions and their implications. In particular, we consider three commonly cited central limit theorems and discuss their relationship to classical results for mixing processes. Several motivating examples are given, ranging from toy one-dimensional settings to complicated settings encountered in Markov chain Monte Carlo.
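In practice, a Markov chain CLT is what licenses reporting a Monte Carlo standard error for an ergodic average. A common CLT-based estimator is non-overlapping batch means; here is a sketch on a toy AR(1) chain (the chain and batch count are illustrative assumptions, not the paper's examples):

```python
import math
import random

def batch_means_se(chain, n_batches=30):
    """Monte Carlo standard error of the chain's sample mean via
    non-overlapping batch means: for long enough batches, the batch
    means behave as roughly i.i.d. normal draws under a CLT."""
    b = len(chain) // n_batches                    # batch length
    means = [sum(chain[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    overall = sum(means) / n_batches
    # sample variance of the batch means
    var_bm = sum((m - overall) ** 2 for m in means) / (n_batches - 1)
    return math.sqrt(var_bm / n_batches)

# Toy AR(1) chain, a textbook example satisfying a Markov chain CLT
random.seed(1)
x, chain = 0.0, []
for _ in range(30000):
    x = 0.5 * x + random.gauss(0.0, 1.0)
    chain.append(x)
se = batch_means_se(chain)
```

The drift and mixing conditions discussed in the paper are precisely what justify treating the batch means as approximately independent.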
Bayesian Methods for Neural Networks
, 1999
Abstract

Cited by 7 (0 self)
Summary: The application of the Bayesian learning paradigm to neural networks results in a flexible and powerful nonlinear modelling framework that can be used for regression, density estimation, prediction and classification. Within this framework, all sources of uncertainty are expressed and measured by probabilities. This formulation allows for a probabilistic treatment of our a priori knowledge, domain specific knowledge, model selection schemes, parameter estimation methods and noise estimation techniques. Many researchers have contributed towards the development of the Bayesian learning approach for neural networks. This thesis advances this research by proposing several novel extensions in the areas of sequential learning, model selection, optimisation and convergence assessment. The first contribution is a regularisation strategy for sequential learning based on extended Kalman filtering and noise estimation via evidence maximisation. Using the expectation maximisation (EM) algorithm, a similar algorithm is derived for batch learning. Much of the thesis is, however, devoted to Monte Carlo simulation methods. A robust Bayesian method is proposed to estimate,
A regeneration proof of the central limit theorem for uniformly ergodic Markov chains
, 2006
Abstract

Cited by 2 (0 self)
Let (Xn) be a Markov chain on a measurable space (E, 𝓔) with unique stationary distribution π. Let h : E → ℝ be a measurable function with finite stationary mean π(h) := ∫_E h(x) π(dx). Ibragimov and Linnik (1971) proved that if (Xn) is geometrically ergodic, then a central limit theorem (CLT) holds for h whenever π(|h|^(2+δ)) < ∞ for some δ > 0. Cogburn (1972) proved that if a Markov chain is uniformly ergodic and π(h²) < ∞, then a CLT holds for h. The first result was reproved in Roberts and Rosenthal (2004) using a regeneration approach, thus removing many of the technicalities of the original proof. This raised an open problem: to provide a proof of the second result using a regeneration approach. In this paper we provide a solution to this problem.
Ergodicity of Adaptive MCMC and its Applications
, 2008
Abstract

Cited by 1 (0 self)
... Carlo algorithms (AMCMC) are among the most important methods for approximately sampling from complicated probability distributions, and are widely used in statistics, computer science, chemistry, physics, etc. The core problem in using these algorithms is to build up an asymptotic theory for them. In this thesis, we show the Central Limit Theorem (CLT) for uniformly ergodic Markov chains using the regeneration method. We exploit the weakest uniform drift conditions that ensure the ergodicity and WLLN of AMCMC. Further, we answer open problem 21 in Roberts and Rosenthal [48] by constructing a counterexample and finding a stronger condition that implies the ergodic property of AMCMC. We find that conditions (a) and (b) in [48] are not sufficient for the WLLN to hold when the functional is unbounded. We also prove the WLLN for unbounded functions under some stronger conditions. Finally, we consider the practical aspects of adaptive MCMC (AMCMC). We use some toy examples to show that the general adaptive random walk Metropolis is not efficient for sampling from multimodal targets. Therefore we discuss the mixed regional adaptation
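The kind of adaptive random walk Metropolis discussed in this thesis can be sketched as a sampler that tunes its proposal scale toward a target acceptance rate; a minimal version is below (a sketch under standard assumptions, not the thesis's algorithm; the target, gain schedule, and acceptance target are illustrative):

```python
import math
import random

def adaptive_rw_metropolis(log_target, x0, n_steps, target_acc=0.44):
    """Adaptive random-walk Metropolis: the log proposal scale is
    nudged toward a target acceptance rate with a decaying gain, so
    the adaptation vanishes over time ('diminishing adaptation', one
    of the conditions appearing in ergodicity results for AMCMC)."""
    x, log_step = x0, 0.0
    chain = []
    for t in range(1, n_steps + 1):
        prop = x + random.gauss(0.0, math.exp(log_step))
        accepted = math.log(random.random()) < log_target(prop) - log_target(x)
        if accepted:
            x = prop
        # Robbins-Monro style update; gain t^{-0.6} -> 0 as t grows
        log_step += t ** -0.6 * ((1.0 if accepted else 0.0) - target_acc)
        chain.append(x)
    return chain

random.seed(2)
chain = adaptive_rw_metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=5000)
```

Because the transition kernel changes at every step, the ordinary Markov chain theory does not apply directly, which is exactly why separate ergodicity and WLLN results for AMCMC are needed.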
Estimation of large families of Bayes factors from Markov chain output
 Statistica Sinica
, 2010
Abstract

Cited by 1 (1 self)
We consider situations in Bayesian analysis where the prior is indexed by a hyperparameter taking on a continuum of values. We distinguish some arbitrary value of the hyperparameter, and consider the problem of estimating the Bayes factor for the model indexed by the hyperparameter vs. the model specified by the distinguished point, as the hyperparameter varies. We assume that we have Markov chain output from the posterior for a finite number of the priors, and develop a method for efficiently computing estimates of the entire family of Bayes factors. As an application of the ideas, we consider some commonly used hierarchical Bayesian models and show that the parametric assumptions in these models can be recast as assumptions regarding the prior. Therefore, our method can be used as a model selection criterion in a Bayesian framework. We illustrate our methodology through a detailed example involving Bayesian model selection.
Key words and phrases: Bayes factors, control variates, ergodicity, importance sampling
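The identity underlying this kind of estimator can be shown in a conjugate toy model: if θ₁, …, θₙ are draws from the posterior under prior ν_{h0}, then the average of ν_h(θᵢ)/ν_{h0}(θᵢ) converges to m_h/m_{h0} = B(h, h0), the Bayes factor between the two priors. A sketch (the normal model, data value, and hyperparameters are illustrative assumptions, not the paper's example):

```python
import math
import random

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def bayes_factor(post_draws, nu_h, nu_h0):
    """Estimate B(h, h0) = m_h / m_h0 from posterior draws under prior
    nu_h0, using E_post[nu_h(theta) / nu_h0(theta)] = m_h / m_h0."""
    ratios = [nu_h(t) / nu_h0(t) for t in post_draws]
    return sum(ratios) / len(ratios)

# Conjugate toy model: y | theta ~ N(theta, 1), prior nu_h = N(0, h).
# With y = 1.0 and h0 = 1, the posterior is N(0.5, 0.5), so we can
# draw from it directly instead of running a Markov chain.
random.seed(3)
draws = [random.gauss(0.5, math.sqrt(0.5)) for _ in range(20000)]
bf = bayes_factor(draws,
                  nu_h=lambda t: normal_pdf(t, 0.0, math.sqrt(2.0)),
                  nu_h0=lambda t: normal_pdf(t, 0.0, 1.0))
# Exact Bayes factor here: N(1; 0, sqrt(3)) / N(1; 0, sqrt(2)) ≈ 0.888
```

Because only the prior ratio appears, a single set of posterior draws can be reweighted to estimate B(h, h0) for the whole continuum of h at once, which is the efficiency the paper exploits.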
A regeneration proof of the central limit theorem for uniformly ergodic Markov chains
 Electronic Communications in Probability
, 2007
Abstract
Central limit theorems for functionals of general state space Markov chains are of crucial importance for the sensible implementation of Markov chain Monte Carlo algorithms, as well as of vital theoretical interest. Different approaches to proving this type of result under diverse assumptions have led to a large variety of CLT versions. However, due to the recent development of the regeneration theory of Markov chains, many classical CLTs can be reproved using this intuitive probabilistic approach, avoiding the technicalities of the original proofs. In this paper we provide a characterization of CLTs for ergodic Markov chains via regeneration and then use the result to solve the open problem posed in [17]. We then discuss the difference between one-step and multiple-step small-set conditions.
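The regeneration construction for a uniformly ergodic chain can be made concrete: under a one-step minorisation P(x, ·) ≥ ε ν(·) holding on the whole space (the Doeblin condition), the chain can be simulated as a split chain that regenerates with probability ε at every step, and the tours between regenerations are i.i.d. blocks, which is what drives the CLT. A toy sketch on a two-point state space (the kernel and ε are hypothetical, for illustration only):

```python
import random

def split_chain(n_steps, eps=0.3):
    """Split-chain simulation under P(x, .) >= eps * nu(.):
    with probability eps the next state is an independent draw from
    nu (a regeneration); otherwise it comes from the residual kernel,
    here taken to be 'stay at the current state'."""
    x = 0
    states, regen_times = [], []
    for t in range(n_steps):
        if random.random() < eps:
            x = random.randint(0, 1)   # draw from nu = Uniform{0, 1}
            regen_times.append(t)      # tour boundary: i.i.d. blocks begin here
        # residual kernel: keep the current state
        states.append(x)
    return states, regen_times

random.seed(4)
states, regen_times = split_chain(10000, eps=0.3)
```

Summing the functional over each tour reduces the Markov chain CLT to the classical i.i.d. CLT for the tour sums, which is the structure of the regeneration proofs discussed above.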