Basic Properties of Strong Mixing Conditions. A Survey and Some Open Questions
 PROBABILITY SURVEYS
, 2005
Abstract

Cited by 45 (0 self)
This is an update of, and a supplement to, the author’s earlier survey paper [18] on basic properties of strong mixing conditions. That paper appeared in 1986 in a book containing survey papers on various types of dependence conditions and the limit theory under them. The survey here will include part (but not all) of the material in [18], and will also describe some relevant material that was not in that paper, especially some new discoveries and developments that have occurred since that paper was published. (Much of the new material described here involves “interlaced” strong mixing conditions, in which the index sets are not restricted to “past” and “future.”) At various places in this survey, open problems will be posed. There is a large literature on basic properties of strong mixing conditions. A survey such as this cannot do full justice to it. Here are a few references on important topics not covered in this survey. For the approximation of mixing sequences by martingale differences, see e.g. the book by Hall and Heyde [80]. For the direct approximation of mixing random variables by independent ones, …
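For orientation (this is the standard textbook definition, not quoted from the survey itself), the strong mixing (α-mixing) coefficient between two σ-fields, and the mixing rate of a sequence (X_k), are

```latex
\alpha(\mathcal{A},\mathcal{B}) \;=\; \sup_{A\in\mathcal{A},\,B\in\mathcal{B}}
  \bigl|\,\mathbb{P}(A\cap B)-\mathbb{P}(A)\,\mathbb{P}(B)\,\bigr|,
\qquad
\alpha(n) \;=\; \sup_{j\in\mathbb{Z}}
  \alpha\bigl(\sigma(X_k,\,k\le j),\;\sigma(X_k,\,k\ge j+n)\bigr),
```

and the sequence is called strongly mixing when α(n) → 0. The “interlaced” conditions mentioned above replace the past/future σ-fields σ(X_k, k ≤ j) and σ(X_k, k ≥ j + n) by σ-fields over more general index sets separated by at least n.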
Exponential Convergence for the Stochastically Forced Navier-Stokes Equations and Other Partially Dissipative Dynamics
, 2002
Abstract

Cited by 34 (9 self)
We prove that the two-dimensional Navier-Stokes equations possess an exponentially attracting invariant measure. This result is in fact the consequence of a more general “Harris-like” ergodic theorem applicable to many dissipative stochastic PDEs and stochastic processes with memory. A simple iterated map example is also presented to help build intuition and showcase the central ideas in a less encumbered setting. To analyze the iterated map, a general “Doeblin-like” theorem is proven. One of the main features of this paper is the novel coupling construction used to examine the ergodic theory of the non-Markovian processes.
Spectral gaps in Wasserstein distances and the 2D stochastic Navier-Stokes equations
, 2006
Abstract

Cited by 16 (8 self)
We develop a general method that allows one to show the existence of spectral gaps for Markov semigroups on Banach spaces. Unlike most previous work, the type of norm we consider for this analysis is neither a weighted supremum norm nor an L^p-type norm, but involves the derivative of the observable as well and hence can be seen as a type of 1-Wasserstein distance. This turns out to be a suitable approach for infinite-dimensional spaces where the usual Harris or Doeblin conditions, which are geared to total variation convergence, regularly fail to hold. In the first part of this paper, we consider semigroups that have uniform behaviour which one can view as an extension of Doeblin’s condition. We then proceed to study situations where the behaviour is not so uniform, but the system has a suitable Lyapunov structure, leading to a type of Harris condition. We finally show that the latter condition is satisfied by the two-dimensional stochastic Navier-Stokes equations, even in situations where the forcing is extremely degenerate. Using the convergence result, we show that the stochastic Navier-Stokes equations’ invariant measures depend continuously on the viscosity and the structure of the forcing.
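For readers unfamiliar with the connection (a standard fact, not taken from the paper itself), derivative-type norms relate to the 1-Wasserstein distance through Kantorovich–Rubinstein duality: for probability measures μ, ν on a metric space,

```latex
d_1(\mu,\nu) \;=\; \sup_{\mathrm{Lip}(\varphi)\le 1}
  \left|\int \varphi \, d\mu - \int \varphi \, d\nu\right|,
```

so controlling a semigroup on observables with bounded derivative (i.e. bounded Lipschitz constant) yields contraction estimates in d_1 rather than in total variation.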
A Metropolis Sampler for Polygonal Image Reconstruction
, 1995
Abstract

Cited by 16 (1 self)
We show how a stochastic model of polygonal objects can provide a Bayesian framework for the interpretation of colouring data in the plane. We describe a particular model and give a Markov Chain Monte Carlo (MCMC) algorithm for simulating the posterior distribution of the polygonal pattern. Two important observations arise from our implementation of the algorithm. First, it is computationally feasible to use MCMC to simulate the posterior distribution of a polygonal process for moderately large problems (i.e., 10000 data points, with polygonal patterns involving around 120 edges). Our implementation, which we would describe as careful but unsophisticated, produces satisfactory approximations to the mode of the posterior in about 5 minutes on a SUN Sparc 2. Independent samples from the posterior take a few seconds to generate. The second observation is that the Arak process, a particular type of polygonal process, makes a wonderful debugging tool for testing shape simulation software. …
Yet another look at Harris’ ergodic theorem for Markov chains
, 2008
Abstract

Cited by 12 (7 self)
The aim of this note is to present an elementary proof of a variation of Harris’ ergodic theorem for Markov chains. This theorem, dating back to the fifties [Har56], essentially states that a Markov chain is uniquely ergodic if it admits a “small” set (in a technical sense to be made precise below) which is visited infinitely often. This gives an extension of the ideas of Doeblin to the unbounded state space setting. Often this is established by finding a Lyapunov function with “small” level sets [Has80, MT93]. If the Lyapunov function is strong enough, one has a spectral gap in a weighted supremum norm [MT92, MT93]. In particular, the transition probabilities converge exponentially fast towards the unique invariant measure, and the constant in front of the exponential rate is controlled by the Lyapunov function [MT93]. Traditional proofs of this result rely on the decomposition of the Markov chain into excursions away from the small set and a careful analysis of the exponential tail of the length of these excursions [Num84, Cha89, MT92, MT93]. There have been other variations which have made use of Poisson equations or worked at getting explicit constants [KM05, DMR04, DMLM03]. The present proof is very direct, and relies instead on introducing a family of equivalent weighted norms indexed by a parameter β and making an appropriate choice of this parameter that allows one to combine in a very elementary way the two ingredients (existence of a Lyapunov function and irreducibility) that are crucial in obtaining a spectral gap. Use of a weighted total-variation norm has been important since [MT92]. The original motivation of this proof was the authors’ work on spectral gaps in Wasserstein metrics. The proof presented in this note is a version of our reasoning in the total variation setting which we used to guide the calculations in [HM08]. While we initially produced it for this purpose, we hope that it will be of interest in its own right.
1. Setting and main result. Throughout this note, we fix a measurable space X and a Markov transition kernel P(x, ·) on X. We will use the notation P for the operators defined as usual on both the set of bounded measurable functions and the set of measures of finite mass by …
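To make the Lyapunov ingredient concrete, here is a small numerical sketch; the chain, the function V, and all constants are illustrative choices of ours, not taken from the note. For the AR(1) chain X_{n+1} = a X_n + ξ_n with standard normal noise and |a| < 1, the function V(x) = 1 + x² satisfies a geometric drift condition PV ≤ γV + K with γ = a² and K = 2 − a², which can be checked by Monte Carlo:

```python
import numpy as np

# Illustrative sketch (our example, not from the note): the drift
# condition P V <= gamma V + K for the AR(1) chain
# X_{n+1} = a X_n + xi_n, xi_n ~ N(0, 1), |a| < 1, with V(x) = 1 + x^2.
# Exactly, E[V(X_1) | X_0 = x] = 2 + a^2 x^2 = a^2 V(x) + (2 - a^2),
# so gamma = a^2 and K = 2 - a^2.

def drift_bound(a, x, n_samples=200_000, seed=0):
    """Monte Carlo estimate of E[V(X_1) | X_0 = x] with V(x) = 1 + x^2."""
    rng = np.random.default_rng(seed)
    x1 = a * x + rng.standard_normal(n_samples)
    return float(np.mean(1.0 + x1 ** 2))

a = 0.8
gamma, K = a ** 2, 2 - a ** 2
for x0 in (0.0, 1.0, 5.0):
    V = 1 + x0 ** 2
    print(x0, drift_bound(a, x0), gamma * V + K)
```

Any level set {V ≤ R} is then a candidate “small” set; for this Gaussian-noise chain, the minorization on such sets holds because the transition densities from a bounded set have a common lower bound.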
Harris Recurrence of Metropolis-Within-Gibbs and Trans-Dimensional MCMC Algorithms
, 2007
Abstract

Cited by 10 (1 self)
A φ-irreducible and aperiodic Markov chain with stationary probability distribution will converge to its stationary distribution from almost all starting points. The property of Harris recurrence allows us to replace “almost all” by “all,” which is potentially important when running Markov chain Monte Carlo algorithms. Full-dimensional Metropolis–Hastings algorithms are known to be Harris recurrent. In this paper, we consider conditions under which Metropolis-within-Gibbs and trans-dimensional Markov chains are or are not Harris recurrent. We present a simple but natural two-dimensional counterexample showing how Harris recurrence can fail, and also a variety of positive results which guarantee Harris recurrence. We also present some open problems. We close with a discussion of the practical implications for MCMC algorithms.
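For readers unfamiliar with the algorithm under discussion, here is a generic Metropolis-within-Gibbs sampler; this is a textbook sketch of ours (target and step size are our choices), not the paper's counterexample. Each sweep updates one coordinate at a time with its own random-walk Metropolis accept/reject step, here targeting a standard bivariate Gaussian:

```python
import numpy as np

# Generic Metropolis-within-Gibbs sketch (illustrative choices of
# target and step size). Each coordinate gets its own random-walk
# Metropolis accept/reject step.

def log_target(x):
    # standard bivariate Gaussian, up to an additive constant
    return -0.5 * float(np.dot(x, x))

def metropolis_within_gibbs(n_sweeps, step=1.0, seed=1):
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    out = np.empty((n_sweeps, 2))
    for t in range(n_sweeps):
        for i in range(2):                 # one sweep = both coordinates
            prop = x.copy()
            prop[i] += step * rng.standard_normal()
            if np.log(rng.random()) < log_target(prop) - log_target(x):
                x = prop                   # accept single-coordinate move
        out[t] = x
    return out

samples = metropolis_within_gibbs(20_000)
```

The Harris-recurrence question the paper addresses is whether such coordinate-at-a-time chains converge from literally every starting point, not merely from π-almost every one.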
Comparison of Birth-and-Death and Metropolis-Hastings Markov chain Monte Carlo for the Strauss process
, 1994
Abstract

Cited by 8 (2 self)
The Metropolis-Hastings sampler (MH) is a discrete time Markov chain with Metropolis-Hastings dynamics. The measure of interest occurs as the stationary measure of the chain. We show that a sampler with MH dynamics may be used when the dimension of the random variable is itself variable, as is the case in a spatial point process. The Birth and Death (BD) sampler is a continuous time spatial birth and death process used to sample spatial point processes in the past. We check that the two processes we have designed have the same equilibrium measure. In order to explore the relative strengths of the derived sampling algorithms, we consider the efficiency of MH and BD as samplers for the Strauss process. We give a new proof for the existence of a stationary measure in the continuous time case, in order to advertise a general tool (due to Kaspi and Mandelbaum) which may be useful in extending continuous time stochastic processes to a wider range of sampling applications. The method emphasises …
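To illustrate the variable-dimension idea, here is a hedged sketch of a birth/death Metropolis-Hastings sampler for the Strauss process on the unit square; the constants and the 50/50 birth-death proposal are our own illustrative choices, not the paper's implementation. The target has unnormalized density β^n(x) γ^{s_r(x)} with respect to the unit-rate Poisson process, where s_r(x) counts point pairs closer than r:

```python
import numpy as np

# Hedged sketch (our constants and proposal, not the paper's code):
# birth/death Metropolis-Hastings for the Strauss process on [0, 1]^2.
# Unnormalized density beta^n(x) * gamma^{s_r(x)} w.r.t. the unit-rate
# Poisson process; s_r counts pairs of points within distance r.

def pairs_with(points, p, r):
    """Number of points of `points` within distance r of p."""
    if not points:
        return 0
    d = np.linalg.norm(np.asarray(points) - p, axis=1)
    return int(np.sum(d < r))

def strauss_mh(n_iter, beta=50.0, gamma=0.5, r=0.05, seed=2):
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(n_iter):
        n = len(pts)
        if rng.random() < 0.5:                 # propose a birth
            p = rng.random(2)
            s = pairs_with(pts, p, r)
            # Hastings ratio for adding p: beta * gamma^s / (n + 1)
            if rng.random() < beta * gamma ** s / (n + 1):
                pts.append(p)
        elif pts:                              # propose a death
            i = int(rng.integers(n))
            p = pts.pop(i)
            s = pairs_with(pts, p, r)
            # Hastings ratio for removing p: n / (beta * gamma^s)
            if rng.random() >= n / (beta * gamma ** s):
                pts.insert(i, p)               # reject: put the point back
    return pts

pts = strauss_mh(20_000)
```

Note that each move changes the dimension of the state by one, which is exactly the variable-dimension setting the abstract describes; a death proposed from the empty configuration is simply skipped, which preserves the stationary measure.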
Kernel estimation for real-valued Markov chains
 Sankhya Ser. A
, 1998
Abstract

Cited by 5 (0 self)
SUMMARY. The purpose of this paper is to study the problem of estimation of the stationary density and the transition density of a real-valued recurrent Markov chain. By using techniques of regenerative processes we are able to significantly reduce the strong hypotheses on the Markov chain, such as Doeblin recurrence, stationarity, and mixing, that were imposed in all the earlier works. We assume that the Markov chain satisfies a much weaker condition known as Harris recurrence. Our results hold for any initial distribution and we assume no mixing.
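As a concrete, purely illustrative instance of the estimation problem, here is a kernel estimate of the stationary density of a Harris-recurrent real-valued chain. The AR(1) chain, the Gaussian kernel, and the rule-of-thumb bandwidth are our choices, not the paper's setup; the chain's known stationary law N(0, 1/(1 − a²)) gives a ground truth to compare against:

```python
import numpy as np

# Illustrative kernel density estimate of the stationary density of a
# real-valued Markov chain (our example, not the paper's setup):
# AR(1) chain X_{n+1} = a X_n + xi_n, stationary law N(0, 1/(1 - a^2)).

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def kde(sample, x, h):
    """pi_hat(x) = (1 / (n h)) * sum_i K((x - X_i) / h)."""
    return gaussian_kernel((x - sample[:, None]) / h).mean(axis=0) / h

rng = np.random.default_rng(3)
a, n = 0.5, 100_000
xs = np.empty(n)
x = 0.0
for t in range(n):                 # simulate one long trajectory
    x = a * x + rng.standard_normal()
    xs[t] = x

grid = np.linspace(-3.0, 3.0, 7)
h = 1.06 * xs.std() * n ** (-1 / 5)   # Silverman's rule of thumb
est = kde(xs, grid, h)

sigma2 = 1.0 / (1.0 - a ** 2)          # true stationary variance
truth = np.exp(-grid ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
```

Note that the estimator uses a single dependent trajectory from an arbitrary starting point, with no stationarity or mixing assumption imposed on the simulation, which is the regime the paper's regenerative techniques are designed to handle.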
An approach to the existence of unique invariant probabilities for Markov processes. Limit theorems in probability and statistics
 János Bolyai Math. Soc., I (Balatonlelle
, 1999
Abstract

Cited by 3 (0 self)
A notion of localized splitting is introduced as a further extension of the splitting notions for iterated monotone maps introduced earlier by Dubins and Freedman (1966) and more generally by Bhattacharya and Majumdar (1999). We will see that under quite general conditions, localized splitting theory is a natural extension of the Doeblin (1937) minorization theory, Harris (1956) recurrence theory, splitting theory of Nummelin (1978) and regeneration theory of Athreya and Ney (1978), under which we can prove the existence of a unique invariant probability. By way of introduction we also provide a natural coupling proof of the ergodic problem for Markov processes on general state spaces under Doeblin’s minorization condition which seems to have heretofore gone unnoticed. The paper is concluded with some new applications of splitting theory to random iterated quadratic maps.
Geometric ergodicity of a bead-spring pair with stochastic Stokes forcing
, 2009
Abstract

Cited by 1 (1 self)
We consider a simple model for the fluctuating hydrodynamics of a flexible polymer in dilute solution, demonstrating geometric ergodicity for a pair of particles that interact with each other through a nonlinear spring potential while being advected by a stochastic Stokes fluid velocity field. This is a generalization of previous models which have used linear spring forces as well as white-in-time fluid velocity fields. We follow previous work combining control theoretic arguments, Lyapunov functions, and hypoelliptic diffusion theory to prove exponential convergence via a Harris chain argument. To this, we add the possibility of excluding certain “bad” sets in phase space in which the assumptions are violated but from which the system leaves with a controllable probability. This allows for the treatment of singular drifts, such as those derived from the Lennard-Jones potential, which is a novel feature of this work.