Results 1-10 of 20
An Interruptible Algorithm for Perfect Sampling via Markov Chains
 Annals of Applied Probability
, 1998
Abstract
Cited by 84 (7 self)
For a large class of examples arising in statistical physics known as attractive spin systems (e.g., the Ising model), one seeks to sample from a probability distribution π on an enormously large state space, but elementary sampling is ruled out by the infeasibility of calculating an appropriate normalizing constant. The same difficulty arises in computer science problems where one seeks to sample randomly from a large finite distributive lattice whose precise size cannot be ascertained in any reasonable amount of time. The Markov chain Monte Carlo (MCMC) approximate sampling approach to such a problem is to construct and run "for a long time" a Markov chain with long-run distribution π. But determining how long is long enough to get a good approximation can be both analytically and empirically difficult. Recently, Jim Propp and David Wilson have devised an ingenious and efficient algorithm to use the same Markov chains to produce perfect (i.e., exact) samples from π. However, the running t...
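A minimal sketch of the Propp-Wilson coupling-from-the-past idea, using a toy monotone chain (a reflecting random walk on {0, ..., n}; the chain, state space, and function names here are illustrative assumptions, not the paper's spin-system examples):

```python
import random

def monotone_update(x, u, n=10):
    # One step of a reflecting random walk on {0, ..., n};
    # the SAME uniform u drives every coupled copy of the chain,
    # and the update is monotone in x.
    return max(x - 1, 0) if u < 0.5 else min(x + 1, n)

def cftp(n=10, seed=0):
    """Coupling from the past: start maximal and minimal chains at
    time -T with fixed random inputs, doubling T until they coalesce;
    the common value at time 0 is an exact draw from the stationary law."""
    rng = random.Random(seed)
    us = []                  # us[k] drives the step at time -(k+1), fixed forever
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        top, bottom = n, 0   # maximal and minimal starting states
        for k in range(T - 1, -1, -1):   # times -T, ..., -1 reuse the same us
            top = monotone_update(top, us[k], n)
            bottom = monotone_update(bottom, us[k], n)
        if top == bottom:    # every intermediate chain is sandwiched, so all agree
            return top
        T *= 2
```

Monotonicity is what makes this feasible: only the top and bottom chains need to be tracked, rather than one chain per state.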
Perfect Simulation and Backward Coupling
 Comm. Statist. Stochastic Models
Abstract
Cited by 30 (2 self)
Algorithms for perfect or exact simulation of random samples from the invariant measure of a Markov chain have received considerable recent attention following the introduction of the "coupling-from-the-past" (CFTP) technique of Propp and Wilson. Here we place such algorithms in the context of backward coupling of stochastically recursive sequences. We show that although general backward couplings can be constructed for chains with finite mean forward coupling times, and can even be thought of as extending the classical "Loynes schemes" from queueing theory, successful "vertical" CFTP algorithms such as those of Propp and Wilson can be constructed if and only if the chain is uniformly geometrically ergodic. We also relate the convergence moments for backward coupling methods to those of forward coupling times: the former typically lose at most one moment compared to the latter. Work supported in part by NSF Grant DMS-9504561 and by CRDF Grant RM1226.
Simulating The Invariant Measures Of Markov Chains Using Backward Coupling At Regeneration Times
 Prob. Eng. Inf. Sci
, 1998
Abstract
Cited by 17 (9 self)
We develop an algorithm for simulating approximate random samples from the invariant measure of a Markov chain using backward coupling of embedded regeneration times. Related methods have been used effectively for finite chains and for stochastically monotone chains: here we propose a method of implementation which avoids these restrictions by using a "cycle-length" truncation. We show that the coupling times have good theoretical properties and describe benefits and difficulties of implementing the methods in practice.
1 Introduction
There has been considerable recent work on the development and application of algorithms that will enable the simulation of the invariant measure π of a Markov chain, either exactly (that is, by drawing a random sample known to be from π) or approximately, but with computable order of accuracy. These were sparked by the seminal paper of Propp and Wilson [18], and several variations and extensions of this idea have appeared in the literature including rece...
Ergodic Theorems for Markov chains represented by Iterated Function Systems
 BULL. POLISH ACAD. SCI. MATH
, 1998
Abstract
Cited by 16 (2 self)
We consider Markov chains represented in the form X_{n+1} = f(X_n, I_n), where {I_n} is a sequence of independent, identically distributed (i.i.d.) random variables, and where f is a measurable function. Any Markov chain {X_n} on a Polish state space may be represented in this form, i.e., it can be considered as arising from an iterated function system (IFS). A distributional ergodic theorem, including rates of convergence in the Kantorovich distance, is proved for Markov chains under the condition that an IFS representation is "stochastically contractive" and "stochastically bounded". We apply this result to prove our main theorem giving upper bounds for distances between invariant probability measures for iterated function systems. We also give some examples indicating how ergodic theorems for Markov chains may be proved by finding contractive IFS representations. These ideas are applied to some Markov chains arising from iterated function systems with place-dependent probabilities. Name o...
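As an illustration of the representation X_{n+1} = f(X_n, I_n), here is a small forward simulation of a stochastically contractive IFS. The affine map and uniform inputs below are assumptions chosen for the example, not taken from the paper:

```python
import random

def simulate_ifs(f, draw_input, x0, n_steps, seed=0):
    # Iterate X_{n+1} = f(X_n, I_n) with i.i.d. inputs I_n.
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        x = f(x, draw_input(rng))
        path.append(x)
    return path

# A stochastically contractive example: f(x, i) = 0.5*x + i with
# I_n uniform on [0, 1); the contraction rate 0.5 < 1 forces the
# influence of the initial state x0 to decay geometrically.
path = simulate_ifs(lambda x, i: 0.5 * x + i,
                    lambda rng: rng.random(),
                    x0=0.0, n_steps=1000)
```

Two copies of this chain driven by the same inputs from different starting points would approach each other at rate 0.5 per step, which is the "stochastically contractive" property the ergodic theorem exploits.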
Perfect Sampling of Harris Recurrent Markov Chains
, 1998
Abstract
Cited by 11 (3 self)
We develop an algorithm for simulating "perfect" random samples from the invariant measure of a Harris recurrent Markov chain. The method uses backward coupling of embedded regeneration times, and works most effectively for finite chains and for stochastically monotone chains even on continuous spaces, where paths may be sandwiched between "upper" and "lower" processes. Examples show that more naive approaches to constructing such bounding processes may be considerably biased, but that the algorithm can be simplified in certain cases to make it easier to run. We give explicit analytic bounds on the backward coupling times in the stochastically monotone case. An application of the simpler algorithm to storage models is given.
1 Introduction
There has been considerable recent work on the development and application of algorithms that will enable the simulation of the invariant measure π of a Markov chain, either exactly (that is, by drawing a random sample known to be from π) or approximat...
Perfect Sampling From Independent Metropolis-Hastings Chains
 Journal of Statistical Planning and Inference
, 2000
Abstract
Cited by 10 (3 self)
"Perfect sampling" enables exact draws from the invariant measure of a Markov chain. We show that the independent Metropolis-Hastings chain has certain stochastic monotonicity properties that enable a perfect sampling algorithm to be implemented, at least when the candidate is overdispersed with respect to the target distribution. We prove that the algorithm has an optimal geometric convergence rate, and applications show that it may converge extremely rapidly.
1 Introduction
1.1 Perfect Sampling
The development of algorithms that enable "perfect" sampling of the invariant measure of a Markov chain, following work in the seminal paper of Propp and Wilson (Propp and Wilson, 1996), provides an important new set of tools for simulation approaches to inference. Given the availability of recent Markov chain Monte Carlo methodology, which allows many problems of interest in Bayesian and frequentist settings to be couched in terms of such invariant measures, perfect sampling is of pa...
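A sketch of how stochastic monotonicity in the importance weight w(x) = pi(x)/q(x) yields a perfect sampler for an independent Metropolis-Hastings chain. The Exp(1) target and Exp(1/2) candidate below are illustrative assumptions (chosen so the candidate is overdispersed), not the paper's applications; states are ordered by w, and once a proposal is accepted from the maximal-weight state it is accepted from every state:

```python
import math
import random

def perfect_imh(seed=0):
    """Perfect draw from an independent Metropolis-Hastings chain
    with target pi(x) = exp(-x) on [0, inf) and candidate q = Exp(1/2).
    The weight w(x) = pi(x)/q(x) is proportional to exp(-x/2),
    maximized at x* = 0 (candidate overdispersed)."""
    rng = random.Random(seed)
    w = lambda x: math.exp(-0.5 * x)  # proportional weight; only ratios matter
    x_star = 0.0                      # state with maximal weight
    # Step back in time until a proposal is accepted even from x*;
    # by monotonicity every chain has coalesced by that step.
    proposals, uniforms = [], []
    t = 0
    while True:
        y = rng.expovariate(0.5)      # candidate draw at time -(t+1)
        u = rng.random()
        proposals.append(y)
        uniforms.append(u)
        t += 1
        if u < w(y) / w(x_star):      # accepted from the "worst" state
            break
    # Run forward from time -t to 0 reusing the same randomness.
    x = proposals[-1]                 # all chains equal y at time -t
    for k in range(t - 2, -1, -1):    # steps at times -(t-1), ..., -1
        y, u = proposals[k], uniforms[k]
        if u < min(1.0, w(y) / w(x)):
            x = y
    return x
```

The backward search is a coupling-from-the-past argument specialized to the one-dimensional ordering induced by w.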
Stochastic Recursive Equations With Applications to Queues With Dependent Vacations
 Annals of Operations Research
, 2001
Abstract
Cited by 7 (6 self)
We focus on a special class of nonlinear multidimensional stochastic recursive equations in which the coefficients are stationary ergodic (not necessarily independent). Under appropriate conditions, an explicit ergodic stationary solution for these equations is obtained and the convergence to this stationary regime is established. We use these results to analyze several queueing models with vacations. We obtain explicit solutions for several performance measures for the case of general non-independent vacation processes. We finally extend some of these results to polling systems with general vacations.
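The recursions studied here generalize classical queueing recursions such as the Lindley recursion for waiting times. A minimal sketch under assumed M/M/1-style inputs (this omits the paper's vacation structure and dependence; parameter names are illustrative):

```python
import random

def lindley_waiting_times(n, arrival_rate=1.0, service_rate=1.5, seed=0):
    # The classic Lindley recursion W_{n+1} = max(W_n + S_n - A_n, 0),
    # a one-dimensional stochastic recursive equation: W_n is the waiting
    # time of customer n, S_n its service time, A_n the next interarrival.
    rng = random.Random(seed)
    w, ws = 0.0, [0.0]
    for _ in range(n):
        s = rng.expovariate(service_rate)   # service time S_n
        a = rng.expovariate(arrival_rate)   # interarrival time A_n
        w = max(w + s - a, 0.0)
        ws.append(w)
    return ws
```

With service_rate > arrival_rate the drift S_n - A_n is negative on average, and the recursion admits a stationary solution in the sense discussed above.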
Applications of Borovkov's Renovation Theory to Non-Stationary Stochastic Recursive Sequences and their Control
 Advances in Applied Probability 29
, 1996
Abstract
Cited by 6 (4 self)
We investigate in this paper the stability of non-stationary stochastic processes, arising typically in applications of control. The setting is that of stochastic recursive sequences, which allows us to construct on one probability space stochastic processes that correspond to different initial states and even different control policies. It does not require any Markovian assumptions. A natural criterion for stability of such processes is that the influence of the initial state disappears after some finite time; in other words, starting from different initial states, the process will couple after some finite time to the same limiting (not necessarily stationary nor ergodic) stochastic process. We investigate this as well as other types of coupling, and present conditions for them to occur uniformly in some class of control policies. We then use the coupling results to establish new theoretical aspects in the theory of non-Markovian control.
Perfect Sampling of Ergodic Harris Chains
 Ann. Appl. Prob
, 2000
Abstract
Cited by 4 (0 self)
We develop an algorithm for simulating "perfect" random samples from the invariant measure of a Harris recurrent Markov chain. The method uses backward coupling of embedded regeneration times, and works most effectively for stochastically monotone chains, where paths may be sandwiched between "upper" and "lower" processes. We give an approach to finding analytic bounds on the backward coupling times in the stochastically monotone case. An application to storage models is given.
1 Introduction
There has been considerable recent work on the development and application of algorithms that will enable the simulation of the invariant measure π of a Markov chain, either exactly (that is, by drawing a random sample known to be from π) or approximately, but with computable order of accuracy. These were sparked by the seminal paper of Propp and Wilson [18], and several variations and extensions of this idea have appeared since [7, 9, 10, 12, 11, 13, 14, 16, 17]. These ideas have proven effe...
Extended Renovation Theory and Limit Theorems for Stochastic Ordered Graphs
Abstract
Cited by 3 (3 self)
We extend Borovkov's renovation theory to obtain criteria for coupling-convergence of stochastic processes that do not necessarily obey stochastic recursions. The results are applied to an "infinite bin model", a particular system that is an abstraction of a stochastic ordered graph, i.e., a graph on the integers that has (i, j), i < j, as an edge with probability p, independently from edge to edge. A question of interest is an estimate of the length L_n of a longest path between two vertices at distance n. We give sharp bounds on C = lim_{n→∞} (L_n / n). This is done by first constructing the unique stationary version of the infinite bin model, using extended renovation theory. We also prove a functional law of large numbers and a functional central limit theorem for the infinite bin model. Finally, we discuss perfect simulation, in connection to extended renovation theory, and as a means for simulating the particular stochastic models considered in this paper.
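A hypothetical Monte Carlo sketch of the quantity being bounded: on one realization of the stochastic ordered graph, the longest-path lengths can be computed by dynamic programming, and L_n/n gives a crude estimate of C. The function name and parameter values are assumptions for illustration, not the paper's method:

```python
import random

def longest_path_ratio(n=2000, p=0.5, seed=0):
    # Each edge (i, j), i < j, is present independently with probability p.
    # L[j] = length (in edges) of a longest increasing path ending at j;
    # L[n]/n estimates the constant C for large n.
    rng = random.Random(seed)
    L = [0] * (n + 1)
    for j in range(1, n + 1):
        best = 0
        for i in range(j):
            if rng.random() < p:       # edge (i, j) present in this realization
                best = max(best, L[i] + 1)
        L[j] = best
    return L[n] / n
```

The paper's approach is sharper: rather than simulating ever-larger graphs, it constructs the stationary infinite bin model via extended renovation theory and reads off bounds on C from it.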