Results 1–10 of 12
Exact Sampling with Coupled Markov Chains and Applications to Statistical Mechanics
, 1996
Abstract

Cited by 406 (13 self)
For many applications it is useful to sample from a finite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible aperiodic) Markov chain whose stationary distribution is the desired distribution on this set; after the Markov chain has run for M steps, with M sufficiently large, the distribution governing the state of the chain approximates the desired distribution. Unfortunately it can be difficult to determine how large M needs to be. We describe a simple variant of this method that determines on its own when to stop, and that outputs samples in exact accordance with the desired distribution. The method uses couplings, which have also played a role in other sampling schemes; however, rather than running the coupled chains from the present into the future, one runs from a distant point in the past up until the present, where the distance into the past that one needs to go is determined during the running of the al...
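The coupling-from-the-past idea described above can be sketched concretely for a small monotone chain. The example below is an illustrative assumption, not the paper's own construction: a lazy-boundary random walk on {0, …, n−1} whose update rule preserves the order of trajectories, so only the two extremal start states need to be tracked. The key CFTP discipline is visible in the code: randomness is indexed by time in the past and reused when the start time is pushed further back.

```python
import random

def cftp_random_walk(n, seed=0):
    """Exact sample from the stationary distribution (here uniform) of a
    random walk on {0, ..., n-1}, via monotone coupling from the past."""
    rng = random.Random(seed)
    updates = []  # updates[t] is the random bit U_{-(t+1)}, reused across restarts
    T = 1
    while True:
        while len(updates) < T:
            updates.append(rng.random())
        lo, hi = 0, n - 1  # extremal start states at time -T
        for t in range(T - 1, -1, -1):  # apply U_{-T}, ..., U_{-1} in order
            u = updates[t]
            lo = step(lo, u, n)
            hi = step(hi, u, n)
        if lo == hi:          # all trajectories have coalesced by time 0
            return lo
        T *= 2                # go further into the past, reusing old randomness

def step(x, u, n):
    # Monotone update: move down w.p. 1/2, up w.p. 1/2, clamped at the ends.
    if u < 0.5:
        return max(x - 1, 0)
    return min(x + 1, n - 1)
```

Because the transition matrix is doubly stochastic, the stationary distribution is uniform, so the exactness of the output is easy to check empirically.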
Generating Random Spanning Trees More Quickly than the Cover Time
 PROCEEDINGS OF THE TWENTY-EIGHTH ANNUAL ACM SYMPOSIUM ON THE THEORY OF COMPUTING
, 1996
How to Couple from the Past Using a Read-Once Source of Randomness
, 1999
Abstract

Cited by 34 (1 self)
We give a new method for generating perfectly random samples from the stationary distribution of a Markov chain. The method is related to coupling from the past (CFTP), but only runs the Markov chain forwards in time, and never restarts it at previous times in the past. The method is also related to an idea known as PASTA (Poisson arrivals see time averages) in the operations research literature. Because the new algorithm can be run using a read-once stream of randomness, we call it read-once CFTP. The memory and time requirements of read-once CFTP are on par with the requirements of the usual form of CFTP, and for a variety of applications the requirements may be noticeably less. Some perfect sampling algorithms for point processes are based on an extension of CFTP known as coupling into and from the past; for completeness, we give a read-once version of coupling into and from the past, but it remains impractical. For these point process applications, we give an alternative...
Coupling from the Past: a User's Guide
, 1997
Abstract

Cited by 27 (2 self)
The Markov chain Monte Carlo method is a general technique for obtaining samples from a probability distribution. In earlier work, we showed that for many applications one can modify the Markov chain Monte Carlo method so as to remove all bias in the output resulting from the biased choice of an initial state for the chain; we have called this method Coupling From The Past (CFTP). Here we describe this method in a fashion that should make our ideas accessible to researchers from diverse areas. Our expository strategy is to avoid proofs and focus on sample applications.
1. Introduction
In Markov chain Monte Carlo studies, one attempts to sample from a distribution π by running a Markov chain whose unique steady-state distribution is π. Ideally, one has proved a theorem that guarantees that the time for which one plans to run the chain is substantially greater than the mixing time of the chain, so that the distribution π̃ that one's procedure actually samples from is known to be cl...
Mixing of Random Walks and Other Diffusions on a Graph
, 1995
Abstract

Cited by 24 (3 self)
We survey results on two diffusion processes on graphs: random walks and chip-firing (closely related to the "abelian sandpile" or "avalanche" model of self-organized criticality in statistical mechanics). Many tools in the study of these processes are common, and results on one can be used to obtain results on the other. We survey some classical tools in the study of mixing properties of random walks; then we introduce the notion of "access time" between two distributions on the nodes, and show that it has nice properties. Surveying and extending work of Aldous, we discuss several notions of mixing time of a random walk. Then we describe chip-firing games, and show how these new results on random walks can be used to improve earlier results. We also give a brief illustration of how general results on chip-firing games can be applied in the study of avalanches.
1 Introduction
A number of graph-theoretic models, involving various kinds of diffusion processes, lead to basically one and th...
Exact Mixing in an Unknown Markov Chain
 ELECTRONIC JOURNAL OF COMBINATORICS
, 1995
Abstract

Cited by 21 (2 self)
We give a simple stopping rule which will stop an unknown, irreducible n-state Markov chain at a state whose probability distribution is exactly the stationary distribution of the chain. The expected stopping time of the rule is bounded by a polynomial in the maximum mean hitting time of the chain. Our stopping rule can be made deterministic unless the chain itself has no random transitions.
How to Get an Exact Sample From a Generic Markov Chain and Sample a Random Spanning Tree From a Directed Graph, Both Within the Cover Time
 In Proceedings of the Seventh Annual ACM-SIAM Symposium on Discrete Algorithms
, 1996
Abstract

Cited by 13 (2 self)
This paper shows how to obtain unbiased samples from an unknown Markov chain by observing it for O(T_c) steps, where T_c is the cover time. This algorithm improves on several previous algorithms, and there is a matching lower bound. Using the techniques from the sampling algorithm, we also show how to sample random directed spanning trees from a weighted directed graph, with arcs directed to a root, and probability proportional to the product of the edge weights. This tree sampling algorithm runs within 18 cover times of the associated random walk, and is more generally applicable than the algorithm of Broder and Aldous.
1 Introduction
Random sampling of combinatorial objects has found numerous applications in computer science and statistics. Examples include approximate enumeration and estimation of the expected value of quantities. Usually there is a finite space X of objects, and a probability distribution π on X, and we wish to sample object x ∈ X with probability π(x). One ef...
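The Broder–Aldous algorithm that this paper improves on can be sketched for the undirected, unweighted case: run a random walk from the root until it has visited every vertex, and keep the edge by which each vertex was first entered. The adjacency-list representation below is an assumption for illustration; this is the classical first-entry algorithm, not the paper's faster method.

```python
import random

def aldous_broder(adj, root=0, seed=0):
    """Uniform random spanning tree of a connected undirected graph,
    via the Aldous-Broder first-entry algorithm.
    adj: dict mapping vertex -> list of neighbours."""
    rng = random.Random(seed)
    tree = []                    # first-entrance edges
    visited = {root}
    u = root
    while len(visited) < len(adj):
        v = rng.choice(adj[u])   # one step of the random walk
        if v not in visited:     # first entry into v:
            visited.add(v)
            tree.append((u, v))  # keep the entering edge
        u = v
    return tree
```

The walk runs until the graph is covered, so the expected running time is the cover time; each kept edge adds exactly one new vertex, so the result always has n − 1 edges and spans the graph.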
Exact Sampling with Markov Chains
 Ph.D. Dissertation, M.I.T., http://dimacs.rutgers.edu/~dbwilson
, 1996
Abstract

Cited by 3 (0 self)
Random sampling has found numerous applications in computer science, statistics, and physics. The most widely applicable method of random sampling is to use a Markov chain whose steady-state distribution is the probability distribution π from which we wish to sample. After the Markov chain has been run for long enough, its state is approximately distributed according to π. The principal problem with this approach is that it is often difficult to determine how long to run the Markov chain. In this thesis we present several algorithms that use Markov chains to return samples distributed exactly according to π. The algorithms determine on their own how long to run the Markov chain. Two of the algorithms may be used with any Markov chain, but are useful only if the state space is not too large. Nonetheless, a spin-off of these two algorithms is a procedure for sampling random spanning trees of a directed graph that runs more quickly than the Aldous/Broder algorithm. Another of the exact sa...
Perfect Sampling: A Review and Applications to Signal Processing
, 2002
Abstract

Cited by 3 (0 self)
In recent years, Markov chain Monte Carlo (MCMC) sampling methods have gained much popularity among researchers in signal processing. The Gibbs and the Metropolis-Hastings algorithms, which are the two most popular MCMC methods, have already been employed in resolving a wide variety of signal processing problems. A drawback of these algorithms is that in general, they cannot guarantee that the samples are drawn exactly from a target distribution. More recently, new Markov chain-based methods have been proposed, and they produce samples that are guaranteed to come from the desired distribution. They are referred to as perfect samplers. In this paper, we review some of them, with the emphasis being given to the coupling from the past (CFTP) algorithm. We also provide two signal processing examples where we apply perfect sampling. In the first, we use perfect sampling for restoration of binary images and, in the second, for multiuser detection of CDMA signals.
Efficient Stopping Rules for Markov Chains (Extended Abstract)
 Proc. 27th ACM Symp. on the Theory of Computing
, 1995
Abstract

Cited by 1 (0 self)
László Lovász, Dept. of Computer Science, Yale University, New Haven CT 06510; lovasz@cs.yale.edu
Peter Winkler, AT&T Bell Laboratories 2D-147, Murray Hill NJ 07974; pw@research.att.com
Abstract: Let M be the transition matrix, and σ the initial state distribution, for a discrete-time finite-state irreducible Markov chain. A stopping rule for M is an algorithm which observes the progress of the chain and then stops it at some random time Γ; the distribution of the final state is denoted by σ_Γ. We give a useful characterization for stopping rules which are optimal for a given target distribution τ, in the sense that σ_Γ = τ and the expected stopping time EΓ is minimal. Four classes of optimal stopping rules are described, including a unique "threshold" rule which also minimizes max(Γ). The minimum value of EΓ, which we denote by H(σ, τ), is easily computable from the hitting times of M. For applications in computing, the most important c...
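The quantity H(σ, τ) above is built from the mean hitting times of M, and for a small chain those are easy to compute by a linear solve: writing h[i] for the expected time to reach state j from i, the h[i] satisfy (I − P') h = 1 where P' is P with row and column j deleted. A self-contained exact solver, shown as an illustrative sketch (the two-state chain in the test is hypothetical):

```python
from fractions import Fraction

def hitting_times(P, j):
    """Mean hitting times h[i] = E[steps to reach state j starting from i]
    for an irreducible chain with row-stochastic transition matrix P.
    Solves (I - P restricted away from j) h = 1 exactly over the rationals."""
    n = len(P)
    idx = [i for i in range(n) if i != j]
    m = n - 1
    # Build the linear system A h = b with A = I - P' over Fractions.
    A = [[Fraction(int(r == c)) - Fraction(P[idx[r]][idx[c]])
          for c in range(m)] for r in range(m)]
    b = [Fraction(1) for _ in range(m)]
    # Gaussian elimination with pivot search.
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    h = [Fraction(0)] * m
    for r in range(m - 1, -1, -1):
        s = b[r] - sum(A[r][c] * h[c] for c in range(r + 1, m))
        h[r] = s / A[r][r]
    out = [Fraction(0)] * n   # h[j] = 0 by definition
    for k, i in enumerate(idx):
        out[i] = h[k]
    return out
```

For example, in a two-state chain that moves to the other state with probability 1/2, the time to hit the other state is geometric with mean 2, which the solver reproduces exactly.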