Markov Chain Algorithms for Planar Lattice Structures
, 1995
Cited by 90 (10 self)
Consider the following Markov chain, whose states are all domino tilings of a 2n x 2n chessboard: starting from some arbitrary tiling, pick a 2 x 2 window uniformly at random. If the four squares appearing in this window are covered by two parallel dominoes, rotate the dominoes 90° in place. Repeat many times. This process is used in practice to generate a random tiling, and is a widely used tool in the study of the combinatorics of tilings and the behavior of dimer systems in statistical physics. Analogous Markov chains are used to randomly generate other structures on various two-dimensional lattices. This paper presents techniques which prove for the first time that, in many interesting cases, a small number of random moves suffice to obtain a uniform distribution.
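The rotation chain in this abstract is simple enough to sketch directly. Below is a minimal illustration (the representation, a dictionary mapping each square to its partner square under the tiling, is our own choice, not the paper's):

```python
import random

def initial_tiling(n):
    """All-horizontal tiling of a 2n x 2n board: each cell maps to its partner."""
    partner = {}
    for r in range(2 * n):
        for c in range(0, 2 * n, 2):
            partner[(r, c)] = (r, c + 1)
            partner[(r, c + 1)] = (r, c)
    return partner

def rotation_step(partner, n):
    """One move of the chain: pick a 2x2 window uniformly at random; if it is
    covered by two parallel dominoes, rotate them 90 degrees in place."""
    r = random.randrange(2 * n - 1)
    c = random.randrange(2 * n - 1)
    a, b = (r, c), (r, c + 1)
    d, e = (r + 1, c), (r + 1, c + 1)
    if partner[a] == b and partner[d] == e:      # two horizontal dominoes
        partner[a], partner[d] = d, a
        partner[b], partner[e] = e, b
    elif partner[a] == d and partner[b] == e:    # two vertical dominoes
        partner[a], partner[b] = b, a
        partner[d], partner[e] = e, d
    return partner
```

Each move preserves the property that the map is a perfect matching of adjacent squares, so every state reached is again a valid tiling; the paper's contribution is bounding how many such moves are needed to approach uniformity.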
Mixing times of lozenge tiling and card shuffling Markov chains
, 1997
Cited by 69 (1 self)
Abstract. We show how to combine Fourier analysis with coupling arguments to bound the mixing times of a variety of Markov chains. The mixing time is the number of steps a Markov chain takes to approach its equilibrium distribution. One application is to a class of Markov chains introduced by Luby, Randall, and Sinclair to generate random tilings of regions by lozenges. For an ℓ×ℓ region we bound the mixing time by O(ℓ^4 log ℓ), which improves on the previous bound of O(ℓ^7), and we show the new bound to be essentially tight. In another application we resolve a few questions raised by Diaconis and Saloff-Coste by lower bounding the mixing time of various card-shuffling Markov chains. Our lower bounds are within a constant factor of their upper bounds. When we use our methods to modify a path-coupling analysis of Bubley and Dyer, we obtain an O(n^3 log n) upper bound on the mixing time of the Karzanov-Khachiyan Markov chain for linear extensions.
On Markov chains for independent sets
 Journal of Algorithms
, 1997
Cited by 66 (16 self)
Random independent sets in graphs arise, for example, in statistical physics, in the hard-core model of a gas. A new rapidly mixing Markov chain for independent sets is defined in this paper. We show that it is rapidly mixing for a wider range of values of the parameter λ than the Luby-Vigoda chain, the best previously known. Moreover the new chain is apparently more rapidly mixing than the Luby-Vigoda chain for larger values of λ (unless the maximum degree of the graph is 4). An extension of the chain to independent sets in hypergraphs is described. This chain gives an efficient method for approximately counting the number of independent sets of hypergraphs with maximum degree two, or with maximum degree three and maximum edge size three. Finally, we describe a method which allows one, under certain circumstances, to deduce the rapid mixing of one Markov chain from the rapid mixing of another, with the same state space and stationary distribution. This method is applied to two Markov ch...
Fast Convergence of the Glauber Dynamics for Sampling Independent Sets: Part II
, 1999
Cited by 41 (3 self)
This work is a continuation of [4]. The focus is on the problem of sampling independent sets of a graph with maximum degree Δ. The weight of each independent set is expressed in terms of a fixed positive parameter λ ≤ 2/(Δ-2), where the weight of an independent set σ is λ^|σ|. The Glauber dynamics is a simple Markov chain Monte Carlo method for sampling from this distribution. In [4], we showed fast convergence of this dynamics for triangle-free graphs. This paper proves fast convergence for arbitrary graphs.
Computer Science Division, University of California at Berkeley, and International Computer Science Institute. Supported in part by a National Science Foundation Fellowship.
1 Introduction
For a more general introduction and a discussion of related work we refer the reader to the companion work [4]. The aim of this work is, given a graph G = (V, E), to efficiently sample from the probability measure μ_G defined on the set of independent sets Ω = Ω_G of G weight...
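The Glauber dynamics for the hard-core model, as described in this abstract, updates one vertex at a time. A minimal sketch (the adjacency-dict representation and the names `graph`, `indep`, `lam` are our own, not the paper's):

```python
import random

def glauber_step(graph, indep, lam):
    """One Glauber-dynamics update for the hard-core model: pick a vertex
    uniformly at random and resample its occupation state conditioned on
    its neighbors.  `graph` is an adjacency dict, `indep` the current set
    of occupied vertices, `lam` the fugacity."""
    v = random.choice(list(graph))
    indep.discard(v)
    # occupy v with probability lam/(1+lam), but only if no neighbor is occupied
    if random.random() < lam / (1.0 + lam):
        if not any(u in indep for u in graph[v]):
            indep.add(v)
    return indep
```

Since a vertex is only ever occupied when none of its neighbors is, every state reached is an independent set; the content of the paper is bounding how quickly this chain converges to the weighted distribution.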
Markov Chain Decomposition for Convergence Rate Analysis
Cited by 39 (9 self)
In this paper we develop tools for analyzing the rate at which a reversible Markov chain converges to stationarity. Our techniques are useful when the Markov chain can be decomposed into pieces which are themselves easier to analyze. The main theorems relate the spectral gap of the original Markov chain to the spectral gaps of the pieces. In the first case the pieces are restrictions of the Markov chain to subsets of the state space; the second case treats a Metropolis-Hastings chain whose equilibrium distribution is a weighted average of equilibrium distributions of other Metropolis-Hastings chains on the same state space.
Sampling Adsorbing Staircase Walks Using a New Markov Chain Decomposition Method
 In Proceedings of the 41st Annual Symposium on Foundations of Computer Science
Cited by 29 (5 self)
Staircase walks are lattice paths from (0, 0) to (2n, 0) which take diagonal steps and which never fall below the x-axis. A path hitting the x-axis k times is assigned a weight of λ^k, where λ > 0. A simple local Markov chain which connects the state space and converges to the Gibbs measure (which normalizes these weights) is known to be rapidly mixing when λ = 1, and can easily be shown to be rapidly mixing when λ < 1. We give the first proof that this Markov chain is also rapidly mixing in the more interesting case of λ > 1, known in the statistical physics community as adsorbing staircase walks. The main new ingredient is a decomposition technique which allows us to analyze the Markov chain in pieces, applying different arguments to analyze each piece.
1. Introduction
1.1. The model
Staircase walks (also called Dyck paths) are walks in Z^2 from (0, 0) to (n, n) which stay above the diagonal x = y. Rotating by 45°, they correspond to walks from (0, 0) to (2n, 0) which take diagon...
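A simple local chain of the kind this abstract describes can be sketched as a Metropolis dynamics on ±1 step sequences. This is an illustrative sketch under our own conventions, not necessarily the paper's exact chain:

```python
import random

def heights(walk):
    """Partial sums (heights after each diagonal step) of a walk."""
    h, hs = 0, []
    for s in walk:
        h += s
        hs.append(h)
    return hs

def touches(walk):
    """Number of times the path returns to the x-axis."""
    return sum(1 for h in heights(walk) if h == 0)

def local_step(walk, lam):
    """One local Metropolis move: pick an adjacent pair of steps and try to
    swap a (+1,-1) peak with a (-1,+1) valley, rejecting proposals that dip
    below the x-axis and accepting otherwise with probability
    min(1, lam ** (change in number of axis touches))."""
    i = random.randrange(len(walk) - 1)
    if walk[i] == walk[i + 1]:
        return walk                      # not a peak or a valley; do nothing
    new = walk[:i] + [walk[i + 1], walk[i]] + walk[i + 2:]
    if min(heights(new)) < 0:
        return walk                      # proposal falls below the x-axis
    dk = touches(new) - touches(walk)
    if random.random() < min(1.0, lam ** dk):
        return new
    return walk
```

Swapping an adjacent pair preserves the endpoints and changes the weight only through the number of returns to the axis, so the acceptance ratio λ^Δk gives the Gibbs measure as the stationary distribution.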
Coupling from the Past: a User's Guide
, 1997
Cited by 27 (2 self)
The Markov chain Monte Carlo method is a general technique for obtaining samples from a probability distribution. In earlier work, we showed that for many applications one can modify the Markov chain Monte Carlo method so as to remove all bias in the output resulting from the biased choice of an initial state for the chain; we have called this method Coupling From The Past (CFTP). Here we describe this method in a fashion that should make our ideas accessible to researchers from diverse areas. Our expository strategy is to avoid proofs and focus on sample applications.
1. Introduction
In Markov chain Monte Carlo studies, one attempts to sample from a distribution π by running a Markov chain whose unique steady-state distribution is π. Ideally, one has proved a theorem that guarantees that the time for which one plans to run the chain is substantially greater than the mixing time of the chain, so that the distribution π̃ that one's procedure actually samples from is known to be cl...
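The idea behind CFTP can be illustrated on a toy monotone chain, a lazy reflecting random walk on {0, ..., n-1} (this example is ours, not one of the paper's applications): run the chains started from the top and bottom states over ever-longer windows into the past, reusing the same random moves, until they coalesce.

```python
import random

def cftp_sample(n):
    """Coupling-from-the-past sketch for a lazy reflecting random walk on
    {0, ..., n-1}.  Reuses the same random moves over doubling windows into
    the past; when the trajectories from the top and bottom states coalesce,
    the common value is an exact sample from the stationary (here uniform)
    distribution, with no initialization bias."""
    moves = []            # moves[k] drives the step at time -(k+1), fixed forever
    T = 1
    while True:
        while len(moves) < T:
            moves.append(random.choice([-1, 0, 1]))
        lo, hi = 0, n - 1
        for t in range(T - 1, -1, -1):   # run from time -T up to time 0
            m = moves[t]
            lo = min(max(lo + m, 0), n - 1)
            hi = min(max(hi + m, 0), n - 1)
        if lo == hi:
            return lo     # coalesced: exact sample
        T *= 2            # look further into the past, reusing old randomness
```

Monotonicity is what makes this efficient: since both bounding trajectories receive the same move, every intermediate state is sandwiched between them, and coalescence of the extremes implies coalescence of all starting states.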
A Note on the Glauber Dynamics for Sampling Independent Sets
 Electronic Journal of Combinatorics
, 2001
Cited by 25 (1 self)
This note considers the problem of sampling from the set of weighted independent sets of a graph with maximum degree Δ. For a positive fugacity λ, the weight of an independent set σ is λ^|σ|. Luby and Vigoda proved that the Glauber dynamics, which only changes the configuration at a randomly chosen vertex in each step, has mixing time O(n log n) when λ < 2/(Δ-2) for triangle-free graphs. We extend their approach to general graphs.
Random walks on combinatorial objects
 Surveys in Combinatorics 1999
, 1999
Cited by 23 (8 self)
Summary. Approximate sampling from combinatorially defined sets, using the Markov chain Monte Carlo method, is discussed from the perspective of combinatorial algorithms. We also examine the associated problem of discrete integration over such sets. Recent work is reviewed, and we reexamine the underlying formal foundational framework in the light of this. We give a detailed treatment of the coupling technique, a classical method for analysing the convergence rates of Markov chains. The related topic of perfect sampling is examined. In perfect sampling, the goal is to sample exactly from the target set. We conclude with a discussion of negative results in this area. These are results which imply that there are no polynomial time algorithms of a particular type for a particular problem.