Results 1–10 of 110
The Markov Chain Monte Carlo method: an approach to approximate counting and integration
, 1996
Abstract

Cited by 238 (12 self)
In the area of statistical physics, Monte Carlo algorithms based on Markov chain simulation have been in use for many years. The validity of these algorithms depends crucially on the rate of convergence to equilibrium of the Markov chain being simulated. Unfortunately, the classical theory of stochastic processes hardly touches on the sort of nonasymptotic analysis required in this application. As a consequence, it had previously not been possible to make useful, mathematically rigorous statements about the quality of the estimates obtained. Within the last ten years, analytical tools have been devised with the aim of correcting this deficiency. As well as permitting the analysis of Monte Carlo algorithms for classical problems in statistical physics, the introduction of these tools has spurred the development of new approximation algorithms for a wider class of problems in combinatorial enumeration and optimization. The “Markov chain Monte Carlo” method has been applied to a variety of such problems, and often provides the only known efficient (i.e., polynomial time) solution technique.
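As a concrete illustration of the method this abstract describes, the following is a minimal sketch of Markov chain Monte Carlo estimation: a Metropolis chain is simulated, a burn-in prefix is discarded, and an observable is averaged along the run. The weight function, state space, and chain lengths are illustrative assumptions, not taken from the paper.

```python
import random

def weight(x):
    # Unnormalised target distribution pi(x) ~ 2^(-|x - 5|) on {0, ..., 9}
    # (an arbitrary toy "Gibbs weight" chosen for illustration).
    return 2.0 ** (-abs(x - 5))

def metropolis_step(x, rng):
    # Propose a nearest-neighbour move and accept it with the Metropolis
    # probability min(1, weight(y) / weight(x)); otherwise stay put.
    y = x + rng.choice([-1, 1])
    if y < 0 or y > 9:
        return x                      # proposal left the state space
    if rng.random() < min(1.0, weight(y) / weight(x)):
        return y
    return x

def mcmc_estimate(observable, steps=200_000, burn_in=10_000, seed=1):
    # Simulate the chain, discard a burn-in prefix, and average the
    # observable over the remaining (correlated) samples.
    rng = random.Random(seed)
    x, total, count = 5, 0.0, 0
    for t in range(steps):
        x = metropolis_step(x, rng)
        if t >= burn_in:
            total += observable(x)
            count += 1
    return total / count

est = mcmc_estimate(lambda x: x)      # estimate the stationary mean
```

The quality of `est` depends on exactly the issue the abstract highlights: how fast the chain converges to equilibrium, which the burn-in length must dominate.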
Finite Markov Chains and Algorithmic Applications
 in London Mathematical Society Student Texts
, 2001
On Markov chains for independent sets
 Journal of Algorithms
, 1997
Abstract

Cited by 66 (16 self)
Random independent sets in graphs arise, for example, in statistical physics, in the hard-core model of a gas. A new rapidly mixing Markov chain for independent sets is defined in this paper. We show that it is rapidly mixing for a wider range of values of the parameter than the Luby-Vigoda chain, the best previously known. Moreover, the new chain is apparently more rapidly mixing than the Luby-Vigoda chain for larger values of the parameter (unless the maximum degree of the graph is 4). An extension of the chain to independent sets in hypergraphs is described. This chain gives an efficient method for approximately counting the number of independent sets of hypergraphs with maximum degree two, or with maximum degree three and maximum edge size three. Finally, we describe a method which allows one, under certain circumstances, to deduce the rapid mixing of one Markov chain from the rapid mixing of another with the same state space and stationary distribution. This method is applied to two Markov ch...
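For orientation, the kind of single-site chain on independent sets discussed here can be sketched as a heat-bath insert/delete dynamics with activity lam, targeting the hard-core distribution pi(I) proportional to lam^|I|. This is the textbook chain, not the paper's new chain or the Luby-Vigoda chain; the example graph and parameter values are our assumptions.

```python
import random

def hardcore_step(adj, state, lam, rng):
    # Heat-bath single-site update for the hard-core model: pick a vertex
    # uniformly and resample its occupancy from the conditional law given
    # the rest of the configuration.
    v = rng.randrange(len(adj))
    state.discard(v)
    # v may be occupied only if none of its neighbours is occupied; the
    # conditional probability of occupation is then lam / (1 + lam).
    if not (adj[v] & state) and rng.random() < lam / (1.0 + lam):
        state.add(v)
    return state

# Example graph: the 4-cycle (each vertex adjacent to its two neighbours).
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}

rng = random.Random(0)
state = set()
for _ in range(10_000):
    state = hardcore_step(adj, state, 1.0, rng)
# state now holds an (approximate) sample from the hard-core distribution.
```

Because every move touches a single vertex, each step is cheap; the analytical difficulty, as the abstract notes, lies entirely in bounding how many steps are needed.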
Analyzing Glauber Dynamics by Comparison of Markov Chains
 Journal of Mathematical Physics
, 1999
Abstract

Cited by 59 (15 self)
A popular technique for studying random properties of a combinatorial set is to design a Markov chain Monte Carlo algorithm. For many problems there are natural Markov chains connecting the set of allowable configurations which are based on local moves, or "Glauber dynamics." Typically these single-site update algorithms are difficult to analyze, so often the Markov chain is modified to update several sites simultaneously. Recently there has been progress in analyzing these more complicated algorithms for several important combinatorial problems. In this work we use the comparison technique of Diaconis and Saloff-Coste to show that several of the natural single-point update algorithms are efficient. The strategy is to relate the mixing rate of these algorithms to the corresponding nonlocal algorithms which have already been analyzed. This allows us to give polynomial bounds for single-point update algorithms for problems such as generating planar tilings and random triangulations of c...
The Swendsen-Wang process does not always mix rapidly
 Proc. 29th ACM Symp. on Theory of Computing
, 1997
Abstract

Cited by 41 (3 self)
The Swendsen-Wang process provides one possible dynamics for the Q-state Potts model in statistical physics. Computer simulations of this process are widely used to estimate the expectations of various observables (random variables) of a Potts system in the equilibrium (or Gibbs) distribution. The legitimacy of such simulations depends on the rate of convergence of the process to equilibrium, often known as the mixing rate. Empirical observations suggest that the Swendsen-Wang process mixes rapidly in many instances of practical interest. In spite of this, we show that there are occasions on which the Swendsen-Wang process requires exponential time (in the size of the system) to approach equilibrium.
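For context, one step of the Swendsen-Wang dynamics for the Q-state Potts model proceeds by (i) keeping each edge whose endpoints currently agree with probability p = 1 - e^(-beta), and (ii) assigning every connected component of the kept edges a fresh uniform spin. The sketch below follows that standard description; the example graph, Q, and beta are illustrative choices, and the union-find helper is ours.

```python
import math
import random

def swendsen_wang_step(n, edges, spin, q, beta, rng):
    # (i) Bond percolation restricted to currently monochromatic edges.
    p = 1.0 - math.exp(-beta)
    kept = [(u, v) for (u, v) in edges
            if spin[u] == spin[v] and rng.random() < p]
    # Union-find to extract the connected components ("clusters").
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in kept:
        parent[find(u)] = find(v)
    # (ii) Recolour each cluster with an independent uniform spin.
    cluster_spin = {}
    for v in range(n):
        r = find(v)
        if r not in cluster_spin:
            cluster_spin[r] = rng.randrange(q)
        spin[v] = cluster_spin[r]
    return spin

# Example: a 2 x 2 grid (4 vertices, 4 edges), Q = 3, beta = 1.0.
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
rng = random.Random(0)
spin = [0, 0, 0, 0]
for _ in range(100):
    spin = swendsen_wang_step(4, edges, spin, 3, 1.0, rng)
```

Each step can flip large regions at once, which is why the process is empirically fast; the paper's point is that this is not guaranteed on all instances.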
A more rapidly mixing Markov chain for graph colourings
, 1997
Abstract

Cited by 40 (11 self)
We define a new Markov chain on (proper) k-colourings of graphs, and relate its convergence properties to the maximum degree Δ of the graph. The chain is shown to have bounds on convergence time appreciably better than those for the well-known Jerrum/Salas-Sokal chain in most circumstances. For the case k = 2Δ, we provide a dramatic decrease in running time. We also show improvements whenever the graph is regular, or fewer than 3Δ colours are used. The results are established using the method of path coupling. We indicate that our analysis is tight by showing that the couplings used are optimal in a sense which we define.

1 Introduction. Markov chains on the set of proper colourings of graphs have been studied in computer science [9] and statistical physics [13]. In both applications, the rapidity of convergence of the chain is the main focus of interest, though for somewhat different reasons. The papers [9, 13] introduced a simple Markov chain, which we shall refer to a...
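The simple single-site chain that this abstract compares against (pick a vertex and a colour uniformly at random; recolour if the colouring stays proper) can be sketched as follows. This is the well-known Jerrum/Salas-Sokal-style dynamics, not the paper's new chain; the example graph and choice of k are our assumptions.

```python
import random

def colouring_step(adj, colour, k, rng):
    # One Metropolis-style update: choose a vertex v and a candidate
    # colour c uniformly; recolour v with c only if no neighbour uses c.
    v = rng.randrange(len(adj))
    c = rng.randrange(k)
    if all(colour[u] != c for u in adj[v]):
        colour[v] = c
    return colour

# Example: the triangle K3 with k = 4 = 2 * Delta colours (Delta = 2),
# matching the k = 2*Delta regime mentioned in the abstract.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
colour = [0, 1, 2]                    # a proper starting colouring
rng = random.Random(0)
for _ in range(10_000):
    colour = colouring_step(adj, colour, 4, rng)
```

The update trivially preserves properness; path coupling arguments of the kind the abstract mentions bound how many such updates are needed before the colouring is close to uniform.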
Randomly Colouring Graphs With Lower Bounds on Girth and Maximum Degree
, 2001
Abstract

Cited by 34 (6 self)
We consider the problem of generating a random q-colouring of a graph G = (V, E).
Exact Sampling and Approximate Counting Techniques
Abstract

Cited by 32 (10 self)
We present two algorithms for uniformly sampling from the proper colorings of a graph using k colors. We use exact sampling from the stationary distribution of a Markov chain whose states are the k-colorings of a graph with maximum degree Δ. As opposed to approximate sampling algorithms based on rapid mixing, these algorithms have termination criteria that allow them to stop on some inputs much more quickly than the worst-case running time bound. For the first algorithm we show that, when k is sufficiently large relative to Δ, the expected running time has a polynomial upper bound; for the second algorithm we show that the running time is polynomial under a weaker condition relating k and Δ. Previously, Jerrum showed that it is possible to approximately sample uniformly in polynomial time from the set of k-colorings when k > 2Δ, but our algorithm is the first polynomial-time exact sampling algorithm for this problem. Using approximate sampling, Jerrum also showed how to approximately count the number of k-colorings. We give a new procedure for approximately counting the number of k-colorings that improves the running time of the procedure of Jerrum by a factor depending on the number of nodes and the number of edges of the graph to be colored. In addition, we present an improved analysis of the chain of Luby and Vigoda for exact sampling from the independent sets of a graph. Finally, we present the first polynomial-time method for exactly sampling from the sink-free orientations of a graph. Bubley and Dyer showed how to approximately sample from this state space in polynomial time; our algorithm takes polynomial expected time.
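"Exact sampling" here refers to algorithms, in the spirit of Propp-Wilson coupling from the past, whose termination certifies that the output has exactly the stationary distribution, rather than merely being close to it. The toy below illustrates the idea on a monotone random walk, not on the paper's colouring or independent-set chains; the state space and update rule are illustrative assumptions.

```python
import random

def cftp_uniform_walk(m, rng):
    # Monotone coupling from the past on {0, ..., m}: run one chain from
    # the bottom state 0 and one from the top state m, driven by the SAME
    # randomness from time -T to time 0.  If they coalesce, every possible
    # starting state would have produced the same value, which is then an
    # exact draw from the stationary (here: uniform) distribution.
    # Otherwise double T, reusing the old randomness for the recent steps.
    us = []                           # us[t] drives the step at time -(t+1)
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        lo, hi = 0, m
        for t in range(T - 1, -1, -1):    # apply the oldest step first
            step = 1 if us[t] >= 0.5 else -1
            lo = min(max(lo + step, 0), m)
            hi = min(max(hi + step, 0), m)
        if lo == hi:
            return lo
        T *= 2

rng = random.Random(42)
sample = cftp_uniform_walk(10, rng)   # an exact draw from uniform{0..10}
```

As in the paper's algorithms, the running time is random: the procedure stops as soon as coalescence is detected, which on easy inputs happens long before any worst-case bound.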
Mathematical foundations of the Markov chain Monte Carlo method
 in Probabilistic Methods for Algorithmic Discrete Mathematics
, 1998
Abstract

Cited by 30 (1 self)
Section 7.2 was jointly undertaken with Vivek Gore, and is published here for the first time. I also thank an anonymous referee for carefully reading and providing helpful comments on a draft of this chapter.

1. Introduction. The classical Monte Carlo method is an approach to estimating quantities that are hard to compute exactly. The quantity z of interest is expressed as the expectation z = Exp Z of a random variable (r.v.) Z for which some efficient sampling procedure is available. By taking the mean of some sufficiently large set of independent samples of Z, one may obtain an approximation to z. For example, suppose S = {(x, y) ∈ [0, 1]² : p_i(x, y) ≥ 0 for all i} ...
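The opening example can be made concrete: to estimate the area z of a region S inside the unit square, take Z to be the indicator that a uniform random point lands in S, so z is the expectation of Z, and average independent samples. The particular region below (a quarter disc, with true area pi/4) is our illustrative choice, not the chapter's.

```python
import random

def monte_carlo_area(in_region, n, rng):
    # Classical Monte Carlo: the fraction of n uniform points of the unit
    # square that land in the region is an unbiased estimate of its area.
    hits = sum(1 for _ in range(n)
               if in_region(rng.random(), rng.random()))
    return hits / n

rng = random.Random(0)
# Quarter disc {(x, y) in [0,1]^2 : x^2 + y^2 <= 1}, true area pi/4.
est = monte_carlo_area(lambda x, y: x * x + y * y <= 1.0, 100_000, rng)
```

Here sampling Z is trivial; the Markov chain Monte Carlo method addressed by this chapter handles the harder case where no direct sampler for the target distribution is available.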
Mixing Properties of the Swendsen-Wang Process on the Complete Graph and Narrow Grids
 in Proceedings of DIMACS Workshop on Statistical Physics Methods in Discrete Probability, Combinatorics and Theoretical Computer Science
, 2000
Abstract

Cited by 27 (0 self)
We consider the mixing properties of the Swendsen-Wang process for the 2-state Potts model, or Ising model, on the complete n-vertex graph Kn, and for the Q-state model on an a × n grid where a is bounded as n → ∞.