Results 1–10 of 44
The Markov Chain Monte Carlo method: an approach to approximate counting and integration
, 1996
Cited by 286 (12 self)
In the area of statistical physics, Monte Carlo algorithms based on Markov chain simulation have been in use for many years. The validity of these algorithms depends crucially on the rate of convergence to equilibrium of the Markov chain being simulated. Unfortunately, the classical theory of stochastic processes hardly touches on the sort of nonasymptotic analysis required in this application. As a consequence, it had previously not been possible to make useful, mathematically rigorous statements about the quality of the estimates obtained. Within the last ten years, analytical tools have been devised with the aim of correcting this deficiency. As well as permitting the analysis of Monte Carlo algorithms for classical problems in statistical physics, the introduction of these tools has spurred the development of new approximation algorithms for a wider class of problems in combinatorial enumeration and optimization. The “Markov chain Monte Carlo” method has been applied to a variety of such problems, and often provides the only known efficient (i.e., polynomial time) solution technique.
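The core loop the abstract alludes to — simulate a Markov chain whose stationary distribution is the target, then read off empirical frequencies — can be sketched with a tiny Metropolis chain (an illustrative toy, not any algorithm from the paper; the state space, weights, and step count are made up):

```python
import random

def metropolis_chain(weights, steps, seed=0):
    """Metropolis chain on states 0..n-1 with target proportional to weights.

    Proposal: move to state +/- 1 (clamped at the ends, which keeps the
    proposal symmetric between distinct states); accept with probability
    min(1, w(proposed) / w(current)).
    """
    rng = random.Random(seed)
    n = len(weights)
    state = 0
    counts = [0] * n
    for _ in range(steps):
        prop = min(max(state + rng.choice((-1, 1)), 0), n - 1)
        if rng.random() < min(1.0, weights[prop] / weights[state]):
            state = prop
        counts[state] += 1
    return [c / steps for c in counts]

# Target distribution proportional to [1, 2, 3, 4], i.e. [0.1, 0.2, 0.3, 0.4].
freqs = metropolis_chain([1, 2, 3, 4], steps=200_000)
```

As the abstract stresses, the practical question is how many steps are needed before `freqs` is trustworthy — exactly the nonasymptotic convergence-rate question the survey addresses.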
Fastest Mixing Markov Chain on a Graph
 SIAM Review
Cited by 157 (16 self)
Author names in alphabetical order. Submitted to SIAM Review, problems and techniques section. We consider a symmetric random walk on a connected graph, where each edge is labeled with the probability of transition between the two adjacent vertices. The associated Markov chain has a uniform equilibrium distribution; the rate of convergence to this distribution, i.e., the mixing rate of the Markov chain, is determined by the second largest (in magnitude) eigenvalue of the transition matrix. In this paper we address the problem of assigning probabilities to the edges of the graph in such a way as to minimize the second largest magnitude eigenvalue, i.e., the problem of finding the fastest mixing Markov chain on the graph. We show that this problem can be formulated as a convex optimization problem, which can in turn be expressed as a semidefinite program (SDP). This allows us to easily compute the (globally) fastest mixing Markov chain for any graph with a modest number of edges (say, 1000) using standard numerical methods for SDPs. Larger problems can be solved by ...
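The quantity being minimized — the second largest eigenvalue modulus (SLEM) of the transition matrix — is easy to compute numerically for a fixed edge-weight assignment (a minimal numerical check, not the SDP formulation from the paper; the example graph is made up):

```python
import numpy as np

def slem(P):
    """Second largest eigenvalue modulus of a symmetric transition matrix."""
    return np.sort(np.abs(np.linalg.eigvalsh(P)))[::-1][1]

# Symmetric random walk on a 4-cycle: each neighbour with probability 1/2.
P_cycle = np.array([[0., .5, 0., .5],
                    [.5, 0., .5, 0.],
                    [0., .5, 0., .5],
                    [.5, 0., .5, 0.]])

# The pure walk is periodic (SLEM = 1, so it never converges); adding a
# holding probability of 1/2 at every vertex makes it mix (SLEM = 1/2).
P_lazy = 0.5 * np.eye(4) + 0.5 * P_cycle
```

The paper's contribution is optimizing over all such weight assignments at once; this sketch only evaluates the objective for two fixed choices.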
Some Applications of Laplace Eigenvalues of Graphs
 Graph Symmetry: Algebraic Methods and Applications, Volume 497 of NATO ASI Series C
, 1997
Cited by 129 (0 self)
In the last decade important relations between Laplace eigenvalues and eigenvectors of graphs and several other graph parameters were discovered. In these notes we present some of these results and discuss their consequences. Attention is given to the partition and the isoperimetric properties of graphs, the maxcut problem and its relation to semidefinite programming, rapid mixing of Markov chains, and to extensions of the results to infinite graphs.
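The connection between Laplace eigenvalues and graph structure mentioned here can be checked numerically on a small example: the multiplicity of the zero eigenvalue equals the number of connected components, and the second-smallest eigenvalue (the algebraic connectivity) is positive exactly when the graph is connected (a toy illustration; the graphs are made up):

```python
import numpy as np

def laplacian_spectrum(n, edges):
    """Eigenvalues of the graph Laplacian L = D - A, sorted ascending."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))

path = laplacian_spectrum(3, [(0, 1), (1, 2)])   # connected: one zero eigenvalue
split = laplacian_spectrum(3, [(0, 1)])          # two components: two zeros
```

For the path on three vertices the spectrum is 0, 1, 3; for the graph with an isolated vertex it is 0, 0, 2.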
Markov Chain Algorithms for Planar Lattice Structures
, 1995
Cited by 110 (11 self)
Consider the following Markov chain, whose states are all domino tilings of a 2n × 2n chessboard: starting from some arbitrary tiling, pick a 2 × 2 window uniformly at random. If the four squares appearing in this window are covered by two parallel dominoes, rotate the dominoes 90° in place. Repeat many times. This process is used in practice to generate a random tiling, and is a widely used tool in the study of the combinatorics of tilings and the behavior of dimer systems in statistical physics. Analogous Markov chains are used to randomly generate other structures on various two-dimensional lattices. This paper presents techniques which prove for the first time that, in many interesting cases, a small number of random moves suffice to obtain a uniform distribution.
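The rotation chain described in the abstract is concrete enough to sketch directly (an illustrative toy implementation under the stated rules; representing a tiling as a cell-to-partner map is our own choice, not the paper's):

```python
import random

def random_tiling(n, steps, seed=0):
    """Run the rotation chain on domino tilings of a 2n x 2n board.

    A tiling is a dict mapping each cell (r, c) to the other cell its domino
    covers. Start from the all-horizontal tiling, then repeatedly pick a
    random 2x2 window and rotate it if it holds two parallel dominoes.
    """
    rng = random.Random(seed)
    side = 2 * n
    tiling = {}
    for r in range(side):
        for c in range(0, side, 2):
            tiling[(r, c)], tiling[(r, c + 1)] = (r, c + 1), (r, c)
    for _ in range(steps):
        r, c = rng.randrange(side - 1), rng.randrange(side - 1)
        a, b, d, e = (r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)
        if tiling[a] == b and tiling[d] == e:     # two horizontals -> verticals
            tiling[a], tiling[d] = d, a
            tiling[b], tiling[e] = e, b
        elif tiling[a] == d and tiling[b] == e:   # two verticals -> horizontals
            tiling[a], tiling[b] = b, a
            tiling[d], tiling[e] = e, d
    return tiling

t = random_tiling(2, steps=1000)
```

The paper's result is precisely a bound on how large `steps` must be before the output is close to uniform over all tilings.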
Testing that distributions are close
 In IEEE Symposium on Foundations of Computer Science
, 2000
Cited by 101 (18 self)
Given two distributions over an n-element set, we wish to check whether these distributions are statistically close by only sampling. We give a sublinear algorithm which uses O(n^(2/3) ε^(-4) log n) independent samples from each distribution, runs in time linear in the sample size, makes no assumptions about the structure of the distributions, and distinguishes the cases when the distance between the distributions is small (less than max(ε²/(32∛n), ε/(4√n))) or large (more than ε) in L1 distance. We also give an Ω(n^(2/3) ε^(-2/3)) lower bound. Our algorithm has applications to the problem of checking whether a given Markov process is rapidly mixing. We develop sublinear algorithms for this problem as well.
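For contrast with the paper's sublinear test, here is the naive plug-in estimate of L1 distance from samples — the baseline such algorithms improve on (the distributions and sample sizes below are made up for illustration):

```python
import random
from collections import Counter

def empirical_l1(samples_p, samples_q, domain_size):
    """Plug-in estimate of the L1 distance between two distributions from
    equal-size sample sets (not the sublinear test from the paper, just the
    naive baseline: compare empirical frequencies bin by bin)."""
    cp, cq = Counter(samples_p), Counter(samples_q)
    m = len(samples_p)
    return sum(abs(cp[x] - cq[x]) / m for x in range(domain_size))

rng = random.Random(0)
# Two sample sets from the same uniform distribution on {0, 1, 2, 3} ...
same = empirical_l1([rng.randrange(4) for _ in range(20_000)],
                    [rng.randrange(4) for _ in range(20_000)], 4)
# ... versus uniform against a distribution that puts mass 0.4 on 0
# (true L1 distance: |0.25-0.4| + 3*|0.25-0.2| = 0.3).
diff = empirical_l1([rng.randrange(4) for _ in range(20_000)],
                    [rng.choice([0, 0, 1, 2, 3]) for _ in range(20_000)], 4)
```

The plug-in estimator needs sample sizes that grow linearly with the domain; the paper's point is that closeness can be tested with only O(n^(2/3)) samples.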
The Complexity of Counting in Sparse, Regular, and Planar Graphs
 SIAM Journal on Computing
, 1997
Cited by 90 (0 self)
We show that a number of graph-theoretic counting problems remain NP-hard, indeed #P-complete, in very restricted classes of graphs. In particular, it is shown that the problems of counting matchings, vertex covers, independent sets, and extremal variants of these all remain hard when restricted to planar bipartite graphs of bounded degree or regular graphs of constant degree. To achieve these results, a new interpolation-based reduction technique which preserves properties such as constant degree is introduced. In addition, the problem of approximately counting minimum cardinality vertex covers is shown to remain NP-hard even when restricted to graphs of maximum degree 3. Previously, restricted-case complexity results for counting problems were elusive; we believe our techniques may help obtain similar results for many other counting problems.
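The flavor of the counting problems shown hard here can be seen in the brute-force exponential-time counter that such hardness results say cannot be improved to polynomial time (unless the complexity classes collapse); a toy version with made-up example graphs:

```python
from itertools import combinations

def count_independent_sets(n, edges):
    """Count independent sets in a graph on vertices 0..n-1 by enumerating
    all 2^n vertex subsets (exponential time; the paper shows the problem
    is #P-complete even on very restricted graph classes)."""
    edge_set = {frozenset(e) for e in edges}
    total = 0
    for mask in range(1 << n):
        verts = [v for v in range(n) if mask >> v & 1]
        if all(frozenset((u, v)) not in edge_set
               for u, v in combinations(verts, 2)):
            total += 1
    return total

tri = count_independent_sets(3, [(0, 1), (1, 2), (0, 2)])  # {}, {0}, {1}, {2}
path = count_independent_sets(3, [(0, 1), (1, 2)])         # also {0, 2}
```

Counting matchings or vertex covers admits the same exhaustive pattern, and the same barrier.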
A Randomized Fully Polynomial Time Approximation Scheme for the All Terminal Network Reliability Problem
, 1997
Cited by 86 (2 self)
The classic all-terminal network reliability problem posits a graph, each of whose edges fails (disappears) independently with some given probability. The goal is to determine the probability that the network becomes disconnected due to edge failures. The practical applications of this question to communication networks are obvious, and the problem has therefore been the subject of a great deal of study. Since it is #P-complete, and thus believed hard to solve exactly, a great deal of research has been devoted to estimating the failure probability. A comprehensive survey can be found in [Col87]. The first author recently presented an algorithm for approximating the probability of network disconnection under random edge failures. In this paper, we report on our experience implementing this algorithm. Our implementation shows that the algorithm is practical on networks of moderate size, and indeed works better than the theoretical bounds predict. Part of this improvement arises from heuristic modifications to the theoretical algorithm, while another part suggests that the theoretical running time analysis of the algorithm might not be tight. Based on our observation of the implementation, we were able to devise analytic explanations of at least some of the improved performance. As one example, we formally prove the accuracy of a simple heuristic approximation for the reliability. We also discuss other questions raised by the implementation which might be susceptible to analysis.
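The naive Monte Carlo baseline for this problem — sample edge failures directly and count disconnections — is easy to sketch (this is not the FPRAS the paper implements, which is designed precisely for the regime where naive sampling fails because disconnection is rare; the example graph is made up):

```python
import random

def disconnection_prob(n, edges, fail_p, trials, seed=0):
    """Crude Monte Carlo estimate of all-terminal unreliability: sample
    independent edge failures, check connectivity of the survivors with
    union-find, and count how often the graph falls apart."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        comps = n
        for u, v in edges:
            if rng.random() >= fail_p:         # edge survives this trial
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    comps -= 1
        if comps > 1:
            hits += 1
    return hits / trials

# Triangle with fail_p = 1/2: disconnected iff at least 2 of 3 edges fail,
# so the true answer is 3*(1/8) + 1/8 = 1/2.
est = disconnection_prob(3, [(0, 1), (1, 2), (0, 2)], fail_p=0.5, trials=100_000)
```

When the true disconnection probability is tiny, this estimator needs astronomically many trials to see even one disconnection, which is the motivation for the approximation scheme the paper studies.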
Analyzing Glauber Dynamics by Comparison of Markov Chains
 Journal of Mathematical Physics
, 1999
Cited by 71 (16 self)
A popular technique for studying random properties of a combinatorial set is to design a Markov chain Monte Carlo algorithm. For many problems there are natural Markov chains connecting the set of allowable configurations which are based on local moves, or "Glauber dynamics." Typically these single site update algorithms are difficult to analyze, so often the Markov chain is modified to update several sites simultaneously. Recently there has been progress in analyzing these more complicated algorithms for several important combinatorial problems. In this work we use the comparison technique of Diaconis and Saloff-Coste to show that several of the natural single point update algorithms are efficient. The strategy is to relate the mixing rate of these algorithms to the corresponding nonlocal algorithms which have already been analyzed. This allows us to give polynomial bounds for single point update algorithms for problems such as generating planar tilings and random triangulations of c...
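A single-site Glauber update of the kind the abstract calls hard to analyze can be sketched for the Ising model (a standard textbook dynamics used purely as an illustration, not one of the paper's specific chains; the grid size and temperature are made up):

```python
import math
import random

def glauber_ising(n, beta, steps, seed=0):
    """Single-site Glauber dynamics for the Ising model on an n x n grid
    with free boundary: pick a site uniformly at random, then resample its
    spin from the conditional Gibbs distribution given its neighbours,
    P(+1 | neighbours) = 1 / (1 + exp(-2 * beta * sum_of_neighbour_spins)).
    """
    rng = random.Random(seed)
    spin = [[1] * n for _ in range(n)]
    for _ in range(steps):
        r, c = rng.randrange(n), rng.randrange(n)
        s = sum(spin[r2][c2]
                for r2, c2 in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= r2 < n and 0 <= c2 < n)
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * s))
        spin[r][c] = 1 if rng.random() < p_plus else -1
    return spin

grid = glauber_ising(8, beta=0.3, steps=20_000)
```

The comparison technique mentioned in the abstract bounds the mixing time of a local chain like this one in terms of a nonlocal chain whose mixing time is already known.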
The Swendsen-Wang process does not always mix rapidly
 Proc. 29th ACM Symp. on Theory of Computing
, 1997
Cited by 51 (4 self)
The Swendsen-Wang process provides one possible dynamics for the Q-state Potts model in statistical physics. Computer simulations of this process are widely used to estimate the expectations of various observables (random variables) of a Potts system in the equilibrium (or Gibbs) distribution. The legitimacy of such simulations depends on the rate of convergence of the process to equilibrium, often known as the mixing rate. Empirical observations suggest that the Swendsen-Wang process mixes rapidly in many instances of practical interest. In spite of this, we show that there are occasions on which the Swendsen-Wang process requires exponential time (in the size of the system) to approach equilibrium.
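One step of the Swendsen-Wang process can be sketched from its standard description — open each same-spin bond independently with probability 1 − e^(−β), then give every resulting cluster a fresh uniform spin (a minimal illustration; the graph and parameters below are made up):

```python
import math
import random

def swendsen_wang_step(spin, edges, q, beta, rng):
    """One Swendsen-Wang update for the q-state Potts model: open same-spin
    edges with probability 1 - exp(-beta), find clusters with union-find,
    and assign each cluster a uniformly random new spin in 0..q-1."""
    n = len(spin)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    p_open = 1.0 - math.exp(-beta)
    for u, v in edges:
        if spin[u] == spin[v] and rng.random() < p_open:
            parent[find(u)] = find(v)
    new_label = {}
    return [new_label.setdefault(find(v), rng.randrange(q)) for v in range(n)]

rng = random.Random(0)
spin = [0] * 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4-cycle, Q = 3 Potts model
for _ in range(100):
    spin = swendsen_wang_step(spin, edges, q=3, beta=1.0, rng=rng)
```

Because whole clusters flip at once, the process often mixes far faster than single-spin dynamics in practice; the paper's contribution is exhibiting instances where even this cluster dynamics takes exponential time.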