Results 1–10 of 112
The Power of Two Random Choices: A Survey of Techniques and Results
 in Handbook of Randomized Computing
, 2000
"... ITo motivate this survey, we begin with a simple problem that demonstrates a powerful fundamental idea. Suppose that n balls are thrown into n bins, with each ball choosing a bin independently and uniformly at random. Then the maximum load, or the largest number of balls in any bin, is approximately ..."
Abstract

Cited by 100 (2 self)
To motivate this survey, we begin with a simple problem that demonstrates a powerful fundamental idea. Suppose that n balls are thrown into n bins, with each ball choosing a bin independently and uniformly at random. Then the maximum load, or the largest number of balls in any bin, is approximately log n / log log n with high probability. Now suppose instead that the balls are placed sequentially, and each ball is placed in the least loaded of d ≥ 2 bins chosen independently and uniformly at random. Azar, Broder, Karlin, and Upfal showed that in this case, the maximum load is log log n / log d + Θ(1) with high probability [ABKU99]. The important implication of this result is that even a small amount of choice can lead to drastically different results in load balancing. Indeed, having just two random choices (i.e.,...
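The two-choice effect described in this abstract is easy to observe empirically. Below is a minimal simulation sketch (the function name and parameters are ours, not from the survey) that allocates balls with d uniform choices each and reports the maximum load:

```python
import random

def max_load(n_balls, n_bins, d, rng):
    """Place each ball into the least loaded of d bins chosen
    independently and uniformly at random; return the maximum load."""
    loads = [0] * n_bins
    for _ in range(n_balls):
        choices = [rng.randrange(n_bins) for _ in range(d)]
        best = min(choices, key=lambda b: loads[b])
        loads[best] += 1
    return max(loads)

# For large n, d = 2 gives a markedly smaller maximum load than d = 1.
```

Running this with n_balls = n_bins = 100000 illustrates the log n / log log n versus log log n / log 2 separation quoted above.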
Mixing times of lozenge tiling and card shuffling Markov chains
, 1997
"... Abstract. We show how to combine Fourier analysis with coupling arguments to bound the mixing times of a variety of Markov chains. The mixing time is the number of steps a Markov chain takes to approach its equilibrium distribution. One application is to a class of Markov chains introduced by Luby, ..."
Abstract

Cited by 69 (1 self)
Abstract. We show how to combine Fourier analysis with coupling arguments to bound the mixing times of a variety of Markov chains. The mixing time is the number of steps a Markov chain takes to approach its equilibrium distribution. One application is to a class of Markov chains introduced by Luby, Randall, and Sinclair to generate random tilings of regions by lozenges. For an ℓ×ℓ region we bound the mixing time by O(ℓ^4 log ℓ), which improves on the previous bound of O(ℓ^7), and we show the new bound to be essentially tight. In another application we resolve a few questions raised by Diaconis and Saloff-Coste by lower bounding the mixing time of various card-shuffling Markov chains. Our lower bounds are within a constant factor of their upper bounds. When we use our methods to modify a path-coupling analysis of Bubley and Dyer, we obtain an O(n^3 log n) upper bound on the mixing time of the Karzanov-Khachiyan Markov chain for linear extensions.
On Markov chains for independent sets
 Journal of Algorithms
, 1997
"... Random independent sets in graphs arise, for example, in statistical physics, in the hardcore model of a gas. A new rapidly mixing Markov chain for independent sets is defined in this paper. We show that it is rapidly mixing for a wider range of values of the parameter than the LubyVigoda chain, ..."
Abstract

Cited by 66 (16 self)
Random independent sets in graphs arise, for example, in statistical physics, in the hard-core model of a gas. A new rapidly mixing Markov chain for independent sets is defined in this paper. We show that it is rapidly mixing for a wider range of values of the parameter than the Luby-Vigoda chain, the best previously known. Moreover the new chain is apparently more rapidly mixing than the Luby-Vigoda chain for larger values of the parameter (unless the maximum degree of the graph is 4). An extension of the chain to independent sets in hypergraphs is described. This chain gives an efficient method for approximately counting the number of independent sets of hypergraphs with maximum degree two, or with maximum degree three and maximum edge size three. Finally, we describe a method which allows one, under certain circumstances, to deduce the rapid mixing of one Markov chain from the rapid mixing of another, with the same state space and stationary distribution. This method is applied to two Markov ch...
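For concreteness, the standard single-site (Glauber) dynamics for the hard-core model — the kind of local chain such papers build on and compare against — can be sketched as follows. The function name, the fugacity parameter `lam`, and the heat-bath update rule below are our illustrative choices, not the paper's new chain:

```python
import random

def hardcore_glauber(adj, lam, steps, rng):
    """Single-site Glauber dynamics for the hard-core model:
    pick a vertex; if no neighbour is occupied, occupy it with
    probability lam/(1+lam), otherwise leave it unoccupied."""
    n = len(adj)
    occupied = [False] * n
    for _ in range(steps):
        v = rng.randrange(n)
        if any(occupied[u] for u in adj[v]):
            occupied[v] = False   # move blocked: v must stay out
        else:
            occupied[v] = rng.random() < lam / (1 + lam)
    return {v for v in range(n) if occupied[v]}
```

The chain starts from the empty set and preserves independence at every step, so the state is always a valid independent set.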
Analyzing Glauber Dynamics by Comparison of Markov Chains
 Journal of Mathematical Physics
, 1999
"... A popular technique for studying random properties of a combinatorial set is to design a Markov chain Monte Carlo algorithm. For many problems there are natural Markov chains connecting the set of allowable configurations which are based on local moves, or "Glauber dynamics." Typically these single ..."
Abstract

Cited by 59 (15 self)
A popular technique for studying random properties of a combinatorial set is to design a Markov chain Monte Carlo algorithm. For many problems there are natural Markov chains connecting the set of allowable configurations which are based on local moves, or "Glauber dynamics." Typically these single-site update algorithms are difficult to analyze, so often the Markov chain is modified to update several sites simultaneously. Recently there has been progress in analyzing these more complicated algorithms for several important combinatorial problems. In this work we use the comparison technique of Diaconis and Saloff-Coste to show that several of the natural single-point update algorithms are efficient. The strategy is to relate the mixing rate of these algorithms to the corresponding nonlocal algorithms which have already been analyzed. This allows us to give polynomial bounds for single-point update algorithms for problems such as generating planar tilings and random triangulations of c...
BALANCED ALLOCATIONS: THE HEAVILY LOADED CASE
, 2006
"... We investigate ballsintobins processes allocating m balls into n bins based on the multiplechoice paradigm. In the classical singlechoice variant each ball is placed into a bin selected uniformly at random. In a multiplechoice process each ball can be placed into one out of d ≥ 2 randomly selec ..."
Abstract

Cited by 58 (8 self)
We investigate balls-into-bins processes allocating m balls into n bins based on the multiple-choice paradigm. In the classical single-choice variant each ball is placed into a bin selected uniformly at random. In a multiple-choice process each ball can be placed into one out of d ≥ 2 randomly selected bins. It is known that in many scenarios having more than one choice for each ball can improve the load balance significantly. Formal analyses of this phenomenon prior to this work considered mostly the lightly loaded case, that is, when m ≈ n. In this paper we present the first tight analysis in the heavily loaded case, that is, when m ≫ n rather than m ≈ n. The best previously known results for the multiple-choice processes in the heavily loaded case were obtained using majorization by the single-choice process. This yields an upper bound of the maximum load of bins of m/n + O(√(m ln n / n)) with high probability. We show, however, that the multiple-choice processes are fundamentally different from the single-choice variant in that they have “short memory.” The great consequence of this property is that the deviation of the multiple-choice processes from the optimal allocation (that is, the allocation in which each bin has either ⌊m/n⌋ or ⌈m/n⌉ balls) does not increase with the number of balls as in the case of the single-choice process. In particular, we investigate the allocation obtained by two different multiple-choice allocation schemes,
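The "short memory" phenomenon is easy to probe numerically: track the gap between the maximum load and the average m/n in the heavily loaded regime m ≫ n. A rough sketch (the function name and the sizes used are ours):

```python
import random

def load_gap(m, n, d, rng):
    """Allocate m balls into n bins with d uniform choices each;
    return the gap max_load - m/n above the average load."""
    loads = [0] * n
    for _ in range(m):
        choices = [rng.randrange(n) for _ in range(d)]
        best = min(choices, key=lambda b: loads[b])
        loads[best] += 1
    return max(loads) - m / n

# As m grows with n fixed, the gap for d = 1 grows roughly like
# sqrt(m ln n / n), while for d = 2 it stays roughly constant.
```

Comparing d = 1 and d = 2 at, say, m = 100n makes the difference between the two regimes visible directly.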
The SwendsenWang process does not always mix rapidly
 Proc. 29th ACM Symp. on Theory of Computing
, 1997
"... The SwendsenWang process provides one possible dynamics for the Qstate Potts model in statistical physics. Computer simulations of this process are widely used to estimate the expectations of various observables (random variables) of a Potts system in the equilibrium (or Gibbs) distribution. The l ..."
Abstract

Cited by 40 (3 self)
The Swendsen-Wang process provides one possible dynamics for the Q-state Potts model in statistical physics. Computer simulations of this process are widely used to estimate the expectations of various observables (random variables) of a Potts system in the equilibrium (or Gibbs) distribution. The legitimacy of such simulations depends on the rate of convergence of the process to equilibrium, often known as the mixing rate. Empirical observations suggest that the Swendsen-Wang process mixes rapidly in many instances of practical interest. In spite of this, we show that there are occasions on which the Swendsen-Wang process requires exponential time (in the size of the system) to approach equilibrium.
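For reference, one Swendsen-Wang update percolates monochromatic edges and then recolours the resulting clusters uniformly. A minimal sketch follows; the edge-list representation, coupling parameter `beta`, and union-find bookkeeping are our illustrative choices:

```python
import math
import random

def swendsen_wang_step(edges, spins, q, beta, rng):
    """One Swendsen-Wang update for the q-state Potts model:
    keep each monochromatic edge with probability 1 - exp(-beta),
    then give every connected component a fresh uniform spin."""
    n = len(spins)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    p_keep = 1.0 - math.exp(-beta)
    for u, v in edges:
        if spins[u] == spins[v] and rng.random() < p_keep:
            parent[find(u)] = find(v)      # merge the two clusters
    new_spin = {}
    for v in range(n):
        root = find(v)
        if root not in new_spin:
            new_spin[root] = rng.randrange(q)
        spins[v] = new_spin[root]
    return spins
```

The abstract's point is precisely that iterating this global update, despite its empirical speed, can take exponentially long to mix on some instances.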
A more rapidly mixing Markov chain for graph colourings
, 1997
"... We define a new Markov chain on (proper) kcolourings of graphs, and relate its convergence properties to the maximum degree \Delta of the graph. The chain is shown to have bounds on convergence time appreciably better than those for the wellknown Jerrum/SalasSokal chain in most circumstances. For ..."
Abstract

Cited by 39 (11 self)
We define a new Markov chain on (proper) k-colourings of graphs, and relate its convergence properties to the maximum degree Δ of the graph. The chain is shown to have bounds on convergence time appreciably better than those for the well-known Jerrum/Salas-Sokal chain in most circumstances. For the case k = 2Δ, we provide a dramatic decrease in running time. We also show improvements whenever the graph is regular, or fewer than 3Δ colours are used. The results are established using the method of path coupling. We indicate that our analysis is tight by showing that the couplings used are optimal in a sense which we define.

1 Introduction

Markov chains on the set of proper colourings of graphs have been studied in computer science [9] and statistical physics [13]. In both applications, the rapidity of convergence of the chain is the main focus of interest, though for somewhat different reasons. The papers [9, 13] introduced a simple Markov chain, which we shall refer to a...
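The simple single-site chain that such papers improve upon picks a vertex and a colour and recolours whenever the result stays proper. A sketch of that baseline chain (the greedy initialization and names are ours; it assumes k exceeds the maximum degree):

```python
import random

def colouring_glauber(adj, k, steps, rng):
    """Single-site Metropolis chain on proper k-colourings:
    pick a vertex and a colour; recolour if the colouring stays proper."""
    n = len(adj)
    # Greedy initial proper colouring (valid when k > max degree).
    col = [0] * n
    for v in range(n):
        used = {col[u] for u in adj[v] if u < v}
        col[v] = next(c for c in range(k) if c not in used)
    for _ in range(steps):
        v = rng.randrange(n)
        c = rng.randrange(k)
        if all(col[u] != c for u in adj[v]):
            col[v] = c
    return col
```

Every accepted move keeps the colouring proper, so the chain walks within the set of proper k-colourings throughout.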
Randomly Colouring Graphs With Lower Bounds on Girth and Maximum Degree
, 2001
"... We consider the problem of generating a random qcolouring of a graph G = (V, E). ..."
Abstract

Cited by 34 (6 self)
We consider the problem of generating a random q-colouring of a graph G = (V, E).
Mathematical foundations of the Markov chain Monte Carlo method
 in Probabilistic Methods for Algorithmic Discrete Mathematics
, 1998
"... 7.2 was jointly undertaken with Vivek Gore, and is published here for the first time. I also thank an anonymous referee for carefully reading and providing helpful comments on a draft of this chapter. 1. Introduction The classical Monte Carlo method is an approach to estimating quantities that a ..."
Abstract

Cited by 30 (1 self)
7.2 was jointly undertaken with Vivek Gore, and is published here for the first time. I also thank an anonymous referee for carefully reading and providing helpful comments on a draft of this chapter.

1. Introduction

The classical Monte Carlo method is an approach to estimating quantities that are hard to compute exactly. The quantity z of interest is expressed as the expectation z = E[Z] of a random variable (r.v.) Z for which some efficient sampling procedure is available. By taking the mean of some sufficiently large set of independent samples of Z, one may obtain an approximation to z. For example, suppose S = {(x, y) ∈ [0, 1]^2 : p_i(x, y) ≥ 0 for all i}
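The indicator-variable estimator this passage describes fits in a few lines. Here we estimate the area of a quarter disc as an example region of our choosing, with p(x, y) = 1 - x² - y² ≥ 0 defining S:

```python
import random

def mc_area(indicator, samples, rng):
    """Estimate the area of a region S in [0,1]^2 as the sample mean
    of the indicator Z = 1[(x, y) in S] over uniform random points."""
    hits = sum(indicator(rng.random(), rng.random()) for _ in range(samples))
    return hits / samples

# Quarter disc: x^2 + y^2 <= 1 inside the unit square; true area is pi/4.
est = mc_area(lambda x, y: x * x + y * y <= 1.0, 100_000, random.Random(0))
```

With 100,000 samples the estimate concentrates around π/4 ≈ 0.785, with standard deviation on the order of 0.001.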
Sampling Adsorbing Staircase Walks Using a New Markov Chain Decomposition Method
 In Proceedings of the 41st Annual Symposium on Foundations of Computer Science
"... Staircase walks are lattice paths from (0; 0) to (2n; 0) which take diagonal steps and which never fall below the xaxis. A path hitting the xaxis k times is assigned a weight of k ; where ? 0 . A simple local Markov chain which connects the state space and converges to the Gibbs measure (which ..."
Abstract

Cited by 29 (5 self)
Staircase walks are lattice paths from (0, 0) to (2n, 0) which take diagonal steps and which never fall below the x-axis. A path hitting the x-axis k times is assigned a weight of λ^k, where λ > 0. A simple local Markov chain which connects the state space and converges to the Gibbs measure (which normalizes these weights) is known to be rapidly mixing when λ = 1, and can easily be shown to be rapidly mixing when λ < 1. We give the first proof that this Markov chain is also mixing in the more interesting case of λ > 1, known in the statistical physics community as adsorbing staircase walks. The main new ingredient is a decomposition technique which allows us to analyze the Markov chain in pieces, applying different arguments to analyze each piece.

1. Introduction

1.1. The model

Staircase walks (also called Dyck paths) are walks in Z^2 from (0, 0) to (n, n) which stay above the diagonal x = y. Rotating by 45°, they correspond to walks from (0, 0) to (2n, 0) which take diagon...
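A minimal version of a weighted local chain on these walks can be sketched as follows. The height-array representation, the corner-flip move, and the Metropolis acceptance in the number of axis contacts are our illustrative choices; the paper's chain may differ in details:

```python
import random

def adsorbing_walk_chain(n, lam, steps, rng):
    """Metropolis chain on walks from (0,0) to (2n,0) staying
    non-negative, weighted by lam**(number of axis contacts).
    h[j] is the height after j steps; one move flips a corner."""
    h = [j if j <= n else 2 * n - j for j in range(2 * n + 1)]  # one big mountain
    for _ in range(steps):
        j = rng.randrange(1, 2 * n)
        if h[j - 1] != h[j + 1]:
            continue                       # no corner to flip at j
        if h[j] < h[j - 1]:                # local minimum -> maximum
            new, delta = h[j] + 2, (-1 if h[j] == 0 else 0)
        else:                              # local maximum -> minimum
            new = h[j] - 2
            if new < 0:
                continue                   # must stay above the axis
            delta = 1 if new == 0 else 0   # gains an axis contact
        if rng.random() < min(1.0, lam ** delta):
            h[j] = new
    return h
```

For λ > 1 (the adsorbing case) moves that create an axis contact are always accepted, while moves that destroy one are accepted with probability 1/λ, so the stationary distribution is proportional to λ^k as in the abstract.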