## Conductance and Rapidly Mixing Markov Chains (2003)

### BibTeX

```bibtex
@MISC{King03conductanceand,
  author = {Jamie King},
  title  = {Conductance and Rapidly Mixing Markov Chains},
  year   = {2003}
}
```

### Abstract

Conductance is a measure of a Markov chain that quantifies its tendency to circulate among its states. A Markov chain with low conductance will tend to get ‘stuck’ in a subset of its states, whereas one with high conductance will move around its state space more freely. The mixing time of a Markov chain is the number of steps required for the chain to approach its stationary distribution. There is an inverse correlation between conductance and mixing time. Rapidly mixing Markov chains have very powerful applications, most notably in approximation schemes. It is therefore desirable to prove that certain Markov chains are rapidly mixing, and bounds involving conductance can help us do that. This survey covers many useful bounds involving conductance and gives several specific examples of applications of rapidly mixing Markov chains.
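To make the definition concrete, the sketch below computes conductance by brute force over all state subsets, using the standard formula Φ = min over S with π(S) ≤ 1/2 of Q(S, S̄)/π(S), where Q(i, j) = π_i p_ij is the ergodic flow. The lazy walk on a 4-cycle is an invented toy example, not one taken from the survey.

```python
# Brute-force conductance of a small reversible Markov chain.
# The lazy walk on a 4-cycle below is an invented toy example.
import itertools
import numpy as np

def conductance(P, pi):
    """Phi = min over S with pi(S) <= 1/2 of Q(S, S-bar)/pi(S),
    where Q(i, j) = pi_i * p_ij is the ergodic flow."""
    n = len(pi)
    Q = pi[:, None] * P          # Q[i, j] = pi_i * p_ij
    best = np.inf
    for r in range(1, n):
        for S in map(set, itertools.combinations(range(n), r)):
            piS = sum(pi[i] for i in S)
            if piS > 0.5:
                continue
            flow = sum(Q[i, j] for i in S for j in range(n) if j not in S)
            best = min(best, flow / piS)
    return best

# Lazy walk on a 4-cycle: stay with prob. 1/2, step to a neighbour with 1/4.
P = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
pi = np.full(4, 0.25)  # uniform stationary distribution

print(conductance(P, pi))  # -> 0.25 (worst cut: two adjacent states)
```

Brute force is exponential in the number of states, which is exactly why the bounds surveyed here matter: they let one bound Φ without enumerating subsets.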

### Citations

410 | Exact sampling with coupled Markov chains and applications to statistical mechanics. Random Structures and Algorithms 9
- Propp, Wilson
- 1996
Citation Context: ...once such a Markov chain has been devised, it is simply a matter of running the chain until the state distribution approaches π. This is known as the Markov chain Monte Carlo method. Propp and Wilson [PW96] provide several techniques for reducing the number of steps τ for which the Markov chain must run, even when τ is initially unknown. Efficient stopping rules such as those discussed in [AD86] and [LW...

329 | Eigenvalues and expanders
- Alon
- 1986
Citation Context: ...will show later, faster mixing. This is in accordance with the result we saw before that a second eigenvalue far from 1 leads to a small relative pointwise distance and therefore faster mixing. Alon [Alo86] discusses bounds involving eigenvalues in much greater detail. Combining and manipulating our bounds, we can obtain the following characterisation of ∆(t) in terms of Φ: (1 − 2Φ)^t ≤ ∆(t) ≤ (1 − Φ²/2)^t...
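The lower envelope of this characterisation is easy to check numerically. The sketch below uses an invented toy chain (a lazy walk on a 4-cycle, with its conductance Φ = 1/4 hard-coded from a brute-force computation) and verifies only the lower bound (1 − 2Φ)^t ≤ ∆(t); it is an illustration, not anything computed in the survey itself.

```python
# Check the lower envelope (1 - 2*Phi)^t <= Delta(t) on a toy chain.
import numpy as np

P = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])   # lazy walk on a 4-cycle (invented)
pi = np.full(4, 0.25)                      # uniform stationary distribution
phi = 0.25                                 # conductance (brute-force value)

def delta(t):
    """Relative pointwise distance max_{i,j} |p^(t)_ij - pi_j| / pi_j."""
    Pt = np.linalg.matrix_power(P, t)
    return np.max(np.abs(Pt - pi) / pi)

for t in range(1, 11):
    assert (1 - 2 * phi) ** t <= delta(t) + 1e-12
print(delta(1), delta(5))  # -> 1.0 0.0625
```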

299 | Approximating the permanent
- Jerrum, Sinclair
- 1989
Citation Context: ...a weighted average of worst conductances from different sized subsets. A very strong tool for bounding Markov chain conductance, introduced by Jerrum and Sinclair in several papers including [JS88], [JS89], and [SJ89], is the notion of canonical paths. A set of canonical paths is essentially a family of simple paths in the underlying graph of a Markov chain that includes a path between each pair of dis...

277 | Geometric bounds for eigenvalues of Markov chains
- Diaconis, Stroock
- 1991
Citation Context: ...finding a decent set of canonical paths can improve our lower bound on Φ, thus improving our upper bound on mixing time. This bound leads directly to the bound λ2 ≤ 1 − 1/(8ρ²). Diaconis and Stroock [DS91] had previously obtained a better bound for λ2 by taking into account the lengths of the paths γij in their calculation of ρ; Sinclair improved on their bound by changing the way path length was consi...
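To make the congestion quantity ρ concrete, here is a sketch (an invented toy example, not from the survey) that computes ρ = max over edges e of (1/Q(e)) · Σ π_i π_j over canonical paths through e, for shortest-path canonical paths on the lazy 4-cycle, and then checks λ2 ≤ 1 − 1/(8ρ²). The clockwise tie-breaking rule for opposite states is an arbitrary choice.

```python
# Canonical-path congestion rho on the lazy 4-cycle (invented example).
import numpy as np

n = 4
P = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
pi = np.full(n, 0.25)  # uniform stationary distribution

def canonical_path(i, j):
    """Shortest path i -> j along the cycle, ties broken clockwise."""
    cw, ccw = (j - i) % n, (i - j) % n
    step = 1 if cw <= ccw else -1
    path, cur = [], i
    while cur != j:
        path.append((cur, (cur + step) % n))
        cur = (cur + step) % n
    return path

# Total pi_i * pi_j load routed through each directed edge.
load = {}
for i in range(n):
    for j in range(n):
        if i != j:
            for e in canonical_path(i, j):
                load[e] = load.get(e, 0.0) + pi[i] * pi[j]

# rho = worst ratio of routed load to ergodic flow Q(e) = pi_u * p_uv.
rho = max(l / (pi[u] * P[u, v]) for (u, v), l in load.items())
print(rho)  # -> 3.0 for this chain

lam2 = np.sort(np.linalg.eigvalsh(P))[-2]   # second-largest eigenvalue
assert lam2 <= 1 - 1 / (8 * rho ** 2)       # the bound above holds
```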

259 | Approximate counting, uniform generation and rapidly mixing Markov chains. Inform. and Comput.
- Sinclair, Jerrum
- 1989
Citation Context: ...of interest. The greater the distance between λ2 and 1, the faster a Markov chain mixes. This distance is often referred to as the spectral gap. The following inequality is due to Sinclair and Jerrum [SJ89]: ∆(t) ≤ λ2^t / π0. The second eigenvalue λ2 of a Markov chain is also guaranteed to satisfy the following bound involving its conductance: 1 − 2Φ ≤ λ2 ≤ 1 − Φ²/2. From these inequalities we can...
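These inequalities can be checked numerically. The sketch below again uses the invented lazy 4-cycle (conductance Φ = 1/4 hard-coded from brute force) and takes π0 to be the smallest stationary probability; it is an illustration of the stated bounds, not a computation from the survey.

```python
# Verify 1 - 2*Phi <= lambda_2 <= 1 - Phi^2/2 and Delta(t) <= lambda_2^t / pi0
# on a toy chain (invented example).
import numpy as np

P = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])   # lazy walk on a 4-cycle
pi = np.full(4, 0.25)
phi = 0.25                                 # conductance (brute-force value)
pi0 = pi.min()                             # smallest stationary probability

lam2 = np.sort(np.linalg.eigvalsh(P))[-2]  # P is symmetric, eigvalsh applies
assert 1 - 2 * phi <= lam2 + 1e-12         # lower half of the sandwich
assert lam2 <= 1 - phi ** 2 / 2 + 1e-12    # upper half of the sandwich

def delta(t):
    Pt = np.linalg.matrix_power(P, t)
    return np.max(np.abs(Pt - pi) / pi)

for t in range(1, 11):
    assert delta(t) <= lam2 ** t / pi0 + 1e-12
print(lam2)  # second eigenvalue; the spectral gap is 1 - lam2
```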

224 | Random walks on finite groups and rapidly mixing Markov chains. Seminar on Probability
- Aldous
- 1983
Citation Context: ...tools and gives motivation for proving that certain chains are rapidly mixing. Many of the early results in the field of rapidly mixing Markov chains are due to Aldous in the early to mid 80s [Ald82], [Ald83]. At this point we will introduce the notation used in this survey. We will use V to denote the set of states of a Markov chain. We also have the transition matrix P = {p_ij}, i,j ∈ V, of a Markov chain suc...

178 | Improved bounds for mixing rates of Markov chains and multicommodity flow
- Sinclair
- 1992
Citation Context: ...useful measure of the distance from the stationary distribution. In particular, giving an upper bound for ∆(t) can demonstrate that a Markov chain mixes fast enough for certain purposes. Sinclair [Sin92] formally defines a related rate of convergence τi(ε) = min{t : ∀t′ ≥ t, ∆i(t′) ≤ ε}, where the subscript i denotes that i is the initial state. The more general and useful rate of convergence τ...
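Under this definition, the rate of convergence can be computed directly for a small chain by searching for the first t after which ∆(t′) stays below ε. The sketch below does so for the invented lazy 4-cycle, checking the "for all t′ ≥ t" condition only up to a finite horizon.

```python
# Compute tau(eps) = min{t : Delta(t') <= eps for all t' >= t} on a toy chain.
import numpy as np

P = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])   # lazy walk on a 4-cycle (invented)
pi = np.full(4, 0.25)

def delta(t):
    Pt = np.linalg.matrix_power(P, t)
    return np.max(np.abs(Pt - pi) / pi)

def tau(eps, t_max=60):
    # The "for all t' >= t" condition is only checked up to t_max here.
    ds = [delta(t) for t in range(t_max + 1)]
    for t in range(t_max + 1):
        if all(d <= eps for d in ds[t:]):
            return t
    raise ValueError("did not reach eps within t_max steps")

print(tau(0.01))  # -> 8 for this chain
```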

147 | A randomized polynomial time algorithm for approximating the volume of convex bodies
- Dyer, Frieze, et al.
- 1989
Citation Context: ...the volume. The only known approximation scheme that is polynomial in both n and 1/ε, where ε is the maximum acceptable error, comes from the Markov chain Monte Carlo technique. Dyer, Frieze, and Kannan [DFK91] give an approximation algorithm that runs in O(n¹⁹) time. Kannan, Lovász, and Simonovits [KLS97] give a much more efficient algorithm that runs in O(n⁵) time. Both algorithms use the Markov chain...

91 | Conductance and the rapid mixing property for Markov chains: the approximation of the permanent resolved
- Jerrum, Sinclair
- 1988
Citation Context: ...is sometimes useful to define the mixing time of a Markov chain as the number of steps required in order for the Markov chain to come close enough to its stationary distribution. Jerrum and Sinclair [JS88] define the relative pointwise distance after t steps by ∆(t) = max over i,j ∈ V of |p^(t)_ij − πj| / πj, where p^(t)_ij is the t-step transition probability from i to j, equal to [P^t]_ij. ∆(t) therefore g...
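The [JS88] definition translates directly into a few lines of code: raise P to the t-th power and take the worst relative deviation from π. The lazy 4-cycle below is an invented toy example, not one used in the survey.

```python
# Relative pointwise distance Delta(t) = max_{i,j} |p^(t)_ij - pi_j| / pi_j.
import numpy as np

P = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])   # lazy walk on a 4-cycle (invented)
pi = np.full(4, 0.25)                      # its stationary distribution

def delta(t):
    Pt = np.linalg.matrix_power(P, t)      # [P^t]_ij = p^(t)_ij
    return np.max(np.abs(Pt - pi) / pi)    # pi_j broadcasts across each row

print([float(delta(t)) for t in range(5)])  # -> [3.0, 1.0, 0.5, 0.25, 0.125]
```

The geometric decay visible here is exactly what the eigenvalue and conductance bounds in this survey quantify.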

75 | Random walks and an O*(n⁵) volume algorithm for convex bodies. Random Structures and Algorithms
- Kannan, Lovász, et al.
- 1997
Citation Context: ...maximum acceptable error, comes from the Markov chain Monte Carlo technique. Dyer, Frieze, and Kannan [DFK91] give an approximation algorithm that runs in O(n¹⁹) time. Kannan, Lovász, and Simonovits [KLS97] give a much more efficient algorithm that runs in O(n⁵) time. Both algorithms use the Markov chain Monte Carlo method to sample points likely to be within the body by randomly walking in the bod...

56 | Some inequalities for reversible Markov chains
- Aldous
- 1982
Citation Context: ...combinatorial tools and gives motivation for proving that certain chains are rapidly mixing. Many of the early results in the field of rapidly mixing Markov chains are due to Aldous in the early to mid 80s [Ald82], [Ald83]. At this point we will introduce the notation used in this survey. We will use V to denote the set of states of a Markov chain. We also have the transition matrix P = {p_ij}, i,j ∈ V, of a Markov...

45 | Faster mixing via average conductance
- Lovász, Kannan
- 1999
Citation Context: ...in much greater detail. Combining and manipulating our bounds, we can obtain the following characterisation of ∆(t) in terms of Φ: (1 − 2Φ)^t ≤ ∆(t) ≤ (1 − Φ²/2)^t. From these results, Lovász and Kannan [LK99] derive the bound on mixing time of H ≤ 32 log(1/π0) / Φ², where they specifically attribute the factor of log(1/π0) to the starting configuration and note that, if the initial distribution σ is close...

5 | Shuffling cards and stopping times
- Aldous, Diaconis
- 1986
Citation Context: ...Wilson [PW96] provide several techniques for reducing the number of steps τ for which the Markov chain must run, even when τ is initially unknown. Efficient stopping rules such as those discussed in [AD86] and [LW95] can be applied to reduce the expected number of steps. Approximating the Permanent: The permanent of a matrix is a measure somewhat like the determinant, and only subtly different in defini...

1 | Efficient stopping rules for Markov chains (extended abstract)
- Lovász, Winkler
- 1995
Citation Context: ...the transition probabilities. The task of finding efficient stopping rules for particular graphs is a large area in Markov chain analysis. Some efficient rules are discussed by Lovász and Winkler in [LW95]. The hitting time from two state distributions σ and τ of a Markov chain is the minimum expected stopping time over all stopping rules that, beginning at σ, stop in the exact distribution of τ. In...