Results 1 – 10 of 62
The Markov Chain Monte Carlo method: an approach to approximate counting and integration
, 1996
Abstract

Cited by 241 (12 self)
In the area of statistical physics, Monte Carlo algorithms based on Markov chain simulation have been in use for many years. The validity of these algorithms depends crucially on the rate of convergence to equilibrium of the Markov chain being simulated. Unfortunately, the classical theory of stochastic processes hardly touches on the sort of nonasymptotic analysis required in this application. As a consequence, it had previously not been possible to make useful, mathematically rigorous statements about the quality of the estimates obtained. Within the last ten years, analytical tools have been devised with the aim of correcting this deficiency. As well as permitting the analysis of Monte Carlo algorithms for classical problems in statistical physics, the introduction of these tools has spurred the development of new approximation algorithms for a wider class of problems in combinatorial enumeration and optimization. The “Markov chain Monte Carlo” method has been applied to a variety of such problems, and often provides the only known efficient (i.e., polynomial time) solution technique.
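The combinatorial-enumeration setting the abstract mentions can be illustrated with a minimal toy sketch (not from the paper): a symmetric single-site chain whose stationary distribution is uniform over the independent sets of a small graph. The graph, the toggle rule, and the function name are all illustrative assumptions.

```python
import random

# Toy sketch (illustrative, not the paper's method): a symmetric Markov
# chain on the independent sets of a small graph. Pick a vertex uniformly;
# remove it if present, add it if no neighbour is present. Transitions are
# symmetric, so the stationary distribution is uniform over independent sets.

def mcmc_independent_set(adj, steps, seed=0):
    rng = random.Random(seed)
    n = len(adj)
    state = [0] * n            # start from the empty independent set
    counts = {}
    for _ in range(steps):
        v = rng.randrange(n)
        if state[v]:
            state[v] = 0       # removing a vertex always keeps independence
        elif all(not state[u] for u in adj[v]):
            state[v] = 1       # add v only if no neighbour is in the set
        counts[tuple(state)] = counts.get(tuple(state), 0) + 1
    return counts

# Path graph 0-1-2: its independent sets are {}, {0}, {1}, {2}, {0,2}.
adj = {0: [1], 1: [0, 2], 2: [1]}
counts = mcmc_independent_set(adj, 100_000)
print(len(counts))  # all 5 independent sets are visited
```

With enough steps each of the five independent sets is visited roughly a fifth of the time, which is the empirical signature of convergence to the uniform stationary distribution.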
Isoperimetric Problems for Convex Bodies and a Localization Lemma
, 1995
Abstract

Cited by 79 (8 self)
We study the smallest number ψ(K) such that a given convex body K in ℝⁿ can be cut into two parts K₁ and K₂ by a surface with (n − 1)-dimensional measure ψ(K) · vol(K₁) · vol(K₂)/vol(K). Let M₁(K) be the average distance of a point of K from its center of gravity. We prove for the "isoperimetric coefficient" that ψ(K) ≥ ln 2 / M₁(K), and give other upper and lower bounds. We conjecture that our upper bound is best possible up to a constant. Our main tool is a general "Localization Lemma" that reduces integral inequalities over the n-dimensional space to integral inequalities in a single variable. This lemma was first proved by two of the authors in an earlier paper, but here we give various extensions and variants that make its application smoother. We illustrate the usefulness of the lemma by showing how a number of well-known results can be proved using it.
Mixing times of lozenge tiling and card shuffling Markov chains
, 1997
Abstract

Cited by 71 (1 self)
Abstract. We show how to combine Fourier analysis with coupling arguments to bound the mixing times of a variety of Markov chains. The mixing time is the number of steps a Markov chain takes to approach its equilibrium distribution. One application is to a class of Markov chains introduced by Luby, Randall, and Sinclair to generate random tilings of regions by lozenges. For an ℓ×ℓ region we bound the mixing time by O(ℓ⁴ log ℓ), which improves on the previous bound of O(ℓ⁷), and we show the new bound to be essentially tight. In another application we resolve a few questions raised by Diaconis and Saloff-Coste by lower bounding the mixing time of various card-shuffling Markov chains. Our lower bounds are within a constant factor of their upper bounds. When we use our methods to modify a path-coupling analysis of Bubley and Dyer, we obtain an O(n³ log n) upper bound on the mixing time of the Karzanov–Khachiyan Markov chain for linear extensions.
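The mixing-time notion used throughout the abstract can be made concrete with a much simpler chain than the lozenge-tiling one: a lazy random walk on a small cycle, whose total-variation distance to uniform can be computed exactly by iterating the distribution vector. The cycle size and step counts below are arbitrary illustrative choices.

```python
# Toy illustration (not the tiling or shuffling chains of the paper):
# exact total-variation distance to uniform for a lazy random walk on an
# n-cycle, computed by pushing the distribution vector forward.

def tv_to_uniform(n, steps):
    dist = [0.0] * n
    dist[0] = 1.0                      # start concentrated at one state
    for _ in range(steps):
        new = [0.0] * n
        for i, p in enumerate(dist):
            new[i] += p / 2            # lazy: stay put with probability 1/2
            new[(i - 1) % n] += p / 4  # step left with probability 1/4
            new[(i + 1) % n] += p / 4  # step right with probability 1/4
        dist = new
    return 0.5 * sum(abs(p - 1.0 / n) for p in dist)

# The mixing time is the first step count at which this distance drops
# below a fixed threshold such as 1/4.
d10, d200 = tv_to_uniform(8, 10), tv_to_uniform(8, 200)
print(d10, d200)
```

After 10 steps the walk on an 8-cycle is still visibly non-uniform, while after 200 steps the distance is negligible, matching the geometric decay governed by the spectral gap.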
Markov Chains and Polynomial Time Algorithms
, 1994
Abstract

Cited by 45 (0 self)
This paper outlines the use of rapidly mixing Markov Chains in randomized polynomial time algorithms to solve approximately certain counting problems. They fall into two classes: combinatorial problems like counting the number of perfect matchings in certain graphs and geometric ones like computing the volumes of convex sets.
Faster Mixing via Average Conductance
Abstract

Cited by 44 (3 self)
The notion of conductance introduced by Jerrum and Sinclair [JS] has been widely used to prove rapid mixing of Markov chains. Here we introduce a variant of this notion: instead of measuring the conductance of the worst subset of states, we show that it is enough to bound a certain weighted average conductance (where the average is taken over subsets of states with different sizes). In the case of convex bodies, we show that this average conductance is better than the known bounds for the worst case; this helps us save a factor of O(n) which is incurred in all proofs as a "penalty" for a "bad start" (i.e., because the starting distribution may be arbitrary).
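For a small chain, the classical worst-case conductance that this paper refines can simply be brute-forced: minimize the escape probability Q(S, S̄)/π(S) over all subsets S of stationary measure at most 1/2. The sketch below does this for a lazy walk on a 4-cycle; it illustrates the baseline definition only, not the paper's weighted-average variant.

```python
from itertools import combinations

# Illustrative sketch of worst-case (Jerrum-Sinclair) conductance for a
# small reversible chain with transition matrix P and stationary
# distribution pi, by enumerating all cut sets S with pi(S) <= 1/2.

def conductance(P, pi):
    n = len(P)
    best = float("inf")
    for k in range(1, n):
        for S in combinations(range(n), k):
            pi_S = sum(pi[i] for i in S)
            if pi_S > 0.5:
                continue                   # only sets of measure <= 1/2
            flow = sum(pi[i] * P[i][j]
                       for i in S for j in range(n) if j not in S)
            best = min(best, flow / pi_S)  # Q(S, S-bar) / pi(S)
    return best

# Lazy walk on a 4-cycle: stay with prob 1/2, move to each neighbour
# with prob 1/4; the stationary distribution is uniform.
P = [[0.5 if i == j else 0.25 if abs(i - j) % 4 in (1, 3) else 0.0
      for j in range(4)] for i in range(4)]
pi = [0.25] * 4
phi = conductance(P, pi)
print(phi)  # worst cut is an adjacent pair: 0.125 / 0.5 = 0.25
```

The worst cut here is a pair of adjacent states, and the brute-force search makes visible what the average-conductance idea exploits: most subsets conduct much better than the single worst one.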
Hit-and-Run from a Corner
Abstract

Cited by 41 (6 self)
We show that the hit-and-run random walk mixes rapidly starting from any interior point of a convex body. This is the first random walk known to have this property. In contrast, the ball walk can take exponentially many steps from some starting points.
Hit-and-Run Mixes Fast
Math. Prog.
, 1998
Abstract

Cited by 40 (7 self)
It is shown that the "hit-and-run" algorithm for sampling from a convex body K (introduced by R.L. Smith) mixes in time O*(n²R²/r²), where R and r are the radii of the circumscribed and inscribed balls of K. Thus after appropriate preprocessing, hit-and-run produces an approximately uniformly distributed sample point in time O*(n³), which matches the best known bound for other sampling algorithms. We show that the bound is best possible in terms of R, r and n.
1 Introduction
There are many computational tasks that require sampling from a convex body K in a high-dimensional space ℝⁿ (i.e., generating an approximately uniformly distributed random point in K). The generic method to do so is to define an ergodic random walk on the points of K whose stationary distribution is uniform, and follow this random walk for an appropriately large number of steps; the point obtained this way will be approximately stationary, i.e., approximately uniform. The crucial issue is t...
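The generic random-walk recipe described above can be sketched for the simplest convex body, the unit ball in ℝⁿ, where the chord through the current point in a random direction has a closed form; for a general body one would instead locate the chord with a membership-oracle line search. Everything below (function name, dimensions, step count) is an illustrative assumption, not the paper's implementation.

```python
import math
import random

# Hedged sketch of the hit-and-run walk on the unit ball in R^n.
# Each step: pick a uniform random direction d, intersect the line
# x + t*d with the ball boundary |x + t*d| = 1, and move to a point
# chosen uniformly on that chord.

def hit_and_run_ball(n, steps, seed=0):
    rng = random.Random(seed)
    x = [0.0] * n                              # start at the centre
    for _ in range(steps):
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(c * c for c in d))
        d = [c / norm for c in d]              # uniform random direction
        b = sum(xi * di for xi, di in zip(x, d))
        c = sum(xi * xi for xi in x) - 1.0     # c < 0 while x is interior
        disc = math.sqrt(b * b - c)            # solve |x + t d|^2 = 1
        t = rng.uniform(-b - disc, -b + disc)  # uniform point on the chord
        x = [xi + t * di for xi, di in zip(x, d)]
    return x

x = hit_and_run_ball(5, 1000)
print(sum(c * c for c in x))  # squared norm stays (numerically) inside the ball
```

Because each step moves to a chord point strictly between the two boundary intersections, the walk never leaves the body, and the chord construction is exactly what makes hit-and-run insensitive to starting near the boundary, in contrast with the small fixed steps of the ball walk.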
Computational complexity of stochastic programming problems
, 2005
Abstract

Cited by 38 (1 self)
Stochastic programming is the subfield of mathematical programming that considers optimization in the presence of uncertainty. During the last four decades a vast quantity of literature on the subject has appeared. Developments in the theory of computational complexity allow us to establish the theoretical complexity of a variety of stochastic programming problems studied in this literature. Under the assumption that the stochastic parameters are independently distributed, we show that two-stage stochastic programming problems are #P-hard. Under the same assumption we show that certain multistage stochastic programming problems are PSPACE-hard. The problems we consider are nonstandard in that distributions of stochastic parameters in later stages depend on decisions made in earlier stages.