Results 1–10 of 157
The Markov Chain Monte Carlo method: an approach to approximate counting and integration
, 1996
Abstract

Cited by 286 (12 self)
In the area of statistical physics, Monte Carlo algorithms based on Markov chain simulation have been in use for many years. The validity of these algorithms depends crucially on the rate of convergence to equilibrium of the Markov chain being simulated. Unfortunately, the classical theory of stochastic processes hardly touches on the sort of nonasymptotic analysis required in this application. As a consequence, it had previously not been possible to make useful, mathematically rigorous statements about the quality of the estimates obtained. Within the last ten years, analytical tools have been devised with the aim of correcting this deficiency. As well as permitting the analysis of Monte Carlo algorithms for classical problems in statistical physics, the introduction of these tools has spurred the development of new approximation algorithms for a wider class of problems in combinatorial enumeration and optimization. The “Markov chain Monte Carlo” method has been applied to a variety of such problems, and often provides the only known efficient (i.e., polynomial time) solution technique.
Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems (Extended Abstract)
 STOC'04
, 2004
Abstract

Cited by 223 (11 self)
We present algorithms for solving symmetric, diagonally-dominant linear systems to accuracy ɛ in time linear in their number of nonzeros and log(κf(A)/ɛ), where κf(A) is the condition number of the matrix defining the linear system. Our algorithm applies the preconditioned Chebyshev iteration with preconditioners designed using nearly-linear time algorithms for graph sparsification and graph partitioning.
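For illustration, the Chebyshev iteration itself can be sketched in pure Python (a minimal unpreconditioned version only; the paper's sparsification-based preconditioners, which are its actual contribution, are not implemented, and the eigenvalue bounds `lmin`, `lmax` and the toy 2×2 Laplacian system are assumptions made up for the example):

```python
def matvec(A, v):
    """Dense matrix-vector product for a matrix stored as a list of rows."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def chebyshev_solve(A, b, lmin, lmax, iters=50):
    """Chebyshev iteration for an SPD system A x = b, given bounds
    [lmin, lmax] on the eigenvalues of A."""
    d = (lmax + lmin) / 2.0   # center of the eigenvalue interval
    c = (lmax - lmin) / 2.0   # half-width of the eigenvalue interval
    n = len(b)
    x = [0.0] * n
    r = list(b)               # residual b - A x for the start x = 0
    p, alpha = None, None
    for i in range(iters):
        if i == 0:
            p, alpha = list(r), 1.0 / d
        else:
            beta = 0.5 * (c * alpha) ** 2 if i == 1 else (c * alpha / 2.0) ** 2
            alpha = 1.0 / (d - beta / alpha)
            p = [ri + beta * pi for ri, pi in zip(r, p)]
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        Ap = matvec(A, p)
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
    return x

# Toy SDD system: 2x2 path-graph Laplacian plus identity,
# with eigenvalues exactly 1 and 3; exact solution is (2/3, 1/3).
A = [[2.0, -1.0], [-1.0, 2.0]]
b = [1.0, 0.0]
x = chebyshev_solve(A, b, lmin=1.0, lmax=3.0)
```

The error contracts by roughly (√κ − 1)/(√κ + 1) per iteration, which is the source of the log(κf(A)/ɛ) factor in the running time.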
The Brunn-Minkowski inequality
 Bull. Amer. Math. Soc. (N.S.)
, 2002
Abstract

Cited by 184 (9 self)
Abstract. In 1978, Osserman [124] wrote an extensive survey on the isoperimetric inequality. The Brunn-Minkowski inequality can be proved in a page, yet quickly yields the classical isoperimetric inequality for important classes of subsets of R^n, and deserves to be better known. This guide explains the relationship between the Brunn-Minkowski inequality and other inequalities in geometry and analysis, and some applications.
Isoperimetric Problems for Convex Bodies and a Localization Lemma
, 1995
Abstract

Cited by 129 (7 self)
We study the smallest number ψ(K) such that a given convex body K in R^n can be cut into two parts K1 and K2 by a surface with an (n − 1)-dimensional measure ψ(K) · vol(K1) · vol(K2)/vol(K). Let M1(K) be the average distance of a point of K from its center of gravity. We prove for the "isoperimetric coefficient" that ψ(K) ≥ ln 2 / M1(K), and give other upper and lower bounds. We conjecture that our upper bound is best possible up to a constant. Our main tool is a general "Localization Lemma" that reduces integral inequalities over the n-dimensional space to integral inequalities in a single variable. This lemma was first proved by two of the authors in an earlier paper, but here we give various extensions and variants that make its application smoother. We illustrate the usefulness of the lemma by showing how a number of well-known results can be proved using it.
Some Applications of Laplace Eigenvalues of Graphs
 GRAPH SYMMETRY: ALGEBRAIC METHODS AND APPLICATIONS, VOLUME 497 OF NATO ASI SERIES C
, 1997
Abstract

Cited by 129 (0 self)
In the last decade, important relations between Laplace eigenvalues and eigenvectors of graphs and several other graph parameters were discovered. In these notes we present some of these results and discuss their consequences. Attention is given to the partitioning and isoperimetric properties of graphs, the max-cut problem and its relation to semidefinite programming, rapid mixing of Markov chains, and to extensions of the results to infinite graphs.
A Chernoff bound for random walks on expander graphs
 In IEEE Symposium on Foundations of Computer Science
, 1993
What do we know about the Metropolis algorithm?
 J. Comput. System Sci.
, 1998
Abstract

Cited by 89 (13 self)
The Metropolis algorithm is a widely used procedure for sampling from a specified distribution on a large finite set. We survey what is rigorously known about running times. This includes work from statistical physics, computer science, probability and statistics. Some new results are given as an illustration of the geometric theory of Markov chains. 1. Introduction. Let X be a finite set and π(x) > 0 a probability distribution on X. The Metropolis algorithm is a procedure for drawing samples from X. It was introduced by Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller [1953]. The algorithm requires the user to specify a connected, aperiodic Markov chain K(x, y) on X. This chain need not be symmetric, but if K(x, y) > 0, one needs K(y, x) > 0. The chain K is modified
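The accept/reject rule at the heart of the algorithm can be sketched in a few lines of Python (a toy illustration only: the target weights and the nearest-neighbour proposal chain below are made-up choices, and the names `metropolis_step` and `weights` are not from the paper):

```python
import math
import random

random.seed(1)

def metropolis_step(x, w, propose):
    """One Metropolis step: propose y from a symmetric chain K(x, .),
    accept with probability min(1, w[y] / w[x]), otherwise stay at x."""
    y = propose(x)
    if random.random() < min(1.0, w[y] / w[x]):
        return y
    return x

# Toy target on X = {0, ..., 9}: unnormalized Gaussian-like weights around 5.
weights = {i: math.exp(-0.5 * (i - 5) ** 2) for i in range(10)}

# Symmetric nearest-neighbour proposal on the 10-cycle.
def propose(x):
    return (x + random.choice([-1, 1])) % 10

x = 0
counts = {i: 0 for i in range(10)}
for _ in range(20000):
    x = metropolis_step(x, weights, propose)
    counts[x] += 1
# The empirical frequencies concentrate near the mode at 5.
```

Note that only ratios w[y]/w[x] are ever evaluated, so the normalizing constant of the target distribution is never needed; the running-time question the survey addresses is how many such steps are required before the samples are close to the target.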
Solving convex programs by random walks
 Journal of the ACM
, 2002
Abstract

Cited by 74 (12 self)
Minimizing a convex function over a convex set in n-dimensional space is a basic, general problem with many interesting special cases. Here, we present a simple new algorithm for convex optimization based on sampling by a random walk. It extends naturally to minimizing quasiconvex functions and to other generalizations.
Hit-and-Run from a Corner
 STOC'04
, 2004
Abstract

Cited by 67 (8 self)
We show that the hit-and-run random walk mixes rapidly starting from any interior point of a convex body. This is the first random walk known to have this property. In contrast, the ball walk can take exponentially many steps from some starting points.
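A single hit-and-run step is easy to state: pick a uniformly random direction through the current point, then move to a uniformly random point on the chord that line cuts out of the body. A minimal Python sketch, assuming only a membership oracle and a known bounding radius (the oracle interface and the bisection tolerance are illustrative choices, not from the paper):

```python
import math
import random

random.seed(1)

def hit_and_run_step(x, inside, radius, tol=1e-9):
    """One hit-and-run step in a convex body given by a membership
    oracle `inside`; `radius` bounds the body around x."""
    n = len(x)
    # Uniformly random direction: a normalized Gaussian vector.
    d = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(c * c for c in d))
    d = [c / norm for c in d]

    # Locate each endpoint of the chord through x along d by bisection.
    def chord_end(sign):
        lo, hi = 0.0, 2.0 * radius
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if inside([xi + sign * mid * di for xi, di in zip(x, d)]):
                lo = mid
            else:
                hi = mid
        return lo

    # Uniform point on the chord.
    t = random.uniform(-chord_end(-1.0), chord_end(+1.0))
    return [xi + t * di for xi, di in zip(x, d)]

# Usage: the unit ball in R^3 via its membership oracle, started
# close to the boundary -- the regime the abstract is about.
def inside_ball(p):
    return sum(c * c for c in p) <= 1.0

x = [0.9, 0.0, 0.0]
for _ in range(100):
    x = hit_and_run_step(x, inside_ball, radius=1.0)
```

Unlike the ball walk, which from a point near a corner rejects almost every proposed move, each hit-and-run step always lands somewhere on a full chord of the body, which is the intuition behind its rapid mixing from any interior start.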