Results 1–10 of 59
The Markov Chain Monte Carlo method: an approach to approximate counting and integration
, 1996
Abstract

Cited by 241 (12 self)
In the area of statistical physics, Monte Carlo algorithms based on Markov chain simulation have been in use for many years. The validity of these algorithms depends crucially on the rate of convergence to equilibrium of the Markov chain being simulated. Unfortunately, the classical theory of stochastic processes hardly touches on the sort of nonasymptotic analysis required in this application. As a consequence, it had previously not been possible to make useful, mathematically rigorous statements about the quality of the estimates obtained. Within the last ten years, analytical tools have been devised with the aim of correcting this deficiency. As well as permitting the analysis of Monte Carlo algorithms for classical problems in statistical physics, the introduction of these tools has spurred the development of new approximation algorithms for a wider class of problems in combinatorial enumeration and optimization. The “Markov chain Monte Carlo” method has been applied to a variety of such problems, and often provides the only known efficient (i.e., polynomial time) solution technique.
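As an illustration of the Markov chain simulation the abstract describes (not taken from the paper; the state space, weights, and step count below are arbitrary choices), here is a minimal Metropolis chain whose empirical visit frequencies converge to its stationary distribution:

```python
import random

def metropolis_chain(weights, steps, seed=0):
    """Metropolis walk on states 0..n-1 with stationary distribution
    proportional to `weights`. Proposes a uniformly random neighbour
    (left or right, with wraparound) and accepts with the usual ratio."""
    rng = random.Random(seed)
    n = len(weights)
    state = 0
    counts = [0] * n
    for _ in range(steps):
        proposal = (state + rng.choice((-1, 1))) % n
        if rng.random() < min(1.0, weights[proposal] / weights[state]):
            state = proposal
        counts[state] += 1
    return [c / steps for c in counts]

# Empirical frequencies should approach weights / sum(weights).
weights = [1.0, 2.0, 3.0, 4.0]
est = metropolis_chain(weights, 200_000)
target = [w / sum(weights) for w in weights]
```

How fast `est` approaches `target` is exactly the convergence-rate question the abstract raises: the guarantee depends on the chain's mixing time, not just on the number of steps.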
The Power of Amnesia: Learning Probabilistic Automata with Variable Memory Length
 Machine Learning
, 1996
Abstract

Cited by 187 (17 self)
We propose and analyze a distribution learning algorithm for variable memory length Markov processes. These processes can be described by a subclass of probabilistic finite automata which we name Probabilistic Suffix Automata (PSA). Though hardness results are known for learning distributions generated by general probabilistic automata, we prove that the algorithm we present can efficiently learn distributions generated by PSAs. In particular, we show that for any target PSA, the KL-divergence between the distribution generated by the target and the distribution generated by the hypothesis the learning algorithm outputs, can be made small with high confidence in polynomial time and sample complexity. The learning algorithm is motivated by applications in human-machine interaction. Here we present two applications of the algorithm. In the first one we apply the algorithm in order to construct a model of the English language, and use this model to correct corrupted text. In the second ...
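A crude sketch of the variable-memory idea (a stand-in for the paper's PSA learning algorithm, not a reproduction of it): count next-symbol frequencies for contexts of bounded length and predict from the longest context actually observed in training.

```python
from collections import defaultdict

def train_counts(text, max_order=3):
    """Count next-symbol frequencies for every context (suffix) of
    length 0..max_order preceding each position in `text`."""
    counts = defaultdict(lambda: defaultdict(int))
    for i, ch in enumerate(text):
        for k in range(max_order + 1):
            if i - k < 0:
                break
            counts[text[i - k:i]][ch] += 1
    return counts

def predict(counts, context, max_order=3):
    """Predict the most likely next symbol using the longest suffix of
    `context` that was seen in training (falling back to shorter ones)."""
    for k in range(min(max_order, len(context)), -1, -1):
        ctx = context[len(context) - k:]
        if ctx in counts:
            d = counts[ctx]
            return max(d, key=d.get)
    return None

counts = train_counts("abracadabra")
```

The fallback to shorter suffixes is the "variable memory" aspect: where a long context was never observed, the model behaves like a lower-order Markov chain.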
Improved bounds for mixing rates of Markov chains and multicommodity flow
 Combinatorics, Probability and Computing
, 1992
Abstract

Cited by 176 (8 self)
The paper is concerned with tools for the quantitative analysis of finite Markov chains whose states are combinatorial structures. Chains of this kind have algorithmic applications in many areas, including random sampling, approximate counting, statistical physics and combinatorial optimisation. The efficiency of the resulting algorithms depends crucially on the mixing rate of the chain, i.e., the time taken for it to reach its stationary or equilibrium distribution. The paper presents a new upper bound on the mixing rate, based on the solution to a multicommodity flow problem in the Markov chain viewed as a graph. The bound gives sharper estimates for the mixing rate of several important complex Markov chains. As a result, improved bounds are obtained for the runtimes of randomised approximation algorithms for various problems, including computing the permanent of a 0-1 matrix, counting matchings in graphs, and computing the partition function of a ferromagnetic Ising system. Moreover ...
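The paper's multicommodity-flow bound sharpens the classical spectral bound on mixing. As a small numerical illustration of the quantity being bounded (not the paper's technique), the spectral gap 1 − λ₂ of a lazy random walk on a 6-cycle governs how fast its distribution approaches stationarity:

```python
import numpy as np

# Lazy random walk on a 6-cycle: stay with prob. 1/2, else move to a
# uniformly random neighbour. The chain is reversible and symmetric.
n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

eigvals = np.linalg.eigvalsh(P)       # ascending; eigvals[-1] == 1
gap = 1.0 - eigvals[-2]               # spectral gap 1 - lambda_2

# Total-variation distance to the uniform stationary distribution
# after 200 steps, started from state 0.
pi = np.full(n, 1.0 / n)
dist = np.zeros(n); dist[0] = 1.0
for _ in range(200):
    dist = dist @ P
tv = 0.5 * np.abs(dist - pi).sum()
```

For this chain λ₂ = 3/4, so the gap is 1/4 and the TV distance decays roughly like (3/4)ᵗ; the flow-based bound in the paper gives sharper estimates for chains where the eigenvalues are hard to compute directly.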
Spectral Partitioning Works: Planar graphs and finite element meshes
 In IEEE Symposium on Foundations of Computer Science
, 1996
Abstract

Cited by 153 (8 self)
Spectral partitioning methods use the Fiedler vector, the eigenvector of the second-smallest eigenvalue of the Laplacian matrix, to find a small separator of a graph. These methods are important components of many scientific numerical algorithms and have been demonstrated by experiment to work extremely well. In this paper, we show that spectral partitioning methods work well on bounded-degree planar graphs and finite element meshes, the classes of graphs to which they are usually applied. While naive spectral bisection does not necessarily work, we prove that spectral partitioning techniques can be used to produce separators whose ratio of vertices removed to edges cut is O(√n) for bounded-degree planar graphs and two-dimensional meshes and O(n^{1/d}) for well-shaped d-dimensional meshes. The heart of our analysis is an upper bound on the second-smallest eigenvalues of the Laplacian matrices of these graphs. 1. Introduction. Spectral partitioning has become one of the mos...
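A minimal sketch of the spectral bisection heuristic the abstract refers to (the example graph below is an arbitrary choice, not from the paper): split the vertices by the sign pattern of the Fiedler vector.

```python
import numpy as np

def fiedler_bisection(adj):
    """Split a graph in two by the signs of the Fiedler vector, i.e. the
    eigenvector of the second-smallest eigenvalue of the Laplacian L = D - A."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    vals, vecs = np.linalg.eigh(lap)      # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return fiedler, {i for i, x in enumerate(fiedler) if x < 0}

# Two triangles joined by a single edge: the obvious small separator.
adj = np.zeros((6, 6))
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
for u, v in edges:
    adj[u, v] = adj[v, u] = 1
fiedler, part = fiedler_bisection(adj)
```

On this graph the sign split recovers the two triangles, cutting only the bridge edge (2, 3); the paper's contribution is proving that such eigenvector-based cuts have good separator quality on bounded-degree planar graphs and well-shaped meshes.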
Balanced Matroids
Abstract

Cited by 75 (3 self)
We introduce the notion of "balance", and say that a matroid is balanced if the matroid and all its minors satisfy the property that, for a randomly chosen basis, the presence of an element can only make any other element less likely. We establish strong expansion properties for the bases-exchange graph of balanced matroids; consequently, the set of bases of a balanced matroid can be sampled and approximately counted using rapidly mixing Markov chains. Thus, the general problem of approximately counting bases (known to be #P-complete) is reduced to that of showing balance. Specific classes for which balance is known to hold include graphic and regular matroids.
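For the graphic matroid (bases = spanning trees), the bases-exchange walk mentioned in the abstract can be sketched as follows; this is an illustrative lazy version on K4, not the paper's analysis:

```python
import random
from itertools import combinations

def is_spanning_tree(n, edges):
    """Check that `edges` forms a spanning tree on vertices 0..n-1,
    using a simple union-find cycle test."""
    if len(edges) != n - 1:
        return False
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False        # edge closes a cycle
        parent[ru] = rv
    return True

def bases_exchange_walk(n, ground, steps, seed=0):
    """Random walk on the bases-exchange graph: drop a random basis
    element, add a random ground element, and move only if the result
    is again a basis (here, a spanning tree)."""
    rng = random.Random(seed)
    basis = frozenset((i, i + 1) for i in range(n - 1))  # start: path tree
    for _ in range(steps):
        e = rng.choice(sorted(basis))
        f = rng.choice(sorted(ground))
        candidate = (basis - {e}) | {f}
        if is_spanning_tree(n, candidate):
            basis = frozenset(candidate)
    return basis

ground = frozenset(combinations(range(4), 2))  # K4 has 16 spanning trees
tree = bases_exchange_walk(4, ground, 1000)
```

The paper's result is that for balanced matroids this walk mixes rapidly, so the endpoint is a near-uniform random basis and the bases can be approximately counted.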
Property Testing
 Handbook of Randomized Computing, Vol. II
, 2000
Abstract

Cited by 73 (11 self)
this technical aspect (as in the bounded-degree model the closest graph having the property must have at most dN edges and degree bound d as well).
The power of team exploration: Two robots can learn unlabeled directed graphs
 In Proceedings of the Thirty-Fifth Annual Symposium on Foundations of Computer Science
, 1994
Abstract

Cited by 63 (4 self)
We show that two cooperating robots can learn exactly any strongly-connected directed graph with n indistinguishable nodes in expected time polynomial in n. We introduce a new type of homing sequence for two robots which helps the robots recognize certain previously-seen nodes. We then present an algorithm in which the robots learn the graph and the homing sequence simultaneously by wandering actively through the graph. Unlike most previous learning results using homing sequences, our algorithm does not require a teacher to provide counterexamples. Furthermore, the algorithm can use efficiently any additional information available that distinguishes nodes. We also present an algorithm in which the robots learn by taking random walks. The rate at which a random walk converges to the stationary distribution is characterized by the conductance of the graph. Our random-walk algorithm learns in expected time polynomial in n and in the inverse of the conductance and is more efficient than the homing-sequence algorithm for high-conductance graphs.
TIGHT ANALYSES OF TWO LOCAL LOAD BALANCING ALGORITHMS
 SIAM J. COMPUT.
, 1999
Abstract

Cited by 50 (5 self)
This paper presents an analysis of the following load balancing algorithm. At each step, each node in a network examines the number of tokens at each of its neighbors and sends a token to each neighbor with at least 2d + 1 fewer tokens, where d is the maximum degree of any node in the network. We show that within O(∆/α) steps, the algorithm reduces the maximum difference in tokens between any two nodes to at most O((d² log n)/α), where ∆ is the global imbalance in tokens (i.e., the maximum difference between the number of tokens at any node initially and the average number of tokens), n is the number of nodes in the network, and α is the edge expansion of the network. The time bound is tight in the sense that for any graph with edge expansion α, and for any value ∆, there exists an initial distribution of tokens with imbalance ∆ for which the time to reduce the imbalance to even ∆/2 is at least Ω(∆/α). The bound on the final imbalance is tight in the sense that there exists a class of networks that can be locally balanced everywhere (i.e., the maximum difference in tokens between any two neighbors is at most 2d), while the global imbalance remains Ω((d² log n)/α). Furthermore, we show that upon reaching a state with a global imbalance of O((d² log n)/α), the time for this algorithm to locally balance the network can be as large as Ω(n^{1/2}). We extend our analysis to a variant of this algorithm for dynamic and asynchronous ...
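The local rule from the abstract is easy to simulate directly; a minimal sketch (the path network and initial token placement below are arbitrary choices for illustration):

```python
def balance_step(tokens, adj):
    """One synchronous round: every node sends one token to each
    neighbour holding at least 2d+1 fewer tokens, where d is the
    maximum degree in the network."""
    d = max(len(nbrs) for nbrs in adj)
    sent = [0] * len(tokens)
    recv = [0] * len(tokens)
    for u, nbrs in enumerate(adj):
        for v in nbrs:
            if tokens[u] - tokens[v] >= 2 * d + 1:
                sent[u] += 1
                recv[v] += 1
    return [t - s + r for t, s, r in zip(tokens, sent, recv)]

# Path of 4 nodes, all 60 tokens initially at one end.
adj = [[1], [0, 2], [1, 3], [2]]
tokens = [60, 0, 0, 0]
for _ in range(200):
    nxt = balance_step(tokens, adj)
    if nxt == tokens:
        break                 # locally balanced: no neighbour gap >= 2d+1
    tokens = nxt
```

At the fixed point every neighbour gap is at most 2d (here, 4), matching the "locally balanced" condition in the abstract; the paper's contribution is bounding how long this takes and how large the remaining global imbalance can be.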
Local Divergence of Markov Chains and the Analysis of Iterative Load-Balancing Schemes
 IN PROCEEDINGS OF THE 39TH IEEE SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE (FOCS ’98)
, 1998
Abstract

Cited by 48 (1 self)
We develop a general technique for the quantitative analysis of iterative distributed load balancing schemes. We illustrate the technique by studying two simple, intuitively appealing models that are prevalent in the literature: the diffusive paradigm, and periodic balancing circuits (or the dimension exchange paradigm). It is well known that such load balancing schemes can be roughly modeled by Markov chains, but also that this approximation can be quite inaccurate. Our main contribution is an effective way of characterizing the deviation between the actual loads and the distribution generated by a related Markov chain, in terms of a natural quantity which we call the local divergence. We apply this technique to obtain bounds on the number of rounds required to achieve coarse balancing in general networks, cycles and meshes in these models. For balancing circuits, we also present bounds for the stronger requirement of perfect balancing, or counting.
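The idealised Markov-chain model the abstract refers to treats load as divisible: each round, every edge moves a fixed fraction of the load difference toward the lighter endpoint. A small sketch of that diffusive iteration (the 4-cycle, initial load, and α = 1/4 are illustrative choices; the paper studies the gap between this model and actual integral loads):

```python
import numpy as np

def diffusion_round(load, adj, alpha):
    """One synchronous round of first-order diffusion: node u gains
    alpha * (load[v] - load[u]) from each neighbour v (negative terms
    are load flowing out). Equivalent to multiplying by I - alpha*L."""
    new = load.copy()
    for u in range(len(adj)):
        for v in adj[u]:
            new[u] += alpha * (load[v] - load[u])
    return new

adj = [[1, 3], [0, 2], [1, 3], [2, 0]]   # 4-cycle
load = np.array([40.0, 0.0, 0.0, 0.0])
for _ in range(100):
    load = diffusion_round(load, adj, alpha=0.25)
```

In this divisible model the load converges geometrically to the uniform average; the "local divergence" of the paper quantifies how far the true token-based process can stray from this Markov-chain idealisation.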
Combinatorial Property Testing (a survey)
 In: Randomization Methods in Algorithm Design
, 1998
Abstract

Cited by 42 (0 self)
We consider the question of determining whether a given object has a predetermined property or is "far" from any object having the property. Specifically, objects are modeled by functions, and distance between functions is measured as the fraction of the domain on which the functions differ. We consider (randomized) algorithms which may query the function at arguments of their choice, and seek algorithms which query the function at relatively few places. We focus on combinatorial properties, and specifically on graph properties. The two standard representations of graphs, by adjacency matrices and by incidence lists, yield two different models for testing graph properties. In the first model, most appropriate for dense graphs, distance between N-vertex graphs is measured as the fraction of edges on which the graphs disagree over N². In the second model, most appropriate for bounded-degree graphs, distance between N-vertex d-degree graphs is measured as the fraction of edges on ...
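The dense-model distance from the abstract is simple to state in code; a minimal sketch (the example graphs are arbitrary, and real property testers estimate this distance by sampling rather than reading every entry):

```python
def dense_distance(adj1, adj2):
    """Distance between two N-vertex graphs in the adjacency-matrix
    (dense) model: the fraction of the N^2 matrix entries on which
    the two graphs disagree."""
    n = len(adj1)
    diff = sum(
        adj1[i][j] != adj2[i][j]
        for i in range(n) for j in range(n)
    )
    return diff / (n * n)

empty = [[0] * 4 for _ in range(4)]
full = [[1 if i != j else 0 for j in range(4)] for i in range(4)]
```

Here `dense_distance(empty, full)` is 12/16 = 0.75: the empty and complete graphs on 4 vertices disagree on all 12 off-diagonal entries. A tester must accept graphs with the property and reject graphs whose distance from every such graph exceeds a threshold ε.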