Results 1–10 of 132
The Power of Amnesia: Learning Probabilistic Automata with Variable Memory Length
Machine Learning, 1996
Cited by 226 (18 self)
We propose and analyze a distribution learning algorithm for variable memory length Markov processes. These processes can be described by a subclass of probabilistic finite automata which we name Probabilistic Suffix Automata (PSA). Though hardness results are known for learning distributions generated by general probabilistic automata, we prove that the algorithm we present can efficiently learn distributions generated by PSAs. In particular, we show that for any target PSA, the KL-divergence between the distribution generated by the target and the distribution generated by the hypothesis the learning algorithm outputs can be made small with high confidence in polynomial time and sample complexity. The learning algorithm is motivated by applications in human-machine interaction. Here we present two applications of the algorithm. In the first one we apply the algorithm in order to construct a model of the English language, and use this model to correct corrupted text. In the second ...
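The variable-memory idea above can be illustrated with a toy sketch: count next-symbol frequencies for every context (suffix) up to a maximum length, and predict using the longest context seen in training. This is not the paper's Learn-PSA algorithm, which additionally prunes contexts whose predictions do not differ significantly from their parent's; the function names and the `max_order=3` cutoff here are illustrative choices.

```python
from collections import defaultdict

def train_suffix_counts(text, max_order=3):
    """Count next-symbol frequencies for every context of length <= max_order."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(text)):
        for k in range(max_order + 1):
            if i - k < 0:
                break
            context = text[i - k:i]          # the k symbols preceding position i
            counts[context][text[i]] += 1
    return counts

def predict(counts, history, max_order=3):
    """Predict the next symbol using the longest trained context (variable memory)."""
    for k in range(min(max_order, len(history)), -1, -1):
        context = history[len(history) - k:]
        if context in counts:
            nxt = counts[context]
            return max(nxt, key=nxt.get)
    return None

counts = train_suffix_counts("abracadabra")
print(predict(counts, "abr"))  # in the training string, "abr" is always followed by "a"
```

The point of the variable memory length is visible here: short contexts like "" or "a" are ambiguous, while longer ones such as "abr" pin down the next symbol.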
Spectral Partitioning Works: Planar graphs and finite element meshes
In IEEE Symposium on Foundations of Computer Science, 1996
Cited by 199 (10 self)
Spectral partitioning methods use the Fiedler vector (the eigenvector of the second-smallest eigenvalue of the Laplacian matrix) to find a small separator of a graph. These methods are important components of many scientific numerical algorithms and have been demonstrated by experiment to work extremely well. In this paper, we show that spectral partitioning methods work well on bounded-degree planar graphs and finite element meshes, the classes of graphs to which they are usually applied. While naive spectral bisection does not necessarily work, we prove that spectral partitioning techniques can be used to produce separators whose ratio of vertices removed to edges cut is O(√n) for bounded-degree planar graphs and two-dimensional meshes, and O(n^{1/d}) for well-shaped d-dimensional meshes. The heart of our analysis is an upper bound on the second-smallest eigenvalues of the Laplacian matrices of these graphs.
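A minimal sketch of the method the abstract describes, assuming a small symmetric adjacency matrix: form the Laplacian L = D - A, take the eigenvector of its second-smallest eigenvalue, and cut at the median. This is naive spectral bisection; the paper's guarantees concern the best threshold cut along the Fiedler vector, not just the median split, and the function name is an illustrative choice.

```python
import numpy as np

def fiedler_partition(adj):
    """Split a graph at the median of its Fiedler vector (naive spectral bisection)."""
    lap = np.diag(adj.sum(axis=1)) - adj      # combinatorial Laplacian L = D - A
    _, eigvecs = np.linalg.eigh(lap)          # eigh sorts eigenvalues ascending
    fiedler = eigvecs[:, 1]                   # eigenvector of the second-smallest one
    return fiedler <= np.median(fiedler)      # boolean mask for one side of the cut

# Path graph on 6 vertices: the Fiedler vector is monotone along the path,
# so the median split separates one contiguous half from the other.
n = 6
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
part = fiedler_partition(adj)
print(part)
```

On the path graph the cut removes a single edge, which is the behaviour the separator bounds in the paper quantify for planar graphs and meshes.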
Logarithmic Sobolev inequalities for finite Markov chains
1996
Cited by 179 (15 self)
This is an expository paper on the use of logarithmic Sobolev inequalities for bounding rates of convergence of Markov chains on finite state spaces to their stationary distributions. Logarithmic Sobolev inequalities complement eigenvalue techniques and work for non-reversible chains in continuous time. Some aspects of the theory simplify considerably with finite state spaces, and we are able to give a self-contained development. Examples of applications include the study of a Metropolis chain for the binomial distribution, sharp results for natural chains on the box of side n in d dimensions, and improved rates for exclusion processes. We also show that for most r-regular graphs the log-Sobolev constant is of smaller order than the spectral gap. The log-Sobolev constant of the asymmetric two-point space is computed exactly, as well as the log-Sobolev constant of the complete graph on n points.
A Chernoff bound for random walks on expander graphs
In IEEE Symposium on Foundations of Computer Science, 1993
Laplacians and the Cheeger Inequality for Directed Graphs
Annals of Combinatorics, 2005
Cited by 103 (4 self)
We consider Laplacians for directed graphs and examine their eigenvalues. We introduce a notion of a circulation in a directed graph and its connection with the Rayleigh quotient. We then define a Cheeger constant and establish the Cheeger inequality for directed graphs. These relations can be used to deal with various problems that often arise in the study of non-reversible Markov chains, including bounding the rate of convergence and deriving comparison theorems.
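The construction in this paper can be sketched numerically: find the stationary distribution φ of the walk, then symmetrize the transition matrix with Φ = diag(φ) so the resulting Laplacian has real eigenvalues even for a non-reversible walk. The sketch assumes a unique stationary distribution and that the power iteration converges (for the 3-cycle below the uniform start is already stationary); the function name is illustrative.

```python
import numpy as np

def directed_laplacian(P, iters=500):
    """Sketch of Chung's symmetrized Laplacian for a directed graph.

    With Phi = diag(phi) for the stationary distribution phi of P,
        L = I - (Phi^{1/2} P Phi^{-1/2} + Phi^{-1/2} P^T Phi^{1/2}) / 2
    is symmetric, hence has real eigenvalues, even when P is not reversible.
    """
    n = P.shape[0]
    phi = np.full(n, 1.0 / n)
    for _ in range(iters):                    # power iteration: phi P -> phi
        phi = phi @ P
        phi /= phi.sum()
    s = np.sqrt(phi)
    M = s[:, None] * P / s[None, :]           # Phi^{1/2} P Phi^{-1/2}
    return np.eye(n) - (M + M.T) / 2

# Directed 3-cycle: a non-reversible walk. Here phi is uniform, so
# L = I - (P + P^T)/2, whose eigenvalues are 0, 3/2, 3/2.
P = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
L = directed_laplacian(P)
print(np.round(np.linalg.eigvalsh(L), 6))
```

The second-smallest eigenvalue of this L is what the paper's Cheeger inequality relates to the directed Cheeger constant.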
What do we know about the Metropolis algorithm?
J. Comput. System Sci., 1998
Cited by 89 (13 self)
The Metropolis algorithm is a widely used procedure for sampling from a specified distribution on a large finite set. We survey what is rigorously known about running times. This includes work from statistical physics, computer science, probability and statistics. Some new results are given as an illustration of the geometric theory of Markov chains. 1. Introduction. Let X be a finite set and π(x) > 0 a probability distribution on X. The Metropolis algorithm is a procedure for drawing samples from X. It was introduced by Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller [1953]. The algorithm requires the user to specify a connected, aperiodic Markov chain K(x, y) on X. This chain need not be symmetric, but if K(x, y) > 0, one needs K(y, x) > 0. The chain K is modified
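The procedure the introduction describes can be sketched in a few lines, assuming the special case of a symmetric proposal chain (K(x, y) = K(y, x)), which gives the classic acceptance ratio min(1, π(y)/π(x)); the survey's general chain K only needs K(y, x) > 0 whenever K(x, y) > 0, leading to the Metropolis-Hastings ratio instead. The target distribution and proposal below are hypothetical examples.

```python
import random

def metropolis_step(x, pi, propose):
    """One Metropolis step: propose y, accept with probability min(1, pi(y)/pi(x))."""
    y = propose(x)
    if random.random() < min(1.0, pi(y) / pi(x)):
        return y
    return x  # rejected: the chain stays where it is

# Hypothetical target: pi(x) proportional to x + 1 on {0, ..., 9}, with a
# symmetric nearest-neighbour proposal; out-of-range proposals have pi = 0
# and are therefore always rejected.
def pi(x):
    return float(x + 1) if 0 <= x <= 9 else 0.0

random.seed(0)
x, counts = 0, [0] * 10
for _ in range(200000):
    x = metropolis_step(x, pi, lambda z: z + random.choice((-1, 1)))
    counts[x] += 1
print(counts)  # empirical frequencies, roughly proportional to 1, 2, ..., 10
```

Note that pi only needs to be known up to a normalizing constant, since only the ratio pi(y)/pi(x) enters the acceptance test; the running-time question the survey addresses is how many such steps are needed before the samples are close to π.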