Results 1 - 2 of 2
Probabilistic Inference Using Markov Chain Monte Carlo Methods
1993
"... Probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence. Computational difficulties arise, however, because probabilistic models with the necessary realism and flexibility lead to complex distributions over highdimensional spaces. R ..."
Abstract

Cited by 576 (21 self)
Probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence. Computational difficulties arise, however, because probabilistic models with the necessary realism and flexibility lead to complex distributions over high-dimensional spaces. Related problems in other fields have been tackled using Monte Carlo methods based on sampling using Markov chains, providing a rich array of techniques that can be applied to problems in artificial intelligence. The "Metropolis algorithm" has been used to solve difficult problems in statistical physics for over forty years, and, in the last few years, the related method of "Gibbs sampling" has been applied to problems of statistical inference. Concurrently, an alternative method for solving problems in statistical physics by means of dynamical simulation has been developed as well, and has recently been unified with the Metropolis algorithm to produce the "hybrid Monte Carlo" method. In computer science, Markov chain sampling is the basis of the heuristic optimization technique of "simulated annealing", and has recently been used in randomized algorithms for approximate counting of large sets. In this review, I outline the role of probabilistic inference in artificial intelligence, present the theory of Markov chains, and describe various Markov chain Monte Carlo algorithms, along with a number of supporting techniques. I try to present a comprehensive picture of the range of methods that have been developed, including techniques from the varied literature that have not yet seen wide application in artificial intelligence, but which appear relevant. As illustrative examples, I use the problems of probabilistic inference in expert systems, discovery of latent classes from data, and Bayesian learning for neural networks.
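The "Metropolis algorithm" named in the abstract can be sketched in a few lines. The following is a generic random-walk Metropolis sampler applied to a simple one-dimensional target, not code from the review itself; the function name and parameters are illustrative choices, not Neal's.

```python
import math
import random

def metropolis(logp, x0, steps, scale=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + Normal(0, scale) and
    accept with probability min(1, p(x')/p(x)), working in log space.
    Returns the list of visited states (the Markov chain)."""
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    chain = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        lp_prop = logp(prop)
        # Accept if log(u) < log p(x') - log p(x), u ~ Uniform(0, 1).
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)  # on rejection the current state is repeated
    return chain

# Target: an unnormalized standard normal, log p(x) = -x^2 / 2.
# The normalizing constant is never needed -- only ratios of p.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, steps=20000)
burn = chain[5000:]  # discard early samples as burn-in
mean = sum(burn) / len(burn)
```

After burn-in, the empirical mean and variance of the chain should approach those of the standard normal target (0 and 1), illustrating how the chain's stationary distribution matches the target without the density ever being normalized.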
The world in a spin
 in American Scientist
"... Pierre Simon de Laplace had a plan for understanding everything. To this celestial mechanic, it looked simple: The particles of matter produce forces, and those forces in turn move the particles. So if we could just measure all the forces and motions at any one instant, we could calculate the entire ..."
Abstract

Cited by 1 (0 self)
Pierre Simon de Laplace had a plan for understanding everything. To this celestial mechanic, it looked simple: The particles of matter produce forces, and those forces in turn move the particles. So if we could just measure all the forces and motions at any one instant, we could calculate the entire history of the universe—past, present and future. Two centuries of progress in the sciences have not fulfilled Laplace’s vision; on the contrary, quantum mechanics, and lately chaos theory, have undermined faith in his program. But let’s pretend. If we study a computational model of the universe rather than the real thing, we really can track all the forces and motions. The laws of physics can be kept as simple as we please, since we invent and enforce them. In this toy universe, we can banish all quantum uncertainties, and trace every last detail of every microscopic event. Yet even in such an open and transparent world, total knowledge is still elusive. Although we can follow the individual particles, we have trouble seeing how they act in the aggregate. For example, we may well fail to predict basic thermodynamic phenomena such as boiling and freezing. We could know the whereabouts of every molecule of water in an artificial ocean, but not know whether the stuff is solid or liquid or vapor. The prototypical system for exploring issues of this kind is called the Ising model. It is a model of matter pared down to its barest essentials—just about the simplest imaginable system in which large numbers of particles might be expected to produce some kind of cooperative behavior. If Laplace’s plan can be made to work anywhere, it should succeed here. But the Ising model has proved a difficult challenge, even when attacked with some heavy-duty mathematics and computer science. Indeed, the most important version of the model remains without an exact solution.
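The Ising model the abstract describes is small enough to simulate directly. The sketch below is not from the article; it is a standard single-spin-flip Metropolis simulation of the two-dimensional Ising model (function name and parameters are mine), showing the kind of aggregate behavior—spontaneous magnetization—that is hard to read off from the microscopic rules alone.

```python
import math
import random

def ising_metropolis(n=16, beta=1.0, sweeps=200, ordered_start=False, seed=1):
    """Single-spin-flip Metropolis on an n x n Ising lattice with
    periodic boundaries. beta is the inverse temperature. Returns the
    mean absolute magnetization per spin over the second half of the run."""
    rng = random.Random(seed)
    if ordered_start:
        # Start fully aligned; avoids metastable stripe domains,
        # a known artifact of single-flip dynamics at low temperature.
        spins = [[1] * n for _ in range(n)]
    else:
        spins = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    mags = []
    for sweep in range(sweeps):
        for _ in range(n * n):
            i, j = rng.randrange(n), rng.randrange(n)
            # Sum of the four nearest neighbours (periodic boundaries).
            nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
                  + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
            dE = 2.0 * spins[i][j] * nb  # energy change if this spin flips
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] *= -1
        if sweep >= sweeps // 2:  # discard the first half as burn-in
            m = sum(sum(row) for row in spins) / (n * n)
            mags.append(abs(m))
    return sum(mags) / len(mags)

# Below the critical point (beta_c ~ 0.44) the spins cooperate and the
# lattice stays magnetized; well above it, order dissolves.
m_cold = ising_metropolis(beta=0.6, ordered_start=True)
m_hot = ising_metropolis(beta=0.2)
```

Comparing `m_cold` and `m_hot` exhibits exactly the aggregate phenomenon the abstract points to: the same microscopic update rule yields an ordered phase at low temperature and a disordered one at high temperature, a distinction invisible in any single spin's trajectory.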