Results 1 - 10 of 20
Loopy Belief Propagation for Approximate Inference: An Empirical Study
In Proceedings of Uncertainty in AI, 1999
"... Recently, researchers have demonstrated that "loopy belief propagation"  the use of Pearl's polytree algorithm in a Bayesian network with loops  can perform well in the context of errorcorrecting codes. The most dramatic instance of this is the near Shannonlimit performance of "Turbo ..."
Abstract

Cited by 466 (18 self)
 Add to MetaCart
Recently, researchers have demonstrated that "loopy belief propagation" (the use of Pearl's polytree algorithm in a Bayesian network with loops) can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performance of "Turbo Codes", codes whose decoding algorithm is equivalent to loopy belief propagation in a chain-structured Bayesian network. In this paper we ask: is there something special about the error-correcting code context, or does loopy propagation work as an approximate inference scheme in a more general setting? We compare the marginals computed using loopy propagation to the exact ones in four Bayesian network architectures, including two real-world networks: ALARM and QMR. We find that the loopy beliefs often converge, and when they do, they give a good approximation to the correct marginals. However, on the QMR network, the loopy beliefs oscillated and had no obvious relationship ...
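To make the technique under study concrete, here is a minimal sum-product loopy belief propagation sketch on a three-node cycle. The graph, binary states, and potentials are invented for illustration and are not taken from the paper's benchmarks (ALARM, QMR); on this small attractive model the messages do converge.

```python
# Minimal loopy belief propagation (sum-product) on a pairwise model with a
# single loop. Potentials and the 3-node cycle are illustrative assumptions.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0)]                              # a single cycle
psi = {e: np.array([[1.0, 0.5], [0.5, 1.0]]) for e in edges}  # pairwise potentials
phi = [np.array([0.7, 0.3]) for _ in range(3)]                # unary potentials

def neighbours(i):
    return [b for a, b in edges if a == i] + [a for a, b in edges if b == i]

def loopy_bp(iters=50):
    # messages m[(i, j)]: message from node i to node j, initialised uniform
    m = {(i, j): np.ones(2) for i, j in edges}
    m.update({(j, i): np.ones(2) for i, j in edges})
    for _ in range(iters):                 # synchronous message updates
        new = {}
        for (i, j) in m:
            pot = psi[(i, j)] if (i, j) in psi else psi[(j, i)].T
            incoming = phi[i].copy()
            for k in neighbours(i):
                if k != j:                 # all messages into i except from j
                    incoming = incoming * m[(k, i)]
            msg = incoming @ pot           # sum over the states of node i
            new[(i, j)] = msg / msg.sum()  # normalise for numerical stability
        m = new
    # approximate marginal: unary potential times all incoming messages
    beliefs = []
    for i in range(3):
        b = phi[i].copy()
        for k in neighbours(i):
            b = b * m[(k, i)]
        beliefs.append(b / b.sum())
    return beliefs

for i, b in enumerate(loopy_bp()):
    print(i, b)
```

On a loop-free graph this reduces to Pearl's exact polytree algorithm; the loop is what makes the result only an approximation.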
Bayesian Networks Without Tears
AI Magazine, 1991
"... I give an introduction to Bayesian networks for AI researchers with a limited grounding in probability theory. Over the last few years, this method of reasoning using probabilities has become popular within the AI probability and uncertainty community. Indeed, it is probably fair to say that Bayesia ..."
Abstract

Cited by 236 (2 self)
 Add to MetaCart
I give an introduction to Bayesian networks for AI researchers with a limited grounding in probability theory. Over the last few years, this method of reasoning using probabilities has become popular within the AI probability and uncertainty community. Indeed, it is probably fair to say that Bayesian networks are to a large segment of the AI-uncertainty community what resolution theorem proving is to the AI-logic community. Nevertheless, despite what seems to be their obvious importance, the ideas and techniques have not spread much beyond the research community responsible for them. This is probably because the ideas and techniques are not that easy to understand. I hope to rectify this situation by making Bayesian networks more accessible to the probabilistically unsophisticated.
AIS-BN: An Adaptive Importance Sampling Algorithm for Evidential Reasoning in Large Bayesian Networks
Journal of Artificial Intelligence Research, 2000
"... Stochastic sampling algorithms, while an attractive alternative to exact algorithms in very large Bayesian network models, have been observed to perform poorly in evidential reasoning with extremely unlikely evidence. To address this problem, we propose an adaptive importance sampling algorithm, ..."
Abstract

Cited by 69 (4 self)
 Add to MetaCart
Stochastic sampling algorithms, while an attractive alternative to exact algorithms in very large Bayesian network models, have been observed to perform poorly in evidential reasoning with extremely unlikely evidence. To address this problem, we propose an adaptive importance sampling algorithm, AIS-BN, that shows promising convergence rates even under extreme conditions and seems to outperform the existing sampling algorithms consistently. Three sources of this performance improvement are (1) two heuristics for initialization of the importance function that are based on the theoretical properties of importance sampling in finite-dimensional integrals and the structural advantages of Bayesian networks, (2) a smooth learning method for the importance function, and (3) a dynamic weighting function for combining samples from different stages of the algorithm. We tested the performance of the AIS-BN algorithm along with two state-of-the-art general-purpose sampling algorithms, lik...
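The core mechanism behind such algorithms is self-normalised importance sampling: sample from a proposal, reweight by prior-over-proposal times the evidence likelihood. The sketch below shows only this generic mechanism on an invented two-node network (AIS-BN itself additionally learns the importance function across stages, which is not reproduced here). Note how a proposal skewed toward the rare cause copes with unlikely evidence.

```python
# Self-normalised importance sampling for P(A=1 | B=1) in a toy network
# A -> B with B observed. Network, CPTs, and proposal are illustrative.
import random

p_a = 0.01                          # P(A = 1): a rare cause
p_b_given_a = {1: 0.9, 0: 0.001}    # P(B = 1 | A)

def posterior_a_given_b(n, q_a, seed=0):
    """Estimate P(A=1 | B=1), sampling A from a proposal q_a instead of p_a."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        a = 1 if rng.random() < q_a else 0
        # weight = prior(sample) / proposal(sample) * likelihood(evidence)
        w = (p_a if a else 1 - p_a) / (q_a if a else 1 - q_a) * p_b_given_a[a]
        num += w * a
        den += w
    return num / den

# exact answer by Bayes' rule, for reference
exact = p_a * 0.9 / (p_a * 0.9 + (1 - p_a) * 0.001)
print(exact, posterior_a_given_b(100_000, q_a=0.5))
```

Sampling A from its prior (q_a = p_a) would almost never propose the rare cause; biasing the proposal toward it is the basic idea an adaptive importance function automates.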
Variational Probabilistic Inference and the QMR-DT Network
Journal of Artificial Intelligence Research, 1999
"... We describe a variational approximation method for efficient inference in largescale probabilistic models. Variational methods are deterministic procedures that provide approximations to marginal and conditional probabilities of interest. They provide alternatives to approximate inference method ..."
Abstract

Cited by 57 (3 self)
 Add to MetaCart
We describe a variational approximation method for efficient inference in large-scale probabilistic models. Variational methods are deterministic procedures that provide approximations to marginal and conditional probabilities of interest. They provide alternatives to approximate inference methods based on stochastic sampling or search. We describe a variational approach to the problem of diagnostic inference in the "Quick Medical Reference" (QMR) network. The QMR network is a large-scale probabilistic graphical model built on statistical and expert knowledge. Exact probabilistic inference is infeasible in this model for all but a small set of cases. We evaluate our variational inference algorithm on a large set of diagnostic test cases, comparing the algorithm to a state-of-the-art stochastic sampling method.
Variational Methods for Inference and Estimation in Graphical Models, 1997
"... Graphical models enhance the representational power of probability models through qualitative characterization of their properties. This also leads to greater efficiency in terms of the computational algorithms that empower such representations. The increasing complexity of these models, however, qu ..."
Abstract

Cited by 57 (3 self)
 Add to MetaCart
Graphical models enhance the representational power of probability models through qualitative characterization of their properties. This also leads to greater efficiency in the computational algorithms that empower such representations. The increasing complexity of these models, however, quickly renders exact probabilistic calculations infeasible. We propose a principled framework for approximating graphical models based on variational methods. We develop variational techniques from a perspective that unifies and expands their applicability to graphical models. These methods allow the (recursive) computation of upper and lower bounds on the quantities of interest. Such bounds yield considerably more information than mere approximations and provide an inherent error metric for tailoring the approximations individually to the cases considered. These desirable properties, concomitant to the variational methods, are unlikely to arise as a result of other deterministic or stochas...
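As a generic illustration of the kind of bound involved (not this paper's specific derivation), the standard variational lower bound on a log-likelihood follows from Jensen's inequality: for any distribution Q over the hidden variables H,

```latex
\log P(E) \;=\; \log \sum_{H} P(H, E)
        \;\ge\; \sum_{H} Q(H) \,\log \frac{P(H, E)}{Q(H)} ,
```

and the gap is exactly the KL divergence $\mathrm{KL}\!\left(Q \,\|\, P(H \mid E)\right)$. Maximizing the right-hand side over a tractable family for Q tightens the bound, and the bound itself plays the role of the inherent error metric the abstract describes.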
Decomposing Bayesian Networks: Triangulation of Moral Graph with Genetic Algorithms
Statistics and Computing, 1997
"... In this paper we consider the optimal decomposition of Bayesian networks. More concretely, we examine  empirically , the applicability of genetic algorithms to the problem of the triangulation of moral graphs. This problem constitutes the only difficult step in the evidence propagation algorithm ..."
Abstract

Cited by 22 (4 self)
 Add to MetaCart
In this paper we consider the optimal decomposition of Bayesian networks. More concretely, we examine, empirically, the applicability of genetic algorithms to the problem of the triangulation of moral graphs. This problem constitutes the only difficult step in the evidence propagation algorithm of Lauritzen and Spiegelhalter (1988) and is known to be NP-hard (Wen, 1991). We carry out experiments with distinct crossover and mutation operators and with different population sizes, mutation rates and selection biases. The results are analyzed statistically. They turn out to improve the results obtained with most other known triangulation methods (Kjaerulff, 1990) and are comparable to the ones obtained with simulated annealing (Kjaerulff, 1990; Kjaerulff, 1992).
Keywords: Bayesian networks, genetic algorithms, optimal decomposition, graph triangulation, moral graph, NP-hard problems, statistical analysis.
1 Introduction
Bayesian networks constitute a reasoning method based on p...
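A GA over triangulations typically encodes a candidate as an elimination ordering and needs a fitness function for it. The sketch below computes one common choice, the number of fill-in edges an ordering induces on a moral graph; the example graph and ordering are invented, and a real fitness would often use total clique weight instead.

```python
# Fill-in cost of an elimination ordering: the triangulation a GA individual
# (an ordering) induces. Graph and ordering here are illustrative assumptions.
def fill_in_cost(adj, order):
    """adj: dict node -> set of neighbours (undirected moral graph).
    Returns the number of fill-in edges added by eliminating nodes in order."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    added = 0
    for v in order:
        nbrs = list(adj[v])
        for i in range(len(nbrs)):                # connect v's neighbours pairwise
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    added += 1
        for n in nbrs:                            # then remove v from the graph
            adj[n].discard(v)
        del adj[v]
    return added

# a 4-cycle moral graph: every elimination ordering needs at least one fill-in
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(fill_in_cost(cycle, [0, 1, 2, 3]))
```

The GA's crossover and mutation operators then search the space of orderings for one minimizing this cost, which is what makes the NP-hard triangulation problem amenable to heuristic search.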
A Bayesian Analysis of Simulation Algorithms for Inference in Belief Networks
Networks, 1993
"... A belief network is a graphical representation of the underlying probabilistic relationships in a complex system. Belief networks have been employed as a representation of uncertain relationships in computerbased diagnostic systems. These diagnostic systems provide assistance by assigning likeli ..."
Abstract

Cited by 17 (3 self)
 Add to MetaCart
A belief network is a graphical representation of the underlying probabilistic relationships in a complex system. Belief networks have been employed as a representation of uncertain relationships in computer-based diagnostic systems. These diagnostic systems provide assistance by assigning likelihoods to alternative explanatory hypotheses in response to a set of findings or observations. Approximation algorithms have been used to compute likelihoods of hypotheses in large networks. We analyze the performance of leading Monte Carlo approximation algorithms for computing posterior probabilities in belief networks. The analysis differs from earlier attempts to characterize the behavior of simulation algorithms in our explicit use of Bayesian statistics: we update a probability distribution over target probabilities of interest with information from randomized trials. For real ε, δ < 1 and for a probabilistic inference Pr[x|e], the output of an inference approximation algorithm is an (ε, δ)-estimate of Pr[x|e] if with probability at least 1 − δ the output is within relative error ε of Pr[x|e]. We construct a stopping rule for the number of simulations required by logic sampling, randomized approximation schemes, and likelihood weighting to provide (ε, δ)-estimates of Pr[x|e]. With probability 1 − δ, the stopping rule is optimal in the sense that the algorithm performs the minimum number of required simulations. We prove that our stopping rules are insensitive to the prior probability distribution on Pr[x|e].
Variational probabilistic inference and the QMR-DT database
Journal of Artificial Intelligence Research, 1999
"... We describe a variational approximation method for efficient inference in largescale probabilistic models. Variational methods are deterministic procedures that provide approximations to marginal and conditional probabilities of interest. They provide alternatives to approximate inference methods b ..."
Abstract

Cited by 16 (3 self)
 Add to MetaCart
We describe a variational approximation method for efficient inference in large-scale probabilistic models. Variational methods are deterministic procedures that provide approximations to marginal and conditional probabilities of interest. They provide alternatives to approximate inference methods based on stochastic sampling or search. We describe a variational approach to the problem of diagnostic inference in the "Quick Medical Reference" (QMR) database. The QMR database is a large-scale probabilistic graphical model built on statistical and expert knowledge. Exact probabilistic inference is infeasible in this model for all but a small set of cases. We evaluate our variational inference algorithm on a large set of diagnostic test cases, comparing the algorithm to a state-of-the-art stochastic sampling method.
1 Introduction
Probabilistic models have become increasingly prevalent in AI in recent years. Beyond the significant representational advantages of probability theory, inclu...
Variational methods and the QMR-DT database
Journal of Artificial Intelligence, 1999
"... We describe variational approximation methods for e cient probabilistic reasoning, applying these methods to the problem of diagnostic inference in the QMRDT database. The QMRDT database is a largescale belief network based on statistical and expert knowledge in internal medicine. The size and co ..."
Abstract

Cited by 11 (2 self)
 Add to MetaCart
We describe variational approximation methods for efficient probabilistic reasoning, applying these methods to the problem of diagnostic inference in the QMR-DT database. The QMR-DT database is a large-scale belief network based on statistical and expert knowledge in internal medicine. The size and complexity of this network render exact probabilistic diagnosis infeasible for all but a small set of cases. This has hindered the development of the QMR-DT network as a practical diagnostic tool and has hindered researchers from exploring and critiquing the diagnostic behavior of QMR. In this paper we describe how variational approximation methods can be applied to the QMR network, resulting in fast diagnostic inference. We evaluate the accuracy of our methods on a set of standard diagnostic cases and compare to stochastic sampling methods.
Optimal Monte Carlo Estimation of Belief Network Inference
In Proceedings of the 12th Conference on Uncertainty in Artificial Intelligence, 1996
"... We present two Monte Carlo sampling algorithms for probabilistic inference that guarantee polynomialtime convergence for a larger class of network than current sampling algorithms provide. These new methods are variants of the known likelihood weighting algorithm. We use of recent advances in ..."
Abstract

Cited by 9 (1 self)
 Add to MetaCart
We present two Monte Carlo sampling algorithms for probabilistic inference that guarantee polynomial-time convergence for a larger class of networks than current sampling algorithms provide. These new methods are variants of the known likelihood weighting algorithm. We use recent advances in the theory of optimal stopping rules for Monte Carlo simulation to obtain an inference approximation with relative error and a small failure probability. We present an empirical evaluation of the algorithms which demonstrates their improved performance.
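For reference, the basic likelihood weighting scheme these algorithms build on is short: sample the unobserved nodes from their CPTs in topological order and weight each sample by the likelihood of the evidence. The three-node chain and its CPTs below are invented for illustration; the paper's contribution, the optimal stopping rules, is not reproduced here.

```python
# Basic likelihood weighting on a chain A -> B -> C with evidence C = 1,
# estimating P(A = 1 | C = 1). Network and CPTs are illustrative assumptions.
import random

p_a = 0.2
p_b = {1: 0.7, 0: 0.1}   # P(B = 1 | A)
p_c = {1: 0.8, 0: 0.3}   # P(C = 1 | B)

def lw_posterior_a(n, seed=0):
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        a = 1 if rng.random() < p_a else 0      # sample unobserved nodes
        b = 1 if rng.random() < p_b[a] else 0   # in topological order
        w = p_c[b]          # weight = P(evidence C = 1 | sampled parent B)
        num += w * a
        den += w
    return num / den

# exact posterior for this toy chain: 0.13 / 0.41, roughly 0.317
print(lw_posterior_a(200_000))
```

A stopping rule would monitor the accumulated weights and halt as soon as the (ε, δ) guarantee is met instead of using a fixed n as here.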