Results 1 - 10 of 21
Approximating probabilistic inference in Bayesian belief networks is NP-hard
, 1991
Cited by 256 (3 self)
Abstract A belief network comprises a graphical representation of dependencies between variables of a domain and a set of conditional probabilities associated with each dependency. Unless P=NP, an efficient, exact algorithm does not exist to compute probabilistic inference in belief networks. Stochastic simulation methods, which often improve run times, provide an alternative to exact inference algorithms. We present such a stochastic simulation algorithm, BN-RAS, that is a randomized approximation scheme. To analyze the run time, we parameterize belief networks by the dependence value PE, which is a measure of the cumulative strengths of the belief network dependencies given background evidence E. This parameterization defines the class of f-dependence networks. The run time of BN-RAS is polynomial when f is a polynomial function. Thus, the results of this paper prove the existence of a class of belief networks for which inference approximation is polynomial and, hence, provably faster than any exact algorithm.
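The stochastic-simulation baseline this abstract builds on can be sketched in a few lines. This is plain forward (logic) sampling on a hypothetical two-node network, not the BN-RAS scheme itself; the CPT numbers are illustrative only:

```python
import random

def logic_sampling(n=100_000, seed=0):
    """Plain stochastic simulation (logic sampling) on a toy network
    A -> B: draw complete instantiations in topological order and
    count how often the query event occurs. CPT values hypothetical."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        a = rng.random() < 0.3                  # P(A=1) = 0.3
        b = rng.random() < (0.8 if a else 0.1)  # P(B=1 | A)
        hits += b
    return hits / n
```

Here the exact marginal is P(B=1) = 0.3 * 0.8 + 0.7 * 0.1 = 0.31, and the estimate converges at the usual Monte Carlo rate; what a randomized approximation scheme like BN-RAS adds is an explicit run-time guarantee for a prescribed accuracy.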
Learning Bayesian belief networks: An approach based on the MDL principle
 Computational Intelligence
, 1994
Cited by 188 (8 self)
A new approach for learning Bayesian belief networks from raw data is presented. The approach is based on Rissanen's Minimal Description Length (MDL) principle, which is particularly well suited for this task. Our approach does not require any prior assumptions about the distribution being learned. In particular, our method can learn unrestricted multiply-connected belief networks. Furthermore, unlike other approaches, our method allows us to trade off accuracy and complexity in the learned model. This is important since if the learned model is very complex (highly connected) it can be conceptually and computationally intractable. In such a case it would be preferable to use a simpler model even if it is less accurate. The MDL principle offers a reasoned method for making this tradeoff. We also show that our method generalizes previous approaches based on Kullback cross-entropy. Experiments have been conducted to demonstrate the feasibility of the approach. Keywords: Knowledge Acquisition; Bayes Nets; Uncertainty Reasoning.
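The accuracy/complexity tradeoff the abstract describes can be made concrete with a minimal MDL score. This sketch scores a single discrete column only and omits the network-structure encoding cost; the (1/2) log2(n) bits-per-parameter term is the standard MDL parameter cost:

```python
import math
from collections import Counter

def data_bits(column):
    """Bits needed to encode a discrete column under its maximum-
    likelihood distribution (the data part of the description length)."""
    n = len(column)
    return -sum(c * math.log2(c / n) for c in Counter(column).values())

def mdl_score(column, n_params):
    """Total description length: (1/2) * log2(n) bits per parameter
    plus the data encoding cost. Lower is better; extra parameters
    must pay for themselves in improved compression of the data."""
    return 0.5 * math.log2(len(column)) * n_params + data_bits(column)
```

A heavily skewed binary column compresses well, so its data cost is far below one bit per sample; a richer model is only preferred when the drop in data cost exceeds the added parameter cost.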
AIS-BN: An Adaptive Importance Sampling Algorithm for Evidential Reasoning in Large Bayesian Networks
 Journal of Artificial Intelligence Research
, 2000
Cited by 69 (4 self)
Stochastic sampling algorithms, while an attractive alternative to exact algorithms in very large Bayesian network models, have been observed to perform poorly in evidential reasoning with extremely unlikely evidence. To address this problem, we propose an adaptive importance sampling algorithm, AIS-BN, that shows promising convergence rates even under extreme conditions and seems to outperform the existing sampling algorithms consistently. Three sources of this performance improvement are (1) two heuristics for initialization of the importance function that are based on the theoretical properties of importance sampling in finite-dimensional integrals and the structural advantages of Bayesian networks, (2) a smooth learning method for the importance function, and (3) a dynamic weighting function for combining samples from different stages of the algorithm. We tested the performance of the AIS-BN algorithm along with two state-of-the-art general-purpose sampling algorithms, lik...
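The core importance-sampling mechanism can be sketched on a hypothetical two-node network with rare evidence. This is only the basic weighting step, not the adaptive learning of AIS-BN; the CPT numbers and the importance parameter q_a1 are assumptions for illustration:

```python
import random

# Toy network A -> B with rare evidence B = 1 (numbers hypothetical):
P_A1 = 0.5
P_B1_GIVEN_A = {1: 0.001, 0: 0.0001}

def importance_estimate(q_a1, n=50_000, seed=1):
    """Estimate P(B=1) by drawing A from an importance distribution q
    and weighting each sample by p(a)/q(a) * P(B=1 | a). Any q with
    q(a) > 0 gives an unbiased estimate; a q closer to the posterior
    shape lowers the variance, which is what AIS-BN learns adaptively."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        a = 1 if rng.random() < q_a1 else 0
        p = P_A1 if a == 1 else 1 - P_A1
        q = q_a1 if a == 1 else 1 - q_a1
        total += (p / q) * P_B1_GIVEN_A[a]
    return total / n
```

The exact value here is 0.5 * 0.001 + 0.5 * 0.0001 = 0.00055; the estimate is unbiased for any valid q, and the choice of q controls only the variance.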
An Optimal Approximation Algorithm For Bayesian Inference
 Artificial Intelligence
, 1997
Cited by 48 (2 self)
Approximating the inference probability Pr[X = x | E = e] in any sense, even for a single evidence node E, is NP-hard. This result holds for belief networks that are allowed to contain extreme conditional probabilities, that is, conditional probabilities arbitrarily close to 0. Nevertheless, all previous approximation algorithms have failed to approximate efficiently many inferences, even for belief networks without extreme conditional probabilities. We prove that we can approximate efficiently probabilistic inference in belief networks without extreme conditional probabilities. We construct a randomized approximation algorithm, the bounded-variance algorithm, that is a variant of the known likelihood-weighting algorithm. The bounded-variance algorithm is the first algorithm with provably fast inference approximation on all belief networks without extreme conditional probabilities. From the bounded-variance algorithm, we construct a deterministic approximation algorithm u...
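Plain likelihood weighting, which the bounded-variance algorithm refines, can be sketched on a hypothetical two-node network. The CPT numbers are assumptions; note that none of them is near 0, so the weights stay well-behaved, which is the "no extreme conditional probabilities" condition the abstract emphasizes:

```python
import random

# Toy network A -> B (CPT numbers hypothetical):
P_A1 = 0.3
P_B1_GIVEN_A = {1: 0.8, 0: 0.1}

def likelihood_weighting(n=100_000, seed=2):
    """Estimate P(A=1 | B=1): sample A from its prior, clamp B to the
    evidence, and weight each sample by the likelihood P(B=1 | A)."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        a = 1 if rng.random() < P_A1 else 0
        w = P_B1_GIVEN_A[a]  # likelihood of the evidence B = 1
        num += w * a
        den += w
    return num / den
```

The exact posterior is 0.3 * 0.8 / (0.3 * 0.8 + 0.7 * 0.1) = 24/31, roughly 0.774. When some CPT entries approach 0 the weights become wildly uneven and convergence degrades, which is the failure mode the NP-hardness result exploits.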
Optimization by learning and simulation of Bayesian and Gaussian networks
, 1999
Cited by 43 (6 self)
Estimation of Distribution Algorithms (EDAs) constitute an example of stochastic heuristics based on populations of individuals, each of which encodes a possible solution to the optimization problem. These populations of individuals evolve in successive generations as the search progresses, organized in the same way as most evolutionary computation heuristics. In contrast to most evolutionary computation paradigms, which consider the crossover and mutation operators essential tools for generating new populations, EDAs replace those operators with the estimation and simulation of the joint probability distribution of the selected individuals. In this work, after reviewing the different approaches based on EDAs for combinatorial optimization problems as well as for optimization problems in continuous domains, we propose new approaches based on the theory of probabilistic graphical models to solve problems in both domains. More precisely, we propose to adapt algorit...
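The estimate-then-simulate loop that replaces crossover and mutation can be sketched with the simplest model choice, independent bit marginals (UMDA-style), on the standard OneMax benchmark; all parameter values below are illustrative:

```python
import random

def umda_onemax(n_bits=20, pop=60, n_select=30, gens=40, seed=3):
    """Minimal EDA (UMDA-style) on OneMax: instead of crossover and
    mutation, fit independent per-bit marginals to the selected half
    of the population and sample the next generation from them."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=sum, reverse=True)
        selected = population[:n_select]
        # Estimated distribution: a product of per-bit marginals
        p = [sum(ind[i] for ind in selected) / n_select
             for i in range(n_bits)]
        population = [[int(rng.random() < p[i]) for i in range(n_bits)]
                      for _ in range(pop)]
    return max(sum(ind) for ind in population)
```

The richer approaches the abstract points toward replace the product-of-marginals estimate with a learned probabilistic graphical model (a Bayesian or Gaussian network), capturing dependencies between variables that this univariate sketch ignores.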
Using Causal Information and Local Measures to Learn Bayesian Networks
, 1993
Cited by 35 (2 self)
In previous work we developed a method of learning Bayesian network models from raw data. This method relies on the well-known minimal description length (MDL) principle. The MDL principle is particularly well suited to this task, as it allows us to trade off, in a principled way, the accuracy of the learned network against its practical usefulness. In this paper we present some new results that have arisen from our work. In particular, we present a new local way of computing the description length. This allows us to make significant improvements in our search algorithm. In addition, we modify our algorithm so that it can take into account partial domain information that might be provided by a domain expert. The local computation of description length also opens the door for local refinement of an existing network. The feasibility of our approach is demonstrated by experiments involving networks of a practical size.
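Why a local computation of description length speeds up search can be seen from the fact that an MDL-style network score decomposes into one term per node family. The sketch below is a hypothetical illustration (rows are dicts of variable name to value; the exact encoding in the paper differs in detail):

```python
import math
from collections import Counter

def local_dl(data, child, parents, n_states=2):
    """Description length contributed by one node's family: parameter
    cost plus the child's negative log-likelihood given its parents.
    Because the total score is a sum of these local terms, adding or
    deleting a single edge only requires recomputing one family."""
    pa_counts = Counter(tuple(row[p] for p in parents) for row in data)
    fam_counts = Counter((tuple(row[p] for p in parents), row[child])
                         for row in data)
    ll_bits = -sum(c * math.log2(c / pa_counts[pa])
                   for (pa, _), c in fam_counts.items())
    n_params = (n_states - 1) * len(pa_counts)
    return ll_bits + 0.5 * math.log2(len(data)) * n_params
```

A greedy search can then evaluate each candidate edge change by recomputing a single `local_dl` term rather than rescoring the whole network.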
Decomposing Bayesian Networks: Triangulation of Moral Graph with Genetic Algorithms
 Statistics and Computing
, 1997
Cited by 22 (4 self)
In this paper we consider the optimal decomposition of Bayesian networks. More concretely, we examine, empirically, the applicability of genetic algorithms to the problem of the triangulation of moral graphs. This problem constitutes the only difficult step in the evidence propagation algorithm of Lauritzen and Spiegelhalter (1988) and is known to be NP-hard (Wen, 1991). We carry out experiments with distinct crossover and mutation operators and with different population sizes, mutation rates and selection biases. The results are analyzed statistically. They turn out to improve the results obtained with most other known triangulation methods (Kjaerulff, 1990) and are comparable to the ones obtained with simulated annealing (Kjaerulff, 1990; Kjaerulff, 1992). Keywords: Bayesian networks, genetic algorithms, optimal decomposition, graph triangulation, moral graph, NP-hard problems, statistical analysis. 1 Introduction Bayesian networks constitute a reasoning method based on p...
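For context, a standard one-shot baseline for this problem is greedy min-fill triangulation by node elimination, sketched below; the genetic algorithm instead searches over elimination orders rather than committing to a single greedy choice. The graph encoding is an assumption for illustration:

```python
from itertools import combinations

def min_fill_triangulate(adj):
    """Greedy min-fill triangulation of an undirected (moral) graph,
    given as {node: set of neighbours}. Eliminates nodes one by one,
    always picking the node whose elimination adds the fewest fill-in
    edges, and returns the list of fill-in edges added."""
    adj = {v: set(ns) for v, ns in adj.items()}
    remaining = set(adj)
    fill = []

    def fill_needed(v):
        nbrs = adj[v] & remaining
        return sum(1 for a, b in combinations(sorted(nbrs), 2)
                   if b not in adj[a])

    while remaining:
        v = min(sorted(remaining), key=fill_needed)
        nbrs = adj[v] & remaining
        for a, b in combinations(sorted(nbrs), 2):
            if b not in adj[a]:
                adj[a].add(b)
                adj[b].add(a)
                fill.append((a, b))
        remaining.discard(v)
    return fill
```

On a 4-cycle 0-1-2-3, for example, a single chord suffices; on larger graphs the greedy choice can be far from optimal, which is what motivates global search methods such as genetic algorithms and simulated annealing.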
Uncertain Reasoning and Forecasting
 International Journal of Forecasting
, 1995
Cited by 19 (3 self)
We develop a probability forecasting model through a synthesis of Bayesian belief-network models and classical time-series analysis. By casting Bayesian time-series analyses as temporal belief-network problems, we introduce dependency models that capture richer and more realistic models of dynamic dependencies. With richer models and associated computational methods, we can move beyond the rigid classical assumptions of linearity in the relationships among variables and of normality of their probability distributions.
A Bayesian Analysis of Simulation Algorithms for Inference in Belief Networks
 Networks
, 1993
Cited by 17 (3 self)
A belief network is a graphical representation of the underlying probabilistic relationships in a complex system. Belief networks have been employed as a representation of uncertain relationships in computer-based diagnostic systems. These diagnostic systems provide assistance by assigning likelihoods to alternative explanatory hypotheses in response to a set of findings or observations. Approximation algorithms have been used to compute likelihoods of hypotheses in large networks. We analyze the performance of leading Monte Carlo approximation algorithms for computing posterior probabilities in belief networks. The analysis differs from earlier attempts to characterize the behavior of simulation algorithms in our explicit use of Bayesian statistics: We update a probability distribution over target probabilities of interest with information from randomized trials. For real ε, δ < 1 and for a probabilistic inference Pr[x|e], the output of an inference approximation algorithm is an (ε, δ)-estimate of Pr[x|e] if, with probability at least 1 − δ, the output is within relative error ε of Pr[x|e]. We construct a stopping rule for the number of simulations required by logic sampling, randomized approximation schemes, and likelihood weighting to provide (ε, δ)-estimates of Pr[x|e]. With probability 1 − δ, the stopping rule is optimal in the sense that the algorithm performs the minimum number of required simulations. We prove that our stopping rules are insensitive to the prior probability distribution on Pr[x|e].
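For scale, the simplest fixed (non-adaptive) alternative to such a stopping rule is an a-priori sample-size bound from Hoeffding's inequality. This sketch gives an additive rather than relative (ε, δ) guarantee and is not the paper's Bayesian rule, which adapts to the trials seen so far and can stop much earlier:

```python
import math

def hoeffding_samples(eps, delta):
    """Number of simulations sufficient for an additive (eps, delta)-
    estimate of a probability in [0, 1] by Hoeffding's inequality:
    n >= ln(2/delta) / (2 * eps**2). A fixed worst-case bound, used
    here only as a baseline for comparison with adaptive stopping."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))
```

For example, `hoeffding_samples(0.01, 0.05)` returns 18445: tens of thousands of trials regardless of how easy the particular inference turns out to be, which is exactly the waste an evidence-adaptive stopping rule avoids.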
Propagating Imprecise Probabilities In Bayesian Networks
 Artificial Intelligence
, 1996
Cited by 15 (5 self)
Often experts are incapable of providing 'exact' probabilities; likewise, samples on which the probabilities in networks are based must often be small and preliminary. In such cases the probabilities in the networks are imprecise. The imprecision can be handled by second-order probability distributions. It is convenient to use beta or Dirichlet distributions to express the uncertainty about probabilities. The problem of how to propagate point probabilities in a Bayesian network is now transformed into the problem of how to propagate Dirichlet distributions in Bayesian networks. It is shown that the propagation of Dirichlet distributions in Bayesian networks with incomplete data results in a system of probability mixtures of beta-binomial and Dirichlet distributions. Approximate first-order probabilities and their second-order probability density functions are obtained by stochastic simulation. A number of properties of the propagation of imprecise probabilities are discuss...
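The stochastic-simulation step mentioned at the end can be sketched on a hypothetical two-node network: each CPT entry is drawn from a beta distribution (the pseudo-counts below are invented for illustration), and simulation gives the induced second-order distribution of the query:

```python
import random
import statistics

def propagate_imprecise(draws=20_000, seed=4):
    """Second-order uncertainty in a toy network A -> B: draw each
    imprecise CPT entry from its beta distribution, compute the query
    P(B=1) = P(A=1)P(B=1|A=1) + P(A=0)P(B=1|A=0) for each draw, and
    summarize the resulting distribution by mean and std deviation."""
    rng = random.Random(seed)
    queries = []
    for _ in range(draws):
        p_a1 = rng.betavariate(8, 2)     # P(A=1), as if from 8 vs 2 counts
        p_b1_a1 = rng.betavariate(6, 4)  # P(B=1 | A=1)
        p_b1_a0 = rng.betavariate(1, 9)  # P(B=1 | A=0)
        queries.append(p_a1 * p_b1_a1 + (1 - p_a1) * p_b1_a0)
    return statistics.mean(queries), statistics.stdev(queries)
```

The mean recovers roughly the point-probability answer (about 0.50 with these pseudo-counts), while the standard deviation quantifies how imprecise that answer is given the limited evidence behind each CPT entry.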