Results 1–10 of 13
Causality: Models, Reasoning, and Inference
, 2000
Abstract

Cited by 103 (15 self)
This paper explores the role of Directed Acyclic Graphs (DAGs) as a representation of conditional independence relationships. We show that DAGs offer polynomially sound and complete inference mechanisms for inferring conditional independence relationships from a given causal set of such relationships. As a consequence, d-separation, a graphical criterion for identifying independencies in a DAG, is shown to uncover more valid independencies than any other criterion. In addition, we employ the Armstrong property of conditional independence to show that the dependence relationships displayed by a DAG are inherently consistent, i.e. for every DAG D there exists some probability distribution P that embodies all the conditional independencies displayed in D and none other.

INTRODUCTION AND SUMMARY OF RESULTS Networks employing Directed Acyclic Graphs (DAGs) have a long and rich tradition, starting with the geneticist Wright (1921). He developed a method called path analysis [Wright, 1934] which later on became an established representation of causal models in economics [Wold, 1964], sociology [Blalock, 1971] and psychology [Duncan, 1975]. Influence diagrams represent another application of
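The d-separation criterion named in this abstract can be tested mechanically. Below is a minimal sketch using the moralized-ancestral-graph method (restrict to ancestors, marry co-parents, drop directions, delete the conditioning set, test reachability); the function name and the three-node collider example are invented for illustration, not taken from the paper.

```python
# Hedged sketch: d-separation test via the moralized ancestral graph.
# A DAG is a dict mapping each node to the list of its parents.

def d_separated(dag, xs, ys, zs):
    """Return True if node sets xs and ys are d-separated given zs."""
    # 1. Restrict to the ancestral graph of xs | ys | zs.
    relevant, frontier = set(), set(xs) | set(ys) | set(zs)
    while frontier:
        n = frontier.pop()
        if n not in relevant:
            relevant.add(n)
            frontier |= set(dag.get(n, []))

    # 2. Moralize: connect co-parents, then drop edge directions.
    undirected = {n: set() for n in relevant}
    for n in relevant:
        parents = [p for p in dag.get(n, []) if p in relevant]
        for p in parents:
            undirected[n].add(p)
            undirected[p].add(n)
        for i, p in enumerate(parents):          # marry co-parents
            for q in parents[i + 1:]:
                undirected[p].add(q)
                undirected[q].add(p)

    # 3. Remove the conditioning set and test reachability.
    blocked, targets = set(zs), set(ys)
    seen, stack = set(), [x for x in xs if x not in blocked]
    while stack:
        n = stack.pop()
        if n in targets:
            return False                          # active path found
        if n in seen:
            continue
        seen.add(n)
        stack.extend(m for m in undirected[n]
                     if m not in blocked and m not in seen)
    return True

# Classic collider A -> C <- B: A and B are independent,
# but conditioning on the common child C opens the path.
dag = {"A": [], "B": [], "C": ["A", "B"]}
print(d_separated(dag, ["A"], ["B"], []))     # → True
print(d_separated(dag, ["A"], ["B"], ["C"]))  # → False
```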
A simple approach to Bayesian network computations
, 1994
Abstract

Cited by 82 (8 self)
The general problem of computing posterior probabilities in Bayesian networks is NP-hard (Cooper 1990). However, efficient algorithms are often possible for particular applications by exploiting problem structures. It is well understood that the key to the materialization of such a possibility is to make use of conditional independence and work with factorizations of joint probabilities rather than joint probabilities themselves. Different exact approaches can be characterized in terms of their choices of factorizations. We propose a new approach which adopts a straightforward way of factorizing joint probabilities. In comparison with the clique tree propagation approach, our approach is very simple. It allows the pruning of irrelevant variables, it accommodates changes to the knowledge base more easily, and it is easier to implement. More importantly, it can be adapted to utilize both intercausal independence and conditional independence in one uniform framework. On the other hand, clique tree propagation is better in terms of facilitating precomputations.
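The factorization idea in this abstract — multiply only the factors that mention a variable, then sum that variable out — can be sketched in a few lines. The two-node network (Rain → WetGrass), its binary-variable assumption, and all the numbers below are illustrative, not from the paper.

```python
# Hedged sketch: factor multiplication and summing-out over binary
# variables. A factor is a pair (vars, table) with table keyed by
# 0/1 assignment tuples over vars.
from itertools import product

def multiply(f, g):
    """Pointwise product of two factors (binary variables assumed)."""
    vs = list(dict.fromkeys(f[0] + g[0]))
    table = {}
    for assign in product([0, 1], repeat=len(vs)):
        a = dict(zip(vs, assign))
        table[assign] = (f[1][tuple(a[v] for v in f[0])]
                         * g[1][tuple(a[v] for v in g[0])])
    return (vs, table)

def sum_out(f, var):
    """Eliminate var from factor f by summation."""
    vs = [v for v in f[0] if v != var]
    i = f[0].index(var)
    table = {}
    for assign, p in f[1].items():
        key = assign[:i] + assign[i + 1:]
        table[key] = table.get(key, 0.0) + p
    return (vs, table)

# P(Rain) and P(WetGrass | Rain) as factors.
p_rain = (["R"], {(1,): 0.2, (0,): 0.8})
p_wet = (["R", "W"], {(1, 1): 0.9, (1, 0): 0.1, (0, 1): 0.2, (0, 0): 0.8})

# P(W) = sum_R P(R) * P(W | R): multiply, then eliminate R.
p_w = sum_out(multiply(p_rain, p_wet), "R")
print(round(p_w[1][(1,)], 2))   # 0.2*0.9 + 0.8*0.2 → 0.34
```

Pruning irrelevant variables, as the abstract mentions, amounts to never multiplying in factors whose variables are outside the ancestral set of the query and evidence.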
A Survey of Algorithms for Real-Time Bayesian Network Inference
 In the joint AAAI-02/KDD-02/UAI-02 Workshop on Real-Time Decision Support and Diagnosis Systems
, 2002
Abstract

Cited by 32 (2 self)
As Bayesian networks are applied to more complex and realistic real-world applications, the development of more efficient inference algorithms working under real-time constraints is becoming more and more important. This paper presents a survey of various exact and approximate Bayesian network inference algorithms. In particular, previous research on real-time inference is reviewed. It provides a framework for understanding these algorithms and the relationships between them. Some important issues in real-time Bayesian network inference are also discussed.
Importance Sampling Algorithms for the Propagation of Probabilities in Belief Networks
 INTERNATIONAL JOURNAL OF APPROXIMATE REASONING
, 1994
Abstract

Cited by 11 (3 self)
This paper investigates the use of a class of importance sampling algorithms for the propagation of probabilities in graphical structures. A general model for constructing importance sampling algorithms is given and then some particular cases are considered. Logical sampling and likelihood weighting are particular cases of the model. Our proposal is an algorithm which uses the functions with less entropy (more informative) to simulate the variables and the functions with more entropy to weight the simulations; in this way we expec...
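Likelihood weighting, one of the special cases the abstract names, can be illustrated on a toy network: evidence variables are clamped rather than sampled, and each sample is weighted by the probability of the observed evidence under it. The network (Rain → WetGrass), the function name, and all probabilities below are invented for illustration.

```python
# Hedged sketch of likelihood weighting for P(Rain=1 | WetGrass=1).
import random

P_RAIN = 0.2
P_WET_GIVEN = {1: 0.9, 0: 0.2}   # P(WetGrass=1 | Rain)

def likelihood_weighting(n_samples, wet_observed=1, seed=0):
    """Estimate P(Rain=1 | WetGrass=wet_observed) by weighted samples."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        rain = 1 if rng.random() < P_RAIN else 0      # sample non-evidence
        p_w1 = P_WET_GIVEN[rain]
        w = p_w1 if wet_observed == 1 else 1.0 - p_w1  # weight by evidence
        num += w * rain
        den += w
    return num / den

est = likelihood_weighting(100_000)
# Exact posterior: 0.2*0.9 / (0.2*0.9 + 0.8*0.2) = 0.18/0.34 ≈ 0.529
print(round(est, 3))
```

The entropy-based heuristic the authors propose goes further: deterministic or near-deterministic (low-entropy) functions are used to generate values, while high-entropy functions only contribute to the weights, which keeps the weight variance down.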
Learning hybrid Bayesian networks from data
, 1998
Abstract

Cited by 11 (1 self)
We illustrate two different methodologies for learning hybrid Bayesian networks, that is, Bayesian networks containing both continuous and discrete variables, from data. The two methodologies differ in the way they handle continuous data when learning the Bayesian network structure. The first methodology uses discretized data to learn the Bayesian network structure, and the original non-discretized data for the parameterization of the learned structure. The second methodology uses non-discretized data both to learn the Bayesian network structure and its parameterization. For the direct handling of continuous data, we propose the use of artificial neural networks as probability estimators, to be used as an integral part of the scoring metric defined to search the space of Bayesian network structures. With both methodologies, we assume the availability of a complete dataset, with no missing values or hidden variables. We report experimental results aimed at comparing the two methodologies. These results provide evidence that learning with discretized data presents advantages both in terms of efficiency and in terms of accuracy of the learned models over the alternative approach of using non-discretized data.
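The preprocessing step behind the first methodology — discretize each continuous column for structure search while keeping the raw values for parameterization — can be sketched as below. Equal-frequency binning is one common choice; the bin count, function name, and data are assumptions for illustration, not the paper's actual procedure.

```python
# Hedged sketch: equal-frequency discretization of one continuous column.

def equal_frequency_bins(values, n_bins=3):
    """Map each value to a bin index 0..n_bins-1 with ~equal counts."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    for rank, i in enumerate(order):
        # Integer division spreads ranks evenly across the bins.
        bins[i] = min(rank * n_bins // len(values), n_bins - 1)
    return bins

raw = [0.1, 2.5, 0.7, 3.9, 1.8, 0.3]       # continuous variable (raw kept
disc = equal_frequency_bins(raw, n_bins=3)  # for parameterization later)
print(disc)                                 # → [0, 2, 1, 2, 1, 0]
```

Structure search then scores candidate DAGs against `disc`-style columns, and only the winning structure is parameterized from `raw`.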
A method for learning belief networks that contain hidden variables
 in Proceedings of the Workshop on Knowledge Discovery in Databases
, 1994
Abstract

Cited by 10 (4 self)
This paper presents a Bayesian method for computing the probability of a Bayesian belief-network structure from a database. In particular, the paper focuses on computing the probability of a belief-network structure that contains a hidden (latent) variable. A hidden variable represents a postulated entity about which we have no data. For example, we may wish to postulate the existence of a hidden ...
Inference using message propagation and topology transformation in vector Gaussian continuous networks
 Proceedings of the Twelfth UAI Conference
, 1996
Abstract

Cited by 8 (1 self)
We extend continuous Gaussian networks − directed acyclic graphs that encode probabilistic relationships between variables − to their vector form. Vector Gaussian continuous networks consist of composite nodes representing multivariables that take continuous values. These vector or composite nodes can represent correlations between parents, as opposed to conventional univariate nodes. We derive rules for inference in these networks based on two methods: message propagation and topology transformation. These two approaches lead to the development of algorithms that can be implemented in either a centralized or a decentralized manner. The domain of application of these networks is monitoring and estimation problems. This new representation, along with the rules for inference developed here, can be used to derive current Bayesian algorithms such as the Kalman filter, and provides a rich foundation to develop new algorithms. We illustrate this process by deriving the decentralized form of the Kalman filter. This work unifies concepts from artificial intelligence and modern control theory.
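The Kalman filter the abstract derives from Gaussian-network inference reduces, in one dimension, to a short predict/update cycle. The sketch below shows the scalar case only (a random walk with noisy observations); the model, noise variances, and measurement sequence are all invented, and this is not the paper's vector derivation.

```python
# Hedged sketch: scalar Kalman filter for x_t = x_{t-1} + w, z_t = x_t + v,
# with process noise variance q and measurement noise variance r.

def kalman_step(mean, var, z, q=0.1, r=0.5):
    """One predict/update cycle on the Gaussian belief (mean, var)."""
    # Predict: the prior widens by the process noise.
    mean_pred, var_pred = mean, var + q
    # Update: weigh the measurement z by the Kalman gain.
    k = var_pred / (var_pred + r)
    mean_new = mean_pred + k * (z - mean_pred)
    var_new = (1.0 - k) * var_pred
    return mean_new, var_new

mean, var = 0.0, 1.0                 # diffuse initial belief
for z in [1.2, 0.9, 1.1, 1.0]:       # noisy observations of a level ~1
    mean, var = kalman_step(mean, var, z)
print(round(mean, 2), round(var, 2))  # → 0.97 0.19
```

In the paper's setting the same two steps become message-propagation operations on composite Gaussian nodes, with matrices in place of the scalar variances.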
Decision Analytic Networks in Artificial Intelligence
, 1995
Abstract

Cited by 7 (0 self)
Researchers in artificial intelligence and decision analysis share a concern with the construction of formal models of human knowledge and expertise. Historically, however, their approaches to these problems have diverged. Members of these two communities have recently discovered common ground: a family of graphical models of decision theory known as influence diagrams or as belief networks. These models are equally attractive to theoreticians, decision modelers, and designers of knowledge-based systems. From a theoretical perspective, they combine graph theory, probability theory and decision theory. From an implementation perspective, they lead to powerful automated systems. Although many practicing decision analysts have already adopted influence diagrams as modeling and structuring tools, they may remain unaware of the theoretical work that has emerged from the artificial intelligence community. This paper surveys the first decade or so of this work.