Results 1–6 of 6
Learning Bayesian networks: The combination of knowledge and statistical data
 Machine Learning
, 1995
"... We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simpl ..."
Abstract

Cited by 1048 (36 self)
We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simplify the encoding of a user’s prior knowledge. In particular, a user can express his knowledge—for the most part—as a single prior Bayesian network for the domain.
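The two properties named above make a metric decompose into per-family terms. A minimal sketch of such a decomposable score — a plain log-likelihood, not the paper's Bayesian metric; all names here are illustrative:

```python
import math
from collections import Counter

def family_score(data, child, parents):
    """Log-likelihood contribution of one node given its parent set.

    Parameter modularity: this term depends only on (child, parents),
    so it can be cached and reused across candidate structures.
    """
    joint = Counter()    # counts of (parent configuration, child value)
    margin = Counter()   # counts of parent configuration alone
    for row in data:
        config = tuple(row[p] for p in parents)
        joint[(config, row[child])] += 1
        margin[config] += 1
    return sum(n * math.log(n / margin[config])
               for (config, _), n in joint.items())

def network_score(data, structure):
    """Decomposability: a structure's score is the sum of its family scores.

    `structure` maps each variable to the list of its parents.
    """
    return sum(family_score(data, v, ps) for v, ps in structure.items())

# Toy data: B copies A exactly, so the edge A -> B should score higher.
data = [{"A": 0, "B": 0}, {"A": 0, "B": 0},
        {"A": 1, "B": 1}, {"A": 1, "B": 1}]
with_edge = network_score(data, {"A": [], "B": ["A"]})
no_edge = network_score(data, {"A": [], "B": []})
```

Because of modularity, a search procedure only needs to re-score the families whose parent sets changed, which is what makes structure search over many candidates tractable.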
Decision Analytic Networks in Artificial Intelligence
, 1995
"... Researchers in artificial intelligence and decision analysis share a concern with the construction of formal models of human knowledge and expertise. Historically, however, their approaches to these problems have diverged. Members of these two communities have recently discovered common ground: a fa ..."
Abstract

Cited by 8 (0 self)
Researchers in artificial intelligence and decision analysis share a concern with the construction of formal models of human knowledge and expertise. Historically, however, their approaches to these problems have diverged. Members of these two communities have recently discovered common ground: a family of graphical models of decision theory known as influence diagrams or as belief networks. These models are equally attractive to theoreticians, decision modelers, and designers of knowledge-based systems. From a theoretical perspective, they combine graph theory, probability theory, and decision theory. From an implementation perspective, they lead to powerful automated systems. Although many practicing decision analysts have already adopted influence diagrams as modeling and structuring tools, they may remain unaware of the theoretical work that has emerged from the artificial intelligence community. This paper surveys the first decade or so of this work.
Moninder Singh
 Proceedings of AAAI-97
, 1997
"... Much of the current research in learning Bayesian Networks fails to effectively deal with missing data. Most of the methods assume that the data is complete, or make the data complete using fairly adhoc methods; other methods do deal with missing data but learn only the conditional probabilities, a ..."
Abstract

Cited by 6 (2 self)
Much of the current research in learning Bayesian networks fails to deal effectively with missing data. Most methods assume that the data is complete, or make the data complete using fairly ad hoc methods; other methods do handle missing data but learn only the conditional probabilities, assuming that the structure is known. We present a principled approach to learning both the Bayesian network structure and the conditional probabilities from incomplete data. The proposed algorithm is an iterative method that uses a combination of Expectation-Maximization (EM) and imputation techniques. Results are presented on synthetic data sets which show that the performance of the new algorithm is much better than ad hoc methods for handling missing data. Introduction Many real-life domains are replete with missing values, e.g., unobserved symptoms in a medical domain, non-response in a survey, or missing readings from non-functioning sensors. For learning techniques to develop accurate...
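The EM-plus-imputation idea can be illustrated on the simplest possible case — estimating the mean of a single binary variable with missing entries. This is only a toy sketch of the E-step/M-step alternation, not the paper's structure-learning algorithm:

```python
def em_estimate(observations, iters=50, p0=0.5):
    """EM for the mean of a binary variable observed with missing values.

    `observations` is a list of 0/1 values, with None marking a missing
    entry.  Each iteration fills the missing entries with their expected
    value under the current estimate, then re-estimates from the
    completed data.
    """
    p = p0
    for _ in range(iters):
        completed = [x if x is not None else p for x in observations]  # E-step
        p = sum(completed) / len(completed)                            # M-step
    return p
```

For example, `em_estimate([1, 1, 0, None, None])` converges to 2/3, the complete-case mean; the point of the sketch is only the alternation between imputing expected values and re-estimating, which the proposed algorithm interleaves with structure search.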
Combining Multiple Perspectives
 In Proceedings of the Seventeenth International Conference on Machine Learning
, 2000
"... We consider a group of Bayesian learners whose interactions with the environment and other agents allow them to improve their model of the dependency among various factors that have influence on their interactions with the environment. Effective collaboration can improve the performance of isolated ..."
Abstract

Cited by 4 (3 self)
We consider a group of Bayesian learners whose interactions with the environment and with other agents allow them to improve their model of the dependency among the various factors that influence their interactions with the environment. Effective collaboration can improve the performance of isolated individual learners. We present a mechanism to pool together the knowledge of many modelers in the domain, each of whom may have only partial access to the environment. The application domain used in this study is a multi-agent negotiation problem. We present results comparing the performance of such knowledge composition against isolated learners, as well as against a learner who has complete access to the environment.
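One elementary way to pool probability estimates from several learners is a linear opinion pool weighted by how much data each learner has seen. This is an illustrative assumption, not the composition mechanism the paper proposes:

```python
def pool_estimates(estimates):
    """Linear opinion pool over (probability, sample_count) pairs.

    Each learner reports its estimate of the same probability together
    with the number of observations it is based on; learners with more
    access to the environment get proportionally more weight.
    """
    total = sum(n for _, n in estimates)
    return sum(p * n for p, n in estimates) / total
```

For example, pooling a well-informed learner reporting 0.9 from 90 samples with a poorly informed one reporting 0.1 from 10 samples yields 0.82, close to the better-informed estimate.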
Learning Bayesian Networks for Solving Real-World Problems
, 1998
"... Bayesian networks, which provide a compact graphical way to express complex probabilistic relationships among several random variables, are rapidly becoming the tool of choice for dealing with uncertainty in knowledge based systems. However, approaches based on Bayesian networks have often been dism ..."
Abstract

Cited by 4 (0 self)
Bayesian networks, which provide a compact graphical way to express complex probabilistic relationships among several random variables, are rapidly becoming the tool of choice for dealing with uncertainty in knowledge-based systems. However, approaches based on Bayesian networks have often been dismissed as unfit for many real-world applications, since probabilistic inference is intractable for most problems of realistic size, and algorithms for learning Bayesian networks impose the unrealistic requirement that datasets be complete. In this thesis, I present practical solutions to these two problems, and demonstrate their effectiveness on several real-world problems. The solution proposed to the first problem is to learn selective Bayesian networks, i.e., ones that use only a subset of the given attributes to model a domain. The aim is to learn networks that are smaller, and hence...
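Selecting an attribute subset before structure learning can be sketched with a generic mutual-information filter against a target variable. This is an illustrative stand-in, not the thesis's own selection criterion:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    cx, cy, cxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((cx[x] / n) * (cy[y] / n)))
               for (x, y), c in cxy.items())

def select_attributes(data, attrs, target, threshold=0.01):
    """Keep only attributes informative about the target, so the network
    is learned over a smaller (selective) attribute subset."""
    ys = [row[target] for row in data]
    return [a for a in attrs
            if mutual_information([row[a] for row in data], ys) > threshold]
```

For example, on data where attribute A tracks the class C while B is constant, only A survives the filter, and the subsequent network over {A, C} is smaller than one over all attributes.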
Finding Consensus Bayesian Network Structures
"... Suppose that multiple experts (or learning algorithms) provide us with alternative Bayesian network (BN) structures over a domain, and that we are interested in combining them into a single consensus BN structure. Specifically, we are interested in that the consensus BN structure only represents ind ..."
Abstract

Cited by 2 (0 self)
Suppose that multiple experts (or learning algorithms) provide us with alternative Bayesian network (BN) structures over a domain, and that we are interested in combining them into a single consensus BN structure. Specifically, we require that the consensus BN structure represent only those independences that all the given BN structures agree upon, and that it have as few associated parameters as possible. In this paper, we prove that there may exist several non-equivalent consensus BN structures and that finding one of them is NP-hard. Thus, we resort to heuristics to find an approximate consensus BN structure. In particular, we consider the heuristic proposed by Matzkevich and Abramson, which builds upon two algorithms, called Methods A and B, for efficiently deriving the minimal directed independence map of a BN structure relative to a given node ordering. Methods A and B are claimed to be correct, but no full proof is provided (a proof is only sketched). In this paper, we show that Methods A and B are not correct and propose a correction of them.
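The "only independences all structures agree upon" requirement can be illustrated at the skeleton level: taking the union of adjacencies is a conservative sketch that keeps any dependence some input asserts. It deliberately ignores edge directions and node orderings, which are exactly what Methods A and B have to handle:

```python
def consensus_skeleton(structures):
    """Union of adjacencies across several BN structures.

    Each structure maps a variable to the list of its parents.  If any
    input connects two variables, the consensus keeps them connected,
    so the consensus encodes only independences (missing adjacencies)
    that every input structure also encodes.
    """
    edges = set()
    for struct in structures:
        for child, parents in struct.items():
            for parent in parents:
                edges.add(frozenset((parent, child)))
    return edges
```

For instance, combining one structure with edge A → B and another with edge B → C yields the skeleton {A–B, B–C}; orienting such a skeleton into a minimal directed independence map is the harder step the paper analyzes.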