Results 1–10 of 57
Learning Bayesian networks: The combination of knowledge and statistical data
Machine Learning, 1995
"... We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simpl ..."
Abstract

Cited by 913 (38 self)
 Add to MetaCart
We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simplify the encoding of a user's prior knowledge. In particular, a user can express his knowledge—for the most part—as a single prior Bayesian network for the domain.
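
A practical payoff of parameter modularity is that the metric decomposes into independent per-family terms, one per node and parent set. Below is a minimal sketch of such a decomposable Dirichlet score; it is not the authors' code, and it assumes binary variables with a simple uniform prior (a K2-style metric rather than a fully likelihood-equivalent BDe score):

from math import lgamma
from itertools import product

def family_score(data, child, parents, alpha=1.0):
    """Log marginal likelihood of one (child, parents) family under a
    symmetric Dirichlet prior -- the per-node term that parameter
    modularity makes independent of the rest of the network. `data` is
    a list of dicts mapping variable name -> value; all variables are
    assumed binary (0/1)."""
    counts = {}
    for row in data:
        pa = tuple(row[p] for p in parents)
        counts.setdefault(pa, [0, 0])[row[child]] += 1
    score = 0.0
    for pa in product([0, 1], repeat=len(parents)):
        n0, n1 = counts.get(pa, [0, 0])
        # Dirichlet-multinomial marginal likelihood for this parent config
        score += (lgamma(2 * alpha) - lgamma(2 * alpha + n0 + n1)
                  + lgamma(alpha + n0) - lgamma(alpha)
                  + lgamma(alpha + n1) - lgamma(alpha))
    return score

def network_score(data, structure):
    """`structure` maps each child to a tuple of parent names. Because
    the metric is modular, the total log score is a sum of family
    scores, so a local search need only re-score the families an edit
    touches."""
    return sum(family_score(data, c, ps) for c, ps in structure.items())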
A Guide to the Literature on Learning Probabilistic Networks From Data
1996
"... This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the ..."
Abstract

Cited by 172 (0 self)
 Add to MetaCart
This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the different methodological communities, such as Bayesian, description length, and classical statistics. Basic concepts for learning and Bayesian networks are introduced and methods are then reviewed. Methods are discussed for learning parameters of a probabilistic network, for learning the structure, and for learning hidden variables. The presentation avoids formal definitions and theorems, as these are plentiful in the literature, and instead illustrates key concepts with simplified examples. Keywords: Bayesian networks, graphical models, hidden variables, learning, learning structure, probabilistic networks, knowledge discovery.
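
Of the learning tasks the review separates out, parameter learning from complete data is the easiest to make concrete. In the spirit of the review's simplified examples, here is a sketch of Bayesian estimation of one conditional probability table (my illustration; the symmetric Dirichlet prior of strength alpha is an assumption):

from collections import Counter

def estimate_cpt(data, child, parents, child_values, alpha=1.0):
    """Posterior-mean estimate of P(child | parents) from complete data
    under a symmetric Dirichlet(alpha) prior. Returns a dict mapping
    each observed parent configuration to a distribution over child
    values; alpha > 0 keeps unseen counts away from zero."""
    counts = Counter((tuple(row[p] for p in parents), row[child])
                     for row in data)
    configs = {pa for (pa, _) in counts}
    cpt = {}
    for pa in configs:
        total = sum(counts[(pa, v)] for v in child_values)
        cpt[pa] = {v: (counts[(pa, v)] + alpha)
                      / (total + alpha * len(child_values))
                   for v in child_values}
    return cpt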
Learning Bayesian Networks from Data: An Information-Theory Based Approach
"... This paper provides algorithms that use an informationtheoretic analysis to learn Bayesian network structures from data. Based on our threephase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only polynomial numbers of conditional indepe ..."
Abstract

Cited by 93 (5 self)
 Add to MetaCart
This paper provides algorithms that use an information-theoretic analysis to learn Bayesian network structures from data. Based on our three-phase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only polynomial numbers of conditional independence (CI) tests in typical cases. We provide precise conditions that specify when these algorithms are guaranteed to be correct as well as empirical evidence (from real-world applications and simulation tests) that demonstrates that these systems work efficiently and reliably in practice.
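
The abstract leaves the CI test itself unspecified; a common instantiation in this line of work, and the one assumed in the sketch below, is an empirical conditional mutual information statistic compared against a small threshold:

from collections import Counter
from math import log

def conditional_mutual_information(data, x, y, z=()):
    """Empirical I(X;Y|Z) in nats from a list of dict records; `z` is a
    tuple of conditioning-variable names (empty for a marginal test)."""
    n = len(data)
    cxyz = Counter((r[x], r[y], tuple(r[v] for v in z)) for r in data)
    cxz = Counter((r[x], tuple(r[v] for v in z)) for r in data)
    cyz = Counter((r[y], tuple(r[v] for v in z)) for r in data)
    cz = Counter(tuple(r[v] for v in z) for r in data)
    return sum((nxyz / n) * log(nxyz * cz[zv] / (cxz[(xv, zv)] * cyz[(yv, zv)]))
               for (xv, yv, zv), nxyz in cxyz.items())

def independent(data, x, y, z=(), eps=0.01):
    """Declare X and Y conditionally independent given Z when the
    empirical CMI falls below a small threshold (eps is an assumption,
    not a value from the paper)."""
    return conditional_mutual_information(data, x, y, z) < eps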
A Bayesian Approach to Causal Discovery
1997
"... We examine the Bayesian approach to the discovery of directed acyclic causal models and compare it to the constraintbased approach. Both approaches rely on the Causal Markov assumption, but the two differ significantly in theory and practice. An important difference between the approaches is that t ..."
Abstract

Cited by 79 (1 self)
 Add to MetaCart
We examine the Bayesian approach to the discovery of directed acyclic causal models and compare it to the constraint-based approach. Both approaches rely on the Causal Markov assumption, but the two differ significantly in theory and practice. An important difference between the approaches is that the constraint-based approach uses categorical information about conditional-independence constraints in the domain, whereas the Bayesian approach weighs the degree to which such constraints hold. As a result, the Bayesian approach has three distinct advantages over its constraint-based counterpart. One, conclusions derived from the Bayesian approach are not susceptible to incorrect categorical decisions about independence facts that can occur with data sets of finite size. Two, using the Bayesian approach, finer distinctions among model structures, both quantitative and qualitative, can be made. Three, information from several models can be combined to make better inferences and to better ...
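
The contrast is easy to make concrete for two binary variables: where a constraint-based method thresholds a test statistic into a categorical yes/no independence fact, a Bayesian method can report how strongly the data favor independence. A hedged sketch (my illustration, assuming uniform Dirichlet priors and equal prior odds on the two models):

from math import lgamma, exp

def log_dirmult(counts, alpha=1.0):
    """Log Dirichlet-multinomial marginal likelihood of a count vector
    under a symmetric Dirichlet(alpha) prior."""
    n, k = sum(counts), len(counts)
    return (lgamma(k * alpha) - lgamma(k * alpha + n)
            + sum(lgamma(alpha + c) - lgamma(alpha) for c in counts))

def prob_independent(n00, n01, n10, n11):
    """Posterior probability that binary X and Y are independent, from
    the 2x2 contingency counts n[xy], assuming equal prior odds."""
    # Independent model: separate marginal distributions for X and Y
    log_ml_indep = (log_dirmult([n00 + n01, n10 + n11])     # X marginal
                    + log_dirmult([n00 + n10, n01 + n11]))  # Y marginal
    # Dependent model: one joint distribution over the four cells
    log_ml_dep = log_dirmult([n00, n01, n10, n11])
    bf = exp(log_ml_indep - log_ml_dep)
    return bf / (1.0 + bf)

Where a constraint-based learner would freeze a p-value into a hard independence fact, this posterior can be carried forward and weighed across candidate structures.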
The max-min hill-climbing Bayesian network structure learning algorithm
Machine Learning, 2006
"... Abstract. We present a new algorithm for Bayesian network structure learning, called MaxMin HillClimbing (MMHC). The algorithm combines ideas from local learning, constraintbased, and searchandscore techniques in a principled and effective way. It first reconstructs the skeleton of a Bayesian n ..."
Abstract

Cited by 76 (7 self)
 Add to MetaCart
We present a new algorithm for Bayesian network structure learning, called Max-Min Hill-Climbing (MMHC). The algorithm combines ideas from local learning, constraint-based, and search-and-score techniques in a principled and effective way. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. In our extensive empirical evaluation, MMHC outperforms, on average and in terms of various metrics, several prototypical and state-of-the-art algorithms, namely PC, Sparse Candidate, Three Phase Dependency Analysis, Optimal Reinsertion, Greedy Equivalence Search, and Greedy Search. These are the first empirical results simultaneously comparing most of the major Bayesian network learning algorithms against each other. MMHC offers certain theoretical advantages, specifically over the Sparse Candidate algorithm, corroborated by our experiments. MMHC and detailed results of our study are publicly available at
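
A compressed sketch of the two-phase shape of MMHC, reusing the independent and family_score helpers sketched above. Note the heavy simplifications: the real MMPC phase grows candidate parent sets with a max-min heuristic rather than brute-force conditioning, and the real search also deletes and reverses edges:

from itertools import combinations

def _descendants(parents, v):
    """Nodes reachable from v by following parent -> child links (used
    to keep the greedy search acyclic)."""
    children = {u: [c for c, ps in parents.items() if u in ps] for u in parents}
    seen, stack = set(), [v]
    while stack:
        for c in children[stack.pop()]:
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def skeleton(data, variables, max_cond=2):
    """Phase 1 (simplified): keep edge X-Y unless some small conditioning
    set separates X and Y. MMHC's actual MMPC phase grows candidate sets
    with a max-min heuristic; brute force stands in for it here."""
    edges = set()
    for x, y in combinations(variables, 2):
        others = [v for v in variables if v not in (x, y)]
        if not any(independent(data, x, y, z)
                   for k in range(max_cond + 1)
                   for z in combinations(others, k)):
            edges.add(frozenset((x, y)))
    return edges

def hill_climb(data, variables, edges):
    """Phase 2 (simplified): greedily add the best-scoring orientations
    of skeleton edges, re-scoring only the touched family each time."""
    parents = {v: () for v in variables}
    improved = True
    while improved:
        improved = False
        for e in edges:
            x, y = tuple(e)
            for child, parent in ((x, y), (y, x)):
                if parent in parents[child] or parent in _descendants(parents, child):
                    continue  # already present, or would close a cycle
                if (family_score(data, child, parents[child] + (parent,))
                        > family_score(data, child, parents[child])):
                    parents[child] += (parent,)
                    improved = True
    return parents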
Learning Bayesian Networks by Genetic Algorithms. A case study in the prediction of survival in malignant skin melanoma
1997
"... In this work we introduce a methodology based on Genetic Algorithms for the automatic induction of Bayesian Networks from a file containing cases and variables related to the problem. The methodology is applied to the problem of predicting survival of people after one, three and five years of being ..."
Abstract

Cited by 71 (11 self)
 Add to MetaCart
In this work we introduce a methodology based on Genetic Algorithms for the automatic induction of Bayesian Networks from a file containing cases and variables related to the problem. The methodology is applied to the problem of predicting survival of people after one, three and five years of being diagnosed as having malignant skin melanoma. The accuracy of the obtained model, measured in terms of the percentage of well-classified subjects, is compared to that obtained by the so-called Naive Bayes classifier. In both cases, the estimation of the model accuracy is obtained by the 10-fold cross-validation method.
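
A minimal sketch of the genetic-algorithm loop such a methodology rests on. The encoding here is an assumption for illustration: individuals are bitstrings over the upper triangle of an adjacency matrix under a fixed variable ordering (acyclic by construction), and fitness is left abstract (in the paper's setting it would be 10-fold cross-validated accuracy):

import random

def ga_structure_search(n_vars, fitness, pop_size=50, generations=100,
                        p_mut=0.01, seed=0):
    """Genetic search over Bayesian network structures. An individual is
    a bitstring encoding the upper triangle of an adjacency matrix under
    a fixed variable ordering, so every individual is acyclic by
    construction. `fitness` maps a bitstring to a number to maximize."""
    rng = random.Random(seed)
    n_bits = n_vars * (n_vars - 1) // 2
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)                 # one-point crossover
            child = [bit ^ (rng.random() < p_mut)          # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)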
Learning Belief Networks from Data: An Information Theory Based Approach
 In Proceedings of the Sixth ACM International Conference on Information and Knowledge Management
"... This paper presents an efficient algorithm for learning Bayesian belief networks from databases. The algorithm takes a database as input and constructs the belief network structure as output. The construction process is based on the computation of mutual information of attribute pairs. Given a data ..."
Abstract

Cited by 65 (7 self)
 Add to MetaCart
This paper presents an efficient algorithm for learning Bayesian belief networks from databases. The algorithm takes a database as input and constructs the belief network structure as output. The construction process is based on the computation of mutual information of attribute pairs. Given a data set that is large enough, this algorithm can generate a belief network very close to the underlying model, and at the same time, enjoys the time complexity of O(N^4) on conditional independence (CI) tests. When the data set has a normal DAG-Faithful (see Section 3.2) probability distribution, the algorithm guarantees that the structure of a perfect map [Pearl, 1988] of the underlying dependency model is generated. To evaluate this algorithm, we present the experimental results on three versions of the well-known ALARM network database, which has 37 attributes and 10,000 records. The results show that this algorithm is accurate and efficient. The proof of correctness and the analysis of computational complexity are also presented.
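
The first step of such a construction, drafting, can be made concrete: compute the mutual information of every attribute pair and keep the strongest pairs as a starting graph before CI tests thicken and thin it. A sketch of that drafting step only (the threshold and tie-breaking are assumptions, not the paper's):

from collections import Counter
from math import log
from itertools import combinations

def mutual_information(data, x, y):
    """Empirical I(X;Y) in nats from a list of dict records."""
    n = len(data)
    cx = Counter(r[x] for r in data)
    cy = Counter(r[y] for r in data)
    cxy = Counter((r[x], r[y]) for r in data)
    return sum((nxy / n) * log(n * nxy / (cx[xv] * cy[yv]))
               for (xv, yv), nxy in cxy.items())

def draft_edges(data, variables, eps=0.01):
    """Drafting: rank attribute pairs by mutual information and keep
    those above a small threshold, strongest first."""
    pairs = [(mutual_information(data, x, y), x, y)
             for x, y in combinations(variables, 2)]
    return [(x, y) for mi, x, y in sorted(pairs, reverse=True) if mi > eps]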
Learning Bayesian Network Structures by Searching For the Best Ordering With Genetic Algorithms
IEEE Transactions on Systems, Man and Cybernetics, 1996
"... In this paper we present a ne_(l n [!ii ' with respect to Bayesian networks con ogy for inducing Bayesian network structures frop3 titute the roblem of the evidence propagation and a database of cases. The methodology is based oap&lll searching for the best ordering of the system vari the problem ..."
Abstract

Cited by 54 (9 self)
 Add to MetaCart
In this paper we present a new methodology for inducing Bayesian network structures from a database of cases. The methodology is based on searching for the best ordering of the system variables by means of genetic algorithms. Since the problem of finding an optimal ordering resembles the traveling salesman problem, we use genetic operators that were developed for the latter problem. The quality of a variable ordering is evaluated with the algorithm K2. We present empirical results that were obtained with a simulation of the ALARM network.
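
The fitness function here is Cooper and Herskovits's K2 metric, which, given an ordering, greedily selects each variable's parents from its predecessors. A compressed sketch of that evaluation, reusing the family_score helper sketched above (with alpha = 1 it matches the K2 metric):

def k2_score(data, ordering, max_parents=3):
    """Score a variable ordering by greedily choosing, for each variable,
    the predecessor set that most improves a decomposable metric, as K2
    does. Returns (total_score, parents) so a GA can use the score as
    the fitness of the ordering."""
    total, parents = 0.0, {}
    for i, child in enumerate(ordering):
        chosen, best, improved = (), family_score(data, child, ()), True
        while improved and len(chosen) < max_parents:
            improved = False
            scored = [(family_score(data, child, chosen + (v,)), v)
                      for v in ordering[:i] if v not in chosen]
            if scored:
                s, v = max(scored)
                if s > best:
                    best, chosen, improved = s, chosen + (v,), True
        total += best
        parents[child] = chosen
    return total, parents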
Efficient Learning of Selective Bayesian Network Classifiers
1995
"... In this paper, we present a computationally efficient method for inducing selective Bayesian network classifiers. Our approach is to use informationtheoretic metrics to efficiently select a subset of attributes from which to learn the classifier. We explore three conditional, informationtheoretic ..."
Abstract

Cited by 50 (4 self)
 Add to MetaCart
In this paper, we present a computationally efficient method for inducing selective Bayesian network classifiers. Our approach is to use information-theoretic metrics to efficiently select a subset of attributes from which to learn the classifier. We explore three conditional, information-theoretic metrics that are extensions of metrics used extensively in decision tree learning, namely Quinlan's gain and gain ratio metrics and Mantaras's distance metric. We experimentally show that the algorithms based on the gain ratio and distance metrics learn selective Bayesian networks that have predictive accuracies as good as or better than those learned by existing selective Bayesian network induction approaches (K2-AS), but at a significantly lower computational cost. We prove that the subset-selection phase of these information-based algorithms has polynomial complexity, as compared to the worst-case exponential time complexity of the corresponding phase in K2-AS. We also compare the performance o...
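
Quinlan's gain ratio, one of the three metrics explored here, is short enough to state in full. A minimal sketch of the subset-selection phase only (the fixed-k cutoff is an assumption, not the paper's stopping rule; learning the classifier afterward is omitted):

from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy (bits) of a list of discrete values."""
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

def gain_ratio(data, attr, target):
    """Quinlan's gain ratio: the information gain of `attr` about
    `target`, normalized by the split information of `attr`."""
    n = len(data)
    groups = {}
    for r in data:
        groups.setdefault(r[attr], []).append(r[target])
    gain = (entropy([r[target] for r in data])
            - sum(len(g) / n * entropy(g) for g in groups.values()))
    split_info = entropy([r[attr] for r in data])
    return gain / split_info if split_info > 0 else 0.0

def select_attributes(data, attrs, target, k):
    """Subset selection: keep the k highest-ranked attributes."""
    return sorted(attrs, key=lambda a: gain_ratio(data, a, target),
                  reverse=True)[:k]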
An Algorithm for Bayesian Belief Network Construction from Data
In Proceedings of AI & STAT’97, 1997
"... This paper presents an efficient algorithm for constructing Bayesian belief networks from databases. The algorithm takes a database and an attributes ordering (i.e., the causal attributes of an attribute should appear earlier in the order) as input and constructs a belief network structure as output ..."
Abstract

Cited by 43 (6 self)
 Add to MetaCart
This paper presents an efficient algorithm for constructing Bayesian belief networks from databases. The algorithm takes a database and an attribute ordering (i.e., the causal attributes of an attribute should appear earlier in the order) as input and constructs a belief network structure as output. The construction process is based on the computation of mutual information of attribute pairs. Given a data set which is large enough and has a DAG-Isomorphic probability distribution, this algorithm guarantees that the perfect map [1] of the underlying dependency model is generated, and at the same time, enjoys the time complexity of O(N^2) on conditional independence (CI) tests. To evaluate this algorithm, we present the experimental results on three versions of the well-known ALARM network database, which has 37 attributes and 10,000 records. The correctness proof and the analysis of computational complexity are also presented. We also discuss the features of our work and relate it to previous work.
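
The given ordering is what buys the lower CI-test cost: every accepted edge can be oriented immediately from the earlier variable to the later one, and conditioning sets need only be drawn from a node's predecessors. A small illustration of the orientation rule (mine, not the paper's code):

def orient(edges, ordering):
    """Orient each undirected edge (x, y) from the variable that appears
    earlier in the given causal ordering to the one that appears later;
    with a correct ordering the result is acyclic by construction."""
    rank = {v: i for i, v in enumerate(ordering)}
    return [(x, y) if rank[x] < rank[y] else (y, x) for x, y in edges]

# e.g., orient([("rain", "wet"), ("wet", "sprinkler")],
#              ["sprinkler", "rain", "wet"])
# returns [("rain", "wet"), ("sprinkler", "wet")]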