Results 1 - 5 of 5
A Tutorial on Learning Bayesian Networks
Communications of the ACM, 1995
Abstract

Cited by 299 (13 self)
We examine a graphical representation of uncertain knowledge called a Bayesian network. The representation is easy to construct and interpret, yet has formal probabilistic semantics making it suitable for statistical manipulation. We show how we can use the representation to learn new knowledge by combining domain knowledge with statistical data.

1 Introduction

Many techniques for learning rely heavily on data. In contrast, the knowledge encoded in expert systems usually comes solely from an expert. In this paper, we examine a knowledge representation, called a Bayesian network, that lets us have the best of both worlds. Namely, the representation allows us to learn new knowledge by combining expert domain knowledge and statistical data. A Bayesian network is a graphical representation of uncertain knowledge that most people find easy to construct and interpret. In addition, the representation has formal probabilistic semantics, making it suitable for statistical manipulation (Howard,...
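The blending of expert knowledge with statistical data that this abstract describes can be illustrated in its simplest form: a single binary variable whose parameter carries a conjugate Beta prior, with the prior hyperparameters playing the role of the expert's "imagined" observations. This is a hypothetical minimal sketch, not code from the paper:

```python
def posterior_mean(alpha_h, alpha_t, heads, tails):
    """Posterior mean of a binary parameter under a Beta(alpha_h, alpha_t)
    prior after observing `heads` and `tails` counts. The hyperparameters
    act as expert pseudo-counts, combined directly with the data."""
    return (alpha_h + heads) / (alpha_h + alpha_t + heads + tails)

# Expert prior worth 4 imagined observations (2 heads, 2 tails),
# followed by 10 real observations (8 heads, 2 tails):
print(posterior_mean(2, 2, 8, 2))  # 10/14, between the prior (0.5) and the data (0.8)
```

In a full Bayesian network the same conjugate update is applied per node and per parent configuration, with Dirichlet priors generalizing the Beta.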
Learning Bayesian Networks is NP-Complete
1996
Abstract

Cited by 159 (8 self)
Algorithms for learning Bayesian networks from data have two components: a scoring metric and a search procedure. The scoring metric computes a score reflecting the goodness-of-fit of the structure to the data. The search procedure tries to identify network structures with high scores. Heckerman et al. (1995) introduce a Bayesian metric, called the BDe metric, that computes the relative posterior probability of a network structure given data. In this paper, we show that the search problem of identifying a Bayesian network (among those where each node has at most K parents) that has a relative posterior probability greater than a given constant is NP-complete, when the BDe metric is used.
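The scoring-metric half of this setup has a closed form: under a Dirichlet prior, the log marginal likelihood of one node given its parents is a ratio of gamma functions. The sketch below is a generic BD-style family score with a uniform Dirichlet prior; it is illustrative only and does not impose the additional prior constraints that define the BDe metric of Heckerman et al.:

```python
from math import lgamma

def family_score(counts, alpha=1.0):
    """Log marginal likelihood of one node given its parents under a
    uniform Dirichlet(alpha) prior. counts[j][k] is how often the node
    took state k while its parents were in configuration j. Scores of
    this form are decomposable: a network's score is the sum over its
    node-plus-parents families."""
    score = 0.0
    for row in counts:
        a_j = alpha * len(row)  # total pseudo-count for this parent configuration
        n_j = sum(row)
        score += lgamma(a_j) - lgamma(a_j + n_j)
        for n_jk in row:
            score += lgamma(alpha + n_jk) - lgamma(alpha)
    return score
```

Evaluating such a score is cheap; the NP-completeness result above concerns the search for a high-scoring structure, not the score computation itself.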
Learning Equivalence Classes Of Bayesian Network Structures
1996
Abstract

Cited by 132 (1 self)
Approaches to learning Bayesian networks from data typically combine a scoring metric with a heuristic search procedure. Given a Bayesian network structure, many of the scoring metrics derived in the literature return a score for the entire equivalence class to which the structure belongs. When using such a metric, it is appropriate for the heuristic search algorithm to search over equivalence classes of Bayesian networks as opposed to individual structures. We present the general formulation of a search space for which the states of the search correspond to equivalence classes of structures. Using this space, any one of a number of heuristic search algorithms can easily be applied. We compare greedy search performance in the proposed search space to greedy search performance in a search space for which the states correspond to individual Bayesian network structures.
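Two DAGs lie in the same equivalence class exactly when they share a skeleton and the same v-structures (the Verma-Pearl characterization). A minimal membership check, with hypothetical DAGs represented as {node: set of parents}, can be sketched as:

```python
def skeleton(dag):
    """Undirected adjacencies of a DAG given as {node: set(parents)}."""
    return {frozenset((p, child)) for child, parents in dag.items() for p in parents}

def v_structures(dag):
    """Colliders a -> c <- b where a and b are themselves non-adjacent."""
    adj = skeleton(dag)
    vs = set()
    for child, parents in dag.items():
        ps = sorted(parents)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                if frozenset((ps[i], ps[j])) not in adj:
                    vs.add((ps[i], child, ps[j]))
    return vs

def markov_equivalent(d1, d2):
    """Same skeleton and same v-structures: same equivalence class."""
    return skeleton(d1) == skeleton(d2) and v_structures(d1) == v_structures(d2)
```

A search over equivalence classes then moves between classes directly, rather than between the many individual DAGs that represent the same class.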
Fast learning from sparse data
In Proc. UAI-99, 1999
Abstract

Cited by 3 (1 self)
We describe two techniques that significantly improve the running time of several standard machine-learning algorithms when data is sparse. The first technique is an algorithm that efficiently extracts one-way and two-way counts (either real or expected) from discrete data. Extracting such counts is a fundamental step in learning algorithms for constructing a variety of models including decision trees, decision graphs, Bayesian networks, and naive-Bayes clustering models. The second technique is an algorithm that efficiently performs the E-step of the EM algorithm (i.e., inference) when applied to a naive-Bayes clustering model. Using real-world data sets, we demonstrate a dramatic decrease in running time for algorithms that incorporate these techniques.
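The first technique can be caricatured in a few lines, under the assumption that rows are stored sparsely (only variables whose value differs from a default of 0 appear). Work is then proportional to the stored entries rather than to rows times variables, and default-value counts fall out by subtraction. This hypothetical sketch covers one-way counts fully and only the non-default/non-default two-way counts; two-way counts involving the default value would be derived by the same subtraction trick:

```python
from collections import Counter
from itertools import combinations

def sparse_counts(rows, variables):
    """One-way counts for every variable, plus two-way counts for pairs of
    non-default values, touching only the entries actually stored."""
    n = len(rows)
    one_way = {v: Counter() for v in variables}
    two_way = Counter()
    for row in rows:                      # row: {variable: non-default value}
        for v, val in row.items():
            one_way[v][val] += 1
        for (u, a), (v, b) in combinations(sorted(row.items()), 2):
            two_way[(u, a, v, b)] += 1
    for v in variables:                   # default counts by subtraction,
        one_way[v][0] = n - sum(one_way[v].values())  # never by scanning zeros
    return one_way, two_way
```

For genuinely sparse data most rows contribute only a handful of increments, which is where the reported speedups come from.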