Results 1 - 4 of 4
A simple approach to Bayesian network computations
, 1994
Abstract

Cited by 82 (8 self)
The general problem of computing posterior probabilities in Bayesian networks is NP-hard (Cooper 1990). However, efficient algorithms are often possible for particular applications by exploiting problem structure. It is well understood that the key to such efficiency is to make use of conditional independence and work with factorizations of joint probabilities rather than the joint probabilities themselves. Different exact approaches can be characterized in terms of their choices of factorization. We propose a new approach which adopts a straightforward way of factorizing joint probabilities. In comparison with the clique tree propagation approach, our approach is very simple. It allows the pruning of irrelevant variables, accommodates changes to the knowledge base more easily, and is easier to implement. More importantly, it can be adapted to utilize both intercausal independence and conditional independence in one uniform framework. On the other hand, clique tree propagation is better in terms of facilitating precomputation.
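The factorization idea described in the abstract can be illustrated with a minimal sketch of inference over factored distributions: multiply factors together and sum out variables, rather than building the full joint table. This is an illustration of the general technique only, not the authors' specific algorithm; the two-variable network and its probabilities are hypothetical.

```python
from itertools import product

def multiply(f_vars, f, g_vars, g):
    """Pointwise product of two factors over the union of their variables.

    A factor is a dict mapping assignment tuples (ordered by its variable
    list) to probabilities; all variables here are binary (0/1)."""
    all_vars = list(dict.fromkeys(f_vars + g_vars))
    result = {}
    for assignment in product([0, 1], repeat=len(all_vars)):
        env = dict(zip(all_vars, assignment))
        fv = f[tuple(env[v] for v in f_vars)]
        gv = g[tuple(env[v] for v in g_vars)]
        result[assignment] = fv * gv
    return all_vars, result

def sum_out(var, f_vars, f):
    """Eliminate `var` from factor f by summing over its values."""
    i = f_vars.index(var)
    new_vars = f_vars[:i] + f_vars[i + 1:]
    result = {}
    for assignment, p in f.items():
        key = assignment[:i] + assignment[i + 1:]
        result[key] = result.get(key, 0.0) + p
    return new_vars, result

# Hypothetical network A -> B: P(A=1) = 0.3;
# P(B=1 | A=1) = 0.9, P(B=1 | A=0) = 0.2.
pA = {(0,): 0.7, (1,): 0.3}
pBgA = {(0, 0): 0.8, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.9}

# Compute P(B) by working with the factorization P(A)P(B|A):
joint_vars, joint = multiply(["A"], pA, ["A", "B"], pBgA)
pB_vars, pB = sum_out("A", joint_vars, joint)
# P(B=1) = 0.7 * 0.2 + 0.3 * 0.9 = 0.41
```

Because irrelevant variables never enter any factor that is multiplied in, they can simply be pruned before elimination, which is one of the simplicities the abstract claims over clique tree propagation.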
Using Causal Information and Local Measures to Learn Bayesian Networks
, 1993
Abstract

Cited by 35 (2 self)
In previous work we developed a method of learning Bayesian network models from raw data. This method relies on the well-known minimal description length (MDL) principle. The MDL principle is particularly well suited to this task as it allows us to trade off, in a principled way, the accuracy of the learned network against its practical usefulness. In this paper we present some new results that have arisen from our work. In particular, we present a new local way of computing the description length. This allows us to make significant improvements in our search algorithm. In addition, we modify our algorithm so that it can take into account partial domain information that might be provided by a domain expert. The local computation of description length also opens the door for local refinement of an existing network. The feasibility of our approach is demonstrated by experiments involving networks of a practical size.
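The local description-length computation the abstract mentions can be sketched as a per-node score: the cost of encoding the node's parameters plus the cost of encoding its data column given its parents. This is an assumption about the general MDL form (parameter cost plus empirical negative log-likelihood), not the paper's exact formula; the data and variable names are hypothetical.

```python
import math
from collections import Counter

def local_dl(rows, child, parents, arity=2):
    """Sketch of a per-node MDL description length for discrete data.

    rows: list of dicts mapping variable names to values in range(arity).
    Returns parameter-encoding cost plus data-encoding cost in bits."""
    n = len(rows)
    # One free parameter per parent configuration and non-last child value,
    # each encoded with (1/2) log2(n) bits (a standard MDL convention).
    num_params = (arity - 1) * arity ** len(parents)
    param_cost = 0.5 * math.log2(n) * num_params
    # Data cost: negative log-likelihood under the empirical conditional
    # distribution of `child` given its parents.
    joint = Counter((tuple(r[p] for p in parents), r[child]) for r in rows)
    marg = Counter(tuple(r[p] for p in parents) for r in rows)
    data_cost = -sum(c * math.log2(c / marg[pa]) for (pa, _), c in joint.items())
    return param_cost + data_cost

# Hypothetical binary data where A and B agree 80% of the time:
rows = ([{"A": 0, "B": 0}] * 40 + [{"A": 1, "B": 1}] * 40
        + [{"A": 0, "B": 1}] * 10 + [{"A": 1, "B": 0}] * 10)

# Adding A as a parent of B pays a parameter cost but shortens the data
# encoding, so the total description length drops:
dl_with_parent = local_dl(rows, "B", ["A"])
dl_without = local_dl(rows, "B", [])
```

Because this score depends only on a node and its parents, changing one family during search requires recomputing only that node's term, which is what makes the local formulation useful for search and for refining an existing network.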
On the complexity of fundamental computational problems in pedigree analysis
 Computer Science Department, University of California, Davis
, 1999
Abstract

Cited by 9 (0 self)
On the complexity of fundamental computational problems in pedigree analysis
A simple approach to Bayesian network computations
Abstract

Cited by 1 (0 self)
The general problem of computing posterior probabilities in Bayesian networks is NP-hard (Cooper 1990). However, efficient algorithms are often possible for particular applications by exploiting problem structure. It is well understood that the key to such efficiency is to make use of conditional independence and work with factorizations of joint probabilities rather than the joint probabilities themselves. Different exact approaches can be characterized in terms of their choices of factorization. We propose a new approach which adopts a straightforward way of factorizing joint probabilities. In comparison with the clique tree propagation approach, our approach is very simple. It allows the pruning of irrelevant variables, accommodates changes to the knowledge base more easily, and is easier to implement. More importantly, it can be adapted to utilize both intercausal independence and conditional independence in one uniform framework. On the other hand, clique tree propagation is better in terms of facilitating precomputation.