Results 11 - 18 of 18
Learning the Tree Augmented Naive Bayes Classifier from incomplete datasets
Abstract

Cited by 1 (0 self)
The Bayesian network formalism is becoming increasingly popular in many areas such as decision aid or diagnosis, in particular thanks to its inference capabilities, even when data are incomplete. For classification tasks, Naive Bayes and Augmented Naive Bayes classifiers have shown excellent performance. Learning a Naive Bayes classifier from incomplete datasets is not difficult, as only parameter learning has to be performed. But there are not many methods to efficiently learn Tree Augmented Naive Bayes classifiers from incomplete datasets. In this paper, we take up the structural EM algorithm principle introduced by (Friedman, 1997) to propose an algorithm to answer this question.
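As the abstract notes, parameter learning for Naive Bayes is straightforward even with missing entries. A minimal illustration (not the paper's algorithm) is available-case counting, where `None` marks a missing feature value and is simply skipped when tallying conditional frequencies:

```python
from collections import defaultdict

def fit_naive_bayes(rows, labels):
    """Estimate naive Bayes parameters from data with missing values.

    rows: list of feature tuples; None marks a missing entry.
    Missing entries are skipped (available-case analysis), a simple
    strategy that is reasonable when values are missing at random.
    """
    class_counts = defaultdict(int)
    feat_counts = defaultdict(int)   # (class, feature_index, value) -> count
    feat_totals = defaultdict(int)   # (class, feature_index) -> observed count
    for x, y in zip(rows, labels):
        class_counts[y] += 1
        for j, v in enumerate(x):
            if v is None:
                continue
            feat_counts[(y, j, v)] += 1
            feat_totals[(y, j)] += 1
    n = len(labels)
    prior = {c: k / n for c, k in class_counts.items()}
    cond = {key: k / feat_totals[(key[0], key[1])]
            for key, k in feat_counts.items()}
    return prior, cond
```

Each conditional estimate is normalized by the number of cases in which that feature was actually observed for that class, so a missing entry neither inflates nor deflates the counts.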
BAYESIAN NETWORK STRUCTURAL LEARNING AND INCOMPLETE DATA
Abstract
The Bayesian network formalism is becoming increasingly popular in many areas such as decision aid, diagnosis and complex systems control, in particular thanks to its inference capabilities, even when data are incomplete. Besides, estimating the parameters of a fixed-structure Bayesian network is easy. However, very few methods are capable of using incomplete cases as a base to determine the structure of a Bayesian network. In this paper, we take up the structural EM algorithm principle [9, 10] to propose an algorithm which extends the Maximum Weight Spanning Tree algorithm to deal with incomplete data. We also propose to use this extension either to (1) speed up the structural EM algorithm or (2) extend, for classification tasks, the Tree Augmented Naive Bayes classifier to deal with incomplete data.
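The Maximum Weight Spanning Tree step the abstract refers to can be sketched with a generic Kruskal-style routine over pairwise edge weights (e.g. mutual-information estimates between variable pairs). This is a textbook illustration of the tree-construction step only, not the paper's incomplete-data extension:

```python
def max_weight_spanning_tree(n, weights):
    """Kruskal-style maximum-weight spanning tree over n nodes.

    weights: dict mapping (i, j) pairs with i < j to an edge weight,
    e.g. a pairwise mutual information estimate. Returns the tree edges.
    """
    parent = list(range(n))

    def find(i):
        # Union-find root lookup with path compression.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    tree = []
    # Greedily take the heaviest edge that does not create a cycle.
    for (i, j), w in sorted(weights.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

Directing the resulting tree away from an arbitrary root, and attaching the class variable as a parent of every feature, yields the familiar TAN structure.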
Toward the Automatic Assessment of Evolvability for Reusable Class Libraries
Abstract
Many sources agree that managing the evolution of an OO system constitutes a complex and resource-consuming task. This is particularly true for reusable class libraries, as the user interface must be preserved to allow for version compatibility. Thus, the symptomatic detection of potential instabilities during the design phase of such libraries may serve to avoid later problems. This paper presents a fuzzy logic-based approach for evaluating the interface stability of a reusable class library, by using structural metrics as stability indicators.
© 2001 Kluwer Academic Publishers. Robust Learning with Missing Data
Abstract
This paper introduces a new method, called the robust Bayesian estimator (RBE), to learn conditional probability distributions from incomplete data sets. The intuition behind the RBE is that, when no information about the pattern of missing data is available, an incomplete database constrains the set of all possible estimates, and this paper provides a characterization of these constraints. An experimental comparison with two popular methods to estimate conditional probability distributions from incomplete data (Gibbs sampling and the EM algorithm) shows a gain in robustness. An application of the RBE to quantify a naive Bayesian classifier from an incomplete data set illustrates its practical relevance.
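The RBE intuition, that an incomplete database constrains the set of possible estimates rather than pinning down a single one, can be conveyed with a simple interval: if `k` of the observed cases show the value of interest and some cases are unobserved, the true frequency is bracketed by the two extremes in which all missing cases either do or do not take that value. This sketch only conveys the idea; the paper's actual RBE formulas differ in detail:

```python
def probability_interval(k, n_observed, n_missing):
    """Interval estimate for P(X = x) under total ignorance about missing data.

    k: observed count of X = x. The true count lies in [k, k + n_missing],
    so the frequency lies between the two extremes below.
    """
    total = n_observed + n_missing
    low = k / total          # no missing case takes value x
    high = (k + n_missing) / total  # every missing case takes value x
    return low, high
```

The width of the interval, `n_missing / total`, grows with the amount of missing data, which is exactly the kind of robustness information a single point estimate hides.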
On Computing Marginal Probability Intervals in Inference Networks
Abstract
Existing methods of parameter and structure learning of probabilistic inference networks assume that the database is complete. If there are missing values, these values are assumed to be missing at random. This paper incorporates concepts used in the Dempster-Shafer theory of belief functions to learn both the parameters and the structure of inference networks. Instead of filling in the missing values with their estimates, we model these missing values as representing our ignorance, or lack of belief, in the actual state of the corresponding variables. This representation allows us to add new findings in terms of support functions as used in belief functions, thus providing a richer way to enter evidence into an inference network.
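The belief-function treatment of a missing value, putting all mass on the whole frame of discernment instead of guessing a fill-in, can be shown with a tiny basic-mass-assignment helper. This is an illustrative sketch of that single idea, not the paper's learning procedure:

```python
def mass_for_observation(value, frame):
    """Basic mass assignment for one case, Dempster-Shafer style.

    An observed value puts all mass on its singleton set; a missing
    value (None) puts all mass on the whole frame, expressing
    ignorance about the variable's actual state.
    """
    if value is None:
        return {frozenset(frame): 1.0}
    return {frozenset([value]): 1.0}
```

Under this encoding a missing case never commits to any particular state, so downstream counts can distinguish "observed as x" from "could be anything", which is precisely what estimate-based fill-in loses.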
Study of Four Types of Learning Bayesian Networks Cases
, 2014
Abstract
Learning Bayesian networks can be examined as the combination of parameter learning and structure learning. Parameter learning estimates the dependencies in the network; structural learning estimates the links of the network. Depending on whether the structure of the network is known and whether the variables are all observable, there are four types of Bayesian network learning cases. In this paper, we first introduce two cases of learning Bayesian networks from complete data: known structure with observable variables, and unknown structure with observable variables. Next, we study two cases of learning Bayesian networks from incomplete data: known network structure with unobservable variables, and unknown network structure with unobservable variables.
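For the known-structure, incomplete-data case, EM-style parameter estimation alternates an E-step (expected counts under the current parameters) with an M-step (re-estimation from those counts). A minimal single-binary-variable sketch, illustrative only and assuming values are missing completely at random:

```python
def em_binary(observed, n_missing, theta=0.5, iters=50):
    """EM estimate of P(X = 1) for a binary variable with missing cases.

    observed: list of 0/1 values; n_missing cases have X unobserved.
    E-step: each missing case contributes theta to the expected count of 1s.
    M-step: re-estimate theta from the expected counts. Because the data
    here are missing with no dependence on X, EM converges to the observed
    frequency; the loop just illustrates the mechanics.
    """
    n = len(observed) + n_missing
    for _ in range(iters):
        expected_ones = sum(observed) + n_missing * theta  # E-step
        theta = expected_ones / n                          # M-step
    return theta
```

In a full network the same two steps run per conditional distribution, with the E-step computed by probabilistic inference over the fixed structure.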