### A Hybrid Approach for Learning Parameters of Probabilistic Networks from Incomplete Databases

### Robust Learning with Missing Data

© 2001 Kluwer Academic Publishers. Manufactured in The Netherlands.

Abstract. This paper introduces a new method, called the robust Bayesian estimator (RBE), to learn conditional probability distributions from incomplete data sets. The intuition behind the RBE is that, when no information about the pattern of missing data is available, an incomplete database constrains the set of all possible estimates, and this paper provides a characterization of these constraints. An experimental comparison with two popular methods for estimating conditional probability distributions from incomplete data (Gibbs sampling and the EM algorithm) shows a gain in robustness. An application of the RBE to quantify a naive Bayesian classifier from an incomplete data set illustrates its practical relevance.
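
The constraint intuition can be made concrete with a minimal sketch: every possible completion of the missing values yields an estimate inside an interval. This is an illustration of interval bounds, not the paper's exact estimator, and it assumes at least one record is compatible with the conditioning event X = 1:

```python
def probability_interval(rows):
    """Bound P(Y=1 | X=1) over every completion of the missing values.

    rows: iterable of (x, y) pairs with values in {0, 1, None};
    None marks a missing entry.
    """
    a = b = c = d = e = f = 0
    for x, y in rows:
        if x == 1 and y == 1:
            a += 1          # fully observed match
        elif x == 1 and y == 0:
            b += 1          # fully observed non-match
        elif x == 1:
            c += 1          # y missing: row forced into the condition
        elif x is None and y == 1:
            d += 1          # counts only in the optimistic completion
        elif x is None and y == 0:
            e += 1          # counts only in the pessimistic completion
        elif x is None:
            f += 1          # both missing: free in both directions
        # rows with observed x == 0 never enter the conditional
    p_min = a / (a + b + c + e + f)
    p_max = (a + c + d + f) / (a + b + c + d + f)
    return p_min, p_max

# Two complete rows give the point estimate 1/2; one row with a missing y
# widens it to the interval (1/3, 2/3).
print(probability_interval([(1, 1), (1, 0), (1, None)]))
```

Note how the interval collapses to a point when the data are complete, and widens with every missing entry, which is the sense in which an incomplete database constrains rather than determines the estimate.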

### unknown title

, 2000

"... Estimating probability values from an incomplete dataset ..."

### BAYESIAN NETWORK STRUCTURAL LEARNING AND INCOMPLETE DATA

The Bayesian network formalism is becoming increasingly popular in many areas such as decision aid, diagnosis and complex systems control, in particular thanks to its inference capabilities, even when data are incomplete. Moreover, estimating the parameters of a fixed-structure Bayesian network is easy. However, very few methods are capable of using incomplete cases as a basis for determining the structure of a Bayesian network. In this paper, we take up the structural EM algorithm principle [9, 10] to propose an algorithm which extends the Maximum Weight Spanning Tree algorithm to deal with incomplete data. We also propose to use this extension in order to (1) speed up the structural EM algorithm or (2) extend the Tree Augmented Naive classifier to deal with incomplete data in classification tasks.
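
The Maximum Weight Spanning Tree step being extended here is the classical Chow-Liu construction: weight each pair of variables by empirical mutual information, then keep a maximum-weight spanning tree. A minimal sketch, assuming complete data and a Kruskal-style union-find (all function names are illustrative, not from the paper):

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(col_i, col_j):
    """Empirical mutual information (in nats) between two discrete columns."""
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    return sum((nij / n) * math.log((nij / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), nij in pij.items())

def chow_liu_edges(columns):
    """Maximum-weight spanning tree over variables, Kruskal-style."""
    weights = sorted(((mutual_information(columns[i], columns[j]), i, j)
                      for i, j in combinations(range(len(columns)), 2)),
                     reverse=True)
    parent = list(range(len(columns)))
    def find(u):                       # union-find with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    edges = []
    for _, i, j in weights:
        ri, rj = find(i), find(j)
        if ri != rj:                   # keep the edge if it joins two components
            parent[ri] = rj
            edges.append((i, j))
    return edges

# Variables 0 and 1 are perfectly correlated; variable 2 is independent.
cols = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 0, 1]]
print(chow_liu_edges(cols))            # the strong edge (0, 1) is selected first
```

The paper's contribution lies precisely in making the mutual-information weights computable when some cells of `columns` are missing, which this complete-data sketch sidesteps.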

### Toward the Automatic Assessment of Evolvability for Reusable Class Libraries

Many sources agree that managing the evolution of an OO system constitutes a complex and resource-consuming task. This is particularly true for reusable class libraries, as the user interface must be preserved to allow for version compatibility. Thus, the symptomatic detection of potential instabilities during the design phase of such libraries may serve to avoid later problems. This paper presents a fuzzy logic-based approach for evaluating the interface stability of a reusable class library, by using structural metrics as stability indicators.

### Study of Four Types of Learning Bayesian Networks Cases

, 2014

Abstract: Learning Bayesian networks can be examined as the combination of parameter learning and structure learning. Parameter learning is the estimation of the dependencies in the network; structural learning is the estimation of the links of the network. In terms of whether the structure of the network is known and whether the variables are all observable, there are four cases of learning Bayesian networks. In this paper, we first introduce the two cases of learning Bayesian networks from complete data: known structure with fully observable variables, and unknown structure with fully observable variables. Next, we study the two cases of learning Bayesian networks from incomplete data: known network structure with unobservable variables, and unknown network structure with unobservable variables.
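
In the simplest of the four cases, known structure with fully observed data, parameter learning reduces to counting. A minimal sketch for a hypothetical two-variable network X → Y (the function name and data are illustrative, not from the paper):

```python
from collections import Counter, defaultdict

def learn_cpt(rows):
    """Maximum-likelihood CPT P(Y | X) from fully observed (x, y) rows."""
    joint = Counter(rows)                      # counts of each (x, y) pair
    marginal = Counter(x for x, _ in rows)     # counts of each parent value x
    cpt = defaultdict(dict)
    for (x, y), n in joint.items():
        cpt[x][y] = n / marginal[x]            # relative frequency
    return dict(cpt)

rows = [(0, 0), (0, 0), (0, 1), (1, 1)]
print(learn_cpt(rows))  # P(Y=0|X=0)=2/3, P(Y=1|X=0)=1/3, P(Y=1|X=1)=1
```

The other three cases replace this direct counting with search (unknown structure) and/or expected counts from an inference step (unobservable variables), as in EM.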

### On Computing Marginal Probability Intervals in Inference Networks

Abstract – Existing methods of parameter and structure learning of probabilistic inference networks assume that the database is complete. If there are missing values, these values are assumed to be missing at random. This paper incorporates concepts used in the Dempster-Shafer theory of belief functions to learn both the parameters and the structure of inference networks. Instead of filling in the missing values with their estimates, we model these missing values as representing our ignorance of, or lack of belief in, the actual state of the corresponding variables. The representation allows us to add new findings in terms of support functions as used in belief functions, thus providing a richer way to enter evidence into an inference network.
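
The belief-function treatment of a missing value can be illustrated with a minimal sketch: ignorance is mass assigned to the whole frame of discernment, and belief and plausibility then bound the probability of an event. This shows Dempster-Shafer basics only, not the paper's learning algorithm:

```python
def belief(mass, event):
    """Belief: total mass of focal sets wholly contained in the event."""
    return sum(m for focal, m in mass.items() if focal <= frozenset(event))

def plausibility(mass, event):
    """Plausibility: total mass of focal sets that intersect the event."""
    return sum(m for focal, m in mass.items() if focal & frozenset(event))

# A missing binary value contributes its mass to the whole frame {'a', 'b'},
# expressing ignorance rather than a guessed completion.
mass = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
print(belief(mass, {'a'}), plausibility(mass, {'a'}))  # 0.6 1.0
```

The gap between belief (0.6) and plausibility (1.0) is exactly the 0.4 of mass committed to ignorance, which a fill-in-the-missing-value approach would have to allocate arbitrarily.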

### Efficient Bayesian Network Learning Using EM or Pairwise Deletion

In previous work, we have seen how to learn a TAN classifier from an incomplete dataset using the Expectation Maximisation algorithm (François and Leray, 2006). In this paper, we study the differences, for Bayesian network structure learning, between estimating probabilities using the EM algorithm and using Pairwise Deletion. We have implemented these two estimation techniques with greedy search learning methods in several spaces: Trees, Directed Acyclic Graphs, Completed Partially Directed Acyclic Graphs, and Tree Augmented Naive Bayes structures. An experimental study shows the strengths and weaknesses of using the EM algorithm or Pairwise Deletion on classification tasks.
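
Pairwise deletion itself is the cheaper of the two estimators: each pairwise statistic is computed only from the records in which both variables are observed, with no inference step. A minimal sketch (names are illustrative, not the paper's implementation):

```python
from collections import Counter

def pairwise_joint(col_i, col_j):
    """Joint distribution of two variables under pairwise deletion:
    only rows in which BOTH values are observed (not None) contribute."""
    pairs = [(a, b) for a, b in zip(col_i, col_j)
             if a is not None and b is not None]
    n = len(pairs)
    return {ab: c / n for ab, c in Counter(pairs).items()}

x = [0, 0, 1, None, 1]
y = [0, 1, 1, 1, None]
print(pairwise_joint(x, y))  # only the three fully observed pairs contribute
```

EM, by contrast, would complete the two partially observed rows with expected counts from the current model, at the cost of repeated inference passes; that trade-off is what the experimental study above compares.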

### Statistical Methods in Applied Computer Science: Preliminary Lecture Notes for Course 2D5342, Data Mining, Jan-April 2006

, 2006

We overview fundamental inference principles for hypothesis and decision choice, parameter (state) estimation and tracking methods. We explore methods for finding dependencies and graphical models, latent variables and robust decision trees in a joint Bayesian framework. We also consider the most important methods for performing Bayesian inference in various settings: analytic integration using conjugate families of distributions and likelihoods, discretization, Monte Carlo as well as Markov Chain Monte Carlo and particle filters. We overview related probabilistic methods such as robust Bayesian analysis, evidence theory and PAC learning.