Results 1 - 4 of 4
Learning recursive Bayesian multinets for data clustering by means of constructive induction
, 2001
Abstract

Cited by 19 (7 self)
This paper introduces and evaluates a new class of knowledge model, the recursive Bayesian multinet (RBMN), which encodes the joint probability distribution of a given database. RBMNs extend Bayesian networks (BNs) as well as partitional clustering systems. Briefly, an RBMN is a decision tree with component BNs at the leaves. An RBMN is learnt using a greedy, heuristic approach akin to that used by many supervised decision tree learners, but where the BNs at the leaves are learnt using constructive induction. A key idea is to treat expected data as real data. This allows us to complete the database and to take advantage of a closed form for the marginal likelihood of the expected complete data that factorizes into separate marginal likelihoods for each family (a node and its parents). Our approach is evaluated on synthetic and real-world databases.
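The per-family factorization of the complete-data marginal likelihood mentioned in the abstract can be sketched as follows. This is a generic K2-style score under uniform Dirichlet priors for complete discrete data, not necessarily the paper's exact formulation; all function and variable names here are illustrative.

```python
from math import lgamma

def log_family_marglik(data, child, parents, arity):
    """K2-style log marginal likelihood for one family (child + its parents),
    assuming uniform Dirichlet priors and a complete discrete database."""
    r = arity[child]
    counts = {}  # parent configuration -> counts of each child value
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0] * r)[row[child]] += 1
    score = 0.0
    for n_jk in counts.values():
        score += lgamma(r) - lgamma(sum(n_jk) + r)
        score += sum(lgamma(n + 1) for n in n_jk)
    return score

# Toy complete database over two binary variables A (index 0) and B (index 1).
data = [(0, 0), (0, 1), (1, 1), (1, 1)]
arity = {0: 2, 1: 2}

# The joint marginal likelihood of the structure A -> B decomposes into
# separate, independently computable family scores:
total = (log_family_marglik(data, 0, (), arity)
         + log_family_marglik(data, 1, (0,), arity))
```

Because the score decomposes per family, only the families touched by a local change to the tree or network need to be re-scored during search.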
Performance Evaluation of Compromise Conditional Gaussian Networks for Data Clustering
, 2001
Abstract

Cited by 5 (3 self)
This paper is devoted to the proposal of two classes of compromise conditional Gaussian networks for data clustering, as well as to their experimental evaluation and comparison on synthetic and real-world databases. According to the reported results, the models show an ideal tradeoff between efficiency and effectiveness, i.e., a balance between the cost of the unsupervised model learning process and the quality of the learnt models. Moreover, the proposed models are very appealing due to their closeness to human intuition and their computational advantages for the unsupervised model induction process, while preserving a rich enough modelling power.
Learning Bayesian networks from data. Some applications in biomedicine
, 2002
Abstract
... or by automatic learning from a database of cases. The ease of accessing huge databases in recent years has led to the development of a large number of model learning algorithms. In this talk we will not consider algorithms based on the detection of conditional independencies, and we will concentrate on score+search methods. A good tutorial on this topic is [12]. In a first step we will present some methods for learning from complete data, that is, when the database has no missing values. The methods can be classified according to the space where the search is done (directed acyclic graphs, the space of orderings, or the space of equivalence classes), but also according to the heuristic used in the search (greedy, simulated annealing, tabu search, branch and bound, floating, ant colonies, genetic algorithms, estimation of distribution algorithms, ...). This heuristic approach to searching for the best structure is justified after [3].
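The score+search approach over the space of directed acyclic graphs can be sketched as a greedy hill climber that toggles single edges, keeps the graph acyclic, and re-scores only the affected family thanks to a decomposable score. This is a minimal illustrative sketch using a K2-style family score, not any specific algorithm from the talk; all names are assumptions.

```python
from math import lgamma

def family_score(data, child, parents, arity):
    # K2-style log marginal likelihood of one family under complete data
    r = arity[child]
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0] * r)[row[child]] += 1
    s = 0.0
    for n_jk in counts.values():
        s += lgamma(r) - lgamma(sum(n_jk) + r)
        s += sum(lgamma(n + 1) for n in n_jk)
    return s

def is_acyclic(parents_map):
    # Kahn-style topological check on the candidate parent sets
    indeg = {v: len(ps) for v, ps in parents_map.items()}
    children = {v: [c for c, ps in parents_map.items() if v in ps]
                for v in parents_map}
    queue = [v for v, d in indeg.items() if d == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for c in children[v]:
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return seen == len(parents_map)

def greedy_search(data, arity):
    nodes = list(arity)
    parents = {v: set() for v in nodes}
    best = sum(family_score(data, v, (), arity) for v in nodes)
    improved = True
    while improved:
        improved = False
        for u in nodes:
            for v in nodes:
                if u == v:
                    continue
                new_pv = parents[v] ^ {u}  # toggle the edge u -> v
                cand = dict(parents)
                cand[v] = new_pv
                if not is_acyclic(cand):
                    continue
                # Decomposability: only node v's family needs re-scoring.
                delta = (family_score(data, v, tuple(sorted(new_pv)), arity)
                         - family_score(data, v, tuple(sorted(parents[v])), arity))
                if delta > 1e-12:
                    parents[v] = new_pv
                    best += delta
                    improved = True
    return parents, best

# Two perfectly correlated binary variables: the search should add one edge.
data = [(0, 0)] * 4 + [(1, 1)] * 4
parents, score = greedy_search(data, {0: 2, 1: 2})
```

Replacing the greedy move acceptance with a simulated annealing or tabu criterion changes only the `if delta > ...` test, which is why the survey above groups the methods by search space and by heuristic independently.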
Globally Multimodal Problem Optimization Via an Estimation of Distribution Algorithm Based on Unsupervised Learning of Bayesian Networks
Abstract
Many optimization problems are what can be called globally multimodal, i.e. they present several global optima. Unfortunately, this is a major source of difficulty for most estimation of distribution algorithms, whose effectiveness and efficiency degrade due to genetic drift. With the aim of overcoming these drawbacks for discrete globally multimodal problem optimization, this paper introduces and evaluates a new estimation of distribution algorithm based on unsupervised learning of Bayesian networks. We report satisfactory experiments with symmetrical binary optimization problems. Key words: Estimation of distribution algorithms, Bayesian networks, unsupervised learning
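The generic estimation-of-distribution cycle the abstract builds on (sample from a probabilistic model, select the fittest, re-estimate the model) can be sketched with the simplest such model, univariate marginals (UMDA), on a symmetrical binary problem with two global optima. This is a stand-in for the paper's Bayesian-network-based model, used only to show the loop; all names and parameters are illustrative.

```python
import random

def twomax(x):
    # Symmetrical objective with two global optima: all-zeros and all-ones.
    ones = sum(x)
    return max(ones, len(x) - ones)

def umda(n_bits=20, pop_size=100, n_select=50, generations=30, seed=1):
    """Minimal univariate EDA (UMDA): sample, select, re-estimate marginals.
    On a symmetrical problem like twomax, the marginals tend to drift toward
    a single optimum, the genetic-drift effect the abstract refers to."""
    rng = random.Random(seed)
    p = [0.5] * n_bits  # marginal probability of a 1 at each position
    best = None
    for _ in range(generations):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        pop.sort(key=twomax, reverse=True)
        selected = pop[:n_select]
        # Model re-estimation: frequency of 1s among the selected individuals.
        p = [sum(x[i] for x in selected) / n_select for i in range(n_bits)]
        if best is None or twomax(pop[0]) > twomax(best):
            best = pop[0]
    return best

best = umda()
```

A model with dependency structure (such as the unsupervised Bayesian networks proposed in the paper) replaces the independent per-bit marginals `p` with a joint distribution, which is what allows several optima to be represented simultaneously.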