Results 1–10 of 12
Exact Bayesian structure discovery in Bayesian networks
 J. of Machine Learning Research
, 2004
Abstract

Cited by 55 (8 self)
We consider a Bayesian method for learning the Bayesian network structure from complete data. Recently, Koivisto and Sood (2004) presented an algorithm that for any single edge computes its marginal posterior probability in O(n·2^n) time, where n is the number of attributes; the number of parents per attribute is bounded by a constant. In this paper we show that the posterior probabilities for all the n(n−1) potential edges can be computed in O(n·2^n) total time. This result is achieved by a forward–backward technique and fast Möbius transform algorithms, which are of independent interest. The resulting speedup by a factor of about n^2 allows us to experimentally study the statistical power of learning moderate-size networks. We report results from a simulation study that covers data sets with 20 to 10,000 records over 5 to 25 discrete attributes.
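The fast Möbius (zeta) transform the abstract refers to can be sketched in a few lines. This is an illustrative implementation of the standard O(n·2^n) subset-sum transform, not the authors' code:

```python
def fast_zeta_transform(f, n):
    """Fast zeta transform over subsets of an n-element ground set.

    f is a list of length 2**n indexed by subset bitmask; the result g
    satisfies g[S] = sum of f[T] over all subsets T of S, computed in
    O(n * 2^n) time instead of the naive O(3^n).
    """
    g = list(f)
    for i in range(n):                         # one sweep per ground-set element
        for mask in range(1 << n):
            if mask & (1 << i):                # if element i is in the subset,
                g[mask] += g[mask ^ (1 << i)]  # add the value of the subset without i
    return g
```

For example, with n = 2 and f = [1, 2, 3, 4] over the subsets {}, {0}, {1}, {0,1}, the transform yields [1, 3, 4, 10], where the last entry is the sum over all four subsets.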
Efficient Markov network structure discovery using independence tests
 In Proc SIAM Data Mining
, 2006
Abstract

Cited by 16 (1 self)
We present two algorithms for learning the structure of a Markov network from discrete data: GSMN and GSIMN. Both algorithms use statistical conditional independence tests on data to infer the structure by successively constraining the set of structures consistent with the results of these tests. GSMN is a natural adaptation of the Grow-Shrink algorithm of Margaritis and Thrun for learning the structure of Bayesian networks. GSIMN extends GSMN by additionally exploiting Pearl's well-known properties of conditional independence relations to infer novel independencies from known independencies, thus avoiding the need to perform these tests. Experiments on artificial and real data sets show GSIMN can yield savings of up to 70% with respect to GSMN, while generating a Markov network of comparable or, in several cases, considerably improved quality. In addition …
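The Grow-Shrink scheme that GSMN adapts can be sketched as follows. Here `indep(a, b, S)` stands in for a statistical conditional-independence test on data; it is an assumed caller-supplied oracle, not an API from the paper:

```python
def grow_shrink(X, variables, indep):
    """Sketch of Grow-Shrink Markov-blanket recovery for a target variable X.

    indep(a, b, S) should return True iff a and b are judged conditionally
    independent given the set S (in practice, a statistical test on data).
    """
    blanket = set()
    changed = True
    while changed:                             # grow phase: add dependent variables
        changed = False
        for Y in variables:
            if Y != X and Y not in blanket and not indep(X, Y, blanket):
                blanket.add(Y)
                changed = True
    for Y in list(blanket):                    # shrink phase: remove false positives
        if indep(X, Y, blanket - {Y}):
            blanket.discard(Y)
    return blanket
```

For a toy chain X – A – B, where X is independent of B given A, the oracle `lambda a, b, S: b == 'B' and 'A' in S` makes the procedure return the blanket {'A'} regardless of the order in which variables are examined.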
An effective structure learning method for constructing gene networks
 Bioinformatics
, 2006
Abstract

Cited by 15 (1 self)
Motivation: Bayesian network methods have shown promise in gene regulatory network reconstruction because of their capability of capturing causal relationships between genes and of handling the noisy data found in biological experiments. The problem of learning network structures, however, is NP-hard. Consequently, heuristic methods such as hill climbing are used for structure learning. For networks of moderate size, hill-climbing methods are not computationally efficient, and relatively low accuracy of the learned structures may be observed. The purpose of this paper is to present a novel structure learning method for gene network discovery.
Results: In this paper, we present a novel structure learning method to reconstruct the underlying gene networks from observational gene expression data. Unlike hill-climbing approaches, the proposed method first constructs an undirected network based on mutual information between pairs of nodes and then splits the structure into substructures. The directional orientations of the edges connecting two nodes are then obtained by optimizing a scoring function for each substructure. Our method is evaluated on two benchmark network datasets with known structures. The results show that the proposed method can identify networks that are close to the optimal structures. It outperforms hill-climbing methods in terms of both computation time and predicted structure accuracy. We also apply the method to gene expression data measured during the yeast cycle and show the effectiveness of the proposed method for network reconstruction.
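The first stage described above, building an undirected network from pairwise mutual information, can be sketched as follows. This is a minimal illustration assuming discrete data and a caller-chosen threshold; it is a reconstruction of the general technique, not the paper's code:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mi_skeleton(data, threshold):
    """Build an undirected edge set from pairwise mutual information.

    data maps each column name to a list of discrete values; an edge
    {a, b} is kept whenever MI(a, b) exceeds the threshold.
    """
    cols = list(data)
    return {frozenset({a, b})
            for i, a in enumerate(cols) for b in cols[i + 1:]
            if mutual_information(data[a], data[b]) > threshold}
```

With two perfectly correlated columns and one independent column, only the correlated pair survives a small threshold.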
Learning Bayesian Network Classifiers: Searching . . .
, 2005
Abstract

Cited by 10 (4 self)
There is a commonly held opinion that algorithms for learning unrestricted types of Bayesian networks, especially those based on the score+search paradigm, are not suitable for building competitive Bayesian-network-based classifiers. Several specialized algorithms that carry out the search in different types of directed acyclic graph (DAG) topologies have since been developed, most of them extensions (using augmenting arcs) or modifications of the basic Naive Bayes topology. In this paper, we present a new algorithm for inducing classifiers based on Bayesian networks which obtains excellent results even when standard scoring functions are used. The method performs a simple local search in a space different from that of unrestricted or augmented DAGs. Our search space consists of a type of partially directed acyclic graph (PDAG) which combines two concepts of DAG equivalence: classification equivalence and independence equivalence. The results of exhaustive experimentation indicate that the proposed method can compete with state-of-the-art algorithms for classification.
Learning Bayesian network equivalence classes with ant colony optimization
 Journal of Artificial Intelligence Research
, 2009
Abstract

Cited by 10 (2 self)
Bayesian networks are a useful tool for the representation of uncertain knowledge. This paper proposes a new algorithm, ACO-E, to learn the structure of a Bayesian network. It does this by conducting a search through the space of equivalence classes of Bayesian networks using Ant Colony Optimization (ACO). To this end, two novel extensions of traditional ACO techniques are proposed and implemented. Firstly, multiple types of moves are allowed. Secondly, moves can be given in terms of indices that are not based on construction-graph nodes. The results of testing show that ACO-E performs better than a greedy search and other state-of-the-art and metaheuristic algorithms whilst searching in the space of equivalence classes.
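The core ACO selection rule that such algorithms build on can be sketched as follows. All names here (`pheromone`, `heuristic`, `beta`) are illustrative of the generic ACO recipe, not the paper's API; candidate moves might be edge insertions, deletions, or reversals:

```python
import random

def choose_move(pheromone, heuristic, moves, beta=2.0, rng=random):
    """Generic ACO proportional move selection.

    Each candidate move m is scored by pheromone[m] * heuristic[m]**beta,
    and one move is drawn with probability proportional to its score.
    """
    weights = [pheromone[m] * heuristic[m] ** beta for m in moves]
    total = sum(weights)
    r = rng.random() * total           # a point in [0, total)
    acc = 0.0
    for m, w in zip(moves, weights):   # roulette-wheel selection
        acc += w
        if r < acc:
            return m
    return moves[-1]                   # numerical-edge fallback
```

A move with zero pheromone is never chosen, so concentrating all pheromone on one move makes the selection deterministic.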
A primer on the evolution of equivalence classes of Bayesian-network structures
 Parallel Problem Solving from Nature VIII. Volume 3242 of Lecture Notes in Computer Science
, 2004
Abstract

Cited by 3 (2 self)
Bayesian networks (BN) constitute a useful tool to model the joint distribution of a set of random variables of interest. To deal with the problem of learning sensible BN models from data, we have previously considered various evolutionary algorithms for searching the space of BN structures directly. In this paper, we explore a simple evolutionary algorithm designed to search the space of BN equivalence classes. We discuss a number of issues arising in this evolutionary context and provide a first assessment of the new class of algorithms.
A Study on the Evolution of Bayesian Network Graph Structures
Abstract

Cited by 1 (0 self)
Bayesian networks (BN) are often sought as useful descriptive and predictive models for the available data. Learning algorithms that try to ascertain automatically the best BN model (graph structure) for some input data are of great practical interest. In this paper we examine a number of evolutionary programming algorithms for this network induction problem. Our algorithms build on recent advances in the field and are based on selection and various kinds of mutation operators (working at both the directed acyclic and essential graph level). A review of related evolutionary work is also provided. We analyze and discuss the merit and computational toll of these EP algorithms on a couple of benchmark tasks. Some general conclusions about the most efficient algorithms and the most appropriate search landscapes are presented.
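A mutation operator of the kind these EP algorithms combine can be sketched as follows: toggle a random directed edge and reject the mutant if it introduces a cycle. This is a generic illustration of DAG-level mutation, not any specific operator from the paper:

```python
import random

def mutate_dag(edges, n, rng=random):
    """Toggle a random edge of a DAG on n nodes, rejecting cyclic mutants."""
    u, v = rng.sample(range(n), 2)
    new = set(edges)
    if (u, v) in new:
        new.discard((u, v))            # deletion always preserves acyclicity
    else:
        new.add((u, v))
        if has_cycle(new, n):
            return set(edges)          # reject cyclic mutant, keep parent
    return new

def has_cycle(edges, n):
    """Detect a directed cycle via DFS with visiting/done node states."""
    adj = {i: [] for i in range(n)}
    for a, b in edges:
        adj[a].append(b)
    state = {}                         # 0 = on current DFS path, 1 = finished
    def dfs(v):
        state[v] = 0
        for w in adj[v]:
            if state.get(w) == 0 or (w not in state and dfs(w)):
                return True            # back edge to the current path: cycle
        state[v] = 1
        return False
    return any(v not in state and dfs(v) for v in range(n))
```

Because cyclic mutants fall back to the parent, every graph the operator returns is again a DAG, which keeps the evolutionary search inside the feasible space.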
Evolutionary Methods for Learning Bayesian Network Structures
Abstract
Bayesian networks (BN) are a family of probabilistic graphical models representing a joint distribution for a set of random variables. Conditional dependencies between these variables are symbolized by a Directed Acyclic Graph (DAG). Two classical approaches are often encountered when automatically determining an appropriate graphical structure from …
Nested
Abstract
Vol. 23 ISMB/ECCB 2007, pages i305–i312 doi:10.1093/bioinformatics/btm178