Results 1–10 of 24
Ant Colony Optimization: Artificial Ants as a Computational Intelligence Technique
 IEEE Computational Intelligence Magazine
, 2006
A Review on the Ant Colony Optimization Metaheuristic: Basis, Models and New Trends
 Mathware & Soft Computing
, 2002
Abstract
Cited by 21 (2 self)
Ant Colony Optimization (ACO) is a recent metaheuristic method that is inspired by the behavior of real ant colonies. In this paper, we review the underlying ideas of this approach that lead from the biological inspiration to the ACO metaheuristic, which gives a set of rules for how to apply ACO algorithms to challenging combinatorial problems. We present some of the algorithms that were developed under this framework, give an overview of current applications, and analyze the relationship between ACO and some of the best known metaheuristics. In addition, we describe recent theoretical developments in the field, and we conclude by showing several new trends and new research directions in this field.
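The construct-evaporate-deposit loop that the ACO metaheuristic prescribes can be sketched on a toy tour-finding problem. The graph, the parameter values (evaporation rate `rho`, deposit constant `q`), and the 1/distance heuristic below are illustrative assumptions, not taken from the survey.

```python
import random

def aco_tour(dist, n_ants=20, n_iters=50, rho=0.5, q=1.0, seed=0):
    """Toy ACO for a shortest closed tour on a small complete graph.

    dist is a symmetric matrix of positive edge lengths. Pheromone
    starts uniform; each iteration, ants build tours edge by edge with
    probability proportional to pheromone * (1 / distance), then all
    trails evaporate and the best-so-far tour deposits pheromone.
    """
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                weights = [tau[i][j] / dist[i][j] for j in cand]
                j = rng.choices(cand, weights)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporation, then deposit along the best tour found so far
        tau = [[(1.0 - rho) * t for t in row] for row in tau]
        for k in range(n):
            a, b = best_tour[k], best_tour[(k + 1) % n]
            tau[a][b] += q / best_len
            tau[b][a] += q / best_len
    return best_tour, best_len
```

On four points at the corners of a unit square, the sketch recovers the perimeter tour of length 4 rather than a tour using the diagonals.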
A Scoring Function for Learning Bayesian Networks based on Mutual Information and Conditional Independence Tests
 Journal of Machine Learning Research
, 2006
Abstract
Cited by 17 (0 self)
We propose a new scoring function for learning Bayesian networks from data using score search algorithms. This is based on the concept of mutual information and exploits some well-known properties of this measure in a novel way. Essentially, a statistical independence test based on the chi-square distribution, associated with the mutual information measure, together with a property of additive decomposition of this measure, are combined in order to measure the degree of interaction between each variable and its parent variables in the network. The result is a non-Bayesian scoring function called MIT (mutual information tests) which belongs to the family of scores based on information theory. The MIT score also represents a penalization of the Kullback-Leibler divergence between the joint probability distributions associated with a candidate network and with the available data set. Detailed results of a complete experimental evaluation of the proposed scoring function and its comparison with the well-known K2, BDeu and BIC/MDL scores are also presented.
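The statistical machinery the abstract describes, 2N times the mutual information between a variable and its parents compared against a chi-square percentile, can be sketched for a single node. The function names and the caller-supplied critical value are assumptions of this sketch, not the paper's notation.

```python
from collections import Counter
from math import log

def g_statistic(xs, pas):
    """2N * I(X; Pa) estimated from paired samples xs and pas.

    Under independence this statistic is asymptotically chi-square
    distributed, which is the property the MIT score exploits.
    """
    n = len(xs)
    cx, cp, cxp = Counter(xs), Counter(pas), Counter(zip(xs, pas))
    mi = sum(k / n * log(k * n / (cx[x] * cp[p]))
             for (x, p), k in cxp.items())
    return 2 * n * mi

def mit_local_score(xs, pas, chi2_critical):
    """Local score for one node: 2N*I minus a chi-square penalization.

    chi2_critical is the percentile for the appropriate degrees of
    freedom; it is supplied by the caller (e.g. scipy.stats.chi2.ppf)
    to keep this sketch standard-library only.
    """
    return g_statistic(xs, pas) - chi2_critical
```

For perfectly dependent samples the statistic equals 2N·log 2; for independent samples it is zero, so the chi-square penalization rejects the parent.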
Searching for Bayesian Network Structures in the Space of Restricted Acyclic Partially Directed Graphs
 Journal of Artificial Intelligence Research
, 2003
Abstract
Cited by 16 (2 self)
Although many algorithms have been designed to construct Bayesian network structures using different approaches and principles, they all employ only two methods: those based on independence criteria, and those based on a scoring function and a search procedure (although some methods combine the two). Within the score+search paradigm, the dominant approach uses local search methods in the space of directed acyclic graphs (DAGs), where the usual choices for defining the elementary modifications (local changes) that can be applied are arc addition, arc deletion, and arc reversal. In this paper, we propose a new local search method that uses a different search space, and which takes account of the concept of equivalence between network structures: restricted acyclic partially directed graphs (RPDAGs). In this way, the number of different configurations of the search space is reduced, thus improving efficiency. Moreover, although the final result must necessarily be a local optimum given the nature of the search method, the topology of the new search space, which avoids making early decisions about the directions of the arcs, may help to find better local optima than those obtained by searching in the DAG space.
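As a baseline, the classical DAG-space local search that the paper contrasts with RPDAG search (arc addition, deletion and reversal as the elementary local changes) might look like the sketch below. The toy scoring function in the usage note stands in for a real decomposable score such as BDeu; this is not the paper's RPDAG method.

```python
import itertools

def is_acyclic(n, edges):
    """Kahn-style check that a directed graph on nodes 0..n-1 has no cycle."""
    indeg, adj = [0] * n, [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        indeg[b] += 1
    stack = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while stack:
        v = stack.pop()
        seen += 1
        for w in adj[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return seen == n

def neighbors(n, edges):
    """The three classical local changes: arc deletion, reversal, addition."""
    edges = set(edges)
    for a, b in itertools.permutations(range(n), 2):
        if (a, b) in edges:
            yield edges - {(a, b)}                        # deletion
            rev = (edges - {(a, b)}) | {(b, a)}           # reversal
            if is_acyclic(n, rev):
                yield rev
        elif (b, a) not in edges:
            add = edges | {(a, b)}                        # addition
            if is_acyclic(n, add):
                yield add

def hill_climb(n, score, start=frozenset(), max_steps=100):
    """Greedy ascent: move to the best-scoring neighbour until none improves."""
    current = set(start)
    for _ in range(max_steps):
        best = max(neighbors(n, current), key=score, default=None)
        if best is None or score(best) <= score(current):
            break
        current = best
    return current
```

With a score that rewards the arcs (0,1) and (1,2) and penalizes all others, the climber recovers exactly that chain from the empty graph.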
Some Variations on the PC Algorithm
Abstract
Cited by 5 (1 self)
This paper proposes some possible modifications of the basic PC learning algorithm and reports experiments that study their behaviour. The variations are: to determine minimum-size cut sets between two nodes when studying the deletion of a link, to make statistical decisions using a Bayesian score instead of a classical chi-square test, to refine the learned network by a greedy optimization of a Bayesian score, and to resolve link ambiguities by taking into account a measure of their strength. It is shown that some of these modifications can improve PC performance, depending on the objective of the learning task: discovering the causal structure or approximating the joint probability distribution of the problem variables.
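The link-deletion step these variations modify can be sketched generically: PC searches conditioning sets of increasing size and deletes the link at the first set that renders the endpoints independent. The oracle is caller-supplied (a chi-square test, or a Bayesian score as the paper proposes), and this simplified sketch only searches the neighbours of one endpoint, whereas full PC also conditions on the neighbours of the other.

```python
from itertools import combinations

def pc_edge_decision(x, y, adjacencies, independent):
    """PC-style deletion test for the link x - y.

    Conditioning sets of increasing size are drawn from the current
    neighbours of x (excluding y); the first set that renders x and y
    conditionally independent justifies deleting the link and is
    returned as the cut set. `independent` is a caller-supplied
    conditional-independence oracle.
    """
    candidates = sorted(adjacencies[x] - {y})
    for size in range(len(candidates) + 1):
        for s in combinations(candidates, size):
            if independent(x, y, frozenset(s)):
                return True, frozenset(s)   # delete the link
    return False, None
```

On a chain 0 - 2 - 1, conditioning on node 2 separates nodes 0 and 1, so the link 0 - 1 is deleted with {2} as its minimum-size cut set.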
Particle Swarm Optimisation for learning Bayesian Networks
Abstract
Cited by 2 (1 self)
This paper discusses the potential of Particle Swarm Optimisation (PSO) for inducing Bayesian Networks (BNs). Specifically, we detail two methods which adopt the search and score approach to BN learning. The two algorithms are similar in that they both use PSO as the search algorithm and the K2 metric to score the resulting network. The difference lies in the way networks are constructed: the CONstruct And Repair (CONAR) algorithm generates structures, validates them, and repairs them if required, while the REstricted STructure (REST) algorithm only permits valid structures to be developed. Initial experiments indicate that these approaches produce promising results when compared to other BN learning strategies.
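The paper applies PSO to discrete BN structures scored with the K2 metric; the sketch below only illustrates the canonical velocity and position update itself, on a continuous sphere function, so all parameter values and the test function are assumptions.

```python
import random

def pso_minimize(f, dim, n_particles=15, n_iters=60,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=1):
    """Canonical PSO: each particle tracks a velocity, its personal best,
    and the swarm's global best; the velocity update blends inertia,
    cognitive pull (toward the personal best), and social pull (toward
    the global best)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Adapting this to structure learning means replacing the continuous position with an encoding of a candidate network and f with a score such as K2, which is exactly where CONAR (repair invalid structures) and REST (forbid them) differ.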
Methods to Accelerate the Learning of Bayesian Network Structures
 Proceedings of the 2007 UK Workshop on Computational Intelligence
, 2007
Abstract
Cited by 1 (0 self)
Bayesian networks have become a standard technique in the representation of uncertain knowledge. This paper proposes methods that can accelerate the learning of a Bayesian network structure from a data set. These methods are applicable when learning an equivalence class of Bayesian network structures whilst using a score and search strategy. They work by constraining the number of validity tests that need to be done and by caching the results of validity tests. The results of experiments show that the methods improve the performance of algorithms that search through the space of equivalence classes multiple times and that operate on wide data sets. The experiments were performed by sampling data from six standard Bayesian networks and running an ant colony optimization algorithm designed to learn a Bayesian network equivalence class.
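The caching idea can be sketched as memoizing a validity test on the local situation it actually inspects; the function name, arguments and validity rule below are hypothetical placeholders, the caching pattern is the point.

```python
from functools import lru_cache

# The operator arguments plus the frozen local neighbourhood form the
# cache key, so identical local situations met in repeated searches
# reuse the cached answer instead of re-running the test.
evaluations = {"count": 0}

@lru_cache(maxsize=None)
def insert_is_valid(x, y, na_yx):
    """Hypothetical validity test for an Insert(x, y) operator in
    equivalence-class search; na_yx is a frozenset of shared neighbours.
    The rule itself is a placeholder."""
    evaluations["count"] += 1              # count real evaluations
    return x != y and x not in na_yx

# Repeated searches ask the same question; only the first call costs anything.
for _ in range(3):
    assert insert_is_valid(0, 1, frozenset({2, 3}))
```

Because algorithms like the paper's ACO search traverse the space of equivalence classes many times, identical keys recur often, which is what makes the cache pay off.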
Clustering-based Bayesian Multinet Classifier Construction with Ant Colony Optimization
Abstract
Bayesian Multinets (BMNs) are a special kind of Bayesian network (BN) classifier that consists of several local networks, typically one for each predictable class, to model an asymmetric set of variable dependencies given each class value. Alternatively, multinets can be learnt upon arbitrary partitions of a dataset, in which each partition holds more consistent variable dependencies given the data subset in the partition. This paper proposes two contributions to the approach that clusters the dataset into separate data subsets to build asymmetric local BN classifiers, one for each subset. First, we extend the K-modes algorithm, previously used by the Case-Based Bayesian Network Classifiers (CBBN) approach, to create clusters before learning the BN classifiers. Second, we introduce the AntClustB algorithm that employs Ant Colony Optimization (ACO) to learn clustering-based BMNs. AntClustB uses ACO in the clustering step before learning the local BN classifiers. Empirical results are obtained from experiments on 18 UCI datasets.
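The K-modes step the abstract builds on can be sketched in its plain form: assign each categorical row to the nearest mode by Hamming distance, then recompute each mode attribute-wise. Initialising from distinct rows is a simplification of the usual density-based initialisations, and this is the baseline the paper extends, not its AntClustB algorithm.

```python
from collections import Counter
import random

def kmodes(rows, k, n_iters=10, seed=0):
    """Plain K-modes for categorical rows (tuples of labels).

    Each row is assigned to the nearest mode by Hamming distance, then
    each mode is recomputed attribute-wise as the most frequent value
    within its cluster. Requires at least k distinct rows.
    """
    rng = random.Random(seed)
    distinct = list(dict.fromkeys(rows))
    modes = [list(r) for r in rng.sample(distinct, k)]
    assign = [0] * len(rows)
    for _ in range(n_iters):
        for i, r in enumerate(rows):
            assign[i] = min(
                range(k),
                key=lambda c: sum(a != b for a, b in zip(r, modes[c])))
        for c in range(k):
            members = [rows[i] for i in range(len(rows)) if assign[i] == c]
            if members:
                modes[c] = [Counter(col).most_common(1)[0][0]
                            for col in zip(*members)]
    return assign, modes
```

In the clustering-based multinet approach, each resulting cluster then gets its own local BN classifier trained on just that data subset.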