Results 1–10 of 52
An Indexed Bibliography of Genetic Algorithms in Power Engineering
, 1995
Abstract

Cited by 73 (8 self)
s: Jan. 1992 – Dec. 1994 • CTI: Current Technology Index: Jan./Feb. 1993 – Jan./Feb. 1994 • DAI: Dissertation Abstracts International: Vol. 53 No. 1 – Vol. 55 No. 4 (1994) • EEA: Electrical & Electronics Abstracts: Jan. 1991 – Dec. 1994 • P: Index to Scientific & Technical Proceedings: Jan. 1986 – Feb. 1995 (except Nov. 1994) • EI A: The Engineering Index Annual: 1987 – 1992 • EI M: The Engineering Index Monthly: Jan. 1993 – Dec. 1994. The following GA researchers have already kindly supplied their complete autobibliographies and/or proofread references to their papers: Dan Adler, Patrick Argos, Jarmo T. Alander, James E. Baker, Wolfgang Banzhaf, Ralf Bruns, I. L. Bukatova, Thomas Bäck, Yuval Davidor, Dipankar Dasgupta, Marco Dorigo, Bogdan Filipic, Terence C. Fogarty, David B. Fogel, Toshio Fukuda, Hugo de Garis, Robert C. Glen, David E. Goldberg, Martina Gorges-Schleuter, Jeffrey Horn, Aristides T. Hatjimihail, Mark J. Jakiela, Richard S. Judson, Akihiko Konaga...
Feature Subset Selection by Bayesian networks: a comparison with genetic and sequential algorithms
Abstract

Cited by 42 (15 self)
In this paper we compare FSS-EBNA, a randomized, population-based, evolutionary algorithm, with two genetic and two sequential search approaches on the well-known Feature Subset Selection (FSS) problem. In FSS-EBNA, the FSS problem, stated as a search problem, uses the EBNA (Estimation of Bayesian Network Algorithm) search engine, an algorithm within the EDA (Estimation of Distribution Algorithm) approach. The EDA paradigm grew out of the GA community as a way to explicitly discover the relationships among the features of the problem rather than disrupt them with genetic recombination operators. The EDA paradigm avoids the use of recombination operators; instead, it evolves the population of solutions and discovers these relationships by factorizing the probability distribution of the best individuals in each generation of the search. In EBNA, this factorization is carried out by a Bayesian network induced by a chea...
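The EDA loop this abstract describes (sample from a probabilistic model, select, re-estimate) can be sketched in miniature. The following is a univariate simplification (UMDA-style), not EBNA itself: the independent per-bit model stands in for EBNA's learned Bayesian network, and the OneMax fitness is a toy assumption.

```python
import random

def eda_onemax(n_bits=20, pop_size=100, n_select=50, generations=30, seed=0):
    """Minimal univariate EDA (UMDA-style) maximizing the OneMax fitness.

    EBNA would replace the per-bit independent probabilities below with a
    Bayesian network learned from the selected individuals; this sketch keeps
    the rest of the EDA loop (sample -> select -> estimate) intact.
    """
    rng = random.Random(seed)
    probs = [0.5] * n_bits                      # initial model: uniform bits
    best = None
    for _ in range(generations):
        # sample a population from the current probabilistic model
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        # truncation selection on fitness (number of ones)
        pop.sort(key=sum, reverse=True)
        selected = pop[:n_select]
        if best is None or sum(selected[0]) > sum(best):
            best = selected[0]
        # re-estimate the marginal probability of each bit from the selected set
        probs = [sum(ind[i] for ind in selected) / n_select
                 for i in range(n_bits)]
    return best

print(sum(eda_onemax()))
```

The factorized model here is fully independent; EBNA's contribution is precisely to replace it with a factorization that captures dependencies between bits.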
Partial abductive inference in Bayesian belief networks using a genetic algorithm
 Pattern Recognit. Lett
, 1999
Abstract

Cited by 24 (2 self)
Abstract—Abductive inference in Bayesian belief networks (BBNs) is the process of generating the most probable configurations given observed evidence. When we are interested only in a subset of the network's variables, this problem is called partial abductive inference. Both problems are NP-hard, so exact computation is not always possible. In this paper, a genetic algorithm is used to perform partial abductive inference in BBNs. The main contribution is the introduction of new genetic operators designed specifically for this problem. By using these genetic operators, we try to take advantage of calculations previously carried out when a new individual is evaluated. The algorithm is tested using a widely used Bayesian network and a randomly generated one, and then compared with a previous genetic algorithm based on classical genetic operators. From the experimental results, we conclude that the new genetic operators preserve the accuracy of the previous algorithm and also reduce the number of operations performed during the evaluation of individuals. The performance of the genetic algorithm is thus improved. Index Terms—Abductive inference, Bayesian belief networks, evolutionary computation, genetic operators, most probable explanation, probabilistic reasoning.
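The paper's specialized operators are not reproduced here, but the classical-operator baseline it compares against (one-point crossover plus bit-flip mutation over variable assignments) can be sketched. The toy fitness below is an assumption standing in for the posterior probability of an explanation, which the paper would compute by probabilistic propagation.

```python
import random

def ga_search(score, n_vars=12, pop_size=40, generations=60,
              p_cross=0.8, p_mut=0.05, seed=1):
    """Classical-operator GA over binary variable assignments.

    In the paper, an individual encodes a configuration of the explanation
    set of a BBN and `score` is its (unnormalized) posterior probability;
    here `score` is an arbitrary stand-in for that evaluation.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                             # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if score(a) >= score(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if rng.random() < p_cross:          # one-point crossover
                cut = rng.randrange(1, n_vars)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):              # bit-flip mutation
                for i in range(n_vars):
                    if rng.random() < p_mut:
                        child[i] = 1 - child[i]
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=score)

# toy fitness: prefer assignments matching an alternating 0/1 target pattern
target = [i % 2 for i in range(12)]
best = ga_search(lambda ind: sum(a == b for a, b in zip(ind, target)))
print(sum(a == b for a, b in zip(best, target)))
```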
Decomposing Bayesian Networks: Triangulation of Moral Graph with Genetic Algorithms
 Statistics and Computing
, 1997
Abstract

Cited by 22 (4 self)
In this paper we consider the optimal decomposition of Bayesian networks. More concretely, we examine, empirically, the applicability of genetic algorithms to the problem of the triangulation of moral graphs. This problem constitutes the only difficult step in the evidence propagation algorithm of Lauritzen and Spiegelhalter (1988) and is known to be NP-hard (Wen, 1991). We carry out experiments with distinct crossover and mutation operators and with different population sizes, mutation rates and selection biases. The results are analyzed statistically. They turn out to improve on the results obtained with most other known triangulation methods (Kjaerulff, 1990) and are comparable to those obtained with simulated annealing (Kjaerulff, 1990; Kjaerulff, 1992). Keywords: Bayesian networks, genetic algorithms, optimal decomposition, graph triangulation, moral graph, NP-hard problems, statistical analysis. 1 Introduction Bayesian networks constitute a reasoning method based on p...
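In GA formulations of this problem, an individual typically encodes an elimination ordering of the moral graph's vertices, and the fitness is derived from the triangulation that ordering induces. The helper below, an illustrative assumption rather than the paper's exact fitness, triangulates by vertex elimination and counts fill-in edges.

```python
def fill_in_count(n, edges, order):
    """Triangulate an undirected graph by vertex elimination in the given
    order and count the fill-in edges added.

    A GA over orderings would minimize a quantity like this (or the total
    clique weight of the resulting triangulated graph).
    """
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    added = 0
    for v in order:
        nbrs = list(adj[v])
        # make v's remaining neighbours a clique, adding missing (fill) edges
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    added += 1
        for u in nbrs:          # eliminate v from the graph
            adj[u].discard(v)
        del adj[v]
    return added

# 4-cycle 0-1-2-3: eliminating around the cycle adds exactly one chord
print(fill_in_count(4, [(0, 1), (1, 2), (2, 3), (3, 0)], [0, 1, 2, 3]))  # → 1
```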
Combinatorial optimization by learning and simulation of Bayesian networks
 in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence
, 2000
Abstract

Cited by 21 (10 self)
This paper shows how the Bayesian network paradigm can be used to solve combinatorial optimization problems. To do so, methods for structure learning from data and for simulation of Bayesian networks are inserted inside Estimation of Distribution Algorithms (EDAs). EDAs are a new tool for evolutionary computation in which populations of individuals are created by estimation and simulation of the joint probability distribution of the selected individuals. We propose new approaches to EDAs for combinatorial optimization based on the theory of probabilistic graphical models. Experimental results are also presented.
Learning graphical model structure using L1-regularization paths
 In Proceedings of the 21st Conference on Artificial Intelligence (AAAI)
, 2007
Abstract

Cited by 21 (1 self)
Sparsity-promoting L1-regularization has recently been successfully used to learn the structure of undirected graphical models. In this paper, we apply this technique to learn the structure of directed graphical models. Specifically, we make three contributions. First, we show how the decomposability of the MDL score, plus the ability to quickly compute entire regularization paths, allows us to efficiently pick the optimal regularization parameter on a per-node basis. Second, we show how to use L1 variable selection to select the Markov blanket before a DAG-search stage. Finally, we show how L1 variable selection can be used inside an order-search algorithm. The effectiveness of these L1-based approaches is compared to current state-of-the-art methods on 10 datasets.
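The per-node L1 variable selection step can be sketched as a Lasso regression of each target variable on the others, keeping the variables with nonzero coefficients as candidate parents or Markov-blanket members. This is a minimal proximal-gradient (ISTA) solver at a single fixed penalty; the paper computes entire regularization paths and scores them with MDL, which is omitted here, and the data below are synthetic.

```python
import numpy as np

def lasso_select(X, y, lam, n_iter=500):
    """Select candidate parents of a target variable via L1-regularized
    regression of y on the columns of X (ISTA / proximal gradient)."""
    n, d = X.shape
    w = np.zeros(d)
    L = np.linalg.norm(X, 2) ** 2 / n       # Lipschitz constant of the gradient
    step = 1.0 / L
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n        # gradient of (1/2n)||Xw - y||^2
        w = w - step * grad
        # soft-thresholding: the proximal operator of the L1 penalty
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return np.flatnonzero(np.abs(w) > 1e-8)

# synthetic example: y depends only on columns 1 and 4 of X
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.1 * rng.standard_normal(200)
print(lasso_select(X, y, lam=0.1))
```

Running this at a grid of `lam` values traces out the regularization path that the paper's MDL criterion selects along.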
A Scoring Function for Learning Bayesian Networks based on Mutual Information and Conditional Independence Tests
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2006
Abstract

Cited by 17 (0 self)
We propose a new scoring function for learning Bayesian networks from data using score+search algorithms. It is based on the concept of mutual information and exploits some well-known properties of this measure in a novel way. Essentially, a statistical independence test based on the chi-square distribution, associated with the mutual information measure, together with a property of additive decomposition of this measure, are combined in order to measure the degree of interaction between each variable and its parent variables in the network. The result is a non-Bayesian scoring function called MIT (mutual information tests) which belongs to the family of scores based on information theory. The MIT score also represents a penalization of the Kullback-Leibler divergence between the joint probability distributions associated with a candidate network and with the available data set. Detailed results of a complete experimental evaluation of the proposed scoring function and its comparison with the well-known K2, BDeu and BIC/MDL scores are also presented.
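The building block of such a score is the empirical mutual information between discrete variables; under independence, 2·N·MI (the G-test statistic) is asymptotically chi-square distributed, which is the link the abstract refers to. The sketch below computes only this pairwise statistic; the full MIT score (conditional MI terms for each variable given its parents, penalized by chi-square quantiles) is not reproduced.

```python
from collections import Counter
from math import log

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete samples.

    2 * len(xs) * MI is the G-test statistic, asymptotically chi-square
    under independence of the two variables.
    """
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

x = [0, 0, 1, 1, 0, 1, 0, 1]
print(round(mutual_information(x, x), 4))   # MI(X;X) = H(X) = log 2 ≈ 0.6931
print(round(mutual_information(x, [0, 1, 0, 1, 0, 1, 0, 1]), 4))
```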
Searching for Bayesian Network Structures in the Space of Restricted Acyclic Partially Directed Graphs
 Journal of Artificial Intelligence Research
, 2003
Abstract

Cited by 15 (2 self)
Although many algorithms have been designed to construct Bayesian network structures using different approaches and principles, they all employ only two methods: those based on independence criteria, and those based on a scoring function and a search procedure (although some methods combine the two). Within the score+search paradigm, the dominant approach uses local search methods in the space of directed acyclic graphs (DAGs), where the usual choices for defining the elementary modifications (local changes) that can be applied are arc addition, arc deletion, and arc reversal. In this paper, we propose a new local search method that uses a different search space, and which takes account of the concept of equivalence between network structures: restricted acyclic partially directed graphs (RPDAGs). In this way, the number of different configurations of the search space is reduced, thus improving efficiency. Moreover, although the final result must necessarily be a local optimum given the nature of the search method, the topology of the new search space, which avoids making early decisions about the directions of the arcs, may help to find better local optima than those obtained by searching in the DAG space.
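The classical DAG-space neighbourhood the abstract contrasts itself with (arc addition, deletion, and reversal, subject to acyclicity) is straightforward to enumerate; the RPDAG space itself, which merges equivalent DAGs, is not sketched here.

```python
def creates_cycle(parents, frm, to):
    """True if adding the arc frm -> to would create a directed cycle.

    `parents` maps each node to the set of its parents; the new arc is
    cyclic exactly when `frm` is already reachable from `to`.
    """
    stack, seen = [to], set()
    while stack:
        v = stack.pop()
        if v == frm:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(c for c in parents if v in parents[c])  # children of v
    return False

def dag_neighbours(parents):
    """Enumerate the classical local changes on a DAG (the baseline
    neighbourhood that the RPDAG space is designed to improve on)."""
    moves = []
    nodes = list(parents)
    for a in nodes:
        for b in nodes:
            if a == b:
                continue
            if a in parents[b]:                 # arc a -> b exists
                moves.append(("delete", a, b))
                if not creates_cycle({**parents, b: parents[b] - {a}}, b, a):
                    moves.append(("reverse", a, b))
            elif not creates_cycle(parents, a, b):
                moves.append(("add", a, b))
    return moves

# chain X -> Y -> Z: X -> Z can be added, Z -> X cannot (it closes a cycle)
dag = {"X": set(), "Y": {"X"}, "Z": {"Y"}}
moves = dag_neighbours(dag)
print(("add", "X", "Z") in moves, ("add", "Z", "X") in moves)
```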
Predicting the Survival in Malignant Skin Melanoma Using Bayesian Networks Automatically Induced by Genetic Algorithms: An Empirical Comparison Between Different Approaches
 Artificial Intelligence in Medicine
, 1998
Abstract

Cited by 13 (5 self)
In this work we introduce a methodology based on genetic algorithms for the automatic induction of Bayesian networks from a file containing cases and variables related to the problem. The structure is learned by applying three different methods: the Cooper and Herskovits metric for a general Bayesian network, the Markov blanket approach, and the relaxed Markov blanket method. The methodologies are applied to the problem of predicting survival of people one, three and five years after being diagnosed with malignant skin melanoma. The accuracy of the obtained models, measured as the percentage of well-classified subjects, is compared to that obtained by the so-called naive Bayes. In all four approaches, the estimate of model accuracy is obtained by the 10-fold cross-validation method. Keywords: Bayesian network, genetic algorithm, structure learning, model search, 10-fold cross-validation. 1. Introduction Expert systems, one of the most developed areas in the fiel...
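The 10-fold cross-validation protocol used for the accuracy comparison is generic and easy to sketch. The classifier plugged in below is a trivial majority-class stand-in, an assumption; the paper would plug in its GA-induced networks and naive Bayes.

```python
import random

def ten_fold_cv(data, labels, train_fn, predict_fn, seed=0):
    """Estimate classification accuracy by 10-fold cross-validation.

    Shuffles the indices, splits them into 10 folds, and for each fold trains
    on the other nine and scores on the held-out one.
    """
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::10] for i in range(10)]
    correct = 0
    for k in range(10):
        test = set(folds[k])
        train_idx = [i for i in idx if i not in test]
        model = train_fn([data[i] for i in train_idx],
                         [labels[i] for i in train_idx])
        correct += sum(predict_fn(model, data[i]) == labels[i]
                       for i in folds[k])
    return correct / len(data)

# trivial majority-class "classifier" as a stand-in for a real model
train = lambda xs, ys: max(set(ys), key=ys.count)
predict = lambda model, x: model
acc = ten_fold_cv([[v] for v in range(100)], [0] * 70 + [1] * 30, train, predict)
print(acc)  # → 0.7 (the majority class covers 70% of the cases)
```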
Population Markov Chain Monte Carlo
 Machine Learning
, 2003
Abstract

Cited by 12 (2 self)
Stochastic search algorithms inspired by physical and biological systems are applied to the problem of learning directed graphical probability models in the presence of missing observations and hidden variables. For this class of problems, deterministic search algorithms tend to halt at local optima, requiring random restarts to obtain solutions of acceptable quality. We compare three stochastic search algorithms: a Metropolis-Hastings sampler (MHS), an evolutionary algorithm (EA), and a new hybrid algorithm called Population Markov Chain Monte Carlo, or popMCMC. PopMCMC uses statistical information from a population of MHSs to inform the proposal distributions for individual samplers in the population. Experimental results show that popMCMC and EAs learn more efficiently than the MHS with no information exchange. Populations of MCMC samplers exhibit more diversity than populations evolving according to EAs not satisfying physics-inspired local reversibility conditions. KEY WORDS: Markov chain Monte Carlo, Metropolis-Hastings algorithm, graphical probabilistic models, Bayesian networks, Bayesian learning, evolutionary algorithms.
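The no-information-exchange MHS baseline the abstract refers to can be sketched as a generic single-chain Metropolis-Hastings sampler. The bit-string target below is a toy assumption; popMCMC's contribution is to run a population of such chains and shape the proposal using population statistics, which this sketch omits.

```python
import random
from math import exp

def metropolis_hastings(log_target, propose, x0, n_steps, seed=0):
    """Single-chain Metropolis-Hastings sampler with a symmetric proposal."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        y = propose(x, rng)
        # symmetric proposal: accept with probability min(1, pi(y)/pi(x))
        if log_target(y) >= log_target(x) or \
           rng.random() < exp(log_target(y) - log_target(x)):
            x = y
        samples.append(x)
    return samples

def flip_one_bit(s, rng):
    """Proposal: flip a single uniformly chosen bit."""
    i = rng.randrange(len(s))
    return s[:i] + [s[i] ^ 1] + s[i + 1:]

# toy target over 10-bit strings: pi(s) proportional to exp(1.5 * #ones(s)),
# i.e. each bit independently favours 1 with probability e^1.5 / (1 + e^1.5)
samples = metropolis_hastings(lambda s: 1.5 * sum(s), flip_one_bit,
                              [0] * 10, 5000)
mean_ones = sum(sum(s) for s in samples[1000:]) / 4000   # discard burn-in
print(round(mean_ones, 2))
```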