Results 11–20 of 222
Bayesian Optimization Algorithm, Population Sizing, and Time to Convergence
Proceedings of the Genetic and Evolutionary Computation Conference, 275–282 (also IlliGAL), 2000
Abstract
This paper analyzes convergence properties of the ...
Cited by 39 (18 self)
Probabilistic Model Building and Competent Genetic Programming
Genetic Programming Theory and Practice, Chapter 13, 2003
Abstract
This paper describes a probabilistic model building genetic programming (PMBGP) approach developed based on the extended compact genetic algorithm (eCGA). Unlike traditional genetic programming, which uses fixed recombination operators, the proposed PMBGP adapts linkages. The proposed algorithms ...
Cited by 39 (10 self)
Bayesian Optimization Algorithm, Decision Graphs, and Occam's Razor
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001), 519–526 (also IlliGAL), 2001
Abstract
This paper discusses the use of various scoring metrics in the Bayesian optimization algorithm (BOA), which uses Bayesian networks to model promising solutions and generate new ones. The use of decision graphs in Bayesian networks to improve the performance of the BOA is proposed. To favor simple models, a complexity measure is incorporated into the Bayesian-Dirichlet metric for Bayesian networks with decision graphs. The presented modifications are compared on a number of interesting problems.
Cited by 37 (20 self)
Relevance Estimation and Value Calibration of Evolutionary Algorithm Parameters
, 2007
Abstract
The main objective of this paper is to present and evaluate a method that helps to calibrate the parameters of an evolutionary algorithm in a systematic and semi-automated manner. The method for Relevance Estimation and Value Calibration of EA parameters (REVAC) is empirically evaluated in two different ways. First, we use abstract test cases reflecting the typical properties of EA parameter spaces. Here we observe that REVAC is able to approximate the exact (hand-coded) relevance of parameters, and it works robustly with measurement noise that is highly variable and not normally distributed. Second, we use REVAC for calibrating GAs on a number of common objective functions. Here we obtain a common-sense validation: REVAC finds the mutation rate pm to be much more sensitive than the crossover rate pc, and it recommends intuitively sound values: pm between 0.01 and 0.1, and 0.6 ≤ pc ≤ 1.0.
Cited by 34 (13 self)
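As a concrete reading of the recommended values, the following is a minimal bitstring GA run with pm = 0.05 and pc = 0.8, both inside REVAC's suggested ranges. The GA structure, the OneMax objective, and all names here are illustrative assumptions, not taken from the paper:

```python
import random

def simple_ga(fitness, n_bits=30, pop_size=50, pm=0.05, pc=0.8,
              generations=60, seed=0):
    """Plain generational GA using parameter values inside the ranges
    REVAC recommends (pm in [0.01, 0.1], pc in [0.6, 1.0])."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Binary tournament selection for two parents.
            p1, p2 = (max(rng.sample(pop, 2), key=fitness) for _ in range(2))
            # One-point crossover with probability pc.
            if rng.random() < pc:
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Per-bit mutation with probability pm.
            child = [bit ^ (rng.random() < pm) for bit in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

onemax = sum  # count of 1-bits; maximum is n_bits
best = simple_ga(onemax, seed=42)
```

Varying pm outside [0.01, 0.1] degrades results far faster than varying pc, which is the sensitivity asymmetry the abstract reports.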
Optimization in continuous domains by learning and simulation of Gaussian networks
Abstract
This paper shows how the Gaussian network paradigm can be used to solve optimization problems in continuous domains. Some methods of structure learning from data and simulation of Gaussian networks are applied in the Estimation of Distribution Algorithm (EDA), and new methods based on information theory are proposed. Experimental results are also presented.
1 Estimation of Distribution Algorithm approaches in continuous domains
Figure 1 shows a schematic of the EDA approach for continuous domains. We will use x = (x_1, ..., x_n) to denote individuals, and D_l to denote the population of N individuals in the l-th generation. Similarly, D_l^Se will represent the population of the Se selected individuals from D_l. In the EDA [9] our interest will be to estimate f(x | D^Se), that is, the joint probability density function over one individual x being among the selected individuals. We denote by f_l(x) = f_l(x | D_{l-1}^Se) the joint density of the l-th genera...
Cited by 32 (4 self)
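The loop the notation above describes (select D_l^Se from D_l, estimate a density, sample the next generation) can be sketched with an independent-Gaussian model per variable. This is a deliberate simplification: the paper learns full Gaussian networks, i.e. multivariate dependencies, whereas this sketch assumes independence; the function name and sphere objective are illustrative:

```python
import random
import statistics

def gaussian_eda(objective, n=5, pop_size=100, se=30, generations=50, seed=0):
    """Minimal continuous EDA: fit an independent Gaussian per variable
    to the Se selected individuals (D_l^Se), then sample the next
    population from the learned density f_l(x)."""
    rng = random.Random(seed)
    # Initial population D_0: uniform in [-5, 5]^n.
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: the Se best individuals form D_l^Se.
        selected = sorted(pop, key=objective)[:se]
        # Estimate f_l(x) as a product of univariate Gaussians.
        mus = [statistics.fmean(ind[i] for ind in selected) for i in range(n)]
        sigmas = [max(statistics.pstdev([ind[i] for ind in selected]), 1e-9)
                  for i in range(n)]
        # Sample the next population D_{l+1} from the learned density.
        pop = [[rng.gauss(mus[i], sigmas[i]) for i in range(n)]
               for _ in range(pop_size)]
    return min(pop, key=objective)

sphere = lambda x: sum(v * v for v in x)  # minimum 0 at the origin
best = gaussian_eda(sphere, seed=1)
```

Replacing the product of univariate Gaussians with a learned Gaussian network is precisely the step that lets the algorithm capture dependencies between variables.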
Fitness inheritance in the Bayesian optimization algorithm
, 2004
Abstract
This paper describes how fitness inheritance can be used to estimate fitness for a proportion of newly sampled candidate solutions in the Bayesian optimization algorithm (BOA). The goal of estimating fitness for some candidate solutions is to reduce the number of fitness evaluations for problems where fitness evaluation is expensive. The Bayesian networks used in BOA to model promising solutions and generate new ones are extended to allow not only for modeling and sampling candidate solutions, but also for estimating their fitness. The results indicate that fitness inheritance is a promising concept in BOA, because the population-sizing requirements for building appropriate models of promising solutions lead to good fitness estimates even if only a small proportion of candidate solutions is evaluated using the actual fitness function. This can lead to a reduction of the number of actual fitness evaluations by a factor of 30 or more.
Cited by 31 (22 self)
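The evaluation-saving mechanism can be illustrated outside BOA. In the sketch below, only a fraction p_eval of offspring are scored with the real function; the rest inherit the mean fitness of their parents. Note the hedge: BOA estimates fitness from its Bayesian-network model, whereas parent averaging is the simpler classic inheritance scheme; the toy GA, OneMax objective, and all names are illustrative:

```python
import random

def inherited_fitness_ga(f, n_bits=20, pop_size=60, p_eval=0.2,
                         generations=40, seed=0):
    """Toy GA with fitness inheritance: only a fraction p_eval of the
    offspring are scored with the real function f; the rest inherit the
    mean fitness of their two parents."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]  # initial generation: all evaluated
    true_evals = pop_size
    for _ in range(generations):
        new_pop, new_fit = [], []
        for _ in range(pop_size):
            # Binary tournament selection on the (possibly estimated) fitness.
            i1, i2 = (max(rng.sample(range(pop_size), 2),
                          key=lambda i: fit[i]) for _ in range(2))
            # Uniform crossover followed by bit-flip mutation.
            child = [pop[i1][b] if rng.random() < 0.5 else pop[i2][b]
                     for b in range(n_bits)]
            child = [bit ^ (rng.random() < 1.0 / n_bits) for bit in child]
            if rng.random() < p_eval:
                new_fit.append(f(child))
                true_evals += 1
            else:  # inherit: mean of the two parents' fitness values
                new_fit.append((fit[i1] + fit[i2]) / 2.0)
            new_pop.append(child)
        pop, fit = new_pop, new_fit
    # Final report re-scores the population with the real f (not counted).
    return max(pop, key=f), true_evals

onemax = sum
best, evals = inherited_fitness_ga(onemax, seed=3)
```

With p_eval = 0.2, roughly four out of five real evaluations are avoided, which is the kind of saving the abstract quantifies as a factor of 30 or more in BOA's model-based setting.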
Designing competent mutation operators via probabilistic model building of neighborhoods
In Deb, K., et al. (Eds.), Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2004), Part II, LNCS 3103, 2004
Abstract
This paper presents a competent selectomutative genetic algorithm (GA) that adapts linkage and solves hard problems quickly, reliably, and accurately. A probabilistic model building process is used to automatically identify key building blocks (BBs) of the search problem. The mutation operator uses the probabilistic model of linkage groups to find the best among competing building blocks. The competent selectomutative GA successfully solves additively separable problems of bounded difficulty, requiring only a subquadratic number of function evaluations. The results show that for additively separable problems the probabilistic model building BB-wise mutation scales as O(2^k m^1.5), and requires O(√k log m) fewer function evaluations than its selectorecombinative counterpart, confirming theoretical results reported elsewhere (Sastry & Goldberg, 2004).
Cited by 31 (21 self)
Expanding From Discrete To Continuous Estimation Of Distribution Algorithms: The IDEA
In Parallel Problem Solving from Nature (PPSN VI), 2000
Abstract
The direct application of statistics to stochastic optimization based on iterated density estimation has become more important and more present in evolutionary computation over the last few years. The estimation of densities over selected samples and the sampling from the resulting distributions is a combination of the recombination and mutation steps used in evolutionary algorithms. We introduce the framework named IDEA to formalize this notion. By combining continuous probability theory with techniques from existing algorithms, this framework allows us to define new continuous evolutionary optimization algorithms.
1 Introduction
Algorithms in evolutionary optimization guide their search through statistics based on a vector of samples, often called a population. By using this stochastic information, non-deterministic induction is performed in order to attempt to use the structure of the search space and thereby aid the search for the optimal solution. In order to perform induct...
Cited by 30 (7 self)
Hierarchical Problem Solving by the Bayesian Optimization Algorithm
Proceedings of the Genetic and Evolutionary Computation Conference, 2000
Abstract
The paper discusses three major issues. First, it discusses why it makes sense to approach problems in a hierarchical fashion. Second, it defines the class of hierarchically decomposable functions that can be used to test the algorithms that approach problems in this fashion. Finally, the Bayesian optimization algorithm (BOA) is extended in order to solve the proposed class of problems.
Cited by 28 (7 self)
Analysis and Improvement of Fitness Exploitation in XCS: Bounding Models, Tournament Selection, and Bilateral Accuracy
Evolutionary Computation, 2003
Abstract
The evolutionary learning mechanism in XCS strongly depends on its accuracy-based fitness approach. The approach is meant to result in an evolutionary drive from classifiers of low accuracy to those of high accuracy. Since, given inaccuracy, lower specificity often corresponds to lower accuracy, fitness pressure most often also results in a pressure towards higher specificity. Moreover ...
Cited by 28 (17 self)