Bayesian Optimization Algorithm: From Single Level to Hierarchy
, 2002
Abstract

Cited by 90 (18 self)
There are four primary goals of this dissertation. First, design a competent optimization algorithm capable of learning and exploiting appropriate problem decomposition by sampling and evaluating candidate solutions. Second, extend the proposed algorithm to enable the use of hierarchical decomposition as opposed to decomposition on only a single level. Third, design a class of difficult hierarchical problems that can be used to test the algorithms that attempt to exploit hierarchical decomposition. Fourth, test the developed algorithms on the designed class of problems and several real-world applications. The dissertation proposes the Bayesian optimization algorithm (BOA), which uses Bayesian networks to model the promising solutions found so far and sample new candidate solutions. BOA is theoretically and empirically shown to be capable of both learning a proper decomposition of the problem and exploiting the learned decomposition to ensure robust and scalable search for the optimum across a wide range of problems. The dissertation then identifies important features that must be incorporated into the basic BOA to solve problems that are not decomposable on a single level, but that can still be solved by decomposition over multiple levels of difficulty. Hierarchical …
Parallel estimation of distribution algorithms
, 2002
Abstract

Cited by 25 (4 self)
The thesis deals with the new evolutionary paradigm based on the concept of Estimation of Distribution Algorithms (EDAs), which use a probabilistic model of the promising solutions found so far to obtain new candidate solutions of the optimized problem. There are six primary goals of this thesis: 1. Suggestion of a new formal description of the EDA algorithm. This high-level concept can be used to compare the generality of various probabilistic models by comparing the properties of the underlying mappings. Also, some convergence issues are discussed and theoretical ways for further improvements are proposed. 2. Development of a new probabilistic model and methods capable of dealing with continuous parameters. The resulting Mixed Bayesian Optimization Algorithm (MBOA) uses a set of decision trees to express the probability model. Its main advantage over the widely used IDEA and EGNA approaches is its backward compatibility with discrete domains, so it is uniquely capable of learning linkage between mixed continuous-discrete genes. MBOA handles the discretization of continuous parameters as an integral part of the learning process, which outperforms the histogram-based …
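The EDA template this line of work formalizes (select promising solutions, estimate a probabilistic model of them, sample new candidates from the model) can be sketched with the simplest univariate binary model. This is an illustrative toy on the OneMax function, not the thesis's MBOA; the function name and all parameter values are choices made here for the example:

```python
import random

def umda_onemax(n=20, pop_size=60, n_select=30, generations=60, seed=1):
    """Minimal univariate EDA (UMDA) maximizing OneMax (count of 1-bits)."""
    rng = random.Random(seed)
    # Start from the uniform model: each bit is 1 with probability 0.5.
    p = [0.5] * n
    best = None
    for _ in range(generations):
        # Sample a population from the current probabilistic model.
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)          # OneMax fitness = sum of bits
        if best is None or sum(pop[0]) > sum(best):
            best = pop[0]
        selected = pop[:n_select]
        # Re-estimate the univariate marginals from the selected individuals.
        p = [sum(ind[i] for ind in selected) / n_select for i in range(n)]
        # Clamp marginals away from 0/1 to keep some exploration.
        p = [min(max(pi, 1.0 / n), 1.0 - 1.0 / n) for pi in p]
    return best

best = umda_onemax()
```

The clamping step is one simple guard against the marginals locking at 0 or 1 before the optimum is found; richer models (trees, Bayesian networks) replace the independent-marginal estimate.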
Mathematical Modelling of UMDAc Algorithm with Tournament Selection. Behaviour on Linear and Quadratic Functions
, 2002
Abstract

Cited by 20 (0 self)
This paper presents a theoretical study of the behaviour of the Univariate Marginal Distribution Algorithm for continuous domains (UMDAc) in dimension n. To this end, the algorithm with tournament selection is modelled mathematically, assuming an infinite number of tournaments. The mathematical model is then used to study the algorithm's behaviour in the minimization of linear functions L(x) = a_0 + ∑_{i=1}^n a_i x_i and the quadratic function Q(x) = ∑_{i=1}^n x_i^2, with x = (x_1, …, x_n) and a_i ∈ ℝ, i = 0, 1, …, n. Linear functions are used to model the algorithm when far from the optimum, while the quadratic function is used to analyze the algorithm when near the optimum. The analysis shows that the algorithm performs poorly on the linear function L_1(x) = ∑_{i=1}^n x_i. In the case of the quadratic function Q(x), the algorithm's behaviour was analyzed for certain particular dimensions. After taking into account some simplifications, we can conclude that when the algorithm starts near the optimum, UMDAc is able to reach it. Moreover, the speed of convergence to the optimum decreases as the dimension increases.
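The UMDAc iteration can be sketched as follows. This toy uses truncation selection rather than the tournament selection analyzed in the paper, and minimizes the quadratic (sphere) function starting near the optimum; the function name and parameters are assumptions of the example:

```python
import random
import statistics

def umdac_sphere(n=5, pop_size=100, n_select=50, generations=80, seed=3):
    """Toy UMDAc: one Gaussian per coordinate, minimizing Q(x) = sum(x_i^2).
    Truncation selection stands in for the paper's tournament selection."""
    rng = random.Random(seed)
    mu = [0.5] * n            # start near (not at) the optimum
    sigma = [1.0] * n
    best_f = float("inf")
    for _ in range(generations):
        pop = [[rng.gauss(mu[i], sigma[i]) for i in range(n)]
               for _ in range(pop_size)]
        pop.sort(key=lambda x: sum(v * v for v in x))   # sphere fitness
        best_f = min(best_f, sum(v * v for v in pop[0]))
        sel = pop[:n_select]
        # Re-fit each marginal Gaussian by maximum likelihood on the survivors.
        for i in range(n):
            vals = [ind[i] for ind in sel]
            mu[i] = statistics.fmean(vals)
            sigma[i] = max(statistics.pstdev(vals), 1e-12)
    return best_f

best_f = umdac_sphere()
```

Started near the optimum, the sketch converges, consistent with the paper's conclusion; started far away, the per-generation variance shrinkage bounds how much distance it can cover.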
Analyzing the PBIL Algorithm by Means of Discrete Dynamical Systems
 Complex Systems
Abstract

Cited by 18 (3 self)
In this paper the convergence behavior of the Population Based Incremental Learning algorithm (PBIL) is analyzed using discrete dynamical systems. A discrete dynamical system is associated with the PBIL algorithm. We demonstrate that the behavior of the PBIL algorithm follows the iterates of the discrete dynamical system for a long time when the parameter α is near zero. We show that all the points of the search space are fixed points of the dynamical system, and that the local optimum points of the function to optimize coincide with the stable fixed points. Hence it can be deduced that the PBIL algorithm converges to the global optimum on unimodal functions.
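A minimal PBIL sketch, assuming the usual update that shifts the probability vector toward each generation's best sample; the learning rate alpha plays the role of the small parameter in the dynamical-systems analysis, and the OneMax test function and parameter values are assumptions of this example:

```python
import random

def pbil_onemax(n=20, samples=50, alpha=0.1, generations=150, seed=7):
    """Toy PBIL maximizing OneMax; alpha is the learning rate."""
    rng = random.Random(seed)
    p = [0.5] * n                      # probability vector: the whole model
    best_f = -1
    for _ in range(generations):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(samples)]
        gen_best = max(pop, key=sum)
        best_f = max(best_f, sum(gen_best))
        # Shift each component of p toward the generation's best sample;
        # the update keeps every component inside [0, 1].
        p = [(1 - alpha) * p[i] + alpha * gen_best[i] for i in range(n)]
    return best_f, p

best_f, p = pbil_onemax()
```

The update is a convex combination, so p stays in [0, 1]^n and, on a run like this, drifts toward a corner of the hypercube, matching the fixed-point picture of the analysis.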
Probabilistic Model-Building Genetic Algorithms in Permutation Representation Domain Using Edge Histogram
 Proc. of the 7th Int. Conf. on Parallel Problem Solving from Nature (PPSN VII)
, 2002
Abstract

Cited by 15 (9 self)
Recently, there has been a growing interest in developing evolutionary algorithms based on probabilistic modeling. In this scheme, the offspring population is generated according to the estimated probability density model of the parents instead of using recombination and mutation operators. In this paper, we propose probabilistic model-building genetic algorithms (PMBGAs) in the permutation representation domain using edge histogram based sampling algorithms (EHBSAs). Two types of sampling algorithms, without template (EHBSA/WO) and with template (EHBSA/WT), are presented. The algorithms were tested on the TSP, and EHBSA/WT worked fairly well with a small population size on the test problems used. It also worked better than well-known traditional two-parent recombination operators.
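The edge-histogram idea can be sketched for the template-free variant: count how often each undirected edge appears in the parent tours, then grow a new tour city by city with probabilities proportional to those counts. The bias term epsilon, the function names, and the tiny 8-city setup are assumptions of this illustration, not the paper's exact EHBSA/WO:

```python
import random

def edge_histogram(population, n_cities, epsilon=1.0):
    """Symmetric edge-frequency matrix of the parent tours, plus a bias
    epsilon so unseen edges keep a nonzero sampling probability."""
    e = [[epsilon] * n_cities for _ in range(n_cities)]
    for tour in population:
        for a, b in zip(tour, tour[1:] + tour[:1]):   # edges of the closed tour
            e[a][b] += 1.0
            e[b][a] += 1.0
    return e

def sample_tour(e, n_cities, rng):
    """Grow a tour, choosing each next city among the unvisited ones with
    probability proportional to its edge count with the current city."""
    tour = [rng.randrange(n_cities)]
    unvisited = set(range(n_cities)) - {tour[0]}
    while unvisited:
        cur = tour[-1]
        cand = list(unvisited)
        weights = [e[cur][c] for c in cand]
        tour.append(rng.choices(cand, weights=weights, k=1)[0])
        unvisited.remove(tour[-1])
    return tour

rng = random.Random(11)
parents = [random.Random(s).sample(range(8), 8) for s in range(10)]
child = sample_tour(edge_histogram(parents, 8), 8, rng)
```

Restricting the choice to unvisited cities guarantees every sampled individual is a valid permutation, which is the point of modeling edges rather than per-position marginals.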
The Correlation-Triggered Adaptive Variance Scaling IDEA
 In Proceedings of the 8th Conference on Genetic and Evolutionary Computation
, 2006
Abstract

Cited by 13 (1 self)
It has previously been shown analytically and experimentally that continuous Estimation of Distribution Algorithms (EDAs) based on the normal pdf can easily suffer from premature convergence. This paper takes a principled first step towards solving this problem. First, prerequisites for the successful use of search distributions in EDAs are presented. Then, an adaptive variance scaling scheme is introduced that aims at reducing the risk of premature convergence. Integrating the scheme into the iterated density-estimation evolutionary algorithm (IDEA) yields the correlation-triggered adaptive variance scaling IDEA (CT-AVS-IDEA). The CT-AVS-IDEA is compared to the original IDEA and the Evolution Strategy with Covariance Matrix Adaptation (CMA-ES) on a wide range of unimodal test problems by means of a scalability analysis. It is found that the average number of fitness evaluations grows subquadratically with the dimensionality, competitively with the CMA-ES. In addition, CT-AVS-IDEA is indeed found to enlarge the class of problems that continuous EDAs can solve reliably.
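The premature-convergence effect that motivates variance scaling is easy to reproduce with a plain maximum-likelihood Gaussian EDA: the fitted variance shrinks geometrically under truncation selection, so the search can only traverse a bounded distance before stalling. This sketch demonstrates the problem, not the CT-AVS remedy; function name and parameters are assumptions of the example:

```python
import random
import statistics

def ml_gaussian_eda(mu0, generations=100, pop_size=80, n_select=40, seed=5):
    """1-D maximum-likelihood Gaussian EDA minimizing f(x) = x^2,
    with mean mu0 and unit standard deviation at the start."""
    rng = random.Random(seed)
    mu, sigma = mu0, 1.0
    best_f = float("inf")
    for _ in range(generations):
        pop = [rng.gauss(mu, sigma) for _ in range(pop_size)]
        pop.sort(key=lambda x: x * x)
        best_f = min(best_f, pop[0] ** 2)
        sel = pop[:n_select]
        # ML refit on the selected half: the variance contracts every step.
        mu = statistics.fmean(sel)
        sigma = max(statistics.pstdev(sel), 1e-12)
    return best_f

near = ml_gaussian_eda(0.5)    # started near the optimum: converges
far = ml_gaussian_eda(10.0)    # started far away: stalls well short of 0
```

The contrast between the two runs is exactly the failure mode that adaptive variance scaling counteracts by enlarging the sampled variance while progress is still being made.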
Matching inductive search bias and problem structure in continuous estimation of distribution algorithms
 European Journal of Operational Research
Abstract

Cited by 12 (2 self)
Research into the dynamics of Genetic Algorithms (GAs) has led to the field of Estimation-of-Distribution Algorithms (EDAs). For discrete search spaces, EDAs have been developed that have obtained very promising results on a wide variety of problems. In this paper we investigate the conditions under which the adaptation of this technique to continuous search spaces fails to perform optimization efficiently. We show that without careful interpretation and adaptation of lessons learned from discrete EDAs, continuous EDAs will fail to perform efficient optimization on even some of the simplest problems. We reconsider the most important lessons to be learned in the design of EDAs and subsequently show how we can use this knowledge to extend continuous EDAs that were obtained by straightforward adaptation from the discrete domain so as to obtain an improvement in performance. Experimental results are presented to illustrate this improvement and to additionally confirm experimentally that a proper adaptation of discrete EDAs to the continuous case indeed requires careful consideration. Key words: Estimation-of-distribution algorithms; Numerical optimization;
CIXL2: A Crossover Operator for Evolutionary Algorithms Based on Population Features
 Journal of Artificial Intelligence Research (JAIR)
, 2005
Abstract

Cited by 9 (2 self)
In this paper we propose a crossover operator for evolutionary algorithms with real values that is based on the statistical theory of population distributions. The operator is based on the theoretical distribution of the values of the genes of the best individuals in the population. The proposed operator takes into account the localization and dispersion features of the best individuals of the population, with the objective that these features be inherited by the offspring. Our aim is the optimization of the balance between exploration and exploitation in the search process. In order to test the efficiency and robustness of this crossover, we have used a set of functions to be optimized with regard to different criteria, such as multimodality, separability, regularity and epistasis. With this set of functions we can draw conclusions as a function of the problem at hand. We analyze the results using ANOVA and multiple comparison statistical tests. As an example of how our crossover can be used to solve artificial intelligence problems, we have applied the proposed model to the problem of obtaining the weight of each network in an ensemble of neural networks. The results obtained surpass the performance of standard methods.
A restart univariate estimation of distribution algorithm: sampling under mixed Gaussian and Lévy probability distribution
 in: Proceedings of the IEEE Congress on Evolutionary Computation (CEC2008), Hong Kong
, 2008
Abstract

Cited by 8 (2 self)
A restart univariate estimation of distribution algorithm (LSEDAgl) for large scale global optimization (LSGO) problems is proposed in this paper. Three efficient strategies are adopted to improve the performance of the classical univariate EDA on LSGO problems: sampling under a mixed Gaussian and Lévy probability distribution, a standard deviation control strategy, and a restart strategy. The motivation of this work is to extend EDAs to the LSGO domain in a reasonable manner. A comparison among LSEDAgl, the EDA with the standard deviation control strategy only (EDA-STDC), and the similar EDA variant, the continuous univariate marginal distribution algorithm (UMDAc), is carried out on classical test functions. Based on the general comparison standard, the strengths and weaknesses of the algorithms are discussed. In addition, LSEDAgl is tested on the 7 functions with 100, 500, and 1000 dimensions provided in the CEC'2008 Special Session on LSGO. This work is also expected to provide a comparison result for the CEC'2008 special session.
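Mixing a Gaussian with a heavy-tailed Lévy component combines local refinement with occasional long jumps. One common way to approximate a symmetric Lévy-stable step is Mantegna's algorithm; this sketch assumes that approximation and an 80/20 Gaussian/Lévy mixture, neither of which is taken from the paper:

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Heavy-tailed step via Mantegna's algorithm for symmetric
    Lévy-stable variates with index beta (a common approximation)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma_u)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def mixed_sample(mu, sigma, rng, p_gauss=0.8):
    """Sample around mu: Gaussian with probability p_gauss (local search),
    otherwise a Lévy step (occasional long jump to escape stagnation)."""
    if rng.random() < p_gauss:
        return mu + rng.gauss(0, sigma)
    return mu + sigma * levy_step(rng)

rng = random.Random(2)
xs = [mixed_sample(0.0, 1.0, rng) for _ in range(1000)]
```

In a univariate EDA each coordinate would be sampled this way around its learned mean, with the standard deviation control and restart strategies layered on top.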
Program evolution by integrating EDP and GP
 In Genetic and Evolutionary Computation Conference
, 2004
Abstract

Cited by 7 (0 self)
This paper discusses the performance of a hybrid system which consists of EDP and GP. EDP, Estimation of Distribution Programming, is a program evolution method based on a probabilistic model, where the probability distribution of a program is estimated using a Bayesian network, and a population evolves by repeating distribution estimation and program generation without crossover and mutation. Applying the hybrid system of EDP and GP to various problems, we discovered some important tendencies in the behavior of this hybrid system. The hybrid system was not only superior to pure GP in search performance but also had interesting features in program evolution. Further tests revealed how and when EDP and GP compensate for each other. We show some experimental results of program evolution by the hybrid system and discuss the characteristics of both EDP and GP.