Results 1–10 of 41
The CMA Evolution Strategy: A Comparing Review
 STUDFUZZ
, 2006
"... Derived from the concept of selfadaptation in evolution strategies, the CMA (Covariance Matrix Adaptation) adapts the covariance matrix of a multivariate normal search distribution. The CMA was originally designed to perform well with small populations. In this review, the argument starts out with ..."
Abstract

Cited by 101 (29 self)
Derived from the concept of self-adaptation in evolution strategies, the CMA (Covariance Matrix Adaptation) adapts the covariance matrix of a multivariate normal search distribution. The CMA was originally designed to perform well with small populations. In this review, the argument starts out with large population sizes, reflecting recent extensions of the CMA algorithm. Commonalities with and differences from continuous Estimation of Distribution Algorithms are analyzed. The aspects of reliability of the estimation, overall step-size control, and independence from the coordinate system (invariance) become particularly important with small population sizes. Consequently, performing the adaptation task with small populations is more intricate.
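The covariance adaptation described in this abstract can be sketched in a few lines. The following is a toy rank-μ-style update under simplifying assumptions (fixed step size, no evolution paths, and all names our own), not Hansen's full CMA-ES:

```python
import numpy as np

def toy_cma_step(f, mean, sigma, C, lam=20, mu=10, c_mu=0.3, rng=None):
    """One toy CMA-like generation: sample from N(mean, sigma^2 C),
    truncation-select the mu best, and blend a weighted outer-product
    (rank-mu style) estimate into the covariance matrix. The real
    CMA-ES adds evolution paths and cumulative step-size adaptation."""
    n = len(mean)
    A = np.linalg.cholesky(C)
    x = mean + sigma * rng.standard_normal((lam, n)) @ A.T
    idx = np.argsort([f(xi) for xi in x])[:mu]       # best mu of lam
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                     # log-linear recombination weights
    steps = (x[idx] - mean) / sigma                  # selected steps in sigma units
    new_mean = mean + sigma * (w @ steps)
    new_C = (1 - c_mu) * C + c_mu * sum(wi * np.outer(s, s) for wi, s in zip(w, steps))
    return new_mean, new_C

# Usage: minimize the 5-D sphere function
rng = np.random.default_rng(0)
sphere = lambda x: float(x @ x)
m, C = np.ones(5), np.eye(5)
for _ in range(60):
    m, C = toy_cma_step(sphere, m, 0.5, C, rng=rng)
```

The blend factor `c_mu` plays the role of the covariance learning rate: small values keep the estimate reliable with few samples, which is exactly the small-population concern the review discusses.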
Natural Evolution Strategies
"... Abstract — This paper presents Natural Evolution Strategies (NES), a novel algorithm for performing realvalued ‘black box ’ function optimization: optimizing an unknown objective function where algorithmselected function measurements constitute the only information accessible to the method. Natura ..."
Abstract

Cited by 41 (22 self)
Abstract — This paper presents Natural Evolution Strategies (NES), a novel algorithm for performing real-valued ‘black box’ function optimization: optimizing an unknown objective function where algorithm-selected function measurements constitute the only information accessible to the method. Natural Evolution Strategies search the fitness landscape using a multivariate normal distribution with a self-adapting mutation matrix to generate correlated mutations in promising regions. NES shares this property with Covariance Matrix Adaptation (CMA), an Evolution Strategy (ES) which has been shown to perform well on a variety of high-precision optimization tasks. The Natural Evolution Strategies algorithm, however, is simpler, less ad hoc and more principled. Self-adaptation of the mutation matrix is derived using a Monte Carlo estimate of the natural gradient towards better expected fitness. By following the natural gradient instead of the ‘vanilla’ gradient, we can ensure efficient update steps while preventing early convergence due to overly greedy updates, resulting in reduced sensitivity to local suboptima. We show NES has competitive performance with CMA on unimodal tasks, while outperforming it on several multimodal tasks that are rich in deceptive local optima.
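The Monte Carlo natural-gradient estimate has a simple closed form when the search distribution is restricted to an isotropic Gaussian, which makes the idea easy to sketch. A minimal NES-style step under that restriction (the paper adapts a full mutation matrix; the function name and learning rates here are illustrative choices of our own):

```python
import numpy as np

def isotropic_nes_step(f, mean, log_sigma, lam=50, lr_m=1.0, lr_s=0.1, rng=None):
    """Monte Carlo natural-gradient ascent on rank-shaped expected fitness
    for an isotropic Gaussian N(mean, sigma^2 I). For this parameterization
    the Fisher matrix is diagonal, so preconditioning the score-function
    ('vanilla') gradient reduces to scaling by sigma for the mean and by
    1/(2n) for log sigma."""
    n, sigma = len(mean), np.exp(log_sigma)
    z = rng.standard_normal((lam, n))
    fit = np.array([f(mean + sigma * zi) for zi in z])
    u = -np.argsort(np.argsort(fit)).astype(float)   # rank utilities: best sample highest
    u = (u - u.mean()) / u.std()
    nat_g_mean = sigma * (u[:, None] * z).mean(axis=0)
    nat_g_ls = (u * ((z * z - 1).sum(axis=1))).mean() / (2 * n)
    return mean + lr_m * nat_g_mean, log_sigma + lr_s * nat_g_ls

# Usage: minimize the 3-D sphere function
rng = np.random.default_rng(2)
mean, log_sigma = np.full(3, 2.0), 0.0
for _ in range(200):
    mean, log_sigma = isotropic_nes_step(lambda x: float(x @ x), mean, log_sigma, rng=rng)
```

The rank-based utilities make the update invariant to monotone rescalings of the fitness, and the Fisher preconditioning keeps the step size well scaled even as sigma shrinks, which is the abstract's point about avoiding overly greedy updates.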
An estimation of distribution algorithm with intelligent local search for rule-based nurse rostering
, 2007
"... ..."
Parallel estimation of distribution algorithms
, 2002
"... The thesis deals with the new evolutionary paradigm based on the concept of Estimation of Distribution Algorithms (EDAs) that use probabilistic model of promising solutions found so far to obtain new candidate solutions of optimized problem. There are six primary goals of this thesis: 1. Suggestion ..."
Abstract

Cited by 26 (4 self)
The thesis deals with the new evolutionary paradigm based on the concept of Estimation of Distribution Algorithms (EDAs), which use a probabilistic model of the promising solutions found so far to obtain new candidate solutions of the optimized problem. There are six primary goals of this thesis: 1. Suggestion of a new formal description of the EDA algorithm. This high-level concept can be used to compare the generality of various probabilistic models by comparing the properties of the underlying mappings. Also, some convergence issues are discussed and theoretical ways for further improvements are proposed. 2. Development of a new probabilistic model and methods capable of dealing with continuous parameters. The resulting Mixed Bayesian Optimization Algorithm (MBOA) uses a set of decision trees to express the probability model. Its main advantage over the widely used IDEA and EGNA approaches is its backward compatibility with discrete domains, so it is uniquely capable of learning linkage between mixed continuous-discrete genes. MBOA handles the discretization of continuous parameters as an integral part of the learning process, which outperforms the histogram-based ...
Advancing Continuous IDEAs with Mixture Distributions and Factorization Selection Metrics
 Proceedings of the Optimization by Building and Using Probabilistic Models OBUPM Workshop at the Genetic and Evolutionary Computation Conference GECCO–2001
, 2001
"... Evolutionary optimization based on proba bilistic models has so far been limited to the use of factorizations in the case of continuous representations. Furthermore, a maximum complexity parameter n was required previously to construct factorizations to prevent unnecessary complexity to be in ..."
Abstract

Cited by 23 (8 self)
Evolutionary optimization based on probabilistic models has so far been limited to the use of factorizations in the case of continuous representations. Furthermore, a maximum-complexity parameter n was previously required to construct factorizations, to prevent unnecessary complexity from being introduced into the factorization. In this paper, we advance these techniques by using clustering and the EM algorithm to allow for mixture distributions.
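The clustering idea can be sketched as follows: truncation-select, cluster the survivors, fit one normal distribution per cluster, and resample from the weighted mixture. A minimal sketch with k-means standing in for full EM and all names our own:

```python
import numpy as np

def mixture_eda_step(f, x, k=2, rng=None):
    """One generation of a mixture-based continuous EDA: truncation-select,
    cluster the survivors with a few k-means iterations (a cheap stand-in
    for EM), fit a Gaussian to each cluster, and resample from the mixture
    with weights proportional to cluster sizes."""
    n, d = x.shape
    sel = x[np.argsort([f(xi) for xi in x])[: (2 * n) // 5]]   # keep the best 40%
    centers = sel[rng.choice(len(sel), k, replace=False)]
    for _ in range(10):                                        # lightweight k-means
        lbl = ((sel[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([sel[lbl == j].mean(0) if (lbl == j).any()
                            else centers[j] for j in range(k)])
    offspring = []
    for j in range(k):
        pts = sel[lbl == j]
        if len(pts) <= d:                  # too few points to fit a covariance
            continue
        C = np.cov(pts.T) + 1e-6 * np.eye(d)
        size = max(1, int(round(n * len(pts) / len(sel))))
        offspring.append(rng.multivariate_normal(pts.mean(0), C, size))
    return np.vstack(offspring) if offspring else x

# Usage: a bimodal objective with basins at (-3, 0) and (3, 0)
rng = np.random.default_rng(1)
two_basins = lambda p: min((p[0] + 3) ** 2 + p[1] ** 2,
                           (p[0] - 3) ** 2 + p[1] ** 2)
pop = rng.uniform(-5, 5, (100, 2))
for _ in range(15):
    pop = mixture_eda_step(two_basins, pop, rng=rng)
```

A single factorized Gaussian would place its mean between the two basins; the mixture lets each component follow its own basin, which is the motivation for mixture distributions in this setting.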
Matching inductive search bias and problem structure in continuous estimation of distribution algorithms
 European Journal of Operational Research
"... Research into the dynamics of Genetic Algorithms (GAs) has led to the ¯eld of Estimation{of{Distribution Algorithms (EDAs). For discrete search spaces, EDAs have been developed that have obtained very promising results on a wide variety of problems. In this paper we investigate the conditions under ..."
Abstract

Cited by 16 (3 self)
Research into the dynamics of Genetic Algorithms (GAs) has led to the field of Estimation-of-Distribution Algorithms (EDAs). For discrete search spaces, EDAs have been developed that have obtained very promising results on a wide variety of problems. In this paper we investigate the conditions under which the adaptation of this technique to continuous search spaces fails to perform optimization efficiently. We show that without careful interpretation and adaptation of lessons learned from discrete EDAs, continuous EDAs will fail to perform efficient optimization on even some of the simplest problems. We reconsider the most important lessons to be learned in the design of EDAs and subsequently show how we can use this knowledge to extend continuous EDAs that were obtained by straightforward adaptation from the discrete domain so as to obtain an improvement in performance. Experimental results are presented to illustrate this improvement and to additionally confirm experimentally that a proper adaptation of discrete EDAs to the continuous case indeed requires careful consideration. Key words: Estimation-of-distribution algorithms; Numerical optimization;
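The failure mode analyzed here is easy to reproduce. A maximum-likelihood Gaussian EDA with truncation selection shrinks its variance geometrically even on a plain linear slope, so it stalls at a finite point although the optimum lies at infinity (a minimal one-dimensional sketch; the names are ours):

```python
import numpy as np

def ml_gaussian_eda(f, n_iter=30, pop=100, top=30, rng=None):
    """1-D Gaussian EDA: each generation, refit mean and standard deviation
    to the truncation-selected individuals. On f(x) = x (minimized toward
    -infinity) the selected sample's std is a fixed fraction of the parent
    std, so sigma decays geometrically and the mean travels only a bounded
    total distance -- premature convergence on a slope."""
    mu, sigma = 0.0, 1.0
    for _ in range(n_iter):
        x = rng.normal(mu, sigma, pop)
        sel = x[np.argsort(f(x))[:top]]       # the top 30% by fitness
        mu, sigma = sel.mean(), sel.std()
    return mu, sigma

rng = np.random.default_rng(0)
mu, sigma = ml_gaussian_eda(lambda x: x, rng=rng)
# sigma has collapsed toward zero, leaving mu stuck at a finite value
```

This is exactly the "simplest problem" on which a naive continuous adaptation of a discrete EDA fails, since nothing in the maximum-likelihood refit can ever increase the variance.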
The Correlation-Triggered Adaptive Variance Scaling IDEA
 in Proceedings of GECCO-2006, 2006
"... ABSTRACT It has previously been shown analytically and experimentally that continuous Estimation of Distribution Algorithms (EDAs) based on the normal pdf can easily suffer from premature convergence. This paper takes a principled first step towards solving this problem. First, prerequisites for th ..."
Abstract

Cited by 16 (1 self)
It has previously been shown analytically and experimentally that continuous Estimation of Distribution Algorithms (EDAs) based on the normal pdf can easily suffer from premature convergence. This paper takes a principled first step towards solving this problem. First, prerequisites for the successful use of search distributions in EDAs are presented. Then, an adaptive variance scaling scheme is introduced that aims at reducing the risk of premature convergence. Integrating the scheme into the iterated density-estimation evolutionary algorithm (IDEA) yields the correlation-triggered adaptive variance scaling IDEA (CT-AVS-IDEA). The CT-AVS-IDEA is compared to the original IDEA and the Evolution Strategy with Covariance Matrix Adaptation (CMA-ES) on a wide range of unimodal test problems by means of a scalability analysis. It is found that the average number of fitness evaluations grows subquadratically with the dimensionality, competitively with the CMA-ES. In addition, CT-AVS-IDEA is indeed found to enlarge the class of problems that continuous EDAs can solve reliably.
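The scaling idea can be illustrated by bolting a single multiplier onto a one-dimensional Gaussian EDA: enlarge the sampling variance while the best fitness keeps improving, contract it otherwise. This trigger is a deliberate simplification (plain improvement rather than the density-fitness correlation the paper uses), and all names are our own:

```python
import numpy as np

def avs_gaussian_eda(f, n_iter=60, pop=100, top=30, c_max=10.0, rng=None):
    """1-D Gaussian EDA with adaptive variance scaling: the model variance
    is multiplied by c before sampling; c grows while the best-so-far
    fitness improves and shrinks when it stalls. On the slope f(x) = x
    this counteracts the variance collapse of the maximum-likelihood
    refit, so the search keeps traversing instead of stalling."""
    mu, sigma, c, best = 0.0, 1.0, 1.0, np.inf
    for _ in range(n_iter):
        x = rng.normal(mu, sigma * np.sqrt(c), pop)
        sel = x[np.argsort(f(x))[:top]]     # sorted ascending by fitness
        if f(sel)[0] < best:
            best = float(f(sel)[0])
            c = min(c * 1.3, c_max)         # improving: allow larger steps
        else:
            c = max(c * 0.8, 0.1)           # stalled: contract
        mu, sigma = sel.mean(), sel.std()
    return mu, best

rng = np.random.default_rng(0)
mu, best = avs_gaussian_eda(lambda x: x, rng=rng)
# unlike the unscaled EDA, mu keeps decreasing instead of stalling
```

With the multiplier capped at `c_max`, the effective sampling variance can grow on slopes yet still contract near an optimum, which is the behaviour the variance-scaling scheme is designed to achieve.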
Population-based continuous optimization, probabilistic modelling and mean shift
 Evolutionary Computation
, 2005
"... Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view populationbased optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, ..."
Abstract

Cited by 13 (2 self)
Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of a stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related to and compared with previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean shift clustering framework. Experimental results are presented that demonstrate the dynamics of the new algorithm on a set of simple test problems. Keywords: Probabilistic modelling, estimation of distribution algorithms, population-based incremental ...
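A continuous analogue of population-based incremental learning, one of the update rules this framework relates to, can be sketched with an incremental mean update. This is an illustrative fixed-covariance sketch with names of our own, not the paper's full KL-derived rule:

```python
import numpy as np

def continuous_pbil(f, dim=2, n_iter=300, lam=30, lr=0.1, sigma=0.5, rng=None):
    """Continuous PBIL-style search: instead of refitting the model from
    scratch each generation, nudge the Gaussian's mean a small step toward
    the best sample -- an incremental, stochastic-gradient-like update of
    the model parameters. The spread sigma is held fixed for simplicity."""
    mu = np.zeros(dim)
    for _ in range(n_iter):
        x = mu + sigma * rng.standard_normal((lam, dim))
        best = x[np.argmin([f(xi) for xi in x])]
        mu = (1 - lr) * mu + lr * best       # exponential moving average
    return mu

# Usage: locate the minimum of a shifted 2-D sphere at (1, 2)
rng = np.random.default_rng(0)
target = np.array([1.0, 2.0])
mu = continuous_pbil(lambda p: float(((p - target) ** 2).sum()), rng=rng)
```

The learning rate `lr` plays the same smoothing role as in discrete PBIL: the model drifts toward promising samples rather than jumping to each generation's maximum-likelihood fit.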
Multi-Objective Mixture-based Iterated Density Estimation Evolutionary Algorithms
 in Proceedings of the Genetic and Evolutionary Computation Conference, San Francisco, California
, 2001
"... We propose an algorithm for multiobjective optimization using a mixturebased iterated density estimation evolutionary algorithm (M IDE A). The M IDE A algorithm is a probabilistic model building evolutionary algorithm that constructs at each generation a mixture of factorized probability dis ..."
Abstract

Cited by 12 (0 self)
We propose an algorithm for multi-objective optimization using a mixture-based iterated density estimation evolutionary algorithm (MIDEA). The MIDEA algorithm is a probabilistic model-building evolutionary algorithm that constructs at each generation a mixture of factorized probability distributions.