Results 1-10 of 36
Completely Derandomized Self-Adaptation in Evolution Strategies
Evolutionary Computation, 2001
Cited by 549 (58 self)
This paper puts forward two useful methods for self-adaptation of the mutation distribution: the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding.
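The final claim of the abstract, that sampling from an arbitrary normal mutation distribution is equivalent to applying a linear problem encoding, can be checked numerically. This is a minimal sketch (our own construction, not code from the paper); the covariance matrix `C` is an arbitrary illustrative choice.

```python
import numpy as np

# Sampling a mutation from N(0, C) is the same as applying the linear
# encoding z -> A z to a standard-normal sample z, provided A A^T = C.
rng = np.random.default_rng(0)

C = np.array([[4.0, 1.2],
              [1.2, 1.0]])            # desired mutation covariance (illustrative)
A = np.linalg.cholesky(C)             # linear encoding with A @ A.T == C

z = rng.standard_normal((2, 100_000)) # isotropic standard-normal samples
x = A @ z                             # linearly encoded samples, distributed N(0, C)

empirical_C = np.cov(x)               # should approximate C
print(np.round(empirical_C, 1))
```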
The CMA Evolution Strategy: A Comparing Review
STUDFUZZ, 2006
Cited by 101 (29 self)
Derived from the concept of self-adaptation in evolution strategies, the CMA (Covariance Matrix Adaptation) adapts the covariance matrix of a multivariate normal search distribution. The CMA was originally designed to perform well with small populations. In this review, the argument starts out with large population sizes, reflecting recent extensions of the CMA algorithm. Commonalities and differences to continuous Estimation of Distribution Algorithms are analyzed. The aspects of reliability of the estimation, overall step-size control, and independence from the coordinate system (invariance) become particularly important with small population sizes. Consequently, performing the adaptation task with small populations is more intricate.
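The reliability-of-estimation aspect mentioned above can be sketched with a toy simulation (our own simplified illustration, not the reviewed CMA algorithm: the names `c_cov` and the rank-µ-style estimate are shorthand for this sketch, and the step size is held fixed). With few selected samples, the one-generation covariance estimate is rank-deficient and noisy, so it is smoothed over generations.

```python
import numpy as np

# Toy covariance adaptation on the sphere function with a tiny population:
# a raw rank-mu estimate from mu samples is unreliable, so it is blended
# into the previous covariance with learning rate c_cov (illustrative value).
rng = np.random.default_rng(1)
n, lam, mu, c_cov = 2, 6, 3, 0.2

def sphere(x):
    return float(x @ x)

m = np.array([3.0, -2.0])   # search distribution mean
C = np.eye(n)               # search distribution covariance
for _ in range(50):
    B = np.linalg.cholesky(C)
    steps = rng.standard_normal((lam, n)) @ B.T       # steps ~ N(0, C)
    xs = m + 0.3 * steps                              # fixed step size 0.3
    order = np.argsort([sphere(x) for x in xs])
    sel = steps[order[:mu]]                           # mu best steps
    C_est = sel.T @ sel / mu                          # noisy rank-mu estimate
    C = (1 - c_cov) * C + c_cov * C_est               # smoothing over generations
    m = xs[order[:mu]].mean(axis=0)

print(sphere(m))   # progress from the initial value sphere([3, -2]) = 13
```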
A Method for Handling Uncertainty in Evolutionary Optimization with an Application to Feedback Control of Combustion
Cited by 50 (14 self)
Abstract — We present a novel method for handling uncertainty in evolutionary optimization. The method entails quantification and treatment of uncertainty and relies on the rank-based selection operator of evolutionary algorithms. The proposed uncertainty handling is implemented in the context of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and verified on test functions. The present method is independent of the uncertainty distribution, prevents premature convergence of the evolution strategy, and is well suited for online optimization as it requires only a small number of additional function evaluations. The algorithm is applied in an experimental setup to the online optimization of feedback controllers of thermoacoustic instabilities of gas turbine combustors. In order to mitigate these instabilities, gain-delay or model-based H∞ controllers sense the pressure and command secondary fuel injectors. The parameters of these controllers are usually specified via a trial-and-error procedure. We demonstrate that their online optimization with the proposed methodology enhances, in an automated fashion, the online performance of the controllers, even under highly unsteady operating conditions, and also compensates for uncertainties in the model-building and design process.
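The rank-based view of uncertainty described above can be illustrated with a toy sketch (our own simplification, not the paper's exact quantification procedure): candidates are evaluated twice under noise, and the total change in their ranks between the two evaluations indicates how unreliable rank-based selection currently is.

```python
import random

# Quantifying uncertainty through ranks on a noisy 1-D sphere:
# small noise leaves the ranking stable; selection-relevant noise scrambles it.
random.seed(2)

def noisy_f(x, noise):
    """True value x**2 plus Gaussian evaluation noise."""
    return x * x + random.gauss(0.0, noise)

def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def total_rank_change(pop, noise):
    r1 = ranks([noisy_f(x, noise) for x in pop])
    r2 = ranks([noisy_f(x, noise) for x in pop])  # independent re-evaluation
    return sum(abs(a - b) for a, b in zip(r1, r2))

pop = [1.0, 1.5, 2.0, 2.5, 3.0]
change_low = total_rank_change(pop, noise=0.01)   # far below fitness gaps
change_high = total_rank_change(pop, noise=5.0)   # comparable to fitness gaps
print(change_low, change_high)
```

When the measured rank change is large, the paper's approach reacts (for example by increasing evaluation effort or the population's variation) rather than trusting the current ranking.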
On Self-Adaptive Features in Real-Parameter Evolutionary Algorithms
2001
Cited by 42 (7 self)
Due to their flexibility in adapting to different fitness landscapes, self-adaptive evolutionary algorithms (SAEAs) have been gaining popularity in the recent past. In this paper, we postulate the properties that SAEA operators should have for successful application in real-valued search spaces. Specifically, the population mean and variance of a number of SAEA operators, such as various real-parameter crossover operators and self-adaptive evolution strategies, are calculated for this purpose. Simulation results are shown to verify the theoretical calculations. The postulations and population variance calculations explain why self-adaptive GAs and ESs have shown similar performance in the past and also suggest appropriate strategy parameter values that must be chosen when applying and comparing different SAEAs.
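One population-variance calculation of the kind alluded to above can be verified numerically: intermediate (arithmetic) crossover of two independently chosen parents halves the population variance in expectation, which is why a variance-preserving perturbation is needed alongside it. This sketch is our own illustration, not code from the paper.

```python
import numpy as np

# Offspring are arithmetic means of two randomly chosen parents:
# Var(0.5 * (X + Y)) = Var(X) / 2 for independent X, Y from the parent population.
rng = np.random.default_rng(3)
parents = rng.normal(0.0, 2.0, size=100_000)

i = rng.integers(0, parents.size, size=parents.size)
j = rng.integers(0, parents.size, size=parents.size)
offspring = 0.5 * (parents[i] + parents[j])

print(parents.var(), offspring.var())   # offspring variance is about half
```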
Noisy optimization with evolution strategies
SIAM Journal on Optimization
Cited by 39 (6 self)
Evolution strategies are general, nature-inspired heuristics for search and optimization. Supported both by empirical evidence and by recent theoretical findings, there is a common belief that evolution strategies are robust and reliable, and frequently they are the method of choice if neither derivatives of the objective function are at hand nor differentiability and numerical accuracy can be assumed. However, despite their widespread use, there is little exchange between members of the "classical" optimization community and people working in the field of evolutionary computation. It is our belief that both sides would benefit from such an exchange. In this paper, we present a brief outline of evolution strategies and discuss some of their properties in the presence of noise. We then empirically demonstrate that for a simple but nonetheless nontrivial noisy objective function, an evolution strategy outperforms other optimization algorithms designed to be able to cope with noise. The environment in which the algorithms are tested is deliberately chosen to afford a transparency of the results that reveals the strengths and shortcomings of the strategies, making it possible to draw conclusions with regard to the design of better optimization algorithms for noisy environments.
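The kind of setting described above can be mimicked with a toy comma-selection ES on a noisy sphere (our own construction; the paper's benchmark and strategy are more elaborate, and the deterministic step-size decay here is purely illustrative). Comma selection discards the parent each generation, which keeps the strategy from latching onto a luckily mis-evaluated point.

```python
import random

# A (1,10)-ES on a 5-D sphere whose evaluations carry additive Gaussian noise.
random.seed(4)

def noisy_sphere(x, noise=0.1):
    return sum(v * v for v in x) + random.gauss(0.0, noise)

n, lam, sigma = 5, 10, 1.0
x = [2.0] * n                       # true initial objective value: 20.0
for _ in range(200):
    offspring = [[xi + random.gauss(0.0, sigma) for xi in x] for _ in range(lam)]
    x = min(offspring, key=noisy_sphere)   # select on noisy evaluations only
    sigma *= 0.98                          # simple decay schedule for the sketch

print(sum(v * v for v in x))        # true (noise-free) value after optimization
```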
Random Dynamics Optimum Tracking with Evolution Strategies
In Parallel Problem Solving from Nature, 2002
Cited by 21 (1 self)
Dynamic optimization is frequently cited as a prime application area for evolutionary algorithms. In contrast to static optimization, the objective in dynamic optimization is to continuously adapt the solution to a changing environment, a task that evolutionary algorithms are believed to be good at. At present, however, almost all knowledge with regard to the performance of evolutionary algorithms in dynamic environments is of an empirical nature. In this paper, tools devised originally for the analysis in static environments are applied to study the performance of a popular type of recombinative evolution strategy with cumulative mutation strength adaptation on a dynamic problem. With relatively little effort, scaling laws that quite accurately describe the behavior of the strategy and that greatly contribute to its understanding are derived, and their implications are discussed.
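The tracking task studied above can be sketched with a toy example (our own construction, not the analyzed strategy): the optimum performs a random walk and a simple comma-selection ES follows it, so the distance to the optimum stays bounded rather than converging to zero.

```python
import random

# A (1,10)-ES tracking a randomly drifting 2-D optimum with fixed step size.
random.seed(5)

target = [0.0, 0.0]
x = [0.0, 0.0]
sigma = 0.5
dists = []
for _ in range(300):
    target = [t + random.gauss(0.0, 0.1) for t in target]   # optimum drifts
    offspring = [[xi + random.gauss(0.0, sigma) for xi in x] for _ in range(10)]
    x = min(offspring, key=lambda c: sum((ci - ti) ** 2 for ci, ti in zip(c, target)))
    dists.append(sum((ci - ti) ** 2 for ci, ti in zip(x, target)) ** 0.5)

print(max(dists[100:]))   # tracking distance stays bounded after a transient
```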
An analysis of mutative σ-self-adaptation on linear fitness functions
Evolutionary Computation, 2006
Cited by 18 (12 self)
This paper investigates σ-self-adaptation for real-valued evolutionary algorithms on linear fitness functions. We identify the step-size logarithm log σ as a key quantity for understanding strategy behavior. Knowing the bias of mutation, recombination, and selection on log σ is sufficient to explain the σ-dynamics and strategy behavior in many cases, even for previously reported results on non-linear and/or noisy fitness functions. On a linear fitness function, if intermediate multi-recombination is applied to the object parameters, the ith-best and the ith-worst individual have the same σ-distribution. Consequently, the correlation between fitness and step size σ is zero. Assuming additionally that σ-changes due to mutation and recombination are unbiased, σ-self-adaptation enlarges σ if and only if µ < λ/2, given (µ, λ)-truncation selection. Experiments show the relevance of the given assumptions.
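The central claim, that σ-self-adaptation enlarges σ for µ < λ/2 under (µ, λ)-truncation selection on a linear function, can be checked with a small simulation (our own sketch; the learning rate τ and generation count are illustrative choices).

```python
import math
import random

# Mutative log-normal sigma self-adaptation on the linear function f(x) = x
# (to be minimized). Each offspring mutates an inherited log(sigma), then
# takes one step of size sigma; (mu, lambda) truncation keeps the mu best.
random.seed(6)

lam, mu, tau = 20, 5, 0.3          # mu = 5 < lambda / 2 = 10
log_sigmas = [0.0] * mu            # parents carry log(sigma), initially sigma = 1

for _ in range(200):
    offspring = []
    for _ in range(lam):
        ls = random.choice(log_sigmas) + tau * random.gauss(0.0, 1.0)
        step = math.exp(ls) * random.gauss(0.0, 1.0)   # fitness change on f(x) = x
        offspring.append((step, ls))
    offspring.sort(key=lambda t: t[0])                 # most negative steps are best
    log_sigmas = [ls for _, ls in offspring[:mu]]

mean_log_sigma = sum(log_sigmas) / mu
print(mean_log_sigma)   # positive: selection has enlarged sigma
```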
Qualms Regarding the Optimality of Cumulative Path Length Control in CSA/CMA-Evolution Strategies
Evolutionary Computation, 2003
Cited by 17 (5 self)
The cumulative step-size adaptation (CSA) based on path length control is regarded as a robust alternative to the standard mutative self-adaptation technique in evolution strategies (ES), guaranteeing an almost optimal control of the mutation operator. In this short paper it is shown that the underlying basic assumption in CSA, the perpendicularity of expected consecutive steps, does not necessarily guarantee optimal progress performance for (µ/µI, λ) intermediate recombinative ES.
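The CSA mechanism under scrutiny can be sketched in one dimension (a hedged toy illustration, not the paper's setting: the parameters `c` and `d` are illustrative, and a (1,10)-ES on the sphere stands in for the recombinative case analyzed in the paper). An evolution path accumulates selected steps; if the path is longer than expected under random selection, consecutive steps point in consistent directions and σ is increased, otherwise it is decreased.

```python
import math
import random

# Path length control for a (1,10)-ES on the 1-D sphere f(x) = x**2.
random.seed(7)

c, d = 0.3, 1.0                              # cumulation and damping (illustrative)
expected_len = math.sqrt(2.0 / math.pi)      # E|N(0,1)|: expected |path| if random
x, sigma, path = 5.0, 1.0, 0.0

for _ in range(100):
    zs = [random.gauss(0.0, 1.0) for _ in range(10)]
    z = min(zs, key=lambda zi: (x + sigma * zi) ** 2)       # selected step
    x += sigma * z
    path = (1 - c) * path + math.sqrt(c * (2 - c)) * z      # cumulate steps
    sigma *= math.exp((abs(path) / expected_len - 1.0) / d) # lengthen/shorten

print(abs(x), sigma)   # near the optimum, anticorrelated steps shrink sigma
```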