Results 1 - 10 of 36
Completely Derandomized Self-Adaptation in Evolution Strategies
- Evolutionary Computation, 2001
Cited by 549 (58 self)
This paper puts forward two useful methods for self-adaptation of the mutation distribution -- the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary normal mutation distributions is equivalent to applying a general linear problem encoding.
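The cumulation idea can be illustrated with a minimal sketch: successive steps are accumulated in an exponentially fading "evolution path", so that correlated steps reinforce each other while alternating steps cancel. The cumulation constant c = 0.1 and the normalization are illustrative choices, not the paper's settings.

```python
import numpy as np

def update_path(p, step, c=0.1):
    """Exponentially fading accumulation (cumulation) of successive steps.

    The factor sqrt(c * (2 - c)) keeps the path's stationary variance
    equal to that of a single step; c = 0.1 is an illustrative choice.
    """
    return (1 - c) * p + np.sqrt(c * (2 - c)) * step

# Correlated (same-direction) steps build up a long path, while
# alternating steps cancel out -- the signal that cumulation exploits.
p_corr = np.zeros(2)
p_anti = np.zeros(2)
for i in range(50):
    p_corr = update_path(p_corr, np.array([1.0, 0.0]))
    p_anti = update_path(p_anti, np.array([(-1.0) ** i, 0.0]))
```

A long accumulated path suggests the step size could be larger; a short one suggests it should shrink.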
The CMA Evolution Strategy: A Comparing Review
- STUDFUZZ, 2006
Cited by 101 (29 self)
Derived from the concept of self-adaptation in evolution strategies, the CMA (Covariance Matrix Adaptation) adapts the covariance matrix of a multivariate normal search distribution. The CMA was originally designed to perform well with small populations. In this review, the argument starts out with large population sizes, reflecting recent extensions of the CMA algorithm. Commonalities with and differences from continuous Estimation of Distribution Algorithms are analyzed. The aspects of estimation reliability, overall step-size control, and independence from the coordinate system (invariance) become particularly important with small population sizes. Consequently, performing the adaptation task with small populations is more intricate.
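As an illustration of covariance matrix adaptation, the sketch below implements only a rank-µ-style update on a sphere function; a full CMA-ES additionally uses step-size control and the evolution-path (rank-one) update, and the learning rate c_mu here is an assumed constant, not a recommended setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, lam = 5, 3, 10
mean, sigma = 3.0 * np.ones(n), 0.5
C = np.eye(n)

def sphere(x):
    return float(x @ x)

start = sphere(mean)
for _ in range(30):
    A = np.linalg.cholesky(C)                  # sample N(0, C) via Cholesky
    steps = rng.standard_normal((lam, n)) @ A.T
    xs = mean + sigma * steps
    order = np.argsort([sphere(x) for x in xs])
    sel = steps[order[:mu]]                    # the mu best mutation steps
    mean = mean + sigma * sel.mean(axis=0)     # intermediate recombination
    c_mu = 0.2                                 # assumed learning rate
    # Rank-mu-style update: pull C toward the empirical covariance of
    # the selected steps, so successful directions are sampled more often.
    C = (1 - c_mu) * C + c_mu * (sel.T @ sel) / mu
```

The update keeps C positive definite, since it is a convex combination of a positive definite and a positive semidefinite matrix.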
A Method for Handling Uncertainty in Evolutionary Optimization with an Application to Feedback Control of Combustion
Cited by 50 (14 self)
We present a novel method for handling uncertainty in evolutionary optimization. The method entails quantification and treatment of uncertainty and relies on the rank-based selection operator of evolutionary algorithms. The proposed uncertainty handling is implemented in the context of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and verified on test functions. The method is independent of the uncertainty distribution, prevents premature convergence of the evolution strategy, and is well suited for online optimization as it requires only a small number of additional function evaluations. The algorithm is applied in an experimental set-up to the online optimization of feedback controllers of thermoacoustic instabilities of gas turbine combustors. To mitigate these instabilities, gain-delay or model-based H∞ controllers sense the pressure and command secondary fuel injectors. The parameters of these controllers are usually specified via a trial-and-error procedure. We demonstrate that their online optimization with the proposed methodology enhances, in an automated fashion, the online performance of the controllers, even under highly unsteady operating conditions, and also compensates for uncertainties in the model-building and design process.
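The rank-based idea can be sketched as follows: reevaluating the population and counting how far ranks move gives a noise measure that depends only on orderings, not on the uncertainty distribution. This is merely an illustration of the principle; the statistic and thresholds used in the actual algorithm differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_f(x, noise):
    # Sphere function with additive evaluation noise of given strength.
    return float(x @ x) + noise * rng.standard_normal()

def rank_change(xs, noise):
    """Evaluate the population twice and sum the absolute rank moves."""
    f1 = [noisy_f(x, noise) for x in xs]
    f2 = [noisy_f(x, noise) for x in xs]
    r1 = np.argsort(np.argsort(f1))   # ranks under the first evaluation
    r2 = np.argsort(np.argsort(f2))   # ranks under the second evaluation
    return int(np.abs(r1 - r2).sum())

xs = [rng.standard_normal(3) for _ in range(20)]
low = rank_change(xs, noise=0.0)      # noise-free: ranks are stable
high = rank_change(xs, noise=100.0)   # strong noise: ranks scramble
```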
On Self-Adaptive Features in Real-Parameter Evolutionary Algorithms
2001
Cited by 42 (7 self)
Owing to their flexibility in adapting to different fitness landscapes, self-adaptive evolutionary algorithms (SA-EAs) have been gaining popularity in recent years. In this paper, we postulate the properties that SA-EA operators should have for successful application in real-valued search spaces. Specifically, the population mean and variance of a number of SA-EA operators, such as various real-parameter crossover operators and self-adaptive evolution strategies, are calculated for this purpose. Simulation results verify the theoretical calculations. The postulations and population variance calculations explain why self-adaptive GAs and ESs have shown similar performance in the past and also suggest appropriate strategy parameter values to choose when applying and comparing different SA-EAs.
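One property of real-parameter crossover operators, preservation of the population mean, can be checked directly. The sketch below assumes the standard simulated binary crossover (SBX) formulation with distribution index η; each child pair keeps its parents' midpoint, so the population mean is unchanged while the variance may grow or shrink with η.

```python
import numpy as np

rng = np.random.default_rng(5)

def sbx_pair(p1, p2, eta=2.0):
    """Simulated binary crossover (SBX) on one pair of parent values.

    The spread factor beta follows the polynomial distribution with
    distribution index eta; the children are symmetric about the
    parents' midpoint, so their sum equals p1 + p2.
    """
    u = rng.uniform()
    if u <= 0.5:
        beta = (2 * u) ** (1 / (eta + 1))
    else:
        beta = (1 / (2 * (1 - u))) ** (1 / (eta + 1))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2

pop = rng.normal(0.0, 1.0, size=100)
children = []
for i in range(0, 100, 2):
    children.extend(sbx_pair(pop[i], pop[i + 1]))
children = np.array(children)
```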
Noisy optimization with evolution strategies
- SIAM Journal on Optimization
Cited by 39 (6 self)
Evolution strategies are general, nature-inspired heuristics for search and optimization. Supported both by empirical evidence and by recent theoretical findings, there is a common belief that evolution strategies are robust and reliable, and frequently they are the method of choice when neither derivatives of the objective function are available nor differentiability and numerical accuracy can be assumed. However, despite their widespread use, there is little exchange between members of the “classical” optimization community and people working in the field of evolutionary computation. It is our belief that both sides would benefit from such an exchange. In this paper, we present a brief outline of evolution strategies and discuss some of their properties in the presence of noise. We then empirically demonstrate that for a simple but nonetheless nontrivial noisy objective function, an evolution strategy outperforms other optimization algorithms designed to cope with noise. The environment in which the algorithms are tested is deliberately chosen to afford a transparency of the results that reveals the strengths and shortcomings of the strategies, making it possible to draw conclusions with regard to the design of better optimization algorithms for noisy environments.
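As a toy illustration of the setting (not a reimplementation of the paper's experiments), a plain (µ/µ, λ)-ES still makes progress on a sphere function under additive evaluation noise; the fixed step-size decay below stands in for proper step-size adaptation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu, lam = 10, 3, 12
mean, sigma = np.ones(n), 1.0

def noisy_sphere(x):
    # Sphere function with additive Gaussian evaluation noise.
    return float(x @ x) + 0.1 * rng.standard_normal()

start = float(mean @ mean)
for _ in range(100):
    xs = mean + sigma * rng.standard_normal((lam, n))
    order = np.argsort([noisy_sphere(x) for x in xs])
    mean = xs[order[:mu]].mean(axis=0)   # intermediate recombination
    sigma *= 0.97                        # crude fixed decay, not real adaptation
end = float(mean @ mean)
```

Rank-based selection averages out much of the noise: even though individual evaluations are perturbed, the recombined mean still moves toward the optimum.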
Random Dynamics Optimum Tracking with Evolution Strategies
- In Parallel Problem Solving from Nature, 2002
Cited by 21 (1 self)
Dynamic optimization is frequently cited as a prime application area for evolutionary algorithms. In contrast to static optimization, the objective in dynamic optimization is to continuously adapt the solution to a changing environment, a task that evolutionary algorithms are believed to be good at. At present, however, almost all knowledge with regard to the performance of evolutionary algorithms in dynamic environments is of an empirical nature. In this paper, tools devised originally for the analysis of static environments are applied to study the performance of a popular type of recombinative evolution strategy with cumulative mutation strength adaptation on a dynamic problem. With relatively little effort, scaling laws are derived that quite accurately describe the behavior of the strategy and that greatly contribute to its understanding, and their implications are discussed.
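The tracking scenario can be sketched as follows: the optimum of a sphere function performs a random walk, and a (µ/µ, λ)-ES with a fixed mutation strength follows it at a bounded distance. All constants are illustrative; the paper's analysis concerns cumulative mutation strength adaptation rather than a fixed σ.

```python
import numpy as np

rng = np.random.default_rng(6)
n, mu, lam, sigma = 5, 3, 10, 0.6
mean = np.zeros(n)
target = np.zeros(n)
dists = []
for _ in range(200):
    target = target + 0.1 * rng.standard_normal(n)  # random walk of the optimum
    xs = mean + sigma * rng.standard_normal((lam, n))
    # Fitness: squared distance to the current (moving) optimum.
    order = np.argsort([float((x - target) @ (x - target)) for x in xs])
    mean = xs[order[:mu]].mean(axis=0)              # intermediate recombination
    dists.append(float(np.linalg.norm(mean - target)))
```

As long as the mutation strength exceeds the per-generation target drift, the strategy settles into a stationary tracking distance instead of losing the optimum.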
An analysis of mutative σ-self-adaptation on linear fitness functions
- Evolutionary Computation, 2006
Cited by 18 (12 self)
This paper investigates σ-self-adaptation for real-valued evolutionary algorithms on linear fitness functions. We identify the step-size logarithm log σ as a key quantity for understanding strategy behavior. Knowing the bias of mutation, recombination, and selection on log σ is sufficient to explain the σ-dynamics and strategy behavior in many cases, even for previously reported results on non-linear and/or noisy fitness functions. On a linear fitness function, if intermediate multi-recombination is applied to the object parameters, the i-th best and the i-th worst individual have the same σ-distribution. Consequently, the correlation between fitness and step-size σ is zero. Assuming additionally that σ-changes due to mutation and recombination are unbiased, σ-self-adaptation enlarges σ if and only if µ < λ/2, given (µ, λ)-truncation selection. Experiments show the relevance of the given assumptions.
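The µ < λ/2 condition can be checked in simulation. The sketch below assumes log-normal σ-mutation, intermediate recombination of the object parameter, and geometric-mean recombination of σ (which is unbiased on log σ), on the linear function f(x) = x; the value τ = 0.3 is an illustrative learning rate.

```python
import numpy as np

def run(mu, lam, gens=3000, tau=0.3, seed=3):
    """(mu, lam)-ES with mutative sigma-self-adaptation on f(x) = x.

    Object parameters use intermediate recombination; sigma uses the
    geometric mean of the selected offspring, i.e. the arithmetic mean
    on log sigma. Returns the final log sigma.
    """
    rng = np.random.default_rng(seed)
    x, log_sigma = 0.0, 0.0
    for _ in range(gens):
        ls = log_sigma + tau * rng.standard_normal(lam)  # mutate log sigma
        xs = x + np.exp(ls) * rng.standard_normal(lam)   # mutate x
        order = np.argsort(xs)                           # minimize f(x) = x
        x = xs[order[:mu]].mean()
        log_sigma = ls[order[:mu]].mean()
    return log_sigma

grow = run(mu=2, lam=10)    # mu < lambda/2: log sigma drifts upward
shrink = run(mu=8, lam=10)  # mu > lambda/2: log sigma drifts downward
```

Selecting only extreme ranks (small µ) favors offspring with large σ, while including middle ranks (large µ) biases the recombined σ downward, matching the stated condition.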
Qualms Regarding the Optimality of Cumulative Path Length Control in CSA/CMA-Evolution Strategies
- Evolutionary Computation, 2003
Cited by 17 (5 self)
The cumulative step-size adaptation (CSA) based on path length control is regarded as a robust alternative to the standard mutative self-adaptation technique in evolution strategies (ES), guaranteeing an almost optimal control of the mutation operator. In this short paper it is shown that the underlying basic assumption in CSA -- the perpendicularity of expected consecutive steps -- does not necessarily guarantee optimal progress performance for (µ/µ_I, λ) intermediate recombinative ES.
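For reference, cumulative path length control itself can be sketched as follows: the length of an exponentially fading path of selected steps is compared against its expected length under random selection, and σ is enlarged or reduced accordingly. The constants c_s and d_s below are assumed values, not recommended defaults.

```python
import numpy as np

rng = np.random.default_rng(4)
n, mu, lam = 10, 3, 12
mean, sigma = 5.0 * np.ones(n), 1.0
c_s, d_s = 0.3, 1.0                      # cumulation constant and damping (assumed)
path = np.zeros(n)
chi_n = np.sqrt(n) * (1 - 1 / (4 * n))   # approximation of E||N(0, I)||

for _ in range(80):
    z = rng.standard_normal((lam, n))
    xs = mean + sigma * z
    order = np.argsort([float(x @ x) for x in xs])   # sphere fitness
    z_mean = z[order[:mu]].mean(axis=0)              # recombined selected step
    mean = mean + sigma * z_mean
    # Under random selection the path length matches chi_n on average;
    # a longer path means consecutive steps point the same way, so
    # sigma is enlarged, and a shorter path shrinks it.
    path = (1 - c_s) * path + np.sqrt(c_s * (2 - c_s) * mu) * z_mean
    sigma *= np.exp((c_s / d_s) * (np.linalg.norm(path) / chi_n - 1))
```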