Results 1–10 of 42
P.J.: Tracking extrema in dynamic environments
 Proc. Evolutionary Programming IV
, 1998
Abstract

Cited by 51 (0 self)
Abstract. Typical applications of evolutionary optimization involve the offline approximation of extrema of static multimodal functions. Methods that use a variety of techniques to self-adapt mutation parameters have been shown to be more successful than methods that do not use self-adaptation. For dynamic functions, the interest is not in obtaining the extremum but in following it as closely as possible. This paper compares the online extremum-tracking performance of an evolutionary program without self-adaptation against an evolutionary program using a self-adaptive Gaussian update rule over a number of dynamics applied to a simple static function. The experiments demonstrate that for some dynamic functions self-adaptation is effective, while for others it is detrimental.
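The self-adaptive mutation the abstract refers to can be sketched as follows. The log-normal step-size rule and the `tau` value below are common EP/ES defaults, not necessarily this paper's exact Gaussian update:

```python
import math
import random

def self_adaptive_mutate(x, sigmas, tau=0.2):
    # Each individual carries its own per-dimension step sizes; mutate the
    # step sizes first (log-normal perturbation), then the object variables.
    new_sigmas = [s * math.exp(tau * random.gauss(0.0, 1.0)) for s in sigmas]
    new_x = [xi + s * random.gauss(0.0, 1.0) for xi, s in zip(x, new_sigmas)]
    return new_x, new_sigmas
```

Because the step sizes are inherited and selected along with the solutions, they can grow again after an environment change, which is what makes the scheme interesting for tracking moving extrema.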
On Self-Adaptive Features in Real-Parameter Evolutionary Algorithms
, 2001
Abstract

Cited by 36 (7 self)
Due to the flexibility in adapting to different fitness landscapes, self-adaptive evolutionary algorithms (SAEAs) have been gaining popularity in the recent past. In this paper, we postulate the properties that SAEA operators should have for successful applications in real-valued search spaces. Specifically, population mean and variance of a number of SAEA operators, such as various real-parameter crossover operators and self-adaptive evolution strategies, are calculated for this purpose. Simulation results are shown to verify the theoretical calculations. The postulations and population variance calculations explain why self-adaptive GAs and ESs have shown similar performance in the past and also suggest appropriate strategy parameter values which must be chosen while applying and comparing different SAEAs.
Combining Mutation Operators in Evolutionary Programming
, 1998
Abstract

Cited by 28 (0 self)
Traditional investigations with evolutionary programming (EP) for continuous parameter optimization problems have used a single mutation operator with a parameterized probability density function (pdf), typically a Gaussian. Using a variety of mutation operators that can be combined during evolution to generate pdfs of varying shapes could hold the potential for producing better solutions with less computational effort. In view of this, a linear combination of Gaussian and Cauchy mutations is proposed. Simulations indicate that both the adaptive and non-adaptive versions of this operator are capable of producing solutions that are statistically as good as, or better than, those produced when using Gaussian or Cauchy mutations alone.
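A minimal sketch of such a combined operator, assuming an equal-weight linear combination (the weighting parameter `alpha` and its default are illustrative, not taken from the paper):

```python
import math
import random

def combined_mutation(x, sigma, alpha=0.5):
    # Linear combination of a Gaussian and a Cauchy sample; alpha=1.0
    # recovers pure Gaussian mutation, alpha=0.0 pure Cauchy. The heavy
    # Cauchy tail occasionally produces large escape jumps.
    g = random.gauss(0.0, 1.0)
    c = math.tan(math.pi * (random.random() - 0.5))  # standard Cauchy draw
    return x + sigma * (alpha * g + (1.0 - alpha) * c)
```

An adaptive version would evolve `alpha` per individual alongside the step size, letting selection shape the mutation pdf during the run.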
Fast Multi-Swarm Optimization for Dynamic Optimization Problems
Abstract

Cited by 18 (3 self)
In the real world, many applications are non-stationary optimization problems. This requires that optimization algorithms not only find the global optimal solution but also track the trajectory of the changing global best solution in a dynamic environment. To achieve this, this paper proposes a multi-swarm algorithm based on fast particle swarm optimization for dynamic optimization problems. The algorithm employs a mechanism to track multiple peaks by preventing overcrowding at a peak, and a fast particle swarm optimization algorithm as a local search method to find near-optimal solutions in a locally promising region of the search space. The moving peaks benchmark function is used to test the performance of the proposed algorithm. The numerical experimental results show the efficiency of the proposed algorithm for dynamic optimization problems.
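The anti-crowding mechanism described above can be sketched as an exclusion test between sub-swarm bests (a 1-D sketch under assumed maximization; `reinit` is a hypothetical callback returning a fresh `(position, fitness)` pair, and the exact rule in the paper may differ):

```python
def enforce_exclusion(swarm_bests, radius, reinit):
    # swarm_bests: list of (position, fitness) for each sub-swarm's best.
    # If two bests fall within the exclusion radius, reinitialize the worse
    # sub-swarm so each swarm tracks a distinct peak.
    for i in range(len(swarm_bests)):
        for j in range(i + 1, len(swarm_bests)):
            (pi, fi), (pj, fj) = swarm_bests[i], swarm_bests[j]
            if abs(pi - pj) < radius:
                worse = i if fi < fj else j  # assume maximization
                swarm_bests[worse] = reinit()
    return swarm_bests
```

Reinitializing the loser spreads the swarms back out after two of them converge onto the same moving peak.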
Local Convergence Rates of Simple Evolutionary Algorithms with Cauchy Mutations
 IEEE Transactions on Evolutionary Computation
, 1998
Abstract

Cited by 18 (1 self)
The standard choice for mutating an individual of an evolutionary algorithm with continuous variables is the normal distribution; however, other distributions, especially some versions of the multivariate Cauchy distribution, have recently gained increased popularity in practical applications. Here the extent to which Cauchy mutation distributions may affect the local convergence behavior of evolutionary algorithms is analyzed. The results show that the order of local convergence is identical for Gaussian and spherical Cauchy distributions, whereas nonspherical Cauchy mutations lead to slower local convergence. As a by-product of the analysis, some recommendations for the parametrization of the self-adaptive step size control mechanism can be derived.
A Model Reference Adaptive Search Method for Global Optimization
 Oper. Res., 2007
, 2008
Abstract

Cited by 15 (5 self)
© 2007 INFORMS, doi 10.1287/opre.1060.0367. Model reference adaptive search (MRAS) for solving global optimization problems works with a parameterized probabilistic model on the solution space and generates at each iteration a group of candidate solutions. These candidate solutions are then used to update the parameters associated with the probabilistic model in such a way that the future search will be biased toward the region containing high-quality solutions. The parameter updating procedure in MRAS is guided by a sequence of implicit probabilistic models we call reference models. We provide a particular algorithmic instantiation of the MRAS method, where the sequence of reference models can be viewed as the generalized probability distribution models for estimation of distribution algorithms (EDAs) with a proportional selection scheme. In addition, we show that the model reference framework can also be used to describe the recently proposed cross-entropy (CE) method for optimization and to study its properties. Hence, this paper can also be seen as a study on the effectiveness of combining CE and EDAs. We prove global convergence of the proposed algorithm in both continuous and combinatorial domains, and we carry out numerical studies to illustrate the performance of the algorithm.
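The shared idea behind MRAS, CE, and Gaussian EDAs, sampling from a parameterized model and refitting it to the elite samples, can be sketched in one dimension (population sizes, elite fraction, and iteration count below are illustrative, not the paper's settings):

```python
import random
import statistics

def ce_minimize(f, mu=0.0, sigma=5.0, n=50, elite=10, iters=40):
    # Cross-entropy-style search: sample candidates from a Gaussian model,
    # keep the elite (lowest-cost) fraction, then refit the model's mean
    # and standard deviation to the elites so later sampling concentrates
    # around high-quality solutions.
    for _ in range(iters):
        xs = [random.gauss(mu, sigma) for _ in range(n)]
        xs.sort(key=f)
        best = xs[:elite]
        mu = statistics.fmean(best)
        sigma = statistics.pstdev(best) + 1e-12  # avoid premature collapse
    return mu
```

MRAS replaces the hard elite cutoff with an implicit sequence of reference distributions, but the sample-then-refit loop is the same skeleton.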
Self-Adaptation in Evolutionary Algorithms
 Parameter Setting in Evolutionary Algorithms
, 2006
Abstract

Cited by 11 (1 self)
In this paper, we will give an overview of the self-adaptive behavior of evolutionary algorithms. We will start with a short overview of the historical development of adaptation mechanisms in evolutionary computation. In the following part, i.e., Section 2.2, we will introduce classification schemes that are used to group the various approaches. Afterwards, self-adaptive mechanisms will be considered. The overview starts with some examples introducing self-adaptation of the strategy parameter and of the crossover operator. Several authors have pointed out that the concept of self-adaptation may be extended; Section 3.2 is devoted to such ideas. The mechanism of self-adaptation has been examined in various areas in order to find answers to the question under which conditions self-adaptation works and when it could fail. In the remaining sections, therefore, we present a short overview of some of the research done in this field.
Two New Mutation Operators for Enhanced Search and Optimization in Evolutionary Programming
, 1997
Abstract

Cited by 7 (1 self)
Evolutionary programming (EP) has been successfully applied to many parameter optimization problems. We propose a mean mutation operator, consisting of a linear combination of Gaussian and Cauchy mutations. Preliminary results indicate that both the adaptive and non-adaptive versions of the mean mutation operator are capable of producing solutions that are as good as, or better than, those produced by Gaussian mutations alone. The success of the adaptive operator could be attributed to its ability to self-adapt the shape of the probability density function that generates the mutations during the run.
Dynamic Control of Adaptive Parameters in Evolutionary Programming
 (Eds.): Simulated Evolution and Learning, Lecture Notes in Artificial Intelligence, 1998
, 1998
Abstract

Cited by 6 (1 self)
Evolutionary programming (EP) has been widely used in numerical optimization in recent years. The adaptive parameters in EP, also called step size control, play a significant role: they control the step size of the objective variables in the evolutionary process. However, the step size control may not work in some cases: the step sizes are frequently lost (driven toward zero), which makes the search stagnate early. Applying a lower bound can keep the step sizes in a working range, but it also constrains the objective variables from being further explored. In this paper, an adaptively adjusted lower bound is proposed which supports better fine-tuned searches and spreads out exploration as well.

1 Introduction

Evolutionary programming (EP) [1] has been applied to many optimization problems successfully in recent years [2, 3, 4]. A global optimization problem can be formalised as a pair (S, f), where S ⊆ R^n is a bounded set in R^n and f : S → R is an n-dimensional real-valued function. The problem is to find...
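The fixed lower bound the abstract contrasts with is a simple clamp on the self-adapted step sizes; the decaying variant below is an illustrative heuristic for an adjustable bound, not the paper's exact scheme:

```python
def clamp_step_sizes(sigmas, lower):
    # Keep every self-adapted step size at or above a lower bound so the
    # search cannot stall with near-zero steps; a fixed bound, however,
    # also prevents arbitrarily fine local search.
    return [max(s, lower) for s in sigmas]

def decaying_lower_bound(initial, generation, rate=0.99):
    # Illustrative adaptive bound: start permissive, tighten geometrically
    # so late generations can fine-tune (hypothetical decay schedule).
    return initial * (rate ** generation)
```

An adaptively adjusted bound of this kind trades off the two failure modes the abstract names: stagnation from lost step sizes and over-constrained exploration from a bound that stays too high.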