Results 1-10 of 48
P.J.: Tracking extrema in dynamic environments. Proc. Evolutionary Programming IV, 1998
Cited by 59 (0 self)
Abstract. Typical applications of evolutionary optimization involve the offline approximation of extrema of static multimodal functions. Methods that use a variety of techniques to self-adapt mutation parameters have been shown to be more successful than methods that do not use self-adaptation. For dynamic functions, the interest is not to obtain the extremum but to follow it as closely as possible. This paper compares the online extrema-tracking performance of an evolutionary program without self-adaptation against an evolutionary program using a self-adaptive Gaussian update rule over a number of dynamics applied to a simple static function. The experiments demonstrate that for some dynamic functions self-adaptation is effective, while for others it is detrimental.
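The self-adaptive Gaussian update rule is not spelled out in the abstract; as a hedged illustration, the lognormal self-adaptation scheme common in evolutionary programming and evolution strategies can be sketched as follows (function name and learning rates are assumptions, not necessarily the paper's exact rule):

```python
import math
import random

def self_adaptive_mutate(x, sigma):
    """Mutate a real-valued individual with lognormal self-adaptation.

    Each individual carries its own step sizes `sigma`, which are
    perturbed first and then used to mutate the object variables.
    """
    n = len(x)
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))   # per-coordinate learning rate
    tau0 = 1.0 / math.sqrt(2.0 * n)             # global learning rate
    g = random.gauss(0.0, 1.0)                  # shared draw for all step sizes
    new_sigma = [s * math.exp(tau0 * g + tau * random.gauss(0.0, 1.0))
                 for s in sigma]
    new_x = [xi + si * random.gauss(0.0, 1.0)
             for xi, si in zip(x, new_sigma)]
    return new_x, new_sigma
```

Because the step sizes are part of the genome, selection implicitly tunes them, which is what lets such a scheme re-expand its search when the optimum moves.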
On Self-Adaptive Features in Real-Parameter Evolutionary Algorithms, 2001
Cited by 42 (7 self)
Due to the flexibility in adapting to different fitness landscapes, self-adaptive evolutionary algorithms (SAEAs) have been gaining popularity in the recent past. In this paper, we postulate the properties that SAEA operators should have for successful applications in real-valued search spaces. Specifically, the population mean and variance of a number of SAEA operators, such as various real-parameter crossover operators and self-adaptive evolution strategies, are calculated for this purpose. Simulation results are shown to verify the theoretical calculations. The postulations and population variance calculations explain why self-adaptive GAs and ESs have shown similar performance in the past, and also suggest appropriate strategy parameter values that should be chosen when applying and comparing different SAEAs.
Combining Mutation Operators in Evolutionary Programming, 1998
Cited by 35 (0 self)
Traditional investigations with evolutionary programming (EP) for continuous parameter optimization problems have used a single mutation operator with a parameterized probability density function (pdf), typically a Gaussian. Using a variety of mutation operators that can be combined during evolution to generate pdfs of varying shapes could hold the potential for producing better solutions with less computational effort. In view of this, a linear combination of Gaussian and Cauchy mutations is proposed. Simulations indicate that both the adaptive and non-adaptive versions of this operator are capable of producing solutions that are statistically as good as, or better than, those produced when using Gaussian or Cauchy mutations alone.
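The abstract does not give the exact mixing rule; a minimal sketch of a linear combination of Gaussian and Cauchy perturbations might look like this (the weight `alpha` and the function name are illustrative; the paper's adaptive version would evolve `alpha` rather than fix it):

```python
import math
import random

def combined_mutation(x, sigma=1.0, alpha=0.5):
    """Perturb each variable with a weighted sum of a Gaussian draw and a
    Cauchy draw: the Gaussian term favors local search, while the
    heavy-tailed Cauchy term occasionally produces long jumps."""
    offspring = []
    for xi in x:
        g = random.gauss(0.0, 1.0)
        c = math.tan(math.pi * (random.random() - 0.5))  # standard Cauchy draw
        offspring.append(xi + sigma * (alpha * g + (1.0 - alpha) * c))
    return offspring
```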
Fast Multi-Swarm Optimization for Dynamic Optimization Problems
Cited by 33 (4 self)
In the real world, many applications are non-stationary optimization problems. This requires that optimization algorithms not only find the global optimal solution but also track the trajectory of the changing global best solution in a dynamic environment. To achieve this, this paper proposes a multi-swarm algorithm based on fast particle swarm optimization for dynamic optimization problems. The algorithm employs a mechanism to track multiple peaks by preventing overcrowding at a peak, and a fast particle swarm optimization algorithm as a local search method to find near-optimal solutions in a locally promising region of the search space. The moving peaks benchmark function is used to test the performance of the proposed algorithm. The numerical experimental results show the efficiency of the proposed algorithm for dynamic optimization problems.
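The overcrowding-prevention mechanism is not detailed in the abstract. One common way to keep swarms on distinct peaks is an exclusion test between swarm attractors; the sketch below assumes a hypothetical swarm layout (dicts with `best_pos`, `best_fit`, `particles`) and an illustrative exclusion radius:

```python
import math
import random

def apply_exclusion(swarms, radius, bounds):
    """If the best points of two swarms lie within `radius` of each other,
    reinitialize the swarm with the worse best fitness (maximization), so
    that at most one swarm occupies each peak."""
    for i in range(len(swarms)):
        for j in range(i + 1, len(swarms)):
            a, b = swarms[i], swarms[j]
            if math.dist(a['best_pos'], b['best_pos']) < radius:
                loser = a if a['best_fit'] < b['best_fit'] else b
                loser['particles'] = [[random.uniform(lo, hi) for lo, hi in bounds]
                                      for _ in loser['particles']]
                loser['best_fit'] = float('-inf')  # forces a fresh best
    return swarms
```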
A Model Reference Adaptive Search Method for Global Optimization. Oper. Res., 2007
Cited by 26 (10 self)
doi 10.1287/opre.1060.0367 © 2007 INFORMS. Model reference adaptive search (MRAS) for solving global optimization problems works with a parameterized probabilistic model on the solution space and generates at each iteration a group of candidate solutions. These candidate solutions are then used to update the parameters associated with the probabilistic model in such a way that the future search will be biased toward the region containing high-quality solutions. The parameter updating procedure in MRAS is guided by a sequence of implicit probabilistic models we call reference models. We provide a particular algorithm instantiation of the MRAS method, where the sequence of reference models can be viewed as the generalized probability distribution models for estimation of distribution algorithms (EDAs) with a proportional selection scheme. In addition, we show that the model reference framework can also be used to describe the recently proposed cross-entropy (CE) method for optimization and to study its properties. Hence, this paper can also be seen as a study on the effectiveness of combining CE and EDAs. We prove global convergence of the proposed algorithm in both continuous and combinatorial domains, and we carry out numerical studies to illustrate the performance of the algorithm.
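As a concrete point of contact with the CE method discussed in this abstract, here is a minimal one-dimensional cross-entropy sketch with a Gaussian model (sample size, elite fraction, and the variance floor are illustrative choices, not the paper's instantiation):

```python
import random
import statistics

def cross_entropy_minimize(f, mu, sigma, n_samples=50, elite_frac=0.2, iters=100):
    """Minimize f by repeatedly sampling from N(mu, sigma^2), keeping the
    elite fraction, and refitting the Gaussian to the elites, so that the
    next round of sampling is biased toward the high-quality region."""
    n_elite = max(2, int(elite_frac * n_samples))
    for _ in range(iters):
        xs = [random.gauss(mu, sigma) for _ in range(n_samples)]
        xs.sort(key=f)                      # best (lowest f) first
        elites = xs[:n_elite]
        mu = statistics.fmean(elites)       # refit model to elites
        sigma = statistics.pstdev(elites) + 1e-12  # floor avoids collapse
    return mu
```

The elite-refit step is exactly the "bias future search toward high-quality regions" update the abstract describes in its general form.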
Local Convergence Rates of Simple Evolutionary Algorithms with Cauchy Mutations. IEEE Transactions on Evolutionary Computation, 1998
Cited by 19 (1 self)
The standard choice for mutating an individual of an evolutionary algorithm with continuous variables is the normal distribution; however, other distributions, especially some versions of the multivariate Cauchy distribution, have recently gained increased popularity in practical applications. Here the extent to which Cauchy mutation distributions may affect the local convergence behavior of evolutionary algorithms is analyzed. The results show that the order of local convergence is identical for Gaussian and spherical Cauchy distributions, whereas non-spherical Cauchy mutations lead to slower local convergence. As a by-product of the analysis, some recommendations for the parametrization of the self-adaptive step-size control mechanism can be derived.
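The spherical versus non-spherical distinction can be made concrete: a coordinate-wise (non-spherical) Cauchy mutation draws each component independently, while an isotropic (spherical) Cauchy vector is a multivariate t distribution with one degree of freedom. A small sketch (function names are ours):

```python
import math
import random

def nonspherical_cauchy(n):
    """Coordinate-wise Cauchy: each component is an independent standard
    Cauchy draw, so samples concentrate along the coordinate axes."""
    return [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

def spherical_cauchy(n):
    """Isotropic (spherical) Cauchy, i.e. a multivariate t with one degree
    of freedom: a standard Gaussian vector divided by an independent
    |N(0,1)| draw, which makes the distribution rotation-invariant."""
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    w = abs(random.gauss(0.0, 1.0))
    return [zi / w for zi in z]
```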
Self-Adaptation in Evolutionary Algorithms. In: Parameter Setting in Evolutionary Algorithms, 2006
Cited by 18 (1 self)
In this paper, we give an overview of the self-adaptive behavior of evolutionary algorithms. We start with a short overview of the historical development of adaptation mechanisms in evolutionary computation. In the following part, i.e., Section 2.2, we introduce classification schemes that are used to group the various approaches. Afterwards, self-adaptive mechanisms are considered. The overview starts with some examples introducing self-adaptation of the strategy parameters and of the crossover operator. Several authors have pointed out that the concept of self-adaptation may be extended; Section 3.2 is devoted to such ideas. The mechanism of self-adaptation has been examined in various areas in order to find answers to the question under which conditions self-adaptation works and when it could fail. In the remaining sections, therefore, we present a short overview of some of the research done in this field.
A Fast Particle Swarm Optimization Algorithm with Cauchy Mutation and Natural Selection Strategy. ISICA 2007, LNCS 4683, 2007
Cited by 11 (3 self)
Abstract. The standard Particle Swarm Optimization (PSO) algorithm is a novel evolutionary algorithm in which each particle studies its own previous best solution and the group's previous best to optimize problems. One problem with PSO is its tendency to become trapped in local optima. In this paper, a fast particle swarm optimization (FPSO) algorithm is proposed by combining PSO with a Cauchy mutation and an evolutionary selection strategy. The idea is to introduce the Cauchy mutation into PSO in the hope of preventing PSO from becoming trapped in a local optimum through the long jumps made by the Cauchy mutation. FPSO has been compared with another improved PSO called AMPSO [12] on a set of benchmark functions. The results show that FPSO is much faster than AMPSO on all the test functions. Keywords: Particle swarm optimization, Cauchy mutation, swarm intelligence
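The abstract's combination of a PSO update with a Cauchy long jump can be sketched roughly as below. This is a generic reconstruction, not the paper's FPSO: the parameter values, the mutation scale, and the greedy acceptance rule are all assumptions.

```python
import math
import random

def pso_step_with_cauchy(pos, vel, pbest, gbest, f,
                         w=0.7, c1=1.5, c2=1.5, scale=1.0):
    """One PSO velocity/position update per particle (minimization),
    followed by a Cauchy mutation of the global best that is kept only
    if it improves fitness, giving the swarm an escape route from
    local optima."""
    for i, (x, v) in enumerate(zip(pos, vel)):
        vel[i] = [w * vj
                  + c1 * random.random() * (pj - xj)
                  + c2 * random.random() * (gj - xj)
                  for vj, xj, pj, gj in zip(v, x, pbest[i], gbest)]
        pos[i] = [xj + vj for xj, vj in zip(x, vel[i])]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = list(pos[i])
    # Cauchy long jump applied to the global best
    trial = [g + scale * math.tan(math.pi * (random.random() - 0.5))
             for g in gbest]
    if f(trial) < f(gbest):
        gbest = trial
    best = min(pbest, key=f)
    if f(best) < f(gbest):
        gbest = list(best)
    return pos, vel, pbest, gbest
```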
A generalized approach to construct benchmark problems for dynamic optimization. Proceedings of the 7th Int. Conf. on Simulated Evolution and Learning, 2008
Cited by 9 (7 self)
Abstract. There has been growing interest in studying evolutionary algorithms in dynamic environments in recent years due to their importance in real applications. However, different dynamic test problems have been used to test and compare the performance of algorithms. This paper proposes a generalized dynamic benchmark generator (GDBG) that can be instantiated in binary, real, and combinatorial spaces. This generator can present a set of different properties to test algorithms by tuning some control parameters. Some experiments are carried out in the real space to study the performance of the generator.
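The GDBG itself is not specified in the snippet; in the same spirit, a toy moving-peaks-style generator with a tunable change severity can be sketched as follows (the class, parameter names, and value ranges are all illustrative, not the actual GDBG):

```python
import math
import random

class MovingPeaks:
    """Toy dynamic benchmark: a fixed number of cone-shaped peaks whose
    centers and heights drift at every environment change (a simplified
    stand-in for a generalized dynamic benchmark generator)."""

    def __init__(self, n_peaks=5, dim=2, bounds=(-50.0, 50.0)):
        lo, hi = bounds
        self.bounds = bounds
        self.centers = [[random.uniform(lo, hi) for _ in range(dim)]
                        for _ in range(n_peaks)]
        self.heights = [random.uniform(30.0, 70.0) for _ in range(n_peaks)]

    def evaluate(self, x):
        # Fitness is the highest peak value at x (cone shape, slope 1).
        return max(h - math.dist(c, x)
                   for c, h in zip(self.centers, self.heights))

    def change(self, severity=1.0):
        # One environment change: jitter centers and heights, clamped to bounds.
        lo, hi = self.bounds
        for c in self.centers:
            for j in range(len(c)):
                c[j] = min(hi, max(lo, c[j] + random.gauss(0.0, severity)))
        self.heights = [max(1.0, h + random.gauss(0.0, severity))
                        for h in self.heights]
```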