Results 1–10 of 18
Finite Markov Chain Results in Evolutionary Computation: A Tour d'Horizon
, 1998
Abstract

Cited by 61 (2 self)
The theory of evolutionary computation has been enhanced rapidly during the last decade. This survey attempts to summarize the results regarding the limit and finite-time behavior of evolutionary algorithms with finite search spaces and a discrete time scale. Results on evolutionary algorithms beyond finite spaces and discrete time are also presented, but with reduced elaboration. Keywords: evolutionary algorithms, limit behavior, finite time behavior 1. Introduction The field of evolutionary computation is mainly engaged in the development of optimization algorithms whose design is inspired by principles of natural evolution. In most cases, the optimization task is of the following type: find an element x* ∈ X such that f(x*) ≥ f(x) for all x ∈ X, where f : X → ℝ is the objective function to be maximized and X the search set. In the terminology of evolutionary computation, an individual is represented by an element of the Cartesian product X × A, where A is a possibly...
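The finite-search-space, discrete-time setting the survey covers can be made concrete with a toy Markov chain. The sketch below is not from the survey; the per-bit flip probability `p` and all names are illustrative. It builds the transition matrix of an elitist (1+1) EA on 2-bit OneMax and iterates the chain to show its limit behavior: probability mass concentrates on the absorbing optimum.

```python
from itertools import product

# Toy instance of the finite Markov chain model: a (1+1) EA with elitist
# acceptance (f(y) >= f(x)) on the four bitstrings of length 2, f = OneMax.
p = 0.5                                   # illustrative per-bit flip probability
states = list(product((0, 1), repeat=2))

def flip_prob(x, y, p):
    """Probability that independent bit flips turn x into y."""
    d = sum(a != b for a, b in zip(x, y))
    return p ** d * (1 - p) ** (len(x) - d)

# Transition matrix: a move to y != x is accepted only if sum(y) >= sum(x);
# every rejected proposal leaves the chain in state x.
P = {}
for x in states:
    for y in states:
        if y != x and sum(y) >= sum(x):
            P[x, y] = flip_prob(x, y, p)
    P[x, x] = 1 - sum(P.get((x, y), 0.0) for y in states if y != x)

# Limit behavior: iterating from the uniform distribution drives almost all
# probability mass onto the optimum (1, 1), which is absorbing.
pi = {s: 0.25 for s in states}
for _ in range(50):
    pi = {y: sum(pi[x] * P.get((x, y), 0.0) for x in states) for y in states}
```

The same construction scales (in principle) to any finite search space, which is exactly why Markov chain tools apply to these algorithms.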
Rigorous Hitting Times for Binary Mutations
, 1999
Abstract

Cited by 59 (2 self)
In the binary evolutionary optimization framework, two mutation operators are theoretically investigated. For both the standard mutation, in which all bits are flipped independently with the same probability, and the 1-bit-flip mutation, which flips exactly one bit per bitstring, the statistical distributions of the first hitting times of the target are thoroughly computed (expectation and variance) up to terms of order l (the size of the bitstrings) in two distinct situations: without any selection, or with the deterministic (1+1)-ES selection on the OneMax problem. In both cases, the 1-bit-flip mutation's convergence time is smaller by a constant (in terms of l) multiplicative factor. These results extend to the case of multiple independent optimizers. Keywords: evolutionary algorithms, stochastic analysis, binary mutations, Markov chains, hitting times. 1 Introduction One known drawback of Evolutionary Algorithms as function optimizers is the amount of computational effort they re...
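The two operators compared above are easy to simulate. This is my sketch, not the paper's derivation; the bitstring length, step budget, and seed are arbitrary choices.

```python
import random

def hitting_time(mutate, length=20, max_steps=50_000, seed=1):
    """Steps until a (1+1)-elitist search first hits the all-ones string (OneMax)."""
    rng = random.Random(seed)
    x = [0] * length                       # worst-case starting point
    for t in range(1, max_steps + 1):
        y = mutate(x, rng)
        if sum(y) >= sum(x):               # elitist (1+1) acceptance
            x = y
        if sum(x) == length:
            return t
    return max_steps                       # budget exhausted

def standard(x, rng):
    """Standard mutation: flip each bit independently with probability 1/l."""
    p = 1.0 / len(x)
    return [b ^ (rng.random() < p) for b in x]

def one_bit_flip(x, rng):
    """1-bit-flip mutation: flip exactly one uniformly chosen bit."""
    y = x[:]
    y[rng.randrange(len(y))] ^= 1
    return y

t_std = hitting_time(standard)
t_1bf = hitting_time(one_bit_flip)
```

A single run can fluctuate either way; averaging hitting times over many seeds recovers the constant-factor advantage of the 1-bit-flip mutation that the paper proves.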
Noisy optimization with evolution strategies
 SIAM Journal on Optimization
Abstract

Cited by 35 (6 self)
Evolution strategies are general, nature-inspired heuristics for search and optimization. Supported both by empirical evidence and by recent theoretical findings, there is a common belief that evolution strategies are robust and reliable, and frequently they are the method of choice if derivatives of the objective function are not at hand and neither differentiability nor numerical accuracy can be assumed. However, despite their widespread use, there is little exchange between members of the “classical” optimization community and people working in the field of evolutionary computation. It is our belief that both sides would benefit from such an exchange. In this paper, we present a brief outline of evolution strategies and discuss some of their properties in the presence of noise. We then empirically demonstrate that for a simple but nonetheless nontrivial noisy objective function, an evolution strategy outperforms other optimization algorithms designed to cope with noise. The environment in which the algorithms are tested is deliberately chosen to afford a transparency of the results that reveals the strengths and shortcomings of the strategies, making it possible to draw conclusions with regard to the design of better optimization algorithms for noisy environments.
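To give a flavor of the class of strategies discussed, here is a minimal (1,λ)-ES with log-normal step-size self-adaptation on a noisy sphere. This is my sketch only; the paper's test function, strategy variant, and parameter values differ.

```python
import random, math

def noisy_sphere(x, rng, noise_sd=0.01):
    """Sphere function plus additive Gaussian evaluation noise."""
    return sum(v * v for v in x) + rng.gauss(0, noise_sd)

def comma_es(dim=5, lam=10, gens=300, seed=3):
    """(1,lambda)-ES with log-normal step-size self-adaptation."""
    rng = random.Random(seed)
    x, sigma = [1.0] * dim, 0.5
    tau = 1.0 / math.sqrt(dim)                 # common learning-rate choice
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            s = sigma * math.exp(tau * rng.gauss(0, 1))   # mutate step size
            y = [v + s * rng.gauss(0, 1) for v in x]      # isotropic Gaussian step
            offspring.append((noisy_sphere(y, rng), y, s))
        _, x, sigma = min(offspring)           # comma selection on noisy fitness
    return x

x_final = comma_es()
```

Because selection acts only on noisy fitness values, progress stalls once the true objective falls to roughly the noise magnitude, which is the regime the paper investigates.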
Combining Mutation Operators in Evolutionary Programming
, 1998
Abstract

Cited by 28 (0 self)
Traditional investigations with evolutionary programming (EP) for continuous parameter optimization problems have used a single mutation operator with a parameterized probability density function (pdf), typically a Gaussian. Using a variety of mutation operators that can be combined during evolution to generate pdfs of varying shapes could hold the potential for producing better solutions with less computational effort. In view of this, a linear combination of Gaussian and Cauchy mutations is proposed. Simulations indicate that both the adaptive and nonadaptive versions of this operator are capable of producing solutions that are statistically as good as, or better than, those produced when using Gaussian or Cauchy mutations alone.
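A nonadaptive version of such an operator can be sketched as below. The weight `w` is my illustrative notation, not the paper's; an adaptive version would evolve the weight alongside the solution.

```python
import random, math

def combined_mutation(x, sigma, w=0.5, rng=random):
    """Perturb each coordinate by a fixed linear blend of a Gaussian and a
    Cauchy variate, giving a pdf whose shape interpolates between the two.
    The weight w is a hypothetical parameter for illustration."""
    def cauchy():
        # standard Cauchy variate via the inverse CDF
        return math.tan(math.pi * (rng.random() - 0.5))
    return [v + sigma * (w * rng.gauss(0, 1) + (1 - w) * cauchy()) for v in x]

child = combined_mutation([0.0, 0.0, 0.0], sigma=0.1, rng=random.Random(0))
```

With `w` near 1 the operator behaves like a Gaussian mutation (local search); with `w` near 0 the Cauchy tail dominates, producing occasional very long jumps.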
A Convergence Analysis of Unconstrained and Bound Constrained Evolutionary Pattern Search
 Evolutionary Computation
, 1999
Abstract

Cited by 12 (5 self)
We present and analyze a class of evolutionary algorithms for unconstrained and bound-constrained optimization on ℝ^n: evolutionary pattern search algorithms (EPSAs). EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. We show that EPSAs can be cast as stochastic pattern search methods, and we use this observation to prove that EPSAs have a probabilistic, weak stationary point convergence theory. This convergence theory is distinguished by the fact that the analysis does not approximate the stochastic process of EPSAs, and hence it exactly characterizes their convergence properties. Keywords: evolutionary pattern search, local convergence, bound constraints, parameter adaptation.
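The success-driven step-size mechanism can be caricatured with a pattern-search-style loop. This is a loose sketch of the idea only: actual EPSAs are stochastic, population-based algorithms, and the contraction factor `theta` is an illustrative parameter.

```python
import random

def pattern_step(f, x, delta, rng, theta=0.5):
    """Try +/-delta moves along each axis (in random order); keep the best
    improving trial, or contract the step size when nothing improves."""
    trials = []
    for i in range(len(x)):
        for sign in (1, -1):
            y = x[:]
            y[i] += sign * delta
            trials.append(y)
    rng.shuffle(trials)                   # a nod to the stochastic exploration of EPSAs
    best = min(trials, key=f)
    if f(best) < f(x):
        return best, delta                # success: keep the current step size
    return x, theta * delta               # failure: contract, as in pattern search

sphere = lambda x: sum(v * v for v in x)
x, delta, rng = [1.0, 1.0], 0.5, random.Random(0)
for _ in range(40):
    x, delta = pattern_step(sphere, x, delta, rng)
```

Because the trial points stay on a lattice scaled by `delta`, the step size shrinks only near (approximate) stationary points, which is the property the convergence theory exploits.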
When Do Heavy-Tail Distributions Help?
Abstract

Cited by 5 (2 self)
Abstract. We examine the evidence for the widespread belief that heavy-tail distributions enhance the search for minima on multimodal objective functions. We analyze isotropic and anisotropic heavy-tail Cauchy distributions and investigate the probability of sampling a better solution, depending on the step length and the dimensionality of the search space. The probability decreases rapidly with increasing step length for isotropic Cauchy distributions and moderate search space dimension. The anisotropic Cauchy distribution maintains a large probability of sampling large steps along the coordinate axes, resulting in an exceptionally good performance on the separable multimodal Rastrigin function. In contrast, on a non-separable rotated Rastrigin function, or for the isotropic Cauchy distribution, the performance difference to a Gaussian search distribution is negligible.
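The isotropic/anisotropic distinction is easiest to see as two samplers. This is my sketch: the isotropic variant uses the standard multivariate-Cauchy construction (a Gaussian vector divided by an independent half-normal), which may differ in detail from the paper's setup.

```python
import random, math

def anisotropic_cauchy(dim, rng):
    """One independent 1-D Cauchy variate per coordinate: extremely long
    steps occur, but they are almost always aligned with a coordinate axis."""
    return [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(dim)]

def isotropic_cauchy(dim, rng):
    """Multivariate Cauchy (t-distribution with 1 d.o.f.): a Gaussian
    direction scaled by a heavy-tailed radius, identical in every direction."""
    g = [rng.gauss(0, 1) for _ in range(dim)]
    z = abs(rng.gauss(0, 1))
    return [v / z for v in g]

rng = random.Random(0)
a = anisotropic_cauchy(10, rng)
b = isotropic_cauchy(10, rng)
```

The axis alignment of the anisotropic sampler is exactly what makes it effective on separable functions like Rastrigin and ineffective once the function is rotated.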
Modeling Genetic Algorithms with Interacting Particle Systems
 In Theoretical Aspects of Evolutionary Computing
, 2001
Abstract

Cited by 5 (0 self)
We present in this work a natural Interacting Particle System (IPS) approach for modeling and studying the asymptotic behavior of Genetic Algorithms (GAs). In this model, a population is seen as a distribution (or measure) on the search space, and the Genetic Algorithm as a measure-valued dynamical system. This model allows one to apply recent convergence results from the IPS literature to study the convergence of genetic algorithms when the size of the population tends to infinity. We first review a number of approaches to Genetic Algorithm modeling and related convergence results. We then describe a general and abstract discrete-time Interacting Particle System model for GAs, and we propose a brief review of some recent asymptotic results about the convergence of the N-particle approximating model (of finite, N-sized-population GAs) towards the IPS model (of infinite-population GAs), including law of large numbers theorems, L^p mean and exponential bounds, as well as large deviations...
Bitwise Regularity and GA-Hardness
, 1998
Abstract

Cited by 5 (1 self)
We present in this paper a theoretical analysis that relates an irregularity measure of a fitness function to so-called GA-deception. This approach is a continuation of a work [18] that presented a deception analysis of Hölder functions. The analysis developed here generalizes that work in two ways: first, we use a "bitwise regularity" instead of a Hölder exponent as the basis for our deception analysis; second, we perform a similar deception analysis of a GA with uniform crossover. We finally propose to use the bitwise regularity coefficients to analyze the influence of a chromosome encoding on GA efficiency, and we present experiments with Gray encoding.
Local Convergence Rate of Evolutionary Algorithm with Combined Mutation Operator
Abstract
Abstract An appropriate mutation operator of an evolutionary algorithm (EA) maintains a balance between exploration and exploitation. This balance is usually achieved by using combined mutation operators (CMOs) of the Gaussian and Cauchy random variables. This paper studies the convergence property of the CMO. As a good model of the CMO, we propose to use the decision factor a, the probability of choosing the Gaussian random variable over the Cauchy random variable for a mutation operator. This paper shows that the optimal convergence rate and the associated optimal mutation step size are monotonically decreasing with respect to a. Keywords: evolutionary algorithm, combined mutation operator, Cauchy mutation, convergence rate.
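The decision-factor model described above can be sketched as a mixture sampler; step-size handling is simplified here, and the function names are mine.

```python
import random, math

def cmo_mutate(x, sigma, a, rng):
    """Combined mutation operator driven by the decision factor a: with
    probability a take a Gaussian step, otherwise a Cauchy step.
    (A sketch of the model in the abstract, not the paper's exact operator.)"""
    if rng.random() < a:
        step = [rng.gauss(0, 1) for _ in x]                          # exploitation
    else:
        step = [math.tan(math.pi * (rng.random() - 0.5)) for _ in x]  # exploration
    return [v + sigma * s for v, s in zip(x, step)]

g = cmo_mutate([0.0], sigma=1.0, a=1.0, rng=random.Random(1))   # always Gaussian
c = cmo_mutate([0.0], sigma=1.0, a=0.0, rng=random.Random(1))   # always Cauchy
```

Setting a = 1 or a = 0 recovers the pure Gaussian and pure Cauchy operators as limiting cases, which is what makes a a convenient single parameter for the convergence analysis.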
Preventing Premature Convergence in a Simple EDA Via Global Step Size Setting
, 2008
Abstract
When a simple real-valued estimation of distribution algorithm (EDA) with a Gaussian model and maximum-likelihood estimation of parameters is used, it converges prematurely even on the slope of the fitness function. The simplest way of preventing premature convergence, multiplying the variance estimate by a constant factor k each generation, is studied. Recent works have shown that, as the dimensionality of the search space increases, such an algorithm very quickly becomes unable to traverse the slope and focus on the optimum at the same time. In this paper it is shown that when isotropic distributions with Gaussian or Cauchy distributed norms are used, the simple constant setting of k is able to ensure reasonable behaviour of the EDA on the slope and in the valley of the fitness function at the same time.
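The baseline scheme the abstract studies (a Gaussian EDA whose ML variance estimate is multiplied by k) can be sketched as below; all parameter values and names are illustrative, not taken from the paper.

```python
import random, statistics

def simple_eda(f, dim=2, pop=100, elite=25, gens=60, k=1.3, seed=7):
    """Univariate Gaussian EDA (minimization). Each generation fits mean and
    standard deviation to the selected individuals by maximum likelihood and
    multiplies the deviation by a constant k > 1 to fight premature collapse."""
    rng = random.Random(seed)
    mu, sd = [0.0] * dim, [10.0] * dim
    for _ in range(gens):
        population = [[rng.gauss(mu[i], sd[i]) for i in range(dim)]
                      for _ in range(pop)]
        selected = sorted(population, key=f)[:elite]      # truncation selection
        for i in range(dim):
            col = [ind[i] for ind in selected]
            mu[i] = statistics.fmean(col)
            sd[i] = k * statistics.pstdev(col) + 1e-12    # variance inflation by k
    return mu

shifted_sphere = lambda x: sum((v - 3.0) ** 2 for v in x)
mu = simple_eda(shifted_sphere)
```

With k = 1 the truncation selection shrinks the variance every generation and the model can collapse before reaching the optimum; k > 1 offsets that shrinkage, and the paper's point is how to keep one constant k adequate on both the slope and the valley.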