Results 1-10 of 18
Predictive Models for the Breeder Genetic Algorithm - I. Continuous Parameter Optimization
 EVOLUTIONARY COMPUTATION
, 1993
Cited by 342 (25 self)
In this paper a new genetic algorithm called the Breeder Genetic Algorithm (BGA) is introduced. The BGA is based on artificial selection similar to that used by human breeders. A predictive model for the BGA is presented which is derived from quantitative genetics. The model is used to predict the behavior of the BGA for simple test functions. Different mutation schemes are compared by computing the expected progress to the solution. The numerical performance of the BGA is demonstrated on a test suite of multimodal functions. The number of function evaluations needed to locate the optimum scales only as n ln(n) where n is the number of parameters. Results up to n = 1000 are reported.
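The breeder-style selection loop described above (truncate the population to the best fraction, as a human breeder would, then recombine and mutate) can be sketched as follows. This is an illustrative minimal version for minimization, not the paper's exact operators; the truncation fraction, discrete recombination, and the 2^-k mutation steps are simplified assumptions:

```python
import numpy as np

def breeder_ga(f, n, pop_size=50, truncation=0.3, gens=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal breeder-GA sketch (minimization): truncation selection,
    discrete recombination, and coarse-to-fine mutation steps."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, n))
    for _ in range(gens):
        fitness = np.array([f(ind) for ind in pop])
        order = np.argsort(fitness)                      # ascending: best first
        parents = pop[order[: max(2, int(truncation * pop_size))]]
        children = np.empty_like(pop)
        children[0] = pop[order[0]]                      # keep the current best
        for i in range(1, pop_size):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(n) < 0.5, a, b)  # discrete recombination
            mut = rng.random(n) < 1.0 / n                # ~one gene mutated on average
            sign = rng.choice([-1.0, 1.0], size=n)
            step = 0.1 * (hi - lo) * 2.0 ** -rng.integers(0, 16, size=n)
            children[i] = np.clip(child + mut * sign * step, lo, hi)
        pop = children
    fitness = np.array([f(ind) for ind in pop])
    best = pop[np.argmin(fitness)]
    return best, float(f(best))
```

On a smooth unimodal function this sketch converges because the elite individual is never lost and the mutation steps range from coarse exploration down to fine local refinement.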
Finite Markov Chain Results in Evolutionary Computation: A Tour d'Horizon
, 1998
Cited by 57 (2 self)
The theory of evolutionary computation has been enhanced rapidly during the last decade. This survey attempts to summarize the results regarding the limit and finite-time behavior of evolutionary algorithms with finite search spaces and discrete time scale. Results on evolutionary algorithms beyond finite spaces and discrete time are also presented, but with reduced elaboration. Keywords: evolutionary algorithms, limit behavior, finite time behavior. 1. Introduction. The field of evolutionary computation is mainly engaged in the development of optimization algorithms whose design is inspired by principles of natural evolution. In most cases, the optimization task is of the following type: find an element x* ∈ X such that f(x*) ≥ f(x) for all x ∈ X, where f: X → ℝ is the objective function to be maximized and X the search set. In the terminology of evolutionary computation, an individual is represented by an element of the Cartesian product X × A, where A is a possibly...
Convergence of non-elitist strategies
 In Proceedings of the First IEEE Conference on Computational Intelligence
, 1994
Cited by 33 (4 self)
Abstract. This paper offers sufficient conditions to prove global convergence of non-elitist evolutionary algorithms. If these conditions can be applied, they yield bounds on the convergence rate as a by-product. This is demonstrated by an example that can be calculated exactly.
On Correlated Mutations in Evolution Strategies
 Parallel Problem Solving from Nature
, 1992
Cited by 25 (1 self)
In this paper we are interested mainly in the convergence speed of ESs, so for simplicity we shall assume that f has only one local minimum, which is of course the global one. First work on this topic has been done by [7], [9] and [8], who calculated the convergence rate of a stochastic process w.r.t. the problem
Convergence rates of evolutionary algorithms for a class of convex objective functions
 Control and Cybernetics
, 1997
Cited by 21 (1 self)
Probabilistic optimization algorithms that mimic the process of biological evolution are usually subsumed under the term 'evolutionary algorithms.' This work extends the convergence theory of evolutionary algorithms by presenting a sufficient convergence condition for those evolutionary algorithms that do not necessarily generate a sequence of feasible points whose associated objective function values decrease monotonically to the global minimum. Moreover, it is investigated how fast the sequence of objective function values generated by an evolutionary algorithm approaches the minimum of strongly convex functions in a probabilistic sense. The theoretical analysis presented here is distinguished from related studies in three points: First, it does not require advanced calculus. Second, only the first partial derivatives of the objective function are assumed to exist. Third, one obtains sharp bounds on the convergence rates for a class of functions that is a superset of the class of quadratic functions with positive definite Hessian matrix.
Improving Hit-and-Run for global optimization
 J. Global Optim
, 1993
Cited by 20 (6 self)
Abstract. Improving Hit-and-Run is a random search algorithm for global optimization that at each iteration generates a candidate point for improvement that is uniformly distributed along a randomly chosen direction within the feasible region. The candidate point is accepted as the next iterate if it offers an improvement over the current iterate. We show that for positive definite quadratic programs, the expected number of function evaluations needed to arbitrarily well approximate the optimal solution is at most O(n^{5/2}), where n is the dimension of the problem. Improving Hit-and-Run when applied to global optimization problems can therefore be expected to converge polynomially fast as it approaches the global optimum. Key words. Random search, Monte Carlo optimization, algorithm complexity, global optimization.
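The iteration described in this abstract (draw a uniform random direction, sample the candidate uniformly on the feasible line segment through the current point, accept only improvements) can be sketched for a box-constrained minimization problem. The box bounds and iteration budget here are illustrative assumptions, not part of the original analysis:

```python
import numpy as np

def improving_hit_and_run(f, x0, lower, upper, iters=2000, seed=0):
    """Sketch of Improving Hit-and-Run on the box lower <= x <= upper (minimization)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)                  # uniform random direction
        # feasible range of t such that lower <= x + t*d <= upper;
        # t = 0 is always feasible since x stays inside the box
        with np.errstate(divide="ignore", invalid="ignore"):
            t1 = (lower - x) / d
            t2 = (upper - x) / d
        t_lo = np.max(np.minimum(t1, t2))
        t_hi = np.min(np.maximum(t1, t2))
        cand = x + rng.uniform(t_lo, t_hi) * d  # uniform on the feasible segment
        fc = f(cand)
        if fc < fx:                             # accept only improvements
            x, fx = cand, fc
    return x, fx
```

The per-coordinate interval arithmetic handles both signs of each direction component; intersecting the intervals gives the exact feasible segment through the current iterate.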
Parallel Approaches to Stochastic Global Optimization
 In Parallel Computing: From Theory to Sound Practice, W. Joosen and E. Milgrom, Eds., IOS
, 1992
Cited by 13 (5 self)
In this paper we review parallel implementations of some stochastic global optimization methods on MIMD computers. Moreover, we present a new parallel version of an Evolutionary Algorithm for global optimization, where the inherent parallelism can be scaled to obtain a reasonable processor utilization. For this algorithm, convergence to the global optimum with probability one can be assured. Test results concerning speed-up and reliability are given. 1 Introduction Many real-world problems in engineering and economics can be formulated as optimization problems in which the objective function is multimodal, i.e. the problem possesses many local minima. Compared to the number of methods designed to determine a local minimum, there are only a few methods which attempt to find the global minimum (see [52] for a survey). Although there are some special cases where the global optimum can be found (see [26]), the general case is unsolvable. This paper will be restricted to the more gener...
Natural Evolution and Collective Optimum-Seeking
, 1992
Cited by 6 (0 self)
On the one hand, many people admire the often strikingly efficient results of organic evolution; on the other hand, however, they presuppose mutation and selection to be a rather prodigal and inefficient trial-and-error strategy like Monte-Carlo sampling. Taking into account the parallel processing of a heterogeneous population and sexual propagation with recombination, as well as the endogenous adaptation of strategy characteristics, simulated evolution reveals a couple of interesting, sometimes surprising, properties of nature's learning-by-doing algorithm. 'Survival of the fittest', often taken as Darwin's view, turns out to be bad advice. Forgetting, i.e. individual death, and even regression turn out to be necessary ingredients of the life game. Whether the process should be termed gradualistic or punctualistic is a matter of the observer's point of view. He might even observe 'long waves'. 1. INTRODUCTION Evolution can be looked at from a large variety of positions. Beginning wi...
Tuning & Simplifying Heuristical Optimization
, 2010
Cited by 4 (0 self)
This thesis is about the tuning and simplification of black-box (direct-search, derivative-free) optimization methods, which by definition do not use gradient information to guide their search for an optimum but merely need a fitness (cost, error, objective) measure for each candidate solution to the optimization problem. Such optimization methods often have parameters that influence their behaviour and efficacy. A Meta-Optimization technique is presented here for tuning the behavioural parameters of an optimization method by employing an additional layer of optimization. This is used in a number of experiments on two popular optimization methods, Differential Evolution and Particle Swarm Optimization, and unveils the true performance capabilities of an optimizer in different usage scenarios. It is found that state-of-the-art optimizer variants with their supposedly adaptive behavioural parameters do not have a general and consistent performance advantage but are outperformed in several cases by simplified optimizers, if only the behavioural parameters are tuned properly.
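The additional layer of optimization described in this abstract can be illustrated with a toy setup: an outer random search tunes the single behavioural parameter (Gaussian mutation step size) of a simple inner hill climber, scoring each setting by its average result over repeated runs. The inner optimizer, benchmark function, and search ranges are hypothetical stand-ins, not the Differential Evolution or Particle Swarm experiments of the thesis:

```python
import random

def sphere(x):
    # illustrative benchmark: minimum 0 at the origin
    return sum(xi * xi for xi in x)

def hill_climber(f, dim, step, iters, rng):
    """Toy inner optimizer: a (1+1)-style search whose one behavioural
    parameter is the Gaussian mutation step size."""
    x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    fx = f(x)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return fx

def meta_optimize(trials=30, seed=1):
    """Outer layer: random search over the inner step size, scoring each
    candidate setting by its average final value over a few independent runs."""
    rng = random.Random(seed)
    best_step, best_score = 1.0, float("inf")
    for _ in range(trials):
        step = 10.0 ** rng.uniform(-3.0, 1.0)  # candidate parameter setting
        score = sum(hill_climber(sphere, 3, step, 200, rng) for _ in range(5)) / 5.0
        if score < best_score:
            best_step, best_score = step, score
    return best_step, best_score
```

Averaging over several runs per setting matters because the inner optimizer is stochastic; a single run would make the outer search chase noise.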