Results 1 - 10 of 145
Bayesian Optimization Algorithm: From Single Level to Hierarchy
, 2002
"... There are four primary goals of this dissertation. First, design a competent optimization algorithm capable of learning and exploiting appropriate problem decomposition by sampling and evaluating candidate solutions. Second, extend the proposed algorithm to enable the use of hierarchical decompositi ..."
Abstract

Cited by 101 (19 self)
There are four primary goals of this dissertation. First, design a competent optimization algorithm capable of learning and exploiting appropriate problem decomposition by sampling and evaluating candidate solutions. Second, extend the proposed algorithm to enable the use of hierarchical decomposition as opposed to decomposition on only a single level. Third, design a class of difficult hierarchical problems that can be used to test the algorithms that attempt to exploit hierarchical decomposition. Fourth, test the developed algorithms on the designed class of problems and several real-world applications. The dissertation proposes the Bayesian optimization algorithm (BOA), which uses Bayesian networks to model the promising solutions found so far and sample new candidate solutions. BOA is theoretically and empirically shown to be capable of both learning a proper decomposition of the problem and exploiting the learned decomposition to ensure robust and scalable search for the optimum across a wide range of problems. The dissertation then identifies important features that must be incorporated into the basic BOA to solve problems that are not decomposable on a single level, but that can still be solved by decomposition over multiple levels of difficulty. Hierarchical ...
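The abstract's core loop (model the promising solutions, then sample new candidates from the model) can be illustrated with a heavily simplified sketch. BOA itself learns a full Bayesian network over the variables; the sketch below substitutes the simplest possible model, independent per-bit marginal frequencies, and uses a toy onemax objective that is not taken from the dissertation:

```python
import random

def onemax(x):
    # Toy objective: count of 1-bits (a placeholder, not a test problem
    # from the dissertation).
    return sum(x)

def eda_step(population, fitness, n_select):
    # Select the most promising solutions found so far.
    selected = sorted(population, key=fitness, reverse=True)[:n_select]
    # Model them: here just per-bit marginal frequencies. BOA would
    # instead learn a Bayesian network capturing dependencies between bits.
    n = len(selected[0])
    p = [sum(x[i] for x in selected) / len(selected) for i in range(n)]
    # Sample a new candidate population from the learned model.
    return [[1 if random.random() < p[i] else 0 for i in range(n)]
            for _ in range(len(population))]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(60)]
for _ in range(30):
    pop = eda_step(pop, onemax, n_select=20)
best = max(pop, key=onemax)
print(onemax(best))
```

Because this sketch models each bit independently, it cannot learn the problem decompositions that motivate BOA; it only shows the shared build-and-sample skeleton.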
Global Optimization Algorithms - Theory and Application
, 2011
"... This ebook is devoted to Global Optimization algorithms, which are methods for finding solutions of high quality for an incredible wide range of problems. We introduce the basic concepts of optimization and discuss features which make optimization problems difficult and thus, should be considered ..."
Abstract

Cited by 94 (26 self)
This ebook is devoted to Global Optimization algorithms, which are methods for finding solutions of high quality for an incredibly wide range of problems. We introduce the basic concepts of optimization and discuss features which make optimization problems difficult and which should thus be considered when trying to solve them. In this book, we focus on ...
Dimensional Analysis of Allele-Wise Mixing Revisited
 Parallel Problem Solving From Nature - PPSN IV
, 1998
"... . This paper revisits an important, yet poorly understood, phenomenon of genetic optimisation, namely the mixing or juxtapositioning capacity of recombination, and its relation to selection. Mixing is a key factor in order to determine when a genetic algorithm will converge to the global optimum, or ..."
Abstract

Cited by 8 (0 self)
 Add to MetaCart
will prematurely converge to a suboptimal solution. Previous work (Goldberg, Deb & Thierens, 1993) mad...
Domino Convergence, Drift, and the Temporal-Salience Structure of Problems
, 1998
"... The convergence speed of building blocks depends on their marginal fitness contribution or on the salience structure of the problem. We use a sequential parameterization approach to build models of the differential convergence behavior, and derive time complexities for the boundary case which is obt ..."
Abstract

Cited by 57 (19 self)
The convergence speed of building blocks depends on their marginal fitness contribution, or on the salience structure of the problem. We use a sequential parameterization approach to build models of the differential convergence behavior, and derive time complexities for the boundary case, which is obtained with an exponentially scaled problem (BinInt). We show that this domino convergence time complexity is linear in the number of building blocks, O(l), for selection algorithms with constant selection intensity (such as tournament selection and (µ,λ)- or truncation selection), and exponential, O(2^l), for proportionate selection. These complexities should be compared with the convergence speed for uniformly salient problems, which is respectively O(√l) and O(l ln l). In addition we relate this facetwise model to a genetic drift model, and identify where and when the stochastic fluctuations due to drift overwhelm the domino convergence, resulting in drift stall. The combined mo ...
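The domino effect the abstract describes can be observed directly on an exponentially scaled fitness function: with tournament selection, the high-salience (leftmost) bits converge first while low-salience bits are still drifting. The sketch below is a minimal illustration with arbitrary parameter choices (string length, population size, one-point crossover, no mutation); it is not the paper's model:

```python
import random

def binint(x):
    # Exponentially scaled problem: bit i contributes 2^(l-1-i), so the
    # leftmost bits dominate fitness (highest salience).
    l = len(x)
    return sum(b * 2 ** (l - 1 - i) for i, b in enumerate(x))

def tournament(population, fitness, k=2):
    # Constant-selection-intensity scheme: best of k random competitors.
    return max(random.sample(population, k), key=fitness)

def generation(pop, fitness):
    new = []
    for _ in range(len(pop)):
        a, b = tournament(pop, fitness), tournament(pop, fitness)
        cut = random.randrange(1, len(a))  # one-point crossover
        new.append(a[:cut] + b[cut:])
    return new

random.seed(1)
l, n = 16, 200
pop = [[random.randint(0, 1) for _ in range(l)] for _ in range(n)]
for _ in range(40):
    pop = generation(pop, binint)
    freq = [sum(x[i] for x in pop) / n for i in range(l)]
# The salient leftmost bits should be (near-)fixed at 1, while the
# low-salience rightmost bits may still be fluctuating under drift.
print([round(f, 2) for f in freq[:4]], [round(f, 2) for f in freq[-4:]])
```

Comparing the left and right ends of the printed frequency vector shows the sequential, left-to-right fixation that gives domino convergence its name.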
unknown title
"... robert engelman, daniel pauly, dirk zeller, ronald g. prinn, john k. pinnegar and nicholas v.c. polunin INTRODUCTION TO THE BOOK Evidence of human damage to natural resources and the ..."
Abstract
Robert Engelman, Daniel Pauly, Dirk Zeller, Ronald G. Prinn, John K. Pinnegar and Nicholas V.C. Polunin. INTRODUCTION TO THE BOOK. Evidence of human damage to natural resources and the ...
Scalability Problems of Simple Genetic Algorithms
 Evolutionary Computation
, 1999
"... Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simpl ..."
Abstract

Cited by 49 (5 self)
Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm ...
Continuous Iterated Density Estimation Evolutionary Algorithms Within The IDEA Framework
, 2000
"... In this paper, we formalize the notion of performing optimization by iterated density estimation evolutionary algorithms as the IDEA framework. These algorithms build probabilistic models and estimate probability densities based upon a selection of available points. We show how these probabili ..."
Abstract

Cited by 49 (5 self)
In this paper, we formalize the notion of performing optimization by iterated density estimation evolutionary algorithms as the IDEA framework. These algorithms build probabilistic models and estimate probability densities based upon a selection of available points. We show how these probabilistic models can be built and used for different probability density functions within the IDEA framework. We put the emphasis on techniques for vectors of continuous random variables and thereby introduce new continuous evolutionary optimization algorithms.
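The iterate-select-estimate-resample loop described here can be sketched in a few lines. This is a hedged simplification of the IDEA idea, not an algorithm from the paper: it fits a factorized (per-dimension) normal density to the selected points and resamples from it, using a toy sphere objective and arbitrary parameter values:

```python
import random
import statistics

def sphere(x):
    # Toy continuous objective to minimize (a placeholder, not from the paper).
    return sum(v * v for v in x)

def idea_step(points, n_select):
    # Select the best available points, estimate a density from them,
    # then sample a new population from that density.
    sel = sorted(points, key=sphere)[:n_select]
    dims = len(sel[0])
    mus = [statistics.fmean(p[i] for p in sel) for i in range(dims)]
    sds = [statistics.stdev([p[i] for p in sel]) + 1e-12 for i in range(dims)]
    return [[random.gauss(mus[i], sds[i]) for i in range(dims)]
            for _ in range(len(points))]

random.seed(2)
pts = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(100)]
for _ in range(40):
    pts = idea_step(pts, n_select=30)
best = min(pts, key=sphere)
print(sphere(best))
```

The factorized normal is only one of the density choices the IDEA framework admits; richer models (e.g. with covariance structure) follow the same loop with a different estimation step.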
Generalized Convergence Models for Tournament and (µ,λ)-Selection
, 1995
"... Within this paper, a unified view of the dynamics of simplified evolutionary algorithms using tournament selection and (¯,)selection is presented. This research is inspired by recent articles of Thierens and Goldberg (1994) and Muhlenbein and SchlierkampVoosen (Muhlenbein and SchlierkampVoosen 19 ..."
Abstract
Within this paper, a unified view of the dynamics of simplified evolutionary algorithms using tournament selection and (µ,λ)-selection is presented. This research is inspired by recent articles of Thierens and Goldberg (1994) and Muhlenbein and Schlierkamp-Voosen (Muhlenbein and Schlierkamp-Voosen ...
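For readers unfamiliar with the two selection schemes being unified, a minimal sketch of each operator follows; the fitness function and parameter values are arbitrary placeholders, and the paper's actual contribution is the convergence models, not the operators themselves:

```python
import random

def tournament_select(pop, fitness, s=2):
    # Tournament selection: return the best of s uniformly drawn competitors.
    return max(random.sample(pop, s), key=fitness)

def mu_lambda_select(offspring, fitness, mu):
    # (mu,lambda)-selection: keep the mu best of the lambda offspring,
    # discarding the parents entirely.
    return sorted(offspring, key=fitness, reverse=True)[:mu]

random.seed(3)
pop = [random.random() for _ in range(20)]
fit = lambda x: x  # toy fitness: the value itself
winner = tournament_select(pop, fit)
survivors = mu_lambda_select(pop, fit, mu=5)
print(round(winner, 3), len(survivors))
```

Both operators exert a roughly constant selection intensity independent of the fitness scale, which is why the abstract above on domino convergence groups them together against proportionate selection.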
SEARCH, polynomial complexity, and the fast messy genetic algorithm
, 1995
"... Blackbox optimizationoptimization in presence of limited knowledge about the objective functionhas recently enjoyed a large increase in interest because of the demand from the practitioners. This has triggered a race for new high performance algorithms for solving large, difficult problems. Si ..."
Abstract

Cited by 58 (10 self)
Black-box optimization (optimization in the presence of limited knowledge about the objective function) has recently enjoyed a large increase in interest because of demand from practitioners. This has triggered a race for new high-performance algorithms for solving large, difficult problems. Simulated annealing, genetic algorithms, and tabu search are some examples. Unfortunately, each of these algorithms is turning into a separate field in itself, and their use in practice is often guided by personal discretion rather than scientific reasons. The primary reason behind this confusing situation is the lack of any comprehensive understanding of black-box search. This dissertation takes a step toward clearing some of the confusion. The main objectives of this dissertation are: 1. present SEARCH (Search Envisioned As Relation & Class Hierarchizing), an alternate perspective of black-box optimization, and its quantitative analysis that lays the foundation essential for transcending the limits of random enumerative search; 2. design and test the fast messy genetic algorithm. SEARCH is a general framework for understanding black-box optimization in terms of relations, ...
Learning Probability Distributions in Continuous Evolutionary Algorithms  A Comparative Review
 Natural Computing
, 2003
"... We present a comparative review of Evolutionary Algorithms that generate new population members by sampling a probability distribution constructed during the optimization process. We present a unifying formulation for five such algorithms that enables us to characterize them based on the parametriza ..."
Abstract

Cited by 56 (14 self)
We present a comparative review of Evolutionary Algorithms that generate new population members by sampling a probability distribution constructed during the optimization process. We present a unifying formulation for five such algorithms that enables us to characterize them based on the parametrization of the probability distribution, the learning methodology, and the use of historical information. The algorithms are evaluated on a number of test functions in order to assess their relative strengths and weaknesses. This comparative review helps to identify areas of applicability for the algorithms and to guide future algorithmic developments.