Results 1 - 10 of 66
Population-Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization and Competitive Learning
, 1994
"... Genetic algorithms (GAs) are biologically motivated adaptive systems which have been used, with varying degrees of success, for function optimization. In this study, an abstraction of the basic genetic algorithm, the Equilibrium Genetic Algorithm (EGA), and the GA in turn, are reconsidered within th ..."
Abstract

Cited by 298 (11 self)
Genetic algorithms (GAs) are biologically motivated adaptive systems which have been used, with varying degrees of success, for function optimization. In this study, an abstraction of the basic genetic algorithm, the Equilibrium Genetic Algorithm (EGA), and the GA in turn, are reconsidered within the framework of competitive learning. This new perspective reveals a number of different possibilities for performance improvements. This paper explores population-based incremental learning (PBIL), a method of combining the mechanisms of a generational genetic algorithm with simple competitive learning. The combination of these two methods yields a tool which is far simpler than a GA, and which outperforms a GA on a large set of optimization problems in terms of both speed and accuracy. This paper presents an empirical analysis of where the proposed technique will outperform genetic algorithms, and describes a class of problems in which a genetic algorithm may be able to perform better. Extensions to this algorithm are discussed and analyzed. PBIL and extensions are compared with a standard GA on twelve problems, including standard numerical optimization functions, traditional GA test suite problems, and NP-Complete problems.
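The core idea of PBIL as the abstract describes it can be captured in a few lines: a probability vector stands in for the GA's population, samples are drawn from it, and the vector is nudged toward the best sample with a competitive-learning-style update. The sketch below is a minimal illustration under these assumptions; the function names, learning rate, and sample counts are illustrative, not taken from the paper.

```python
import random

def pbil(fitness, n_bits, n_samples=20, lr=0.1, generations=100, seed=0):
    """Minimal PBIL sketch: evolve a probability vector instead of a population."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                      # start with an unbiased vector
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        # draw candidate bit strings from the current probability vector
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(n_samples)]
        winner = max(samples, key=fitness)
        if fitness(winner) > best_fit:
            best, best_fit = winner, fitness(winner)
        # competitive-learning update: move each probability toward the winner
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, winner)]
    return best, best_fit

# Example objective: OneMax (maximize the number of 1 bits)
solution, score = pbil(sum, n_bits=16)
```

On this toy objective the probability vector quickly saturates toward all ones; the per-bit update rule is the "simple competitive learning" component the abstract refers to.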
Removing the Genetics from the Standard Genetic Algorithm
, 1995
"... We present an abstraction of the genetic algorithm (GA), termed populationbased incremental learning (PBIL), that explicitly maintains the statistics contained in a GA's population, but which abstracts away the crossover operator and redefines the role of the population. This results in PBIL being ..."
Abstract

Cited by 178 (10 self)
We present an abstraction of the genetic algorithm (GA), termed population-based incremental learning (PBIL), that explicitly maintains the statistics contained in a GA's population, but which abstracts away the crossover operator and redefines the role of the population. This results in PBIL being simpler, both computationally and theoretically, than the GA. Empirical results reported elsewhere show that PBIL is faster and more effective than the GA on a large set of commonly used benchmark problems. Here we present results on a problem custom designed to benefit both from the GA's crossover operator and from its use of a population. The results show that PBIL performs as well as, or better than, GAs carefully tuned to do well on this problem. This suggests that even on problems custom designed for GAs, much of the power of the GA may derive from the statistics maintained implicitly in its population, and not from the population itself nor from the crossover operator.
Simulated annealing: Practice versus theory
 Mathl. Comput. Modelling
, 1993
"... this paper "ergodic" is used in a very weak sense, as it is not proposed, theoretically or practically, that all states of the system are actually to be visited ..."
Abstract

Cited by 156 (20 self)
In this paper, "ergodic" is used in a very weak sense, as it is not proposed, theoretically or practically, that all states of the system are actually to be visited.
Evaluating Evolutionary Algorithms
 Artificial Intelligence
, 1996
"... Test functions are commonly used to evaluate the effectiveness of different search algorithms. However, the results of evaluation are as dependent on the test problems as they are on the algorithms that are the subject of comparison. Unfortunately, developing a test suite for evaluating competing se ..."
Abstract

Cited by 86 (14 self)
Test functions are commonly used to evaluate the effectiveness of different search algorithms. However, the results of evaluation are as dependent on the test problems as they are on the algorithms that are the subject of comparison. Unfortunately, developing a test suite for evaluating competing search algorithms is difficult without clearly defined evaluation goals. In this paper we discuss some basic principles that can be used to develop test suites and we examine the role of test suites as they have been used to evaluate evolutionary search algorithms. Current test suites include functions that are easily solved by simple search methods such as greedy hill-climbers. Some test functions also have undesirable characteristics that are exaggerated as the dimensionality of the search space is increased. New methods are examined for constructing functions with different degrees of nonlinearity, where the interactions and the cost of evaluation scale with respect to the dimensionality of...
Deception Considered Harmful
 Foundations of Genetic Algorithms 2
, 1992
"... A central problem in the theory of genetic algorithms is the characterization of problems that are difficult for GAs to optimize. Many attempts to characterize such problems focus on the notion of Deception, defined in terms of the static average fitness of competing schemas. This article examines t ..."
Abstract

Cited by 73 (0 self)
A central problem in the theory of genetic algorithms is the characterization of problems that are difficult for GAs to optimize. Many attempts to characterize such problems focus on the notion of Deception, defined in terms of the static average fitness of competing schemas. This article examines the Static Building Block Hypothesis (SBBH), the underlying assumption used to define Deception. Exploiting contradictions between the SBBH and the Schema Theorem, we show that Deception is neither necessary nor sufficient for problems to be difficult for GAs. This article argues that the characterization of hard problems must take into account the basic features of genetic algorithms, especially their dynamic, biased sampling strategy. Keywords: Deception, building block hypothesis

1 INTRODUCTION
Since Holland's early work on the analysis of genetic algorithms (GAs), the usual approach has been to focus on the allocation of search effort to subspaces described by schemas representing hyper...
Adaptive simulated annealing (ASA): Lessons learned
 Control and Cybernetics
, 1996
"... Adaptive simulated annealing (ASA) is a global optimization algorithm based on an associated proof that the parameter space can be sampled much more efficiently than by using other previous simulated annealing algorithms. The author's ASA code has been publicly available for over two years. Durin ..."
Abstract

Cited by 72 (14 self)
Adaptive simulated annealing (ASA) is a global optimization algorithm based on an associated proof that the parameter space can be sampled much more efficiently than by previous simulated annealing algorithms. The author's ASA code has been publicly available for over two years. During this time the author has volunteered to help people via e-mail, and the feedback obtained has been used to further develop the code.
A Parallel Genetic Algorithm for the Set Partitioning Problem
, 1994
"... In this dissertation we report on our efforts to develop a parallel genetic algorithm and apply it to the solution of the set partitioning problema difficult combinatorial optimization problem used by many airlines as a mathematical model for flight crew scheduling. We developed a distributed stea ..."
Abstract

Cited by 66 (1 self)
In this dissertation we report on our efforts to develop a parallel genetic algorithm and apply it to the solution of the set partitioning problem, a difficult combinatorial optimization problem used by many airlines as a mathematical model for flight crew scheduling. We developed a distributed steady-state genetic algorithm in conjunction with a specialized local search heuristic for solving the set partitioning problem. The genetic algorithm is based on an island model where multiple independent subpopulations each run a steady-state genetic algorithm on their own subpopulation and occasionally fit strings migrate between the subpopulations. Tests on forty real-world set partitioning problems were carried out on up to 128 nodes of an IBM SP1 parallel computer. We found that performance, as measured by the quality of the solution found and the iteration on which it was found, improved as additional subpopulations were added to the computation. With larger numbers of subpopulations the genetic algorithm was regularly able to find the optimal solution to problems having up to a few thousand integer variables. In two cases, high-quality integer feasible solutions were found for problems with 36,699 and 43,749 integer variables, respectively. A notable limitation we found was the difficulty solving problems with many constraints.
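The island model described in this abstract combines two ideas: each subpopulation runs its own steady-state GA (replace the worst individual with a better offspring), and periodically the fittest string migrates to a neighbouring island. The following is a small, self-contained sketch of that scheme under assumed parameters (ring topology, tournament selection of size two, one-point crossover); the names and settings are illustrative and not taken from the dissertation, which additionally uses a specialized local search heuristic not shown here.

```python
import random

def steady_state_step(pop, fitness, rng, mut_rate=0.05):
    """One steady-state GA step: breed one child, replace the worst if better."""
    a = max(rng.sample(pop, 2), key=fitness)   # tournament selection, size 2
    b = max(rng.sample(pop, 2), key=fitness)
    cut = rng.randrange(1, len(a))             # one-point crossover
    child = [bit if rng.random() > mut_rate else 1 - bit
             for bit in a[:cut] + b[cut:]]     # per-bit mutation
    worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
    if fitness(child) > fitness(pop[worst]):
        pop[worst] = child

def island_ga(fitness, n_bits=20, n_islands=4, pop_size=10,
              steps=500, migrate_every=50, seed=0):
    rng = random.Random(seed)
    islands = [[[rng.randint(0, 1) for _ in range(n_bits)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for t in range(1, steps + 1):
        for pop in islands:
            steady_state_step(pop, fitness, rng)
        if t % migrate_every == 0:
            # ring migration: each island's fittest string replaces the
            # worst string on the next island
            migrants = [max(pop, key=fitness) for pop in islands]
            for i, m in enumerate(migrants):
                nxt = islands[(i + 1) % n_islands]
                worst = min(range(len(nxt)), key=lambda j: fitness(nxt[j]))
                nxt[worst] = list(m)
    return max((max(pop, key=fitness) for pop in islands), key=fitness)

best = island_ga(sum)   # OneMax as a stand-in objective
```

Because each island only ever replaces its worst member, the best string on an island never degrades, and migration lets a strong string seed neighbouring subpopulations, which is the mechanism behind the improvement the dissertation observes as subpopulations are added.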
Fitness Variance of Formae and Performance Prediction
, 1994
"... Representation is widely recognised as a key determinant of performance in evolutionary computation. The development of families of representationindependentoperators allows the formulation of formal representationindependent evolutionary algorithms. These formal algorithms can be instantiated ..."
Abstract

Cited by 61 (7 self)
Representation is widely recognised as a key determinant of performance in evolutionary computation. The development of families of representation-independent operators allows the formulation of formal representation-independent evolutionary algorithms. These formal algorithms can be instantiated for particular search problems by selecting a suitable representation. The performance of different representations, in the context of any given formal representation-independent algorithm, can then be measured. Simple analyses suggest that fitness variance of formae (generalised schemata) for the chosen representation might act as a performance predictor for evolutionary algorithms. This hypothesis is tested and supported through studies of four different representations for the travelling salesrep problem (TSP) in the context of both formal representation-independent genetic algorithms and corresponding memetic algorithms.

1 Motivation
The subject of this paper is representation i...
An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics
, 1995
"... This report is a repository for the results obtained from a large scale empirical comparison of seven iterative and evolutionbased optimization heuristics. Twentyseven static optimization problems, spanning six sets of problem classes which are commonly explored in genetic algorithm literature, ..."
Abstract

Cited by 50 (7 self)
This report is a repository for the results obtained from a large-scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, bin-packing, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. The algorithms tested and the encodings of the problems are described in detail for reproducibility.