Results 1–10 of 18
Genetic Algorithms, Noise, and the Sizing of Populations
 Complex Systems
, 1991
Abstract

Cited by 244 (85 self)
This paper considers the effect of stochasticity on the quality of convergence of genetic algorithms (GAs). In many problems, the variance of building-block fitness or so-called collateral noise is the major source of variance, and a population-sizing equation is derived to ensure that average signal-to-collateral-noise ratios are favorable to the discrimination of the best building blocks required to solve a problem of bounded deception. The sizing relation is modified to permit the inclusion of other sources of stochasticity, such as the noise of selection, the noise of genetic operators, and the explicit noise or nondeterminism of the objective function. In a test suite of five functions, the sizing relation proves to be a conservative predictor of average correct convergence, as long as all major sources of noise are considered in the sizing calculation. These results suggest how the sizing equation may be viewed as a coarse delineation of a boundary between what a physicist might call two distinct phases of GA behavior. At low population sizes the GA makes many errors of decision, and the quality of convergence is largely left to the vagaries of chance or the serial fixup of flawed results through mutation or other serial injection of diversity. At large population sizes, GAs can reliably discriminate between good and bad building blocks, and parallel processing and recombination of building blocks lead to quick solution of even difficult deceptive problems. Additionally, the paper outlines a number of extensions to this work, including the development of more refined models of the relation between generational average error and ultimate convergence quality, the development of online methods for sizing populations via the estimation of populations...
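The flavor of the sizing argument can be illustrated with a generic signal-versus-noise sample-size bound. The sketch below is a simplified stand-in for the paper's sizing equation, not a reproduction of it: the function name, the z-value, and the partition scaling are illustrative assumptions.

```python
import math

def conservative_population_size(signal_d, collateral_sigma,
                                 num_partitions=1, z_alpha=1.645):
    """Sample size needed so a mean fitness difference `signal_d` between two
    competing building blocks stands out against collateral-noise standard
    deviation `collateral_sigma` at one-sided confidence level z_alpha.

    This uses the generic two-sample bound n >= 2 * (z * sigma / d)^2, scaled
    by the number of competing partitions -- a simplified stand-in for the
    paper's sizing relation, not the paper's exact equation.
    """
    n = 2.0 * (z_alpha * collateral_sigma / signal_d) ** 2
    return math.ceil(n * num_partitions)

# Halving the signal (with noise fixed) quadruples the required size,
# which is the qualitative behavior the paper's sizing relation exhibits.
n1 = conservative_population_size(signal_d=1.0, collateral_sigma=4.0)
n2 = conservative_population_size(signal_d=0.5, collateral_sigma=4.0)
```

Note how the bound is quadratic in the noise-to-signal ratio: discrimination between building blocks, not raw search effort, drives the population size.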
What Makes a Problem Hard for a Genetic Algorithm? Some Anomalous Results and Their Explanation
 Machine Learning
, 1993
Abstract

Cited by 103 (3 self)
Abstract. What makes a problem easy or hard for a genetic algorithm (GA)? This question has become increasingly important as people have tried to apply the GA to ever more diverse types of problems. Much previous work on this question has studied the relationship between GA performance and the structure of a given fitness function when it is expressed as a Walsh polynomial. The work of Bethke, Goldberg, and others has produced certain theoretical results about this relationship. In this article we review these theoretical results, and then discuss a number of seemingly anomalous experimental results reported by Tanese concerning the performance of the GA on a subclass of Walsh polynomials, some members of which were expected to be easy for the GA to optimize. Tanese found that the GA was poor at optimizing all functions in this subclass, that a partitioning of a single large population into a number of smaller independent populations seemed to improve performance, and that hill-climbing outperformed both the original and partitioned forms of the GA on these functions. These results seemed to contradict several commonly held expectations about GAs. We begin by reviewing schema processing in GAs. We then give an informal description of how Walsh analysis and Bethke's Walsh-schema transform relate to GA performance, and we discuss the relevance of this analysis for GA applications in optimization and machine learning. We then describe Tanese's surprising results, examine them experimentally and theoretically, and propose and evaluate some explanations. These explanations lead to a more fundamental question about GAs: what are the features of problems that determine the likelihood of successful GA performance?
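The Walsh expansion the article analyzes can be computed directly for small string lengths. The sketch below is a minimal illustration (the function names are ours, and the exhaustive 2^l sum is only feasible for small l):

```python
def walsh_coefficients(f, l):
    """Walsh transform of a fitness function f over {0,1}^l, with strings
    represented as integers.

    w[j] = (1/2^l) * sum_x f(x) * (-1)^popcount(j & x), so that
    f(x) = sum_j w[j] * (-1)^popcount(j & x) -- i.e., the expansion of f as
    a Walsh polynomial, the representation studied in the article.
    """
    n = 1 << l
    return [sum(f(x) * (-1) ** bin(j & x).count("1") for x in range(n)) / n
            for j in range(n)]

# Example: OneMax (fitness = number of 1-bits).  Its Walsh polynomial is low
# order: w[0] = l/2, w[j] = -1/2 for each single-bit index j, all others 0.
l = 4
w = walsh_coefficients(lambda x: bin(x).count("1"), l)
```

Low-order coefficients like these correspond to short, low-order schemata; the anomalies discussed above arise for functions whose significant Walsh coefficients are structured quite differently.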
The Parameter-Less Genetic Algorithm: Rational And Automated Parameter Selection For Simplified Genetic Algorithm Operation
, 2000
Abstract

Cited by 16 (2 self)
Genetic algorithms (GAs) have been used to solve difficult optimization problems in a number of fields. One of the advantages of these algorithms is that they operate well even in domains where little is known, thus giving the GA the flavor of a general purpose problem solver. However, in order to solve a problem with the GA, the user usually has to specify a number of parameters that have little to do with the user's problem, and have more to do with the way the GA operates. This dissertation presents a technique that greatly simplifies the GA operation by relieving the user from having to set these parameters. Instead, the parameters are set automatically by the algorithm itself. The validity of the approach is illustrated with artificial problems often used to test GA techniques, and also with a simplified version of a network expansion problem.
Implicit Parallelism in Genetic Algorithms
, 1993
Abstract

Cited by 14 (3 self)
This paper is related to Holland's result on implicit parallelism. Roughly speaking, Holland showed a lower bound of the order of n^3/(c_1 l) on the number of schemata usefully processed by the genetic algorithm in a population of n = c_1 · 2^l binary strings, with c_1 a small integer. We analyze the case of a population of n = 2^(b·l) binary strings, where b is a positive parameter (Holland's result corresponds to the case b = 1). In the main result, for all b > 0 we state a lower bound on the expected number of processed schemata; moreover, we prove that this bound is tight up to a constant for all b ≥ 1 and, in this case, we strengthen the previous result in probability. _______________________________ * This paper has appeared in Artificial Intelligence, 61(2), 307-314. + Università Statale di Milano, Dip. di Scienze dell'Informazione, via Comelico 39, 20135 Milano, Italy. # International Computer Science Institute, Berkeley, CA 94704, and Progetto di Intelligenza Artificiale e R...
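For small l, the quantity behind "implicit parallelism" can at least be approached concretely by exhaustively counting the distinct schemata a population samples. The sketch below illustrates that counting, not the paper's bound; the string length, population size, and seed are arbitrary choices of ours.

```python
import random

def schemata_of(string_bits, l):
    """All 2^l schemata over {0,1,*} matched by one length-l binary string:
    each subset of positions may be replaced by the wildcard '*'."""
    out = set()
    for mask in range(1 << l):
        out.add(tuple('*' if (mask >> i) & 1 else string_bits[i]
                      for i in range(l)))
    return out

def distinct_schemata(population, l):
    """Size of the union of schemata matched by any member of the population.
    Each evaluated string implicitly carries information about all 2^l
    schemata it instantiates -- the intuition behind implicit parallelism."""
    covered = set()
    for s in population:
        covered |= schemata_of(s, l)
    return len(covered)

random.seed(0)
l, n = 8, 16
pop = [tuple(random.randint(0, 1) for _ in range(l)) for _ in range(n)]
count = distinct_schemata(pop, l)   # bounded by min(n * 2**l, 3**l)
```

Even this tiny population samples thousands of the 3^l = 6561 possible schemata, which is the phenomenon whose growth rate the paper quantifies as a function of b.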
Self Adaptation in Evolutionary Algorithms
, 1998
Abstract

Cited by 12 (1 self)
Evolutionary Algorithms are search algorithms based on the Darwinian metaphor of “Natural Selection”. Typically these algorithms maintain a population of individual solutions, each of which has a fitness attached to it, which in some way reflects the quality of the solution. The search proceeds via the iterative generation, evaluation and possible incorporation of new individuals based on the current population, using a number of parameterised genetic operators. In this thesis the phenomenon of Self Adaptation of the genetic operators is investigated. A new framework for classifying adaptive algorithms is proposed, based on the scope of the adaptation, and on the nature of the transition function guiding the search through the space of possible configurations of the algorithm. Mechanisms are investigated for achieving the self adaptation of recombination and mutation operators within a genetic algorithm, and means of combining them are investigated. These are shown to produce significantly better results than any of the combinations of fixed operators tested, across a range of problem types. These new operators reduce the need for the designer of an algorithm to select
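One classic mechanism of the kind investigated here is log-normal self-adaptation of a per-individual mutation step size, as in evolution strategies. The sketch below is a minimal illustration under that assumption; all parameter values are chosen for the example, not taken from the thesis.

```python
import math, random

def self_adaptive_es(dim=5, mu=5, lam=20, generations=60, seed=1):
    """Minimal (mu, lambda) evolution strategy in which each individual
    carries its own mutation step size sigma, itself mutated log-normally
    each generation (Schwefel-style self-adaptation).  Minimises the sphere
    function; returns the best fitness found."""
    rng = random.Random(seed)
    tau = 1.0 / math.sqrt(2.0 * dim)                   # learning rate
    sphere = lambda x: sum(v * v for v in x)           # fitness: minimise
    pop = [([rng.uniform(-5, 5) for _ in range(dim)], 1.0) for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            x, sigma = rng.choice(pop)
            sigma2 = sigma * math.exp(tau * rng.gauss(0, 1))  # adapt sigma first
            x2 = [v + sigma2 * rng.gauss(0, 1) for v in x]    # then mutate x
            offspring.append((x2, sigma2))
        # comma selection: parents are discarded, best mu offspring survive
        pop = sorted(offspring, key=lambda ind: sphere(ind[0]))[:mu]
    return sphere(pop[0][0])

best = self_adaptive_es()
```

The point of the mechanism is that the step size is selected indirectly: individuals whose sigma suits the current region of the search space tend to produce fitter offspring, so no external schedule for the mutation parameter is needed.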
Genetic Algorithm in Search and Optimization: The Technique and Applications
 Proc. of Int. Workshop on Soft Computing and Intelligent Systems
, 1997
Abstract

Cited by 5 (0 self)
A genetic algorithm (GA) is a search and optimization method developed by mimicking the evolutionary principles and chromosomal processing in natural genetics. A GA begins its search with a random set of solutions usually coded in binary string structures. Every solution is assigned a fitness which is directly related to the objective function of the search and optimization problem. Thereafter, the population of solutions is modified to a new population by applying three operators similar to natural genetic operators: reproduction, crossover, and mutation. A GA works iteratively by successively applying these three operators in each generation until a termination criterion is satisfied. Over the past decade, GAs have been successfully applied to a wide variety of problems, because of their simplicity, global perspective, and inherent parallel processing. In this paper, we outline the working principle of a GA by describing these three operators and by outlining an intuitive sketch ...
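The three-operator loop described above can be sketched in a few lines. The example below substitutes binary tournament selection for fitness-proportionate reproduction (a deliberate simplification on our part) and runs on the OneMax problem; all parameter values are illustrative.

```python
import random

def simple_ga(l=20, pop_size=40, generations=60, pc=0.9, pm=0.02, seed=3):
    """Minimal generational GA on OneMax (fitness = number of 1-bits),
    showing the three operators in turn: selection (binary tournament,
    standing in for fitness-proportionate reproduction), one-point
    crossover with probability pc, and bitwise mutation with rate pm."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    pop = [[rng.randint(0, 1) for _ in range(l)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # reproduction: pick each parent as the better of two random picks
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            c1, c2 = p1[:], p2[:]
            if rng.random() < pc:                      # one-point crossover
                cut = rng.randrange(1, l)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for c in (c1, c2):                         # bitwise mutation
                for i in range(l):
                    if rng.random() < pm:
                        c[i] ^= 1
            new_pop += [c1, c2]
        pop = new_pop[:pop_size]
    return max(map(fitness, pop))

best = simple_ga()
```

Each generation applies the three operators to the whole population at once, which is the "inherent parallel processing" the abstract refers to.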
Applying Genetic Algorithms to Extract Workload Classes
 CMG’94 International Conference on Management and Performance Evaluation of Computer Systems
, 1994
Abstract

Cited by 2 (0 self)
It is often desirable to predict how a computer system will perform given changes to the system. Systems administrators are frequently faced with answering questions such as, "How will the throughput of the system be affected if the number of users increases by 50% and if the CPU is upgraded by adding a floating point processing unit?" System models (e.g., analytical, simulation) can be used to help answer such questions. As one example, a closed multiclass queueing network model may be constructed of the system. This model consists of a network of servers and a set of customer classes. Each customer class is defined by the number of customers (i.e., jobs) in the class and by the demands that each customer in the class places on each of the servers. If the underlying model and assumptions are accurate, it is possible to predict how changes to the system will affect performance. One typical assumption of queueing models is that the time spent at a server by a class is exponentially dis...
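The closed queueing network described above can be solved exactly, in the single-class case, by mean-value analysis. The sketch below is a deliberately simplified single-class version of the multiclass model the abstract mentions; the service demands and think time are made-up numbers.

```python
def mva_single_class(demands, n_customers, think_time=0.0):
    """Exact mean-value analysis for a closed, single-class, product-form
    queueing network: demands[k] is the total service demand (in seconds)
    one customer places on server k.  Returns (throughput, mean queue
    lengths), computed by recursing on the customer count."""
    q = [0.0] * len(demands)          # mean queue lengths with 0 customers
    x = 0.0
    for n in range(1, n_customers + 1):
        # residence time at server k: demand inflated by the queue an
        # arriving customer sees (arrival theorem)
        r = [d * (1.0 + qk) for d, qk in zip(demands, q)]
        x = n / (think_time + sum(r))  # system throughput (Little's law)
        q = [x * rk for rk in r]       # updated mean queue lengths
    return x, q

# Hypothetical system: CPU demand 0.05 s, disk demand 0.03 s,
# 20 users, 1 s think time.
throughput, queues = mva_single_class([0.05, 0.03], n_customers=20,
                                      think_time=1.0)
```

A "what if the number of users increases by 50%?" question then becomes a second call with `n_customers=30`; upgrading the CPU becomes a smaller first demand. The exponential-service assumption noted in the abstract is what makes this exact recursion valid.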