Results 1–10 of 10
Population-Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization and Competitive Learning
, 1994
Abstract
Cited by 352 (12 self)
Genetic algorithms (GAs) are biologically motivated adaptive systems which have been used, with varying degrees of success, for function optimization. In this study, an abstraction of the basic genetic algorithm, the Equilibrium Genetic Algorithm (EGA), and the GA in turn, are reconsidered within the framework of competitive learning. This new perspective reveals a number of different possibilities for performance improvements. This paper explores population-based incremental learning (PBIL), a method of combining the mechanisms of a generational genetic algorithm with simple competitive learning. The combination of these two methods yields a tool which is far simpler than a GA, and which outperforms a GA on a large set of optimization problems in terms of both speed and accuracy. This paper presents an empirical analysis of where the proposed technique will outperform genetic algorithms, and describes a class of problems in which a genetic algorithm may be able to perform better. Extensions to this algorithm are discussed and analyzed. PBIL and its extensions are compared with a standard GA on twelve problems, including standard numerical optimization functions, traditional GA test suite problems, and NP-Complete problems.
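The core PBIL loop described in this abstract is compact enough to sketch directly. The following is a minimal illustrative implementation; the function name, parameter defaults, and the update-toward-the-single-best rule are our own simplifications, not taken from the paper:

```python
import random

def pbil(fitness, n_bits, pop_size=100, generations=200, learning_rate=0.1):
    """Minimal PBIL sketch: a single probability vector stands in for the
    population. Each generation, bit strings are sampled from the vector,
    and the vector is nudged toward the best sample (the competitive-
    learning style update)."""
    prob = [0.5] * n_bits                      # start unbiased
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        samples = [[1 if random.random() < p else 0 for p in prob]
                   for _ in range(pop_size)]
        winner = max(samples, key=fitness)
        f = fitness(winner)
        if f > best_fit:
            best, best_fit = winner, f
        # move each probability toward the corresponding bit of the winner
        prob = [(1 - learning_rate) * p + learning_rate * bit
                for p, bit in zip(prob, winner)]
    return best, best_fit

# Example: OneMax, i.e. maximize the number of 1 bits
random.seed(0)
solution, value = pbil(sum, n_bits=20)
```

On OneMax the probability vector quickly saturates toward all ones, which illustrates how PBIL replaces an explicit population with a single learned distribution.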
A Sequential Niche Technique for Multimodal Function Optimization
 EVOLUTIONARY COMPUTATION
, 1993
Abstract
Cited by 158 (2 self)
A technique is described which allows unimodal function optimization methods to be extended to efficiently locate all optima of multimodal problems. We describe an algorithm based on a traditional genetic algorithm (GA). This involves iterating the GA, but uses knowledge gained during one iteration to avoid re-searching, on subsequent iterations, regions of problem space where solutions have already been found. This is achieved by applying a fitness derating function to the raw fitness function, so that fitness values are depressed in the regions of the problem space where solutions have already been found. Consequently, the likelihood of discovering a new solution on each iteration is dramatically increased. The technique may be used with various styles of GA, or with other optimization methods, such as simulated annealing. The effectiveness of the algorithm is demonstrated on a number of multimodal test functions. The technique is at least as fast as fitness sharing methods. It provi...
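The derating idea can be sketched for a one-dimensional search space. This is an illustrative stand-in, assuming a non-negative raw fitness to be maximized and a power-law derating shape; the names and constants here are our own, not the paper's:

```python
import math

def derated_fitness(raw_fitness, found_optima, radius=0.1, power=1.0):
    """Sequential-niche sketch for a 1-D space: multiply the raw fitness
    by a derating factor that falls to zero at each previously found
    optimum, pushing later iterations elsewhere."""
    def fitness(x):
        f = raw_fitness(x)
        for s in found_optima:
            d = abs(x - s)
            if d < radius:
                f *= (d / radius) ** power  # 0 at the optimum, 1 at the niche edge
        return f
    return fitness

# Toy bimodal function on [0, 1] with peaks at x = 0.25 and x = 0.75
raw = lambda x: math.sin(2 * math.pi * x) ** 2
g = derated_fitness(raw, found_optima=[0.25])
```

After one iteration has located the peak at 0.25, the derated landscape is flat there, so a second run of the same unimodal optimizer is drawn to the remaining peak at 0.75.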
Population-Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization and Competitive Learning
, 1994
Abstract
Cited by 38 (0 self)
Genetic algorithms (GAs) are biologically motivated adaptive systems which have been used, with varying degrees of success, for function optimization. In this study, an abstraction of the basic genetic algorithm, the Equilibrium Genetic Algorithm (EGA), and the GA in turn, are reconsidered within the framework of competitive learning. This new perspective reveals a number of different possibilities for performance improvements. This paper explores population-based incremental learning (PBIL), a method of combining the mechanisms of a generational genetic algorithm with simple competitive learning. The combination of these two methods yields a tool which is far simpler than a GA, and which outperforms a GA on a large set of optimization problems in terms of both speed and accuracy. This paper presents an empirical analysis of where the proposed technique will outperform genetic algorithms, and describes a class of problems in which a genetic algorithm may be able to perform b...
Using Neural Networks and Genetic Algorithms as Heuristics for NP-Complete Problems
, 1983
Abstract
Cited by 18 (8 self)
Paradigms for using neural networks (NNs) and genetic algorithms (GAs) to heuristically solve Boolean satisfiability (SAT) problems are presented. Since SAT is NP-Complete, any other NP-Complete problem can be transformed into an equivalent SAT problem in polynomial time, and solved via either paradigm. This technique is illustrated for Hamiltonian circuit (HC) problems.
INTRODUCTION: NP-Complete problems are problems that are not currently solvable in polynomial time. However, they are polynomially equivalent in the sense that any NP-Complete problem can be transformed into any other in polynomial time. Thus, if any NP-Complete problem can be solved in polynomial time, they all can [Garey]. The canonical example of an NP-Complete problem is the Boolean satisfiability (SAT) problem: given an arbitrary Boolean expression of n variables, does there exist an assignment to those variables such that the expression is true? Other familiar examples include job shop scheduling, bin packing, a...
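To make the SAT framing concrete, here is a brute-force satisfiability check. This is purely illustrative and is not the paper's NN or GA heuristic; its exhaustive 2^n sweep is exactly the exponential cost such heuristics aim to avoid:

```python
from itertools import product

def satisfiable(expr, n_vars):
    """Exhaustive SAT check over all 2**n_vars assignments.
    `expr` is any Boolean function over a tuple of n_vars truth values.
    Returns a satisfying assignment, or None if none exists."""
    for assignment in product((False, True), repeat=n_vars):
        if expr(assignment):
            return assignment
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clause = lambda v: (v[0] or v[1]) and (not v[0] or v[2]) and (not v[1] or not v[2])
witness = satisfiable(clause, 3)
```

Any NP-Complete problem instance reduced to SAT in polynomial time could, in principle, be handed to a routine with this interface; the heuristics in the paper substitute guided search for the exhaustive loop.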
Genetic Algorithms, Problem Difficulty, and the Modality of Fitness Landscapes
, 1995
Abstract
Cited by 6 (0 self)
We generally assume that the modality (i.e., number of local optima) of a fitness landscape is related to the difficulty of finding the best point on that landscape by evolutionary computation (e.g., hill-climbers and genetic algorithms (GAs)). This thesis first examines the limits of modality by constructing a unimodal function and a maximally multimodal function. At such extremes our intuition breaks down. A fitness landscape consisting entirely of a single hill leading to the global optimum proves to be harder for hill-climbers than for GA crossover. A provably maximally multimodal function, in which half the points in the search space are local optima, can be easier than the unimodal, single hill problem for both hill-climbers and GAs. Exploring the more realistic intermediate range between the extremes of modality, local optima are constructed with varying degrees of "attraction" to evolutionary algorithms. To construct such optima, it is necessary first to define attraction for an a...
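As a toy illustration of how "half the points are local optima" can be formalized (this is a stand-in of our own, not the thesis's actual construction): under the one-bit-flip (Hamming) neighborhood, a parity function makes every even-parity string a strict local optimum.

```python
from itertools import product

def parity_landscape(bits):
    """Toy maximally multimodal landscape: even-parity strings score 1,
    odd-parity strings score 0. Flipping any single bit changes parity,
    so every even-parity string is a local optimum under the
    one-bit-flip neighborhood -- half of the entire search space."""
    return 1 - (sum(bits) % 2)

# enumerate the local optima of the 4-bit space
optima = [b for b in product((0, 1), repeat=4) if parity_landscape(b) == 1]
```

This kind of construction shows why counting optima alone is a poor difficulty measure: the landscape is maximally multimodal yet trivially flat on its plateau of optima.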
Improved Automatic Classification of Biological Particles From Electron-Microscopy Images Using Genetic Neural Nets
Abstract
In this paper several neural network classification algorithms have been applied to a real-world case of electron microscopy image classification in which the existence of two differentiated views of the same specimen was known a priori. Using several labeled sets as a reference, the parameters and architecture of the classifier (both LVQ-trained codebooks and BP-trained neural nets) were optimized using a genetic algorithm. The automatic process of training and optimization is implemented using a new version of the GLVQ (genetic learning vector quantization) and G-Prop algorithms, and compared to a non-optimized version of the algorithms, Kohonen's LVQ (learning vector quantization) and an MLP trained with QP. Dividing all the available samples into three sets, for training, testing and validation, the results presented here show a low average error for unknown samples. Usually G-Prop outperforms GLVQ, but GLVQ obtains codebooks with fewer parameters than the perceptrons obtained by G-Prop. The implications of this kind of automatic classification algorithm for the determination of the three-dimensional structure of biological particles are finally discussed.
A Note on the Hierarchical Nature of n-Parent Variation Operators in Evolutionary Algorithms
Abstract
Variation operators can be characterized by the probability mass that they associate with potential solutions from the state space of all possible solutions. Analysis is undertaken to show that the space of reachable probability mass functions is fundamentally hierarchical. The class of n-parent operators can generate a more diverse set of possible probabilistic searches of the state space than can be obtained by (n − 1)-parent operators, or even a succession of (n − 1)-parent operators. The result suggests that greater attention might usefully be applied to the exploration of multi-parent variation operators.
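A concrete n-parent operator makes the hierarchy claim tangible. The sketch below is our own illustration of n-parent uniform scanning crossover, not the paper's formalism: with three parents, a single application can mix genes from all three, which no single application of a 2-parent operator can do.

```python
import random

def n_parent_uniform_crossover(parents):
    """Each offspring gene is copied from one of the n parents chosen
    uniformly at random. For n > 2 this places probability mass on gene
    combinations (e.g. genes from three distinct parents) that no single
    2-parent recombination of the same pool can reach in one step."""
    length = len(parents[0])
    return [random.choice(parents)[i] for i in range(length)]

# three maximally distinct parents make the mixing visible
p1, p2, p3 = [0, 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2]
random.seed(3)
child = n_parent_uniform_crossover([p1, p2, p3])
```

Any offspring gene value identifies its source parent here, so a child containing all of 0, 1, and 2 is direct evidence of a three-parent mix.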
Genetic Algorithm Difficulty and the Modality of Fitness Landscapes
 Foundations of Genetic Algorithms 3
, 1994
Abstract
We assume that the modality (i.e., number of local optima) of a fitness landscape is related to the difficulty of finding the best point on that landscape by evolutionary computation (e.g., hill-climbers and genetic algorithms (GAs)). We first examine the limits of modality by constructing a unimodal function and a maximally multimodal function. At such extremes our intuition breaks down. A fitness landscape consisting entirely of a single hill leading to the global optimum proves to be harder for hill-climbers than GAs. A provably maximally multimodal function, in which half the points in the search space are local optima, can be easier than the unimodal, single hill problem for both hill-climbers and GAs. Exploring the more realistic intermediate range between the extremes of modality, we construct local optima with varying degrees of "attraction" to our evolutionary algorithms. Most work on optima and their basins of attraction has focused on hills and hill-climbers, while s...