Results 1–10 of 65
Niching Methods for Genetic Algorithms, 1995
Cited by 191 (1 self)
Abstract
Niching methods extend genetic algorithms to domains that require the location and maintenance of multiple solutions. Such domains include classification and machine learning, multimodal function optimization, multiobjective function optimization, and simulation of complex and adaptive systems. This study presents a comprehensive treatment of niching methods and the related topic of population diversity. Its purpose is to analyze existing niching methods and to design improved niching methods. To achieve this purpose, it first develops a general framework for the modelling of niching methods, and then applies this framework to construct models of individual niching methods, specifically crowding and sharing methods. Using a constructed model of crowding, this study determines why crowding methods over the last two decades have not made effective niching methods. A series of tests and design modifications results in the development of a highly effective form of crowding, called determin...
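The crowding variant the study develops (deterministic crowding) can be sketched as follows. This is an illustrative Python sketch, not the dissertation's code; the toy two-optimum fitness function, the operators, and all parameter values are assumptions:

```python
import random

def deterministic_crowding(pop, fitness, crossover, mutate, distance):
    """One generation of deterministic crowding: each child competes for
    survival only against its most similar parent, which preserves niches."""
    pop = pop[:]
    random.shuffle(pop)
    survivors = []
    for p1, p2 in zip(pop[::2], pop[1::2]):
        c1, c2 = crossover(p1, p2)
        c1, c2 = mutate(c1), mutate(c2)
        # Match each child with the nearer parent to localize competition.
        if distance(p1, c1) + distance(p2, c2) <= distance(p1, c2) + distance(p2, c1):
            matches = [(p1, c1), (p2, c2)]
        else:
            matches = [(p1, c2), (p2, c1)]
        for parent, child in matches:
            survivors.append(child if fitness(child) >= fitness(parent) else parent)
    return survivors

# Toy problem (assumed for the demo): 8-bit strings where both all-ones
# and all-zeros are optima, so a niching GA should retain both niches.
random.seed(0)
N = 8
fitness = lambda g: max(sum(g), N - sum(g))
distance = lambda a, b: sum(x != y for x, y in zip(a, b))  # Hamming distance
def crossover(a, b):
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]
mutate = lambda g: [bit ^ (random.random() < 0.05) for bit in g]

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(20)]
for _ in range(30):
    pop = deterministic_crowding(pop, fitness, crossover, mutate, distance)
best = max(fitness(g) for g in pop)
```

Because replacement is restricted to the nearest parent, the two basins compete mostly within themselves rather than against each other.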
Genetic Algorithms for Changing Environments
Parallel Problem Solving from Nature 2, 1992
Cited by 111 (3 self)
Abstract
Genetic algorithms perform an adaptive search by maintaining a population of candidate solutions that are allocated dynamically to promising regions of the search space. The distributed nature of the genetic search provides a natural source of power for searching in changing environments. As long as sufficient diversity remains in the population, the genetic algorithm can respond to a changing response surface by reallocating future trials. However, the tendency of genetic algorithms to converge rapidly reduces their ability to identify regions of the search space that might suddenly become more attractive as the environment changes. This paper presents a modification of the standard generational genetic algorithm that is designed to maintain the diversity required to track a changing response surface. An experimental study shows some promise for the new technique.
Genetic algorithms for tracking changing environments
Proceedings of the Fifth International Conference on Genetic Algorithms, 1993
Cited by 92 (0 self)
Abstract
In this paper, we explore the use of alternative mutation strategies as a means of increasing diversity so that the GA can track the optimum of a changing environment. The paper contrasts three different strategies: the standard GA using a constant level of mutation; a mechanism called Random Immigrants, which replaces part of the population each generation with randomly generated individuals; and an adaptive mechanism called Triggered Hypermutation, which increases the mutation rate whenever the time-averaged best performance degrades. The study examines each of these strategies in the context of several kinds of environmental change, including linear translation of the optimum, random movement of the optimum, and oscillation between two significantly different landscapes. These first results should lead to the development of a single mechanism that can work well in both stationary and nonstationary environments.
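The two diversity mechanisms contrasted with the standard GA can be sketched roughly as below; the replacement fraction, mutation rates, and averaging window are assumed illustrative values, not the paper's settings:

```python
import random

def random_immigrants(pop, new_individual, frac=0.3):
    """Random Immigrants (sketch): replace a fraction of the population
    with freshly generated random individuals each generation."""
    k = int(len(pop) * frac)
    for i in random.sample(range(len(pop)), k):
        pop[i] = new_individual()
    return pop

def triggered_mutation_rate(best_history, base=0.01, hyper=0.3, window=5):
    """Triggered Hypermutation (sketch): switch to a high mutation rate
    whenever the time-averaged best performance degrades."""
    if len(best_history) < 2 * window:
        return base
    recent = sum(best_history[-window:]) / window
    earlier = sum(best_history[-2 * window:-window]) / window
    return hyper if recent < earlier else base

random.seed(1)
pop = [[0] * 10 for _ in range(20)]
pop = random_immigrants(pop, lambda: [random.randint(0, 1) for _ in range(10)])
stable = triggered_mutation_rate([10.0] * 10)                # no degradation
degraded = triggered_mutation_rate([10.0] * 5 + [4.0] * 5)   # performance dropped
```

The trigger keeps the GA cheap in stationary phases and only pays for extra exploration when the tracked optimum has moved.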
Simultaneous design of membership functions and rule sets for fuzzy controllers using genetic algorithms
IEEE Transactions on Fuzzy Systems, 1995
Cited by 68 (1 self)
Abstract
This paper examines the applicability of genetic algorithms (GAs) to the simultaneous design of membership functions and rule sets for fuzzy logic controllers. Previous work using genetic algorithms has focused on the development of rule sets or high-performance membership functions; however, the interdependence between these two components suggests that a simultaneous design procedure would be a more appropriate methodology. When GAs have been used to develop both, it has been done serially, e.g., design the membership functions and then use them in the design of the rule set. This, however, means that the membership functions were optimized for the initial rule set and not for the rule set designed subsequently. GAs are fully capable of creating complete fuzzy controllers given the equations of motion of the system, eliminating the need for human input in the design loop. This new method has been applied to two problems, a cart controller and a truck controller. Beyond the development of these controllers, we also examine the design of a robust controller for the cart problem and its ability to overcome faulty rules.
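One way to picture the simultaneous encoding is a single chromosome carrying both the membership-function parameters and the rule consequents, so that crossover and mutation act on both at once. The triangular MF shape, the flat gene layout, and the weighted-average inference below are illustrative assumptions, not the paper's exact representation:

```python
def tri(x, center, width):
    """Triangular membership function value at x (assumed MF shape)."""
    return max(0.0, 1.0 - abs(x - center) / width)

def decode(chromosome, n_mfs):
    """Split one chromosome into (center, width) pairs for the input
    membership functions plus one crisp consequent per rule."""
    mfs = [(chromosome[2 * i], chromosome[2 * i + 1]) for i in range(n_mfs)]
    rules = chromosome[2 * n_mfs:]          # one consequent per MF/rule
    return mfs, rules

def controller_output(x, chromosome, n_mfs):
    """Weighted-average (Sugeno-style) inference over the decoded rules."""
    mfs, rules = decode(chromosome, n_mfs)
    weights = [tri(x, c, w) for c, w in mfs]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * r for w, r in zip(weights, rules)) / total

# A GA would evolve this flat gene vector directly: 3 MFs + 3 consequents.
chromosome = [-1.0, 1.0,  0.0, 1.0,  1.0, 1.0,  -5.0, 0.0, 5.0]
out = controller_output(0.0, chromosome, n_mfs=3)
```

Because a single fitness evaluation exercises the decoded controller as a whole, the GA optimizes the MFs and rules jointly rather than serially.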
Case-Based Initialization of Genetic Algorithms, 1993
Cited by 66 (6 self)
Abstract
In this paper, we introduce a case-based method of initializing genetic algorithms that are used to guide search in changing environments. This is incorporated in an anytime learning system. Anytime learning is a general approach to continuous learning in a changing environment. The agent's learning module continuously tests new strategies against a simulation model of the task environment, and dynamically updates the knowledge base used by the agent on the basis of the results. The execution module includes a monitor that can dynamically modify the simulation model based on its observations of the external environment; an update to the simulation model causes the learning system to restart learning. Previous work has shown that genetic algorithms provide an appropriate search mechanism for anytime learning. This paper extends the approach by including strategies that were learned under similar environmental conditions in the initial population of the genetic algorithm. Experiments s...
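A rough sketch of the initialization step, assuming cases are stored as (environment, strategy) pairs and that a user-supplied similarity function ranks past environments against the current one; the fraction of case-seeded individuals is an assumed parameter:

```python
import random

def seed_population(case_base, current_env, similarity,
                    pop_size, random_individual, case_frac=0.5):
    """Case-based initialization (sketch): fill part of the GA's initial
    population with strategies learned under the most similar past
    environments, and the remainder with random individuals."""
    k = min(int(pop_size * case_frac), len(case_base))
    ranked = sorted(case_base,
                    key=lambda case: similarity(case[0], current_env),
                    reverse=True)
    seeds = [strategy for _env, strategy in ranked[:k]]
    return seeds + [random_individual() for _ in range(pop_size - k)]

# Toy case base: environments are scalars, strategies are labels.
case_base = [(0.1, "slow"), (0.9, "fast"), (0.5, "medium")]
similarity = lambda a, b: -abs(a - b)
pop = seed_population(case_base, 0.85, similarity,
                      pop_size=6, random_individual=lambda: "random")
```

Mixing case-derived seeds with random individuals preserves diversity while giving the restarted GA a head start near previously successful strategies.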
Delta Coding: An Iterative Search Strategy for Genetic Algorithms
Proceedings of the Fourth International Conference on Genetic Algorithms, 1991
Cited by 32 (1 self)
Abstract
A new search strategy for genetic algorithms is introduced which allows iterative searches with complete reinitialization of the population while preserving the progress already made toward solving an optimization task. Delta coding is a simple search strategy based on the idea that the encoding used by a genetic algorithm can express a distance away from some previous partial solution. Delta values are added to a partial solution before evaluating the fitness; the delta encoding forms a new hypercube of equal or smaller size that is constructed around the most recent partial solution.
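The idea of encoding offsets from the previous best can be sketched as below; the inner GA is elided (random sampling of delta vectors stands in for it), and the shrink factor, sample counts, and sphere test function are assumed values:

```python
import random

def decode(best, deltas):
    """Delta coding (sketch): a chromosome stores signed offsets from the
    most recent best solution, centering the search hypercube on it."""
    return [b + d for b, d in zip(best, deltas)]

def delta_iteration(best, half_range, samples, evaluate):
    """One restart: re-initialize deltas in [-half_range, half_range] and
    keep the best decoded point (the zero delta keeps the incumbent)."""
    deltas = [[0.0] * len(best)] + [
        [random.uniform(-half_range, half_range) for _ in best]
        for _ in range(samples)
    ]
    return min((decode(best, d) for d in deltas), key=evaluate)

random.seed(2)
sphere = lambda x: sum(v * v for v in x)   # assumed minimization test function
best, half_range = [2.0, -3.0], 4.0
for _ in range(10):
    best = delta_iteration(best, half_range, 40, sphere)
    half_range *= 0.5                      # shrink the delta hypercube
```

Each restart re-centers a (possibly smaller) hypercube on the incumbent, so the population is fully reinitialized without discarding prior progress.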
A Micro-Genetic Algorithm for Multiobjective Optimization
Cited by 24 (0 self)
Abstract
In this paper, we propose a multiobjective optimization approach based on a micro-genetic algorithm (micro-GA), which is a genetic algorithm with a very small population (four individuals were used in our experiments) and a reinitialization process. We use three forms of elitism and a memory to generate the initial population of the micro-GA. Our approach is tested with several standard functions found in the specialized literature. The results obtained are very encouraging, since they show that this simple approach can produce an important portion of the Pareto front at a very low computational cost.
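The population-of-four-plus-reinitialization loop might look like this single-objective sketch; the multiobjective machinery, the three elitism forms, and the memory are omitted, and all parameter values and the demo objective are assumptions:

```python
import random

def micro_ga(evaluate, n_genes, pop_size=4, cycles=100, inner_gens=10):
    """Micro-GA sketch: evolve a tiny population to nominal convergence,
    carry the elite forward, and re-seed the rest at random."""
    rand = lambda: [random.random() for _ in range(n_genes)]
    best = rand()
    for _ in range(cycles):
        # Restart: the elite plus fresh random individuals.
        pop = [best] + [rand() for _ in range(pop_size - 1)]
        for _ in range(inner_gens):
            pop.sort(key=evaluate)            # minimization
            elite = pop[0]
            children = []
            for _ in range(pop_size - 1):
                a, b = random.sample(pop, 2)
                cut = random.randrange(1, n_genes)
                children.append(a[:cut] + b[cut:])
            # No mutation: diversity comes from the periodic reinitialization.
            pop = [elite] + children
        pop.sort(key=evaluate)
        best = min(best, pop[0], key=evaluate)
    return best

random.seed(3)
evaluate = lambda g: (g[0] - 0.5) ** 2 + (g[1] - 0.5) ** 2  # assumed objective
best = micro_ga(evaluate, n_genes=2)
```

With only four individuals each inner run converges almost immediately, so the outer restart loop, rather than mutation, supplies exploration.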
Understanding interactions among genetic algorithm parameters
Foundations of Genetic Algorithms 5, 1999
Cited by 24 (3 self)
Abstract
Genetic algorithms (GAs) are multidimensional and stochastic search methods involving complex interactions among their parameters. For the last two decades, researchers have been trying to understand the mechanics of GA parameter interactions using various techniques: careful 'functional' decomposition of parameter interactions, empirical studies, and Markov chain analysis. Although the complexities in these interactions are getting clearer with such analyses, it remains an open question for a newcomer to the field, or for a GA practitioner, what values of GA parameters (such as population size, choice of GA operators, operator probabilities, and others) to use in an arbitrary problem. In this paper, we investigate the performance of simple tripartite GAs on a number of simple to complex test problems from a practical standpoint. Since in a real-world situation the overall time to run a GA is more or less dominated by the time consumed by objective function evaluations, we compare different GAs for a fixed number of function evaluations. Based on probability calculations and simulation results, it is observed that for solving simple problems (unimodal or small-modality problems) the mutation operator plays an important role, although GAs with the crossover operator alone can also solve these problems. However, the two operators (when applied alone) have two different working zones for the population size. For complex problems involving massive multimodality and misleadingness (deception), the crossover operator is the key search operator. Based on these studies, it is recommended that, when in doubt, the use of the crossover operator with an adequate population size is a reliable approach.
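The fixed-evaluation comparison methodology can be sketched as below; the steady-state scheme, binary tournament selection, and the onemax test function are assumptions for illustration, not the paper's exact setup:

```python
import random

def run_ga(use_crossover, use_mutation, evaluate,
           n_bits=20, pop_size=20, budget=2000):
    """Run a GA variant under a fixed budget of objective-function
    evaluations, so variants are compared at equal computational cost."""
    evals = 0
    def fit(g):
        nonlocal evals
        evals += 1
        return evaluate(g)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    scores = [fit(g) for g in pop]
    best = max(scores)
    while evals < budget:
        def tournament():
            i, j = random.randrange(pop_size), random.randrange(pop_size)
            return pop[i] if scores[i] >= scores[j] else pop[j]
        a, b = tournament(), tournament()
        child = a[:]
        if use_crossover:
            cut = random.randrange(1, n_bits)
            child = a[:cut] + b[cut:]
        if use_mutation:
            child = [bit ^ (random.random() < 1.0 / n_bits) for bit in child]
        s = fit(child)
        worst = min(range(pop_size), key=lambda k: scores[k])
        if s >= scores[worst]:               # steady-state replacement
            pop[worst], scores[worst] = child, s
        best = max(best, s)
    return best

random.seed(4)
onemax = lambda g: sum(g)                    # assumed unimodal test problem
best_mutation = run_ga(False, True, onemax)
best_crossover = run_ga(True, False, onemax)
```

Counting evaluations inside `fit` guarantees every variant stops at exactly the same cost, which is the fair-comparison criterion the paper argues for.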
Combined Genetic Algorithm Optimization and Regularized Orthogonal Least Squares Learning for Radial Basis Function Networks, 1999
Cited by 24 (6 self)
Abstract
The paper presents a two-level learning method for radial basis function (RBF) networks. A regularized orthogonal least squares (ROLS) algorithm is employed at the lower level to construct RBF networks, while the two key learning parameters, the regularization parameter and the RBF width, are optimized using a genetic algorithm (GA) at the upper level. Nonlinear time series modeling and prediction is used as an example to demonstrate the effectiveness of this hierarchical learning approach.
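The two-level structure can be sketched with the lower level abstracted behind a callback; the `train_rols` stand-in, the real-valued GA operators, and the quadratic surrogate error surface in the demo are all assumptions rather than the paper's method:

```python
import random

def ga_tune(train_rols, pop_size=10, generations=20,
            lam_range=(1e-6, 1.0), width_range=(0.01, 10.0)):
    """Upper-level GA over (regularization parameter, RBF width); the
    lower-level ROLS network construction is abstracted as train_rols,
    which must return a validation error to minimize."""
    rand_ind = lambda: [random.uniform(*lam_range), random.uniform(*width_range)]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: train_rols(*p))
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            w = random.random()               # blend crossover
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            # Small multiplicative Gaussian mutation, kept positive.
            child = [max(1e-9, g * (1 + random.gauss(0, 0.1))) for g in child]
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda p: train_rols(*p))

random.seed(5)
# Hypothetical surrogate for the lower-level validation error.
train_rols = lambda lam, width: (lam - 0.01) ** 2 + (width - 2.0) ** 2
lam, width = ga_tune(train_rols)
```

Keeping only two genes at the upper level is what makes the hierarchy cheap: each GA evaluation pays for one deterministic ROLS construction rather than a full weight search.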
Multiobjective Optimization using a Micro-Genetic Algorithm
Cited by 22 (9 self)
Abstract
In this paper, we propose a micro-genetic algorithm with three forms of elitism for multiobjective optimization. We show how this relatively simple algorithm, coupled with an external file and a diversity approach based on geographical distribution, can efficiently generate the Pareto fronts of several difficult test functions (both constrained and unconstrained). A metric based on the average distance to the Pareto optimal set is used to compare our results against two evolutionary multiobjective optimization techniques recently proposed in the literature.
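The external file of nondominated solutions can be sketched as a simple dominance-filtered archive; minimization is assumed, and the geographical-distribution diversity mechanism is omitted:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """External file update (sketch): reject a dominated candidate;
    otherwise insert it and evict any members it dominates."""
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

# Feed a few two-objective points through the archive.
archive = []
for point in [(1, 5), (2, 2), (3, 1), (2, 4), (0, 6)]:
    archive = update_archive(archive, point)
```

The archive ends up holding exactly the mutually nondominated points, which is the approximation of the Pareto front that the algorithm reports.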