Results 1–7 of 7
1 Benchmark Functions for CEC’2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization
Cited by 8 (4 self)
Abstract
Evolutionary Algorithms (EAs) in their original forms are usually designed for locating a single global solution. These algorithms typically converge to a single solution because of the global selection scheme used. Nevertheless, many real-world problems are “multimodal” by nature, i.e., multiple satisfactory solutions exist. It may be desirable to locate many such satisfactory solutions so that a decision maker can choose the one most appropriate in his/her problem domain. Numerous techniques have been developed in the past for locating multiple optima (global or local). These techniques are commonly referred to as “niching” methods. A niching method can be incorporated into a standard EA to promote and maintain the formation of multiple stable subpopulations within a single population, with the aim of locating multiple globally optimal or suboptimal solutions. Many niching methods have ...
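The abstract above discusses niching in general terms; as one concrete illustration (not part of this benchmark suite), fitness sharing is a classic niching mechanism. The sketch below derates each individual's raw fitness by its niche count; `sigma_share` and `alpha` are illustrative parameters, not values from the paper.

```python
import numpy as np

def shared_fitness(pop, fitness, sigma_share=0.1, alpha=1.0):
    """Derate raw fitness by niche count (fitness sharing).

    pop: (n, d) array of candidate solutions; fitness: (n,) raw fitness
    values (maximization). sigma_share is the niche radius."""
    pop = np.asarray(pop, dtype=float)
    fitness = np.asarray(fitness, dtype=float)
    # pairwise Euclidean distances between all individuals
    d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    # triangular sharing function: counts neighbors inside the niche radius
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)  # each individual counts itself once
    return fitness / niche_count
```

Individuals in crowded niches receive a lower shared fitness, so selection pressure is redistributed toward under-explored peaks, which is what lets multiple stable subpopulations coexist.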
Cooperative Coevolution with Differential Grouping for Large Scale Optimization
Cited by 2 (2 self)
Abstract
Cooperative coevolution has been introduced into evolutionary algorithms with the aim of solving increasingly complex optimization problems through a divide-and-conquer paradigm. In theory, the idea of co-adapted subcomponents is desirable for solving large-scale optimization problems. However, in practice, without prior knowledge about the problem, it is not clear how the problem should be decomposed. In this paper we propose an automatic decomposition strategy called differential grouping that can uncover the underlying interaction structure of the decision variables and form subcomponents such that the interdependence between them is kept to a minimum. We show mathematically how such a decomposition strategy can be derived from a definition of partial separability. The empirical studies show that such near-optimal decomposition can greatly improve the solution quality on large-scale global optimization problems. Finally, we show how such an automated decomposition allows for a better approximation of the contribution of various subcomponents, leading to a more efficient assignment of the computational budget to various subcomponents.
Index Terms—cooperative coevolution, large-scale optimization, problem decomposition, non-separability, numerical optimization
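A minimal sketch of the pairwise interaction check underlying differential grouping as described above: two variables belong in the same subcomponent when the fitness change caused by perturbing one depends on the value of the other. This is an illustrative reconstruction, not the authors' reference implementation; `lb`, `ub`, and `eps` are assumed parameters.

```python
import numpy as np

def interacts(f, dim, i, j, lb=-1.0, ub=1.0, eps=1e-6):
    """Return True if decision variables i and j are non-separable under f.

    Compares the fitness change from the same perturbation of x_i at two
    different settings of x_j; if the deltas differ, the variables interact."""
    x = np.full(dim, lb)
    x_i = x.copy()
    x_i[i] = ub
    delta1 = f(x_i) - f(x)      # effect of moving x_i with x_j at lb
    y = x.copy()
    y[j] = (lb + ub) / 2.0      # move x_j to the midpoint
    y_i = y.copy()
    y_i[i] = ub
    delta2 = f(y_i) - f(y)      # effect of the same move of x_i
    return abs(delta1 - delta2) > eps
```

Running this check over all variable pairs yields the interaction structure from which near-independent subcomponents can be formed.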
Linkage Learning Using . . . Dependency Graph
, 2012
Abstract
The goal of linkage learning in genetic and evolutionary algorithms is to identify the interactions between variables of a problem. Knowing the linkage information helps search algorithms find the optimum solution efficiently and reliably in hard problems. This paper presents a simple approach to linkage learning based on graph theory. A graph is used as the structure to keep the pairwise dependencies between variables of the problem. We call this graph ‘the underlying dependency graph of the problem’. Then the maximum spanning tree (MST) of the dependency graph is found. It is shown that the MST contains all the necessary linkage if the dependency graph is built upon a large enough population. In this approach, pairwise dependencies calculated by a perturbation-based identification method are used as the variable dependencies. The proposed approach has the advantage of being capable of learning the linkage without the need for costly fit-to-data evaluations for model search. It is parameterless, and the algorithm description is simple and straightforward. The proposed technique is tested on several benchmark problems and is shown to be able to compete with similar approaches. Based on the experimental results, it can successfully find the linkage groups in a polynomial number of fitness evaluations.
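The MST step described above can be sketched with Prim's algorithm over a dense matrix of pairwise dependency scores. The weight matrix in the test is hypothetical, standing in for the perturbation-based dependency measures the paper actually computes.

```python
def maximum_spanning_tree(weights):
    """Prim's algorithm for a maximum spanning tree on a dense, symmetric
    matrix of pairwise dependency strengths; returns the tree edges."""
    n = len(weights)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # pick the heaviest edge crossing from the tree to the remaining nodes
        i, j = max(((a, b) for a in in_tree for b in range(n)
                    if b not in in_tree),
                   key=lambda e: weights[e[0]][e[1]])
        edges.append((i, j))
        in_tree.add(j)
    return edges
```

The edges of the resulting tree connect the most strongly dependent variable pairs, which is where the claimed linkage information lives.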
The Use of Explicit Building Blocks in Evolutionary Computation
Abstract
This paper proposes a new algorithm to identify and compose building blocks. Building blocks are interpreted as common subsequences between good individuals. The proposed algorithm can extract building blocks from a population explicitly. Explicit building blocks are identified from shared alleles among multiple chromosomes. These building blocks are stored in an archive and recombined to generate offspring. Additively decomposable problems and hierarchically decomposable problems are used to validate the algorithm. The results are compared with the Bayesian Optimization Algorithm, the Hierarchical Bayesian Optimization Algorithm, and the Chi-square Matrix. The proposed algorithm is simple, effective, and fast. The experimental results confirm that building block identification is an important process that guides the recombination procedure to improve the solutions. In addition, the method efficiently solves hard problems.
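As a toy illustration of the "shared alleles among good individuals" idea (a simplification, not the paper's full archive-and-recombine mechanism), the sketch below collects the (position, allele) pairs on which the top-k fittest chromosomes agree:

```python
def extract_building_blocks(population, fitness, top_k=3):
    """Collect the (position, allele) pairs shared by the top_k fittest
    chromosomes; population is a list of equal-length allele strings."""
    ranked = sorted(zip(population, fitness), key=lambda p: p[1], reverse=True)
    elite = [ind for ind, _ in ranked[:top_k]]
    # keep only positions where every elite chromosome carries the same allele
    return {(pos, allele) for pos, allele in enumerate(elite[0])
            if all(ind[pos] == allele for ind in elite)}
```

In a full algorithm such shared fragments would be archived and spliced into offspring during recombination, rather than merely reported.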
Center for Connected Learning and Computer-Based Modeling
Abstract
Lightweight agents distributed in space have the potential to solve many complex problems. In this paper, we examine a model where agents represent individuals in a genetic algorithm (GA) solving a shared problem. We examine two questions: (1) How does the network density of connections between agents affect the performance of the system? (2) How does the interaction topology affect the performance of the system? In our model, agents exist in either a random network topology with long-distance communication, or a location-based topology, where agents only communicate with near neighbors. We examine both fixed and dynamic networks. Within the context of our investigation, our initial results indicate that relatively low network density achieves the same results as a panmictic, or fully connected, population. Additionally, we find that dynamic networks outperform fixed networks, and that random network topologies outperform proximity-based network topologies. We conclude by showing how this model can be useful not only for multi-agent learning, but also for genetic algorithms, agent-based simulation, and models of diffusion of innovation.
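The two interaction topologies compared above can be sketched as neighbor maps. `density` and `radius` are illustrative parameters, and agent positions are reduced to points on a line for simplicity; the paper's model is richer than this.

```python
import random

def random_network(n, density, rng):
    """Random topology: each undirected pair of agents is linked with
    probability `density`, modelling long-distance communication."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < density:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def proximity_network(positions, radius):
    """Location-based topology: agents communicate only with neighbors
    within `radius` of their own position."""
    n = len(positions)
    return {i: {j for j in range(n)
                if j != i and abs(positions[i] - positions[j]) <= radius}
            for i in range(n)}
```

With `density=1.0` the random network degenerates into the panmictic (fully connected) population the abstract uses as its baseline, which makes the density sweep easy to set up.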