Results 11–20 of 28
Enhancing the Efficiency of the ECGA
Proceedings of the X Parallel Problem Solving From Nature (PPSN 2008), 2008
Abstract
Cited by 5 (3 self)
Evolutionary Algorithms are widely used search and optimization procedures. They have been successfully applied to several problems, and with proper care in the design process they can solve hard problems accurately, efficiently, and reliably. The proper design of the algorithm turns some problems from intractable to tractable. We can go even further, using efficiency enhancements to turn them from tractable to practical. In this paper we show preliminary results of two efficiency enhancements proposed for the Extended Compact Genetic Algorithm. First, a model-building enhancement was used to reduce the complexity of the process from O(n^3) to O(n^2), speeding up the algorithm by a factor of 1000 on a 4096-bit problem. Then, a local-search hybridization was used to reduce the population size by at least 32 times, reducing the memory and running time required by the algorithm. These results draw the first steps toward a competent and efficient Genetic Algorithm.
Towards billion bit optimization via efficient genetic algorithms
, 2007
Abstract
Cited by 4 (2 self)
This paper presents a highly efficient, fully parallelized implementation of the compact genetic algorithm (cGA) to solve very large-scale problems with millions to billions of variables. The paper presents principled results demonstrating the scalable solution of a difficult test function on instances of over a billion variables using a parallel implementation of the cGA. The problem addressed is a noisy, blind problem over a vector of binary decision variables. Noise equaling up to a tenth of the deterministic objective-function variance is added, making it difficult for simple hill-climbers to find the optimal solution. The compact GA, on the other hand, is able to find the optimum in the presence of noise quickly, reliably, and accurately, and the solution scalability follows known convergence theories. These results on noisy problems, together with other results on problems involving varying modularity, hierarchy, and overlap, foreshadow the routine solution of billion-variable problems across the landscape of search problems.
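The core of the cGA described above is small enough to sketch. The following is a minimal, single-threaded illustration on the noise-free OneMax function, not the parallel implementation from the paper; function names and parameters here are illustrative choices, not taken from the work itself.

```python
import random

def onemax(x):
    """Count of ones: the standard cGA/UMDA benchmark."""
    return sum(x)

def cga(n_bits=50, virtual_pop=100, seed=1):
    """Minimal compact GA: evolve a vector of per-bit probabilities
    instead of an explicit population."""
    rng = random.Random(seed)
    p = [0.5] * n_bits
    while not all(q in (0.0, 1.0) for q in p):
        # sample two individuals from the current probability vector
        a = [1 if rng.random() < q else 0 for q in p]
        b = [1 if rng.random() < q else 0 for q in p]
        winner, loser = (a, b) if onemax(a) >= onemax(b) else (b, a)
        # shift each disagreeing bit toward the winner by 1/virtual_pop
        for i in range(n_bits):
            if winner[i] != loser[i]:
                step = 1.0 / virtual_pop
                p[i] += step if winner[i] == 1 else -step
                p[i] = min(1.0, max(0.0, p[i]))
    return [int(q) for q in p]
```

The probability vector uses O(n) memory regardless of the simulated population size, which is what makes billion-variable runs conceivable at all.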
Let’s Get Ready to Rumble Redux: Crossover Versus Mutation Head to Head on Exponentially Scaled Problems
Abstract
Cited by 4 (1 self)
This paper analyzes the relative advantages of crossover and mutation on a class of deterministic and stochastic additively separable problems with substructures of non-uniform salience. The study assumes that the recombination and mutation operators have knowledge of the building blocks (BBs) and effectively exchange or search among competing BBs. Facetwise models of convergence time and population sizing are used to determine the scalability of each algorithm. The analysis shows that for deterministic exponentially scaled, additively separable problems, BB-wise mutation is more efficient than crossover, yielding a speedup of o(ℓ log ℓ), where ℓ is the problem size. For noisy exponentially scaled problems, the outcome depends on whether scaling or noise is dominant. When scaling dominates, mutation is more efficient than crossover, yielding a speedup of o(ℓ log ℓ). On the other hand, when noise dominates, crossover is more efficient than mutation, yielding a speedup of o(ℓ).
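The BB-wise mutation analyzed above searches among competing building blocks one partition at a time. A minimal sketch, assuming the BB partition is known to the operator (as the study does); the 4-bit trap fitness and all parameter names are illustrative, not from the paper.

```python
import itertools
import random

def trap4(u):
    """Deceptive 4-bit trap: global optimum at u=4, deceptive slope toward u=0."""
    return 4 if u == 4 else 3 - u

def fitness(x):
    """Concatenation of 4-bit traps: additively separable, order k=4."""
    return sum(trap4(sum(x[i:i + 4])) for i in range(0, len(x), 4))

def bb_wise_mutation(f, n_bbs=10, bb_size=4, seed=3):
    """Greedy BB-wise mutation: enumerate all 2^k settings of one building
    block at a time, keeping the best, with the BB partition given."""
    rng = random.Random(seed)
    sol = [rng.randint(0, 1) for _ in range(n_bbs * bb_size)]
    for b in range(n_bbs):
        lo, hi = b * bb_size, (b + 1) * bb_size
        best_f, best_bits = None, None
        for bits in itertools.product((0, 1), repeat=bb_size):
            cand = sol[:lo] + list(bits) + sol[hi:]
            fc = f(cand)
            if best_f is None or fc > best_f:
                best_f, best_bits = fc, list(bits)
        sol[lo:hi] = best_bits
    return sol
```

Deceptive traps defeat bit-wise hill climbing, but this BB-wise pass recovers the all-ones optimum in n_bbs · 2^k evaluations, since each block's best setting is independent of the others in an additively separable function.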
Empirical Analysis of Ideal Recombination on Random Decomposable Problems
Abstract
Cited by 2 (1 self)
This paper analyzes the behavior of a selectorecombinative genetic algorithm (GA) with an ideal crossover on a class of random additively decomposable problems (rADPs). Specifically, additively decomposable problems of order k whose subsolution fitnesses are sampled from the standard uniform distribution U[0, 1] are analyzed. The scalability of the selectorecombinative GA is investigated on 10,000 rADP instances. The validity of facetwise models in bounding the population size, run duration, and number of function evaluations required to solve the problems successfully is also verified. Finally, the easiest and most difficult rADP instances are investigated.
Fluctuating Crosstalk, Deterministic Noise, and GA
Abstract
This paper extends previous work showing how fluctuating crosstalk in a deterministic fitness function introduces noise into genetic algorithms. In that work, we modeled fluctuating crosstalk, or nonlinear interactions among building blocks, via higher-order Walsh coefficients. The fluctuating crosstalk behaved like exogenous noise and could be handled by increasing the population size and run duration. This behavior held until the strength of the crosstalk far exceeded the underlying fitness variance by an empirically observed factor. This paper extends that work by considering fluctuating-crosstalk effects on genetic algorithm scalability using smaller-order Walsh coefficients on two extremes of building-block scaling: uniformly scaled and exponentially scaled building blocks. Uniformly scaled building blocks prove to be more sensitive to fluctuating crosstalk than exponentially scaled building blocks in terms of function evaluations and run duration, but less sensitive in terms of population sizing for large building-block interactions. Our results also have implications for the relative performance of building-block-wise mutation over crossover.
Hierarchical Classification Problems Demand Effective Building Block Identification and Processing in LCSs
, 2004
Abstract
This paper introduces a class of hierarchically structured classification problems that call for effective building-block identification and processing in XCS and in learning classifier systems in general.
Initial-Population Bias in the Univariate . . .
, 2009
Abstract
This paper analyzes the effects of an initial-population bias on the performance of the univariate marginal distribution algorithm (UMDA). The analysis considers two test problems: (1) onemax and (2) noisy onemax. Theoretical models are provided and verified with experiments. Intuitively, biasing the initial population toward the global optimum should improve the performance of UMDA, whereas biasing it away from the global optimum should have the opposite effect. Both theoretical and experimental results confirm this intuition. The effects of mutation and sampling are also analyzed, and the performance of UMDA is compared to that of mutation-based hill climbing. While for deterministic onemax hill climbing is shown to deal with the initial bias very well, for noisy onemax its performance is poor regardless of the bias.
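The UMDA loop under analysis is compact enough to sketch. Below is a minimal, illustrative version on OneMax with a `bias` knob for the initial marginal probabilities; the parameter names and default values are my own, not the paper's.

```python
import random

def umda_onemax(n_bits=30, pop=200, sel=100, bias=0.5, max_gens=100, seed=0):
    """Minimal UMDA on OneMax. `bias` sets the initial marginal probability
    of sampling a 1 at every position (0.5 = unbiased initialization)."""
    rng = random.Random(seed)
    p = [bias] * n_bits
    for _ in range(max_gens):
        # sample a population from the current univariate model
        population = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
                      for _ in range(pop)]
        population.sort(key=sum, reverse=True)   # truncation selection
        selected = population[:sel]
        # re-estimate each marginal from the selected individuals
        p = [sum(ind[i] for ind in selected) / sel for i in range(n_bits)]
        if all(q in (0.0, 1.0) for q in p):
            break
    return p
```

Running this with `bias` above 0.5 converges in fewer generations than the unbiased case, matching the paper's intuition that a bias toward the optimum helps UMDA.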
Operators: a Probabilistic Model Building Approach
, 2005
Abstract
This paper presents an approach to combining competent crossover and mutation operators via probabilistic model building. Both operators are based on the probabilistic model-building procedure of the extended compact genetic algorithm (eCGA). The model-sampling procedure of eCGA, which mimics the behavior of an idealized recombination where the building blocks (BBs) are exchanged without disruption, is used as the competent crossover operator. On the other hand, a recently proposed BB-wise mutation operator, which uses the BB partition information to perform local search in the BB space, is used as the competent mutation operator. The resulting algorithm, called the hybrid extended compact genetic algorithm (heCGA), makes use of the problem-decomposition information for (1) effective recombination of BBs and (2) effective local search in the BB neighborhood. The proposed approach is tested on problems that combine the core of three well-known problem-difficulty dimensions: deception, scaling, and noise. The results show that, in the absence of domain knowledge, the hybrid approach is more robust than either single-operator-based approach.
Clustering and Mutual Information
Abstract
Genetic Algorithms are a class of metaheuristics with applications in several fields, including biology, engineering, and even the arts. However, simple Genetic Algorithms may suffer from exponential scalability on hard problems. Estimation of Distribution Algorithms, a special class of Genetic Algorithms, can build complex models of the interactions among variables in the problem, solving several otherwise intractable problems in tractable polynomial time. However, the model-building process can be computationally expensive, and efficiency enhancements are often necessary to make tractable problems practical. This paper presents a new model-building approach, called ClusterMI, inspired by both the Extended Compact Genetic Algorithm and the Dependency Structure Matrix Genetic Algorithm. The new approach has a more efficient model-building process, resulting in speedups of 10 times for moderate-size problems and potentially thousands of times for large problems. Moreover, the new approach may be easily extended to perform incremental evolution, eliminating the burden of representing the population explicitly.
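As a rough illustration of the mutual-information ingredient in this kind of model building (a sketch of the general idea only, not ClusterMI's actual procedure), the empirical pairwise MI over a binary population can be computed directly from joint and marginal frequencies:

```python
import math
from itertools import combinations

def pairwise_mutual_information(population):
    """Empirical mutual information (in bits) between every pair of binary
    variables in a population given as a list of equal-length 0/1 lists."""
    n = len(population)
    ell = len(population[0])
    ones = [sum(ind[i] for ind in population) for i in range(ell)]
    mi = {}
    for i, j in combinations(range(ell), 2):
        joint = [[0, 0], [0, 0]]              # joint counts over (x_i, x_j)
        for ind in population:
            joint[ind[i]][ind[j]] += 1
        total = 0.0
        for a in (0, 1):
            pa = (ones[i] if a else n - ones[i]) / n
            for b in (0, 1):
                pb = (ones[j] if b else n - ones[j]) / n
                pab = joint[a][b] / n
                if pab > 0 and pa > 0 and pb > 0:
                    total += pab * math.log2(pab / (pa * pb))
        mi[(i, j)] = total
    return mi
```

Strongly dependent variable pairs score high MI (a perfectly correlated fair-coin pair scores 1 bit) while independent pairs score near zero, which is what makes MI usable as a clustering signal for linkage discovery.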