Results 1-10 of 51
A Tutorial for Competent Memetic Algorithms: Model, Taxonomy, and Design Issues
IEEE Transactions on Evolutionary Computation, 2005
Enhancing differential evolution performance with local search for high dimensional function optimization
2005
Abstract
Cited by 15 (1 self)
In this paper, we proposed Fittest Individual Refinement (FIR), a crossover-based local search method for Differential Evolution (DE). The FIR scheme accelerates DE by enhancing its search capability through exploration of the neighborhood of the best solution in successive generations. The proposed memetic version of DE (augmented by FIR) is expected to obtain an acceptable solution with a lower number of evaluations, particularly for higher-dimensional functions. Using two different implementations, DEfirDE and DEfirSPX, we showed that the proposed FIR increases the convergence velocity of DE for well-known benchmark functions as well as improves the robustness of DE against variation of population. Experiments using a multimodal landscape generator showed our proposed algorithms consistently outperformed their parent algorithms. A performance comparison with reported results of well-known real-coded memetic algorithms is also presented.
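The scheme summarised in the abstract, a DE loop with an extra refinement step around the current best individual, can be sketched as follows. This is a minimal illustration only: the hyperparameters and the uniform-crossover refinement are assumptions, not the paper's actual DEfirDE/DEfirSPX operators.

```python
import random

def de_with_fir(f, bounds, pop_size=20, F=0.5, CR=0.9, gens=100, seed=0):
    """DE/rand/1/bin minimisation with a FIR-style step: each generation,
    the neighbourhood of the best solution is explored by crossing it
    with random population members (sketch only)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # standard DE mutation + binomial crossover
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
        # FIR-style refinement: explore the neighbourhood of the current
        # best by uniform crossover with random mates (illustrative).
        best = min(range(pop_size), key=fit.__getitem__)
        for _ in range(3):
            mate = pop[rng.randrange(pop_size)]
            child = [pop[best][j] if rng.random() < 0.5 else mate[j]
                     for j in range(dim)]
            fc = f(child)
            if fc < fit[best]:
                pop[best], fit[best] = child, fc
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

The refinement only ever replaces the best individual on improvement, so it can accelerate convergence without disrupting the base DE population.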
Review All Currently Known Publications On Approaches Which Solve the Moving Peaks Problem
Abstract
Cited by 7 (0 self)
Evolutionary algorithms (EA) have been by far the most common approach to the Moving Peaks problem – 9 of 19 reviewed papers are EA-based, applied first by the creator of the benchmark in [7]. This seminal paper solves the new benchmark problem using a memory-based approach, which divides the real-value-encoded population into two parts. One part is randomised after a change and stores individuals in the memory, while the other part is responsible for the search, for which it retrieves individuals from the repository. In a proposed alternative, the population is split into three islands, with all populations contributing to the memory while only one of the populations retrieves individuals from this memory. The individuals in the memory are counted as part of the population, which has a size of 100. Only one best individual is added to memory from time to time (in this case, every 10 generations), and several replacement strategies are explored. The two best-performing strategies are replacing the most similar individual and replacing the most similar individual if it is of worse fitness. Individuals are retrieved from memory after each change, which is detected through recalculation of the fitness of the solutions in memory. The experiments on the Moving Peaks problem, using five peaks, compare the performance of the implementa ...
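The memory mechanics described above (deposit the best individual periodically, replace the most similar stored entry, detect changes by re-evaluating stored solutions) can be sketched in a small helper class. The class name, capacity, and tolerance are illustrative assumptions, not the paper's exact design.

```python
import random

class SolutionMemory:
    """Sketch of a memory scheme for dynamic optimisation: stores a few
    good solutions, replaces the most similar entry on deposit, and
    detects landscape changes by re-evaluating stored solutions."""

    def __init__(self, capacity=5):
        self.capacity = capacity
        self.entries = []  # list of (solution, fitness) pairs

    def deposit(self, solution, fitness):
        if len(self.entries) < self.capacity:
            self.entries.append((list(solution), fitness))
            return
        # replace the most similar stored entry (squared Euclidean distance)
        def dist(entry):
            return sum((a - b) ** 2 for a, b in zip(entry[0], solution))
        i = min(range(len(self.entries)),
                key=lambda k: dist(self.entries[k]))
        self.entries[i] = (list(solution), fitness)

    def detect_change(self, f):
        # a change shows up as a stored solution whose fitness differs
        # when recalculated under the current landscape
        return any(abs(f(s) - fit) > 1e-12 for s, fit in self.entries)

    def retrieve(self, f):
        # re-evaluate entries under the new landscape and hand the
        # solutions back for re-injection into the search population
        self.entries = [(s, f(s)) for s, _ in self.entries]
        return [s for s, _ in self.entries]
```

After a detected change, the retrieved solutions would be re-injected into the searching part of the population, as the reviewed approach describes.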
A memetic algorithm with adaptive hill climbing strategy for dynamic optimization problems
Soft Computing, 2009
Abstract
Cited by 7 (2 self)
Dynamic optimization problems challenge traditional evolutionary algorithms seriously since they, once converged, cannot adapt quickly to environmental changes. This paper investigates the application of memetic algorithms, a class of hybrid evolutionary algorithms, to dynamic optimization problems. An adaptive hill climbing method is proposed as the local search technique in the framework of memetic algorithms, which combines the features of greedy crossover-based hill climbing and steepest mutation-based hill climbing. In order to address the convergence problem, two diversity-maintaining methods, called adaptive dual mapping and triggered random immigrants respectively, are also introduced into the proposed memetic algorithm for dynamic optimization problems. Based on a series of dynamic problems generated from several stationary benchmark problems, experiments are carried out to investigate the performance of the proposed memetic algorithm in comparison with some peer evolutionary algorithms. The experimental results show the efficiency of the proposed memetic algorithm in dynamic environments.
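The combination the abstract describes, a hill climber that adaptively mixes a greedy crossover-based move with a steepest mutation-based move, can be sketched on bit strings as below. The adaptation rule and probabilities are illustrative assumptions; the paper's actual AHC strategy differs in detail.

```python
import random

def adaptive_hill_climb(f, pop, steps=50, seed=0):
    """Maximising hill climber on bit strings that alternates between a
    crossover-based move (recombine the best with a random member) and a
    steepest mutation-based move (flip the single best bit), shifting
    probability toward whichever operator recently improved the solution.
    Sketch only; not the paper's exact AHC rules."""
    rng = random.Random(seed)
    p_xover = 0.5  # probability of choosing the crossover-based move
    best = max(pop, key=f)
    for _ in range(steps):
        if rng.random() < p_xover and len(pop) > 1:
            mate = rng.choice([x for x in pop if x is not best])
            cand = [b if rng.random() < 0.5 else m
                    for b, m in zip(best, mate)]
            used = "xover"
        else:
            # steepest mutation: try every single-bit flip, keep the best
            flips = []
            for i in range(len(best)):
                c = best[:]
                c[i] ^= 1
                flips.append(c)
            cand = max(flips, key=f)
            used = "mut"
        if f(cand) > f(best):
            best = cand
            # reward the operator that produced the improvement
            if used == "xover":
                p_xover = min(0.9, p_xover + 0.1)
            else:
                p_xover = max(0.1, p_xover - 0.1)
    return best
```

Because `best` is only replaced on strict improvement, the climber never loses ground, and the adaptive probability lets the two operators compete per problem.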
Local and global order 3/2 convergence of a surrogate evolutionary algorithm
In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2005), 2005
Abstract
Cited by 4 (4 self)
A Quasi-Monte-Carlo method based on the computation of a surrogate model of the fitness function is proposed, and its convergence at superlinear rate 3/2 is proved under rather mild assumptions on the fitness function – but assuming that the starting point lies within a small neighborhood of a global maximum. A memetic algorithm is then constructed that performs both a random exploration of the search space and the exploitation of the best-so-far points using the previous surrogate local algorithm, coupled through selection. Under the same mild hypotheses, the global convergence of the memetic algorithm, at the same 3/2 rate, is proved.
Applying Extremal Optimisation To Dynamic Optimisation Problems
Abstract
Cited by 4 (0 self)
Extremal Optimisation is a recently conceived addition to the range of stochastic solvers. Due to its deliberate lack of convergence behaviour, it can be expected to solve dynamic problems without having to be informed when a change occurs. Moreover, the severity of change does not seriously affect algorithm performance, allowing for unpredictable fluctuations without affecting the outcome. Experimental studies on three example problems confirmed this assumption but also raised some issues concerning the interaction of solver mechanisms with problem intricacies, a phenomenon shared by many algorithm implementations. Many of the problems used as benchmarks had not been solved with EO before. While EO is a very lightweight stochastic process, it is at its most effective when the choice of the next move is modelled skilfully according to the problem characteristics. Some insights into effective neighbourhood modelling were revealed during the experiments.
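The "deliberate lack of convergence" the abstract mentions comes from EO's core loop: components of a single solution are ranked by local fitness, a poorly performing one is picked via a power law over ranks and randomised, and only the best-so-far solution is recorded. A generic τ-EO sketch follows; the function names, τ value, and maximisation convention are illustrative assumptions, not tied to the paper.

```python
import random

def extremal_optimisation(comp_fitness, mutate, state, steps=1000,
                          tau=1.4, seed=0):
    """Generic tau-EO sketch (maximising): rank components worst-first by
    local fitness, select one with probability ~ rank^-tau, randomise it,
    and track only the best-so-far state. Because the working state never
    converges, the search keeps probing even after a landscape change."""
    rng = random.Random(seed)

    def total(s):
        return sum(comp_fitness(s, i) for i in range(len(s)))

    n = len(state)
    best, best_f = state[:], total(state)
    weights = [(k + 1) ** -tau for k in range(n)]  # rank 0 = worst
    for _ in range(steps):
        ranks = sorted(range(n), key=lambda i: comp_fitness(state, i))
        i = rng.choices(ranks, weights=weights)[0]
        state[i] = mutate(state[i], rng)  # unconditional replacement
        f = total(state)
        if f > best_f:
            best, best_f = state[:], f
    return best, best_f
```

Note there is no acceptance test on the working state, so no change-detection hook is needed: a shifted landscape simply changes which components rank worst.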
Pattern sequencing problems by clustering search
Lecture Notes in Artificial Intelligence Series, 2006
Abstract
Cited by 3 (0 self)
Modern search methods for optimization consider hybrid search metaheuristics to be those employing general optimizers working together with a problem-specific local search procedure. The hybridism comes from the balancing of global and local search procedures. A challenge in such algorithms is to discover efficient strategies to cover all the search space, applying local search only in actually promising search areas. This paper proposes Clustering Search (*CS): a generic way of combining search metaheuristics with clustering to detect promising search areas before applying local search procedures. The clustering process aims to gather similar information about the problem at hand into groups, maintaining a representative solution associated with this information. Two applications to combinatorial optimization are examined, showing the flexibility and competitiveness of the method.
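The *CS idea of triggering local search only in promising areas can be sketched as follows: solutions streamed from any metaheuristic (here just random sampling) are assigned to their nearest cluster centre, and a centre is refined only once its cluster has attracted enough assignments. The activation threshold, blend factor, and continuous encoding are illustrative assumptions; the paper targets combinatorial problems.

```python
import random

def clustering_search(f, sampler, local_search, n_clusters=3, iters=200,
                      activation=5, seed=0):
    """Minimising sketch of Clustering Search (*CS): cluster incoming
    solutions and run local_search on a cluster centre only when the
    cluster has received `activation` assignments, i.e. only in regions
    the underlying search keeps revisiting."""
    rng = random.Random(seed)
    centres = [sampler(rng) for _ in range(n_clusters)]
    counts = [0] * n_clusters
    best = min(centres, key=f)
    for _ in range(iters):
        x = sampler(rng)  # stand-in for a metaheuristic's next solution
        k = min(range(n_clusters),
                key=lambda j: sum((a - b) ** 2
                                  for a, b in zip(centres[j], x)))
        # drift the centre toward the newly assigned solution
        centres[k] = [c + 0.3 * (a - c) for c, a in zip(centres[k], x)]
        counts[k] += 1
        if counts[k] >= activation:  # promising area detected
            counts[k] = 0
            centres[k] = local_search(f, centres[k])
        if f(centres[k]) < f(best):
            best = centres[k]
    return best
```

The expensive `local_search` is thus budgeted by cluster density rather than applied to every visited solution, which is the balancing of global and local effort the abstract describes.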
A Fuzzy Association Rule-Based Classification Model for High-Dimensional Problems with Genetic Rule Selection and Lateral Tuning
IEEE Transactions on Fuzzy Systems, 2011
Abstract
Cited by 3 (1 self)
The inductive learning of fuzzy rule-based classification systems suffers from exponential growth of the fuzzy rule search space when the number of patterns and/or variables becomes high. This growth makes the learning process more difficult and, in most cases, leads to problems of scalability (in terms of the time and memory consumed) and/or complexity (with respect to the number of rules obtained and the number of variables included in each rule). In this work, we propose a fuzzy association rule-based classification method for high-dimensional problems based on three stages to obtain an accurate and compact fuzzy rule-based classifier with a low computational cost. This method limits the order of the associations in the association rule extraction and considers the use of subgroup discovery based on an Improved Weighted Relative Accuracy measure to pre-select the most interesting rules before a genetic post-processing process for rule selection and parameter tuning. The results obtained over twenty-six real-world datasets of different sizes and with different numbers of variables demonstrate the effectiveness of the proposed approach.