Results 1–10 of 327
Biogeography-based optimization
IEEE Transactions on Evolutionary Computation, 2008
Cited by 117 (28 self)
Abstract—We propose a novel variation of biogeography-based optimization (BBO), which is an evolutionary algorithm (EA) developed for global optimization. The new algorithm employs opposition-based learning (OBL) alongside BBO's migration rates to create oppositional BBO (OBBO). Additionally, a new opposition method named quasi-reflection is introduced. Quasi-reflection is based on opposite numbers theory, and we mathematically prove that it has the highest expected probability of being closer to the problem solution among all OBL methods. The oppositional algorithm is further revised by the addition of dynamic domain scaling and weighted reflection. Simulations have been performed to validate the performance of quasi-opposition, as well as a mathematical analysis for a single-dimensional problem. Empirical results demonstrate that, with the assistance of quasi-reflection, OBBO significantly outperforms BBO in terms of success rate and the number of fitness function evaluations required to find an optimal solution. Index Terms—Biogeography-based optimization (BBO), evolutionary algorithms, opposition-based learning, opposite numbers, quasi-opposite numbers, quasi-reflected numbers, probability.
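The opposition operators this abstract refers to can be sketched in a few lines. The following is an illustrative reading of opposite, quasi-opposite, and quasi-reflected numbers for a variable x in [a, b] with interval centre c = (a + b)/2; the function names are our own, not the paper's.

```python
import random

def opposite(x, a, b):
    """Opposite point of x in [a, b] (opposition-based learning)."""
    return a + b - x

def quasi_opposite(x, a, b):
    """Uniform sample between the interval centre and the opposite of x."""
    c = (a + b) / 2.0
    xo = opposite(x, a, b)
    lo, hi = min(c, xo), max(c, xo)
    return random.uniform(lo, hi)

def quasi_reflected(x, a, b):
    """Uniform sample between the interval centre and x itself."""
    c = (a + b) / 2.0
    lo, hi = min(c, x), max(c, x)
    return random.uniform(lo, hi)
```

Under this reading, the quasi-reflected point always lies on the same side of the centre as the candidate itself, which is consistent with the paper's claim that it is the opposition variant expected to land closest to the optimum.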
Quantum-inspired Evolutionary Algorithm for a Class of Combinatorial Optimization
IEEE Transactions on Evolutionary Computation, 2002
Cited by 110 (7 self)
This paper proposes a novel evolutionary algorithm inspired by quantum computing, called a quantum-inspired evolutionary algorithm (QEA), which is based on the concepts and principles of quantum computing, such as the quantum bit and the superposition of states. Like other evolutionary algorithms, QEA is also characterized by the representation of the individual, the evaluation function, and the population dynamics. However, instead of binary, numeric, or symbolic representation, QEA uses a Q-bit, defined as the smallest unit of information, for the probabilistic representation, and a Q-bit individual as a string of Q-bits. A Q-gate is introduced as a variation operator to drive the individuals toward better solutions. To demonstrate its effectiveness and applicability, experiments are carried out on the knapsack problem, a well-known combinatorial optimization problem. The results show that QEA performs well, even with a small population, without premature convergence, as compared to the conventional genetic algorithm.
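The Q-bit representation is easy to illustrate. This sketch assumes the standard formulation in which each Q-bit is a pair (α, β) with α² + β² = 1 and bit i collapses to 1 with probability β²; the function names are illustrative, not taken from the paper.

```python
import math
import random

def observe(qbit_individual):
    """Collapse a string of Q-bits (alpha, beta pairs) into a binary string.

    Each Q-bit satisfies alpha^2 + beta^2 = 1; bit i is observed as 1
    with probability beta_i^2.
    """
    return [1 if random.random() < beta ** 2 else 0
            for (alpha, beta) in qbit_individual]

# A Q-bit individual initialised to full superposition: every bit is
# equally likely to be observed as 0 or 1.
n = 8
individual = [(1 / math.sqrt(2), 1 / math.sqrt(2))] * n
bits = observe(individual)
```

A Q-gate would then rotate each (α, β) pair toward the angle of the best solution found so far, biasing future observations without fixing any bit outright.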
Differential evolution algorithm with strategy adaptation for global numerical optimization
IEEE Transactions on Evolutionary Computation, 2009
Cited by 107 (8 self)
Abstract—Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous space, which has been widely applied in many scientific and engineering fields. However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required in order to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process/evolution. The performance of the SaDE algorithm is extensively evaluated (using codes available from P. N. Suganthan) on a suite of 26 bound-constrained numerical optimization problems and compares favorably with the conventional DE and several state-of-the-art parameter-adaptive DE variants. Index Terms—Differential evolution (DE), global numerical optimization, parameter adaptation, self-adaptation, strategy adaptation.
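The strategy self-adaptation idea can be sketched as a success-rate bookkeeping loop: count how often each generation strategy produces a trial vector that survives selection, then select strategies in proportion to their smoothed success rates. The names, the ε smoothing constant, and the roulette-wheel selection below are illustrative assumptions modeled on the mechanism the abstract describes, not SaDE's exact bookkeeping.

```python
import random

def update_probabilities(success, failure, eps=0.01):
    """Recompute strategy selection probabilities from success/failure counts.

    success[k] / failure[k]: how often strategy k produced a trial vector
    that entered (resp. failed to enter) the next generation during the
    learning period. eps keeps every strategy selectable.
    """
    rates = [s / (s + f) + eps if (s + f) > 0 else eps
             for s, f in zip(success, failure)]
    total = sum(rates)
    return [r / total for r in rates]

def pick_strategy(probs):
    """Roulette-wheel selection of a strategy index."""
    r, acc = random.random(), 0.0
    for k, p in enumerate(probs):
        acc += p
        if r < acc:
            return k
    return len(probs) - 1
```

A strategy that keeps producing surviving offspring thus gets picked more often, while the ε floor prevents any strategy from being frozen out entirely.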
Evolutionary Programming Using Mutations Based on the Lévy Probability Distribution
2004
Cited by 61 (8 self)
This paper studies evolutionary programming with mutations based on the Lévy probability distribution. The Lévy probability distribution has an infinite second moment and is, therefore, more likely to generate an offspring that is farther away from its parent than the commonly employed Gaussian mutation. Such likelihood depends on a parameter in the Lévy distribution. We propose an evolutionary programming algorithm using adaptive as well as non-adaptive Lévy mutations. The proposed algorithm was applied to multivariate functional optimization. Empirical evidence shows that, in the case of functions having many local optima, the performance of the proposed algorithm was better than that of classical evolutionary programming using Gaussian mutation.
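One standard way to draw the heavy-tailed steps such a mutation needs is Mantegna's algorithm for approximately Lévy-stable variates; the paper's own generation scheme may differ, so treat this as an illustrative sketch rather than the authors' method.

```python
import math
import random

def levy_step(alpha=1.5):
    """Draw one approximately Levy-stable step (Mantegna's algorithm).

    alpha in (1, 2): smaller alpha means heavier tails, hence occasional
    very long jumps; alpha near 2 approaches Gaussian-like behaviour.
    """
    sigma_u = (math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2)
               / (math.gamma((1 + alpha) / 2) * alpha
                  * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / alpha)

def levy_mutate(parent, alpha=1.5, scale=1.0):
    """Mutate each coordinate of a parent vector with an independent Levy step."""
    return [x + scale * levy_step(alpha) for x in parent]
```

The infinite second moment shows up in practice as rare, very large steps, which is precisely what helps the search escape local optima that trap Gaussian mutation.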
Adaptive Particle Swarm Optimization
2008
Cited by 55 (2 self)
This paper proposes an adaptive particle swarm optimization (APSO) with adaptive parameters and an elitist learning strategy (ELS) based on the evolutionary state estimation (ESE) approach. The ESE approach develops an ‘evolutionary factor’ by using the population distribution information and relative particle fitness information in each generation, and estimates the evolutionary state through a fuzzy classification method. According to the identified state, and taking into account the various effects of the algorithm-controlling parameters, adaptive control strategies are developed for the inertia weight and acceleration coefficients for faster convergence. Further, an adaptive ‘elitist learning strategy’ (ELS) is designed for the best particle to jump out of possible local optima and/or to refine its accuracy, resulting in substantially improved quality of global solutions. The APSO algorithm is tested on six unimodal and multimodal functions, and the experimental results demonstrate that APSO generally outperforms the compared PSOs in terms of solution accuracy, convergence speed, and algorithm reliability.
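The ‘evolutionary factor’ can be sketched from the population distribution information the abstract mentions: compare the mean inter-particle distance of the globally best particle against the population's minimum and maximum mean distances. The normalization below follows a common APSO formulation and is an assumption here, not a quote from the paper.

```python
import math

def evolutionary_factor(positions, best_index):
    """Evolutionary factor f in [0, 1] for APSO-style state estimation.

    d_i is the mean Euclidean distance of particle i to all other
    particles; f places the best particle's d on the [d_min, d_max]
    scale. Large f suggests exploration (best particle far from the
    pack), small f suggests convergence.
    """
    n = len(positions)

    def mean_dist(i):
        return sum(math.dist(positions[i], positions[j])
                   for j in range(n) if j != i) / (n - 1)

    d = [mean_dist(i) for i in range(n)]
    d_min, d_max = min(d), max(d)
    if d_max == d_min:  # all particles coincide
        return 0.0
    return (d[best_index] - d_min) / (d_max - d_min)
```

The fuzzy classifier then maps f to a state (exploration, exploitation, convergence, or jumping out), which drives the inertia-weight and acceleration-coefficient updates.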
Time-Series Forecasting Using Flexible Neural Tree Model
2004
Cited by 55 (21 self)
Time-series forecasting is an important research and application area. Much effort has been devoted over the past several decades to developing and improving time-series forecasting models. This paper introduces a new time-series forecasting model based on the flexible neural tree (FNT). The FNT model is generated initially as a flexible multi-layer feedforward neural network and evolved using an evolutionary procedure. Very often it is a difficult task to select the proper input variables or time-lags for constructing a time-series model. Our research demonstrates that the FNT model is capable of handling this task automatically. The performance and effectiveness of the proposed method are evaluated on time-series prediction problems and compared with those of related methods.
Fast evolutionary programming
Proceedings of the Fifth Annual Conference on Evolutionary Programming, 1996
Cited by 49 (4 self)
 Add to MetaCart
(Show Context)
Abstract—This paper presents a study of parallel evolutionary programming (EP). The paper is divided into two parts. The first part proposes a concept of parallel EP. Four numerical functions are used to compare the performance of the serial algorithm and the parallel algorithm. In the second part, we apply parallel EP to a more complicated problem: evolving neural networks. The results from this problem show that the parallel version is not only faster than the serial version, but also more reliably finds optimal solutions.
Two improved differential evolution schemes for faster global search
in Proc. ACM SIGEVO GECCO, 2005
Cited by 39 (9 self)
Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. In this paper we present two new, improved variants of DE. Performance comparisons of the two proposed methods are provided against (a) the original DE, (b) the canonical particle swarm optimization (PSO), and (c) two PSO variants. The new DE variants are shown to be statistically significantly better on a seven-function test bed for the following performance measures: solution quality, time to find the solution, frequency of finding the solution, and scalability.
Differential Evolution Using a Neighborhood-Based Mutation Operator
2009
Cited by 35 (8 self)
Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. It has reportedly outperformed a few evolutionary algorithms (EAs) and other search heuristics, such as particle swarm optimization (PSO), when tested over both benchmark and real-world problems. DE, however, is not completely free from the problems of slow and/or premature convergence. This paper describes a family of improved variants of the DE/target-to-best/1/bin scheme, which utilizes the concept of the neighborhood of each population member. The idea of small neighborhoods, defined over the index-graph of parameter vectors, draws inspiration from the community of PSO algorithms. The proposed schemes balance the exploration and exploitation abilities of DE without imposing serious additional burdens in terms of function evaluations. They are shown to be statistically significantly better than, or at least comparable to, several existing DE variants as well as a few other significant evolutionary computing techniques over a test suite of 24 benchmark functions. The paper also investigates the application of the new DE variants to two real-life problems concerning parameter estimation for frequency-modulated sound waves and spread-spectrum radar polyphase code design.
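The ring-neighborhood mutation idea can be sketched as a local variant of DE/target-to-best/1 in which the "best" vector is taken only from a small index-ring around the target. The parameter names (k, alpha, beta) and the exact donor blend below are illustrative assumptions; the published scheme also mixes in a global donor term, which this partial sketch omits.

```python
import random

def neighbourhood_best(fitness, i, k):
    """Index of the best (lowest-fitness) individual on a ring of radius k around i."""
    n = len(fitness)
    neighbours = [(i + off) % n for off in range(-k, k + 1)]
    return min(neighbours, key=lambda j: fitness[j])

def local_donor(pop, fitness, i, k=2, alpha=0.8, beta=0.8):
    """Local donor vector for target i, restricted to its ring neighbourhood.

    Assumes minimisation and a population large enough that the ring
    (excluding i) holds at least two distinct indices.
    """
    n, dim = len(pop), len(pop[0])
    nb = neighbourhood_best(fitness, i, k)
    ring = [(i + off) % n for off in range(-k, k + 1) if off != 0]
    p, q = random.sample(ring, 2)  # two distinct random neighbours
    return [pop[i][d]
            + alpha * (pop[nb][d] - pop[i][d])
            + beta * (pop[p][d] - pop[q][d])
            for d in range(dim)]
```

Because each donor only pulls toward a local best, information spreads slowly around the index ring, which is what trades a little convergence speed for resistance to premature convergence.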
Evolving Evolutionary Algorithms Using Linear Genetic Programming
 Evolutionary Computation
, 2005
Cited by 32 (7 self)
A new model for evolving Evolutionary Algorithms is proposed in this paper. The model is based on the Linear Genetic Programming (LGP) technique. Every LGP chromosome encodes an EA which is used for solving a particular problem. Several Evolutionary Algorithms for function optimization, the Traveling Salesman Problem, and the Quadratic Assignment Problem are evolved using the proposed model. Numerical experiments show that the evolved Evolutionary Algorithms perform similarly to, and sometimes even better than, standard approaches on several well-known benchmark problems.