Results 1–10 of 171
Biogeography-based optimization
 IEEE Transactions on Evolutionary Computation
, 2008
Abstract

Cited by 47 (15 self)
Abstract—We propose a novel variation of biogeography-based optimization (BBO), an evolutionary algorithm (EA) developed for global optimization. The new algorithm employs opposition-based learning (OBL) alongside BBO's migration rates to create oppositional BBO (OBBO). Additionally, a new opposition method named quasi-reflection is introduced. Quasi-reflection is based on opposite-numbers theory, and we mathematically prove that, among all OBL methods, it has the highest expected probability of being closer to the problem solution. The oppositional algorithm is further revised by the addition of dynamic domain scaling and weighted reflection. Simulations have been performed to validate the performance of quasi-opposition, along with a mathematical analysis for a one-dimensional problem. Empirical results demonstrate that, with the assistance of quasi-reflection, OBBO significantly outperforms BBO in terms of success rate and the number of fitness-function evaluations required to find an optimal solution. Index Terms—Biogeography-based optimization (BBO), evolutionary algorithms, opposition-based learning, opposite numbers, quasi-opposite numbers, quasi-reflected numbers, probability.
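The opposition constructions the abstract refers to can be sketched in a few lines. This is a minimal illustration, assuming the standard OBL definitions (opposite = reflection through the interval midpoint; quasi-opposite and quasi-reflected points drawn uniformly between the midpoint and the opposite or the point itself), not this paper's exact formulation:

```python
import random

def opposite(x, a, b):
    # Opposite number: the reflection of x through the midpoint of [a, b].
    return a + b - x

def quasi_opposite(x, a, b):
    # Uniformly random point between the interval centre and the opposite of x.
    c = (a + b) / 2.0
    xo = a + b - x
    return random.uniform(min(c, xo), max(c, xo))

def quasi_reflected(x, a, b):
    # Uniformly random point between the interval centre and x itself.
    c = (a + b) / 2.0
    return random.uniform(min(c, x), max(c, x))
```

In an oppositional EA such a point would be generated for each candidate, and the fitter of the pair retained; the variable names here are illustrative.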
Fast evolutionary programming
 Proceedings of the Fifth Annual Conference on Evolutionary Programming
, 1996
Abstract

Cited by 36 (4 self)
Abstract—This paper presents a study of parallel evolutionary programming (EP). The paper is divided into two parts. The first part proposes a concept of parallel EP. Four numerical functions are used to compare the performance of the serial algorithm and the parallel algorithm. In the second part, we apply parallel EP to a more complicated problem: an evolving-neural-networks problem. The results from this problem show that the parallel version is not only faster than the serial version but also more reliably finds optimal solutions.
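The core idea behind parallel EP — and why it speeds things up — is that fitness evaluation is embarrassingly parallel: each individual is scored independently. A rough sketch, with the `sphere` test function and the executor choice as illustrative assumptions rather than the paper's implementation:

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def sphere(x):
    # A common numerical test function: f(x) = sum of squared coordinates.
    return sum(xi * xi for xi in x)

def evaluate_population(population, executor_cls=ProcessPoolExecutor, workers=4):
    # Score every individual concurrently; the population is simply split
    # across workers because the evaluations do not interact.
    with executor_cls(max_workers=workers) as pool:
        return list(pool.map(sphere, population))
```

A process pool suits CPU-bound fitness functions; a thread pool avoids pickling constraints but gains little in CPython for pure-Python computation.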
Time-Series Forecasting Using Flexible Neural Tree Model
, 2004
Abstract

Cited by 33 (15 self)
Time-series forecasting is an important research and application area. Much effort has been devoted over the past several decades to developing and improving time-series forecasting models. This paper introduces a new time-series forecasting model based on the flexible neural tree (FNT). The FNT model is generated initially as a flexible multilayer feed-forward neural network and evolved using an evolutionary procedure. Selecting the proper input variables or time-lags for constructing a time-series model is often a difficult task; our research demonstrates that the FNT model is capable of handling this task automatically. The performance and effectiveness of the proposed method are evaluated on time-series prediction problems and compared with those of related methods.
Evolving Evolutionary Algorithms Using Multi Expression Programming
 Proceedings of the 7th European Conference on Artificial Life
, 2003
Abstract

Cited by 30 (18 self)
Finding the optimal parameter setting (i.e., the optimal population size, mutation probability, evolutionary model, etc.) for an evolutionary algorithm (EA) is a difficult task. Instead of evolving only the parameters of the algorithm, we evolve an entire EA capable of solving a particular problem. For this purpose the Multi Expression Programming (MEP) technique is used. Each MEP chromosome encodes multiple EAs. A non-generational EA for function optimization is evolved in this paper. Numerical experiments show the effectiveness of this approach.
Evolutionary Programming Using Mutations Based on the Lévy Probability Distribution
, 2004
Abstract

Cited by 29 (8 self)
This paper studies evolutionary programming with mutations based on the Lévy probability distribution. The Lévy probability distribution has an infinite second moment and is therefore more likely than the commonly employed Gaussian mutation to generate an offspring that is farther away from its parent. This likelihood depends on a parameter of the Lévy distribution. We propose an evolutionary programming algorithm using adaptive as well as non-adaptive Lévy mutations. The proposed algorithm was applied to multivariate functional optimization. Empirical evidence shows that, for functions having many local optima, the performance of the proposed algorithm was better than that of classical evolutionary programming using Gaussian mutation.
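The abstract's sampling scheme is not spelled out here; a widely used way to draw approximately Lévy-stable steps is Mantegna's algorithm, sketched below as an assumption rather than the paper's own method. The heavy tail is what produces the occasional long jumps that help escape local optima:

```python
import math
import random

def levy_step(alpha=1.5):
    # Mantegna's algorithm: an approximately Levy-stable step with
    # stability index alpha in (0, 2); smaller alpha means heavier tails.
    sigma_u = (math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2)
               / (math.gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))
               ) ** (1 / alpha)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / alpha)

def levy_mutate(parent, scale=0.1, alpha=1.5):
    # Each coordinate takes an independent heavy-tailed step, so most
    # offspring stay near the parent but a few jump far away.
    return [x + scale * levy_step(alpha) for x in parent]
```

Setting `alpha` close to 2 recovers near-Gaussian behaviour, which is the sense in which the abstract's adaptive variant can interpolate between mutation regimes.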
Evolving Evolutionary Algorithms Using Linear Genetic Programming
 Evolutionary Computation
, 2005
Abstract

Cited by 24 (4 self)
A new model for evolving evolutionary algorithms is proposed in this paper. The model is based on the Linear Genetic Programming (LGP) technique. Every LGP chromosome encodes an EA that is used for solving a particular problem. Several evolutionary algorithms for function optimization, the Traveling Salesman Problem, and the Quadratic Assignment Problem are evolved using the considered model. Numerical experiments show that the evolved evolutionary algorithms perform similarly to, and sometimes better than, standard approaches on several well-known benchmark problems.
An evolution strategy using a continuous version of the Gray-code neighbourhood distribution
 Lecture Notes in Computer Science, Proceedings of GECCO 2004
, 2004
Abstract

Cited by 20 (6 self)
Abstract. We derive a continuous probability distribution which generates neighbours of a point in an interval in a similar way to the bitwise mutation of a Gray-code binary string. This distribution has some interesting scale-free properties which are analogues of properties of the Gray-code neighbourhood structure. A simple (1+1)-ES using the new distribution is proposed and evaluated on a set of benchmark problems, on which it performs remarkably well. The critical parameter is the precision of the distribution, which corresponds to the string length in the discrete case. The algorithm is also tested on a difficult real-world problem from medical imaging, on which it also performs well. Some observations concerning the scale-free properties of the distribution are made, although further analysis is required to understand why this simple algorithm works so well.
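The paper's exact distribution is derived from the Gray-code neighbourhood; as a hypothetical stand-in, the sketch below draws step magnitudes log-uniformly over `precision` binary scales, mimicking how flipping bit k of a length-`precision` string moves the point by roughly width / 2^k. The (1+1)-ES loop itself is standard:

```python
import random

def scale_free_step(width, precision=32):
    # Hypothetical stand-in for the Gray-code-like distribution: the step
    # magnitude is log-uniform across `precision` binary scales, with a
    # random sign. `precision` plays the role of the string length.
    k = random.uniform(0, precision)
    return random.choice((-1, 1)) * width / 2 ** k

def one_plus_one_es(f, a, b, iters=2000, precision=32):
    # Minimise f on [a, b] with a (1+1)-ES: mutate the single parent and
    # keep the child only if it is at least as good.
    x = random.uniform(a, b)
    fx = f(x)
    for _ in range(iters):
        y = min(b, max(a, x + scale_free_step(b - a, precision)))
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
    return x, fx
```

Because every scale from the full interval down to width / 2^precision stays reachable, the search needs no explicit step-size adaptation — which is the scale-free property the abstract highlights.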
Two improved differential evolution schemes for faster global search
 in Proc. ACM SIGEVO GECCO
, 2005
Abstract

Cited by 18 (6 self)
Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. In this paper we present two new, improved variants of DE. Performance comparisons of the two proposed methods are provided against (a) the original DE, (b) canonical particle swarm optimization (PSO), and (c) two PSO variants. The new DE variants are shown to be statistically significantly better on a seven-function test bed for the following performance measures: solution quality, time to find the solution, frequency of finding the solution, and scalability.
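The paper's improved variants are not reproduced here, but the baseline they start from is the classic DE/rand/1/bin generation step, which can be sketched as follows (parameter names `f` for the differential weight and `cr` for the crossover rate follow common DE usage):

```python
import random

def de_step(pop, fitness, f=0.5, cr=0.9):
    # One generation of DE/rand/1/bin: for each target vector, build a
    # mutant from three distinct random peers, binomially cross it with
    # the target, and keep the better of trial and target (minimisation).
    dim = len(pop[0])
    new_pop = []
    for i, target in enumerate(pop):
        r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
        mutant = [pop[r1][d] + f * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
        j_rand = random.randrange(dim)  # guarantees at least one mutant gene
        trial = [mutant[d] if (random.random() < cr or d == j_rand) else target[d]
                 for d in range(dim)]
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop
```

The one-to-one survivor selection makes each individual's fitness monotone non-increasing, which is part of why DE is so simple to reason about.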
Adaptive Particle Swarm Optimization
, 2008
Abstract

Cited by 16 (0 self)
This paper proposes an adaptive particle swarm optimization (APSO) with adaptive parameters and an elitist learning strategy (ELS) based on the evolutionary state estimation (ESE) approach. The ESE approach develops an 'evolutionary factor' from the population distribution and relative particle fitness information in each generation, and estimates the evolutionary state through a fuzzy classification method. According to the identified state, and taking into account the various effects of the algorithm-controlling parameters, adaptive control strategies are developed for the inertia weight and acceleration coefficients to achieve faster convergence. Further, an adaptive elitist learning strategy is designed for the best particle to jump out of possible local optima and/or to refine its accuracy, resulting in substantially improved quality of global solutions. The APSO algorithm is tested on six unimodal and multimodal functions, and the experimental results demonstrate that APSO generally outperforms the compared PSOs in terms of solution accuracy, convergence speed, and algorithm reliability.
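The parameters APSO adapts — the inertia weight w and acceleration coefficients c1, c2 — sit in the canonical PSO update, sketched below for a single particle. The adaptation logic itself (ESE and ELS) is not reproduced; here w, c1, c2 are plain arguments:

```python
import random

def pso_update(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0):
    # Canonical PSO step: the new velocity blends inertia (w * v), a pull
    # toward the particle's personal best, and a pull toward the global
    # best, each cognitive/social term scaled by a fresh uniform random.
    new_v = [w * vi
             + c1 * random.random() * (pi - xi)
             + c2 * random.random() * (gi - xi)
             for xi, vi, pi, gi in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

Raising w favours exploration while lowering it favours exploitation, which is why state-dependent control of these three parameters can speed up convergence.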
Clonal selection algorithms: A comparative case study using effective mutation potentials
 in 4th International Conference on Artificial Immune Systems (ICARIS), LNCS 4163
, 2005
Abstract

Cited by 15 (7 self)
Abstract. This paper presents a comparative study of two important clonal selection algorithms (CSAs): CLONALG and opt-IA. To deeply understand the performance of both algorithms, we deal with four different classes of problems: toy problems (one-counting and trap functions), pattern recognition, numerical optimization problems, and an NP-complete problem (the 2D HP model for the protein structure prediction problem). Two possible versions of CLONALG have been implemented and tested. The experimental results show an overall better performance of opt-IA with respect to CLONALG. Considering the results obtained, we can claim that CSAs represent a new class of evolutionary algorithms for effectively performing search, learning, and optimization tasks.
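The clonal selection loop shared by algorithms like CLONALG can be sketched for a real-valued minimisation problem. The exponential decay of the mutation step with parent quality is one common "mutation potential" shape, assumed here for illustration — it is not necessarily the form the paper compares:

```python
import math
import random

def clonal_selection_step(pop, cost, n_clones=5, rho=2.0, scale=1.0):
    # One clonal-selection iteration: every candidate is cloned, clones are
    # mutated with a Gaussian step whose size decays exponentially with the
    # parent's normalised quality (better parents mutate less), and the best
    # of parent-plus-clones replaces the parent.
    costs = [cost(p) for p in pop]
    lo, hi = min(costs), max(costs)
    span = (hi - lo) or 1.0
    new_pop = []
    for p, c in zip(pop, costs):
        quality = (hi - c) / span            # 1.0 for the best, 0.0 for the worst
        step = scale * math.exp(-rho * quality)
        clones = [[x + random.gauss(0, step) for x in p] for _ in range(n_clones)]
        new_pop.append(min(clones + [p], key=cost))
    return new_pop
```

Keeping the parent in the survivor selection makes each slot's cost monotone non-increasing, mirroring the elitism typical of CSAs.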