Results 1–10 of 165
Biogeography-based optimization
 IEEE Transactions on Evolutionary Computation
, 2008
Cited by 46 (15 self)
Abstract—We propose a novel variation of biogeography-based optimization (BBO), an evolutionary algorithm (EA) developed for global optimization. The new algorithm employs opposition-based learning (OBL) alongside BBO's migration rates to create oppositional BBO (OBBO). Additionally, a new opposition method named quasi-reflection is introduced. Quasi-reflection is based on opposite-numbers theory, and we mathematically prove that it has the highest expected probability of being closer to the problem solution among all OBL methods. The oppositional algorithm is further revised by the addition of dynamic domain scaling and weighted reflection. Simulations have been performed to validate the performance of quasi-opposition, along with a mathematical analysis for a single-dimensional problem. Empirical results demonstrate that, with the assistance of quasi-reflection, OBBO significantly outperforms BBO in terms of success rate and the number of fitness-function evaluations required to find an optimal solution. Index Terms—Biogeography-based optimization (BBO), evolutionary algorithms, opposition-based learning, opposite numbers, quasi-opposite numbers, quasi-reflected numbers, probability.
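The opposite-number constructions this abstract builds on can be sketched as follows. This is a minimal illustration of the standard OBL definitions only (function names are ours, and the surrounding BBO migration machinery is omitted): the opposite of x in [a, b] is a + b − x, a quasi-opposite point is sampled between the interval centre and the opposite, and a quasi-reflected point is sampled between x itself and the centre.

```python
import random

def opposite(x, a, b):
    # Standard OBL opposite of x in the interval [a, b].
    return a + b - x

def quasi_opposite(x, a, b):
    # Uniform sample between the interval centre and the opposite point.
    c = (a + b) / 2.0
    lo, hi = sorted((c, opposite(x, a, b)))
    return random.uniform(lo, hi)

def quasi_reflected(x, a, b):
    # Uniform sample between the point itself and the interval centre;
    # the abstract's probabilistic argument favours this construction.
    c = (a + b) / 2.0
    lo, hi = sorted((c, x))
    return random.uniform(lo, hi)
```

For x = 2 in [0, 10], the opposite is 8, a quasi-opposite lies in [5, 8], and a quasi-reflected point lies in [2, 5].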
Time-Series Forecasting Using Flexible Neural Tree Model
, 2004
Cited by 33 (15 self)
Time-series forecasting is an important research and application area. Much effort has been devoted over the past several decades to developing and improving time-series forecasting models. This paper introduces a new time-series forecasting model based on the flexible neural tree (FNT). The FNT model is generated initially as a flexible multi-layer feed-forward neural network and evolved using an evolutionary procedure. It is often a difficult task to select the proper input variables or time-lags for constructing a time-series model. Our research demonstrates that the FNT model is capable of handling this task automatically. The performance and effectiveness of the proposed method are evaluated on time-series prediction problems and compared with those of related methods.
Evolving Evolutionary Algorithms Using Multi Expression Programming
 Proceedings of the 7th European Conference on Artificial Life
, 2003
Cited by 30 (18 self)
Finding the optimal parameter setting (i.e., the optimal population size, mutation probability, evolutionary model, etc.) for an Evolutionary Algorithm (EA) is a difficult task. Instead of evolving only the parameters of the algorithm, we evolve an entire EA capable of solving a particular problem. For this purpose, the Multi Expression Programming (MEP) technique is used. Each MEP chromosome encodes multiple EAs. A non-generational EA for function optimization is evolved in this paper. Numerical experiments show the effectiveness of this approach.
Evolutionary Programming Using Mutations Based on the Lévy Probability Distribution
, 2004
Cited by 29 (8 self)
This paper studies evolutionary programming with mutations based on the Lévy probability distribution. The Lévy probability distribution has an infinite second moment and is, therefore, more likely to generate an offspring that is farther away from its parent than the commonly employed Gaussian mutation. Such likelihood depends on a parameter in the Lévy distribution. We propose an evolutionary programming algorithm using adaptive as well as non-adaptive Lévy mutations. The proposed algorithm was applied to multivariate functional optimization. Empirical evidence shows that, in the case of functions having many local optima, the performance of the proposed algorithm was better than that of classical evolutionary programming using Gaussian mutation.
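Lévy-distributed mutation steps can be sketched with Mantegna's algorithm, one common way to approximate draws from a symmetric Lévy-stable distribution with index alpha. This is an assumption on our part: the paper's exact sampling scheme and its adaptive variant are not reproduced here, and the function names and the `scale` parameter are ours.

```python
import math
import random

def levy_step(alpha=1.5):
    # Mantegna's algorithm: approximate draw from a symmetric
    # Levy-stable distribution with index alpha (0 < alpha <= 2).
    num = math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2)
    den = math.gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2)
    sigma = (num / den) ** (1 / alpha)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / alpha)

def levy_mutate(parent, scale=0.1, alpha=1.5):
    # Mutate each coordinate with an independent Levy step; smaller
    # alpha produces heavier tails, i.e. occasional long jumps that
    # help escape local optima (the paper's central point).
    return [x + scale * levy_step(alpha) for x in parent]
```

With alpha close to 2 the steps are nearly Gaussian, so the classical EP mutation is recovered as a limiting case.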
Evolving Evolutionary Algorithms Using Linear Genetic Programming
 Evolutionary Computation
, 2005
Cited by 24 (4 self)
A new model for evolving Evolutionary Algorithms is proposed in this paper. The model is based on the Linear Genetic Programming (LGP) technique. Every LGP chromosome encodes an EA which is used for solving a particular problem. Several Evolutionary Algorithms for function optimization, the Traveling Salesman Problem, and the Quadratic Assignment Problem are evolved using the considered model. Numerical experiments show that the evolved Evolutionary Algorithms perform similarly to, and sometimes even better than, standard approaches on several well-known benchmark problems.
An evolution strategy using a continuous version of the Gray-code neighbourhood distribution
 Lecture Notes in Computer Science, proceedings of GECCO 2004
, 2004
Cited by 20 (6 self)
We derive a continuous probability distribution which generates neighbours of a point in an interval in a similar way to the bitwise mutation of a Gray-code binary string. This distribution has some interesting scale-free properties which are analogues of properties of the Gray-code neighbourhood structure. A simple (1+1)-ES using the new distribution is proposed and evaluated on a set of benchmark problems, on which it performs remarkably well. The critical parameter is the precision of the distribution, which corresponds to the string length in the discrete case. The algorithm is also tested on a difficult real-world problem from medical imaging, on which it also performs well. Some observations concerning the scale-free properties of the distribution are made, although further analysis is required to understand why this simple algorithm works so well.
Two improved differential evolution schemes for faster global search
 in Proc. ACM SIGEVO GECCO
, 2005
Cited by 18 (6 self)
Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. In this paper we present two new, improved variants of DE. Performance comparisons of the two proposed methods are provided against (a) the original DE, (b) the canonical particle swarm optimization (PSO), and (c) two PSO variants. The new DE variants are shown to be statistically significantly better on a seven-function test bed for the following performance measures: solution quality, time to find the solution, frequency of finding the solution, and scalability.
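The baseline these variants improve on, classic DE/rand/1/bin, can be sketched in a few lines. This is a sketch of the standard scheme only, not of the paper's two proposed variants; the function name and the default F and CR values are our choices.

```python
import random

def de_rand_1_bin(pop, fitness, F=0.5, CR=0.9):
    # One generation of classic DE/rand/1/bin, minimising `fitness`
    # over real vectors. For each target, build a mutant from three
    # distinct random peers, crossover, then greedy one-to-one selection.
    n, d = len(pop), len(pop[0])
    new_pop = []
    for i, target in enumerate(pop):
        r1, r2, r3 = random.sample([j for j in range(n) if j != i], 3)
        a, b, c = pop[r1], pop[r2], pop[r3]
        jrand = random.randrange(d)  # guarantees at least one mutant gene
        trial = [a[j] + F * (b[j] - c[j])
                 if (random.random() < CR or j == jrand) else target[j]
                 for j in range(d)]
        # Greedy selection: the trial replaces the target only if no worse.
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop
```

Because selection is greedy and one-to-one, the best fitness in the population can never get worse from one generation to the next.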
Clonal selection algorithms: A comparative case study using effective mutation potentials
 in 4th International Conference on Artificial Immune Systems (ICARIS), LNCS 4163
, 2005
Cited by 15 (7 self)
This paper presents a comparative study of two important Clonal Selection Algorithms (CSAs): CLONALG and opt-IA. To understand the performance of both algorithms in depth, we deal with four different classes of problems: toy problems (one-counting and trap functions), pattern recognition, numerical optimization problems, and an NP-complete problem (the 2-D HP model for the protein structure prediction problem). Two possible versions of CLONALG have been implemented and tested. The experimental results show globally better performance of opt-IA with respect to CLONALG. Considering the results obtained, we can claim that CSAs represent a new class of Evolutionary Algorithms for effectively performing search, learning, and optimization tasks.
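The clone-and-hypermutate loop shared by CLONALG-style algorithms can be sketched as follows. This follows common descriptions of clonal selection for real-valued minimisation; the function name, the beta and rho parameters, and the Gaussian hypermutation are our assumptions, not the exact operators compared in the paper.

```python
import math
import random

def clonalg_step(pop, fitness, beta=1.0, rho=2.0):
    # One CLONALG-style step for minimisation: clone each candidate in
    # proportion to its rank, hypermutate clones at a rate that grows
    # with (normalised) fitness, and keep the best of each family.
    n = len(pop)
    ranked = sorted(pop, key=fitness)
    fmin, fmax = fitness(ranked[0]), fitness(ranked[-1])
    span = (fmax - fmin) or 1.0
    out = []
    for rank, x in enumerate(ranked):
        n_clones = max(1, int(beta * n / (rank + 1)))   # better rank -> more clones
        fnorm = (fitness(x) - fmin) / span              # 0 = best, 1 = worst
        rate = math.exp(-rho * (1.0 - fnorm))           # better -> smaller mutation
        clones = [[xi + random.gauss(0, rate) for xi in x]
                  for _ in range(n_clones)]
        out.append(min(clones + [x], key=fitness))      # elitist within the family
    return out
```

Keeping the original alongside its clones makes each family elitist, so the best fitness in the population never degrades across steps.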
Differential Evolution Using a Neighborhood-Based Mutation Operator
, 2009
Cited by 15 (7 self)
Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. It has reportedly outperformed a few evolutionary algorithms (EAs) and other search heuristics, such as particle swarm optimization (PSO), when tested on both benchmark and real-world problems. DE, however, is not completely free from the problems of slow and/or premature convergence. This paper describes a family of improved variants of the DE/target-to-best/1/bin scheme, which utilizes the concept of the neighborhood of each population member. The idea of small neighborhoods, defined over the index graph of parameter vectors, draws inspiration from the community of PSO algorithms. The proposed schemes balance the exploration and exploitation abilities of DE without imposing serious additional burdens in terms of function evaluations. They are shown to be statistically significantly better than, or at least comparable to, several existing DE variants as well as a few other significant evolutionary computing techniques on a test suite of 24 benchmark functions. The paper also investigates applications of the new DE variants to two real-life problems concerning parameter estimation for frequency-modulated sound waves and spread-spectrum radar polyphase code design.
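The neighborhood idea can be sketched on top of the target-to-best mutation V_i = X_i + F(X_best − X_i) + F(X_r1 − X_r2): instead of the global best, take the best vector from a small ring neighborhood on the population index graph. This is a sketch of that one ingredient under our own parameter names (k for the ring radius); the paper's full family additionally blends local and global models.

```python
import random

def target_to_best_mutant(pop, fitness, i, F=0.8, k=2):
    # Neighbourhood-based take on DE/target-to-best/1: the "best" term
    # comes from a ring neighbourhood of radius k on the population
    # index graph (indices wrap around), not from the whole population.
    n, d = len(pop), len(pop[0])
    hood = [(i + off) % n for off in range(-k, k + 1)]
    nbest = min(hood, key=lambda j: fitness(pop[j]))
    r1, r2 = random.sample([j for j in range(n) if j != i], 2)
    x, b, p, q = pop[i], pop[nbest], pop[r1], pop[r2]
    return [x[j] + F * (b[j] - x[j]) + F * (p[j] - q[j]) for j in range(d)]
```

Using a local best slows the flow of information through the population, which is exactly the exploration/exploitation trade-off the abstract describes.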
Search biases in constrained evolutionary optimization
 IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews
, 2005
Cited by 14 (1 self)
A common approach to constraint handling in evolutionary optimization is to apply a penalty function to bias the search towards a feasible solution. It has been proposed that the subjective setting of various penalty parameters can be avoided using a multi-objective formulation. This paper analyses and explains in depth why and when the multi-objective approach to constraint handling is expected to work or fail. Furthermore, an improved evolutionary algorithm based on evolution strategies and differential variation is proposed. Extensive experimental studies have been carried out. Our results reveal that the unbiased multi-objective approach to constraint handling may not be as effective as one may have assumed.
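The penalty approach the abstract contrasts with the multi-objective formulation can be sketched as a classic static penalty: the objective plus a weighted sum of squared constraint violations. The wrapper name and the quadratic form are our choices; the weight r is precisely the subjective parameter whose tuning the multi-objective formulation seeks to avoid.

```python
def penalised(f, gs, r=1000.0):
    # Static penalty for constraints g_i(x) <= 0: feasible points keep
    # their objective value, infeasible points pay r times the sum of
    # squared violations. The search is then run on fp instead of f.
    def fp(x):
        viol = sum(max(0.0, g(x)) ** 2 for g in gs)
        return f(x) + r * viol
    return fp
```

For example, minimising x² subject to x ≥ 1 (i.e. g(x) = 1 − x ≤ 0) leaves fp(1.0) = 1.0 and fp(2.0) = 4.0 unchanged, while fp(0.0) = 0 + 1000·1² = 1000.0.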