Results 1 – 2 of 2
What Makes a Constrained Problem Difficult to Solve by an Evolutionary Algorithm
, 2004
"... An empirical study about the features that prevent an Evolutionary Algorithm to reach the feasible region or even get the global optimum when it is used to solve global optimization constrained optimization problems is presented. For the experiments we use a Simple Multimembered Evolution Strateg ..."
Abstract

Cited by 1 (1 self)
An empirical study is presented of the features that prevent an Evolutionary Algorithm from reaching the feasible region, or even from finding the global optimum, when it is used to solve constrained global optimization problems. For the experiments we use a Simple Multimembered Evolution Strategy, which provides very competitive results on the well-known benchmark of 13 test functions. We also add 11 new problems with features we hypothesize decrease the performance of the algorithm (nonlinear equality constraints and high dimensionality).
Special Session on Constrained Real-Parameter Optimization
, 2006
"... Most optimization problems have constraints of different types (e.g., physical, time, geometric, etc.) which modify the shape of the search space. During the last few years, a wide variety of metaheuristics have been designed and applied to solve constrained optimization problems. Evolutionary algor ..."
Abstract
Most optimization problems have constraints of different types (e.g., physical, time, geometric, etc.) which modify the shape of the search space. During the last few years, a wide variety of metaheuristics have been designed and applied to solve constrained optimization problems. Evolutionary algorithms and most other metaheuristics, when used for optimization, naturally operate as unconstrained search techniques. Therefore, they require an additional mechanism to incorporate constraints into their fitness function. Historically, the most common approach to incorporating constraints (both in evolutionary algorithms and in mathematical programming) is the use of penalty functions, which were originally proposed in the 1940s and later expanded by many researchers. Penalty functions have, in general, several limitations. In particular, they are not a very good choice when trying to solve problems in which the optimum lies on the boundary between the feasible and the infeasible regions, or when the feasible region is disjoint. Additionally, penalty functions require careful fine-tuning to determine the most appropriate penalty factors to be used with our metaheuristics. In order to overcome the limitations of the penalty-function approach, researchers have proposed a number of diverse approaches to handle constraints such as fitness approximation in constrained
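To make the penalty-function idea concrete, here is a minimal sketch (not taken from either paper): a static penalty added to the objective of a toy constrained problem, optimized by a simple (1+1) evolution strategy. The problem, the penalty factor, and the ES parameters are all illustrative assumptions.

```python
import random

# Toy constrained problem (illustrative only):
# minimize f(x) = x1^2 + x2^2  subject to  g(x) = 1 - x1 - x2 <= 0.
def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):
    # Constraint violation measure: feasible when g(x) <= 0.
    return 1.0 - x[0] - x[1]

def penalized_fitness(x, penalty_factor=1000.0):
    """Static penalty: add a cost proportional to the constraint violation.

    The penalty_factor is the knob that, as the abstract notes, typically
    requires careful fine-tuning for each problem.
    """
    violation = max(0.0, g(x))
    return f(x) + penalty_factor * violation

def one_plus_one_es(steps=5000, sigma=0.1, seed=0):
    """A minimal (1+1) evolution strategy searching the penalized landscape."""
    rng = random.Random(seed)
    parent = [rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0)]
    best = penalized_fitness(parent)
    for _ in range(steps):
        # Gaussian mutation; the child replaces the parent if no worse.
        child = [xi + rng.gauss(0.0, sigma) for xi in parent]
        score = penalized_fitness(child)
        if score <= best:
            parent, best = child, score
    return parent, best

solution, score = one_plus_one_es()
print(solution, score)
```

Note that the optimum of this toy problem lies on the constraint boundary x1 + x2 = 1 (at x1 = x2 = 0.5), which is exactly the situation the abstract flags as difficult: too small a penalty factor lets the search settle in the infeasible region, while too large a factor makes the boundary hard to approach.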