Results 1–10 of 53
A Comprehensive Survey of Fitness Approximation in Evolutionary Computation
, 2003
Cited by 96 (6 self)

Abstract:
Evolutionary algorithms (EAs) have received increasing interest in both academia and industry. One main difficulty in applying EAs to real-world applications is that EAs usually need a large number of fitness evaluations before a satisfactory result can be obtained. However, fitness evaluations are not always straightforward in many real-world applications: either an explicit fitness function does not exist, or evaluating the fitness is computationally very expensive. In both cases, it is necessary to estimate the fitness function by constructing an approximate model. In this paper, a comprehensive survey of the research on fitness approximation in evolutionary computation is presented. Main issues, such as approximation levels, approximate model management schemes, and model construction techniques, are reviewed. To conclude, open questions and interesting issues in the field are discussed.
Evaluation-relaxation schemes for genetic and evolutionary algorithms
, 2002
Cited by 60 (28 self)

Abstract:
Genetic and evolutionary algorithms have been increasingly applied to solve complex, large-scale search problems with mixed success. Competent genetic algorithms have been proposed to solve hard problems quickly, reliably, and accurately. They have rendered problems that were difficult to solve by earlier GAs solvable, requiring only a subquadratic number of function evaluations. To facilitate solving large-scale complex problems, and to further enhance the performance of competent GAs, various efficiency-enhancement techniques have been developed. This study investigates one such class of efficiency-enhancement techniques, called evaluation relaxation. Evaluation-relaxation schemes replace a high-cost, low-error fitness function with a low-cost, high-error fitness function. The error in fitness functions comes in two flavors: bias and variance. The presence of bias and variance in fitness functions is considered in isolation, and strategies for increasing efficiency in both cases are developed. Specifically, approaches for choosing between two fitness functions with either differing variance or differing bias values have been developed. This thesis also investigates fitness inheritance as an evaluation-relaxation scheme.
Managing Approximate Models in Evolutionary Aerodynamic Design Optimization
 In Proceedings of IEEE Congress on Evolutionary Computation
, 2001
Cited by 23 (4 self)

Abstract:
Approximate models have to be used in evolutionary optimization when the original fitness function is computationally very expensive. Unfortunately, the convergence property of the evolutionary algorithm is unclear when an approximate model is used for fitness evaluation, because approximation errors are involved in the model. What is worse, the approximate model may introduce false optima that lead the evolutionary algorithm to a wrong solution. To address this problem, individual- and generation-based evolution control are introduced to ensure that the evolutionary algorithm using approximate fitness functions will converge correctly. A framework for managing approximate models in generation-based evolution control is proposed. This framework is well suited for parallel evolutionary optimization in which evaluation of the fitness function is time-consuming. Simulations on two benchmark problems and one example of aerodynamic design optimization demonstrate that the proposed algorithm is able to achieve a correct solution as well as a significantly reduced computation time.
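The generation-based control idea can be illustrated with a minimal sketch (our own toy, not the paper's actual framework): every k-th generation the expensive true fitness ranks the population, and the cheap approximate model ranks it otherwise. The specific functions, population size, and mutation step are assumptions for illustration only.

```python
import random

def generation_based_ec(true_f, surrogate, init_pop,
                        generations=20, control_every=5):
    """Toy (mu+lambda)-style loop with generation-based evolution control:
    in every control_every-th generation the expensive true fitness is used;
    otherwise the cheap approximate model ranks parents and offspring."""
    pop = list(init_pop)
    for gen in range(generations):
        offspring = [x + random.gauss(0, 0.1) for x in pop]   # Gaussian mutation
        f = true_f if gen % control_every == 0 else surrogate  # control generation?
        pop = sorted(pop + offspring, key=f)[:len(pop)]        # truncation selection
    return pop

# Hypothetical usage: a 1-D sphere function stands in for the expensive truth,
# and a slightly biased cheap model stands in for its surrogate.
random.seed(0)
best = generation_based_ec(lambda x: x * x,
                           lambda x: x * x + 0.05,
                           [random.uniform(-2, 2) for _ in range(10)])[0]
```

In a real setting, the individuals evaluated in the control generations would also be used to retrain the approximate model, which is what keeps false optima from misleading the search.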
Reducing fitness evaluations using clustering techniques and neural network ensembles
 In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), volume 1 of LNCS
, 2004
Cited by 18 (5 self)

Abstract:
In many real-world applications of evolutionary computation, it is essential to reduce the number of fitness evaluations. To this end, computationally efficient models can be constructed for fitness evaluations to assist the evolutionary algorithms. When approximate models are involved in evolution, it is very important to determine which individuals should be re-evaluated using the original fitness function to guarantee a faster and correct convergence of the evolutionary algorithm. In this paper, the k-nearest-neighbor method is applied to group the individuals of a population into a number of clusters. For each cluster, only the individual that is closest to the cluster center will be evaluated using the expensive original fitness function. The fitness of the other individuals is estimated using a neural network ensemble, which is also used to detect possible serious prediction errors. Simulation results from three test functions show that the proposed method exhibits better performance than the strategy where only the best individuals according to the approximate model are re-evaluated.
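The cluster-then-evaluate-representatives scheme can be sketched as follows. This is a simplified illustration, not the paper's method: plain 1-D k-means stands in for the clustering step, and a generic cheap function stands in for the neural network ensemble.

```python
import random

def kmeans_1d(xs, k, iters=10):
    """Plain 1-D k-means (a stand-in for the paper's clustering step)."""
    centers = sorted(random.sample(xs, k))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            clusters[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters, centers

def clustered_evaluation(pop, true_f, surrogate, k=3):
    """Spend true_f once per cluster, on the member closest to the center;
    estimate every other member with the cheap surrogate."""
    clusters, centers = kmeans_1d(pop, k)
    fitness, n_true = {}, 0
    for c, center in zip(clusters, centers):
        if not c:
            continue
        rep = min(c, key=lambda x: abs(x - center))  # cluster representative
        fitness[rep] = true_f(rep)                   # expensive call
        n_true += 1
        for x in c:
            if x is not rep:
                fitness[x] = surrogate(x)            # cheap estimate
    return fitness, n_true

# Hypothetical usage: 12 individuals, at most 3 expensive evaluations.
random.seed(0)
pop = [random.uniform(-3, 3) for _ in range(12)]
fitness, n_true = clustered_evaluation(pop, lambda x: x * x,
                                       lambda x: x * x + 0.1)
```

The true evaluations obtained for the representatives are exactly the data one would feed back into the ensemble for retraining.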
Neural network regularization and ensembling using multi-objective evolutionary algorithms
 In: Congress on Evolutionary Computation (CEC’04), IEEE
, 2004
Cited by 14 (2 self)

Abstract:
Regularization is an essential technique for improving the generalization of neural networks. Traditionally, regularization is conducted by including an additional term in the cost function of a learning algorithm. One main drawback of these regularization techniques is that a hyperparameter determining to what extent the regularization influences the learning algorithm must be set beforehand. This paper addresses the neural network regularization problem from a multi-objective optimization point of view. During the optimization, both the structure and the parameters of the neural network are optimized. Slightly modified versions of two multi-objective optimization algorithms, the dynamic weighted aggregation (DWA) method and the elitist non-dominated sorting genetic algorithm (NSGA-II), are used and compared. An evolutionary multi-objective approach to neural network regularization has a number of advantages compared to the traditional methods. First, a number of models spanning a spectrum of model complexity can be obtained in one optimization run, instead of only one single solution. Second, an efficient new regularization term can be introduced that is not applicable to gradient-based learning algorithms. As a natural by-product of the multi-objective optimization approach to neural network regularization, neural network ensembles can be easily constructed using the obtained networks with different levels of model complexity. Thus, the model complexity of the ensemble can be adjusted by adjusting the weight of each member network in the ensemble. Simulations are carried out on a test function to illustrate the feasibility of the proposed ideas.
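The dynamic weighted aggregation idea can be sketched abstractly. An oscillating weight is one common formulation of DWA; the period and the two objectives below (training error and a complexity term) are our assumptions for illustration, not details from the paper.

```python
import math

def dwa_weight(t, period=50):
    """Oscillating weight on the first objective: as generation t advances,
    a single run sweeps through different trade-offs between objectives."""
    return abs(math.sin(2 * math.pi * t / period))

def aggregate(error, complexity, t):
    """Scalarized cost at generation t: a weighted sum of training error
    and a model-complexity term (the regularizer)."""
    w = dwa_weight(t)
    return w * error + (1.0 - w) * complexity
```

At each generation the evolutionary algorithm minimizes `aggregate` with the current `t`; archiving the non-dominated networks found along the way yields the spectrum of models with different complexities that the abstract describes, which can then be combined into an ensemble.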
Coevolution of Fitness Predictors
 IEEE Transactions on Evolutionary Computation
, 2008
Cited by 13 (8 self)

Abstract:
We present an algorithm that coevolves fitness predictors, optimized for the solution population, which reduce fitness evaluation cost and frequency while maintaining evolutionary progress. Fitness predictors differ from fitness models in that they may or may not represent the objective fitness, opening opportunities to adapt selection pressures and diversify solutions. The use of coevolution addresses three fundamental challenges faced in past fitness approximation research: 1) the model learning investment; 2) the level of approximation of the model; and 3) the loss of accuracy. We discuss applications of this approach and demonstrate its impact on the symbolic regression problem. We show that coevolved predictors scale favorably with problem complexity on a series of randomly generated test problems. Finally, we present additional empirical results demonstrating that fitness prediction can also reduce solution bloat and find solutions more reliably. Index Terms—bloat reduction, coevolution, fitness modeling, symbolic regression.
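One concrete reading of a fitness predictor is a small subset of the training data on which candidate solutions are scored. The toy below is our own simplification, not the paper's algorithm: solutions are slopes for y = a·x, predictors are 4-point index subsets, and predictors are kept when their cheap scores agree with the exact fitness on a few probe solutions.

```python
import random

random.seed(1)
xs = [i / 10 for i in range(1, 21)]
ys = [3.0 * x for x in xs]           # ground-truth slope: 3

def full_error(a):                    # "exact" fitness: all 20 points
    return sum((a * x - y) ** 2 for x, y in zip(xs, ys))

def predicted_error(a, idx):          # predictor: only 4 points
    return sum((a * xs[i] - ys[i]) ** 2 for i in idx)

solutions = [random.uniform(-5, 5) for _ in range(8)]
predictors = [random.sample(range(20), 4) for _ in range(5)]

for _ in range(30):
    # Predictor fitness: agreement with the exact fitness on 3 probe
    # solutions (the factor 5 = 20/4 rescales the 4-point sum).
    best_pred = min(predictors,
                    key=lambda p: sum((predicted_error(a, p) * 5
                                       - full_error(a)) ** 2
                                      for a in solutions[:3]))
    # Evolve solutions under the best predictor's cheap fitness.
    solutions.sort(key=lambda a: predicted_error(a, best_pred))
    solutions = solutions[:4] + [a + random.gauss(0, 0.2)
                                 for a in solutions[:4]]
    # Replace the remaining predictors with fresh random subsets.
    predictors = [best_pred] + [random.sample(range(20), 4)
                                for _ in range(4)]

best = min(solutions, key=full_error)
```

Each loop iteration costs a handful of exact evaluations (the probes) rather than a full-population sweep, which is where the evaluation savings come from.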
Evolution Strategies assisted by Gaussian Processes with Improved Pre-Selection Criterion
 In IEEE Congress on Evolutionary Computation, CEC 2003
, 2003
Cited by 11 (0 self)

Abstract:
In many engineering optimization problems, the number of fitness function evaluations is limited by time and cost. These problems pose a special challenge to the field of evolutionary computation, since existing evolutionary methods require a very large number of problem function evaluations. One popular way to address this challenge is the application of approximation models as a surrogate for the real fitness function. We propose a model-assisted Evolution Strategy, which uses a Gaussian Process approximation model to pre-select the most promising solutions. To refine the pre-selection process, we determine the likelihood that each individual will improve on the overall best solution found so far. As a result, the new algorithm has much better convergence behavior and achieves better results than standard evolutionary optimization approaches with fewer fitness evaluations. Numerical results from extensive simulations on several high-dimensional test functions, including multimodal functions, are presented.
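The pre-selection criterion can be sketched as a probability-of-improvement filter. This is a minimal illustration under assumptions: `model(x)` returning a predictive mean and standard deviation stands in for the Gaussian Process, and the toy numbers are ours, not the paper's.

```python
from statistics import NormalDist

def probability_of_improvement(mu, sigma, f_best):
    """For minimization: probability that a prediction N(mu, sigma^2)
    falls below the best true fitness found so far."""
    if sigma <= 0:
        return float(mu < f_best)
    return NormalDist(mu, sigma).cdf(f_best)

def preselect(candidates, model, f_best, n):
    """Keep the n candidates most likely to improve on f_best, where
    model(x) -> (mean, std) is any GP-style predictive model."""
    scored = [(probability_of_improvement(*model(x), f_best), x)
              for x in candidates]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [x for _, x in scored[:n]]
```

Only the pre-selected candidates would then be evaluated with the expensive true fitness; using the predictive uncertainty (not just the mean) is what distinguishes this criterion from ranking by the model's point prediction.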
Hierarchical Surrogate-Assisted Evolutionary Optimization Framework
 In Evolutionary Computation, 2004. CEC2004. Congress on
, 2004
Cited by 8 (3 self)

Abstract:
This paper presents enhancements to a surrogate-assisted evolutionary optimization framework proposed earlier in the literature for solving computationally expensive design problems on a limited computational budget [1]. The main idea of our former framework was to couple evolutionary algorithms with a feasible sequential quadratic programming solver in the spirit of Lamarckian learning, including a trust-region approach for interleaving the true fitness function with computationally cheap local surrogate models during gradient-based search. In this paper, we propose a hierarchical surrogate-assisted evolutionary optimization framework for accelerating the convergence rate of the original surrogate-assisted evolutionary optimization framework. Instead of using the exact high-fidelity fitness function during evolutionary search, a Kriging global surrogate model is used to screen the population for individuals that will undergo Lamarckian learning. Numerical results are presented for two multimodal benchmark test functions to show that the proposed approach leads to a further acceleration of the evolutionary search process.
Efficient genetic algorithms using discretization scheduling
, 2002
Cited by 8 (1 self)

Abstract:
In many applications of genetic algorithms, there is a tradeoff between speed and accuracy in fitness evaluations when evaluations are relaxed using numerical methods such as numerical integration. In these types of applications, the cost and accuracy vary with the discretization error when implicit or explicit quadrature is used to estimate the function evaluations. There may be several functions with different grid sizing to obtain a given solution quality. This thesis examines discretization scheduling, or how to vary the discretization within the genetic algorithm in order to use the least amount of computation time for a solution of a desired quality. The effectiveness of discretization scheduling can be determined by comparing its computation time to the computation time of a GA using a constant discretization. Time budgeting is used to estimate the computational resources needed, and there are three ingredients for the discretization scheduling: population sizing, estimated time for each function evaluation, and predicted convergence-time analysis. Idealized one- and two-dimensional experiments and an inverse groundwater application illustrate the computational savings to be achieved from using discretization scheduling.
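The core mechanism can be sketched with quadrature on a toy integral. The schedule below (coarse 4-interval grid early, 64 intervals late) is a hypothetical example of ours, not the thesis's actual schedule: early generations get cheap, inaccurate evaluations, and accuracy is bought only when the population needs it.

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n intervals; cost grows with n,
    discretization error shrinks as O(1/n^2)."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

def scheduled_fitness(x, gen, max_gen):
    """Hypothetical discretization schedule: refine the grid from 4 to 64
    intervals as the run progresses (gen in [0, max_gen))."""
    n = 4 * 2 ** min(4, gen * 5 // max_gen)   # n in {4, 8, 16, 32, 64}
    return trapezoid(lambda t: (t - x) ** 2, 0.0, 1.0, n)
```

For this toy fitness (an integral minimized at x = 0.5, with exact value 1/12 there), the coarse early grid already ranks candidates usefully while costing a fraction of the fine grid's evaluations.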
Automatic tuning of agent-based models using genetic algorithms
 In MABS
, 2005
Cited by 7 (1 self)

Abstract:
When developing multi-agent systems (MAS) or models in the context of agent-based simulation (ABS), tuning the model constitutes a crucial step of the design process. Indeed, agent-based models are generally characterized by many parameters, which together determine the global dynamics of the system. Moreover, small changes made to a single parameter sometimes lead to a radical modification of the dynamics of the whole system. The development and parameter setting of an agent-based model can thus become long and tedious if we have no accurate, automatic, and systematic strategy for exploring this parameter space. It is the development of such a strategy that we work on here, suggesting the use of genetic algorithms. The idea is to capture in the fitness function the goal of the design process (efficiency for MAS that realize a given function, realism for agent-based models, etc.) and to make the model automatically evolve in that direction. However, the use of genetic algorithms (GAs) in the context of ABS raises specific difficulties, which we develop in this article, explaining possible solutions and illustrating them on a simple and well-known model: food foraging by a colony of ants. We then apply the method to a more complex example: the simulation of the glycolysis and phosphotransferase systems in Escherichia coli. In this work, we are interested in testing the hypothesis of hyperstructures, which are believed to improve the behavior of a cell. We try to determine under what conditions this may be true, and how these hyperstructures may function.