Results 1-10 of 25
Single and Multiobjective Evolutionary Optimization Assisted by Gaussian Random Field Metamodels
Abstract

Cited by 11 (0 self)
This paper presents and analyzes in detail an efficient search method based on Evolutionary Algorithms (EA) assisted by local Gaussian Random Field Metamodels (GRFM). It is designed for use in optimization problems with computationally expensive evaluation function(s). The role of the GRFM is to predict objective function values for new candidate solutions by exploiting information recorded during previous evaluations. Moreover, GRFM are able to provide estimates of the confidence of their predictions.
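In its simplest form, a Gaussian random field metamodel of this kind reduces to Gaussian process (kriging) prediction: given previously evaluated points, it returns both a predicted objective value and a variance expressing the confidence of that prediction. A minimal NumPy sketch, assuming a squared-exponential kernel with unit signal variance (the function names, kernel choice, and hyperparameters are illustrative, not the authors'):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel matrix between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_new, length_scale=1.0, noise=1e-8):
    """Return predictive mean and variance at X_new (simple kriging)."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_new, X_train, length_scale)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha
    # prior variance (1.0 for this kernel) minus the variance explained by the data
    var = 1.0 - np.einsum("ij,ij->i", K_s, np.linalg.solve(K, K_s.T).T)
    return mean, np.maximum(var, 0.0)
```

At already-evaluated points the variance collapses toward zero, which is exactly the confidence information a metamodel-assisted EA can exploit to decide which candidates need a real (expensive) evaluation.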
Local metamodels for optimization using evolution strategies
 Parallel Problem Solving from Nature (PPSN IX)
, 2006
Abstract

Cited by 10 (3 self)
We employ local metamodels to enhance the efficiency of evolution strategies in the optimization of computationally expensive problems. The method combines second-order local regression metamodels with the Covariance Matrix Adaptation Evolution Strategy. Experiments on benchmark problems demonstrate that the proposed metamodels can reliably account for the ranking of the offspring population, resulting in significant computational savings. The results show that the use of local metamodels significantly increases the efficiency of already competitive evolution strategies.
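Because evolution strategies only need the *ranking* of the offspring, a cheap second-order regression fitted to nearby evaluated points can stand in for the expensive objective. A minimal 1-D sketch, assuming a full quadratic basis fitted by least squares (the helper names are illustrative; the paper's models are multivariate):

```python
import numpy as np

def fit_quadratic(X, y):
    """Least-squares fit of y ~ a + b*x + c*x^2 over evaluated points."""
    Phi = np.column_stack([np.ones_like(X), X, X**2])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return coef

def predicted_ranking(coef, candidates):
    """Rank candidates by predicted objective value (best first, minimization)."""
    preds = coef[0] + coef[1] * candidates + coef[2] * candidates**2
    return np.argsort(preds)
```

The strategy then spends real evaluations only where the surrogate ranking is in doubt, which is where the reported computational savings come from.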
A memory-based RASH optimizer
 In AAAI-06 Workshop on Heuristic Search, Memory-Based Heuristics and Their Applications
, 2006
Abstract

Cited by 6 (2 self)
This paper presents a memory-based Reactive Affine Shaker (MRASH) algorithm for global optimization. The Reactive Affine Shaker is an adaptive search algorithm based only on function values. MRASH is an extension of RASH in which good starting points for RASH are suggested online by using Bayesian Locally Weighted Regression (BLWR). Both techniques use the memory of the previous search history to guide future exploration, but in very different ways: RASH compiles the previous experience into a local search area where sample points are drawn, while locally weighted regression saves the entire previous history, which is mined extensively whenever an additional sample point is generated. Because of the high computational cost of the BLWR model, it is applied only to evaluate the potential of an initial point for a local search run. The experimental results, focused on the case where the dominant computational cost is the evaluation of the target function f, show that MRASH is indeed capable of producing good results with a smaller number of function evaluations.
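Locally weighted regression of the kind mentioned above keeps the whole evaluation history and, for each query point, fits a regression whose points are weighted by distance to the query. A minimal non-Bayesian sketch (Gaussian distance weights and an affine basis are assumed choices; the paper's BLWR additionally maintains Bayesian posteriors):

```python
import numpy as np

def locally_weighted_predict(X, y, x_query, bandwidth=1.0):
    """Predict f(x_query) by distance-weighted linear regression over the history."""
    d2 = ((X - x_query) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth**2))          # Gaussian distance weights
    Phi = np.column_stack([np.ones(len(X)), X])   # affine basis
    W = np.diag(w)
    coef = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ y)  # weighted least squares
    return coef[0] + coef[1:] @ x_query
```

Each prediction re-solves a small weighted least-squares problem over all stored points, which illustrates why the cost grows with the history and why MRASH reserves the model for rating candidate starting points only.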
Using Gaussian Processes to Optimize Expensive Functions.
Abstract

Cited by 6 (0 self)
The task of finding the optimum of some function f(x) is commonly accomplished by generating and testing sample solutions iteratively, choosing each new sample x heuristically on the basis of results to date. We use Gaussian processes to represent predictions and uncertainty about the true function, and describe how to use these predictions to choose where to take each new sample in an optimal way. By doing this we were able to solve a difficult optimization problem, finding weights in a neural network controller to simultaneously balance two vertical poles, using an order of magnitude fewer samples than reported elsewhere.
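A standard way to turn a Gaussian process's mean and uncertainty into a sampling decision is the expected-improvement criterion; it is one common instantiation of "choosing where to sample optimally", not necessarily the exact rule this paper uses. A closed-form sketch for minimization:

```python
import math

def expected_improvement(mean, std, best):
    """Closed-form EI for minimization, given the GP's mean/std at a candidate
    and the best objective value observed so far."""
    if std <= 0.0:
        return max(best - mean, 0.0)
    z = (best - mean) / std
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal PDF
    return (best - mean) * cdf + std * pdf
```

The criterion rewards both a low predicted mean (exploitation) and a high predictive uncertainty (exploration), which is what makes such methods so sample-efficient on expensive problems like the pole-balancing controller.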
A Study on Metamodeling Techniques, Ensembles, and Multi-Surrogates in Evolutionary Computation
Abstract

Cited by 4 (0 self)
The Surrogate-Assisted Memetic Algorithm (SAMA) is a hybrid evolutionary algorithm, in particular a memetic algorithm, that employs surrogate models in the optimization search. Since most of the objective function evaluations in SAMA are approximated, the search performance of SAMA is likely to be affected by the characteristics of the models used. In this paper, we study the search performance of different metamodeling techniques, ensembles, and multi-surrogates in SAMA. In particular, we consider SAMA-TRF, a SAMA model management framework that incorporates a trust region scheme for interleaving use of the exact objective function with computationally cheap local metamodels during local searches. Four different metamodels, namely Gaussian Process (GP), Radial Basis Function (RBF), Polynomial Regression (PR), and Extreme Learning Machine
Curiosity-Driven Optimization
 In IEEE Congress on Evolutionary Computation (CEC)
, 2011
Abstract

Cited by 3 (2 self)
The principle of artificial curiosity directs active exploration towards the most informative or most interesting data. We show its usefulness for global black box optimization when data point evaluations are expensive. Gaussian process regression is used to model the fitness function based on all observations available so far. For each candidate point this model estimates expected fitness reduction, and yields a novel closed-form expression of expected information gain. A new type of Pareto-front algorithm continually pushes the boundary of candidates not dominated by any other known data according to both criteria, using multi-objective evolutionary search. This makes the exploration-exploitation trade-off explicit, and permits maximally informed data selection. We illustrate the robustness of our approach in a number of experimental scenarios.
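The non-dominance test at the heart of such a Pareto-front algorithm is simple to state. A minimal sketch for two maximized criteria (e.g. expected fitness reduction and expected information gain; the function names are illustrative):

```python
def dominates(a, b):
    """True if point a dominates b when every criterion is maximized."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Return the subset of points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

Keeping the whole front, rather than a single scalarized best point, is what makes the exploration-exploitation trade-off explicit: exploitative and exploratory candidates coexist on the front until one strictly dominates the other.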
A Pareto Following Variation Operator for Fast-Converging Multiobjective Evolutionary Algorithms
Abstract

Cited by 3 (2 self)
One of the major difficulties when applying Multiobjective Evolutionary Algorithms (MOEA) to real-world problems is the large number of objective function evaluations. Approximate (or surrogate) methods offer the possibility of reducing the number of evaluations without reducing solution quality. Artificial Neural Network (ANN) based models are one approach that has been used to approximate the future front from the currently available fronts with acceptable accuracy; however, the associated computational costs limit their effectiveness. In this work, we introduce a simple approach with comparatively smaller computational cost, developed as a variation operator that can be used in any kind of multiobjective optimizer. When designing this model, we considered the whole search procedure as a dynamic system that takes the objective values available in the current front as input and generates approximated design variables for the next front as output. Initial simulation experiments have produced encouraging results in comparison to NSGA-II. Our motivation was to increase the speed of the hosting optimizer. We compared the performance of the algorithm with respect to the total number of function evaluations and the hypervolume metric. This variation operator has a worst-case complexity of O(nkN^3), where N is the population size and n and k are the numbers of design variables and objectives, respectively.
Global Optimization Using Hybrid Approach
 (SMO'07), 7th WSEAS International Conference on Simulation, Modelling and Optimization
Abstract

Cited by 2 (0 self)
The paper deals with a global optimization algorithm using a hybrid approach. To take advantage of its global search capability, an evolution strategy (ES) with some modifications to the recombination formulas and elite keeping is used first to find near-optimal solutions. Sequential quadratic programming (SQP) is then used to find the exact solution from the solutions found by the ES. One merit of the algorithm is that the solutions for multimodal problems can be found in a single run. Eight popular test problems are used to test the proposed algorithm. The results are satisfactory in quality and efficiency. Key words: global optimization algorithm, hybrid approach, evolution strategy
OPTIMIZATION METHODS APPLIED TO AERODYNAMIC FLOW CONTROL
, 2012
Abstract

Cited by 2 (1 self)
This study deals with the use of optimization algorithms to determine efficient parameters of flow control devices. To improve the performance of systems characterized by detached flows and vortex shedding, the use of flow control devices, such as oscillatory jets, is intensively studied using numerical as well as experimental methods. However, the determination of efficient control parameters is still a bottleneck for industrial problems. Therefore, we propose to couple a global optimization algorithm with an unsteady flow simulation to derive efficient flow control rules. We consider as test case the turbulent flow over a backward-facing step, including a synthetic jet actuator. The aim is to reduce the time-averaged recirculation length behind the step by optimizing the jet blowing/suction amplitude and frequency. The Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations are solved within a mixed finite-element/finite-volume (MEV) framework using the near-wall low-Reynolds-number one-equation Spalart-Allmaras turbulence closure. The steady flow simulation without control is first validated by comparison with experimental and numerical data. Then, the optimization method EGO (Efficient Global Optimization), based on the construction of a Gaussian surrogate model, is coupled with the solver and applied to the unsteady flow with actuation. It is shown that the time-averaged recirculation length can be shortened when suitable control parameters are used. Jérémie Labroquère, Régis Duvigneau. hal-00742940, version 1, 17 Oct 2012.
ASAGA: An Adaptive Surrogate-Assisted Genetic Algorithm
Abstract

Cited by 2 (0 self)
Genetic algorithms (GAs) used in complex optimization domains usually need to perform a large number of fitness function evaluations in order to get near-optimal solutions. In real-world application domains such as engineering design problems, such evaluations might be extremely expensive computationally. It is therefore common to estimate or approximate the fitness using certain methods. A popular method is to construct a so-called surrogate or metamodel to approximate the original fitness function, which can simulate the behavior of the original fitness function but can be evaluated much faster. It is usually difficult to determine which approximate model should be used and/or what the frequency of usage should be; the answer also varies depending on the individual problem. To solve this problem, an adaptive fitness approximation GA (ASAGA) is presented. ASAGA adaptively chooses the appropriate model type, and adaptively adjusts the model complexity and the frequency of model usage according to time spent and model accuracy. ASAGA also introduces a stochastic penalty function method to handle constraints. Experiments show that ASAGA outperforms non-adaptive surrogate-assisted GAs with statistical significance.