Results 1 - 6 of 6
Combining convergence and diversity in evolutionary multiobjective optimization
 Evolutionary Computation
, 2002
Abstract

Cited by 121 (11 self)
Over the past few years, the research on evolutionary algorithms has demonstrated their niche in solving multiobjective optimization problems, where the goal is to find a number of Pareto-optimal solutions in a single simulation run. Many studies have depicted different ways evolutionary algorithms can progress towards the Pareto-optimal set with a widely spread distribution of solutions. However, none of the multiobjective evolutionary algorithms (MOEAs) has a proof of convergence to the true Pareto-optimal solutions with a wide diversity among the solutions. In this paper, we discuss why a number of earlier MOEAs do not have such properties. Based on the concept of ε-dominance, new archiving strategies are proposed that overcome this fundamental problem and provably lead to MOEAs that have both the desired convergence and distribution properties. A number of modifications to the baseline algorithm are also suggested. The concept of ε-dominance introduced in this paper is practical and should make the proposed algorithms useful to researchers and practitioners alike.
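The ε-dominance relation this paper builds on admits a very small implementation. The sketch below assumes the additive form on a minimization problem; the function name and tuple representation of objective vectors are illustrative, not taken from the paper:

```python
def eps_dominates(a, b, eps):
    """True if objective vector a additively epsilon-dominates b
    (minimization): a_i - eps <= b_i for every objective i."""
    return all(ai - eps <= bi for ai, bi in zip(a, b))
```

Relaxing plain dominance by the slack eps is what lets an archive keep only one representative per ε-sized region while still covering the whole Pareto-optimal set.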
Archiving with Guaranteed Convergence and Diversity in Multi-Objective Optimization
 In Proceedings of the Genetic and Evolutionary Computation Conference
, 2002
Abstract

Cited by 18 (4 self)
Over the past few years, the research on evolutionary algorithms has demonstrated their niche in solving multiobjective optimization problems, where the goal is to find a number of Pareto-optimal solutions in a single simulation run. However, none of the multiobjective evolutionary algorithms (MOEAs) has a proof of convergence to the true Pareto-optimal solutions with a wide diversity among the solutions. In this paper we discuss why a number of earlier MOEAs do not have such properties. A new archiving strategy is proposed that maintains a subset of the generated solutions. It guarantees convergence and diversity according to well-defined criteria, i.e. ε-dominance and ε-Pareto optimality.
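A grid-based archive in this spirit can be sketched as follows. This is a simplified illustration (additive ε, minimization, one representative per grid cell), not the paper's exact update rule; the within-cell tie-break in particular is a stand-in:

```python
import math

def box(point, eps):
    """Grid-cell index of an objective vector for cell size eps."""
    return tuple(math.floor(f / eps) for f in point)

def update_archive(archive, cand, eps):
    """Sketch of an epsilon-Pareto archive update (minimization).
    archive maps a cell index to the single point kept in that cell."""
    cb = box(cand, eps)
    # Reject the candidate if some occupied cell dominates its cell.
    for b in archive:
        if b != cb and all(bi <= ci for bi, ci in zip(b, cb)):
            return archive
    # Drop occupied cells that the candidate's cell dominates.
    archive = {b: p for b, p in archive.items()
               if b == cb or not all(ci <= bi for ci, bi in zip(cb, b))}
    # Within a cell, keep one representative (a real implementation
    # would compare by Pareto dominance; the objective sum is a stand-in).
    if cb not in archive or sum(cand) < sum(archive[cb]):
        archive[cb] = cand
    return archive
```

Because each cell holds at most one point and dominated cells are purged, the archive size stays bounded while every Pareto-optimal point remains ε-dominated by some archived point.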
Adaptive Diversity Maintenance and Convergence Guarantee in Multiobjective Evolutionary Algorithms
 Proceedings of the 2003 Congress on Evolutionary Computation (CEC 2003), IEEE
, 2003
Abstract

Cited by 3 (0 self)
The issue of obtaining a well-converged and well-distributed set of Pareto optimal solutions efficiently and automatically is crucial in multiobjective evolutionary algorithms (MOEAs). Many studies have proposed different evolutionary algorithms that can progress towards Pareto optimal sets with a widespread distribution of solutions. However, most mathematically convergent MOEAs require certain prior knowledge about the objective space in order to efficiently maintain widespread solutions. In this paper, we propose, based on our novel E-dominance concept, an Adaptive Rectangle Archiving (ARA) strategy that overcomes this important problem. The MOEA with this archiving technique provably converges to well-distributed Pareto optimal solutions without prior knowledge. ARA complements the existing archiving techniques, and is useful to both researchers and practitioners.
Stochastic convergence of random search to fixed size Pareto set approximations. arXiv preprint arXiv:0711.2949
, 2007
Abstract

Cited by 2 (1 self)
This paper presents the first convergence result for random search algorithms to a subset of the Pareto set of given maximum size k with bounds on the approximation quality ε. The core of the algorithm is a new selection criterion based on a hypothetical multilevel grid on the objective space. It is shown that, when using this criterion for accepting new search points, the sequence of solution archives converges with probability one to a subset of the Pareto set that ε-dominates the entire Pareto set. The obtained approximation quality ε is equal to the size of the grid cells on the finest level of resolution that allows an approximation with at most k points in the family of grids considered. While the convergence result is of general theoretical interest, the archiving algorithm might be of high practical value for any type of iterative multiobjective optimization method, such as evolutionary algorithms or other metaheuristics, which all rely on the usage of a finite online memory to store the best solutions found so far as the current approximation of the Pareto set.
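The grid-family idea can be illustrated by the resolution search alone: halve the cell size until the archived points would occupy more than k cells, and report the last feasible cell size as the achieved ε. This is a simplified sketch of that one ingredient under an assumed dyadic grid family, not the paper's full selection criterion:

```python
import math

def finest_level(points, k, coarsest_eps=1.0, max_depth=20):
    """Smallest grid cell size (finest resolution) at which the given
    points still occupy at most k cells: halve the cell size until
    that would fail, then report the last feasible size."""
    eps = coarsest_eps
    for _ in range(max_depth):
        half = eps / 2
        cells = {tuple(math.floor(f / half) for f in p) for p in points}
        if len(cells) > k:
            break
        eps = half
    return eps
```

The returned cell size plays the role of the approximation quality ε in the statement above: it is the finest resolution at which at most k grid cells suffice to cover the stored points.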
Enhancement of sandwich algorithms for approximating higher-dimensional convex Pareto sets
 INFORMS Journal on Computing
Abstract

Cited by 1 (0 self)
In many fields, we come across problems where we want to optimize several conflicting objectives simultaneously. To find a good solution for such multiobjective optimization problems, an approximation of the Pareto set is often generated. In this paper, we consider the approximation of Pareto sets for problems with three or more convex objectives and with convex constraints. For these problems, sandwich algorithms can be used to determine an inner and outer approximation between which the Pareto set is 'sandwiched'. Using these two approximations, we can calculate an upper bound on the approximation error. This upper bound can be used to determine which parts of the approximations must be improved and to provide a quality guarantee to the decision maker. In this paper, we extend higher-dimensional sandwich algorithms in three different ways. Firstly, we introduce the new concept of adding dummy points to the inner approximation of a Pareto set. By using these dummy points, we can determine accurate inner and outer approximations more efficiently, i.e., using less time-consuming optimizations. Secondly, we introduce a new method for the calculation of an error measure which is …
Efficient Parent Selection for Approximation-Guided Evolutionary Multi-Objective Optimization
Abstract
The Pareto front of a multiobjective optimization problem is typically very large and can only be approximated. Approximation-Guided Evolution (AGE) is a recently presented evolutionary multiobjective optimization algorithm that aims at iteratively minimizing the approximation factor, which measures how well the current population approximates the Pareto front. It outperforms state-of-the-art algorithms for problems with many objectives. However, AGE's performance is not competitive on problems with very few objectives. We study the reason for this behavior and observe that AGE selects parents uniformly at random, which has a detrimental effect on its performance. We then investigate different algorithm-specific selection strategies for AGE. The main difficulty here is finding a computationally efficient selection scheme which does not harm AGE's linear runtime in the number of objectives. We present several improved selection schemes that are computationally efficient and substantially improve AGE on low-dimensional objective spaces, but have no negative effect in high-dimensional objective spaces.
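The approximation factor AGE minimizes can be computed directly once a finite reference set standing in for the Pareto front is available. The sketch below assumes minimization and the additive form of the factor; the function and variable names are illustrative, not from the paper:

```python
def approx_factor(pop, front):
    """Additive approximation factor (minimization): the worst case,
    over reference points, of the smallest additive slack with which
    some population member covers that reference point."""
    return max(
        min(max(s - r for s, r in zip(sol, ref)) for sol in pop)
        for ref in front
    )
```

Each evaluation is a triple loop over reference points, population members, and objectives, which is consistent with the linear runtime in the number of objectives that the selection schemes above must preserve.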