Results 1–10 of 34
M-PAES: A Memetic Algorithm for Multiobjective Optimization
, 2000
Abstract

Cited by 52 (5 self)
A memetic algorithm for tackling multiobjective optimization problems is presented. The algorithm employs the proven local search strategy used in the Pareto archived evolution strategy (PAES) and combines it with the use of a population and recombination. Verification of the new algorithm is carried out by testing it on a set of multiobjective 0/1 knapsack problems. On each problem instance, comparison is made between the new memetic algorithm, the (1+1)-PAES local searcher, and the strength Pareto evolutionary algorithm (SPEA) of Zitzler and Thiele.

1 Introduction
In recent years, genetic algorithms (GAs) have been applied more and more to multiobjective problems. For a comprehensive overview, see [2]. Undoubtedly, as an extremely general metaheuristic, GAs are well qualified to tackle problems of a great variety. This asset, coupled with the possession of a population, seems to make them particularly attractive for use in multiobjective problems, where a number of solutions appro...
A Multi-Objective Algorithm based upon Particle Swarm Optimisation, an Efficient Data Structure and Turbulence
, 2002
Abstract

Cited by 39 (1 self)
This paper introduces a Multi-Objective Algorithm (MOA) based upon the Particle Swarm Optimisation (PSO) heuristic.
Multi-Objective Optimization Using Genetic Algorithms: A Tutorial
Abstract

Cited by 27 (0 self)
Multiobjective formulations are realistic models for many complex engineering optimization problems. Customized genetic algorithms have been demonstrated to be particularly effective at determining excellent solutions to these problems. In many real-life problems, objectives under consideration conflict with each other, and optimizing a particular solution with respect to a single objective can result in unacceptable results with respect to the other objectives. A reasonable solution to a multiobjective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. In this paper, an overview and tutorial is presented describing genetic algorithms developed specifically for problems with multiple objectives. They differ from traditional genetic algorithms by using specialized fitness functions, introducing methods to promote solution diversity, and other approaches.
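The dominance relation this tutorial builds on can be made concrete. Below is a minimal illustrative sketch in Python (not taken from the tutorial itself) of Pareto dominance under minimization, and of filtering a set of objective vectors down to the non-dominated set the abstract describes:

```python
# Illustrative sketch (not from the tutorial itself): Pareto dominance under
# minimization, and filtering objective vectors to the non-dominated subset.

def dominates(a, b):
    """True if vector `a` Pareto-dominates `b`: no worse in every objective
    and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the subset of `points` dominated by no other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = non_dominated([(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)])
# (3, 3) and (5, 5) are dominated by (2, 2) and drop out.
```

Each surviving point represents a different trade-off between the two objectives, which is exactly the "set of solutions" a decision maker is handed.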
Bounded Archiving using the Lebesgue Measure
, 2003
Abstract

Cited by 17 (1 self)
Many modern multiobjective evolutionary algorithms (MOEAs) store the points discovered during optimization in an external archive, separate from the main population, as a source of innovation and/or for presentation at the end of a run. Maintaining a bound on the size of the archive may be desirable or necessary for several reasons, but choosing which points to discard and which to keep in the archive, as they are discovered, is not trivial. In this paper we briefly review the state-of-the-art in bounded archiving, and present a new method based on locally maximizing the hypervolume dominated by the archive. The new archiver is shown to outperform existing methods, on several problem instances, with respect to the quality of the archive obtained when judged using three distinct quality measures.
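As a rough illustration of the archiving idea above (not the paper's actual archiver, which is more general), the following Python sketch maintains a bounded two-objective archive under an assumed reference point: when the archive overflows, it discards the member whose removal loses the least hypervolume, i.e. the smallest Lebesgue contribution.

```python
# Illustrative sketch only, for a two-objective minimization problem with an
# assumed reference point `ref`; discard the archive member whose removal
# loses the least hypervolume (Lebesgue measure).

def dominates(a, b):
    """True if `a` Pareto-dominates `b` under minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def hv_2d(points, ref):
    """Area dominated by mutually non-dominated 2-D points, bounded by `ref`."""
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):  # ascending f1 implies descending f2
        area += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return area

def insert_bounded(archive, candidate, capacity, ref):
    """Insert `candidate` into a non-dominated archive of at most `capacity`
    points, dropping the member with the smallest hypervolume contribution."""
    if any(dominates(a, candidate) for a in archive):
        return archive  # candidate rejected outright
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    if len(archive) > capacity:
        total = hv_2d(archive, ref)
        contribution = lambda p: total - hv_2d([q for q in archive if q != p], ref)
        archive.remove(min(archive, key=contribution))
    return archive

archive = insert_bounded([(1.0, 3.0), (3.0, 1.0)], (1.5, 2.5),
                         capacity=2, ref=(4.0, 4.0))
# (1.0, 3.0) contributes the least hypervolume and is discarded.
```

The subtlety the abstract alludes to is that such greedy, local decisions are made as points arrive, so the retained set depends on arrival order.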
Vector Evaluated Differential Evolution for Multiobjective Optimization
 In Proceedings of the 2004 Congress on Evolutionary Computation (CEC 2004)
, 2004
Abstract

Cited by 15 (0 self)
A parallel, multi-population Differential Evolution algorithm for multiobjective optimization is introduced. The algorithm is equipped with a domination selection operator to enhance its performance by favoring non-dominated individuals in the populations. Preliminary experimental results on widely used test problems are promising. Comparisons with the VEGA approach are provided and discussed.
A MOPSO algorithm based exclusively on Pareto dominance concepts
 In Third International Conference on Evolutionary Multi-Criterion Optimization, EMO 2005
, 2005
Abstract

Cited by 12 (0 self)
to multiobjective problems it is unclear how global guides for particles should be selected. Previous work has relied on metric information in objective space, although this is at variance with the notion of dominance which is used to assess the quality of solutions. Here we propose methods based exclusively on dominance for selecting guides from a non-dominated archive. The methods are evaluated on standard test problems and we find that probabilistic selection favouring archival particles that dominate few particles provides good convergence towards and coverage of the Pareto front. We demonstrate that the scheme is robust to changes in objective scaling. We propose and evaluate methods for confining particles to the feasible region, and find that allowing particles to explore regions close to the constraint boundaries is important to ensure convergence to the Pareto front.
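The probabilistic guide selection described in this abstract can be sketched as follows. This is an illustrative Python rendering of the idea, not the authors' exact scheme: each archive member is weighted inversely to the number of swarm particles it dominates, so members covering under-explored parts of the front are favoured as guides.

```python
import random

# Illustrative sketch of dominance-based guide selection (not the authors'
# exact scheme): sample a global guide from the non-dominated archive,
# favouring members that dominate few of the current swarm particles.

def dominates(a, b):
    """True if `a` Pareto-dominates `b` under minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def select_guide(archive, swarm, rng=random):
    """Sample one archive member, weighting each by 1 / (1 + number of
    swarm particles it dominates)."""
    weights = [1.0 / (1 + sum(dominates(a, p) for p in swarm)) for a in archive]
    return rng.choices(archive, weights=weights, k=1)[0]
```

Because the weights depend only on dominance counts, not on distances in objective space, the selection is unaffected by rescaling any objective, which is consistent with the robustness to objective scaling reported above.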
Multiobjective Optimization Using Parallel Vector Evaluated Particle Swarm Optimization
 In Proceedings of the IASTED International Conference on Artificial Intelligence and Applications (AIA 2004)
, 2004
Abstract

Cited by 11 (2 self)
This paper studies a parallel version of the Vector Evaluated Particle Swarm Optimization (VEPSO) method for multiobjective problems. Experiments on well-known and widely used test problems are performed, aiming to investigate both the efficiency of VEPSO and the advantages of the parallel implementation. The obtained results are compared with the corresponding results of the Vector Evaluated Genetic Algorithm approach, demonstrating the superiority of VEPSO.
Covering Pareto-optimal fronts by subswarms in multi-objective particle swarm optimization
 In 2004 Congress on Evolutionary Computation (CEC 2004)
, 2004
Abstract

Cited by 10 (1 self)
Covering the whole set of Pareto-optimal solutions is a desired task of multiobjective optimization methods. Because in general it is not possible to determine this set, a restricted number of solutions is typically delivered to decision makers. In this paper, we propose a new method using multiobjective particle swarm optimization to cover the Pareto-optimal front. The method works in two phases. In phase 1 the goal is to obtain a good approximation of the Pareto front. In a second run, subswarms are generated to cover the Pareto front. The method is evaluated using different test functions and compared with an existing covering method using a real-world example in antenna design.
Pareto Evolutionary Neural Networks
 IEEE Transactions on Neural Networks
, 2003
Abstract

Cited by 9 (1 self)
For the purposes of forecasting (or classification) tasks, neural networks (NNs) are typically trained with respect to Euclidean distance minimisation. This is commonly the case irrespective of any other end-user preferences. In a number of situations, most notably time series forecasting, users may have other objectives in addition to Euclidean distance minimisation. Recent studies in the NN domain have confronted this problem by propagating a linear sum of errors. However, this approach implicitly assumes a priori knowledge of the error surface defined by the problem, which, typically, is not the case.
Some multiobjective optimizers are better than others
 In IEEE Congress on Evolutionary Computation
, 2003
Abstract

Cited by 9 (0 self)
The No-Free-Lunch (NFL) theorems hold for general multiobjective fitness spaces, in the sense that, over a space of problems which is closed under permutation, any two algorithms will produce the same set of multiobjective samples. However, there are salient ways in which NFL does not generally hold in multiobjective optimization. Previously we have shown that a ‘free lunch’ can arise when comparative metrics (rather than absolute metrics) are used for performance measurement. Here we show that NFL does not generally apply in multiobjective optimization when absolute performance metrics are used. This is because multiobjective optimizers usually combine a generator with an archiver. The generator corresponds to the ‘algorithm’ in the NFL sense, but the archiver filters the sample generated by the algorithm in a way that undermines the NFL assumptions. Essentially, if two multiobjective approaches have different archivers, their average performance may differ. We prove this, and hence show that we can say, without qualification, that some multiobjective approaches are better than others.