Results 11–20 of 49
Adaptive operator selection with bandits for a multiobjective evolutionary algorithm based on decomposition
 IEEE Transactions on Evolutionary Computation
, 2014
Cited by 6 (6 self)
Abstract—Adaptive operator selection (AOS) is used to determine the application rates of different operators in an online manner based on their recent performances within an optimization process. This paper proposes a bandit-based AOS method, fitness-rate-rank-based multi-armed bandit (FRRMAB). In order to track the dynamics of the search process, it uses a sliding window to record the recent fitness improvement rates achieved by the operators, while employing a decaying mechanism to increase the selection probability of the best operator. Not much work has been done on AOS in multiobjective evolutionary computation since it is very difficult to measure fitness improvements quantitatively in most Pareto-dominance-based multiobjective evolutionary algorithms. Multiobjective evolutionary algorithm based on decomposition (MOEA/D) decomposes a multiobjective optimization problem into a number of scalar optimization subproblems and optimizes them simultaneously. Thus, it is natural and feasible to use AOS in MOEA/D. We investigate several important issues in using FRRMAB in MOEA/D. Our experimental results demonstrate that FRRMAB is robust and its operator selection is reasonable. Comparison experiments also indicate that FRRMAB can significantly improve the performance of MOEA/D. Index Terms—Adaptive operator selection (AOS), decomposition, multi-armed bandit, multiobjective evolutionary algorithm based on decomposition (MOEA/D), multiobjective optimization.
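The credit-assignment and selection scheme summarized above can be sketched roughly as follows. This is a minimal illustration of the sliding-window / decayed-rank / UCB-style idea, not the authors' reference implementation; the function name `frrmab_select` and the constants `C` and `D` are assumptions for illustration.

```python
import math
from collections import deque

def frrmab_select(window, n_ops, C=5.0, D=1.0):
    """Pick an operator with a bandit rule driven by recent fitness
    improvement rates (FIRs), in the spirit of FRRMAB (a sketch only)."""
    # Credit assignment: sum the FIR values per operator over the window.
    reward = [0.0] * n_ops
    used = [0] * n_ops
    for op, fir in window:
        reward[op] += fir
        used[op] += 1
    # Any operator never applied within the window is tried first.
    if 0 in used:
        return used.index(0)
    # Rank operators by total reward and decay by D**rank, so the best
    # recent operator keeps a higher selection probability.
    order = sorted(range(n_ops), key=lambda i: reward[i], reverse=True)
    decayed = [0.0] * n_ops
    for rank, op in enumerate(order):
        decayed[op] = (D ** rank) * reward[op]
    total = sum(decayed) or 1.0
    frr = [d / total for d in decayed]
    # UCB-style balance of exploitation (FRR) and exploration.
    n_total = sum(used)
    return max(range(n_ops),
               key=lambda i: frr[i] + C * math.sqrt(2.0 * math.log(n_total) / used[i]))

# A sliding window of (operator index, fitness improvement rate) pairs.
window = deque([(0, 0.30), (1, 0.05), (0, 0.20), (2, 0.10)], maxlen=50)
chosen = frrmab_select(window, n_ops=3)
```

With a large `C` the exploration term dominates, so lightly tried operators are favoured even when another operator's recent rate is higher; shrinking `C` shifts selection toward the operator with the best fitness-rate rank.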
Multi-Objective Approaches to Optimal Testing Resource Allocation in Modular Software Systems
, 2009
Cited by 6 (2 self)
Abstract—Software testing is an important issue in software engineering. As software systems become increasingly large and complex, the problem of how to optimally allocate the limited testing resource during the testing phase has become more important, and more difficult. Traditional Optimal Testing Resource Allocation Problems (OTRAPs) involve seeking an optimal allocation of a limited amount of testing resource to a number of activities with respect to some objectives (e.g., reliability or cost). We suggest solving OTRAPs with Multi-Objective Evolutionary Algorithms (MOEAs). Specifically, we formulate OTRAPs as two types of multiobjective problems. First, we consider the reliability of the system and the testing cost as two objectives. Second, the total testing resource consumed is also taken into account as a third objective. The advantages of MOEAs over state-of-the-art single-objective approaches to OTRAPs are shown through empirical studies. Our study has revealed that a well-known MOEA, namely the Nondominated Sorting Genetic Algorithm II (NSGA-II), performs well on the first problem formulation, but fails on the second one. Hence, a Harmonic Distance Based Multi-Objective Evolutionary Algorithm (HaD-MOEA) is proposed and evaluated in this paper. Comprehensive experimental studies on both parallel-series and star-structure modular software systems have shown the superiority of HaD-MOEA over NSGA-II for OTRAPs. Index Terms—Multiobjective evolutionary algorithm, parallel-series
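Both formulations compare candidate allocations by Pareto dominance over their objective vectors. A minimal sketch of that comparison, assuming reliability is maximized and the remaining objectives (cost, and optionally consumed resource) are minimized; the function name and numeric values are illustrative, not taken from the paper:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b.
    Convention (illustrative): a[0] is reliability (maximize),
    all remaining entries are costs/resources (minimize)."""
    no_worse = a[0] >= b[0] and all(x <= y for x, y in zip(a[1:], b[1:]))
    better = a[0] > b[0] or any(x < y for x, y in zip(a[1:], b[1:]))
    return no_worse and better

# Two-objective formulation: (reliability, testing cost).
print(dominates((0.99, 120.0), (0.95, 150.0)))        # True

# The three-objective formulation adds consumed testing resource; the
# same two allocations can then become mutually non-dominated.
a, b = (0.99, 120.0, 800.0), (0.95, 150.0, 600.0)
print(dominates(a, b), dominates(b, a))               # False False
```

MOEAs such as NSGA-II build their non-dominated fronts from exactly this pairwise comparison, which is why adding a third objective can change which allocations survive selection.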
Problem definitions for performance assessment of multiobjective optimization algorithms
, 2007
Cited by 6 (1 self)
Optimizing multiple conflicting objectives results in more than one optimal solution (known as Pareto-optimal solutions). Although only one of these solutions will be adopted at the end, the recent trend in evolutionary and classical multiobjective optimization studies has focused on approximating the set of Pareto-optimal solutions. It is believed that such a set of solutions will collectively provide good insight into the different trade-off regions on the Pareto-optimal front, thereby aiding better and more confident decision making at the end. However, the type of Pareto-optimal approximation being sought strongly depends on the decision maker; here, aspects such as convergence to the Pareto-optimal front and the maintenance of solution diversity are important. Thus, to assess the performance of such optimization algorithms, the preferences of the decision maker must be taken into account. Evolutionary Multiobjective Optimization (EMO) methodologies were suggested in the early 1990s for this task, and since then a number of performance assessment methods have been suggested. Most of the existing simulation studies that compare different EMO methodologies are based on a limited subset of performance measures. After more than 10 years of research and development into efficient EMO algorithms, the time is now ripe for the
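One of the most widely used measures of such approximation sets is the hypervolume indicator, which rewards both convergence and diversity. A sketch for the bi-objective minimization case follows; the sweep algorithm is standard, but the reference point and front values are made up for illustration:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a bi-objective minimization front with respect to
    reference point ref (both objectives minimized). A standard sweep:
    sort by the first objective, then accumulate rectangles."""
    # Keep only points that strictly dominate the reference point.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                    # non-dominated along the sweep
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 5.0)))   # 8.0
```

A larger hypervolume means the set covers more of the objective space dominated up to the reference point, so the indicator can rank whole approximation sets rather than individual solutions.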
Convergence acceleration operator for multiobjective optimization
 IEEE Trans. Evol. Comput
, 2009
Cited by 6 (0 self)
Abstract—A convergence acceleration operator (CAO) is described which enhances the search capability and the speed of convergence of the host multiobjective optimization algorithm. The operator acts directly in the objective space to suggest improvements to solutions obtained by a multiobjective evolutionary algorithm (MOEA). The suggested improved objective vectors are then mapped into the decision variable space and tested. This method improves upon prior work in a number of important respects, such as the mapping technique and solution improvement. Further, the paper discusses implications for many-objective problems and studies the impact of the use of the CAO as the number of objectives increases. The CAO is incorporated with two leading MOEAs, the nondominated sorting genetic algorithm and the strength Pareto evolutionary algorithm, and tested. Results show that the hybridized algorithms consistently improve the speed of convergence of the original algorithm while maintaining the desired distribution of solutions. It is shown that the operator is a transferable component that can be hybridized with any MOEA. Index Terms—Evolutionary multiobjective optimization, neural networks.
Optimal µ-Distributions for the Hypervolume Indicator for Problems With Linear Bi-Objective Fronts: Exact and Exhaustive Results
 Simulated Evolution and Learning (SEAL 2010), Dec 2010, Kanpur, India
, 2010
Approximating the Set of Pareto Optimal Solutions in Both the Decision and Objective Spaces by an Estimation of Distribution Algorithm
, 2008
Cited by 3 (0 self)
Most existing multiobjective evolutionary algorithms aim at approximating the PF, the distribution of the Pareto optimal solutions in the objective space. In many real-life applications, however, a good approximation to the PS, the distribution of the Pareto optimal solutions in the decision space, is also required by a decision maker. This paper considers a class of MOPs in which the dimensionalities of the PS and the PF are different, so that a good approximation to the PF might not approximate the PS very well. It proposes a probabilistic model-based multiobjective evolutionary algorithm, called MMEA, for approximating the PS and the PF simultaneously for an MOP in this class. In the modelling phase of MMEA, the population is clustered into a number of subpopulations based on their distribution in the objective space, the PCA technique is used to detect the dimensionality of the centroid of each subpopulation, and then a probabilistic model is built for modelling the distribution of the Pareto optimal solutions in the decision space. This modelling procedure promotes population diversity in both the decision and objective spaces. To ease the burden of setting the number of subpopulations, a dynamic strategy for periodically adjusting it is adopted in MMEA. An experimental comparison between MMEA and two other methods, KP1 and Omni-Optimizer, has been made on a set of test instances, some of which are proposed in this paper. The experiments show that MMEA has a clear advantage over the two other methods in approximating both the PS and the PF of an MOP when the PS is a nonlinear manifold, although it might not perform significantly better when the PS is a linear manifold. Index Terms—Multiobjective optimization, Pareto optimality, estimation of distribution algorithm, principal component analysis.
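The modelling phase described above (cluster by objective vectors, then model each cluster in decision space) can be caricatured as follows. This is a loose sketch under stated assumptions: a toy k-means stands in for the paper's clustering details, and an axis-aligned Gaussian per cluster replaces MMEA's PCA-based manifold model; all names and data are hypothetical.

```python
import random
import statistics

def model_and_sample(population, objectives, k, n_new, seed=0):
    """Cluster solutions by their objective vectors, then sample new
    decision vectors from a per-cluster Gaussian model (a caricature of
    an EDA modelling step; the PCA dimensionality detection is omitted)."""
    rng = random.Random(seed)

    def sq_dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    # --- toy k-means in the objective space ---
    centres = rng.sample(objectives, k)
    assign = [0] * len(population)
    for _ in range(10):
        for i, f in enumerate(objectives):
            assign[i] = min(range(k), key=lambda c: sq_dist(f, centres[c]))
        for c in range(k):
            members = [objectives[i] for i, a in enumerate(assign) if a == c]
            if members:
                centres[c] = [statistics.fmean(col) for col in zip(*members)]

    # --- per-cluster Gaussian model in the decision space ---
    offspring = []
    for _ in range(n_new):
        c = rng.randrange(k)
        members = [population[i] for i, a in enumerate(assign) if a == c]
        if not members:
            continue  # empty cluster: skip this draw
        means = [statistics.fmean(col) for col in zip(*members)]
        sds = [statistics.pstdev(col) or 1e-3 for col in zip(*members)]
        offspring.append([rng.gauss(m, s) for m, s in zip(means, sds)])
    return offspring

# Toy data: one decision variable, two conflicting objectives.
pop = [[0.0], [0.1], [1.0], [1.1]]
objs = [[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [1.1, -0.1]]
children = model_and_sample(pop, objs, k=2, n_new=4)
```

Because each cluster keeps its own model, sampling preserves spread in both spaces, which is the diversity argument the abstract makes for modelling per subpopulation.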
Hybrid Line Search for Multiobjective Optimization
Cited by 3 (2 self)
Abstract. The aggregation of objectives is one of the simplest and most widely used approaches in multiple criteria programming, but it is well known that these techniques sometimes fail to determine the Pareto frontier properly. This paper proposes a new line search based approach for multicriteria optimization. The objectives are aggregated and the problem is transformed into a single-objective optimization problem. Then the line search method is applied and an approximate efficient point is located. Once the first Pareto solution is obtained, a simplified version of the method is used in the context of Pareto dominance to obtain a set of efficient points, which assures a thorough distribution of solutions on the Pareto frontier. In its current form, the proposed technique is well suited for problems with multiple objectives (it is not limited to bi-objective problems), though the functions to be optimized must be twice continuously differentiable. In order to assess the effectiveness of this approach, experiments were performed and compared with two well-known population-based metaheuristics, ParEGO [8] and NSGA-II [2]. Compared to ParEGO and NSGA-II, the proposed approach not only assures better convergence to the Pareto frontier but also yields a good distribution of solutions. From a computational point of view, the line search converges within a short time (on average about 150 milliseconds), and the generation of well-distributed solutions on the Pareto frontier is also very fast (about 20 milliseconds). Apart from this, the proposed technique is very simple and easy to implement and use for solving multiobjective problems.
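The aggregation step can be sketched as a weighted sum of objectives; the backtracking search below is a generic stand-in, not the paper's actual line search, and the weights and test functions are illustrative. A known caveat of weighted sums (one of the failure modes alluded to above) is that they cannot reach points on non-convex parts of the Pareto frontier.

```python
def aggregate(fs, weights):
    """Weighted-sum scalarization: turn a list of objectives fs into a
    single function (a sketch of the aggregation step only)."""
    return lambda x: sum(w * f(x) for f, w in zip(fs, weights))

def line_search_step(g, x, d, alpha=1.0, beta=0.5, tol=1e-12):
    """Backtracking step from x along direction d on scalar objective g."""
    gx = g(x)
    while alpha > tol:
        x_new = [xi + alpha * di for xi, di in zip(x, d)]
        if g(x_new) < gx:
            return x_new
        alpha *= beta
    return x

# Bi-objective toy problem: f1 = x^2, f2 = (x - 2)^2 in one variable.
f1 = lambda x: x[0] ** 2
f2 = lambda x: (x[0] - 2.0) ** 2
g = aggregate([f1, f2], [0.5, 0.5])        # scalarized minimizer at x = 1
x = [5.0]
for _ in range(50):
    # Finite-difference descent direction for the scalarized objective.
    d = [-(g([x[0] + 1e-6]) - g(x)) / 1e-6]
    x = line_search_step(g, x, d)
print(round(x[0], 3))                      # 1.0
```

Varying the weight vector and repeating the search produces different efficient points; the paper's Pareto-dominance phase then spreads such points across the frontier.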
Scalarization versus Indicator-based Selection in Multi-Objective CMA Evolution Strategies
Cited by 3 (3 self)
Abstract—While scalarization approaches to multi-criteria optimization become infeasible in the case of many objectives, for few objectives the benefits of population-based methods compared to a set of independent single-objective optimization trials on scalarized functions are not obvious. The multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES) is a powerful algorithm for real-valued multi-criteria optimization. This population-based approach combines mutation and strategy adaptation from the elitist CMA-ES with multi-objective selection. We empirically compare the steady-state MO-CMA-ES with different scalarization algorithms in which the elitist CMA-ES is used as the single-objective optimizer. Although only bi-criteria benchmark problems are considered, the MO-CMA-ES performs best in the overall comparison. However, if the scalarized problems have a structure that can easily be exploited by the CMA-ES and that is less apparent in the vector-valued fitness function, the CMA-ES with scalarization outperforms the population-based approach.
A Pareto Following Variation Operator for Fast-Converging Multiobjective Evolutionary Algorithms
Cited by 3 (2 self)
One of the major difficulties when applying multiobjective evolutionary algorithms (MOEAs) to real-world problems is the large number of objective function evaluations. Approximate (or surrogate) methods offer the possibility of reducing the number of evaluations without reducing solution quality. Artificial neural network (ANN) based models are one approach that has been used to approximate the future front from the currently available fronts with acceptable accuracy. However, the associated computational costs limit their effectiveness. In this work, we introduce a simple approach that has a comparatively smaller computational cost, and we have developed this model as a variation operator that can be used in any kind of multiobjective optimizer. When designing this model, we considered the whole search procedure as a dynamic system that takes the objective values in the current front as input and generates approximated design variables for the next front as output. Initial simulation experiments have produced encouraging results in comparison to NSGA-II. Our motivation was to increase the speed of the hosting optimizer. We have compared the performance of the algorithm with respect to the total number of function evaluations and the hypervolume metric. The variation operator has a worst-case complexity of O(nkN³), where N is the population size and n and k are the numbers of design variables and objectives, respectively.