Results 1–10 of 12
Bayesian Optimization Algorithm: From Single Level to Hierarchy
, 2002
Abstract

Cited by 101 (19 self)
There are four primary goals of this dissertation. First, design a competent optimization algorithm capable of learning and exploiting an appropriate problem decomposition by sampling and evaluating candidate solutions. Second, extend the proposed algorithm to enable the use of hierarchical decomposition as opposed to decomposition on only a single level. Third, design a class of difficult hierarchical problems that can be used to test algorithms that attempt to exploit hierarchical decomposition. Fourth, test the developed algorithms on the designed class of problems and several real-world applications. The dissertation proposes the Bayesian optimization algorithm (BOA), which uses Bayesian networks to model the promising solutions found so far and to sample new candidate solutions. BOA is theoretically and empirically shown to be capable of both learning a proper decomposition of the problem and exploiting the learned decomposition to ensure robust and scalable search for the optimum across a wide range of problems. The dissertation then identifies important features that must be incorporated into the basic BOA to solve problems that are not decomposable on a single level, but that can still be solved by decomposition over multiple levels of difficulty. Hierarchical …
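The model-build/sample loop at the heart of BOA can be sketched as follows. This is a deliberate simplification, not the dissertation's algorithm: a univariate marginal model (UMDA-style) stands in for BOA's Bayesian network, and OneMax (fitness = number of ones) is an assumed toy problem; all names and parameters are illustrative.

```python
import random

def umda_onemax(n_bits=20, pop_size=100, n_select=50, generations=30, seed=1):
    """EDA loop: select promising solutions, fit a probabilistic model
    (here a univariate marginal model, a simplification of BOA's
    Bayesian network), then sample the next population from the model."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)  # OneMax fitness: count of ones
        selected = pop[:n_select]        # truncation selection of promising solutions
        # Model building: estimate P(bit_i = 1) from the selected set.
        probs = [sum(ind[i] for ind in selected) / n_select for i in range(n_bits)]
        # Sampling: draw a new population from the learned model.
        pop = [[1 if rng.random() < p else 0 for p in probs] for _ in range(pop_size)]
    return max(sum(ind) for ind in pop)
```

A full BOA would replace the per-bit frequencies with a learned Bayesian network, which is what lets it capture the problem decomposition the abstract refers to.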
Parallel estimation of distribution algorithms
, 2002
Abstract

Cited by 25 (4 self)
The thesis deals with the new evolutionary paradigm based on the concept of Estimation of Distribution Algorithms (EDAs), which use a probabilistic model of the promising solutions found so far to obtain new candidate solutions to the optimized problem. There are six primary goals of this thesis: 1. Suggestion of a new formal description of EDAs. This high-level concept can be used to compare the generality of various probabilistic models by comparing the properties of the underlying mappings. Also, some convergence issues are discussed and theoretical ways for further improvement are proposed. 2. Development of a new probabilistic model and methods capable of dealing with continuous parameters. The resulting Mixed Bayesian Optimization Algorithm (MBOA) uses a set of decision trees to express the probability model. Its main advantage over the widely used IDEA and EGNA approaches is its backward compatibility with discrete domains, so it is uniquely capable of learning linkage between mixed continuous-discrete genes. MBOA handles the discretization of continuous parameters as an integral part of the learning process, which outperforms the histogram-based …
Bayesian Optimization Algorithms for Multi-Objective Optimization
 in Parallel Problem Solving from Nature – PPSN VII, ser. Lecture Notes in Computer Science
, 2002
Abstract

Cited by 23 (3 self)
In recent years, several researchers have concentrated on using probabilistic models in evolutionary algorithms. These Estimation of Distribution Algorithms (EDAs) incorporate methods for the automated learning of correlations between variables of the encoded solutions. The process of sampling new individuals from a probabilistic model respects these mutual dependencies, so that, in comparison with classical recombination operators, disruption of important building blocks is avoided. The goal of this paper is to investigate the usefulness of this concept in multi-objective optimization, where the aim is to approximate the set of Pareto-optimal solutions. We integrate the model-building and sampling techniques of a special EDA called the Bayesian Optimization Algorithm, based on binary decision trees, into an evolutionary multi-objective optimizer using a special selection scheme. The behavior of the resulting Bayesian Multi-objective Optimization Algorithm (BMOA) is empirically investigated on the multi-objective knapsack problem.
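The Pareto-optimality notion these multi-objective papers rely on reduces to a simple dominance test plus a filter. A minimal sketch (maximization of all objectives is assumed; the function names are illustrative, not from the paper):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(points):
    """Return the non-dominated subset of a list of objective vectors,
    i.e. the current approximation of the Pareto front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, among the vectors (1, 5), (3, 3), (5, 1), and (2, 2), only (2, 2) is dominated (by (3, 3)); the other three are mutually incomparable and form the front.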
Automatic Mutation Test Input Data Generation via Ant Colony
Abstract

Cited by 14 (0 self)
Fault-based testing is often advocated to overcome the limitations of other testing approaches; however, it is also recognized as being expensive. On the other hand, evolutionary algorithms have proved suitable for reducing the cost of data generation in the context of coverage-based testing. In this paper, we propose a new evolutionary approach based on ant colony optimization for automatic test input data generation in the context of mutation testing, to reduce the cost of such a test strategy. In our approach the ant colony optimization algorithm is enhanced by a probability density estimation technique. We compare our proposal with other evolutionary algorithms, e.g., the Genetic Algorithm. Our preliminary results on Java testbeds show that our approach performed significantly better than the alternatives.
Multiobjective hBOA, clustering, and scalability
 In Proceedings of the Genetic and Evolutionary Computation Conference GECCO-2005
, 2005
Abstract

Cited by 11 (4 self)
This paper describes a scalable algorithm for solving multi-objective decomposable problems by combining the hierarchical Bayesian optimization algorithm (hBOA) with the non-dominated sorting genetic algorithm (NSGA-II) and clustering in the objective space. It is first argued that, for good scalability, clustering or some other form of niching in the objective space is necessary and that the size of each niche should be approximately equal. The multi-objective hBOA (mohBOA) is then described, which combines hBOA, NSGA-II, and clustering in the objective space. mohBOA differs from the multi-objective variants of BOA and hBOA proposed in the past by including clustering in the objective space and allocating an approximately equally sized portion of the population to each cluster. mohBOA is shown to scale up well on a number of problems on which standard multi-objective evolutionary algorithms perform poorly.
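The clustering-with-equal-niches idea can be sketched as follows. This is a stand-in, not the paper's implementation: plain k-means (assumed here; the paper's clustering method may differ) partitions the objective vectors, and each cluster then receives an approximately equal share of the offspring budget.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over objective vectors (a stand-in for the paper's
    objective-space clustering step). Returns k lists of points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared Euclidean).
            j = min(range(k), key=lambda i: sum((a - b) ** 2
                                                for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Recompute centers; keep the old center if a cluster went empty.
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

def equal_allocation(clusters, total):
    """Split a population budget into approximately equal niche sizes,
    as mohBOA allocates an equal portion to each cluster."""
    k = len(clusters)
    share, extra = divmod(total, k)
    return [share + (1 if i < extra else 0) for i in range(k)]
```

With three clusters and a budget of 10, the allocation is [4, 3, 3]: the sizes differ by at most one, matching the "approximately equally sized portion" requirement.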
Evolutionary computations based on Bayesian classifiers
 International Journal of Applied Mathematics and Computer Science
, 2004
Abstract

Cited by 11 (1 self)
Evolutionary computation is a discipline that has been emerging for at least 40 or 50 years. All methods within this discipline are characterized by maintaining a set of possible solutions (individuals) and making them successively evolve to fitter solutions generation after generation. Examples of evolutionary computation paradigms are the broadly known Genetic Algorithms (GAs) and Estimation of Distribution Algorithms (EDAs). This paper contributes to the further development of this discipline by introducing a new evolutionary computation method based on the learning and subsequent simulation of a Bayesian classifier in every generation. In the method we propose, at each iteration the selected group of individuals of the population is divided into different classes depending on their respective fitness values. Afterwards, a Bayesian classifier (either naive Bayes, semi-naive Bayes, tree-augmented naive Bayes, or a similar one) is learned to model the corresponding supervised classification problem. The simulation of this Bayesian classifier provides the individuals that form the next generation. Experimental results are presented to compare the performance of this new method with different types of EDAs and GAs. The problems …
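One generation of the classifier-driven scheme can be sketched as below. Assumptions not in the abstract: binary genes, OneMax-style fitness (bit-sum), two fitness classes, and sampling only the naive Bayes model of the best class; the paper's classifiers and simulation procedure are richer than this.

```python
import random

def evolve_step(pop, n_classes=2, seed=0):
    """One generation of the classifier-based idea: label individuals
    with fitness classes, fit a naive Bayes model (per-bit frequencies
    conditioned on class), then sample the next generation from the
    model of the best class."""
    rng = random.Random(seed)
    pop = sorted(pop, key=sum)          # OneMax-style fitness: bit-sum
    size = len(pop) // n_classes
    classes = [pop[i * size:(i + 1) * size] for i in range(n_classes)]
    best = classes[-1]                  # highest-fitness class
    n_bits = len(pop[0])
    # Naive Bayes class-conditional model: P(bit_i = 1 | class = best).
    probs = [sum(ind[i] for ind in best) / len(best) for i in range(n_bits)]
    # Simulation: sample new individuals from the best-class model.
    return [[1 if rng.random() < p else 0 for p in probs] for _ in range(len(pop))]
```

For example, if the top class consists of [1,1,0] and [1,1,1], the model fixes the first two bits at probability 1 and samples the third at 0.5, so every offspring starts with 1, 1.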
A model-based evolutionary algorithm for bi-objective optimization
 in Proceedings of the Congress on Evolutionary Computation (CEC)
, 2005
Abstract

Cited by 10 (3 self)
The Pareto-optimal solutions to a multi-objective optimization problem often distribute very regularly in both the decision space and the objective space. Most existing evolutionary algorithms do not explicitly take advantage of such regularity. This paper proposes a model-based evolutionary algorithm (MMOEA) for bi-objective optimization problems. Inspired by ideas from estimation of distribution algorithms, MMOEA uses a probability model to capture the regularity of the distribution of the Pareto-optimal solutions. Local PCA and the least-squares method are employed for building the model, and new solutions are sampled from the model thus built. At alternate generations, MMOEA uses crossover and mutation to produce new solutions. The selection in MMOEA is the same as in NSGA-II. Therefore, MMOEA can be regarded as a combination of an EDA and NSGA-II. The preliminary experimental results show that MMOEA performs better than NSGA-II.
Combining model-based and genetics-based offspring generation for multi-objective optimization using a convergence criterion
 in Proceedings of the Congress on Evolutionary Computation (CEC)
, 2006
Abstract

Cited by 7 (4 self)
In our previous work [1], it has been shown that the performance of evolutionary multi-objective algorithms can be greatly enhanced if the regularity in the distribution of Pareto-optimal solutions is taken advantage of using a probabilistic model. This paper suggests a new hybrid multi-objective evolutionary algorithm that introduces a convergence-based criterion to determine when the model-based method and when the genetics-based method should be used to generate offspring in each generation. The basic idea is that the genetics-based method, i.e., crossover and mutation, should be used when the population is far away from the Pareto front and no obvious regularity in the population distribution can be observed. When the population moves towards the Pareto front, the distribution of the individuals shows increasing regularity, and in this case the model-based method should be used to generate offspring. The proposed hybrid method is verified on widely used test problems, and our simulation results show that the method is effective in achieving Pareto-optimal solutions compared to two state-of-the-art evolutionary multi-objective algorithms, NSGA-II and SPEA2, and our previous method in [1].
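The operator-switching idea can be sketched as follows. This is only an illustration of the decision logic: the relative-improvement measure, the threshold, and the history list are all assumptions for this sketch, standing in for the paper's actual convergence criterion.

```python
def choose_operator(history, threshold=0.01):
    """Pick the offspring generator from a convergence measure: while the
    population is still improving quickly (far from the Pareto front, no
    regularity yet), use genetics-based variation (crossover/mutation);
    once improvement stalls and the distribution shows regularity, switch
    to model-based sampling. `history` is a list of per-generation quality
    indicators (hypothetical; the paper defines its own criterion)."""
    if len(history) < 2:
        return "genetics"  # no trend yet: default to genetic operators
    improvement = abs(history[-1] - history[-2]) / max(abs(history[-2]), 1e-12)
    return "genetics" if improvement > threshold else "model"
```

Used inside a generation loop, this gives the hybrid behavior described above: genetic operators early on, model-based sampling as the population converges toward the front.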
Hybrid Estimation of Distribution Algorithm for Multiobjective Knapsack Problem
 in EvoCOP 2004, LNCS 3004
, 2004
Abstract

Cited by 3 (1 self)
We propose a hybrid estimation of distribution algorithm (MOHEDA) for solving the multi-objective 0/1 knapsack problem (MOKP). A local search based on the weighted-sum method is proposed, and a random repair method (RRM) is used to handle the constraints. Moreover, for the purpose of diversity preservation, a new and fast clustering method, called the stochastic clustering method (SCM), is introduced for mixture-based modelling. The experimental results indicate that MOHEDA outperforms several other state-of-the-art algorithms.
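The two building blocks named above, weighted-sum scalarization and random repair, can be sketched for the 0/1 knapsack setting as below. The function names and data layout are illustrative; in particular, the repair order here is uniformly random, which is only an assumption about how the paper's RRM might be realized.

```python
import random

def weighted_sum(profits, x, w):
    """Scalarize multiple objectives: sum over objectives k of
    w[k] * (total profit of packed items under objective k).
    `profits[k][i]` is the profit of item i under objective k;
    `x` is a 0/1 packing vector."""
    return sum(wk * sum(p[i] for i in range(len(x)) if x[i])
               for wk, p in zip(w, profits))

def random_repair(weights, capacity, x, seed=0):
    """RRM-style repair sketch: randomly drop packed items until the
    single knapsack capacity constraint is satisfied."""
    rng = random.Random(seed)
    x = list(x)
    packed = [i for i, bit in enumerate(x) if bit]
    rng.shuffle(packed)  # drop in random order
    while sum(weights[i] for i, bit in enumerate(x) if bit) > capacity and packed:
        x[packed.pop()] = 0
    return x
```

For instance, with per-objective profits [[3, 1, 4], [1, 5, 9]], packing x = [1, 1, 0], and weights (0.5, 0.5), the scalarized value is 0.5·(3+1) + 0.5·(1+5) = 5.0; the local search would then try to improve this scalar under the capacity constraint.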