Results 1–10 of 38
Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles
, 2011
Cited by 25 (6 self)
We present a canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space X into a continuous-time black-box optimization method on X, the information-geometric optimization (IGO) method. Invariance as a major design principle keeps the number of arbitrary choices to a minimum. The resulting method conducts a natural gradient ascent using an adaptive, time-dependent transformation of the objective function, and makes no particular assumptions about the objective function to be optimized. The IGO method produces explicit IGO algorithms through time discretization. The cross-entropy method is recovered in a particular case with a large time step, and can be extended into a smoothed, parametrization-independent maximum likelihood update. When applied to specific families of distributions on discrete or continuous spaces, the IGO framework allows one to naturally recover versions …
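As a concrete (and heavily simplified) illustration of what a time-discretized IGO algorithm can look like, consider the family of independent Bernoulli distributions on {0,1}^n. For this family the Fisher metric makes the natural gradient step reduce to θ_i ← θ_i + δt · Σ_k w_k (x_k,i − θ_i), where the w_k are rank-based weights. The sketch below applies this to the OneMax function; the top-quartile weighting scheme, parameter values, and clipping bounds are illustrative choices, not the paper's:

```python
import random

def onemax(x):
    return sum(x)  # objective: number of ones (to maximize)

def igo_bernoulli(n=16, pop=50, dt=0.2, gens=120, seed=3):
    """One possible time discretization of IGO for independent
    Bernoulli(theta_i) distributions on {0,1}^n (illustrative only)."""
    rng = random.Random(seed)
    theta = [0.5] * n
    for _ in range(gens):
        xs = [[1 if rng.random() < theta[i] else 0 for i in range(n)]
              for _ in range(pop)]
        # Rank-based weights: equal positive weight on the top quartile,
        # zero elsewhere (a crude stand-in for IGO's quantile rewriting
        # of the objective).
        order = sorted(range(pop), key=lambda k: onemax(xs[k]), reverse=True)
        mu = pop // 4
        w = [0.0] * pop
        for r in range(mu):
            w[order[r]] = 1.0 / mu
        # Natural gradient step for the Bernoulli family:
        # theta_i += dt * sum_k w_k * (x_k[i] - theta_i).
        for i in range(n):
            g = sum(w[k] * (xs[k][i] - theta[i]) for k in range(pop))
            theta[i] = min(max(theta[i] + dt * g, 0.01), 0.99)
    return theta

theta = igo_bernoulli()
```

Note that with these weights the update shifts each θ_i toward the frequency of ones among the top-ranked samples, which is exactly the PBIL/UMDA-like rule the abstract says the framework recovers on discrete spaces.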
Linear and Combinatorial Optimizations by Estimation of Distribution Algorithms
, 2002
Cited by 14 (5 self)
Estimation of Distribution Algorithms (EDAs) are a new area of Evolutionary Computation. In EDAs there are neither crossover nor mutation operators; a new population is generated by sampling a probability distribution estimated from a database of selected individuals of the previous generation. Different approaches have been proposed for estimating this probability distribution. In this paper we review the different EDA approaches and show how to apply UMDA with Laplace correction to the Subset Sum, OneMax, and n-Queens problems of linear and combinatorial optimization. Experimental results on the three problems, comparing the performance of UMDA with that of a Genetic Algorithm (GA), are provided. In our experiments UMDA outperforms the GA on linear problems.
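The UMDA-with-Laplace-correction loop described above can be sketched roughly as follows, here maximizing OneMax (parameter values and the truncation-selection scheme are illustrative, not the paper's implementation):

```python
import random

def onemax(x):
    return sum(x)  # fitness: number of ones

def umda_onemax(n=20, pop=100, sel=50, gens=40, seed=0):
    """UMDA: estimate independent bitwise marginals from the selected
    individuals, with Laplace correction, then resample the population."""
    rng = random.Random(seed)
    p = [0.5] * n  # marginal probability of a 1 at each position
    best = [0] * n
    for _ in range(gens):
        population = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                      for _ in range(pop)]
        population.sort(key=onemax, reverse=True)
        if onemax(population[0]) > onemax(best):
            best = population[0]
        selected = population[:sel]  # truncation selection
        # Laplace-corrected marginal: (count + 1) / (sel + 2) keeps every
        # probability strictly inside (0, 1), so no bit value dies out.
        p = [(sum(ind[i] for ind in selected) + 1) / (sel + 2)
             for i in range(n)]
    return best, p

best, p = umda_onemax()
```

The Laplace correction is what distinguishes this from plain frequency counting: without it, a bit whose selected frequency hits 0 or 1 could never be sampled differently again.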
On the performance of Estimation of Distribution Algorithms applied to Software Testing
, 2003
Cited by 9 (3 self)
One of the most important issues in software testing is the generation of the input cases used during the test. Due to the high cost of this task, its automation has become a key concern.
Program evolution by integrating EDP and GP
 In Genetic and Evolutionary Computation Conference
, 2004
Cited by 7 (0 self)
This paper discusses the performance of a hybrid system consisting of EDP and GP. EDP, Estimation of Distribution Programming, is a program evolution method based on a probabilistic model, in which the probability distribution over programs is estimated using a Bayesian network, and the population evolves by repeatedly estimating the distribution and generating programs, without crossover or mutation. Applying the hybrid EDP-GP system to various problems, we discovered some important tendencies in its behavior. The hybrid system was not only superior to pure GP in search performance but also showed interesting features in program evolution. Further tests revealed how and when EDP and GP compensate for each other. We present experimental results of program evolution with the hybrid system and discuss the characteristics of both EDP and GP.
The convergence behavior of the PBIL algorithm: a preliminary approach.
Cited by 7 (2 self)
In this technical report, the simplest version of the PBIL algorithm is applied to the minimization of the counting-ones function in two dimensions.
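The "simplest version of PBIL" referred to above can be sketched as follows, on the same two-dimensional counting-ones minimization (the population size, learning rate, and generation count here are illustrative, not taken from the report):

```python
import random

def count_ones(x):
    return sum(x)  # objective to minimize

def pbil_min_ones(n=2, pop=50, alpha=0.1, gens=100, seed=1):
    """Simplest PBIL: sample a population from a probability vector, then
    shift the vector toward the single best sample of the generation."""
    rng = random.Random(seed)
    p = [0.5] * n
    for _ in range(gens):
        population = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                      for _ in range(pop)]
        best = min(population, key=count_ones)  # minimization
        # Update rule: p <- (1 - alpha) * p + alpha * best
        p = [(1 - alpha) * pi + alpha * bi for pi, bi in zip(p, best)]
    return p

p = pbil_min_ones()
```

Since the optimum is the all-zeros string, the probability vector is driven toward 0 in every component; this geometric decay toward the best sample is the convergence behavior the report analyzes.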
Reinforcement Learning Estimation of Distribution Algorithm
 Proceedings of the Genetic and Evolutionary Computation Conference 2003 (GECCO-2003), Lecture Notes in Computer Science (LNCS) 2724
, 2003
Cited by 6 (4 self)
This paper proposes an algorithm for combinatorial optimization that uses reinforcement learning and estimation of the joint probability distribution of promising solutions to generate a new population of solutions. We call it the Reinforcement Learning Estimation of Distribution Algorithm (RELEDA). For the estimation of the joint probability distribution we treat each variable as univariate, and we update the probability of each variable by applying a reinforcement learning method. Although we treat the variables as independent of one another, the proposed method can solve problems with highly correlated variables. To compare the efficiency of our proposed algorithm with other Estimation of Distribution Algorithms (EDAs), we provide experimental results on two problems: the four-peaks problem and the bipolar function.
Multiobjective Combinatorial Optimisation with Coincidence Algorithm
Cited by 5 (4 self)
Most optimization algorithms that use probabilistic models focus on extracting information from the good solutions found in the population. A selection method discards the below-average solutions, which then contribute no information to the model update. This work proposes a new algorithm, Combinatorial Optimization with Coincidence (COIN), that makes use of both good and not-good solutions. A generator, representing a probabilistic model of the desired solution, is used to sample candidate solutions. Reward and punishment schemes are incorporated into the generator update, with the updating values defined by the selected good and not-good solutions. It has been observed that the not-good solutions help the algorithm avoid producing bad solutions. A multiobjective version of COIN is also introduced, and results on several multiobjective benchmarks drawn from real-world industrial applications are reported.
Selection of the Most Useful Subset of Genes for Gene Expression-Based Classification
 in Proceedings of the IEEE Congress on Evolutionary Computation
, 2004
Cited by 5 (2 self)
Recently, there has been growing interest in the classification of patient samples based on gene expression. Here the classification task is made more difficult by the noisy nature of the data and by the overwhelming number of genes relative to the number of available training samples in the data set. Moreover, many of these genes are irrelevant for classification and have a negative effect on the accuracy and on the required learning time of the classifier. In this paper, we propose a new evolutionary computation method to select the most useful subset of genes for molecular classification. We apply this method to three benchmark data sets and present our unbiased experimental results.
Population Based Incremental Learning with Guided Mutation Versus Genetic Algorithms
 Iterated Prisoner's Dilemma. Proceedings of the Congress on Evolutionary Computation 2005 (CEC-2005)
, 2005
Cited by 4 (3 self)
Axelrod's original experiments for evolving IPD player strategies involved the use of a basic GA. In this paper we examine how well a simple GA performs against the more recent Population Based Incremental Learning system under similar conditions. We find that the GA performs slightly better than standard PBIL under most conditions. This difference in performance can be mitigated, and reversed, through the use of a 'guided' mutation operator.
A diversity-maintaining population-based incremental learning algorithm
 Information Sciences
, 2008
Cited by 4 (1 self)
In this paper we propose a new probability update rule and sampling procedure for population-based incremental learning. These methods are based on the concept of opposition as a means of controlling the amount of diversity within a given sample population. We prove that under this scheme we can asymptotically guarantee a higher diversity, which allows for greater exploration of the search space. The presented probabilistic algorithm is specifically for applications in the binary domain. The benchmark data used for the experiments are commonly used deceptive and attractor-basin functions, as well as 10 common Travelling Salesman Problem instances. Our experimental results focus on the effect of parameters and problem size on the accuracy of the algorithm, as well as on a comparison with traditional population-based incremental learning. We show that the new algorithm is able to effectively utilize the increased diversity of opposition, which leads to significantly improved results over traditional population-based incremental learning. Keywords: population-based incremental learning, opposition-based computing, diversity maintenance, diversity control.
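The paper's actual update rule, sampling procedure, and diversity proof are more involved, but the core opposition idea, pairing each binary sample with its bitwise complement so the candidate set always spans both "sides" of the probability vector, can be sketched as follows (the function name, parameters, and the use of OneMax are all illustrative assumptions):

```python
import random

def onemax(x):
    return sum(x)  # fitness: number of ones (to maximize)

def opposition_pbil(n=16, pairs=25, alpha=0.1, gens=100, seed=2):
    """PBIL-style loop in which every sample is paired with its opposite
    (bitwise complement), guaranteeing diverse candidates each generation."""
    rng = random.Random(seed)
    p = [0.5] * n
    for _ in range(gens):
        candidates = []
        for _ in range(pairs):
            x = [1 if rng.random() < p[i] else 0 for i in range(n)]
            candidates.append(x)
            candidates.append([1 - b for b in x])  # opposite sample
        best = max(candidates, key=onemax)
        # Standard PBIL shift toward the best candidate of the generation.
        p = [(1 - alpha) * pi + alpha * bi for pi, bi in zip(p, best)]
    return p

p = opposition_pbil()
```

Even when p has nearly converged, every generation still contains the complements of its samples, which is the sense in which opposition keeps diversity from collapsing.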