Results 1–10 of 33
A Survey of Optimization by Building and Using Probabilistic Models
 COMPUTATIONAL OPTIMIZATION AND APPLICATIONS
, 1999
"... This paper summarizes the research on populationbased probabilistic search algorithms based on modeling promising solutions by estimating their probability distribution and using the constructed model to guide the further exploration of the search space. It settles the algorithms in the field of ge ..."
Abstract

Cited by 275 (82 self)
This paper summarizes the research on population-based probabilistic search algorithms based on modeling promising solutions by estimating their probability distribution and using the constructed model to guide the further exploration of the search space. It situates the algorithms within the field of genetic and evolutionary computation, where they originated. All methods are classified into a few classes according to the complexity of the class of models they use. Algorithms from each of these classes are briefly described and their strengths and weaknesses are discussed.
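The loop the survey describes — estimate a distribution over selected promising solutions, then sample it to guide further search — can be sketched with the simplest model class (independent bit marginals, UMDA-style) on a hypothetical OneMax objective. The function name, parameter values, and clamping constants are illustrative assumptions, not from the survey:

```python
import random

def umda_onemax(n=20, pop_size=100, select=50, generations=50, seed=1):
    """Minimal univariate EDA (UMDA-style) maximizing OneMax (count of 1 bits)."""
    rng = random.Random(seed)
    p = [0.5] * n                 # start from a uniform model: each bit is 1 w.p. 0.5
    best = None
    for _ in range(generations):
        # Sample a population from the current probabilistic model.
        pop = [[int(rng.random() < p[i]) for i in range(n)] for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)            # OneMax fitness = sum of bits
        promising = pop[:select]                   # truncation selection
        if best is None or sum(pop[0]) > sum(best):
            best = pop[0]
        # Re-estimate the marginal probabilities from the promising solutions,
        # clamped away from 0/1 to keep some exploration alive.
        p = [min(0.95, max(0.05, sum(x[i] for x in promising) / select))
             for i in range(n)]
    return best

best = umda_onemax()
```

With these settings the marginals typically concentrate near the all-ones string within a few dozen generations.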
Escaping Hierarchical Traps with Competent Genetic Algorithms
 Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001)
, 2001
"... To solve hierarchical problems, one must be able to learn the linkage, represent partial solutions efficiently, and assure effective niching. We propose the hierarchical ... ..."
Abstract

Cited by 85 (46 self)
To solve hierarchical problems, one must be able to learn the linkage, represent partial solutions efficiently, and assure effective niching. We propose the hierarchical ...
Bayesian Optimization Algorithm, Decision Graphs, and Occam's Razor
 Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001), 519–526. Also IlliGAL
, 2001
"... This paper discusses the use of various scoring metrics in the Bayesian optimization algorithm (BOA) which uses Bayesian networks to model promising solutions and generate the new ones. The use of decision graphs in Bayesian networks to improve the performance of the BOA is proposed. To favor simple ..."
Abstract

Cited by 37 (20 self)
This paper discusses the use of various scoring metrics in the Bayesian optimization algorithm (BOA), which uses Bayesian networks to model promising solutions and generate new ones. The use of decision graphs in Bayesian networks to improve the performance of the BOA is proposed. To favor simple models, a complexity measure is incorporated into the Bayesian-Dirichlet metric for Bayesian networks with decision graphs. The presented modifications are compared on a number of interesting problems.
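As a concrete illustration of penalizing model complexity when scoring network structures, the sketch below uses a BIC-style penalized log-likelihood as a stand-in for the complexity-augmented Bayesian-Dirichlet metric; the data set, the variable indices, and the BIC choice itself are all assumptions for illustration:

```python
import math
from collections import Counter

def penalized_score(data, child, parents):
    """BIC-style score of one binary node given a candidate parent set:
    log-likelihood minus a penalty that grows with the number of parameters."""
    n = len(data)
    # Count child values within each configuration of the parents.
    joint = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    ctx = Counter(tuple(row[p] for p in parents) for row in data)
    loglik = sum(c * math.log(c / ctx[cfg]) for (cfg, _), c in joint.items())
    free_params = 2 ** len(parents)   # one Bernoulli parameter per parent configuration
    return loglik - 0.5 * free_params * math.log(n)

# Hypothetical data: X1 copies X0 exactly; X2 is independent noise.
data = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)] * 25
scores = {parents: penalized_score(data, 1, parents)
          for parents in [(), (0,), (0, 2)]}
# The complexity penalty selects the single true parent over both the
# empty parent set and the needlessly complex one.
best_parents = max(scores, key=scores.get)
```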
Parallel estimation of distribution algorithms
, 2002
"... The thesis deals with the new evolutionary paradigm based on the concept of Estimation of Distribution Algorithms (EDAs) that use probabilistic model of promising solutions found so far to obtain new candidate solutions of optimized problem. There are six primary goals of this thesis: 1. Suggestion ..."
Abstract

Cited by 22 (3 self)
The thesis deals with the new evolutionary paradigm based on the concept of Estimation of Distribution Algorithms (EDAs), which use a probabilistic model of the promising solutions found so far to obtain new candidate solutions of the optimized problem. There are six primary goals of this thesis: 1. Suggestion of a new formal description of the EDA algorithm. This high-level concept can be used to compare the generality of various probabilistic models by comparing the properties of the underlying mappings. Also, some convergence issues are discussed and theoretical ways for further improvements are proposed. 2. Development of a new probabilistic model and methods capable of dealing with continuous parameters. The resulting Mixed Bayesian Optimization Algorithm (MBOA) uses a set of decision trees to express the probability model. Its main advantage over the widely used IDEA and EGNA approaches is its backward compatibility with discrete domains, so it is uniquely capable of learning linkage between mixed continuous-discrete genes. MBOA handles the discretization of continuous parameters as an integral part of the learning process, which outperforms the histogram-based ...
Evolutionary Optimization and the Estimation of Search Distributions with Applications to Graph Bipartitioning
 Journal of Approximate Reasoning
, 2002
"... We present a theory of population based optimization methods using approximations of search distributions. We prove convergence of the search distribution to the global optima for the Factorized Distribution Algorithm FDA if the search distribution is a Boltzmann distribution and the size of the pop ..."
Abstract

Cited by 18 (4 self)
We present a theory of population-based optimization methods using approximations of search distributions. We prove convergence of the search distribution to the global optima for the Factorized Distribution Algorithm (FDA) if the search distribution is a Boltzmann distribution and the size of the population is large enough. Convergence is defined in a strong sense: the global optima are attractors of a dynamical system describing the algorithm mathematically. We investigate an adaptive annealing schedule and show its similarity to truncation selection. The inverse temperature beta is changed inversely proportionally to the standard deviation of the population. We extend FDA by using a Bayesian hyperparameter. The hyperparameter is related to mutation in evolutionary algorithms. We derive an upper bound on the hyperparameter to ensure that FDA still generates the optima with high probability. We discuss the relation of the FDA approach to methods used in statistical physics to approximate a Boltzmann distribution and to belief propagation in probabilistic reasoning. In the last part, we apply the algorithm to an important practical problem, the bipartitioning of large graphs. We assume that the graphs are sparsely connected. Our empirical results are as good as or better than any other method used for this problem.
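A rough sketch of the annealing idea — resample from a Boltzmann distribution p(x) ∝ exp(beta · f(x)) while growing beta by a step inversely proportional to the population's fitness standard deviation — might look as follows. The OneMax objective, the constant c, the mutation rate, and all other settings are illustrative assumptions, not the paper's:

```python
import math
import random
import statistics

def boltzmann_resample(pop, f, beta, rng):
    """Resample the population with weights proportional to exp(beta * f(x)).
    Fitness values are shifted by the maximum to avoid overflow; the
    resulting sampling distribution is unchanged."""
    fmax = max(f(x) for x in pop)
    weights = [math.exp(beta * (f(x) - fmax)) for x in pop]
    return [list(x) for x in rng.choices(pop, weights=weights, k=len(pop))]

def anneal_onemax(n=12, pop_size=200, steps=30, c=1.0, seed=3):
    rng = random.Random(seed)
    f = sum                                    # OneMax fitness
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    beta = 0.0
    for _ in range(steps):
        sd = statistics.pstdev(f(x) for x in pop) or 1.0
        beta += c / sd                         # beta step inversely proportional to std
        pop = boltzmann_resample(pop, f, beta, rng)
        for x in pop:                          # light bit-flip mutation keeps diversity,
            if rng.random() < 0.2:             # playing the role of the hyperparameter
                x[rng.randrange(n)] ^= 1
    return max(pop, key=f)

best = anneal_onemax()
```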
On Stability of Fixed Points of Limit Models of Univariate Marginal Distribution Algorithm and Factorized Distribution Algorithm
 IEEE Trans. on Evolutionary Computation, Accepted
, 2003
"... Abstract—This paper aims to study the advantages of using higher order statistics in estimation distribution of algorithms (EDAs). We study two EDAs with twotournament selection for discrete optimization problems. One is the univariate marginal distribution algorithm (UMDA) using only firstorder s ..."
Abstract

Cited by 14 (5 self)
This paper aims to study the advantages of using higher-order statistics in estimation of distribution algorithms (EDAs). We study two EDAs with two-tournament selection for discrete optimization problems. One is the univariate marginal distribution algorithm (UMDA), using only first-order statistics, and the other is the factorized distribution algorithm (FDA), using higher-order statistics. We introduce the heuristic functions and the limit models of these two algorithms and analyze the stability of these limit models. It is shown that the limit model of UMDA can be trapped at any local optimal solution for some initial probability models. However, degenerate probability density functions (pdfs) at some local optimal solutions are unstable in the limit model of FDA. In particular, the degenerate pdf at the global optimal solution is the unique asymptotically stable point in the limit model of FDA for the optimization of an additively decomposable function. Our results suggest that using higher-order statistics could improve the chance of finding the global optimal solution. Index Terms—Estimation of distribution algorithms (EDAs), factorized distribution algorithm (FDA), heuristic function, stability, univariate marginal distribution algorithm (UMDA).
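The gap between first-order and higher-order statistics can be illustrated on a two-bit problem whose promising solutions are exactly {00, 11}: independent marginals (UMDA's model class) regenerate 01 and 10 about half the time, while a model capturing the pairwise dependency does not. The two sampling routines below are illustrative sketches, not the paper's algorithms:

```python
import random

def sample_univariate(selected, k, rng):
    """Sample from independent first-order marginals estimated on the
    selected set (what UMDA's model class can express)."""
    n = len(selected[0])
    p = [sum(x[i] for x in selected) / len(selected) for i in range(n)]
    return [tuple(int(rng.random() < p[i]) for i in range(n)) for _ in range(k)]

def sample_joint(selected, k, rng):
    """Sample from the empirical joint distribution, which a factorization
    covering the linked pair (higher-order statistics) can represent."""
    return [tuple(rng.choice(selected)) for _ in range(k)]

rng = random.Random(0)
selected = [(0, 0), (1, 1)] * 50          # promising solutions: only 00 and 11
uni = sample_univariate(selected, 1000, rng)
joint = sample_joint(selected, 1000, rng)
# With marginals p = [0.5, 0.5] the linkage is lost: roughly half the
# univariate samples are 01 or 10, which the joint model never produces.
uni_hit = sum(s in {(0, 0), (1, 1)} for s in uni) / len(uni)
```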
Sporadic model building for efficiency enhancement of hierarchical BOA
, 2007
"... Efficiency enhancement techniques—such as parallelization and hybridization—are among the most important ingredients of practical applications of genetic and evolutionary algorithms and that is why this research area represents an important niche of evolutionary computation. This paper describes and ..."
Abstract

Cited by 13 (8 self)
Efficiency enhancement techniques—such as parallelization and hybridization—are among the most important ingredients of practical applications of genetic and evolutionary algorithms, which is why this research area represents an important niche of evolutionary computation. This paper describes and analyzes sporadic model building, which can be used to enhance the efficiency of the hierarchical Bayesian optimization algorithm (hBOA) and other estimation of distribution algorithms (EDAs) that use complex multivariate probabilistic models. With sporadic model building, the structure of the probabilistic model is updated once in every few iterations (generations), whereas in the remaining iterations only the model parameters (conditional and marginal probabilities) are updated. Since the time complexity of updating the model parameters is much lower than that of learning the model structure, sporadic model building decreases the overall time complexity of model building. The paper shows that for boundedly difficult nearly decomposable and hierarchical optimization problems, sporadic model building leads to a significant model-building speedup, which decreases the asymptotic time complexity of model building in hBOA by a factor of Θ(n^0.26) to Θ(n^0.5), where n is the problem size. On the other hand, sporadic model building also increases the number of evaluations until convergence; nonetheless, if model building is the bottleneck, the evaluation slowdown is insignificant compared to the gains in the asymptotic complexity of model building. The paper also presents a dimensional model that provides a heuristic for scaling the structure-building period, which is the only parameter of the proposed sporadic model-building approach. The paper then tests the proposed method and the rule for setting the structure-building period on the problem of finding ground states of 2D and 3D Ising spin glasses.
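The control flow of sporadic model building reduces to a small skeleton: relearn the structure only once per period, refit the parameters every generation. Everything below — the callables, the halving truncation selection, and the period heuristic of roughly √n generations — is an illustrative assumption, not hBOA's actual implementation:

```python
import math

def run_eda_sporadic(evaluate, learn_structure, learn_parameters, sample,
                     init_pop, generations, n):
    """EDA loop with sporadic model building: the expensive structure search
    runs only every `period` generations; the cheap parameter fit runs every
    generation."""
    period = max(1, round(math.sqrt(n)))      # structure-building period heuristic
    pop = init_pop()
    structure, calls = None, {"structure": 0, "parameters": 0}
    for g in range(generations):
        promising = sorted(pop, key=evaluate, reverse=True)[: len(pop) // 2]
        if structure is None or g % period == 0:
            structure = learn_structure(promising)        # expensive step
            calls["structure"] += 1
        params = learn_parameters(structure, promising)   # cheap counting step
        calls["parameters"] += 1
        pop = sample(structure, params, len(pop))
    return pop, calls

# Stub components just to exercise the control flow (n=16 gives period 4,
# so 12 generations trigger structure learning at g = 0, 4, 8 only).
pop, calls = run_eda_sporadic(
    evaluate=sum,
    learn_structure=lambda sel: "stub-structure",
    learn_parameters=lambda s, sel: {},
    sample=lambda s, p, k: [[0, 1, 0, 1] for _ in range(k)],
    init_pop=lambda: [[0, 1, 1, 0] for _ in range(10)],
    generations=12, n=16)
```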
Using previous models to bias structural learning in the hierarchical BOA
, 2008
"... Estimation of distribution algorithms (EDAs) are stochastic optimization techniques that explore the space of potential solutions by building and sampling explicit probabilistic models of promising candidate solutions. While the primary goal of applying EDAs is to discover the global optimum or at l ..."
Abstract

Cited by 11 (9 self)
Estimation of distribution algorithms (EDAs) are stochastic optimization techniques that explore the space of potential solutions by building and sampling explicit probabilistic models of promising candidate solutions. While the primary goal of applying EDAs is to discover the global optimum, or at least an accurate approximation of it, any EDA additionally provides a sequence of probabilistic models, which in most cases hold a great deal of information about the problem. Although using problem-specific knowledge has been shown to significantly improve the performance of EDAs and other evolutionary algorithms, this readily available source of problem-specific information has been practically ignored by the EDA community. This paper takes the first step towards using the probabilistic models obtained by EDAs to speed up the solution of similar problems in the future. More specifically, we propose two approaches to biasing model building in the hierarchical Bayesian optimization algorithm (hBOA) based on knowledge automatically learned from previous hBOA runs on similar problems. We show that the proposed methods lead to substantial speedups and argue that the methods should work well in other applications that require solving a large number of problems with similar structure.
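One simple way such a bias could be implemented (a hypothetical sketch, not necessarily either of the paper's two methods) is to restrict the structural search to dependencies that recurred across the network structures learned in previous runs:

```python
from collections import Counter

def biased_edge_candidates(previous_structures, all_edges, threshold=0.5):
    """Keep only candidate edges that appeared in at least a `threshold`
    fraction of the network structures learned on similar problems earlier."""
    counts = Counter(e for s in previous_structures for e in s)
    runs = len(previous_structures)
    return {e for e in all_edges if counts[e] / runs >= threshold}

# Three structures from hypothetical earlier runs; (0, 1) recurs in all of them,
# so the structural search on the next similar problem is focused on it.
previous = [{(0, 1), (2, 3)}, {(0, 1)}, {(0, 1), (1, 2)}]
candidates = biased_edge_candidates(previous, {(0, 1), (1, 2), (2, 3)})
```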
An introduction and survey of estimation of distribution algorithms
 SWARM AND EVOLUTIONARY COMPUTATION
, 2011
"... ..."
Exploiting modularity, hierarchy, and repetition in variablelength problems
 Genetic and Evolutionary Computation – GECCO-2004, Part I, volume 3102 of Lecture Notes in Computer Science
, 2004
"... Abstract. Current methods for evolutionary computation can reliably address problems for which the dependencies between variables are limited to a small order k. Furthermore, several recent methods can address certain hierarchical problems which feature dependencies between all variables. In additio ..."
Abstract

Cited by 7 (2 self)
Current methods for evolutionary computation can reliably address problems for which the dependencies between variables are limited to a small order k. Furthermore, several recent methods can address certain hierarchical problems which feature dependencies between all variables. In addition to modularity and hierarchy, a third problem feature that can be exploited when present is repetition. To enable the study of these problem features in isolation, two test problems for modularity and hierarchy detection by variable-length problems are introduced. To explore how a variable-length method can exploit these three problem features, a module formation algorithm is investigated. It is found that the algorithm identifies all three forms of problem structure to a substantial degree, leading to significant performance improvements for both the hierarchical and repetitive test problems. The experimental results indicate that the simultaneous exploitation of hierarchy and repetition will require both position-specific module testing and position-independent module use. Keywords: modularity, hierarchy, repetition, SEQ problem, HSEQ problem.