Results 1 - 10 of 56
Analyzing probabilistic models in hierarchical BOA on traps and spin glasses
- Genetic and Evolutionary Computation Conference (GECCO-2007), I, 2007
Abstract - Cited by 25 (17 self)
The hierarchical Bayesian optimization algorithm (hBOA) can solve nearly decomposable and hierarchical problems of bounded difficulty in a robust and scalable manner by building and sampling probabilistic models of promising solutions. This paper analyzes probabilistic models in hBOA on two common test problems: concatenated traps and 2D Ising spin glasses with periodic boundary conditions. We argue that although Bayesian networks with local structures can encode complex probability distributions, analyzing these models in hBOA is relatively straightforward and the results of such analyses may provide practitioners with useful information about their problems. The results show that the probabilistic models in hBOA closely correspond to the structure of the underlying problem, the models do not change significantly in subsequent iterations of BOA, and creating adequate probabilistic models by hand is not straightforward even with complete knowledge of the optimization problem.
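The concatenated trap test problem mentioned above is easy to state concretely. A minimal sketch, assuming trap partitions of size k = 5 (a common choice; the exact trap order used in the paper is not given here):

```python
def trap_k(u, k=5):
    """Deceptive trap of order k: global optimum at u == k ones,
    with a deceptive slope leading toward u == 0."""
    return k if u == k else k - 1 - u

def concatenated_traps(bits, k=5):
    """Sum of trap-k over consecutive, non-overlapping k-bit blocks."""
    assert len(bits) % k == 0
    return sum(trap_k(sum(bits[i:i + k]), k) for i in range(0, len(bits), k))

# The global optimum is the all-ones string; all-zeros is the deceptive attractor.
print(concatenated_traps([1] * 20))  # -> 20 (4 blocks * 5)
print(concatenated_traps([0] * 20))  # -> 16 (4 blocks * 4)
```

Because each block is deceptive, algorithms must treat the k bits of a block jointly, which is why the learned model structure is expected to mirror the block partition.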
Using previous models to bias structural learning in the hierarchical BOA
, 2008
Abstract - Cited by 20 (11 self)
Estimation of distribution algorithms (EDAs) are stochastic optimization techniques that explore the space of potential solutions by building and sampling explicit probabilistic models of promising candidate solutions. While the primary goal of applying EDAs is to discover the global optimum or at least its accurate approximation, besides this, any EDA provides us with a sequence of probabilistic models, which in most cases hold a great deal of information about the problem. Although using problem-specific knowledge has been shown to significantly improve performance of EDAs and other evolutionary algorithms, this readily available source of problem-specific information has been practically ignored by the EDA community. This paper takes the first step towards the use of probabilistic models obtained by EDAs to speed up the solution of similar problems in future. More specifically, we propose two approaches to biasing model building in the hierarchical Bayesian optimization algorithm (hBOA) based on knowledge automatically learned from previous hBOA runs on similar problems. We show that the proposed methods lead to substantial speedups and argue that the methods should work well in other applications that require solving a large number of problems with similar structure.
An introduction and survey of estimation of distribution algorithms
- Swarm and Evolutionary Computation, 2011
Sporadic model building for efficiency enhancement of hierarchical BOA
, 2007
Abstract - Cited by 17 (9 self)
Efficiency enhancement techniques—such as parallelization and hybridization—are among the most important ingredients of practical applications of genetic and evolutionary algorithms and that is why this research area represents an important niche of evolutionary computation. This paper describes and analyzes sporadic model building, which can be used to enhance the efficiency of the hierarchical Bayesian optimization algorithm (hBOA) and other estimation of distribution algorithms (EDAs) that use complex multivariate probabilistic models. With sporadic model building, the structure of the probabilistic model is updated once in every few iterations (generations), whereas in the remaining iterations, only model parameters (conditional and marginal probabilities) are updated. Since the time complexity of updating model parameters is much lower than the time complexity of learning the model structure, sporadic model building decreases the overall time complexity of model building. The paper shows that for boundedly difficult nearly decomposable and hierarchical optimization problems, sporadic model building leads to a significant model-building speedup, which decreases the asymptotic time complexity of model building in hBOA by a factor of Θ(n^0.26) to Θ(n^0.5), where n is the problem size. On the other hand, sporadic model building also increases the number of evaluations until convergence; nonetheless, if model building is the bottleneck, the evaluation slowdown is insignificant compared to the gains in the asymptotic complexity of model building. The paper also presents a dimensional model to provide a heuristic for scaling the structure-building period, which is the only parameter of the proposed sporadic model-building approach. The paper then tests the proposed method and the rule for setting the structure-building period on the problem of finding ground states of 2D and 3D Ising spin glasses.
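The sporadic schedule itself is simple to sketch as a generic EDA loop. The function arguments below are placeholders, not hBOA's actual API, and the square-root structure-building period is only illustrative of the kind of size-dependent heuristic the paper derives:

```python
def sporadic_eda_loop(learn_structure, fit_parameters, sample, evaluate, select,
                      population, n, structure_period=None, generations=50):
    """Skeleton EDA loop with sporadic model building: relearn the (expensive)
    model structure only every `structure_period` generations; refit the
    (cheap) parameters every generation."""
    if structure_period is None:
        # Illustrative scaling heuristic: grow the period with problem size n.
        structure_period = max(1, round(n ** 0.5))
    structure = None
    for t in range(generations):
        parents = select(population, evaluate)
        if t % structure_period == 0 or structure is None:
            structure = learn_structure(parents)      # expensive structure search
        params = fit_parameters(structure, parents)   # cheap frequency counting
        population = sample(structure, params, len(population))
    return population
```

With a period of k, the structure search runs roughly 1/k as often, which is where the asymptotic model-building speedup comes from.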
Matching inductive search bias and problem structure in continuous estimation of distribution algorithms
- European Journal of Operational Research
Abstract - Cited by 16 (3 self)
Research into the dynamics of Genetic Algorithms (GAs) has led to the field of Estimation-of-Distribution Algorithms (EDAs). For discrete search spaces, EDAs have been developed that have obtained very promising results on a wide variety of problems. In this paper we investigate the conditions under which the adaptation of this technique to continuous search spaces fails to perform optimization efficiently. We show that without careful interpretation and adaptation of lessons learned from discrete EDAs, continuous EDAs will fail to perform efficient optimization on even some of the simplest problems. We reconsider the most important lessons to be learned in the design of EDAs and subsequently show how we can use this knowledge to extend continuous EDAs that were obtained by straightforward adaptation from the discrete domain so as to obtain an improvement in performance. Experimental results are presented to illustrate this improvement and to additionally confirm experimentally that a proper adaptation of discrete EDAs to the continuous case indeed requires careful consideration. Key words: Estimation-of-distribution algorithms; Numerical optimization;
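A minimal continuous EDA of the kind discussed above fits an axis-aligned Gaussian to truncation-selected solutions and resamples from it. The `var_floor` remedy shown here is a generic fix for premature variance collapse, not the specific extension proposed in the paper:

```python
import random
import statistics

def gaussian_eda(f, dim, pop_size=200, elite_frac=0.3, generations=60,
                 var_floor=0.0, rng=random):
    """Minimal continuous EDA (minimization): each generation, fit an
    axis-aligned Gaussian to the elite solutions and resample. With
    var_floor=0 the sampling variance can shrink faster than the mean
    moves -- the premature-convergence failure mode discussed above."""
    mu = [rng.uniform(-5, 5) for _ in range(dim)]
    sigma = [5.0] * dim
    for _ in range(generations):
        pop = [[rng.gauss(m, s) for m, s in zip(mu, sigma)] for _ in range(pop_size)]
        pop.sort(key=f)
        elite = pop[: max(2, int(elite_frac * pop_size))]
        for d in range(dim):
            vals = [x[d] for x in elite]
            mu[d] = statistics.fmean(vals)
            sigma[d] = max(statistics.pstdev(vals), var_floor)
    return mu

# Usage: minimize the 2D sphere function.
sol = gaussian_eda(lambda x: sum(v * v for v in x), dim=2)
```

On a sphere function centered in the initial sampling range this converges; the paper's point is that on other simple landscapes (e.g. when the optimum lies far outside the shrinking sampling region) the same loop stalls.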
Compact genetic codes as a search strategy of evolutionary processes
- In Foundations of Genetic Algorithms 8 (FOGA VIII), LNCS, 2005
Abstract - Cited by 14 (2 self)
The choice of genetic representation crucially determines the capability of evolutionary processes to find complex solutions in which many variables interact. The question is how good genetic representations can be found and how they can be adapted online to account for what can be learned about the structure of the problem from previous samples. We address these questions in a scenario that we term indirect Estimation-of-Distribution: We consider a decorrelated search distribution (mutational variability) on a variable length genotype space. A one-to-one encoding onto the phenotype space then needs to induce an adapted phenotypic variability incorporating the dependencies between phenotypic variables that have been observed successful previously. Formalizing this in the framework of Estimation-of-Distribution Algorithms, an adapted phenotypic variability can be characterized as minimizing the Kullback-Leibler divergence to a population of previously selected individuals (parents). Our core result is a relation between the Kullback-Leibler divergence and the description length of the encoding in the specific scenario, stating that compact codes provide a way to minimize this divergence. A proposed class of Compression Evolutionary Algorithms and preliminary experiments with an L-system compression scheme illustrate the approach. We also discuss the implications for the self-adaptive evolution of genetic representations on the basis of neutrality (σ-evolution) towards compact codes.
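The KL-divergence quantity at the heart of this result can be illustrated in the simplest setting of a fully factorized (decorrelated) search distribution over fixed-length genotypes; the paper's actual setting uses variable-length genotypes and an L-system compression scheme, so this is only a toy version of the gap a good encoding must close:

```python
import math
from collections import Counter

def kl_to_factorized(parents):
    """KL divergence (in bits) between the empirical parent distribution p and
    the best fully factorized model q built from p's marginals. A representation
    that decorrelates the variables drives this gap to zero."""
    n = len(parents)
    length = len(parents[0])
    joint = Counter(tuple(x) for x in parents)
    marg = [Counter(x[i] for x in parents) for i in range(length)]
    kl = 0.0
    for geno, c in joint.items():
        p = c / n
        q = math.prod(marg[i][geno[i]] / n for i in range(length))
        kl += p * math.log2(p / q)
    return kl

# Perfectly correlated bits: the factorized model misses 1 bit of structure.
print(kl_to_factorized([(0, 0), (1, 1), (0, 0), (1, 1)]))  # -> 1.0
# Independent bits: the factorized model is exact.
print(kl_to_factorized([(0, 0), (0, 1), (1, 0), (1, 1)]))  # -> 0.0
```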
The Ising Model on the Ring: Mutation versus Recombination
- Proc. of the Genetic and Evolutionary Computation Conference (GECCO 2004), LNCS 3102, 2004
Abstract - Cited by 12 (2 self)
The investigation of genetic and evolutionary algorithms on Ising model problems gives much insight into how these algorithms work as adaptation schemes. The Ising model on the ring has been considered as a typical example with a clear building block structure well suited for two-point crossover. It has been claimed that GAs based on recombination and appropriate diversity-preserving methods outperform by far EAs based only on mutation. Here, a rigorous analysis of the expected optimization time proves that mutation-based EAs are surprisingly effective. The (1+λ) EA with an appropriate λ-value is almost as efficient as usual GAs. Moreover, it is proved that specialized GAs do even better, and this holds for two-point crossover as well as for one-point crossover.
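The ring Ising fitness and a mutation-only EA are simple to sketch. The paper analyzes the (1+λ) EA, so the λ = 1 special case below is only an illustration, not the analyzed algorithm:

```python
import random

def ring_ising_fitness(spins):
    """Number of agreeing neighbouring spin pairs on the ring (maximum n)."""
    n = len(spins)
    return sum(spins[i] == spins[(i + 1) % n] for i in range(n))

def one_plus_one_ea(n, max_steps=500000, rng=random):
    """(1+1) EA with standard bit-flip mutation at rate 1/n, accepting
    offspring that are at least as good (which lets equal-fitness block
    boundaries random-walk and eventually merge)."""
    x = [rng.randrange(2) for _ in range(n)]
    fx = ring_ising_fitness(x)
    for _ in range(max_steps):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = ring_ising_fitness(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == n:
            break
    return x, fx
```

The fitness plateaus (equal-fitness moves of block boundaries) are exactly why the runtime analysis is non-trivial even though the code is short.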
Optimising cancer chemotherapy using an estimation of distribution algorithm and genetic algorithms
- Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, 2006
Abstract - Cited by 10 (5 self)
This paper presents a methodology for using heuristic search methods to optimise cancer chemotherapy. Specifically, two evolutionary algorithms, Population-Based Incremental Learning (PBIL), which is an Estimation of Distribution Algorithm (EDA), and Genetic Algorithms (GAs), have been applied to the problem of finding effective chemotherapeutic treatments. To our knowledge, EDAs have been applied to fewer real-world problems compared to GAs, and the aim of the present paper is to expand the application domain of this technique. We compare and analyse the performance of both algorithms and draw a conclusion as to which approach to cancer chemotherapy optimisation is more efficient and helpful in the decision-making activity led by the oncologists.
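Textbook PBIL is compact enough to sketch. The learning rate, population size, and the OneMax-style usage below are illustrative defaults, not the chemotherapy encoding or parameter settings used in the paper:

```python
import random

def pbil(f, n, pop_size=50, lr=0.1, generations=100, rng=random):
    """Textbook PBIL (maximization): maintain a probability vector over n bits,
    sample a population from it, and shift the vector toward the generation's
    best sample. (Refinements such as probability-vector mutation are omitted.)"""
    p = [0.5] * n
    best, best_f = None, float("-inf")
    for _ in range(generations):
        pop = [[int(rng.random() < pi) for pi in p] for _ in range(pop_size)]
        gen_best = max(pop, key=f)
        if f(gen_best) > best_f:
            best, best_f = gen_best[:], f(gen_best)
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, gen_best)]
    return best, best_f

# Usage: maximize OneMax (number of ones) over 20 bits.
best, best_f = pbil(lambda x: sum(x), n=20)
```

Because PBIL's model is a single vector of independent bit probabilities, it is one of the simplest EDAs, which is part of its appeal for a first real-world application.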
A Matrix Approach for Finding Extrema: Problems with Modularity, Hierarchy, and Overlap
, 2006
Abstract - Cited by 9 (0 self)
Unlike most simple textbook examples, the real world is full of complex systems, and researchers in many different fields are often confronted by problems arising from such systems. Simple heuristics or even enumeration work quite well on small and easy problems; however, to efficiently solve large and difficult problems, proper decomposition according to the complex system is the key. In this research project, investigating and analyzing interactions between components of complex systems sheds some light on problem decomposition. By recognizing three bare-bone types of interactions—modularity, hierarchy, and overlap—theories and models are developed to dissect and inspect problem decomposition in the context of genetic algorithms. This dissertation presents a research project to develop a competent optimization method to solve boundedly difficult problems with modularity, hierarchy, and overlap by explicit problem decomposition. The proposed genetic algorithm design utilizes a matrix representation of an interaction graph to analyze and decompose the problem. The results from this thesis should benefit research both technically and scientifically. Technically, this thesis develops an automated dependency structure matrix clustering technique and utilizes it to design a competent black-box problem solver. Scientifically, the explicit interaction
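A much simplified stand-in for the dependency structure matrix (DSM) idea: represent pairwise variable interactions as a binary matrix and recover modules as connected components. The dissertation's clustering technique is more sophisticated and also addresses hierarchy and overlap, which plain connected components cannot capture:

```python
def dsm_modules(dsm):
    """Group variables into modules: i and j share a module whenever they are
    linked (directly or transitively) through nonzero DSM entries."""
    n = len(dsm)
    seen, modules = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(j for j in range(n) if dsm[v][j] and j not in seen)
        modules.append(sorted(comp))
    return modules

# Two 2-variable modules plus one isolated variable:
dsm = [[0, 1, 0, 0, 0],
       [1, 0, 0, 0, 0],
       [0, 0, 0, 1, 0],
       [0, 0, 1, 0, 0],
       [0, 0, 0, 0, 0]]
print(dsm_modules(dsm))  # -> [[0, 1], [2, 3], [4]]
```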