Results 1–10 of 23
Bayesian Optimization Algorithm: From Single Level to Hierarchy
, 2002
Abstract

Cited by 99 (18 self)
There are four primary goals of this dissertation. First, design a competent optimization algorithm capable of learning and exploiting appropriate problem decomposition by sampling and evaluating candidate solutions. Second, extend the proposed algorithm to enable the use of hierarchical decomposition as opposed to decomposition on only a single level. Third, design a class of difficult hierarchical problems that can be used to test the algorithms that attempt to exploit hierarchical decomposition. Fourth, test the developed algorithms on the designed class of problems and several real-world applications. The dissertation proposes the Bayesian optimization algorithm (BOA), which uses Bayesian networks to model the promising solutions found so far and sample new candidate solutions. BOA is theoretically and empirically shown to be capable of both learning a proper decomposition of the problem and exploiting the learned decomposition to ensure robust and scalable search for the optimum across a wide range of problems. The dissertation then identifies important features that must be incorporated into the basic BOA to solve problems that are not decomposable on a single level, but that can still be solved by decomposition over multiple levels of difficulty. Hierarchical ...
Optimization in continuous domains by learning and simulation of Gaussian networks
Abstract

Cited by 38 (4 self)
This paper shows how the Gaussian network paradigm can be used to solve optimization problems in continuous domains. Some methods of structure learning from data and simulation of Gaussian networks are applied in the Estimation of Distribution Algorithm (EDA), and new methods based on information theory are proposed. Experimental results are also presented. 1 Estimation of Distribution Algorithm approaches in continuous domains Figure 1 shows a schematic of the EDA approach for continuous domains. We will use x = (x_1, ..., x_n) to denote individuals, and D_l to denote the population of N individuals in the l-th generation. Similarly, D^Se_l will represent the population of the Se individuals selected from D_l. In the EDA [9] our interest will be to estimate f(x | D^Se), that is, the joint probability density function of an individual x conditioned on it being among the selected individuals. We denote by f_l(x) = f(x | D^Se_{l-1}) the joint density of the l-th genera...
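The EDA loop described in this abstract can be sketched compactly. The sketch below is ours, not the paper's: it fits a single multivariate Gaussian to the selected individuals D^Se_l instead of performing the Gaussian-network structure learning the paper studies, and the function names are hypothetical.

```python
import numpy as np

def gaussian_eda(f, n, pop_size=100, sel=30, generations=50, seed=0):
    """Minimal continuous EDA: fit a multivariate Gaussian to the
    selected individuals and sample the next population from it."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, n))   # D_l
    for _ in range(generations):
        scores = np.array([f(x) for x in pop])
        selected = pop[np.argsort(scores)[:sel]]       # D^Se_l (minimization)
        mu = selected.mean(axis=0)
        cov = np.cov(selected.T) + 1e-6 * np.eye(n)    # regularize covariance
        pop = rng.multivariate_normal(mu, cov, size=pop_size)
    return pop[np.argmin([f(x) for x in pop])]

# Example: minimize the sphere function in 3 dimensions.
best = gaussian_eda(lambda x: float(np.sum(x**2)), n=3)
```

Fitting one full-covariance Gaussian is the simplest multivariate choice; the paper's contribution is learning sparser Gaussian-network structures from the selected set instead.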
Parallel estimation of distribution algorithms
, 2002
Abstract

Cited by 25 (4 self)
The thesis deals with the new evolutionary paradigm based on the concept of Estimation of Distribution Algorithms (EDAs), which use a probabilistic model of the promising solutions found so far to obtain new candidate solutions of the optimized problem. There are six primary goals of this thesis: 1. Suggestion of a new formal description of the EDA algorithm. This high-level concept can be used to compare the generality of various probabilistic models by comparing the properties of the underlying mappings. Also, some convergence issues are discussed and theoretical ways for further improvements are proposed. 2. Development of a new probabilistic model and methods capable of dealing with continuous parameters. The resulting Mixed Bayesian Optimization Algorithm (MBOA) uses a set of decision trees to express the probability model. Its main advantage over the widely used IDEA and EGNA approaches is its backward compatibility with discrete domains, so it is uniquely capable of learning linkage between mixed continuous-discrete genes. MBOA handles the discretization of continuous parameters as an integral part of the learning process, which outperforms the histogram-based ...
Mathematical Modelling of UMDAc Algorithm with Tournament Selection. Behaviour on Linear and Quadratic Functions
, 2002
Abstract

Cited by 21 (0 self)
This paper presents a theoretical study of the behaviour of the Univariate Marginal Distribution Algorithm for continuous domains (UMDAc) in dimension n. To this end, the algorithm with tournament selection is modelled mathematically, assuming an infinite number of tournaments. The mathematical model is then used to study the algorithm's behaviour in the minimization of linear functions L(x) = a_0 + Σ_{i=1}^n a_i x_i and the quadratic function Q(x) = Σ_{i=1}^n x_i^2, with x = (x_1, ..., x_n) and a_i ∈ ℝ, i = 0, 1, ..., n. Linear functions are used to model the algorithm when far from the optimum, while the quadratic function is used to analyze the algorithm when near the optimum. The analysis shows that the algorithm performs poorly on the linear function L1(x) = Σ_{i=1}^n x_i. In the case of the quadratic function Q(x), the algorithm's behaviour was analyzed for certain particular dimensions. After taking some simplifications into account, we can conclude that when the algorithm starts near the optimum, UMDAc is able to reach it. Moreover, the speed of convergence to the optimum decreases as the dimension increases.
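A minimal sketch of the UMDAc scheme analyzed above: each coordinate is modelled by an independent univariate Gaussian estimated from the selected individuals. For brevity we use truncation selection rather than the paper's tournament selection, and all names are our own.

```python
import numpy as np

def umda_c(f, n, pop_size=200, sel=100, generations=60, seed=0):
    """Sketch of UMDAc: every variable gets its own univariate Gaussian,
    refit each generation from the selected individuals."""
    rng = np.random.default_rng(seed)
    # Start near the optimum of Q(x) = sum(x_i^2), as in the paper's analysis.
    pop = rng.normal(0.5, 0.5, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([f(x) for x in pop])
        selected = pop[np.argsort(scores)[:sel]]  # truncation stands in for tournament
        mu = selected.mean(axis=0)
        sigma = selected.std(axis=0) + 1e-12      # avoid degenerate sampling
        pop = rng.normal(mu, sigma, size=(pop_size, n))
    return pop[np.argmin([f(x) for x in pop])]

# Started near the optimum of the quadratic function, UMDAc should reach it.
best = umda_c(lambda x: float(np.sum(x**2)), n=5)
```

On the linear function L1(x) = Σ x_i the same loop illustrates the poor behaviour the paper proves: the per-coordinate standard deviation shrinks geometrically, so progress stalls even though the function is unbounded below.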
Evolutionary Optimization and the Estimation of Search Distributions with Applications to Graph Bipartitioning
 Journal of Approximate Reasoning
, 2002
Abstract

Cited by 19 (4 self)
We present a theory of population-based optimization methods using approximations of search distributions. We prove convergence of the search distribution to the global optima for the Factorized Distribution Algorithm (FDA) if the search distribution is a Boltzmann distribution and the size of the population is large enough. Convergence is defined in a strong sense: the global optima are attractors of a dynamical system that describes the algorithm mathematically. We investigate an adaptive annealing schedule and show its similarity to truncation selection. The inverse temperature beta is changed in inverse proportion to the standard deviation of the population. We extend FDA by using a Bayesian hyperparameter. The hyperparameter is related to mutation in evolutionary algorithms. We derive an upper bound on the hyperparameter to ensure that FDA still generates the optima with high probability. We discuss the relation of the FDA approach to methods used in statistical physics to approximate a Boltzmann distribution and to belief propagation in probabilistic reasoning. In the last part, we apply the algorithm to an important practical problem, the bipartitioning of large graphs. We assume that the graphs are sparsely connected. Our empirical results are as good as, or better than, those of any other method used for this problem.
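The two ingredients of the annealing schedule in this abstract — Boltzmann selection probabilities and a beta increment inversely proportional to the population's fitness spread — can be illustrated in isolation. This is a simplified sketch under our own naming, not the FDA implementation itself (which additionally factorizes the distribution).

```python
import numpy as np

def boltzmann_weights(fitness, beta):
    """Boltzmann distribution over a population (maximization):
    p_i proportional to exp(beta * f_i)."""
    w = np.exp(beta * (fitness - fitness.max()))  # shift exponent for stability
    return w / w.sum()

def adaptive_beta_step(fitness, c=1.0):
    """Annealing increment from the abstract: change the inverse
    temperature in inverse proportion to the fitness std deviation."""
    return c / (fitness.std() + 1e-12)

fitness = np.array([1.0, 2.0, 3.0, 4.0])
dbeta = adaptive_beta_step(fitness)
p = boltzmann_weights(fitness, dbeta)
```

The effect mirrors truncation selection: when the population's fitness spread is small, beta grows quickly and the Boltzmann weights concentrate sharply on the best individuals.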
Analyzing the PBIL Algorithm by Means of Discrete Dynamical Systems
 Complex Systems
Abstract

Cited by 19 (3 self)
In this paper the convergence behavior of the Population Based Incremental Learning algorithm (PBIL) is analyzed using discrete dynamical systems. A discrete dynamical system is associated with the PBIL algorithm. We demonstrate that the behavior of the PBIL algorithm follows the iterates of the discrete dynamical system for a long time when the learning-rate parameter α is near zero. We show that all the points of the search space are fixed points of the dynamical system, and that the local optimum points of the function to optimize coincide with the stable fixed points. Hence it can be deduced that the PBIL algorithm converges to the global optimum on unimodal functions.
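The PBIL iteration that this dynamical-systems analysis tracks is short enough to state directly. A minimal sketch (our own names; maximization, best-of-population update with learning rate alpha):

```python
import numpy as np

def pbil(f, n, alpha=0.05, pop_size=50, generations=300, seed=0):
    """Sketch of PBIL: keep a probability vector p over bit values and
    shift it toward the best sampled individual by learning rate alpha."""
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)
    for _ in range(generations):
        pop = (rng.random((pop_size, n)) < p).astype(int)  # sample from p
        best = pop[np.argmax([f(x) for x in pop])]
        p = (1.0 - alpha) * p + alpha * best               # PBIL update rule
    return p

# OneMax is unimodal, so per the analysis above the probability vector
# should be drawn toward the stable fixed point: the all-ones string.
p = pbil(lambda x: int(x.sum()), n=20)
```

With alpha near zero, each update is a small step of exactly the map whose iterates the paper studies; the corner points of [0, 1]^n are its fixed points, and only the local optima are stable.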
Inexact graph matching by means of Estimation of Distribution Algorithms
Abstract

Cited by 17 (2 self)
Estimation of Distribution Algorithms (EDAs) are a fairly recent topic in optimization techniques. They combine two technical disciplines of soft computing methodologies: probabilistic reasoning and evolutionary computing. Several algorithms and approaches have already been proposed by different authors, but up to now there are very few papers showing their potential and comparing them to other evolutionary computational methods such as Genetic Algorithms (GAs). This paper focuses on the problem of inexact graph matching, which is NP-hard and requires techniques to find an approximate acceptable solution. This problem arises when a non-bijective correspondence is sought between two graphs. A typical instance of this problem corresponds to the case where graphs are used for structural pattern recognition in images. EDAs are well suited for this type of problem.
Estimation of distribution algorithms for testing object oriented software
 In IEEE Congress on Evolutionary Computation (CEC
, 2007
"... Software ..."
Linear and Combinatorial Optimizations by Estimation of Distribution Algorithms
, 2002
Abstract

Cited by 13 (5 self)
Estimation of Distribution Algorithms (EDAs) are a new area of Evolutionary Computation. In EDAs there are neither crossover nor mutation operators. A new population is generated by sampling the probability distribution, which is estimated from a database containing selected individuals of the previous generation. Different approaches have been proposed for the estimation of the probability distribution. In this paper we provide a review of different EDA approaches and show how to apply UMDA with Laplace correction to the Subset Sum, OneMax, and n-Queens problems of linear and combinatorial optimization. Experimental results on the three problems comparing the performance of UMDA with that of a Genetic Algorithm (GA) are provided. In our experiments UMDA outperforms the GA on linear problems.
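UMDA with Laplace correction, as applied in this paper, is easy to sketch on OneMax. The sketch below is our own illustration (names and parameter values are assumptions): the marginal probability of each bit is estimated as (count + 1) / (sel + 2), which keeps every probability strictly inside (0, 1) and so preserves diversity.

```python
import numpy as np

def umda_laplace(f, n, pop_size=100, sel=50, generations=50, seed=0):
    """Sketch of discrete UMDA with Laplace correction on the
    per-bit marginal frequencies (maximization)."""
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)
    best, best_score = None, -np.inf
    for _ in range(generations):
        pop = (rng.random((pop_size, n)) < p).astype(int)
        scores = np.array([f(x) for x in pop])
        i = int(np.argmax(scores))
        if scores[i] > best_score:                  # track best-so-far
            best, best_score = pop[i].copy(), scores[i]
        selected = pop[np.argsort(scores)[-sel:]]   # keep the top half
        p = (selected.sum(axis=0) + 1) / (sel + 2)  # Laplace correction
    return best

# OneMax in 30 bits: count the ones.
best = umda_laplace(lambda x: int(x.sum()), n=30)
```

Without the correction, a marginal that hits exactly 0 or 1 can never recover; the Laplace estimate caps the marginals at 1/(sel+2) and (sel+1)/(sel+2), which is the diversity-preserving role it plays in the paper's experiments.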
Feature subset selection by genetic algorithms and estimation of distribution algorithms. A case study in the survival of cirrhotic patients treated with TIPS
, 2000
Abstract

Cited by 12 (4 self)
The transjugular intrahepatic portosystemic shunt (TIPS) is an interventional treatment for cirrhotic patients with portal hypertension. In the light of our medical staff's experience, the consequences of TIPS are not homogeneous for all patients, and a subgroup dies in the first six months after TIPS placement. At present, there is no risk indicator to identify this subgroup of patients before treatment. An investigation into predicting the survival of cirrhotic patients treated with TIPS is carried out using a clinical database with 107 cases and 77 attributes. Four supervised machine learning classifiers are applied to discriminate between the two subgroups of patients. The application of several Feature Subset Selection (FSS) techniques has significantly improved the predictive accuracy of these classifiers and considerably reduced the number of attributes in the classification models. Among FSS techniques, FSSTREE, a new randomized algorithm inspired by the new EDA (Estimation of Di...