Results 1 – 10 of 12
The Correlation-Triggered Adaptive Variance Scaling IDEA
 in Proceedings of the 8th Conference on Genetic and Evolutionary Computation
, 2006
Abstract

Cited by 11 (1 self)
It has previously been shown analytically and experimentally that continuous Estimation of Distribution Algorithms (EDAs) based on the normal pdf can easily suffer from premature convergence. This paper takes a principled first step towards solving this problem. First, prerequisites for the successful use of search distributions in EDAs are presented. Then, an adaptive variance scaling scheme is introduced that aims at reducing the risk of premature convergence. Integrating the scheme into the iterated density-estimation evolutionary algorithm (IDEA) yields the correlation-triggered adaptive variance scaling IDEA (CT-AVS-IDEA). The CT-AVS-IDEA is compared to the original IDEA and to the Evolution Strategy with Covariance Matrix Adaptation (CMA-ES) on a wide range of unimodal test problems by means of a scalability analysis. It is found that the average number of fitness evaluations grows sub-quadratically with the dimensionality, competitively with the CMA-ES. In addition, CT-AVS-IDEA is indeed found to enlarge the class of problems that continuous EDAs can solve reliably.
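The variance-scaling mechanism this abstract describes can be sketched in one dimension. The following is a toy stand-in, not the authors' CT-AVS-IDEA: it uses a plain Pearson correlation between model density and fitness as the trigger, and the scaling factor, selection ratio, and threshold are all illustrative assumptions.

```python
import random
import statistics

def pearson(a, b):
    """Plain Pearson correlation; returns 0.0 for degenerate samples."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return cov / den if den else 0.0

def ct_avs_eda(f, n_gen=150, pop=120, sel=0.5, k=2.0, seed=3):
    """Toy 1-D Gaussian EDA with correlation-triggered variance scaling."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    for _ in range(n_gen):
        xs = [rng.gauss(mu, sigma) for _ in range(pop)]
        fit = [f(x) for x in xs]
        # On a slope the model density is uncorrelated with fitness; near the
        # optimum the dense (central) points are also the best ones, so for a
        # minimisation problem the correlation turns strongly negative.
        r = pearson([-(x - mu) ** 2 for x in xs], fit)
        elite = [x for _, x in sorted(zip(fit, xs))][: int(sel * pop)]
        mu = statistics.fmean(elite)
        sigma = statistics.pstdev(elite)
        if r > -0.5:        # no optimum in sight: scale the ML estimate up
            sigma *= k

    return mu

# Sphere function shifted to x* = 37, i.e. 37 initial standard deviations
# away: pure ML estimation would stall, the triggered scaling keeps moving.
result = ct_avs_eda(lambda x: (x - 37.0) ** 2)
```

The trigger matters: scaling is active only while the correlation says the optimum is not yet inside the sampled region, so the run can still contract once it arrives.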
A mathematical modelling technique for the analysis of the dynamics of a simple continuous EDA
 in IEEE Congress on Evolutionary Computation, CEC 2006
, 2006
Abstract

Cited by 6 (2 self)
This paper presents some initial attempts to mathematically model the dynamics of a continuous Estimation of Distribution Algorithm (EDA) based on Gaussian distributions. Case studies are conducted on both unimodal and multimodal problems to highlight the effectiveness of the proposed technique and explore some fundamental issues of the EDA. Under some general assumptions, we can show that for one-dimensional unimodal problems with the (µ, λ) scheme: (1) the convergence behaviour of the EDA is independent of the test function except for its general shape; (2) when starting far away from the global optimum, the EDA may get stuck; (3) given a certain selection pressure, there is a unique parameter value that could help the EDA achieve desirable performance. For one-dimensional multimodal problems: (1) the EDA could get stuck with the (µ, λ) scheme; (2) the EDA will never get stuck with the (µ + λ) scheme.
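Claim (2) above, that a pure maximum-likelihood Gaussian EDA can stall when started far from the optimum, is easy to reproduce numerically. The sketch below is an illustration under assumed parameter values, not the paper's mathematical model:

```python
import random
import statistics

def ml_eda(f, mu=0.0, sigma=1.0, n_gen=100, pop=100, sel=0.5, seed=7):
    """1-D Gaussian EDA with pure maximum-likelihood parameter updates."""
    rng = random.Random(seed)
    for _ in range(n_gen):
        xs = sorted((rng.gauss(mu, sigma) for _ in range(pop)), key=f)
        elite = xs[: int(sel * pop)]          # truncation selection
        mu = statistics.fmean(elite)
        # ML estimate of sigma: under truncation it shrinks every generation,
        # so the step size collapses long before the mean reaches x* = 37.
        sigma = statistics.pstdev(elite) or 1e-300
    return mu

stuck_at = ml_eda(lambda x: (x - 37.0) ** 2)  # starts 37 sigmas from x*
```

With 50% truncation, the ML estimate of σ contracts by a roughly constant factor per generation, so the total distance the mean can travel is bounded by a geometric series; started 37 standard deviations from the optimum, the run stalls after moving only a few units.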
Convergence Phases, Variance Trajectories, and Runtime Analysis of Continuous EDAs
, 2007
Abstract

Cited by 3 (0 self)
Considering the available body of literature on continuous EDAs, one must state that many important questions are still unanswered, e.g.: How do continuous EDAs really work, and how can we increase their efficiency further? The first question must be answered on the basis of formal models, but despite some recent results, the majority of contributions to the field are experimental. The second question should be answered by exploiting the insights that have been gained from formal models. We contribute to the theoretical literature on continuous EDAs by focussing on a simple, yet important, question: How should the variances used to sample offspring change over an EDA run? To answer this question, the convergence process is separated into three phases, and it is shown that for each phase a preferable strategy exists for setting the variances. It is highly likely that the use of variances estimated with maximum likelihood is not optimal. Thus, variance modification policies are not just a nice add-on. In the light of our findings, they become an integral component of continuous EDAs, and they should consider the specific requirements of all phases of the optimization process.
A diversity-maintaining population-based incremental learning algorithm
 in Information Sciences
, 2008
Abstract

Cited by 3 (1 self)
In this paper we propose a new probability update rule and sampling procedure for population-based incremental learning. These proposed methods are based on the concept of opposition as a means for controlling the amount of diversity within a given sample population. We prove that under this scheme we are able to asymptotically guarantee a higher diversity, which allows for a greater exploration of the search space. The presented probabilistic algorithm is specifically for applications in the binary domain. The benchmark data used for the experiments are commonly used deceptive and attractor basin functions as well as 10 common Travelling Salesman problem instances. Our experimental results focus on the effect of parameters and problem size on the accuracy of the algorithm, as well as on a comparison to traditional population-based incremental learning. We show that the new algorithm is able to effectively utilize the increased diversity of opposition, which leads to significantly improved results over traditional population-based incremental learning. Key words: population-based incremental learning, opposition-based computing, diversity maintenance, diversity control.
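The opposition idea can be illustrated on top of standard PBIL. The sketch below keeps the classic PBIL update rule and injects opposition only into the sampling step, letting each individual compete against its bitwise complement; the paper's actual update and sampling rules differ in detail, and all parameter values here are assumptions.

```python
import random

def pbil_opposition(fitness, n_bits=20, pop=30, lr=0.1, n_gen=60, seed=5):
    """PBIL sketch with opposition-based sampling on binary strings."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                      # probability vector
    for _ in range(n_gen):
        cand = []
        for _ in range(pop):
            x = [1 if rng.random() < pi else 0 for pi in p]
            xo = [1 - b for b in x]         # the "opposite" individual
            cand.append(max(x, xo, key=fitness))
        best = max(cand, key=fitness)
        # classic PBIL update: pull the probability vector toward the best
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]
    return p

probs = pbil_opposition(sum)                # OneMax: fitness = number of ones
```

Evaluating the complement alongside each sample guarantees that at least one of the pair lies in the half of the search space the current model under-samples, which is the diversity argument in a nutshell.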
Scaling Up Estimation of Distribution Algorithms for Continuous Optimization
Abstract

Cited by 1 (0 self)
Since EDAs were proposed, many attempts have been made to improve EDAs' performance in the context of global optimization. So far, the studies or applications of multivariate probabilistic model based EDAs in the continuous domain are still mostly restricted to low-dimensional problems. Traditional EDAs have difficulties in solving higher-dimensional problems because of the curse of dimensionality and their rapidly increasing computational costs. However, scaling up continuous EDAs for large-scale optimization is still necessary, which is supported by the distinctive feature of EDAs: because a probabilistic model is explicitly estimated, one can discover useful properties of the problem from the learnt model. Besides obtaining a good solution, understanding of the problem structure can be of great benefit, especially for black-box optimization. We propose a novel EDA framework with Model Complexity Control (EDA-MCC) to scale up continuous EDAs. By employing Weakly dependent variable Identification (WI) and Subspace Modeling (SM), EDA-MCC shows significantly better performance than traditional EDAs on high-dimensional problems. Moreover, the computational cost and the requirement of large population sizes can be reduced in EDA-MCC. In addition to being able to find a good solution, EDA-MCC can also provide useful problem structure characterizations. EDA-MCC is the first successful instance of multivariate model based EDAs that can be effectively applied to a general class of up to 500-D problems. It also outperforms some newly developed algorithms designed specifically for large-scale optimization. In order to understand the strengths and weaknesses of EDA-MCC, we have carried out extensive computational studies. Our results have revealed when EDA-MCC is likely to outperform others, and on what kind of benchmark functions. Index Terms — Estimation of distribution algorithm, large-scale optimization, model complexity control.
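The Weakly dependent variable Identification (WI) step can be sketched as a simple correlation threshold over the selected population. This is an assumed simplification of the description above, not the EDA-MCC implementation, and the threshold value is illustrative.

```python
import random
import statistics

def identify_weak(samples, theta=0.3):
    """Split variable indices into weakly and strongly dependent sets by
    thresholding absolute pairwise correlations (WI-style sketch)."""
    d = len(samples[0])
    cols = [[row[j] for row in samples] for j in range(d)]

    def corr(a, b):
        ma, mb = statistics.fmean(a), statistics.fmean(b)
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a)
               * sum((y - mb) ** 2 for y in b)) ** 0.5
        return cov / den if den else 0.0

    weak = [j for j in range(d)
            if all(abs(corr(cols[j], cols[k])) < theta
                   for k in range(d) if k != j)]
    strong = [j for j in range(d) if j not in weak]
    return weak, strong     # weak -> univariate models, strong -> subspaces

# Synthetic selected population: x1 tracks x0, x2 is independent noise.
rng = random.Random(0)
data = []
for _ in range(200):
    a = rng.gauss(0.0, 1.0)
    data.append([a, a + rng.gauss(0.0, 0.1), rng.gauss(0.0, 1.0)])
weak_vars, strong_vars = identify_weak(data)
```

Variables in the weak set get cheap univariate models, while only the strong set pays for multivariate estimation; this split is what keeps the model complexity, and hence the required population size, under control.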
Preventing Premature Convergence in a Simple EDA Via Global Step Size Setting
, 2008
Abstract
When a simple real-valued estimation of distribution algorithm (EDA) with a Gaussian model and maximum likelihood estimation of parameters is used, it converges prematurely even on the slope of the fitness function. The simplest way of preventing premature convergence, multiplying the variance estimate by a constant factor k each generation, is studied. Recent works have shown that when increasing the dimensionality of the search space, such an algorithm quickly becomes unable to traverse the slope and focus on the optimum at the same time. In this paper it is shown that when isotropic distributions with Gaussian or Cauchy distributed norms are used, a simple constant setting of k is able to ensure reasonable behaviour of the EDA on the slope and in the valley of the fitness function at the same time.
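The constant-factor scheme studied here fits in a few lines. The sketch below is one-dimensional with illustrative parameter values, not the paper's experimental setup:

```python
import random
import statistics

def constant_k_eda(f, k=2.0, mu=0.0, sigma=1.0,
                   n_gen=200, pop=100, sel=0.5, seed=11):
    """1-D Gaussian EDA whose ML sigma estimate is multiplied by a
    constant factor k each generation."""
    rng = random.Random(seed)
    for _ in range(n_gen):
        xs = sorted((rng.gauss(mu, sigma) for _ in range(pop)), key=f)
        elite = xs[: int(sel * pop)]        # truncation selection
        mu = statistics.fmean(elite)
        sigma = k * (statistics.pstdev(elite) or 1e-300)
    return mu

found = constant_k_eda(lambda x: (x - 37.0) ** 2)
```

The reason a single k can work in 1-D: with 50% truncation, the ML estimate of σ contracts per generation by roughly 0.60 on a linear slope but by roughly 0.38 around the optimum (approximate truncated-normal figures), so an intermediate k such as 2 sustains progress on the slope (2·0.60 > 1) yet still lets the valley phase converge (2·0.38 < 1). The two follow-up entries below examine why no single admissible k survives in high dimensions.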
Truncation Selection and Gaussian EDA: Bounds for Sustainable Progress in High-Dimensional Spaces
, 2008
Abstract
In real-valued estimation-of-distribution algorithms, the Gaussian distribution is often used along with maximum likelihood (ML) estimation of its parameters. Such a process is highly prone to premature convergence. The simplest method for preventing premature convergence of the Gaussian distribution is enlarging the maximum likelihood estimate of σ by a constant factor k each generation. Such a factor should be large enough to prevent convergence on slopes of the fitness function, but not so large that it prevents the algorithm from converging in the neighborhood of the optimum. Previous work showed that for truncation selection such an admissible k exists in the 1D case. In this article it is shown experimentally that for the Gaussian EDA with truncation selection in high-dimensional spaces no admissible k exists.
Gaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress
, 2008
Abstract
In real-valued estimation-of-distribution algorithms, the Gaussian distribution is often used along with maximum likelihood (ML) estimation of its parameters. Such a process is highly prone to premature convergence. The simplest method for preventing premature convergence of the Gaussian distribution is to enlarge the maximum likelihood estimate of the standard deviation σ by a constant factor k each generation. This paper surveys and broadens the theoretical models of the behaviour of this simple EDA on 1D problems and derives limits for the constant k. The behaviour of this simple EDA with various values of k is analysed, and the agreement of the model with reality is confirmed.
unknown title
, 710
"... Effective linkage learning using low-order statistics and clustering ..."