Results 1–10 of 55
Designing competent mutation operators via probabilistic model building of neighborhoods
 In Deb, K., et al. (Eds.), Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2004), Part II, LNCS 3103
, 2004
Abstract

Cited by 31 (21 self)
This paper presents a competent selectomutative genetic algorithm (GA) that adapts linkage and solves hard problems quickly, reliably, and accurately. A probabilistic model building process is used to automatically identify key building blocks (BBs) of the search problem. The mutation operator uses the probabilistic model of linkage groups to find the best among competing building blocks. The competent selectomutative GA successfully solves additively separable problems of bounded difficulty, requiring only a subquadratic number of function evaluations. The results show that for additively separable problems the probabilistic model building BB-wise mutation scales as O(2^k m^1.5) and requires O(√k log m) fewer function evaluations than its selectorecombinative counterpart, confirming theoretical results reported elsewhere (Sastry & Goldberg, 2004).
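As a hedged illustration (not the paper's implementation), BB-wise mutation reduces to a greedy block-wise search once the linkage groups are known. The sketch below assumes the block partition is handed in directly, standing in for the partition a probabilistic model-building step would identify; all names and the trap function are illustrative.

```python
from itertools import product

def bb_wise_mutation(x, blocks, fitness):
    """Greedy BB-wise mutation sketch: for each linkage group, try
    all 2^k settings of its bits (holding the rest of the solution
    fixed) and keep the best one. `blocks` stands in for the linkage
    groups a probabilistic model would identify."""
    x = list(x)
    for block in blocks:
        best_cfg, best_fit = None, None
        for cfg in product((0, 1), repeat=len(block)):
            for pos, bit in zip(block, cfg):
                x[pos] = bit
            f = fitness(x)
            if best_fit is None or f > best_fit:
                best_cfg, best_fit = cfg, f
        for pos, bit in zip(block, best_cfg):  # restore the winner
            x[pos] = bit
    return x

# Deceptive trap of order k = 4: all-ones is the optimum, but every
# other configuration points a bit-wise search toward all-zeros.
def trap(u, k=4):
    return k if u == k else k - 1 - u
```

On an additively separable problem such as concatenated traps, each block's best configuration is independent of the others, so a single BB-wise pass solves every block exactly; that independence is what the subquadratic scaling relies on.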
Let’s get ready to rumble: Crossover versus mutation head to head
 In GECCO ’04: Proc. of the Genetic and Evolutionary Computation Conference
, 2004
Abstract

Cited by 25 (19 self)
This paper analyzes the relative advantages of crossover and mutation on a class of deterministic and stochastic additively separable problems. The study assumes that the recombination and mutation operators have knowledge of the building blocks (BBs) and effectively exchange or search among competing BBs. Facetwise models of convergence time and population sizing are used to determine the scalability of each algorithm. The analysis shows that for additively separable deterministic problems, BB-wise mutation is more efficient than crossover, while crossover outperforms mutation on additively separable problems perturbed with additive Gaussian noise. The results show that the speedup of using BB-wise mutation on deterministic problems is O(√k log m), where k is the BB size and m is the number of BBs. Likewise, the speedup of using crossover on stochastic problems with fixed noise variance is O(m√k / log m).
Combating user fatigue in iGAs: partial ordering, support vector machines, and synthetic fitness
 In Genetic and Evolutionary Computation Conference, GECCO 2005, Proceedings
Abstract

Cited by 23 (8 self)
One of the daunting challenges of interactive genetic algorithms (iGAs)—genetic algorithms in which the fitness measure of a solution is provided by a human rather than by a fitness function, model, or computation—is user fatigue, which leads to suboptimal solutions. This paper proposes a method to combat user fatigue by augmenting user evaluations with a synthetic fitness function. The proposed method combines partial-ordering concepts, the notion of non-domination from multiobjective optimization, and support vector machines to synthesize a fitness model based on user evaluations. The method is used in an iGA on a simple test problem, and the results demonstrate that it actively combats user fatigue by requiring 3–7 times fewer user evaluations than a simple iGA.
Efficiency enhancement of genetic algorithms via building-block-wise fitness estimation
 Proceedings of the IEEE International Conference on Evolutionary Computation
, 2004
Abstract

Cited by 21 (17 self)
This paper studies fitness inheritance as an efficiency enhancement technique for a class of competent genetic algorithms called estimation of distribution algorithms. Probabilistic models of important subsolutions are developed to estimate the fitness of a proportion of individuals in the population, thereby avoiding computationally expensive function evaluations. The effect of fitness inheritance on convergence time and population sizing is modeled, and the speedup obtained through inheritance is predicted. The results show that a fitness-inheritance mechanism which utilizes information on building-block fitnesses provides significant efficiency enhancement. For additively separable problems, fitness inheritance reduces the number of function evaluations to about half and yields a speedup of about 1.75–2.25.
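As a loose, hypothetical sketch of the idea (not the paper's probabilistic model), a building-block-wise surrogate can average the observed fitness of evaluated individuals carrying each block configuration, then score other individuals from that table instead of calling the true fitness function. The function names and the even split of fitness across blocks are simplifying assumptions.

```python
from collections import defaultdict

def bb_fitness_table(evaluated, blocks):
    """Average per-block fitness contribution, estimated from truly
    evaluated (individual, fitness) pairs. Fitness is split evenly
    across blocks -- a crude simplifying assumption."""
    buckets = defaultdict(list)
    for x, f in evaluated:
        for bi, block in enumerate(blocks):
            cfg = tuple(x[i] for i in block)
            buckets[(bi, cfg)].append(f / len(blocks))
    return {key: sum(v) / len(v) for key, v in buckets.items()}

def inherited_fitness(x, blocks, table):
    """Cheap surrogate fitness: sum of per-block average contributions
    (assumes every block configuration of x has been seen before)."""
    return sum(table[(bi, tuple(x[i] for i in block))]
               for bi, block in enumerate(blocks))
```

The surrogate is biased in absolute value, but it tends to preserve the ranking of solutions, which is all that selection needs.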
Analyzing probabilistic models in hierarchical BOA on traps and spin glasses
 Genetic and Evolutionary Computation Conference (GECCO-2007), Vol. I
, 2007
Abstract

Cited by 17 (15 self)
The hierarchical Bayesian optimization algorithm (hBOA) can solve nearly decomposable and hierarchical problems of bounded difficulty in a robust and scalable manner by building and sampling probabilistic models of promising solutions. This paper analyzes probabilistic models in hBOA on two common test problems: concatenated traps and 2D Ising spin glasses with periodic boundary conditions. We argue that although Bayesian networks with local structures can encode complex probability distributions, analyzing these models in hBOA is relatively straightforward, and the results of such analyses may provide practitioners with useful information about their problems. The results show that the probabilistic models in hBOA closely correspond to the structure of the underlying problem, that the models do not change significantly in subsequent iterations of hBOA, and that creating adequate probabilistic models by hand is not straightforward even with complete knowledge of the optimization problem.
Genetic Algorithms
, 2005
Abstract

Cited by 15 (3 self)
Genetic algorithms (GAs) are search methods based on principles of natural selection and genetics (Fraser, 1957; Bremermann, 1958; Holland, 1975). We start with a brief introduction to simple genetic algorithms and associated terminology. GAs encode
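To make the terminology this introduction refers to concrete, here is a minimal sketch of a simple GA with tournament selection, one-point crossover, and bit-flip mutation; every parameter value is illustrative, not prescribed by the text.

```python
import random

def simple_ga(fitness, n_bits, pop_size=40, generations=60,
              p_cross=0.9, p_mut=None, seed=1):
    """Minimal simple GA sketch: tournament selection, one-point
    crossover, bit-flip mutation. All parameter values illustrative."""
    rng = random.Random(seed)
    if p_mut is None:
        p_mut = 1.0 / n_bits  # rule of thumb: about one flip per string
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_cross:  # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for child in (c1, c2):
                for i in range(n_bits):
                    if rng.random() < p_mut:  # bit-flip mutation
                        child[i] ^= 1
                offspring.append(child)
        pop = offspring[:pop_size]
    return max(pop, key=fitness)
```

On the classic OneMax problem (`fitness=sum`), this loop reliably drives the population toward the all-ones string within a few dozen generations.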
Parameter-less hierarchical BOA
 Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2004), Part II, LNCS 3103
, 2004
Abstract

Cited by 14 (1 self)
An automated technique has recently been proposed to transfer learning in the hierarchical Bayesian optimization algorithm (hBOA) based on distance-based statistics. The technique enables practitioners to improve hBOA efficiency by collecting statistics from probabilistic models obtained in previous hBOA runs and using them to bias future hBOA runs on similar problems. The purpose of this paper is threefold: (1) test the technique on several classes of NP-complete problems, including MAX-SAT, spin glasses, and minimum vertex cover; (2) demonstrate that the technique is effective even when previous runs were done on problems of different size; (3) provide empirical evidence that combining transfer learning with other efficiency enhancement techniques can often provide nearly multiplicative speedups.
Towards billion bit optimization via parallel estimation of distribution algorithm
 Genetic and Evolutionary Computation Conference (GECCO-2007)
, 2007
Abstract

Cited by 14 (8 self)
This paper presents a highly efficient, fully parallelized implementation of the compact genetic algorithm (cGA) to solve very large scale problems with millions to billions of variables. The paper presents principled results demonstrating the scalable solution of a difficult test function on instances with over a billion variables using a parallel implementation of the cGA. The problem addressed is a noisy, blind problem over a vector of binary decision variables. Noise equal to up to a tenth of the variance of the deterministic objective function is added, making it difficult for simple hill climbers to find the optimal solution. The compact GA, on the other hand, is able to find the optimum in the presence of noise quickly, reliably, and accurately, and the solution scalability follows known convergence theories. These results on a noisy problem, together with other results on problems involving varying modularity, hierarchy, and overlap, foreshadow the routine solution of billion-variable problems across the landscape of search problems.
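The compact GA itself is simple enough to sketch in a few lines: it replaces the population with a probability vector and nudges that vector by 1/pop_size toward the winner of each simulated two-individual tournament. This serial sketch, with illustrative parameter values, shows the update rule that the paper's parallel implementation distributes.

```python
import random

def compact_ga(n, pop_size, fitness, max_iters=200000, seed=0):
    """Compact GA sketch: a probability vector p replaces the
    population. Each step samples two individuals from p, holds a
    tournament, and shifts p by 1/pop_size toward the winner
    wherever the two individuals differ."""
    rng = random.Random(seed)
    p = [0.5] * n  # probability that each bit is 1
    step = 1.0 / pop_size
    for _ in range(max_iters):
        a = [1 if rng.random() < pi else 0 for pi in p]
        b = [1 if rng.random() < pi else 0 for pi in p]
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        for i in range(n):
            if winner[i] != loser[i]:
                p[i] += step if winner[i] == 1 else -step
                p[i] = min(1.0, max(0.0, p[i]))
        if all(pi < 1e-9 or pi > 1 - 1e-9 for pi in p):
            break  # the model has converged to a single solution
    return [1 if pi > 0.5 else 0 for pi in p]
```

Because the only state is the probability vector, memory is O(n) rather than O(n × pop_size), which is what makes billion-variable instances approachable at all.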
Sporadic model building for efficiency enhancement of hierarchical BOA
, 2007
Abstract

Cited by 13 (8 self)
Efficiency enhancement techniques—such as parallelization and hybridization—are among the most important ingredients of practical applications of genetic and evolutionary algorithms, which is why this research area represents an important niche of evolutionary computation. This paper describes and analyzes sporadic model building, which can be used to enhance the efficiency of the hierarchical Bayesian optimization algorithm (hBOA) and other estimation of distribution algorithms (EDAs) that use complex multivariate probabilistic models. With sporadic model building, the structure of the probabilistic model is updated once every few iterations (generations), whereas in the remaining iterations only the model parameters (conditional and marginal probabilities) are updated. Since the time complexity of updating model parameters is much lower than that of learning the model structure, sporadic model building decreases the overall time complexity of model building. The paper shows that for boundedly difficult nearly decomposable and hierarchical optimization problems, sporadic model building leads to a significant model-building speedup, decreasing the asymptotic time complexity of model building in hBOA by a factor of Θ(n^0.26) to Θ(n^0.5), where n is the problem size. On the other hand, sporadic model building also increases the number of evaluations until convergence; nonetheless, if model building is the bottleneck, the evaluation slowdown is insignificant compared to the gains in the asymptotic complexity of model building. The paper also presents a dimensional model that provides a heuristic for scaling the structure-building period, which is the only parameter of the proposed sporadic model-building approach. The paper then tests the proposed method and the rule for setting the structure-building period on the problem of finding ground states of 2D and 3D Ising spin glasses.
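The accounting behind the speedup can be sketched directly (with hypothetical cost numbers, not the paper's dimensional model): expensive structure learning is charged only once per structure-building period, while the cheap parameter update is charged every iteration.

```python
def model_building_cost(n_iters, structure_period, c_structure, c_params):
    """Total model-building cost under sporadic model building: the
    expensive structure search runs once every `structure_period`
    iterations; the cheap parameter update runs every iteration."""
    structure_runs = -(-n_iters // structure_period)  # ceiling division
    return structure_runs * c_structure + n_iters * c_params

# With structure learning 100x the cost of a parameter update,
# rebuilding the structure every 10th iteration cuts total
# model-building cost by roughly an order of magnitude.
baseline = model_building_cost(100, 1, c_structure=100, c_params=1)
sporadic = model_building_cost(100, 10, c_structure=100, c_params=1)
```

The trade-off the abstract describes then shows up as: the ratio baseline/sporadic grows with the structure-building period, but so does the number of evaluations until convergence, so the period cannot be increased without limit.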
Using previous models to bias structural learning in the hierarchical BOA
, 2008
Abstract

Cited by 11 (9 self)
Estimation of distribution algorithms (EDAs) are stochastic optimization techniques that explore the space of potential solutions by building and sampling explicit probabilistic models of promising candidate solutions. While the primary goal of applying EDAs is to discover the global optimum, or at least an accurate approximation of it, any EDA also provides us with a sequence of probabilistic models, which in most cases hold a great deal of information about the problem. Although using problem-specific knowledge has been shown to significantly improve the performance of EDAs and other evolutionary algorithms, this readily available source of problem-specific information has been practically ignored by the EDA community. This paper takes a first step towards using the probabilistic models obtained by EDAs to speed up the solution of similar problems in the future. More specifically, we propose two approaches to biasing model building in the hierarchical Bayesian optimization algorithm (hBOA) based on knowledge automatically learned from previous hBOA runs on similar problems. We show that the proposed methods lead to substantial speedups and argue that they should work well in other applications that require solving a large number of problems with similar structure.