Hierarchical Bayesian Optimization Algorithm = Bayesian Optimization Algorithm + Niching + Local Structures
, 2001
Cited by 329 (70 self)
Abstract:
The paper describes the hierarchical Bayesian optimization algorithm which combines the Bayesian optimization algorithm, local structures in Bayesian networks, and a powerful niching technique. The proposed algorithm is able to solve hierarchical traps and other difficult problems very efficiently.
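Since hierarchical traps recur throughout this listing, a minimal runnable sketch of such a function may help. The construction below (3-bit blocks; 000 and 111 map to 0 and 1 on the next level; mixed blocks map to nothing and score nothing above their level) follows the general idea only; the exact trap constants and level weights vary across papers, and the 0.9 used at the top level here is an assumption.

    def trap3(u, f_high=1.0, f_low=1.0):
        """Deceptive 3-bit trap on the number of ones u; None marks an unmapped block."""
        if u is None:
            return 0.0
        return f_high if u == 3 else f_low * (2 - u) / 2.0

    def htrap(bits):
        """Sum trap3 over 3-bit blocks at every level; string length should be a
        power of 3 for a full hierarchy."""
        assert len(bits) % 3 == 0
        level, total, scale = list(bits), 0.0, 1.0
        while len(level) >= 3:
            top = len(level) == 3            # only the top level prefers all ones
            nxt = []
            for i in range(0, len(level), 3):
                block = level[i:i + 3]
                u = None if None in block else sum(block)
                total += scale * trap3(u, f_low=0.9 if top else 1.0)
                nxt.append(0 if u == 0 else (1 if u == 3 else None))
            level, scale = nxt, scale * 3.0  # weight levels comparably (assumed)
        return total

    print(htrap([1] * 27), htrap([0] * 27))  # 27.0 vs 26.1: all ones wins only at the top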
Metaheuristics in combinatorial optimization: Overview and conceptual comparison
- ACM COMPUTING SURVEYS
, 2003
Cited by 314 (17 self)
Abstract:
The field of metaheuristics for combinatorial optimization problems is a rapidly growing area of research, owing to the importance of combinatorial optimization problems for both the scientific and the industrial world. We survey the most important metaheuristics from a conceptual point of view and outline the components and concepts used in each of them in order to analyze their similarities and differences. Two very important concepts in metaheuristics are intensification and diversification: these are the two forces that largely determine the behaviour of a metaheuristic, and they are in some way contrary but also complementary to each other. We introduce a framework that we call the I&D frame in order to relate different intensification and diversification components to one another. Outlining the advantages and disadvantages of different metaheuristic approaches, we conclude by pointing out the importance of hybridizing metaheuristics and of integrating metaheuristics with other optimization methods.
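The two forces are easiest to see in a tiny iterated-local-search skeleton, sketched below: local search is the intensification step, perturbation is the diversification step, and the acceptance rule balances them. The onemax objective and all constants are toy stand-ins for a real combinatorial problem.

    import random

    def onemax(x):                      # toy objective standing in for a real problem
        return sum(x)

    def local_search(x, f):             # intensification: first-improvement hill climbing
        best, improved = f(x), True
        while improved:
            improved = False
            for i in range(len(x)):
                x[i] ^= 1
                fx = f(x)
                if fx > best:
                    best, improved = fx, True
                else:
                    x[i] ^= 1           # undo a non-improving flip
        return x, best

    def perturb(x, strength=5):         # diversification: kick out of the current basin
        y = x[:]
        for i in random.sample(range(len(y)), strength):
            y[i] ^= 1
        return y

    def iterated_local_search(n=60, iters=30, seed=0):
        random.seed(seed)
        x, best = local_search([random.randint(0, 1) for _ in range(n)], onemax)
        for _ in range(iters):
            y, fy = local_search(perturb(x), onemax)
            if fy >= best:              # the acceptance rule balances the two forces
                x, best = y, fy
        return best

    print(iterated_local_search())      # reaches the optimum (60) on this toy problem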
Quantum-inspired Evolutionary Algorithm for a Class of Combinatorial Optimization
- IEEE TRANS. EVOLUTIONARY COMPUTATION
, 2002
Cited by 112 (7 self)
Abstract:
This paper proposes a novel evolutionary algorithm inspired by quantum computing, called a quantum-inspired evolutionary algorithm (QEA), which is based on the concept and principles of quantum computing, such as a quantum bit and superposition of states. Like other evolutionary algorithms, QEA is also characterized by the representation of the individual, the evaluation function, and the population dynamics. However, instead of binary, numeric, or symbolic representation, QEA uses a Q-bit, defined as the smallest unit of information, for the probabilistic representation and a Q-bit individual as a string of Q-bits. A Q-gate is introduced as a variation operator to drive the individuals toward better solutions. To demonstrate its effectiveness and applicability, experiments are carried out on the knapsack problem, which is a well-known combinatorial optimization problem. The results show that QEA performs well, even with a small population, without premature convergence as compared to the conventional genetic algorithm.
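A compact sketch of that machinery, under the usual QEA conventions: a Q-bit is stored as an angle theta with alpha = cos(theta) and beta = sin(theta), observation collapses a Q-bit individual to a binary string, and a rotation Q-gate nudges each angle toward the best solution found so far. The fixed rotation step below is an assumption, a stand-in for the paper's lookup table of rotation angles.

    import math, random

    def observe(thetas):
        # collapse: bit i is 1 with probability beta_i^2 = sin(theta_i)^2
        return [1 if random.random() < math.sin(t) ** 2 else 0 for t in thetas]

    def q_gate(thetas, x, best, delta=0.01 * math.pi):
        # rotate each Q-bit so the probability of the best-so-far bit grows
        out = []
        for t, xb, bb in zip(thetas, x, best):
            if xb != bb:
                t += delta if bb == 1 else -delta
            out.append(min(max(t, 0.0), math.pi / 2))   # keep theta in [0, pi/2]
        return out

    def qea_onemax(n=32, generations=300, seed=1):
        random.seed(seed)
        thetas = [math.pi / 4] * n      # equal superposition: alpha = beta = 1/sqrt(2)
        best = observe(thetas)
        for _ in range(generations):
            x = observe(thetas)
            if sum(x) > sum(best):
                best = x
            thetas = q_gate(thetas, x, best)
        return sum(best)

    print(qea_onemax())                 # climbs toward n = 32 on onemax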
Escaping Hierarchical Traps with Competent Genetic Algorithms
- Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2001)
, 2001
Cited by 101 (49 self)
Abstract:
To solve hierarchical problems, one must be able to learn the linkage, represent partial solutions efficiently, and assure effective niching. We propose the hierarchical ...
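The niching requirement is worth making concrete. The sketch below implements restricted tournament replacement, the niching technique commonly paired with hBOA: each new candidate competes only against the most similar member of a small random window of the population, so distinct partial solutions survive instead of one basin taking over. Window size and the Hamming similarity measure are the usual choices, assumed here.

    import random

    def hamming(a, b):
        return sum(u != v for u, v in zip(a, b))

    def rtr_insert(pop, fits, cand, f_cand, window=10, rng=random):
        """Replace the most similar of `window` random members iff the candidate is fitter."""
        idxs = rng.sample(range(len(pop)), min(window, len(pop)))
        closest = min(idxs, key=lambda i: hamming(pop[i], cand))
        if f_cand > fits[closest]:
            pop[closest], fits[closest] = cand, f_cand

    # usage: fold each newly sampled candidate back into the population
    rng = random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(12)] for _ in range(30)]
    fits = [sum(x) for x in pop]
    rtr_insert(pop, fits, [1] * 12, 12, rng=rng)
    print(max(fits))   # prints 12: the candidate displaced its closest match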
Bayesian Optimization Algorithm: From Single Level to Hierarchy
, 2002
Cited by 101 (19 self)
Abstract:
There are four primary goals of this dissertation. First, design a competent optimization algorithm capable of learning and exploiting appropriate problem decomposition by sampling and evaluating candidate solutions. Second, extend the proposed algorithm to enable the use of hierarchical decomposition as opposed to decomposition on only a single level. Third, design a class of difficult hierarchical problems that can be used to test the algorithms that attempt to exploit hierarchical decomposition. Fourth, test the developed algorithms on the designed class of problems and several real-world applications. The dissertation proposes the Bayesian optimization algorithm (BOA), which uses Bayesian networks to model the promising solutions found so far and sample new candidate solutions. BOA is theoretically and empirically shown to be capable of both learning a proper decomposition of the problem and exploiting the learned decomposition to ensure robust and scalable search for the optimum across a wide range of problems. The dissertation then identifies important features that must be incorporated into the basic BOA to solve problems that are not decomposable on a single level, but that can still be solved by decomposition over multiple levels of difficulty. Hierarchical ...
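The "sample new candidate solutions" half of BOA is simple enough to show directly. Below is a sketch of ancestral sampling from a Bayesian network over binary variables; the three-variable network and its probability tables are made up for illustration, not learned from data.

    import random

    # var -> (parents, CPT mapping parent assignments to P(var = 1));
    # the keys happen to be listed in topological order already
    network = {
        0: ((), {(): 0.7}),
        1: ((0,), {(0,): 0.2, (1,): 0.9}),
        2: ((0, 1), {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.95}),
    }

    def ancestral_sample(net, rng=random):
        x = {}
        for var in sorted(net):                      # parents before children
            parents, cpt = net[var]
            p1 = cpt[tuple(x[p] for p in parents)]   # condition on sampled parents
            x[var] = 1 if rng.random() < p1 else 0
        return [x[v] for v in sorted(net)]

    print([ancestral_sample(network) for _ in range(5)])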
Linkage Problem, Distribution Estimation, and Bayesian Networks
, 2000
Cited by 101 (21 self)
Abstract:
This paper proposes an algorithm that uses an estimation of the joint distribution of promising solutions in order to generate new candidate solutions. The algorithm is set in the context of genetic and evolutionary computation and of algorithms based on the estimation of distributions. The proposed algorithm is called the Bayesian Optimization Algorithm (BOA). To estimate the distribution of promising solutions, techniques for modeling multivariate data with Bayesian networks are used. The BOA identifies, reproduces, and mixes building blocks up to a specified order. It is independent of the ordering of the variables in the strings representing the solutions. Moreover, prior information about the problem can be incorporated into the algorithm, but it is not essential. First experiments were done with additively decomposable problems with both nonoverlapping and overlapping building blocks. The proposed algorithm is able to solve all but one of the tested problems in linear or close-to-linear time with respect to the problem size. Except for the maximal order of interactions to be covered, the algorithm does not use any prior knowledge about the problem. The BOA represents a step toward alleviating the problem of identifying and mixing building blocks correctly to obtain good solutions for problems with very limited domain information.
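A full Bayesian-network learner is too long to sketch here, so the stand-in below keeps BOA's select-model-sample loop but degrades the model to the edgeless network, i.e. independent per-bit marginals; this is UMDA rather than BOA proper, and all constants are assumed.

    import random

    def eda(f, n, pop_size=100, generations=50, trunc=0.5, seed=0):
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=f, reverse=True)
            selected = pop[: int(trunc * pop_size)]               # promising solutions
            p = [sum(x[i] for x in selected) / len(selected)      # "learn" the model:
                 for i in range(n)]                               # per-bit frequencies
            pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                   for _ in range(pop_size)]                      # sample new candidates
        return max(pop, key=f)

    print(sum(eda(sum, n=40)))   # onemax: close to 40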
Evaluation-relaxation schemes for genetic and evolutionary algorithms
, 2002
Cited by 68 (27 self)
Abstract:
Genetic and evolutionary algorithms have been increasingly applied to solve complex, large-scale search problems with mixed success. Competent genetic algorithms have been proposed to solve hard problems quickly, reliably, and accurately. They have made problems that were difficult for earlier GAs solvable, requiring only a subquadratic number of function evaluations. To facilitate solving large-scale complex problems, and to further enhance the performance of competent GAs, various efficiency-enhancement techniques have been developed. This study investigates one such class of efficiency-enhancement techniques, called evaluation relaxation. Evaluation-relaxation schemes replace a high-cost, low-error fitness function with a low-cost, high-error fitness function. The error in fitness functions comes in two flavors: bias and variance. The presence of bias and variance in fitness functions is considered in isolation, and strategies for increasing efficiency in both cases are developed. Specifically, approaches have been developed for choosing between two fitness functions with either differing variance or differing bias values. This thesis also investigates fitness inheritance as an evaluation-relaxation scheme.
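Fitness inheritance, where the abstract ends, is easy to make concrete. A minimal sketch, assuming the common variant in which an inherited fitness is the mean of the parents' fitnesses; the inheritance probability is an assumed knob.

    import random

    def evaluate_offspring(child, parent_fits, expensive_f, p_inherit=0.7, rng=random):
        """Evaluation relaxation via fitness inheritance: with probability p_inherit,
        estimate the child's fitness cheaply (but with error) from its parents."""
        if rng.random() < p_inherit:
            return sum(parent_fits) / len(parent_fits)   # low-cost, high-error estimate
        return expensive_f(child)                        # high-cost, low-error evaluation

    # usage with a toy "expensive" function
    child = [1, 0, 1, 1]
    print(evaluate_offspring(child, parent_fits=[3.0, 2.0], expensive_f=sum))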
Model-based search for combinatorial optimization
, 2001
Cited by 64 (12 self)
Abstract:
In this paper we introduce model-based search as a unifying framework accommodating some recently proposed heuristics for combinatorial optimization, such as ant colony optimization, stochastic gradient ascent, cross-entropy, and estimation-of-distribution methods. We discuss similarities as well as distinctive features of each method, propose some extensions, and present a comparative experimental study of these algorithms.
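The unifying loop itself fits in a few lines: maintain a parametric probability model over solutions, sample a batch, and shift the model toward the better samples. The sketch below instantiates it in the cross-entropy style on independent Bernoulli parameters; batch size, elite count, and smoothing factor are assumptions.

    import random

    def model_based_search(f, n, batch=50, elite=10, alpha=0.7, iters=40, seed=0):
        rng = random.Random(seed)
        p = [0.5] * n                                          # the probabilistic model
        for _ in range(iters):
            samples = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                       for _ in range(batch)]
            samples.sort(key=f, reverse=True)
            freq = [sum(x[i] for x in samples[:elite]) / elite for i in range(n)]
            p = [alpha * fr + (1 - alpha) * pi                 # shift the model toward
                 for fr, pi in zip(freq, p)]                   # the elite samples
        return p

    p = model_based_search(sum, n=20)
    print([round(v, 2) for v in p])   # parameters drift toward 1 on onemax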
Hierarchical BOA Solves Ising Spin Glasses and MAXSAT
- In Proc. of the Genetic and Evolutionary Computation Conference (GECCO 2003), number 2724 in LNCS
, 2003
Cited by 56 (19 self)
Abstract:
Theoretical and empirical evidence exists that the hierarchical Bayesian optimization algorithm (hBOA) can solve challenging hierarchical problems and anything easier. This paper applies hBOA to two important classes of real-world problems: Ising spin-glass systems and maximum satisfiability (MAXSAT). The paper shows how easy it is to apply hBOA to real-world optimization problems. The results indicate that hBOA is capable of solving enormously difficult problems that cannot be solved by other optimizers, while still providing competitive or better performance than problem-specific approaches on other problems. The results thus confirm that hBOA is a practical, robust, and scalable technique for solving challenging real-world problems.
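To make the first problem class concrete: an Ising spin-glass instance is a grid of couplings, a candidate solution assigns each site a spin in {-1, +1}, and the optimizer seeks a ground state, a configuration of minimum energy. Random +/-1 couplings with periodic boundaries, as sketched below, are one common benchmark setup assumed here; hBOA would encode the spins as a bit string and minimize this energy.

    import random

    def random_couplings(n, rng):
        # J[(i, j, "r")] couples site (i, j) to (i, j+1); "d" couples it to (i+1, j)
        return {(i, j, d): rng.choice((-1, 1))
                for i in range(n) for j in range(n) for d in ("r", "d")}

    def energy(spins, J, n):
        """E = -sum over neighboring pairs of J_ij * s_i * s_j (periodic boundaries)."""
        e = 0
        for i in range(n):
            for j in range(n):
                e -= J[(i, j, "r")] * spins[i][j] * spins[i][(j + 1) % n]
                e -= J[(i, j, "d")] * spins[i][j] * spins[(i + 1) % n][j]
        return e

    rng = random.Random(0)
    n = 8
    J = random_couplings(n, rng)
    spins = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    print(energy(spins, J, n))   # a ground state minimizes this over all 2**(n*n) spins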
Relevance Estimation and Value Calibration of Evolutionary Algorithm Parameters
, 2007
Cited by 55 (13 self)
Abstract:
The main objective of this paper is to present and evaluate a method that helps to calibrate the parameters of an evolutionary algorithm in a systematic and semi-automated manner. The method for Relevance Estimation and Value Calibration of EA parameters (REVAC) is empirically evaluated in two different ways. First, we use abstract test cases reflecting the typical properties of EA parameter spaces; here we observe that REVAC is able to approximate the exact (hand-coded) relevance of parameters and that it works robustly with measurement noise that is highly variable and not normally distributed. Second, we use REVAC to calibrate GAs for a number of common objective functions; here we obtain a common-sense validation: REVAC finds the mutation rate pm to be much more sensitive than the crossover rate pc, and it recommends intuitively sound values: pm between 0.01 and 0.1, and 0.6 ≤ pc ≤ 1.0.
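As a rough illustration of the calibration loop only (much simplified: REVAC proper creates children by multi-parent crossover with entropy-controlled mutation and also reports parameter relevance, none of which is reproduced here), the sketch below keeps a population of (pm, pc) vectors and repeatedly replaces the worst with a jittered recombination of the best. The utility function is a made-up surrogate standing in for "run the EA with these parameters and measure its performance".

    import random

    def utility(params):                 # stand-in for an actual EA run;
        pm, pc = params                  # this toy surrogate peaks near pm=0.05, pc=0.8
        return -100 * (pm - 0.05) ** 2 - (pc - 0.8) ** 2

    def calibrate(pop_size=50, best_k=25, iters=500, seed=0):
        rng = random.Random(seed)
        pop = [(rng.random(), rng.random()) for _ in range(pop_size)]
        for _ in range(iters):
            pop.sort(key=utility, reverse=True)
            parents = pop[:best_k]
            # child: per-parameter uniform crossover of the best vectors plus jitter
            child = tuple(min(1.0, max(0.0, rng.choice(parents)[d] + rng.gauss(0, 0.02)))
                          for d in range(2))
            pop[-1] = child              # replace the worst parameter vector
        return max(pop, key=utility)

    pm, pc = calibrate()
    print(round(pm, 3), round(pc, 3))    # settles near the surrogate's optimum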