Results 1–10 of 217
Ant algorithms for discrete optimization
 ARTIFICIAL LIFE
, 1999
Abstract

Cited by 476 (42 self)
This article presents an overview of recent work on ant algorithms, that is, algorithms for discrete optimization that took inspiration from the observation of ant colonies’ foraging behavior, and introduces the ant colony optimization (ACO) metaheuristic. In the first part of the article the basic biological findings on real ants are reviewed and their artificial counterparts as well as the ACO metaheuristic are defined. In the second part of the article a number of applications of ACO algorithms to combinatorial optimization and routing in communications networks are described. We conclude with a discussion of related work and of some of the most important aspects of the ACO metaheuristic.
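As a concrete illustration of the metaheuristic surveyed above, the following is a minimal Ant System-style sketch for a symmetric TSP. The parameter values (colony size, alpha, beta, evaporation rate rho) are illustrative assumptions, not taken from the article:

```python
import random

def ant_colony_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
                   rho=0.5, seed=0):
    """Ant System sketch for a symmetric TSP given a distance matrix."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]            # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                # transition weight ~ tau^alpha * (1/dist)^beta
                cand = [(j, tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta)
                        for j in unvisited]
                r = rng.random() * sum(w for _, w in cand)
                for j, w in cand:                  # roulette-wheel choice
                    r -= w
                    if r <= 0:
                        break
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                         # evaporation
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for tour, length in tours:                 # deposit: shorter = more
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len
```

On a four-city instance the colony converges on the shortest cycle within a few iterations; positive feedback through the pheromone matrix is the mechanism the abstract refers to as inspired by foraging trails.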
Traffic and related self-driven many-particle systems
, 2000
Abstract

Cited by 336 (38 self)
Since the subject of traffic dynamics has captured the interest of physicists, many surprising effects have been revealed and explained. Some of the questions now understood are the following: Why are vehicles sometimes stopped by "phantom traffic jams" even though drivers all like to drive fast? What are the mechanisms behind stop-and-go traffic? Why are there several different kinds of congestion, and how are they related? Why do most traffic jams occur considerably before the road capacity is reached? Can a temporary reduction in the volume of traffic cause a lasting traffic jam? Under which conditions can speed limits speed up traffic? Why do pedestrians moving in opposite directions normally organize into lanes, while similar systems "freeze by heating"? All of these questions have been answered by applying and extending methods from statistical physics and nonlinear dynamics to self-driven many-particle systems. This article considers the empirical data and then reviews the main approaches to modeling pedestrian and vehicle traffic. These include microscopic (particle-based), mesoscopic (gas-kinetic), and macroscopic (fluid-dynamic) models. Attention is also paid to the formulation of a micro-macro link, to aspects of universality, and to other unifying concepts, such as a general modeling framework for self-driven many-particle systems, including spin systems. While the primary focus is upon vehicle and pedestrian traffic, applications to biological or socioeconomic systems such as bacterial colonies, flocks of birds, panics, and stock market dynamics are touched upon as well.
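The microscopic (particle-based) approach mentioned above can be illustrated with the Nagel-Schreckenberg cellular automaton, a standard single-lane model from this literature in which the random-slowdown step seeds spontaneous jams. All parameters below are illustrative, not values from the article:

```python
import random

def nagel_schreckenberg(road_len=100, n_cars=30, v_max=5, p_slow=0.3,
                        steps=100, seed=1):
    """Single-lane ring-road traffic via the Nagel-Schreckenberg automaton.

    Returns final car positions and velocities after `steps` parallel updates.
    """
    rng = random.Random(seed)
    pos = sorted(rng.sample(range(road_len), n_cars))   # distinct cells
    vel = [0] * n_cars
    for _ in range(steps):
        for i in range(n_cars):
            # empty cells to the car ahead (ring road, so wrap around)
            gap = (pos[(i + 1) % n_cars] - pos[i] - 1) % road_len
            vel[i] = min(vel[i] + 1, v_max)   # 1. accelerate
            vel[i] = min(vel[i], gap)         # 2. brake to avoid collision
            if vel[i] > 0 and rng.random() < p_slow:
                vel[i] -= 1                   # 3. random slowdown (jam seed)
        pos = [(p + v) % road_len for p, v in zip(pos, vel)]  # 4. move
    return pos, vel
```

At densities above roughly 1/(v_max + 1), backward-moving clusters of stopped cars emerge in this model even though every driver "likes to drive fast", which is the phantom-jam phenomenon the abstract describes.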
Evolutionary computation: Comments on the history and current state
 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION
, 1997
Abstract

Cited by 274 (0 self)
Evolutionary computation has started to receive significant attention during the last decade, although its origins can be traced back to the late 1950s. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) [with links to genetic programming (GP) and classifier systems (CS)], evolution strategies (ES), and evolutionary programming (EP), by analysis and comparison of their most important constituents (i.e., representations, variation operators, reproduction, and selection mechanisms). Finally, we give a brief overview of the manifold of application domains, although this necessarily must remain incomplete.
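The constituents the survey compares (a representation, variation operators, and a selection mechanism) share one generic loop across GA, ES, and EP. A minimal sketch, here instantiated as a simple GA on the OneMax bitstring problem with illustrative settings:

```python
import random

def onemax_ga(n_bits=20, pop_size=30, generations=60, p_mut=0.05,
              p_cross=0.9, seed=0):
    """Canonical GA sketch: bitstring representation, binary-tournament
    selection, one-point crossover, bit-flip mutation, OneMax fitness."""
    rng = random.Random(seed)
    fitness = sum                                  # OneMax: count the 1s
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        for _ in range(pop_size):
            p1, p2 = tournament()[:], tournament()[:]
            if rng.random() < p_cross:             # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1 = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in p1]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

Swapping the representation for real vectors and the mutation for Gaussian perturbation yields the ES/EP variants the survey describes; the loop itself is unchanged.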
A Survey of Evolution Strategies
 Proceedings of the Fourth International Conference on Genetic Algorithms
, 1991
Abstract

Cited by 259 (3 self)
Similar to Genetic Algorithms, Evolution Strategies (ESs) are algorithms which imitate the principles of natural evolution as a method to solve parameter optimization problems. The development of Evolution Strategies from the first mutation-selection scheme to the refined (μ,λ)-ES, including the general concept of self-adaptation of the strategy parameters for the mutation variances as well as their covariances, is described.

1 Introduction

The idea to use principles of organic evolution processes as rules for optimum-seeking procedures emerged independently on both sides of the Atlantic ocean more than two decades ago. Both approaches rely upon imitating the collective learning paradigm of natural populations, based upon Darwin's observations and the modern synthetic theory of evolution. In the USA, Holland introduced Genetic Algorithms in the 1960s, embedded into the general framework of adaptation [Hol75]. He also mentioned the applicability to parameter optimization, which was fir...
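A minimal sketch of the (μ,λ)-ES with self-adaptation, simplified to one global step size per individual rather than the per-variable variances and covariances the survey covers; population sizes and the learning rate are illustrative:

```python
import math
import random

def sphere(x):
    """Simple test objective: sum of squares, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def mu_lambda_es(f, dim=5, mu=5, lam=35, generations=200, seed=0):
    """(mu, lambda)-ES sketch: each individual carries its own step size,
    which is mutated log-normally before being applied to the variables."""
    rng = random.Random(seed)
    tau = 1.0 / math.sqrt(dim)                     # learning rate for sigma
    parents = [([rng.uniform(-5.0, 5.0) for _ in range(dim)], 1.0)
               for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            x, sigma = parents[rng.randrange(mu)]
            sigma *= math.exp(tau * rng.gauss(0.0, 1.0))   # adapt first
            x = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
            offspring.append((x, sigma))
        offspring.sort(key=lambda ind: f(ind[0]))  # comma selection:
        parents = offspring[:mu]                   # parents are discarded
    return parents[0]
```

Because selection acts on individuals that carry their own sigma, step sizes that produce good offspring survive with them, which is the self-adaptation mechanism the abstract highlights.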
Self-Adaptation in Genetic Algorithms
 Proceedings of the First European Conference on Artificial Life
, 1992
Abstract

Cited by 127 (2 self)
Within Genetic Algorithms (GAs) the mutation rate is mostly handled as a global, external parameter, which is either constant over time or exogenously changed over time. In this paper a new approach is presented, which transfers a basic idea from Evolution Strategies (ESs) to GAs. Mutation rates are changed into endogenous items which adapt during the search process. First experimental results are presented, which indicate that environment-dependent self-adaptation of appropriate settings for the mutation rate is possible even for GAs. Furthermore, the reduction of the number of external parameters of a GA is seen as a first step towards achieving a problem-dependent self-adaptation of the algorithm.

Introduction

Natural evolution has proven to be a powerful mechanism for the emergence and improvement of the living beings on our planet by performing a randomized search in the space of possible DNA sequences. Due to this knowledge about the qualities of natural evolution, some resea...
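The endogenous-mutation-rate idea can be sketched as follows. The log-normal rate update below is a borrowed ES-style rule, an assumption for illustration rather than the paper's exact mechanism, and all numeric settings are illustrative:

```python
import math
import random

def self_adaptive_ga(n_bits=20, pop_size=40, generations=80, seed=3):
    """GA sketch in which each individual carries its own mutation rate.

    The rate is perturbed log-normally and then applied to the bitstring,
    so useful rates hitchhike with good solutions (OneMax fitness)."""
    rng = random.Random(seed)
    gamma = 0.2                                    # rate-learning strength
    pop = [([rng.randint(0, 1) for _ in range(n_bits)], 0.05)
           for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(2 * pop_size):
            bits, rate = pop[rng.randrange(pop_size)]
            # mutate the mutation rate first, then use the new rate
            rate = min(0.5, max(0.001,
                                rate * math.exp(gamma * rng.gauss(0.0, 1.0))))
            bits = [1 - b if rng.random() < rate else b for b in bits]
            offspring.append((bits, rate))
        offspring.sort(key=lambda ind: sum(ind[0]), reverse=True)
        pop = offspring[:pop_size]                 # truncation selection
    return pop[0]
```

Note that the mutation rate never appears as an external parameter of the search loop, which is the reduction of exogenous parameters the paper argues for.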
Evolution Strategies for Vector Optimization
 Parallel Problem Solving from Nature. 1st Workshop, PPSN I, volume 496 of Lecture Notes in Computer Science
, 1992
Abstract

Cited by 121 (2 self)
Evolution strategies, a stochastic optimization method originally designed for single-criterion problems, have been modified in such a way that they can also tackle multiple-criteria problems. Instead of computing only one efficient solution interactively, a decision maker can collect as many members of the Pareto set as needed before making up his mind. Apart from this feature, one could also regard the algorithm as a simple model of biological evolution. Following this idea one might emphasize the algorithm's capability of self-adapting its parameters. Furthermore, the effect of polyploid individuals corresponds in both 'worlds'.

1 Introduction

It has become increasingly obvious that optimization under a single scalar-valued criterion, often a monetary one, fails to reflect the variety of aspects in a world getting more and more complex. Although V. Pareto [4] laid the mathematical foundations already about a hundred years ago, the existing tools for multiple c...
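The Pareto-set collection described above can be sketched as a simple non-dominated archive, assuming minimization of all objectives; this is the generic bookkeeping, not the paper's specific ES variant:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_archive(points):
    """Accumulate the non-dominated members of a stream of objective
    vectors, the way a decision maker would collect Pareto-set members."""
    archive = []
    for p in points:
        if any(dominates(q, p) for q in archive):
            continue                       # p is dominated: discard it
        archive = [q for q in archive if not dominates(p, q)]
        archive.append(p)                  # p displaces what it dominates
    return archive
```

Feeding the archive with solutions from successive ES generations yields the growing approximation of the Pareto set among which the decision maker chooses.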
Reevaluating Genetic Algorithm Performance under Coordinate Rotation of Benchmark Functions - A survey of some theoretical and practical aspects of genetic algorithms
 BioSystems
, 1995
Abstract

Cited by 108 (18 self)
This work analyzes some concepts of genetic algorithms and explains why they may be applied with success to some problems in function optimization. In addition to other performance properties, it has been shown that genetic algorithms are able to overcome local minima in highly multimodal functions (e.g., Rastrigin, Schwefel). The performance of genetic algorithms is supported by an extensive theory, which is based on the assumption of additive gene effects. But the current work shows that the assumption of additive gene effects is not weak, and that the dependence on specific parameter settings is much stronger than often believed. Furthermore, the assumptions regarding the fitness function are so restrictive that slight modifications of the standard test functions cause a failure of the optimization procedure even though the function's structure is preserved. The current experiments focus on a few widely used scalable test functions. The results indicate that a standard g...
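The "slight modification" the abstract refers to, coordinate rotation, can be sketched for the Rastrigin function in two dimensions. The rotation angle below is arbitrary; any non-trivial rotation destroys the additive, gene-wise decomposability while preserving the function's landscape:

```python
import math

def rastrigin(x):
    """Separable Rastrigin test function; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

def rotate_2d(x, angle):
    """Rotate a 2-D point about the origin."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * x[0] - s * x[1], s * x[0] + c * x[1])

def rotated_rastrigin(x, angle=math.pi / 6):
    """Composing with a rotation couples the variables: the value at one
    coordinate now depends on the other, breaking additive gene effects."""
    return rastrigin(rotate_2d(x, angle))
```

A GA whose theory assumes bit- or gene-wise independent contributions sees the same global minimum and the same multimodality, but can no longer optimize each coordinate separately, which is the failure mode the experiments exhibit.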
Learning Bayesian Networks by Genetic Algorithms. A case study in the prediction of survival in malignant skin melanoma
, 1997
Abstract

Cited by 101 (11 self)
In this work we introduce a methodology based on Genetic Algorithms for the automatic induction of Bayesian Networks from a file containing cases and variables related to the problem. The methodology is applied to the problem of predicting survival of people after one, three and five years of being diagnosed as having malignant skin melanoma. The accuracy of the obtained model, measured in terms of the percentage of well-classified subjects, is compared to that obtained by the so-called naive Bayes classifier. In both cases, the estimate of the model accuracy is obtained by the 10-fold cross-validation method.

1. Introduction

Expert systems, one of the most developed areas in the field of Artificial Intelligence, are computer programs designed to help or replace human beings in tasks in which human experience and knowledge are scarce and unreliable. Although there are domains in which the tasks can be specified by logic rules, other domains are characterized by an uncertainty inherent...
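The 10-fold cross-validation estimate used above can be sketched generically. The `train_fn(train_pairs) -> predict(x)` interface is a hypothetical stand-in for the paper's two classifiers (the GA-induced Bayesian network and naive Bayes), and the interleaved fold assignment is one of several equally valid choices:

```python
def k_fold_accuracy(xs, ys, train_fn, k=10):
    """Estimate classification accuracy by k-fold cross-validation.

    Each fold is held out once; the model is trained on the rest and
    scored on the held-out cases. Returns the pooled accuracy."""
    n = len(xs)
    correct = 0
    for fold in range(k):
        test_idx = set(range(fold, n, k))          # interleaved folds
        train = [(xs[i], ys[i]) for i in range(n) if i not in test_idx]
        predict = train_fn(train)
        correct += sum(predict(xs[i]) == ys[i] for i in test_idx)
    return correct / n
```

Running both classifiers through the same folds makes their accuracy percentages directly comparable, which is how the paper frames its comparison.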
Bayesian Optimization Algorithm: From Single Level to Hierarchy
, 2002
Abstract

Cited by 101 (19 self)
There are four primary goals of this dissertation. First, design a competent optimization algorithm capable of learning and exploiting appropriate problem decomposition by sampling and evaluating candidate solutions. Second, extend the proposed algorithm to enable the use of hierarchical decomposition as opposed to decomposition on only a single level. Third, design a class of difficult hierarchical problems that can be used to test algorithms that attempt to exploit hierarchical decomposition. Fourth, test the developed algorithms on the designed class of problems and several real-world applications. The dissertation proposes the Bayesian optimization algorithm (BOA), which uses Bayesian networks to model the promising solutions found so far and sample new candidate solutions. BOA is theoretically and empirically shown to be capable of both learning a proper decomposition of the problem and exploiting the learned decomposition to ensure robust and scalable search for the optimum across a wide range of problems. The dissertation then identifies important features that must be incorporated into the basic BOA to solve problems that are not decomposable on a single level, but that can still be solved by decomposition over multiple levels of difficulty. Hierarchical...
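BOA's build-model/sample loop can be sketched in heavily simplified form. The sketch below substitutes a univariate marginal model (UMDA-style) for the Bayesian network BOA actually learns, so it captures no dependencies between variables and cannot exploit the decompositions the dissertation is about; it only illustrates the select-model-sample cycle, with illustrative settings:

```python
import random

def univariate_eda(n_bits=20, pop_size=60, n_select=20, generations=40,
                   seed=0):
    """Estimation-of-distribution sketch in the style of BOA's loop:
    select promising solutions, fit a probabilistic model of them, and
    sample the next population from the model (OneMax fitness)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)
        selected = pop[:n_select]                  # promising solutions
        # "model building": marginal frequency of a 1 at each position,
        # clamped away from 0 and 1 to keep some sampling diversity
        probs = [min(0.95, max(0.05,
                               sum(ind[i] for ind in selected) / n_select))
                 for i in range(n_bits)]
        # "model sampling": draw the next population from the model
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
    return max(pop, key=sum)
```

Replacing the independent per-bit probabilities with a Bayesian network learned from the selected solutions, and sampling new candidates from that network, yields BOA proper; the hierarchical extension layers this decomposition over multiple levels.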
Shifting Inductive Bias with Success-Story Algorithm, Adaptive Levin Search, and Incremental Self-Improvement
 MACHINE LEARNING
, 1997
Abstract

Cited by 76 (33 self)
We study task sequences that allow for speeding up the learner's average reward intake through appropriate shifts of inductive bias (changes of the learner's policy). To evaluate long-term effects of bias shifts setting the stage for later bias shifts we use the "success-story algorithm" (SSA). SSA is occasionally called at times that may depend on the policy itself. It uses backtracking to undo those bias shifts that have not been empirically observed to trigger long-term reward accelerations (measured up until the current SSA call). Bias shifts that survive SSA represent a lifelong success history. Until the next SSA call, they are considered useful and build the basis for additional bias shifts. SSA allows for plugging in a wide variety of learning algorithms. We plug in (1) a novel, adaptive extension of Levin search and (2) a method for embedding the learner's policy modification strategy within the policy itself (incremental self-improvement). Our inductive transfer case studies...
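The backtracking test described above can be sketched as follows. The `(time, cumulative_reward_at_shift)` bookkeeping format and the exact undo rule are illustrative assumptions distilled from the abstract, not the paper's precise formulation:

```python
def ssa_filter(shifts, total_reward, now):
    """Sketch of the success-story criterion: keep only bias shifts whose
    reward-per-time since the shift strictly increases along the stack.

    `shifts` holds (time, cumulative_reward_at_shift) pairs, oldest first.
    Shifts that fail the test are undone together with everything after
    them, mirroring SSA's backtracking."""
    stack = [(0, 0.0)] + list(shifts)       # sentinel: start of lifetime
    def speed(entry):
        t, r = entry
        return (total_reward - r) / (now - t)
    changed = True
    while changed:                          # undo shifts that did not
        changed = False                     # accelerate long-term reward
        for i in range(1, len(stack)):
            if speed(stack[i]) <= speed(stack[i - 1]):
                del stack[i:]               # this shift and all later ones
                changed = True
                break
    return stack[1:]                        # surviving success history
```

Shifts that survive the filter form the "lifelong success history" the abstract describes: each one empirically accelerated the average reward intake measured up to the current call.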