Adaptive Constraint Satisfaction
 WORKSHOP OF THE UK PLANNING AND SCHEDULING
, 1996
Abstract

Cited by 809 (43 self)
Many different approaches have been applied to constraint satisfaction. These range from complete backtracking algorithms to sophisticated distributed configurations. However, most research effort in the field of constraint satisfaction algorithms has concentrated on the use of a single algorithm for solving all problems. At the same time, a consensus appears to have developed to the effect that it is unlikely that any single algorithm is always the best choice for all classes of problem. In this paper we argue that an adaptive approach should play an important part in constraint satisfaction. This approach relaxes the commitment to using a single algorithm once search commences. As a result, we claim that it is possible to undertake a more focused approach to problem solving, allowing for the correction of bad algorithm choices and for capitalising on opportunities for gain by dynamically changing to more suitable candidates.
The Ant System: Optimization by a colony of cooperating agents
 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, PART B
, 1996
Abstract

Cited by 801 (47 self)
An analogy with the way ant colonies function has suggested the definition of a new computational paradigm, which we call Ant System. We propose it as a viable new approach to stochastic combinatorial optimization. The main characteristics of this model are positive feedback, distributed computation, and the use of a constructive greedy heuristic. Positive feedback accounts for rapid discovery of good solutions, distributed computation avoids premature convergence, and the greedy heuristic helps find acceptable solutions in the early stages of the search process. We apply the proposed methodology to the classical Traveling Salesman Problem (TSP), and report simulation results. We also discuss parameter selection and the early setups of the model, and compare it with tabu search and simulated annealing using TSP. To demonstrate the robustness of the approach, we show how the Ant System (AS) can be applied to other optimization problems like the asymmetric traveling salesman, the quadrat...
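The mechanics the abstract outlines (pheromone trails reinforced by positive feedback, combined with a greedy distance heuristic) can be sketched for the TSP roughly as follows. This is a toy illustration, not the authors' implementation; the parameter names `alpha`, `beta`, `rho`, `q` are conventional, but the values here are arbitrary:

```python
import random

def ant_system_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
                   rho=0.5, q=1.0, seed=0):
    """Minimal Ant System sketch for the symmetric TSP."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, visited = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in visited]
                # transition rule: pheromone^alpha * (1/distance)^beta
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in cand]
                j = rng.choices(cand, weights=w)[0]
                tour.append(j)
                visited.add(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporation, then each ant deposits q / tour_length on its edges
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len
```

On a tiny instance this reliably recovers the shortest ring tour; refinements such as elitist ants or candidate lists, common in later work, are omitted.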
Parameter control in evolutionary algorithms
 IEEE Transactions on Evolutionary Computation
Abstract

Cited by 236 (30 self)
The issue of setting the values of various parameters of an evolutionary algorithm is crucial for good performance. In this paper we discuss how to do this, beginning with the issue of whether these values are best set in advance or best changed during evolution. We provide a classification of different approaches based on a number of complementary features, and pay special attention to setting parameters on the fly. This has the potential of adjusting the algorithm to the problem while solving the problem. This paper is intended to present a survey rather than a set of prescriptive details for implementing an EA for a particular type of problem. For this reason we have chosen to interleave a number of examples throughout the text. Thus we hope both to clarify the points we raise as we present them and to give the reader a feel for some of the many possibilities available for controlling different parameters.
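As a concrete instance of on-the-fly (deterministic) parameter control of the kind such surveys classify, one can tie the mutation rate of a simple (1+1)-EA to the generation counter. The schedule below is an invented example, not one prescribed by the paper:

```python
import random

def one_plus_one_ea(n=50, max_gens=2000, seed=1):
    """(1+1)-EA on OneMax with a deterministic mutation-rate schedule:
    the per-bit flip probability decays from 1/2 toward 1/n over time,
    a simple example of deterministic parameter control."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    for gen in range(1, max_gens + 1):
        # schedule: exploratory early on, fine-tuning (rate 1/n) later
        p = max(1.0 / n, 0.5 / gen)
        y = [b ^ (rng.random() < p) for b in x]
        fy = sum(y)
        if fy >= fx:            # elitist acceptance
            x, fx = y, fy
        if fx == n:
            break
    return fx, gen
```

Swapping the fixed schedule for feedback from the search (adaptive control) or for evolved parameters (self-adaptation) yields the other two categories such a classification distinguishes.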
A Survey of Evolution Strategies
 Proceedings of the Fourth International Conference on Genetic Algorithms
, 1991
Abstract

Cited by 224 (3 self)
Similar to Genetic Algorithms, Evolution Strategies (ESs) are algorithms which imitate the principles of natural evolution as a method to solve parameter optimization problems. The development of Evolution Strategies, from the first mutation-selection scheme to the refined (μ,λ)-ES including the general concept of self-adaptation of the strategy parameters for the mutation variances as well as their covariances, is described. The idea of using principles of organic evolution processes as rules for optimum-seeking procedures emerged independently on both sides of the Atlantic Ocean more than two decades ago. Both approaches rely upon imitating the collective learning paradigm of natural populations, based upon Darwin's observations and the modern synthetic theory of evolution. In the USA, Holland introduced Genetic Algorithms in the 1960s, embedded in the general framework of adaptation [Hol75]. He also mentioned their applicability to parameter optimization, which was fir...
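The self-adaptation idea mentioned above (each individual carries and mutates its own strategy parameters) can be sketched as a minimal (μ,λ)-ES with a single step size per individual. Names and constants here are illustrative; the full scheme described in the ES literature also self-adapts per-coordinate variances and their covariances:

```python
import math
import random

def es_mu_lambda(f, dim, mu=5, lam=30, gens=200, seed=0):
    """Sketch of a (mu, lambda)-ES with self-adaptive step sizes:
    each individual carries its own mutation strength sigma, which is
    itself mutated (log-normally) before being used to perturb x."""
    rng = random.Random(seed)
    tau = 1.0 / math.sqrt(dim)                 # learning rate for sigma
    pop = [([rng.uniform(-5, 5) for _ in range(dim)], 1.0)
           for _ in range(mu)]
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            x, sigma = rng.choice(pop)         # parent selection
            s = sigma * math.exp(tau * rng.gauss(0, 1))  # mutate sigma first
            y = [xi + s * rng.gauss(0, 1) for xi in x]   # then mutate x
            offspring.append((y, s))
        # comma selection: the mu best offspring replace the parents
        offspring.sort(key=lambda ind: f(ind[0]))
        pop = offspring[:mu]
    return min(pop, key=lambda ind: f(ind[0]))
```

Because sigma is selected along with the solution it produced, good step sizes hitchhike through the generations, which is the essence of self-adaptation.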
A Survey of Automated Timetabling
 ARTIFICIAL INTELLIGENCE REVIEW
, 1999
Abstract

Cited by 143 (13 self)
The timetabling problem consists of fixing a sequence of meetings between teachers and students in a fixed period of time (typically a week), satisfying a set of constraints of various types. A large number of variants of the timetabling problem have been proposed in the literature, which differ from each other in the type of institution involved (university or high school) and the type of constraints. This problem, which has traditionally been studied in operational research, has recently also been tackled with techniques from artificial intelligence (e.g. genetic algorithms, tabu search, simulated annealing, and constraint satisfaction). In this paper, we survey the various formulations of the problem, and the techniques and algorithms used for its solution.
Evolution in time and space  the parallel genetic algorithm
 FOUNDATIONS OF GENETIC ALGORITHMS
, 1991
Abstract

Cited by 108 (13 self)
The parallel genetic algorithm (PGA) uses two major modifications compared to the genetic algorithm. Firstly, selection for mating is distributed: individuals live in a 2D world, and each individual selects a mate independently in its neighborhood. Secondly, each individual may improve its fitness during its lifetime, e.g. by local hill-climbing. The PGA is totally asynchronous, running with maximal efficiency on MIMD parallel computers. The search strategy of the PGA is based on a small number of active and intelligent individuals, whereas a GA uses a large population of passive individuals. We investigate the PGA with deceptive problems and the traveling salesman problem, and outline why and when the PGA is successful. Abstractly, a PGA is a parallel search with information exchange between the individuals. If we represent the optimization problem as a fitness landscape in a certain configuration space, we see that a PGA tries to jump from two local minima to a third, still better local minimum by using the crossover operator. This jump is (probabilistically) successful if the fitness landscape has a certain correlation. We show this correlation for the traveling salesman problem by a configuration-space analysis. The PGA implicitly exploits this correlation.
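A minimal cellular rendering of the two modifications described (neighborhood mate selection on a 2D grid, plus lifetime improvement by hill-climbing) might look like this. It is a synchronous, single-threaded toy under invented parameters, whereas the actual PGA is asynchronous and runs on MIMD hardware:

```python
import random

def parallel_ga(f, n_bits=20, side=6, gens=60, seed=0):
    """Toy cellular/parallel GA: individuals live on a 2D torus, each
    picks the best mate among its four grid neighbours, does uniform
    crossover, then improves itself by one bit-flip hill-climbing step."""
    rng = random.Random(seed)
    grid = [[[rng.randint(0, 1) for _ in range(n_bits)]
             for _ in range(side)] for _ in range(side)]
    for _ in range(gens):
        new = [[None] * side for _ in range(side)]
        for r in range(side):
            for c in range(side):
                me = grid[r][c]
                nbrs = [grid[(r - 1) % side][c], grid[(r + 1) % side][c],
                        grid[r][(c - 1) % side], grid[r][(c + 1) % side]]
                mate = max(nbrs, key=f)          # local mate selection
                child = [a if rng.random() < 0.5 else b
                         for a, b in zip(me, mate)]
                # lifetime learning: try one random bit flip, keep if not worse
                i = rng.randrange(n_bits)
                trial = child[:]
                trial[i] ^= 1
                if f(trial) >= f(child):
                    child = trial
                new[r][c] = child if f(child) >= f(me) else me
        grid = new
    return max((ind for row in grid for ind in row), key=f)
```

The restriction of mate selection to a local neighborhood is what slows the spread of any single genotype and so helps avoid premature convergence.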
A tutorial on the cross-entropy method
 Annals of Operations Research
, 2005
Abstract

Cited by 104 (15 self)
The cross-entropy method is a recent versatile Monte Carlo technique. This article provides a brief introduction to the cross-entropy method and discusses how it can be used for rare-event probability estimation and for solving combinatorial, continuous, constrained and noisy optimization problems. A comprehensive list of references on cross-entropy methods and applications is included.
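The core iteration of the cross-entropy method, applied to a 1-D continuous minimisation problem, can be sketched as follows; the parameter choices are illustrative, not taken from the tutorial:

```python
import random
import statistics

def cross_entropy_min(f, mu0=0.0, sigma0=5.0, n=100, elite=10,
                      iters=50, seed=0):
    """Cross-entropy method sketch for 1-D continuous minimisation:
    sample from N(mu, sigma^2), keep the elite samples, refit mu and
    sigma to them, and repeat until the sampler concentrates on the
    optimum."""
    rng = random.Random(seed)
    mu, sigma = mu0, sigma0
    for _ in range(iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        xs.sort(key=f)
        top = xs[:elite]                       # elite fraction
        mu = statistics.fmean(top)
        sigma = statistics.pstdev(top) + 1e-12  # avoid degenerate sigma
    return mu
```

Rare-event estimation uses the same update, but the elite set is defined by a level threshold rather than a fixed quantile of the objective.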
A Solver for the Network Testbed Mapping Problem
 SIGCOMM Computer Communications Review
, 2002
Abstract

Cited by 85 (9 self)
In this paper, we explore this problem, which we call the network testbed mapping problem. We describe the interesting challenges that characterize this problem, and explore its application to other spaces, such as distributed simulation. We present the design, implementation, and evaluation of a solver for this problem, which is currently in use on the Netbed network testbed. It builds on simulated annealing to find very good solutions in a few seconds for our historical workload, and scales gracefully on large well-connected synthetic topologies.
The Cross-Entropy Method for Combinatorial and Continuous Optimization
, 1999
Abstract

Cited by 55 (6 self)
We present a new and fast method, called the cross-entropy method, for finding the optimal solution of combinatorial and continuous non-convex optimization problems with convex bounded domains. To find the optimal solution we solve a sequence of simple auxiliary smooth optimization problems based on Kullback-Leibler cross-entropy, importance sampling, Markov chains, and the Boltzmann distribution. We use importance sampling as an important ingredient for adaptive adjustment of the temperature in the Boltzmann distribution, and use Kullback-Leibler cross-entropy to find the optimal solution. In fact, we use the mode of a unimodal importance sampling distribution, such as the mode of a beta distribution, as an estimate of the optimal solution for continuous optimization, and a Markov-chain approach for combinatorial optimization. In the latter case we show almost-sure convergence of our algorithm to the optimal solution. Supporting numerical results for both continuous and combinatorial optimization problems are given as well. Our empirical studies suggest that the running time of the cross-entropy method is polynomial in the size of the problem.
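For the combinatorial case, the cross-entropy idea can be sketched with an independent-Bernoulli sampling distribution over bit-strings: sample, select an elite set, and move the sampling parameters toward the elite bit frequencies. The smoothing constant `alpha` and all other values are illustrative, not those of the paper:

```python
import random

def ce_binary_max(f, n_bits=20, n=100, elite=10, alpha=0.7,
                  iters=40, seed=0):
    """Cross-entropy sketch for binary combinatorial maximisation:
    maintain independent Bernoulli parameters p[i], sample bit-strings,
    and move p toward the bit frequencies of the elite samples
    (smoothed by alpha)."""
    rng = random.Random(seed)
    p = [0.5] * n_bits
    for _ in range(iters):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(n)]
        pop.sort(key=f, reverse=True)
        top = pop[:elite]
        freq = [sum(x[i] for x in top) / elite for i in range(n_bits)]
        # smoothed update keeps p away from hard 0/1 too early
        p = [alpha * fi + (1 - alpha) * pi for fi, pi in zip(freq, p)]
    return [1 if pi > 0.5 else 0 for pi in p]
```

The smoothing plays a role loosely analogous to the temperature adjustment the abstract mentions: it controls how quickly the sampling distribution freezes.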
Online Prediction and Conversion Strategies
 Machine Learning
, 1994
Abstract

Cited by 49 (18 self)
We study the problem of deterministically predicting boolean values by combining the boolean predictions...