Results 1–10 of 255
Ant Colony System: A cooperative learning approach to the traveling salesman problem
 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION
, 1997
Cited by 624 (49 self)
Abstract:
This paper introduces the ant colony system (ACS), a distributed algorithm that is applied to the traveling salesman problem (TSP). In the ACS, a set of cooperating agents called ants work together to find good solutions to TSPs. Ants cooperate using an indirect form of communication mediated by a pheromone they deposit on the edges of the TSP graph while building solutions. We study the ACS by running experiments to understand its operation. The results show that the ACS outperforms other nature-inspired algorithms such as simulated annealing and evolutionary computation, and we conclude by comparing ACS-3-opt, a version of the ACS augmented with a local search procedure, to some of the best performing algorithms for symmetric and asymmetric TSPs.
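The pheromone-mediated cooperation described in the abstract can be sketched as a toy ant-system loop. This is a minimal illustration, not the exact ACS of the paper (which adds state-transition and local/global pheromone-update rules); the parameter names alpha, beta, rho, and q follow common ACO conventions and are our own choices here.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ant_system_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
                   rho=0.1, q=1.0, seed=0):
    """Toy ant system: ants build tours guided by pheromone (tau) and an
    inverse-distance heuristic; pheromone evaporates and is reinforced
    in proportion to tour quality."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                # edge attractiveness: pheromone^alpha * (1/distance)^beta
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in unvisited]
                j = rng.choices(list(unvisited), weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            tours.append(tour)
        for t in tours:
            l = tour_length(t, dist)
            if l < best_len:
                best_tour, best_len = t, l
        # evaporation, then deposit proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for t in tours:
            l = tour_length(t, dist)
            for i in range(n):
                a, b = t[i], t[(i + 1) % n]
                tau[a][b] += q / l
                tau[b][a] += q / l
    return best_tour, best_len
```

On a tiny symmetric instance the loop quickly concentrates pheromone on the edges of the shortest tour; real ACS adds a local search (e.g. 3-opt, as in ACS-3-opt) on top of this construction phase.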
Polynomial time approximation schemes for Euclidean TSP and other geometric problems
 In Proceedings of the 37th IEEE Symposium on Foundations of Computer Science (FOCS '96)
, 1996
Cited by 320 (3 self)
Abstract. We present a polynomial time approximation scheme for Euclidean TSP in fixed dimensions. For every fixed c > 1 and given any n nodes in R^2, a randomized version of the scheme finds a (1 + 1/c)-approximation to the optimum traveling salesman tour in O(n(log n)^O(c)) time. When the nodes are in R^d, the running time increases to O(n(log n)^((O(sqrt(d)c))^(d-1))). For every fixed c, d the running time is n · poly(log n), that is, nearly linear in n. The algorithm can be derandomized, but this increases the running time by a factor O(n^d). The previous best approximation algorithm for the problem (due to Christofides) achieves a 3/2-approximation in polynomial time. We also give similar approximation schemes for some other NP-hard Euclidean problems: Minimum Steiner Tree, k-TSP, and k-MST. (The running times of the algorithm for k-TSP and k-MST involve an additional multiplicative factor k.) The previous best approximation algorithms for all these problems achieved a constant-factor approximation. We also give efficient approximation schemes for Euclidean Min-Cost Matching, a problem that can be solved exactly in polynomial time. All our algorithms also work, with almost no modification, when distance is measured using any geometric norm (such as l_p for p >= 1 or other Minkowski norms). They also have simple parallel (i.e., NC) implementations.
Local search heuristics for k-median and facility location problems
, 2001
Cited by 234 (10 self)
Abstract:
In this paper we analyze local search heuristics for the k-median and facility location problems. We define the locality gap of a local search procedure as the maximum ratio of a locally optimum solution (obtained using this procedure) to the global optimum. For k-median, we show that local search with swaps has a locality gap of 5. Furthermore, if we permit up to p facilities to be swapped simultaneously, then the locality gap is 3 + 2/p. This is the first analysis of a local search for k-median that provides a bounded performance guarantee with only k medians, and it also improves the previously known approximation guarantee for the problem. For uncapacitated facility location, we show that local search which permits adding, dropping, and swapping a facility has a locality gap of exactly 3; this improves the bound of Korupolu et al. We also consider a capacitated facility location problem where each facility has a capacity and we are allowed to open multiple copies of a facility. For this problem we introduce a new local search operation which opens one or more copies of a facility and drops zero or more facilities, and we prove that local search with this operation has a bounded locality gap.
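The single-swap local search analyzed in the abstract can be sketched as follows. This is a minimal illustration on hashable points with a user-supplied metric, not the authors' implementation; the helper names are ours.

```python
def kmedian_cost(medians, points, d):
    """Sum over all clients of the distance to the nearest open median."""
    return sum(min(d(p, m) for m in medians) for p in points)

def single_swap_local_search(points, k, d):
    """Single-swap local search for k-median: close one open median and
    open one candidate point; keep the swap whenever it strictly lowers
    the total connection cost, until no improving swap exists."""
    medians = set(points[:k])            # arbitrary initial solution
    cost = kmedian_cost(medians, points, d)
    improved = True
    while improved:
        improved = False
        for out in list(medians):
            for inp in points:
                if inp in medians:
                    continue
                cand = (medians - {out}) | {inp}
                c = kmedian_cost(cand, points, d)
                if c < cost:                 # first-improvement rule
                    medians, cost, improved = cand, c, True
                    break
            if improved:
                break
    return medians, cost
```

The paper's result says the cost of any solution this loop terminates at is within a factor 5 of the optimum; the loop itself makes no such guarantee visible, which is exactly why the locality-gap analysis is needed.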
Survey of clustering algorithms
 IEEE TRANSACTIONS ON NEURAL NETWORKS
, 2005
Cited by 231 (3 self)
Abstract:
Data analysis plays an indispensable role in understanding various phenomena. Cluster analysis, primitive exploration with little or no prior knowledge, consists of research developed across a wide variety of communities. This diversity, on one hand, equips us with many tools; on the other hand, the profusion of options causes confusion. We survey clustering algorithms for data sets appearing in statistics, computer science, and machine learning, and illustrate their applications in some benchmark data sets, the traveling salesman problem, and bioinformatics, a new field attracting intensive efforts. Several tightly related topics, such as proximity measures and cluster validation, are also discussed.
On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms
, 1989
Cited by 186 (10 self)
Abstract:
Short abstract, isn't it? P.A.C.S. numbers 05.20, 02.50, 87.10. 1 Introduction: Large Numbers. "...the optimal tour displayed (see Figure 6) is the possible unique tour having one arc fixed from among the 10^655 tours that are possible among 318 points and have one arc fixed. Assuming that one could possibly enumerate 10^9 tours per second on a computer, it would thus take roughly 10^639 years of computing to establish the optimality of this tour by exhaustive enumeration." This quote shows the real difficulty of a combinatorial optimization problem. The huge number of configurations is the primary difficulty when dealing with one of these problems. The quote belongs to M. W. Padberg and M. Grötschel, Chap. 9, "Polyhedral computations", from the book The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization [124]. It is interesting to compare the number of configurations of real-world problems in combinatorial optimization with those large numbers arising in Cosmol...
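The arithmetic behind figures of this magnitude can be checked with a log-gamma evaluation of the factorial. A quick sketch, assuming the quantity counted is the (n-2)! undirected Hamiltonian tours on n = 318 labeled points that contain one fixed arc:

```python
import math

def log10_tours_with_fixed_arc(n):
    """log10 of (n - 2)!, the number of undirected Hamiltonian tours on
    n labeled points that pass through one fixed arc.
    math.lgamma(m + 1) computes ln(m!)."""
    return math.lgamma(n - 1) / math.log(10)

log_count = log10_tours_with_fixed_arc(318)               # roughly 654
seconds_per_year = 365.25 * 24 * 3600
# years of exhaustive enumeration at 10^9 tours per second
log_years = log_count - 9 - math.log10(seconds_per_year)  # roughly 638
```

Both values land within a power of ten of the quoted 10^655 tours and 10^639 years, which is as close as a back-of-the-envelope count of this kind can be expected to agree.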
Iterated local search
 Handbook of Metaheuristics, volume 57 of International Series in Operations Research and Management Science
, 2002
Cited by 121 (15 self)
Abstract:
Iterated Local Search has many of the desirable features of a metaheuristic: it is simple, easy to implement, robust, and highly effective. The essential idea of Iterated Local Search lies in focusing the search not on the full space of solutions but on a smaller subspace defined by the solutions that are locally optimal for a given optimization engine. The success of Iterated Local Search lies in the biased sampling of this set of local optima. How effective this approach turns out to be depends mainly on the choice of the local search, the perturbations, and the acceptance criterion. So far, in spite of its conceptual simplicity, it has led to a number of state-of-the-art results without the use of much problem-specific knowledge. But with further work so that the different modules are well adapted to the problem at hand, Iterated Local Search can often become a competitive or even state-of-the-art algorithm. The purpose of this review is both to give a detailed description of this metaheuristic and to show where it stands in terms of performance. O.M. acknowledges support from the Institut Universitaire de France. This work was partially supported by the “Metaheuristics Network”, a Research Training Network funded by the Improving Human Potential programme of the CEC, grant HPRN-CT-1999-00106. The information provided is the sole responsibility of the authors and does not reflect the Community’s opinion. The Community is not responsible for any use that might be made of data appearing in this publication.
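The three modules named in the abstract (local search, perturbation, acceptance criterion) fit into a very small generic loop. The skeleton below is our own minimal sketch, demonstrated on a toy one-dimensional objective rather than a real combinatorial instance:

```python
import random

def iterated_local_search(cost, init, local_search, perturb,
                          n_iters=100, seed=0):
    """Generic ILS: walk over the set of local optima by perturbing the
    incumbent, re-optimizing with the local search, and accepting
    non-worsening candidates."""
    rng = random.Random(seed)
    incumbent = local_search(init)
    best, best_cost = incumbent, cost(incumbent)
    for _ in range(n_iters):
        candidate = local_search(perturb(incumbent, rng))
        if cost(candidate) <= cost(incumbent):   # acceptance criterion
            incumbent = candidate
        if cost(incumbent) < best_cost:
            best, best_cost = incumbent, cost(incumbent)
    return best, best_cost

# Toy instance: minimize f over the integers with a +/-1 hill climber.
f = lambda x: abs(x - 42)

def hill_climb(x):
    while f(x + 1) < f(x):
        x += 1
    while f(x - 1) < f(x):
        x -= 1
    return x

perturb = lambda x, rng: x + rng.randint(-5, 5)
```

The point of the skeleton is that swapping in a different `local_search`, `perturb`, or acceptance rule changes the biased sampling of local optima without touching the loop itself.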
Optimal Composition of Real-Time Systems
 ARTIFICIAL INTELLIGENCE
, 1996
Cited by 113 (21 self)
Abstract:
Real-time systems are designed for environments in which the utility of actions is strongly time-dependent. Recent work by Dean, Horvitz and others has shown that anytime algorithms are a useful tool for real-time system design, since they allow computation time to be traded for decision quality. In order to construct complex systems, however, we need to be able to compose larger systems from smaller, reusable anytime modules. This paper addresses two basic problems associated with composition: how to ensure the interruptibility of the composed system
Very Large-Scale Neighborhood Search for the Quadratic Assignment Problem
 DISCRETE APPLIED MATHEMATICS
, 2002
Cited by 108 (11 self)
Abstract:
The Quadratic Assignment Problem (QAP) consists of assigning n facilities to n locations so as to minimize the total weighted cost of interactions between facilities. The QAP arises in many diverse settings, is known to be NP-hard, and can be solved to optimality only for fairly small instances (typically, n < 25). Neighborhood search algorithms are the most popular heuristic algorithms for solving larger instances of the QAP. The most extensively used neighborhood structure for the QAP is the 2-exchange neighborhood. This neighborhood is obtained by swapping the locations of two facilities and thus has size O(n²). Previous efforts to explore larger neighborhoods (such as 3-exchange or 4-exchange neighborhoods) were not very successful, as it took too long to evaluate the larger set of neighbors. In this paper, we propose very large-scale neighborhood (VLSN) search algorithms where the size of the neighborhood is very large, together with a novel search procedure to heuristically enumerate good neighbors. Our search procedure relies on the concept of an improvement graph, which allows us to evaluate neighbors much faster than existing methods. We present extensive computational results of our algorithms on standard benchmark instances. These investigations reveal that very large-scale neighborhood search algorithms give consistently better solutions than the popular 2-exchange neighborhood algorithms, considering both solution time and solution accuracy.
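The 2-exchange neighborhood mentioned in the abstract can be made concrete with a best-improvement descent. For clarity this sketch re-evaluates the full O(n²) objective for each of the O(n²) neighbors (so one pass costs O(n⁴)); practical codes evaluate each swap incrementally in O(n), and the improvement-graph machinery of the paper goes further still. The helper names are ours.

```python
def qap_cost(perm, flow, dist):
    """QAP objective for assigning facility i to location perm[i]:
    sum over all facility pairs of flow * distance."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def two_exchange_descent(perm, flow, dist):
    """Best-improvement 2-exchange descent: repeatedly apply the swap of
    two facilities' locations that most reduces the cost, until no swap
    improves the solution."""
    perm = list(perm)
    cost = qap_cost(perm, flow, dist)
    n = len(perm)
    improved = True
    while improved:
        improved = False
        best_delta, best_pair = 0, None
        for a in range(n):
            for b in range(a + 1, n):
                perm[a], perm[b] = perm[b], perm[a]   # try the swap
                delta = qap_cost(perm, flow, dist) - cost
                perm[a], perm[b] = perm[b], perm[a]   # undo it
                if delta < best_delta:
                    best_delta, best_pair = delta, (a, b)
        if best_pair is not None:
            a, b = best_pair
            perm[a], perm[b] = perm[b], perm[a]
            cost += best_delta
            improved = True
    return perm, cost
```

The slowness of evaluating neighbors this naively is precisely what made 3- and 4-exchange neighborhoods impractical in earlier work, and what VLSN search is designed to overcome.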
Variable neighborhood search: Principles and applications
, 2001
Cited by 94 (9 self)
Abstract:
Systematic change of neighborhood within a possibly randomized local search algorithm yields a simple and effective metaheuristic for combinatorial and global optimization, called variable neighborhood search (VNS). We present a basic scheme for this purpose, which can easily be implemented using any local search algorithm as a subroutine. Its effectiveness is illustrated by solving several classical combinatorial and global optimization problems. Moreover, several extensions are proposed for solving large problem instances: using VNS within the successive approximation method yields a two-level VNS, called variable neighborhood decomposition search (VNDS); modifying the basic scheme to easily explore valleys far from the incumbent solution yields an efficient skewed VNS (SVNS) heuristic. Finally, we show how to stabilize column generation algorithms with the help of VNS and discuss various ways to use VNS in graph theory, e.g., to suggest, disprove or give hints on how to prove conjectures, an area where metaheuristics do not appear
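The basic VNS scheme can be sketched as follows. The shake and local-search routines are user-supplied placeholders, and the toy objective below (with local minima every 10 integers and a global minimum at 45) is only our own illustration of why systematically widening the neighborhood escapes local optima:

```python
import random

def basic_vns(cost, x0, shake, local_search, k_max=3, n_iters=50, seed=0):
    """Basic VNS: shake the incumbent in increasingly distant
    neighborhoods N_1..N_k_max, re-optimize locally, and recenter the
    search (restarting at N_1) whenever an improvement is found."""
    rng = random.Random(seed)
    x = local_search(x0)
    for _ in range(n_iters):
        k = 1
        while k <= k_max:
            x2 = local_search(shake(x, k, rng))  # random point in N_k(x)
            if cost(x2) < cost(x):
                x, k = x2, 1                     # move and restart at N_1
            else:
                k += 1                           # widen the neighborhood
    return x

# Toy objective: local minima at ..., 35, 45, 55, ...; global minimum 45.
f = lambda x: abs(x % 10 - 5) + abs(x - 45) / 100

def hill_climb(x):
    while f(x + 1) < f(x):
        x += 1
    while f(x - 1) < f(x):
        x -= 1
    return x

# N_k(x): jump a distance of 10k in a random direction.
shake = lambda x, k, rng: x + rng.choice([-1, 1]) * 10 * k
```

A plain hill climber started at x = 5 stays trapped in that local minimum; the shakes at increasing k let the search hop between basins until it settles at the global minimum.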