Results 11 - 20 of 303
Very Large-Scale Neighborhood Search for the Quadratic Assignment Problem
 DISCRETE APPLIED MATHEMATICS
, 2002
"... The Quadratic Assignment Problem (QAP) consists of assigning n facilities to n locations so as to minimize the total weighted cost of interactions between facilities. The QAP arises in many diverse settings, is known to be NPhard, and can be solved to optimality only for fairly small size instances ..."
Abstract

Cited by 143 (13 self)
The Quadratic Assignment Problem (QAP) consists of assigning n facilities to n locations so as to minimize the total weighted cost of interactions between facilities. The QAP arises in many diverse settings, is known to be NP-hard, and can be solved to optimality only for fairly small instances (typically, n < 25). Neighborhood search algorithms are the most popular heuristic algorithms for solving larger instances of the QAP. The most extensively used neighborhood structure for the QAP is the 2-exchange neighborhood. This neighborhood is obtained by swapping the locations of two facilities and thus has size O(n²). Previous efforts to explore larger neighborhoods (such as 3-exchange or 4-exchange neighborhoods) were not very successful, as it took too long to evaluate the larger set of neighbors. In this paper, we propose very large-scale neighborhood (VLSN) search algorithms where the size of the neighborhood is very large, together with a novel search procedure to heuristically enumerate good neighbors. Our search procedure relies on the concept of an improvement graph, which allows us to evaluate neighbors much faster than existing methods. We present extensive computational results of our algorithms on standard benchmark instances. These investigations reveal that very large-scale neighborhood search algorithms consistently give better solutions than the popular 2-exchange neighborhood algorithms, in terms of both solution time and solution accuracy.
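The 2-exchange neighborhood described in this abstract can be sketched as follows; this is a minimal illustration, not the paper's VLSN method. The matrix names f (flows), d (distances) and the O(n) delta formula are standard QAP conventions, not taken from the paper:

```python
def qap_cost(f, d, p):
    """Total cost: sum over facility pairs of flow(i, j) * distance(p[i], p[j])."""
    n = len(p)
    return sum(f[i][j] * d[p[i]][p[j]] for i in range(n) for j in range(n))

def swap_delta(f, d, p, r, s):
    """Cost change from swapping the locations of facilities r and s,
    computed in O(n) instead of re-evaluating the full O(n^2) objective."""
    pr, ps = p[r], p[s]
    delta = ((f[r][r] - f[s][s]) * (d[ps][ps] - d[pr][pr])
             + (f[r][s] - f[s][r]) * (d[ps][pr] - d[pr][ps]))
    for k in range(len(p)):
        if k != r and k != s:
            pk = p[k]
            delta += ((f[k][r] - f[k][s]) * (d[pk][ps] - d[pk][pr])
                      + (f[r][k] - f[s][k]) * (d[ps][pk] - d[pr][pk]))
    return delta

def two_exchange_local_search(f, d, p):
    """First-improvement descent over the O(n^2) 2-exchange neighborhood."""
    n = len(p)
    improved = True
    while improved:
        improved = False
        for r in range(n - 1):
            for s in range(r + 1, n):
                if swap_delta(f, d, p, r, s) < 0:
                    p[r], p[s] = p[s], p[r]  # accept the improving swap
                    improved = True
    return p
```

Even with the O(n) delta, one full pass over the neighborhood costs O(n³), which is exactly why the paper's improvement-graph machinery matters for neighborhoods larger than 2-exchange.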
MAX-MIN Ant System
, 1999
"... Ant System, the first Ant Colony Optimization algorithm, showed to be a viable method for attacking hard combinatorial optimization problems. Yet, its performance, when compared to more finetuned algorithms, was rather poor for large instances of traditional benchmark problems like the Traveling Sa ..."
Abstract

Cited by 108 (3 self)
Ant System, the first Ant Colony Optimization algorithm, showed itself to be a viable method for attacking hard combinatorial optimization problems. Yet its performance, when compared to more fine-tuned algorithms, was rather poor for large instances of traditional benchmark problems like the Traveling Salesman Problem. To show that Ant Colony Optimization algorithms could be good alternatives to existing algorithms for hard combinatorial optimization problems, recent research in this area has mainly focused on the development of algorithmic variants which achieve better performance than AS. In this article, we present MAX-MIN Ant System, an Ant Colony Optimization algorithm derived from Ant System. MAX-MIN Ant System differs from Ant System in several important aspects, whose usefulness we demonstrate by means of an experimental study. Additionally, we relate one of the characteristics specific to MMAS, that of using a greedier search than Ant System, to results from the search space analysis of the combinatorial optimization problems attacked in this paper. Our computational results on the Traveling Salesman Problem and the Quadratic Assignment Problem show that MAX-MIN Ant System is currently among the best performing algorithms for these problems.
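A minimal sketch of the trail-limit idea that gives MAX-MIN Ant System its name, assuming a symmetric TSP pheromone matrix and the common best-tour-only deposit rule; the parameter names rho, tau_min, tau_max are illustrative, not taken from this abstract:

```python
def mmas_update(tau, best_tour, best_len, rho, tau_min, tau_max):
    """One MAX-MIN pheromone update: evaporate every trail, let only the
    best tour deposit pheromone, then clamp all trails into [tau_min, tau_max]."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= 1.0 - rho           # evaporation on every edge
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a][b] += 1.0 / best_len          # deposit on the best tour only
        tau[b][a] += 1.0 / best_len          # symmetric instance assumed
    for i in range(n):
        for j in range(n):                   # the MAX-MIN trail limits
            tau[i][j] = min(tau_max, max(tau_min, tau[i][j]))
    return tau
```

Clamping keeps the ratio between the strongest and weakest trail bounded, which prevents the premature convergence that hurt the original Ant System.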
A Theoretician's Guide to the Experimental Analysis of Algorithms
, 1996
"... This paper presents an informal discussion of issues that arise when one attempts to analyze algorithms experimentally. It is based on lessons learned by the author over the course of more than a decade of experimentation, survey paper writing, refereeing, and lively discussions with other experimen ..."
Abstract

Cited by 101 (0 self)
This paper presents an informal discussion of issues that arise when one attempts to analyze algorithms experimentally. It is based on lessons learned by the author over the course of more than a decade of experimentation, survey paper writing, refereeing, and lively discussions with other experimentalists. Although written from the perspective of a theoretical computer scientist, it is intended to be of use to researchers from all fields who want to study algorithms experimentally. It has two goals: first, to provide a useful guide to new experimentalists about how such work can best be performed and written up, and second, to challenge current researchers to think about whether their own work might be improved from a scientific point of view. With the latter purpose in mind, the author hopes that at least a few of his recommendations will be considered controversial.
The ant colony optimization metaheuristic: Algorithms, applications, and advances
 Handbook of Metaheuristics
, 2002
"... ..."
(Show Context)
Solving Symmetric and Asymmetric TSPs by Ant Colonies
, 1996
"... In this paper we present ACS, a distributed algorithm for the solution of combinatorial optimization problems which was inspired by the observation of real colonies of ants. We apply ACS to both symmetric and asymmetric traveling salesman problems. Results show that ACS is able to find good sol ..."
Abstract

Cited by 93 (20 self)
In this paper we present ACS, a distributed algorithm for the solution of combinatorial optimization problems which was inspired by the observation of real colonies of ants. We apply ACS to both symmetric and asymmetric traveling salesman problems. Results show that ACS is able to find good solutions to these problems.

I. Introduction
In this paper we present Ant Colony System (ACS), a novel distributed approach to combinatorial optimization based on the observation of real ant colonies' behavior. ACS finds its ground in one of the authors' previous work on the so-called Ant System (AS) [1],[2],[5],[7] and in Ant-Q [8], an extension of AS with Q-learning [12], a reinforcement learning technique. In particular, ACS is a revisited version of Ant-Q where a different way to update the experience accumulated by the artificial ants has been introduced [6]. All the mentioned systems belong to the Artificial Ant Colonies (AAC) family of algorithms that has been applied to various combinat...
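ACS is commonly described as choosing the next city with a pseudo-random proportional rule; the following is a hedged sketch under that assumption. The names q0, beta, tau (pheromone) and eta (heuristic desirability, typically inverse distance) are conventional ACO notation, not lifted from this abstract:

```python
import random

def acs_next_city(current, unvisited, tau, eta, q0, beta):
    """ACS pseudo-random proportional rule: with probability q0 move greedily
    to the best edge (exploitation); otherwise sample a city with probability
    proportional to tau * eta**beta (biased exploration)."""
    scores = {j: tau[current][j] * eta[current][j] ** beta for j in unvisited}
    if random.random() < q0:
        return max(scores, key=scores.get)   # exploitation
    r = random.random() * sum(scores.values())
    acc = 0.0
    for j, s in scores.items():              # roulette-wheel sampling
        acc += s
        if acc >= r:
            return j
    return j  # guard against floating-point shortfall
```

With q0 near 1 the search is greedy almost everywhere, which matches the abstract's framing of ACS as a revisited, more exploitative variant of Ant-Q.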
Nearly Linear Time Approximation Schemes for Euclidean TSP and other Geometric Problems
, 1997
"... We present a randomized polynomial time approximation scheme for Euclidean TSP in ! 2 that is substantially more efficient than our earlier scheme in [2] (and the scheme of Mitchell [21]). For any fixed c ? 1 and any set of n nodes in the plane, the new scheme finds a (1+ 1 c )approximation to ..."
Abstract

Cited by 92 (3 self)
We present a randomized polynomial time approximation scheme for Euclidean TSP in ℝ² that is substantially more efficient than our earlier scheme in [2] (and the scheme of Mitchell [21]). For any fixed c > 1 and any set of n nodes in the plane, the new scheme finds a (1 + 1/c)-approximation to the optimum traveling salesman tour in O(n (log n)^O(c)) time. (Our earlier scheme ran in n^O(c) time.) For points in ℝ^d the algorithm runs in O(n (log n)^((O(√d c))^(d-1))) time. This time is polynomial (actually nearly linear) for every fixed c, d. Designing such a polynomial-time algorithm was an open problem (our earlier algorithm in [2] ran in superpolynomial time for d ≥ 3). The algorithm generalizes to the same set of Euclidean problems handled by the previous algorithm, including Steiner Tree, k-TSP, k-MST, etc., although for k-TSP and k-MST the running time gets multiplied by k. We also use our ideas to design nearly-linear time approximation schemes for Euclidean vers...
Genetic Local Search for the TSP: New Results
 In Proceedings of the 1997 IEEE International Conference on Evolutionary Computation
, 1997
"... The combination of local search heuristics and genetic algorithms has been shown to be an effective approach for finding nearoptimum solutions to the traveling salesman problem. In this paper, previously proposed genetic local search algorithms for the symmetric and asymmetric traveling salesman pr ..."
Abstract

Cited by 83 (13 self)
The combination of local search heuristics and genetic algorithms has been shown to be an effective approach for finding near-optimum solutions to the traveling salesman problem. In this paper, previously proposed genetic local search algorithms for the symmetric and asymmetric traveling salesman problem are revisited and potential improvements are identified. Since local search is the central component in which most of the computation time is spent, improving the efficiency of the local search operators is crucial for improving the overall performance of the algorithms. The modifications of the algorithms are described and the new results obtained are presented. The results indicate that the improved algorithms are able to arrive at better solutions in significantly less time.

I. Introduction
Consider a salesman who wants to start from his home city, visit each of a set of n cities exactly once, and then return home. Since the salesman is interested in finding the shortest possible r...
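The local search component discussed above is typically a 2-opt style tour improvement; here is a minimal first-improvement sketch (the function names and the distance-matrix representation are illustrative, not taken from the paper, whose operators are more heavily optimized):

```python
def tour_length(tour, dist):
    """Length of a closed tour under a symmetric distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt(tour, dist):
    """First-improvement 2-opt: replace edges (a,b) and (c,e) by (a,c) and
    (b,e) whenever that shortens the tour, by reversing the segment b..c."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue                 # same pair of edges, no gain
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % n]
                if dist[a][c] + dist[b][e] < dist[a][b] + dist[c][e]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

Note that each candidate move is evaluated in O(1) from the four affected edges; the segment reversal is what dominates, which is why efficient local search implementations are the focus of the paper.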
Parameterized Complexity: A Framework for Systematically Confronting Computational Intractability
 DIMACS Series in Discrete Mathematics and Theoretical Computer Science
, 1997
"... In this paper we give a programmatic overview of parameterized computational complexity in the broad context of the problem of coping with computational intractability. We give some examples of how fixedparameter tractability techniques can deliver practical algorithms in two different ways: (1) by ..."
Abstract

Cited by 77 (16 self)
In this paper we give a programmatic overview of parameterized computational complexity in the broad context of the problem of coping with computational intractability. We give some examples of how fixed-parameter tractability techniques can deliver practical algorithms in two different ways: (1) by providing useful exact algorithms for small parameter ranges, and (2) by providing guidance in the design of heuristic algorithms. In particular, we describe an improved FPT kernelization algorithm for Vertex Cover, a practical FPT algorithm for the Maximum Agreement Subtree (MAST) problem parameterized by the number of species to be deleted, and new general heuristics for these problems based on FPT techniques. In the course of making this overview, we also investigate some structural and hardness issues. We prove that an important naturally parameterized problem in artificial intelligence, STRIPS Planning (where the parameter is the size of the plan), is complete for W[1]. As a corollary, this implies that k-Step Reachability for Petri Nets is complete for W[1]. We describe how the concept of treewidth can be applied to STRIPS Planning and other problems of logic to obtain FPT results. We describe a surprising structural result concerning the top end of the parameterized complexity hierarchy: the naturally parameterized Graph k-Coloring problem cannot be resolved with respect to XP either by showing membership in XP, or by showing hardness for XP, without settling the P = NP question one way or the other.
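The Vertex Cover kernelization mentioned above builds on the classic Buss observation: any vertex of degree greater than k must be in every size-k cover, and after removing such vertices at most k² edges may remain. A sketch of that baseline reduction (the paper's improved kernel is more refined; this only illustrates the idea):

```python
def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover parameterized by k: any vertex of
    degree > k must belong to every cover of size <= k, so take it and lower
    k.  If more than k*k edges survive, no size-k cover exists (each
    remaining vertex covers <= k edges).  Returns (forced_vertices,
    kernel_edges, k_left), or None when the instance is a provable NO."""
    forced = set()
    edges = list(edges)
    while True:
        if k < 0:
            return None                      # forced more vertices than budget
        deg = {}
        for u, v in edges:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        high = next((v for v, d in deg.items() if d > k), None)
        if high is None:
            break
        forced.add(high)                     # must be in any size-k cover
        k -= 1
        edges = [(u, v) for u, v in edges if high not in (u, v)]
    if len(edges) > k * k:
        return None                          # kernel too large: NO instance
    return forced, edges, k
```

The surviving kernel has at most k² edges regardless of the input size, so any exact search run on it depends only on the parameter, which is the "useful exact algorithms for small parameter ranges" point made above.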
Ant Colony Optimization: A New Meta-Heuristic
 PROCEEDINGS OF THE CONGRESS ON EVOLUTIONARY COMPUTATION
, 1999
"... Recently, a number of algorithms inspired by the foraging behavior of ant colonies have been applied to the solution of difficult discrete optimization problems. In this paper we put these algorithms in a common framework by defining the Ant Colony Optimization (ACO) metaheuristic. A couple of pa ..."
Abstract

Cited by 72 (2 self)
Recently, a number of algorithms inspired by the foraging behavior of ant colonies have been applied to the solution of difficult discrete optimization problems. In this paper we put these algorithms in a common framework by defining the Ant Colony Optimization (ACO) metaheuristic. A couple of paradigmatic examples of applications of this novel metaheuristic are given, as well as a brief overview of existing applications.
A gentle introduction to memetic algorithms
 Handbook of Metaheuristics
, 2003
"... ..."
(Show Context)