Results 1–10 of 168
Hyper-Heuristics: An Emerging Direction In Modern Search Technology
, 2003
Cited by 82 (34 self)
Abstract
This chapter introduces and overviews an emerging methodology in search and optimisation. One of the key aims of these new approaches, which have been termed hyper-heuristics, is to raise the level of generality at which optimisation systems can operate. An objective is that hyper-heuristics will lead to more general systems that are able to handle a wide range of problem domains, unlike current metaheuristic technology, which tends to be customised to a particular problem or a narrow class of problems. Hyper-heuristics are broadly concerned with intelligently choosing the right heuristic or algorithm in a given situation. Of course, a hyper-heuristic can be (and often is) a (meta)heuristic, and it can operate on (meta)heuristics. In a certain sense, a hyper-heuristic works at a higher level than the typical application of metaheuristics to optimisation problems; i.e., a hyper-heuristic could be thought of as a (meta)heuristic which operates on lower-level (meta)heuristics. In this chapter we introduce the idea and give a brief history of this emerging area. In addition, we review some of the latest work published in the field.
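The selection idea described in this abstract can be sketched very compactly: a hyper-heuristic repeatedly picks one low-level heuristic to apply, and adjusts its preference based on observed improvement. The following minimal sketch is illustrative only; the function names, the score-based selection rule, and the reward/penalty constants are assumptions, not taken from the chapter.

```python
import random

def selection_hyperheuristic(initial, low_level_heuristics, cost, iterations=1000):
    """A toy selection hyper-heuristic: pick a low-level heuristic with
    probability proportional to its score, apply it, and reward or penalise
    it depending on whether the move improved the solution."""
    scores = [1.0] * len(low_level_heuristics)
    current, best = initial, initial
    for _ in range(iterations):
        # Roulette-wheel choice over the low-level heuristics.
        i = random.choices(range(len(low_level_heuristics)), weights=scores)[0]
        candidate = low_level_heuristics[i](current)
        if cost(candidate) <= cost(current):
            current = candidate
            scores[i] += 1.0                       # reward heuristics that help
        else:
            scores[i] = max(0.1, scores[i] * 0.9)  # mildly penalise the rest
        if cost(current) < cost(best):
            best = current
    return best
```

On a toy problem such as minimising `abs(x - 7)` over the integers with the two heuristics `x + 1` and `x - 1`, the loop quickly learns to favour the productive move; real hyper-heuristics differ mainly in how this credit assignment is done.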
Hedging uncertainty: Approximation algorithms for stochastic optimization problems
 In Proceedings of the 10th International Conference on Integer Programming and Combinatorial Optimization
, 2004
Cited by 69 (10 self)
Abstract
We initiate the design of approximation algorithms for stochastic combinatorial optimization problems; we formulate the problems in the framework of two-stage stochastic optimization, and provide nearly tight approximation algorithms. Our problems range from the simple (shortest path, vertex cover, bin packing) to the complex (facility location, set cover), and contain representatives with different approximation ratios. The approximation ratio of the stochastic variant of a typical problem is of the same order of magnitude as its deterministic counterpart. Furthermore, common techniques for designing approximation algorithms, such as LP rounding, the primal-dual method, and the greedy algorithm, can be carefully adapted to obtain these results.
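To make the two-stage framing concrete, here is a brute-force sketch of two-stage stochastic set cover on tiny instances: stage-1 sets are bought at their base cost, then a demand scenario is revealed and any missing coverage is bought at inflated stage-2 cost; the goal is minimum expected total cost. This exhaustive toy is only for intuition about the model; the paper's contribution is approximation algorithms, not this enumeration.

```python
from itertools import chain, combinations

def min_cover_cost(need, sets, costs):
    """Cheapest subset of `sets` covering `need` (exhaustive; tiny inputs)."""
    best = float('inf')
    n = len(sets)
    for r in range(n + 1):
        for combo in combinations(range(n), r):
            if need <= set(chain.from_iterable(sets[i] for i in combo)):
                best = min(best, sum(costs[i] for i in combo))
    return best

def two_stage_set_cover(sets, cost1, cost2, scenarios):
    """Minimise stage-1 cost plus expected stage-2 completion cost.
    scenarios: list of (probability, demand_set) pairs."""
    best = float('inf')
    n = len(sets)
    for r in range(n + 1):
        for stage1 in combinations(range(n), r):
            covered = set(chain.from_iterable(sets[i] for i in stage1))
            c = sum(cost1[i] for i in stage1)
            expected = sum(p * min_cover_cost(d - covered, sets, cost2)
                           for p, d in scenarios)
            best = min(best, c + expected)
    return best
```

With stage-2 costs three times the stage-1 costs, it can already be cheaper to buy everything up front than to hedge, which is exactly the trade-off the two-stage model captures.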
Learning Evaluation Functions for Global Optimization and Boolean Satisfiability
 In Proc. of 15th National Conf. on Artificial Intelligence (AAAI)
, 1998
Cited by 59 (3 self)
Abstract
This paper describes STAGE, a learning approach to automatically improving search performance on optimization problems. STAGE learns an evaluation function which predicts the outcome of a local search algorithm, such as hill-climbing or WALKSAT, as a function of state features along its search trajectories. The learned evaluation function is used to bias future search trajectories toward better optima. We present positive results on six large-scale optimization domains.
Learning Evaluation Functions to Improve Optimization by Local Search
 Journal of Machine Learning Research
, 2000
Cited by 56 (0 self)
Abstract
This paper describes algorithms that learn to improve search performance on large-scale optimization tasks. The main algorithm, Stage, works by learning an evaluation function that predicts the outcome of a local search algorithm, such as hill-climbing or Walksat, from features of states visited during search. The learned evaluation function is then used to bias future search trajectories toward better optima on the same problem. Another algorithm, X-Stage, transfers previously learned evaluation functions to new, similar optimization problems. Empirical results are provided on seven large-scale optimization domains: bin packing, channel routing, Bayesian network structure-finding, radiotherapy treatment planning, cartogram design, Boolean satisfiability, and Boggle board setup.
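The two ingredients of the Stage idea, a local search that records its trajectory and a regression that predicts the search's final outcome from state features, can be sketched in a few lines. This is a single-feature toy version under assumed interfaces, not the paper's implementation.

```python
def hill_climb(x, objective, neighbors, max_steps=100):
    """Greedy local search: move to the best neighbour while it improves.
    Returns the visited trajectory and the final (local-optimum) value."""
    traj = [x]
    for _ in range(max_steps):
        best = min(neighbors(x), key=objective)
        if objective(best) >= objective(x):
            break
        x = best
        traj.append(x)
    return traj, objective(x)

def fit_linear(pairs):
    """Ordinary least squares for one feature: predict outcome v from f.
    Stage fits such a predictor over features of states on past trajectories."""
    n = len(pairs)
    sf = sum(f for f, _ in pairs)
    sv = sum(v for _, v in pairs)
    sff = sum(f * f for f, _ in pairs)
    sfv = sum(f * v for f, v in pairs)
    denom = n * sff - sf * sf
    a = (n * sfv - sf * sv) / denom if denom else 0.0
    b = (sv - a * sf) / n
    return lambda f: a * f + b

# Stage-style use (sketch): run hill_climb from several starts, record
# (feature(state), final_value) for every visited state, fit the predictor
# on the records, then search on the *predicted* value to pick new starts.
```

The key point mirrored here is that the learned function is trained on trajectory data and then substituted for the true objective when choosing where to search next.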
The Three-Dimensional Bin Packing Problem
 Operations Research
, 2000
Cited by 34 (3 self)
Abstract
The problem addressed in this paper is that of orthogonally packing a given set of rectangular-shaped boxes into the minimum number of rectangular bins. The problem is strongly NP-hard and extremely difficult to solve in practice. Lower bounds are discussed, and it is proved that the asymptotic worst-case performance of the continuous lower bound is 1/8. An exact algorithm for filling a single bin is developed, leading to the definition of an exact branch-and-bound algorithm for the three-dimensional bin packing problem, which also incorporates original approximation algorithms. Extensive computational results, involving instances with up to 60 boxes, are presented: it is shown that many instances can be solved to optimality within a reasonable time limit.
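The continuous lower bound mentioned in the abstract is simply the total box volume divided by the bin volume, rounded up. A small sketch (names are illustrative) also shows why its worst-case ratio is 1/8: boxes slightly larger than half the bin in every dimension pack only one per bin, yet each fills barely more than an eighth of the bin's volume.

```python
import math

def continuous_lower_bound(boxes, bin_dims):
    """Continuous (volume-based) lower bound for 3D bin packing:
    ceil(total box volume / bin volume). Boxes and bins are (w, h, d)."""
    W, H, D = bin_dims
    volume = sum(w * h * d for (w, h, d) in boxes)
    return math.ceil(volume / (W * H * D))
```

For eight 6x6x6 boxes and 10x10x10 bins, the true optimum is 8 bins (no two such boxes fit together), but the volume bound reports only 2, illustrating the 1/8 asymptotic gap proved in the paper.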
Polynomial Time Approximation Schemes for Class-Constrained Packing Problems
 Proc. of Workshop on Approximation Algorithms
, 1999
Cited by 27 (6 self)
Abstract
We consider variants of the classic bin packing and multiple knapsack problems, in which sets of items of different classes (colors) need to be placed in bins; the items may have different sizes and values. Each bin has a limited capacity, and a bound on the number of distinct classes of items it can hold. In the class-constrained multiple knapsack (CCMK) problem, our goal is to maximize the total value of packed items, whereas in class-constrained bin packing (CCBP) we seek to minimize the number of (identical) bins needed for packing all the items. We give a polynomial time approximation scheme (PTAS) for CCMK and a dual PTAS for CCBP. We also show that the 0-1 class-constrained knapsack admits a fully polynomial time approximation scheme, even when the number of distinct colors of items depends on the input size. Finally, we introduce the generalized class-constrained packing problem (GCCP), where each item may have more than one color. We show that GCCP is APX-hard.
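A simple way to see the CCBP constraints in action is a first-fit baseline that respects both the capacity and the class bound; this is only an illustrative heuristic (names and representation are assumptions), not the dual PTAS the paper constructs.

```python
def class_constrained_first_fit(items, capacity, max_classes):
    """First-fit for class-constrained bin packing: items are (size, color);
    each bin may hold total size <= capacity and at most max_classes
    distinct colors. Returns the list of bins (each a list of items)."""
    bins = []  # each entry: [used_capacity, colors_in_bin, contents]
    for size, color in items:
        for b in bins:
            fits = b[0] + size <= capacity
            class_ok = color in b[1] or len(b[1]) < max_classes
            if fits and class_ok:
                b[0] += size
                b[1].add(color)
                b[2].append((size, color))
                break
        else:
            bins.append([size, {color}, [(size, color)]])
    return [b[2] for b in bins]
```

Tightening `max_classes` can force extra bins even when capacity alone would suffice, which is exactly what distinguishes CCBP from ordinary bin packing.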
Learning a Procedure That Can Solve Hard Bin-Packing Problems: a new GA-based approach to hyper-heuristics
, 2003
Cited by 27 (3 self)
Abstract
The idea underlying hyper-heuristics is to discover some combination of familiar, straightforward heuristics that performs very well across a whole range of problems. To be worthwhile, such a combination should outperform all of the constituent heuristics. In this paper we describe a novel messy-GA-based approach that learns such a heuristic combination for solving one-dimensional bin-packing problems. When applied to a large set of benchmark problems, the learned procedure finds an optimal solution for nearly 80% of them, and for the rest produces an answer very close to optimal. When compared with its own constituent heuristics, it ranks first in 98% of the problems.
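The kind of constituent heuristics such an approach combines can be sketched directly: classic first-fit and best-fit rules, plus a per-item choice function standing in for the learned combination. The choice rule below is a placeholder assumption; the paper's learned rule is produced by a messy GA, not hand-written.

```python
def first_fit(bins, size, capacity):
    """Put the item in the first open bin with room, else open a new bin."""
    for b in bins:
        if sum(b) + size <= capacity:
            b.append(size)
            return
    bins.append([size])

def best_fit(bins, size, capacity):
    """Put the item in the fullest open bin that still has room."""
    feasible = [b for b in bins if sum(b) + size <= capacity]
    if feasible:
        max(feasible, key=sum).append(size)
    else:
        bins.append([size])

def pack(items, capacity, choose):
    """Pack items in decreasing size order; choose(bins, size) returns which
    low-level heuristic to apply next -- the slot a learned rule would fill."""
    bins = []
    for size in sorted(items, reverse=True):
        heuristic = choose(bins, size)
        heuristic(bins, size, capacity)
    return bins
```

Passing `lambda bins, size: first_fit` as the chooser recovers plain first-fit decreasing; a learned chooser switches heuristics depending on the current packing state.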
Bin packing in multiple dimensions: Inapproximability results and approximation schemes
 MATHEMATICS OF OPERATIONS RESEARCH
, 2006
Cited by 23 (4 self)
Abstract
We study the multidimensional generalization of the classical Bin Packing problem: given a collection of d-dimensional rectangles of specified sizes, the goal is to pack them into the minimum number of unit cubes. A long history of results exists for this problem and its special cases. Currently, the best known approximation algorithm for packing two-dimensional rectangles achieves a guarantee of 1.69 in the asymptotic case (i.e., when the optimum uses a large number of bins) [3]. An important open question has been whether two-dimensional bin packing is essentially similar to the one-dimensional case in that it admits an asymptotic polynomial time approximation scheme (APTAS) [12, 17] or not. We answer the question in the negative and show that the problem is APX-hard in the asymptotic sense. On the positive side, we give the following results: First, we consider the special case where we have to pack d-dimensional cubes into the minimum number of unit cubes. We give an asymptotic polynomial time approximation scheme for this problem. This represents a significant improvement over the previous best known asymptotic approximation factor of 2 − (2/3)^d [21] (1.45 for d = 2 [11]), and settles the approximability of the problem. Second, we give a polynomial time algorithm for packing arbitrary rectangles into at most OPT square bins with sides of length 1 + ε, where OPT denotes the minimum number of unit bins required to pack these rectangles. Interestingly, this result does not have an additive constant term, i.e., it is not an asymptotic result. As a corollary, we obtain a polynomial time approximation scheme for the problem of placing a collection of rectangles in a minimum-area encasing rectangle, settling also the approximability of this problem.
Approximation Algorithms for the Multiple Knapsack Problem with Assignment Restrictions
, 1998
Cited by 22 (0 self)
Abstract
Motivated by a real-world application, we study the multiple knapsack problem with assignment restrictions (MKAR): We are given a set of items N = {1, ..., n} and a set of knapsacks M = {1, ..., m}. Each item j ∈ N has a positive real weight w_j and each knapsack i ∈ M has a positive real capacity c_i associated with it. In addition, for each item j ∈ N a set A_j ⊆ M of knapsacks that can hold item j is specified. In a feasible assignment of items to knapsacks, for each knapsack i ∈ M we need to choose a subset S_i of items in N to be assigned to knapsack i, such that (i) each item is assigned to at most one knapsack, (ii) the assignment restrictions are satisfied, and (iii) each knapsack's capacity constraint is satisfied. We consider two objectives: (i) maximize the assigned weight Σ_{i∈M} Σ_{j∈S_i} w_j, and (ii) minimize the utilized capacity Σ_{i: S_i ≠ ∅} c_i. Our results include two 1/3-approximation algorithms and two 1/2-approximation algorithms for the single-objective problem of maximizing assigned weight. For the bicriteria problem, which considers both objectives, we present two algorithms with performance ratios (1/3, 2) and (1/2, 3), respectively.
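The MKAR model itself is easy to make concrete with a greedy feasible assignment: items in decreasing weight, each placed in an allowed knapsack with the most remaining room. This sketch only illustrates the model's constraints; it is not one of the paper's algorithms and carries none of their 1/3 or 1/2 approximation guarantees.

```python
def greedy_mkar(weights, capacities, allowed):
    """Greedy feasible assignment for MKAR.
    weights[j]: item weights; capacities[i]: knapsack capacities;
    allowed[j]: set of knapsack indices that may hold item j.
    Returns (assignment dict item->knapsack, total assigned weight)."""
    remaining = list(capacities)
    assignment = {}
    for j in sorted(range(len(weights)), key=lambda j: -weights[j]):
        feasible = [i for i in allowed[j] if remaining[i] >= weights[j]]
        if feasible:
            i = max(feasible, key=lambda i: remaining[i])
            assignment[j] = i
            remaining[i] -= weights[j]
    assigned_weight = sum(weights[j] for j in assignment)
    return assignment, assigned_weight
```

Note how the restriction sets A_j, the at-most-one-knapsack rule, and the capacity constraints each show up as one line of the loop.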