Results 1–10 of 240
Hyper-Heuristics: An Emerging Direction in Modern Search Technology
, 2003
Cited by 92 (38 self)
This chapter introduces and overviews an emerging methodology in search and optimisation. One of the key aims of these new approaches, which have been termed hyper-heuristics, is to raise the level of generality at which optimisation systems can operate. An objective is that hyper-heuristics will lead to more general systems that are able to handle a wide range of problem domains, in contrast to current metaheuristic technology, which tends to be customised to a particular problem or a narrow class of problems. Hyper-heuristics are broadly concerned with intelligently choosing the right heuristic or algorithm in a given situation. Of course, a hyper-heuristic can be (and often is) a (meta)heuristic, and it can operate on (meta)heuristics. In a certain sense, a hyper-heuristic works at a higher level than the typical application of metaheuristics to optimisation problems, i.e. a hyper-heuristic could be thought of as a (meta)heuristic which operates on lower-level (meta)heuristics. In this chapter we introduce the idea, give a brief history of this emerging area, and review some of the latest work published in the field.
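As a concrete illustration of the idea (not an algorithm from the chapter), a minimal selection hyper-heuristic can keep a running score for each low-level heuristic and bias its choice toward those that have recently improved the solution. The sketch below assumes a toy domain (minimising the number of 1-bits in a bitstring) and two made-up low-level heuristics:

```python
import random

def selection_hyper_heuristic(f, x, heuristics, iters=1000, seed=0):
    """Minimal selection hyper-heuristic (illustrative sketch): keep a
    decaying score per low-level heuristic, choose by roulette wheel
    biased toward recently successful heuristics, accept non-worsening
    moves."""
    rng = random.Random(seed)
    scores = {h.__name__: 1.0 for h in heuristics}
    best, best_val = x[:], f(x)
    for _ in range(iters):
        # roulette-wheel choice over current scores
        total = sum(scores.values())
        r, acc = rng.random() * total, 0.0
        chosen = heuristics[-1]
        for h in heuristics:
            acc += scores[h.__name__]
            if r <= acc:
                chosen = h
                break
        cand = chosen(best[:], rng)
        cand_val = f(cand)
        delta = best_val - cand_val          # positive means improvement
        scores[chosen.__name__] = max(0.1, 0.9 * scores[chosen.__name__]
                                           + max(delta, 0.0))
        if cand_val <= best_val:
            best, best_val = cand, cand_val
    return best, best_val

# toy low-level heuristics: mutate the bitstring in place and return it
def flip_random(x, rng):
    i = rng.randrange(len(x)); x[i] ^= 1; return x

def flip_first_one(x, rng):
    for i, b in enumerate(x):
        if b: x[i] = 0; break
    return x

sol, val = selection_hyper_heuristic(sum, [1] * 20,
                                     [flip_random, flip_first_one])
```

The hyper-heuristic never inspects the domain itself, only the heuristics' track record — which is exactly the generality claim the chapter makes.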
Energy-Aware Partitioning for Multiprocessor Real-Time Systems
 Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS), IEEE CS
, 2003
Cited by 74 (2 self)
In this paper, we address the problem of partitioning periodic real-time tasks in a multiprocessor platform by considering both feasibility and energy-awareness perspectives: our objective is to compute the feasible partitioning that results in minimum energy consumption on multiple identical processors under variable-voltage Earliest-Deadline-First (EDF) scheduling. We show that the problem is NP-hard in the strong sense on m ≥ 2 processors, even when feasibility is guaranteed a priori. We then develop a framework in which load balancing plays a major role in producing energy-efficient partitionings, and experimentally evaluate the feasibility and energy-efficiency performance of several partitioning heuristics.
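The feasibility side is easy to state: EDF schedules a task set on one processor iff its total utilisation is at most 1, so a partitioning heuristic must keep each processor's load within that bound. A worst-fit-decreasing sketch, in the load-balancing spirit the abstract describes (the exact heuristic set is our assumption, not the paper's):

```python
def worst_fit_decreasing(utils, m):
    """Partition task utilisations onto m identical processors.
    Worst-fit-decreasing sends each task to the least-loaded processor,
    which balances load; EDF feasibility requires each load <= 1."""
    loads = [0.0] * m
    assignment = [[] for _ in range(m)]
    for u in sorted(utils, reverse=True):
        i = min(range(m), key=lambda j: loads[j])   # least-loaded processor
        if loads[i] + u > 1.0:
            return None                             # heuristic failed to fit
        loads[i] += u
        assignment[i].append(u)
    return assignment, loads

# six tasks with exactly representable utilisations, two processors
result = worst_fit_decreasing([0.5, 0.375, 0.25, 0.25, 0.25, 0.125], 2)
```

Balanced loads matter here because dynamic-voltage-scaling energy grows convexly with per-processor load, so an even split lets every processor run slower.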
Dynamic Placement of Virtual Machines for Managing SLA Violations
 10th IFIP/IEEE International Symposium on Integrated Network Management
, 2007
Cited by 70 (1 self)
A dynamic server migration and consolidation algorithm is introduced. The algorithm is shown to provide substantial improvement over static server consolidation in reducing the amount of required capacity and the rate of service-level-agreement (SLA) violations. Benefits accrue for workloads that are variable and can be forecast over intervals shorter than the time scale of demand variability. The management algorithm reduces the amount of physical capacity required to support a specified rate of SLA violations for a given workload by as much as 50% compared with a static consolidation approach. Another result is that the rate of SLA violations at fixed capacity may be reduced by up to 20%. The results are based on hundreds of production workload traces across a variety of operating systems, applications, and industries.
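A heavily simplified sketch of the general approach (our reconstruction, not the paper's algorithm): size each VM by a high quantile of its demand forecast, then consolidate with first-fit. The quantile is precisely the knob that trades required capacity against SLA-violation risk:

```python
def percentile(xs, q):
    """Crude empirical quantile: the q-th order statistic (illustrative)."""
    s = sorted(xs)
    return s[min(len(s) - 1, int(q * len(s)))]

def place_vms(forecasts, host_capacity, quantile=0.95):
    """First-fit-decreasing placement of VMs sized by a forecast quantile
    (hedged sketch in the spirit of the paper).  Lowering `quantile`
    packs tighter but risks more SLA violations when demand spikes."""
    hosts = []       # remaining capacity per host
    placement = {}   # vm name -> host index
    for vm, series in sorted(forecasts.items(),
                             key=lambda kv: -percentile(kv[1], quantile)):
        demand = percentile(series, quantile)
        for i, free in enumerate(hosts):
            if free >= demand:          # first host with room
                hosts[i] -= demand
                placement[vm] = i
                break
        else:                           # no host fits: open a new one
            hosts.append(host_capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, hosts

placement, hosts = place_vms({"a": [1, 2, 3, 4],
                              "b": [2, 2, 2, 2],
                              "c": [5, 5, 5, 5]}, host_capacity=8)
```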
Hedging uncertainty: Approximation algorithms for stochastic optimization problems
 In Proceedings of the 10th International Conference on Integer Programming and Combinatorial Optimization
, 2004
Cited by 67 (10 self)
We initiate the design of approximation algorithms for stochastic combinatorial optimization problems; we formulate the problems in the framework of two-stage stochastic optimization and provide nearly tight approximation algorithms. Our problems range from the simple (shortest path, vertex cover, bin packing) to the complex (facility location, set cover), and contain representatives with different approximation ratios. The approximation ratio of the stochastic variant of a typical problem is of the same order of magnitude as that of its deterministic counterpart. Furthermore, common techniques for designing approximation algorithms, such as LP rounding, the primal-dual method, and the greedy algorithm, can be carefully adapted to obtain these results.
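The two-stage framework referred to here can be written generically as follows (a standard formulation, not reproduced from the paper): first-stage decisions $x$ are bought at today's costs $c$, a scenario $\omega$ is then revealed, and recourse decisions $y_\omega$ restore feasibility at scenario-dependent (typically inflated) costs $q_\omega$:

```latex
\min_{x \in X} \; c^{\top} x
  \;+\; \mathbb{E}_{\omega}\!\left[
      \min_{y_{\omega}} \bigl\{\, q_{\omega}^{\top} y_{\omega}
      \;:\; (x, y_{\omega}) \ \text{feasible for scenario } \omega \,\bigr\}
  \right]
```

Approximating this objective well is a hedging problem: buy in the first stage only those elements likely to be useful across many scenarios, and pay the recourse premium for the rest.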
Learning Evaluation Functions for Global Optimization and Boolean Satisfiability
 In Proc. of 15th National Conf. on Artificial Intelligence (AAAI)
, 1998
Cited by 62 (3 self)
This paper describes STAGE, a learning approach to automatically improving search performance on optimization problems. STAGE learns an evaluation function which predicts the outcome of a local search algorithm, such as hill-climbing or WALKSAT, as a function of state features along its search trajectories. The learned evaluation function is used to bias future search trajectories toward better optima. We present positive results on six large-scale optimization domains.
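The STAGE loop can be caricatured in a few lines (a simplified reconstruction, not Boyan and Moore's code): alternate hill-climbing on the true objective with hill-climbing on a learned predictor of the outcome a run started from a given state would reach. The toy quadratic domain and the one-feature linear fit below are our own simplifications:

```python
import random

def hill_climb(x, f, neighbors, rng, steps=50):
    """Greedy descent on f; returns the trajectory of visited states."""
    traj = [x]
    for _ in range(steps):
        cand = min(neighbors(x, rng), key=f)
        if f(cand) >= f(x):
            break
        x = cand
        traj.append(x)
    return traj

def fit_linear(pairs):
    """Least-squares fit of outcome ~ a*feature + b (one scalar feature)."""
    n = len(pairs)
    sx = sum(p for p, _ in pairs); sy = sum(y for _, y in pairs)
    sxx = sum(p * p for p, _ in pairs); sxy = sum(p * y for p, y in pairs)
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom if denom else 0.0
    b = (sy - a * sx) / n
    return lambda phi: a * phi + b

def stage(f, phi, neighbors, start, rounds=3, seed=0):
    """Simplified STAGE: learn V(phi(state)) ~ outcome of hill-climbing
    on f from that state, then hill-climb on V to pick the next start."""
    rng = random.Random(seed)
    data, x, best = [], start, start
    for _ in range(rounds):
        traj = hill_climb(x, f, neighbors, rng)
        if f(traj[-1]) < f(best):
            best = traj[-1]
        data += [(phi(s), f(traj[-1])) for s in traj]
        V = fit_linear(data)
        # search the secondary landscape V∘phi for a promising restart
        x = hill_climb(x, lambda s: V(phi(s)), neighbors, rng)[-1]
    return best

best = stage(f=lambda x: (x - 3) ** 2,
             phi=lambda x: float(x),
             neighbors=lambda x, rng: [x - 1, x + 1],
             start=10)
```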
Learning Evaluation Functions to Improve Optimization by Local Search
 Journal of Machine Learning Research
, 2000
Cited by 57 (0 self)
This paper describes algorithms that learn to improve search performance on large-scale optimization tasks. The main algorithm, Stage, works by learning an evaluation function that predicts the outcome of a local search algorithm, such as hill-climbing or Walksat, from features of states visited during search. The learned evaluation function is then used to bias future search trajectories toward better optima on the same problem. Another algorithm, X-Stage, transfers previously learned evaluation functions to new, similar optimization problems. Empirical results are provided on seven large-scale optimization domains: bin-packing, channel routing, Bayesian network structure-finding, radiotherapy treatment planning, cartogram design, Boolean satisfiability, and Boggle board setup.
The Three-Dimensional Bin Packing Problem
 Operations Research
, 2000
Cited by 41 (5 self)
The problem addressed in this paper is that of orthogonally packing a given set of rectangular-shaped boxes into the minimum number of rectangular bins. The problem is strongly NP-hard and extremely difficult to solve in practice. Lower bounds are discussed, and it is proved that the asymptotic worst-case performance of the continuous lower bound is 1/8. An exact algorithm for filling a single bin is developed, leading to the definition of an exact branch-and-bound algorithm for the three-dimensional bin packing problem, which also incorporates original approximation algorithms. Extensive computational results, involving instances with up to 60 boxes, are presented: it is shown that many instances can be solved to optimality within a reasonable time limit.
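The continuous lower bound mentioned here is simply total box volume over bin volume, rounded up; the 1/8 worst-case ratio arises because a box just over half the bin in every dimension blocks a whole bin while filling only about an eighth of its volume. A sketch:

```python
import math

def continuous_lower_bound(boxes, bin_dims):
    """Volume-based ('continuous') lower bound on the number of bins
    for 3-D bin packing: ceil(total box volume / bin volume)."""
    W, H, D = bin_dims
    vol = sum(w * h * d for w, h, d in boxes)
    return math.ceil(vol / (W * H * D))

# Worst case in action: eight 51x51x51 boxes in 100x100x100 bins.
# No two boxes share a bin (two of them exceed 100 along every axis),
# so the optimum is 8 bins, yet the volume bound gives only 2.
lb = continuous_lower_bound([(51, 51, 51)] * 8, (100, 100, 100))
```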
Hyper-Heuristics: Learning to Combine Simple Heuristics in Bin-Packing Problem
Learning a Procedure That Can Solve Hard Bin-Packing Problems: a new GA-based approach to hyper-heuristics
, 2003
Cited by 30 (3 self)
The idea underlying hyper-heuristics is to discover some combination of familiar, straightforward heuristics that performs very well across a whole range of problems. To be worthwhile, such a combination should outperform all of the constituent heuristics. In this paper we describe a novel messy-GA-based approach that learns such a heuristic combination for solving one-dimensional bin-packing problems. When applied to a large set of benchmark problems, the learned procedure finds an optimal solution for nearly 80% of them, and for the rest produces an answer very close to optimal. When compared with its own constituent heuristics, it ranks first in 98% of the problems.
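One way to picture such a combination (our illustrative stand-in for the messy-GA's evolved rules, not the paper's encoding): a list of (heuristic, fraction) pairs, where each entry packs that fraction of the still-unpacked items with its heuristic:

```python
def first_fit(item, bins, cap):
    """Place item in the first bin with room, else open a new bin."""
    for b in bins:
        if sum(b) + item <= cap:
            b.append(item)
            return
    bins.append([item])

def best_fit(item, bins, cap):
    """Place item in the feasible bin with least leftover space."""
    best = None
    for b in bins:
        slack = cap - sum(b) - item
        if slack >= 0 and (best is None or slack < cap - sum(best) - item):
            best = b
    if best is not None:
        best.append(item)
    else:
        bins.append([item])

def apply_combination(items, cap, combo):
    """Apply a learned combination: each (heuristic, fraction) entry packs
    that fraction of the remaining (size-sorted) items; any leftovers go
    to first-fit.  Purely illustrative of the 'combination' idea."""
    bins, remaining = [], sorted(items, reverse=True)
    for heuristic, frac in combo:
        take = max(1, int(frac * len(remaining))) if remaining else 0
        for item in remaining[:take]:
            heuristic(item, bins, cap)
        remaining = remaining[take:]
    for item in remaining:
        first_fit(item, bins, cap)
    return bins

bins = apply_combination([6, 5, 4, 3, 2, 1], 10,
                         [(best_fit, 0.5), (first_fit, 0.5)])
```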
Polynomial Time Approximation Schemes for Class-Constrained Packing Problems
 Proc. of Workshop on Approximation Algorithms
, 1999
Cited by 28 (6 self)
We consider variants of the classic bin packing and multiple knapsack problems, in which sets of items of different classes (colors) need to be placed in bins; the items may have different sizes and values. Each bin has a limited capacity, and a bound on the number of distinct classes of items it can hold. In the class-constrained multiple knapsack (CCMK) problem, our goal is to maximize the total value of packed items, whereas in class-constrained bin-packing (CCBP) we seek to minimize the number of (identical) bins needed for packing all the items. We give a polynomial time approximation scheme (PTAS) for CCMK and a dual PTAS for CCBP. We also show that the 0-1 class-constrained knapsack admits a fully polynomial time approximation scheme, even when the number of distinct colors of items depends on the input size. Finally, we introduce the generalized class-constrained packing problem (GCCP), where each item may have more than one color. We show that GCCP is APX...
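A natural baseline for CCBP (just a first-fit illustration; the paper's contribution is the dual PTAS, not this) opens a new bin whenever either the capacity or the distinct-colour budget of every open bin would be violated:

```python
def class_constrained_first_fit(items, cap, class_bound):
    """First-fit for class-constrained bin packing (CCBP): a bin accepts
    an item only if both its capacity and its distinct-colour budget
    allow it.  items is a list of (size, colour) pairs."""
    bins = []  # each bin is [used_capacity, colour_set, contents]
    for size, colour in items:
        for b in bins:
            used, colours, contents = b
            fits = used + size <= cap
            colour_ok = colour in colours or len(colours) < class_bound
            if fits and colour_ok:
                b[0] = used + size
                colours.add(colour)
                contents.append((size, colour))
                break
        else:
            bins.append([size, {colour}, [(size, colour)]])
    return bins

bins = class_constrained_first_fit(
    [(4, "r"), (4, "g"), (4, "r"), (4, "b")], cap=10, class_bound=2)
```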