Results 1–10 of 15
Automatic Algorithm Configuration based on Local Search
 IN AAAI ’07: PROC. OF THE TWENTY-SECOND CONFERENCE ON ARTIFICIAL INTELLIGENCE
, 2007
Abstract

Cited by 76 (32 self)
The determination of appropriate values for free algorithm parameters is a challenging and tedious task in the design of effective algorithms for hard problems. Such parameters include categorical choices (e.g., neighborhood structure in local search or variable/value ordering heuristics in tree search), as well as numerical parameters (e.g., noise or restart timing). In practice, tuning of these parameters is largely carried out manually by applying rules of thumb and crude heuristics, while more principled approaches are only rarely used. In this paper, we present a local search approach for algorithm configuration and prove its convergence to the globally optimal parameter configuration. Our approach is very versatile: it can, e.g., be used for minimising runtime in decision problems or for maximising solution quality in optimisation problems. It further applies to arbitrary algorithms, including heuristic tree search and local search algorithms, with no limitation on the number of parameters. Experiments in four algorithm configuration scenarios demonstrate that our automatically determined parameter settings always outperform the algorithm defaults, sometimes by several orders of magnitude. Our approach also shows better performance and greater flexibility than the recent CALIBRA system. Our ParamILS code, along with instructions on how to use it for tuning your own algorithms, is available online at
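The configuration procedure this abstract describes can be illustrated with a toy sketch: a first-improvement local search over a discrete parameter space, using a one-exchange neighbourhood (change one parameter value at a time). The `mean_runtime` surface, parameter names, and domains below are invented for illustration; the actual ParamILS algorithm adds iterated-local-search perturbations and careful cost estimation over instance sets.

```python
def one_exchange_neighbours(config, domains):
    """All configurations differing from `config` in exactly one parameter."""
    for name, values in domains.items():
        for v in values:
            if v != config[name]:
                nbr = dict(config)
                nbr[name] = v
                yield nbr

def local_search_configure(cost, domains, start, max_steps=100):
    """First-improvement local search in parameter-configuration space."""
    current, current_cost = dict(start), cost(start)
    for _ in range(max_steps):
        for nbr in one_exchange_neighbours(current, domains):
            c = cost(nbr)
            if c < current_cost:
                current, current_cost = nbr, c
                break
        else:  # no improving neighbour: local optimum reached
            return current, current_cost
    return current, current_cost

# Invented "mean runtime" surface with its optimum at noise=0.2, restarts=100.
domains = {"noise": [0.0, 0.2, 0.4], "restarts": [10, 100, 1000]}
def mean_runtime(cfg):
    return abs(cfg["noise"] - 0.2) + abs(cfg["restarts"] - 100) / 1000

best, best_cost = local_search_configure(mean_runtime, domains,
                                         {"noise": 0.4, "restarts": 10})
print(best)   # {'noise': 0.2, 'restarts': 100}
```

The one-exchange neighbourhood keeps the neighbourhood size linear in the number of parameters, which is what makes the approach workable with no limit on parameter count.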
Performance prediction and automated tuning of randomized and parametric algorithms
 In Proc. of CP-06
, 2006
Abstract

Cited by 60 (23 self)
Abstract. Machine learning can be used to build models that predict the runtime of search algorithms for hard combinatorial problems. Such empirical hardness models have previously been studied for complete, deterministic search algorithms. In this work, we demonstrate that such models can also make surprisingly accurate predictions of the runtime distributions of incomplete and randomized search methods, such as stochastic local search algorithms. We also show for the first time how information about an algorithm’s parameter settings can be incorporated into a model, and how such models can be used to automatically adjust the algorithm’s parameters on a per-instance basis in order to optimize its performance. Empirical results for Novelty+ and SAPS on structured and unstructured SAT instances show very good predictive performance and significant speedups of our automatically determined parameter settings when compared to the default and best fixed distribution-specific parameter settings.
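The per-instance tuning step can be sketched in a few lines: a learned model maps instance features and parameter values to predicted runtime, and the tuner simply returns the candidate setting with the lowest prediction. The quadratic basis mirrors the kind of basis expansion such models use; the weights, the clauses-to-variables feature, and the noise parameter below are all invented for illustration.

```python
def predict_log_runtime(ratio, noise, w):
    """Linear model over a quadratic basis: [ratio, noise, ratio*noise, 1]."""
    phi = [ratio, noise, ratio * noise, 1.0]
    return sum(wi * xi for wi, xi in zip(w, phi))

def tune_per_instance(ratio, candidates, w):
    """Pick the parameter value with the lowest predicted runtime."""
    return min(candidates, key=lambda n: predict_log_runtime(ratio, n, w))

w = [1.0, 2.0, -1.0, 0.0]        # made-up "fitted" weights
candidates = [0.1, 0.3, 0.5]
print(tune_per_instance(4.3, candidates, w))   # hard instance -> high noise
print(tune_per_instance(1.0, candidates, w))   # easy instance -> low noise
```

The `ratio * noise` interaction term is what lets the chosen parameter value depend on the instance at all; a purely additive model would pick the same setting everywhere.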
Exploring Hyper-heuristic Methodologies with Genetic Programming
Abstract

Cited by 36 (13 self)
Hyper-heuristics represent a novel search methodology that is motivated by the goal of automating the process of selecting or combining simpler heuristics in order to solve hard computational search problems. An extension of the original hyper-heuristic idea is to generate new heuristics which are not currently known. These approaches operate on a search space of heuristics rather than directly on a search space of solutions to the underlying problem, which is the case with most metaheuristic implementations. In the majority of hyper-heuristic studies so far, a framework is provided with a set of human-designed heuristics, taken from the literature, with good measures of performance in practice. A less well studied approach aims to generate new heuristics from a set of potential heuristic components. The purpose of this chapter is to discuss this class of hyper-heuristics, in which Genetic Programming is the most widely used methodology. A detailed discussion is presented including the steps needed to apply this technique, some representative case studies, a literature review of related work, and a discussion of relevant issues. Our aim is to convey the exciting potential of this innovative approach for automating the heuristic design process.
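The "generate new heuristics from components" idea can be sketched without full Genetic Programming: represent a heuristic as a weighted combination of primitive scoring components, then search over the weights on a training instance. Everything below (the bin-packing setting, the two primitives, and random sampling standing in for GP) is an illustrative simplification.

```python
import random

def make_scoring_heuristic(w_gap, w_item):
    """A 'generated' heuristic: score a candidate bin for an item as a
    weighted sum of primitive components (leftover gap, item size)."""
    def score(item, space_left):
        return w_gap * (space_left - item) + w_item * item
    return score

def pack(items, capacity, score):
    """Place each item into the feasible bin with the lowest score;
    open a new bin when none fits. Returns the number of bins used."""
    bins = []
    for item in items:
        feasible = [b for b, load in enumerate(bins) if load + item <= capacity]
        if feasible:
            b = min(feasible, key=lambda b: score(item, capacity - bins[b]))
            bins[b] += item
        else:
            bins.append(item)
    return len(bins)

# Search the space of heuristics: sample weightings, keep the best on a
# training instance (random sampling stands in for GP here).
rng = random.Random(0)
items, capacity = [7, 6, 5, 4, 3, 3, 2], 10
best_w, best_bins = None, len(items)
for _ in range(50):
    w = (rng.uniform(-1, 1), rng.uniform(-1, 1))
    n = pack(items, capacity, make_scoring_heuristic(*w))
    if n < best_bins:
        best_w, best_bins = w, n
print(best_bins)   # 3 bins: a best-fit-like weighting wins
```

A positive gap weight reproduces best-fit and a negative one worst-fit, so even this two-weight space already contains qualitatively different packing heuristics.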
Parameter adjustment based on performance prediction: Towards an instance-aware problem solver
 In: Technical Report MSR-TR-2005-125, Microsoft Research
, 2005
Abstract

Cited by 14 (5 self)
Tuning an algorithm’s parameters for robust and high performance is a tedious and time-consuming task that often requires knowledge about both the domain and the algorithm of interest. Furthermore, the optimal parameter configuration to use may differ considerably across problem instances. In this report, we define and tackle the algorithm configuration problem, which is to automatically choose the optimal parameter configuration for a given algorithm on a per-instance basis. We employ an indirect approach that predicts algorithm runtime for the problem instance at hand and each (continuous) parameter configuration, and then simply chooses the configuration that minimizes the prediction. This approach is based on similar work by Leyton-Brown et al. [LBNS02, NLBD+04], who tackle the algorithm selection problem [Ric76] (given a problem instance, choose the best algorithm to solve it). While all previous studies of runtime prediction focused on tree search algorithms, we demonstrate that it is possible to fairly accurately predict the runtime of SAPS [HTH02], one of the best-performing stochastic local search algorithms for SAT. We also show that our approach automatically picks parameter configurations that speed up SAPS by an average factor of more than two when compared to its default parameter configuration. Finally, we introduce sequential Bayesian learning to the problem of runtime prediction, enabling an incremental learning approach and yielding very informative estimates of predictive uncertainty.
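The sequential Bayesian learning mentioned at the end can be sketched for the simplest case, a one-weight model y ≈ w·x with Gaussian noise: each new (feature, runtime) observation updates the posterior mean and variance of w in closed form, and the shrinking variance is exactly the kind of predictive-uncertainty estimate the report refers to. The data below are synthetic, and the report's actual model is multivariate.

```python
def posterior_update(mean, var, x, y, noise_var=0.25):
    """Closed-form Bayesian update for y ~ N(w*x, noise_var), w ~ N(mean, var)."""
    precision = 1.0 / var + x * x / noise_var
    new_var = 1.0 / precision
    new_mean = new_var * (mean / var + x * y / noise_var)
    return new_mean, new_var

# Incremental learning from synthetic runtime data generated by w_true = 2.
mean, var = 0.0, 10.0                      # broad prior over the weight
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
    mean, var = posterior_update(mean, var, x, y)
print(mean, var)    # the posterior concentrates near the true weight 2
```

Because each update only needs the previous mean and variance, training data never has to be revisited, which is what makes the approach incremental.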
Mapping the performance of heuristics for constraint satisfaction
 In: IEEE Congress on Evolutionary Computation (CEC)
, 2010
Abstract

Cited by 10 (10 self)
Abstract — Hyper-heuristics are high-level search methodologies that operate over a set of heuristics which operate directly on the problem domain. In one of the hyper-heuristic frameworks, the goal is automating the process of selecting a human-designed low-level heuristic at each step to construct a solution for a given problem. Constraint Satisfaction Problems (CSP) are well-known NP-complete problems. In this study, the behaviours of two variable ordering heuristics, Max-Conflicts (MXC) and Saturation Degree (SD), with respect to various combinations of constraint density and tightness values are investigated in depth over a set of random CSP instances. The empirical results show that the performance of these two heuristics is somewhat complementary and varies for changing constraint density and tightness value pairs. The outcome is used to design three hyper-heuristics using MXC and SD as low-level heuristics to construct a solution for unseen CSP instances. It has been observed that these hyper-heuristics improve the performance of the individual low-level heuristics even further in terms of mean consistency checks for some CSP instances.
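The resulting hyper-heuristics are essentially dispatch rules over the (density, tightness) plane. A toy version, with an invented decision boundary rather than the one the empirical study would fit:

```python
def select_heuristic(density, tightness):
    """Toy selection hyper-heuristic: route an instance to the variable-
    ordering heuristic assumed stronger in its region of the
    (density, tightness) plane. The 0.25 product threshold is
    illustrative only, not the paper's fitted boundary."""
    return "SD" if density * tightness > 0.25 else "MXC"

print(select_heuristic(0.8, 0.9))   # tightly constrained region -> SD
print(select_heuristic(0.2, 0.3))   # loosely constrained region -> MXC
```

The point of mapping heuristic performance first is precisely to replace such a guessed threshold with a boundary measured on random instances.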
A memory-based RASH optimizer
 IN AAAI-06 WORKSHOP ON HEURISTIC SEARCH, MEMORY-BASED HEURISTICS AND THEIR APPLICATIONS
, 2006
Abstract

Cited by 10 (2 self)
This paper presents a memory-based Reactive Affine Shaker (MRASH) algorithm for global optimization. The Reactive Affine Shaker is an adaptive search algorithm based only on function values. MRASH is an extension of RASH in which good starting points for RASH are suggested online by using Bayesian Locally Weighted Regression (BLWR). Both techniques use memory about the previous history of the search to guide future exploration, but in very different ways: RASH compiles the previous experience into a local search area where sample points are drawn, while locally weighted regression saves the entire previous history, to be mined extensively when an additional sample point is generated. Because of the high computational cost of the BLWR model, it is applied only to evaluate the potential of an initial point for a local search run. The experimental results, focused on the case where the dominant computational cost is the evaluation of the target function f, show that MRASH is indeed capable of reaching good results with a smaller number of function evaluations.
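The core of the Affine Shaker can be sketched as a sample-in-a-box local search that adapts its search region from function values alone: widen on success, shrink on failure. The doubling/halving factors and the sphere test function are illustrative; the real algorithm adapts a full affine transformation of the region, and MRASH's BLWR-guided restart selection is omitted here.

```python
import random

def affine_shaker(f, x0, width=1.0, steps=200, seed=0):
    """Box-based shaker sketch: sample uniformly around the incumbent,
    widen the box after an improvement and shrink it otherwise."""
    rng = random.Random(seed)
    x, fx, w = list(x0), f(x0), width
    for _ in range(steps):
        cand = [xi + rng.uniform(-w, w) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            w *= 2.0          # success: expand the search region
        else:
            w *= 0.5          # failure: focus the search
    return x, fx

sphere = lambda v: sum(t * t for t in v)
x, fx = affine_shaker(sphere, [2.0, -3.0])
print(fx)   # improved over the starting value of 13.0
```

Only function values are compared, never gradients, which matches the "based only on the function values" property of the original algorithm.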
GOSH! Gossiping Optimization Search Heuristics
 In Proceedings of the Learning and Intelligent Optimization Workshop (LION 2007)
, 2007
Abstract

Cited by 4 (1 self)
While the use of distributed computing in search and optimization problems has a long research history, most efforts have been devoted to parallel implementations with strict synchronization requirements or to distributed architectures where a central server coordinates the work of clients by partitioning the search space or acting as a status repository. In this paper we discuss the distributed implementation of global function optimization through decentralized processing in a peer-to-peer fashion, where relevant information is exchanged among nodes by means of epidemic protocols. A key issue in such a setting is the degradation of solution quality due to the lack of complete information about the global search status; a trade-off between message complexity and solution quality must be investigated. Preliminary computational results in a simplified setting, reported in the experimental section, show that further research in this direction is warranted.
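The epidemic exchange can be sketched as a push-gossip round: each node sends its best-known objective value to one random peer, which keeps the minimum. The node count, values, and round count below are arbitrary; a real protocol would also exchange search-state information, and the volume of such messages is the cost side of the trade-off the paper discusses.

```python
import random

def gossip_round(best_values, rng):
    """One push round: every node sends its best-known value to one random
    peer; the receiver keeps the minimum (minimisation problem)."""
    n = len(best_values)
    for i in range(n):
        j = rng.randrange(n - 1)
        if j >= i:
            j += 1               # pick a peer other than i
        best_values[j] = min(best_values[j], best_values[i])

rng = random.Random(1)
values = [9.0, 4.0, 7.0, 1.0, 6.0]    # each node's locally found optimum
for _ in range(10):
    gossip_round(values, rng)
print(values)    # the global best (1.0) spreads epidemically
```

No node ever holds the full global state; information spreads in expected logarithmic rounds, which is what makes the fully decentralized setting workable.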
Learning while Optimizing an Unknown Fitness Surface
Abstract

Cited by 2 (1 self)
Abstract. This paper is about Reinforcement Learning (RL) applied to online parameter tuning in Stochastic Local Search (SLS) methods. In particular, a novel application of RL is considered in the Reactive Tabu Search (RTS) method, where the appropriate amount of diversification in prohibition-based (tabu) local search is adapted in a fast online manner to the characteristics of a task and of the local configuration. We model the parameter-tuning policy as a Markov Decision Process where the states summarize relevant information about the recent history of the search, and we determine a near-optimal policy by using the Least Squares Policy Iteration (LSPI) method. Preliminary experiments on Maximum Satisfiability (MAX-SAT) instances show very promising results, indicating that the learnt policy is competitive with previously proposed reactive strategies.
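The parameter-tuning MDP can be illustrated with a deliberately simplified stand-in: two states summarising recent search history, two actions on the prohibition (tabu) tenure, and tabular Q-learning in place of the paper's LSPI. The states, actions, rewards, and learning rates are all invented for the sketch.

```python
STATES, ACTIONS = ("revisiting", "progressing"), ("shrink", "grow")

def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step on the tenure-adaptation MDP."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
# Toy experience: growing the tenure while revisiting pays off (more
# diversification); shrinking it while progressing pays off (intensify).
for _ in range(20):
    q_update(Q, "revisiting", "grow", 1.0, "progressing")
    q_update(Q, "revisiting", "shrink", -1.0, "revisiting")
    q_update(Q, "progressing", "shrink", 1.0, "progressing")

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)   # {'revisiting': 'grow', 'progressing': 'shrink'}
```

The learnt greedy policy reproduces the reactive rule the rewards encode: diversify when the search cycles, intensify when it is making progress.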
Using learning classifier systems to design selective hyper-heuristics for constraint satisfaction problems
 In 2013 IEEE Congress on Evolutionary Computation (CEC)
, 2013
Abstract

Cited by 1 (0 self)
Abstract—Constraint satisfaction problems (CSP) are defined by a set of variables, where each variable has a set of values it can be instantiated with, together with a set of constraints among the variables that restricts the values they can take simultaneously. The task is to find an assignment to all the variables that breaks no constraint. To solve a CSP instance, a search tree is created where each node represents a variable of the instance. The order in which the variables are selected for instantiation changes the form of the search tree and affects the cost of finding a solution. Many heuristics have been proposed to help decide the next variable to instantiate during the search, and they have proved helpful for some instances. In this paper we explore the use of learning classifier systems to construct selective hyper-heuristics that dynamically select, from a set of variable ordering heuristics for CSPs, the one that best matches the current problem state, in order to perform well on a wide range of instances. During a training phase, the system constructs state-heuristic rules as it explores the search space. Heuristics with good performance at certain points are rewarded and become more likely to be applied in similar situations. The approach is tested on random instances, providing promising results with respect to the median performance of the variable ordering heuristics used in isolation.
Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms
Abstract
Machine learning can be utilized to build models that predict the runtime of search algorithms for hard combinatorial problems. Such empirical hardness models have previously been studied for complete, deterministic search algorithms. In this work, we demonstrate that such models can also make surprisingly accurate runtime predictions for incomplete, randomized search methods, such as stochastic local search algorithms. We also show for the first time how information about an algorithm’s parameter settings can be incorporated into a model, and how such models can be used to automatically adjust the algorithm’s parameters on a per-instance basis in order to achieve peak performance. Empirical results for Novelty+ and SAPS on random and structured SAT instances show very good predictive performance and significant speedups of our automatically determined parameter settings when compared to the default and best fixed parameter settings.