Results 1–10 of 15
Algorithm runtime prediction: Methods & evaluation.
Artificial Intelligence, 2014
Cited by 17 (6 self)
Abstract. Perhaps surprisingly, it is possible to predict how long an algorithm will take to run on a previously unseen input, using machine learning techniques to build a model of the algorithm's runtime as a function of problem-specific instance features. Such models have many important applications and, over the past decade, a wide variety of techniques have been studied for building them. In this extended abstract of our 2014 AI Journal article of the same title, we summarize existing models and describe new model families and various extensions. In a comprehensive empirical analysis using 11 algorithms and 35 instance distributions spanning a wide range of hard combinatorial problems, we demonstrate that our new models yield substantially better runtime predictions than previous approaches in terms of their generalization to new problem instances, to new algorithms from a parameterized space, and to both simultaneously. Introduction. NP-complete problems are ubiquitous in AI. Luckily, while these problems may be hard to solve on worst-case inputs, it is often feasible to solve even large problem instances that arise in practice. Less luckily, state-of-the-art algorithms often exhibit extreme runtime variation across instances from realistic distributions, even when problem size is held constant, and conversely the same instance can take dramatically different amounts of time to solve depending on the algorithm used. • Algorithm selection. This classic problem of selecting the best from a given set of algorithms on a per-instance … (* This paper is an invited extended abstract of our 2014 AI Journal article.)
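The core idea, modeling runtime as a function of instance features, can be illustrated with a minimal sketch. The paper studies far richer model families; the feature values and runtimes below are invented, and the fit is a simple one-dimensional log-linear regression for brevity:

```python
# A minimal sketch (not the paper's models): fit log(runtime) as a linear
# function of one instance feature, then predict on an unseen instance.
# All feature values and runtimes are invented for illustration.
import math

# (feature vector, observed runtime in seconds) for training instances
train = [
    ((100, 4.2), 0.5),
    ((200, 4.3), 2.1),
    ((400, 4.4), 9.0),
    ((800, 4.5), 40.0),
]

def fit_log_linear(data):
    """Least-squares fit of log(runtime) ~ a * feature0 + b (1-D for brevity)."""
    xs = [f[0] for f, _ in data]
    ys = [math.log(t) for _, t in data]
    n = len(data)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def predict_runtime(model, features):
    a, b = model
    return math.exp(a * features[0] + b)  # back-transform from log space

model = fit_log_linear(train)
print(predict_runtime(model, (600, 4.45)))  # predicted seconds, unseen instance
```

Working in log space is the standard trick here: runtimes of combinatorial solvers span orders of magnitude, so errors on a multiplicative scale are far more meaningful than absolute ones.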
An empirical evaluation of portfolio approaches for solving CSPs.
In Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, 2013
Cited by 7 (3 self)
Abstract. Recent research in areas such as SAT solving and Integer Linear Programming has shown that the performance of a single, arbitrarily efficient solver can be significantly outperformed by a portfolio of possibly slower on-average solvers. We report an empirical evaluation and comparison of portfolio approaches applied to Constraint Satisfaction Problems (CSPs). We compared models built on top of off-the-shelf machine learning algorithms against approaches used in the SAT field and adapted for CSPs, considering different portfolio sizes and using as evaluation metrics the number of solved problems and the time taken to solve them. Results indicate that the best SAT approaches also deliver top performance in the CSP field and are slightly more competitive than simple models built on top of classification algorithms.
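The selection step common to these portfolio approaches can be sketched as a supervised classifier mapping instance features to a solver. This is a deliberately tiny 1-nearest-neighbour stand-in, not any of the compared systems; feature vectors and solver names are invented:

```python
# A minimal sketch of per-instance solver selection via supervised learning:
# a 1-nearest-neighbour classifier over instance features. The training
# pairs and solver names below are invented for illustration.

# (instance feature vector, best-performing solver observed on it)
train = [
    ((10.0, 0.2), "solver_A"),
    ((12.0, 0.3), "solver_A"),
    ((50.0, 0.9), "solver_B"),
    ((55.0, 0.8), "solver_B"),
]

def select_solver(features):
    """Pick the solver that was best on the most similar training instance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, best = min(train, key=lambda pair: sq_dist(pair[0], features))
    return best

print(select_solver((10.5, 0.22)))  # -> solver_A
print(select_solver((52.0, 0.85)))  # -> solver_B
```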
Algorithm portfolios based on cost-sensitive hierarchical clustering.
In Proc. of IJCAI'13, 2013
Cited by 5 (0 self)
Abstract. Different solution approaches for combinatorial problems often exhibit incomparable performance that depends on the concrete problem instance to be solved. Algorithm portfolios aim to combine the strengths of multiple algorithmic approaches by training a classifier that selects or schedules solvers depending on the given instance. We devise a new classifier that selects solvers based on a cost-sensitive hierarchical clustering model. Experimental results on SAT and MaxSAT show that the new method outperforms the most effective portfolio builders to date.
Robust Benchmark Set Selection for Boolean Constraint Solvers
Cited by 3 (2 self)
Abstract. We investigate the composition of representative benchmark sets for evaluating and improving the performance of robust Boolean constraint solvers in the context of satisfiability testing and answer set programming. Starting from an analysis of current practice, we isolate a set of desiderata for guiding the development of a parametrized benchmark selection algorithm. Our algorithm samples a benchmark set from a larger base set (or distribution) comprising a large variety of instances. This is done fully automatically, in a way that carefully calibrates instance hardness and avoids duplicates. We demonstrate the usefulness of this approach by means of empirical results showing that optimizing solvers on the benchmark sets produced by our method leads to better configurations than those obtained from the much larger original sets.
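The two desiderata named in the abstract, calibrated hardness and no duplicates, can be caricatured in a few lines. This is only a toy filter, not the paper's parametrized sampling algorithm; instance names, reference runtimes, and the hardness bounds are all invented:

```python
# A toy sketch of benchmark-set selection: keep instances whose reference
# runtime is neither trivial nor hopeless, and drop exact duplicates.
# Instance names, runtimes, and the bounds are invented for illustration.

base_set = [
    ("inst1.cnf", 3.0), ("inst2.cnf", 3.0),  # equal hardness is fine,
    ("inst1.cnf", 3.0),                      # identical instances are not
    ("easy.cnf", 0.01), ("hard.cnf", 9000.0),
    ("inst3.cnf", 42.0), ("inst4.cnf", 310.0),
]

def select_benchmarks(instances, min_t=1.0, max_t=3600.0):
    """Filter a base set by a hardness window, skipping duplicate names."""
    seen, selected = set(), []
    for name, runtime in instances:
        if name in seen or not (min_t <= runtime <= max_t):
            continue
        seen.add(name)
        selected.append(name)
    return selected

print(select_benchmarks(base_set))
# -> ['inst1.cnf', 'inst2.cnf', 'inst3.cnf', 'inst4.cnf']
```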
Boosting Sequential Solver Portfolios: Knowledge Sharing and Accuracy Prediction
Cited by 2 (0 self)
Abstract. Sequential algorithm portfolios for satisfiability testing (SAT), such as SATzilla and 3S, have enjoyed much success in the last decade. By leveraging the differing strengths of individual SAT solvers, portfolios employing older solvers have often fared as well as or better than newly designed ones in several categories of the annual SAT Competitions and Races. We propose two simple yet powerful techniques to further boost the performance of sequential portfolios: a generic form of knowledge sharing suitable for sequential SAT solver schedules, of the kind commonly employed in parallel SAT solvers, and a meta-level guardian classifier for judging whether to replace the main solver suggested by the portfolio with a recourse-action solver. With these additions, we show that the performance of the sequential portfolio solver 3S, which dominated other sequential categories but was ranked 10th in …
An Enhanced Features Extractor for a Portfolio of Constraint Solvers. http://www.cs.unibo.it/~amadini/sac_2014.pdf
In SAC, 2014
Cited by 2 (2 self)
Recent research has shown that a single, arbitrarily efficient solver can be significantly outperformed by a portfolio of possibly slower on-average solvers. Solver selection is usually done by means of (un)supervised learning techniques that exploit features extracted from the problem specification. In this paper we present a useful and flexible framework that can extract an extensive set of features from a Constraint (Satisfaction/Optimization) Problem defined in possibly different modeling languages: MiniZinc, FlatZinc, or XCSP. We also report empirical results showing that the performance obtainable using these features is effective and competitive with state-of-the-art CSP portfolio techniques.
Snappy: A simple algorithm portfolio (tool paper).
In International Conference on Theory and Applications of Satisfiability Testing (SAT'13), LNCS
Cited by 1 (1 self)
Abstract. Algorithm portfolios try to combine the strengths of individual algorithms, tackling the problem instance at hand with the most suitable technique. In the context of SAT, the effectiveness of such approaches is often demonstrated at the SAT Competitions. In this paper we show that a competitive algorithm portfolio can be designed in an extremely simple fashion. In fact, the algorithm portfolio we present requires neither offline learning nor knowledge of any complex machine learning tools. We hope that the utter simplicity of our approach, combined with its effectiveness, will make algorithm portfolios accessible to a broader range of researchers, including SAT and CSP solver developers.
From sequential algorithm selection to parallel portfolio selection.
In Dhaenens, 2015
Cited by 1 (1 self)
Abstract. In view of the increasing importance of hardware parallelism, a natural extension of per-instance algorithm selection is to select a set of algorithms to be run in parallel on a given problem instance, based on features of that instance. Here, we explore how existing algorithm selection techniques can be effectively parallelized. To this end, we leverage the machine learning models used by existing sequential algorithm selectors, such as 3S, ISAC, SATzilla and MEASP, and modify their selection procedures to produce a ranking of the given candidate algorithms; we then select the top n algorithms under this ranking to be run in parallel on n processing units. Furthermore, we adapt the presolving schedules obtained by aspeed to be effective in a parallel setting with different time budgets for each processing unit. Our empirical results demonstrate that, using 4 processing units, the best of our methods achieves a 12-fold average speedup over the best single solver on a broad set of challenging scenarios from the algorithm selection library.
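The ranking-then-top-n step described above is easy to sketch. The score function here is a hard-coded placeholder for a learned performance model (lower predicted runtime is better), so the particular numbers and the resulting order are illustrative only:

```python
# A sketch of turning a sequential selector's per-solver predictions into a
# parallel portfolio: rank candidates by predicted score and run the top n
# on n processing units. predicted_score is a made-up stand-in for a
# learned model; it ignores the instance features.

def predicted_score(solver, features):
    """Stand-in for a learned performance model (lower = better)."""
    fake = {"3S": 10.0, "ISAC": 7.0, "SATzilla": 5.0, "MEASP": 12.0,
            "other": 30.0}
    return fake[solver]

def top_n_portfolio(solvers, features, n):
    """Rank all candidates, then pick the n best to run in parallel."""
    ranking = sorted(solvers, key=lambda s: predicted_score(s, features))
    return ranking[:n]

solvers = ["3S", "ISAC", "SATzilla", "MEASP", "other"]
print(top_n_portfolio(solvers, features=None, n=4))
# -> ['SATzilla', 'ISAC', '3S', 'MEASP']
```

The appeal of this scheme is that any selector whose model produces per-solver scores can be parallelized with no retraining: only the final argmax is replaced by a top-n cut.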
Reinforcement Learning for Automatic Online Algorithm Selection: an Empirical Study
Abstract: In this paper a reinforcement learning methodology for automatic online algorithm selection is introduced and empirically tested. It is applicable to automatic algorithm selection methods that predict the performance of each available algorithm and then pick the best one. The experiments confirm the usefulness of the methodology: using online data results in better performance. As in many online learning settings, an exploration-versus-exploitation trade-off (synonymously, a learning-versus-earning trade-off) is incurred. Empirically investigating the quality of classic strategies for handling this trade-off in the automatic online algorithm selection setting is the secondary goal of this paper. The automatic online algorithm selection problem can be modelled as a contextual multi-armed bandit problem. Two classic strategies for solving it are tested in the context of automatic online algorithm selection: ε-greedy and lower confidence bound. The experiments show that a simple, purely exploitative greedy strategy outperforms strategies that explicitly perform exploration.
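The ε-greedy strategy named in the abstract can be sketched as follows. This is a generic textbook ε-greedy bandit loop, not the paper's experimental setup; it also drops the context (instance features) for brevity, and the algorithm names and reward distributions are invented:

```python
# A minimal ε-greedy sketch of online algorithm selection as a bandit:
# with probability ε pick a random algorithm (explore), otherwise pick the
# one with the best average observed reward (exploit). Algorithm names and
# reward distributions are invented; instance context is omitted for brevity.
import random

random.seed(0)
algorithms = ["alg_A", "alg_B", "alg_C"]
totals = {a: 0.0 for a in algorithms}  # cumulative reward per algorithm
counts = {a: 0 for a in algorithms}    # times each algorithm was chosen

def choose(epsilon=0.1):
    untried = [a for a in algorithms if counts[a] == 0]
    if untried:
        return untried[0]                # try every algorithm once first
    if random.random() < epsilon:
        return random.choice(algorithms) # explore
    return max(algorithms, key=lambda a: totals[a] / counts[a])  # exploit

def update(algorithm, reward):
    totals[algorithm] += reward
    counts[algorithm] += 1

# simulated online loop: alg_B secretly has the highest mean reward
true_mean = {"alg_A": 0.3, "alg_B": 0.8, "alg_C": 0.5}
for _ in range(200):
    a = choose()
    update(a, random.gauss(true_mean[a], 0.1))

best = max(algorithms, key=lambda a: totals[a] / counts[a])
print(best)  # expected to converge to alg_B
```

Setting ε = 0 recovers the purely exploitative greedy strategy that the abstract reports as the empirical winner in this setting.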