Algorithm runtime prediction: Methods & evaluation. Artificial Intelligence, 2014
Cited by 17 (6 self)
Abstract. Perhaps surprisingly, it is possible to predict how long an algorithm will take to run on a previously unseen input, using machine learning techniques to build a model of the algorithm's runtime as a function of problem-specific instance features. Such models have many important applications, and over the past decade a wide variety of techniques have been studied for building them. In this extended abstract of our 2014 AI Journal article of the same title, we summarize existing models and describe new model families and various extensions. In a comprehensive empirical analysis using 11 algorithms and 35 instance distributions spanning a wide range of hard combinatorial problems, we demonstrate that our new models yield substantially better runtime predictions than previous approaches in terms of their generalization to new problem instances, to new algorithms from a parameterized space, and to both simultaneously.
Introduction. NP-complete problems are ubiquitous in AI. Luckily, while these problems may be hard to solve on worst-case inputs, it is often feasible to solve even large problem instances that arise in practice. Less luckily, state-of-the-art algorithms often exhibit extreme runtime variation across instances from realistic distributions, even when problem size is held constant; conversely, the same instance can take dramatically different amounts of time to solve depending on the algorithm used.
• Algorithm selection. This classic problem of selecting the best from a given set of algorithms on a per-instance …
* This paper is an invited extended abstract of our 2014 AI Journal article.
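The core idea above, learning a map from instance features to runtime, can be sketched with a toy instance-based regressor. Everything here (the features, the data, and the k-nearest-neighbour model) is a hypothetical stand-in for the model families the paper actually studies:

```python
import math

def predict_runtime(train, instance, k=3):
    # Predict the runtime of an unseen instance as the mean runtime of its
    # k nearest training instances in feature space -- a simple
    # instance-based stand-in for a learned regression model.
    by_dist = sorted(train, key=lambda fr: math.dist(fr[0], instance))
    return sum(runtime for _, runtime in by_dist[:k]) / k

# Hypothetical training data: (feature vector, observed runtime in seconds).
# Features might be e.g. clause/variable ratio and variable count for SAT.
train = [((4.2, 100.0), 0.5), ((4.3, 110.0), 0.7),
         ((5.0, 500.0), 30.0), ((5.1, 520.0), 34.0)]

print(predict_runtime(train, (4.25, 105.0), k=2))  # near the two easy instances
```

Real systems replace the k-NN lookup with random forests or other regression models, and typically predict log runtime to cope with the extreme runtime variation mentioned above.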
LLAMA: leveraging learning to automatically manage algorithms, 2013
Cited by 6 (3 self)
Algorithm portfolio and selection approaches have achieved remarkable improvements over single solvers. However, the implementation of such systems is often highly customised and specific to the problem domain, which makes it difficult for researchers to explore different techniques for their specific problems. We present LLAMA, a modular and extensible toolkit, implemented as an R package, that facilitates the exploration of a range of different portfolio techniques on any problem domain. It implements the algorithm selection approaches most commonly used in the literature and leverages the extensive library of machine learning algorithms and techniques in R. We describe the current capabilities and limitations of the toolkit and illustrate its usage on a set of example SAT problems. This document corresponds to LLAMA version 0.6.
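LLAMA itself is an R package; as a language-neutral illustration of the kind of selection approach it wraps, here is a minimal Python sketch that labels each training instance with its best solver and then selects by nearest neighbour. All data and solver names are hypothetical:

```python
import math

def build_selector(training_data):
    # training_data: list of (feature_vector, {solver_name: runtime}) pairs.
    # Label each instance with its per-instance best solver, then select for
    # a new instance via nearest-neighbour lookup in feature space.
    labelled = [(feats, min(times, key=times.get)) for feats, times in training_data]
    def select(features):
        return min(labelled, key=lambda fl: math.dist(fl[0], features))[1]
    return select

# Hypothetical data: two feature regions favouring different solvers.
data = [((4.2, 0.1), {"solverA": 1.0, "solverB": 9.0}),
        ((9.8, 0.9), {"solverA": 8.0, "solverB": 0.5})]
select = build_selector(data)
print(select((4.0, 0.2)))  # -> solverA
```

LLAMA's contribution is making exactly this kind of pipeline (featurization, model, selection rule) modular, so each piece can be swapped for any model available in R.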
Proteus: A Hierarchical Portfolio of Solvers and Transformations
Cited by 4 (2 self)
Abstract. In recent years, portfolio approaches to solving SAT problems and CSPs have become increasingly common. There are also a number of different encodings for representing CSPs as SAT instances. In this paper, we leverage advances in both SAT and CSP solving to present a novel hierarchical portfolio-based approach to CSP solving, which we call Proteus, that does not rely purely on CSP solvers. Instead, it may decide that it is best to encode a CSP problem instance into SAT, selecting an appropriate encoding and a corresponding SAT solver. Our experimental evaluation used an instance of Proteus that involved four CSP solvers, three SAT encodings, and six SAT solvers, evaluated on the most challenging problem instances from the CSP solver competitions, involving global and intensional constraints. We show that Proteus achieves significant performance improvements by exploiting alternative viewpoints and solvers for combinatorial problem-solving.
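The hierarchical decision described above (solve the CSP directly, or pick a SAT encoding plus a SAT solver) can be sketched as choosing the globally cheapest option across both levels. The solver names and predicted runtimes below are invented for illustration, not Proteus's actual configuration:

```python
def proteus_choice(predicted):
    # predicted: nested dict of per-instance predicted runtimes, e.g.
    # {"csp": {csp_solver: t}, "sat": {(encoding, sat_solver): t}}.
    # Flatten the hierarchy and take the option with the lowest prediction.
    options = [((approach, choice), t)
               for approach, table in predicted.items()
               for choice, t in table.items()]
    return min(options, key=lambda opt: opt[1])[0]

predicted = {"csp": {"csp_solver_1": 12.0, "csp_solver_2": 30.0},
             "sat": {("direct", "sat_solver_1"): 5.0,
                     ("support", "sat_solver_2"): 40.0}}
print(proteus_choice(predicted))  # encoding to SAT wins in this toy case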
Supervised Learning to Control Energetic Reasoning: Feasibility Study
Cited by 1 (1 self)
Abstract. Propagation is a double-edged sword, with more pruning power coming at the price of larger computation time. For each problem constraint, the best propagator depends on the specific instance and may change during search. We propose to use an oracle function, obtained via machine learning, to decide whether to run complex propagators for a target constraint. In this paper, we focus on investigating the feasibility of building an oracle for the Energetic Reasoning propagator used in scheduling. Our experiments show that high prediction accuracy can be obtained, provide suggestions for classification features, and highlight important issues to address when building such an oracle. Propagation is a double-edged sword: more powerful filtering algorithms provide an increased chance to prune values, but they also have larger computation time, which must be paid regardless of whether additional propagation is actually achieved. The cumulative constraint is widely employed to model resource restrictions …
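The oracle the authors propose is essentially a learned binary classifier gating the expensive propagator. A minimal sketch with a hypothetical linear scorer follows; the weights are invented, not the paper's model:

```python
def oracle(features, weights, bias):
    # Learned gate (hypothetical linear model): True means the expensive
    # Energetic Reasoning propagator is predicted to achieve extra pruning.
    return sum(w * f for w, f in zip(weights, features)) + bias > 0.0

def propagate(constraint_features):
    run_cheap_filter = True  # cheap filtering always runs
    run_energetic_reasoning = oracle(constraint_features,
                                     weights=(1.0, -0.5), bias=-0.2)
    return run_cheap_filter, run_energetic_reasoning

print(propagate((1.0, 0.5)))   # gate open: run ER too
print(propagate((0.1, 0.5)))   # gate closed: ER skipped
```

The interesting trade-off is asymmetric costs: a false positive wastes one propagator call, while a false negative can forfeit pruning that would have cut off a large subtree.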
From sequential algorithm selection to parallel portfolio selection. In Dhaenens, 2015
Cited by 1 (1 self)
Abstract. In view of the increasing importance of hardware parallelism, a natural extension of per-instance algorithm selection is to select a set of algorithms to be run in parallel on a given problem instance, based on features of that instance. Here, we explore how existing algorithm selection techniques can be effectively parallelized. To this end, we leverage the machine learning models used by existing sequential algorithm selectors, such as 3S, ISAC, SATzilla and ME-ASP, and modify their selection procedures to produce a ranking of the given candidate algorithms; we then select the top n algorithms under this ranking to be run in parallel on n processing units. Furthermore, we adapt the presolving schedules obtained by aspeed to be effective in a parallel setting with different time budgets for each processing unit. Our empirical results demonstrate that, using 4 processing units, the best of our methods achieves a 12-fold average speedup over the best single solver on a broad set of challenging scenarios from the algorithm selection library.
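The ranking-based parallelization reads almost as a one-liner: score the candidates as a sequential selector would, then run the top n instead of only the top one. The algorithm names and predicted runtimes below are hypothetical:

```python
def parallel_portfolio(predicted_runtime, n_units):
    # Rank candidate algorithms by predicted runtime and assign the top
    # n_units of the ranking to the available processing units; the
    # portfolio finishes as soon as the fastest chosen algorithm does.
    ranking = sorted(predicted_runtime, key=predicted_runtime.get)
    return ranking[:n_units]

predicted = {"A": 50.0, "B": 10.0, "C": 35.0, "D": 90.0, "E": 20.0}
print(parallel_portfolio(predicted, 4))  # ['B', 'E', 'C', 'A']
```

Running several highly ranked algorithms hedges against the selector's mistakes: the parallel portfolio's runtime is the minimum over the chosen set, so a mis-ranked best solver still gets a unit as long as it lands in the top n.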
Algorithm portfolios for noisy optimization: Compare solvers early. In Proceedings of LION 8, 2014
Cited by 1 (1 self)
Abstract. Noisy optimization is the optimization of objective functions corrupted by noise. A portfolio of algorithms is a set of algorithms equipped with an algorithm selection tool for distributing the computational power among them. We study portfolios of noisy optimization solvers, show that different settings lead to different performances, and obtain mathematically proven performance (in the sense that the portfolio performs nearly as well as the best of its algorithms) with an ad hoc selection algorithm dedicated to noisy optimization. A somewhat surprising result is that it is better to compare solvers with some lag; i.e., to recommend the current recommendation of the best solver, selected from a comparison based on their recommendations earlier in the run.
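The "compare with lag" rule can be made concrete: pick the solver whose recommendation from `lag` iterations ago scores best (an older, more settled comparison point), then return that solver's current recommendation. The histories and objective below are toy stand-ins:

```python
def lagged_recommend(histories, objective, lag):
    # histories: {solver: [recommended points, one per iteration]}.
    # Compare solvers on the point each recommended `lag` iterations ago,
    # then return the *current* recommendation of the winner.
    best = min(histories, key=lambda s: objective(histories[s][-1 - lag]))
    return histories[best][-1]

f = lambda x: x * x  # toy noise-free objective, just for illustration
hist = {"solver1": [3.0, 2.0, 1.0, 0.5],
        "solver2": [4.0, 3.0, 2.5, 2.0]}
print(lagged_recommend(hist, f, lag=2))  # solver1 wins at lag 2 -> 0.5
```

In the noisy setting the comparison at the lagged point would itself be a noisy estimate; the paper's point is that comparing at an earlier, better-resolved point beats comparing the very latest recommendations.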
Development of a Resource Manager Framework for Adaptive Beamformer Selection, 2013
Adaptive digital beamforming (DBF) algorithms are designed to mitigate the effects of interference and noise in a radio frequency (RF) environment encountered by modern electronic support (ES) receivers. Traditionally, an ES receiver employs a single adaptive DBF algorithm that is part of the design of the receiver system. If the ES receiver is designed to form multiple independent beams, the same adaptive DBF algorithm is applied in each beam. Traditional receiver design is effective and works when system processing power is limited. Modern computer technology allows improvements over traditional receiver design, where a receiver is able to change the implemented algorithm based upon system usage. This dissertation provides a new ES receiver framework that attempts to make better use of the available computing resources by adaptively selecting the most efficient DBF algorithm for each beam that is able to meet system requirements. The framework contains a resource manager (RM) that facilitates adaptive algorithm selection through the use of a lookup table (LUT). The RM estimates parameters of the RF environment …
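A hypothetical sketch of the RM's LUT-based selection step: quantise the estimated environment parameters and look up an algorithm per beam. The table keys, quantisation, and algorithm names are invented for illustration; the dissertation's actual LUT design is more involved:

```python
def select_dbf(lut, env_params, default):
    # Quantise the estimated RF-environment parameters (e.g. interferer
    # count, SNR bucket) and look up the cheapest DBF algorithm that meets
    # the beam's requirements; fall back to a default for unseen conditions.
    key = tuple(round(p) for p in env_params)
    return lut.get(key, default)

lut = {(1, 10): "beamformer_cheap", (3, 10): "beamformer_robust"}
print(select_dbf(lut, (1.2, 10.4), default="beamformer_default"))
```

The LUT makes the per-beam decision cheap enough to repeat as the environment estimate changes, which is the point of the framework.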
Reinforcement Learning for Automatic Online Algorithm Selection: An Empirical Study
Abstract: In this paper a reinforcement learning methodology for automatic online algorithm selection is introduced and empirically tested. It is applicable to automatic algorithm selection methods that predict the performance of each available algorithm and then pick the best one. The experiments confirm the usefulness of the methodology: using online data results in better performance. As in many online learning settings, an exploration-vs.-exploitation trade-off (synonymously, a learning-vs.-earning trade-off) is incurred. Empirically investigating the quality of classic solution strategies for handling this trade-off in the automatic online algorithm selection setting is the secondary goal of this paper. The automatic online algorithm selection problem can be modelled as a contextual multi-armed bandit problem. Two classic strategies for solving this problem are tested in the context of automatic online algorithm selection: ε-greedy and lower confidence bound. The experiments show that a simple purely exploitative greedy strategy outperforms strategies explicitly performing exploration.
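The selection step of the ε-greedy strategy mentioned above is simple to sketch; the predicted performances and the seeded RNG below are illustrative only:

```python
import random

def epsilon_greedy_select(predicted_perf, epsilon, rng):
    # With probability epsilon explore a uniformly random algorithm,
    # otherwise exploit the one with the best current prediction.
    # predicted_perf: {algorithm: predicted performance, higher is better}.
    if rng.random() < epsilon:
        return rng.choice(sorted(predicted_perf))
    return max(predicted_perf, key=predicted_perf.get)

perf = {"algoA": 0.7, "algoB": 0.9, "algoC": 0.4}
print(epsilon_greedy_select(perf, epsilon=0.0, rng=random.Random(0)))  # algoB
```

Note that ε = 0 yields exactly the purely exploitative greedy strategy that, per the abstract, outperformed the exploring variants in these experiments.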
Algorithm Selection for the Graph Coloring Problem
Abstract. We present an automated algorithm selection method based on machine learning for the graph coloring problem (GCP). For this purpose, we identify 78 features for this problem and evaluate the performance of six state-of-the-art (meta)heuristics for the GCP. We use the obtained data to train several classification algorithms that are applied to predict, for a new instance, the algorithm with the highest expected performance. To achieve better performance for the machine learning algorithms, we investigate the impact of parameters, and evaluate different data discretization and feature selection methods. Finally, we evaluate our approach, which exploits the existing GCP techniques and the automated algorithm selection, and compare it with existing heuristic algorithms. Experimental results show that the GCP solver based on machine learning outperforms previous methods on benchmark instances.
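The training-data construction implied above, labelling each instance with its best-performing heuristic, is straightforward to sketch. Instance names, heuristics, and scores here are illustrative only:

```python
def label_best_heuristic(results):
    # results: {instance: {heuristic: score}}, lower score is better
    # (e.g. colours used, with runtime folded in as a tie-break).
    # Each instance is labelled with its best heuristic; a classifier is
    # then trained on (instance features, label) pairs.
    return {inst: min(scores, key=scores.get) for inst, scores in results.items()}

results = {"instance1": {"heuristicA": 5.0, "heuristicB": 6.0},
           "instance2": {"heuristicA": 31.0, "heuristicB": 28.0}}
print(label_best_heuristic(results))
```

The 78 instance features then serve as the classifier's input, and the discretization and feature-selection steps mentioned above operate on exactly this (features, label) dataset.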