Results 1–10 of 10
Using CBR to select solution strategies in constraint programming
In ICCBR, 2005
"... Abstract. Constraint programming is a powerful paradigm that offers many different strategies for solving problems. Choosing a good strategy is difficult; choosing a poor strategy wastes resources and may result in a problem going unsolved. We show how CaseBased Reasoning can be used to select good ..."
Abstract

Cited by 24 (0 self)
Abstract. Constraint programming is a powerful paradigm that offers many different strategies for solving problems. Choosing a good strategy is difficult; choosing a poor strategy wastes resources and may result in a problem going unsolved. We show how Case-Based Reasoning can be used to select good strategies. We design experiments which demonstrate that, on two problems with quite different characteristics, CBR can outperform four other strategy selection techniques.
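The core CBR mechanism the abstract describes — retrieve the most similar previously solved instance and reuse the strategy that worked for it — can be sketched as a nearest-neighbour lookup. The feature vectors, distance measure, and strategy names below are illustrative assumptions, not taken from the paper:

```python
import math

# Hypothetical case base: (problem-feature vector, strategy that worked well).
# Features and strategy names are invented for illustration.
case_base = [
    ((0.9, 120.0, 0.3), "maintain-arc-consistency + dom/wdeg"),
    ((0.2, 15.0, 0.8), "forward-checking + lexicographic"),
    ((0.5, 60.0, 0.5), "maintain-arc-consistency + min-domain"),
]

def euclidean(a, b):
    """Distance between two problem-feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_strategy(features):
    """1-nearest-neighbour retrieval: reuse the strategy of the most
    similar previously solved instance."""
    nearest = min(case_base, key=lambda case: euclidean(case[0], features))
    return nearest[1]

print(select_strategy((0.85, 110.0, 0.35)))
```

A real system would normalise the features (here the second dimension dominates the distance) and adapt the retrieved strategy rather than reuse it verbatim.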
Input Feature Approximation Using Instance Sampling
"... Features (or properties) of problem inputs can provide information not only for classifying input but also for selecting the right algorithm for a particular instance. Using these input properties can help close the gap between problem complexity and algorithm efficiency, but enumerating the feature ..."
Abstract
Features (or properties) of problem inputs can provide information not only for classifying input but also for selecting the right algorithm for a particular instance. Using these input properties can help close the gap between problem complexity and algorithm efficiency, but enumerating the features is often at least as costly as solving the problem itself. An approximation of such features can be useful, though. This work defines the notion of group input properties and proposes an efficient solution for their approximation through input sampling. Using common statistical techniques, we show that samples of inputs for sorting and graph problems retain the general properties of the inputs themselves.
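The idea that a small random sample can approximate a group property of the whole input can be sketched for sorting. The property used here (fraction of out-of-order pairs) is one plausible example, not necessarily the paper's definition:

```python
import random

def inversion_fraction(seq):
    """Fraction of element pairs that are out of order -- a 'group'
    property of the whole input (O(n^2); fine for a demo)."""
    n = len(seq)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if seq[i] > seq[j])
    return inv / (n * (n - 1) / 2)

def sampled_fraction(seq, k, rng):
    """Approximate the same property on an order-preserving random sample."""
    idx = sorted(rng.sample(range(len(seq)), k))
    return inversion_fraction([seq[i] for i in idx])

rng = random.Random(0)
shuffled = list(range(2000))
rng.shuffle(shuffled)
reversed_ = list(range(2000, 0, -1))
for name, data in [("shuffled", shuffled), ("reversed", reversed_)]:
    print(name, round(inversion_fraction(data), 2),
          round(sampled_fraction(data, 150, rng), 2))
```

The sample's inversion fraction is close to the full input's (about 0.5 for a random permutation, exactly 1.0 for reversed data) at a fraction of the cost.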
A New Class of Nature-Inspired . . .
"... We present, and evaluate benefits of, a design methodology for translating natural phenomena represented as mathematical models, into novel, selfadaptive, peertopeer (p2p) distributed computing algorithms (“protocols”). Concretely, our first contribution is a set of techniques to translate discre ..."
Abstract
We present, and evaluate benefits of, a design methodology for translating natural phenomena represented as mathematical models, into novel, self-adaptive, peer-to-peer (p2p) distributed computing algorithms (“protocols”). Concretely, our first contribution is a set of techniques to translate discrete “sequence equations” (also known as difference equations) into new p2p protocols called “sequence protocols”. Sequence protocols are self-adaptive, scalable, and fault-tolerant, with applicability in p2p settings like Grids. A sequence protocol is a set of probabilistic local and message-passing actions for each process. These actions are translated from terms in a set of source sequence equations. Individual processes do not simulate the source sequence equations completely. Instead, each process executes probabilistic local and message-passing actions, so that the emergent round-to-round behavior of the sequence protocol in a p2p system can be probabilistically predicted by the source sequence equations. The paper’s second contribution is the design and evaluation of a set of sequence protocols for detection of two global triggers in a distributed system: threshold detection and interval detection. This paper’s third contribution is a new self-adaptive Grid computing protocol called “HoneyAdapt”. HoneyAdapt is derived from sequence equations modeling adaptive bee foraging behavior in nature. HoneyAdapt is intended
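The key claim — that emergent round-to-round behavior of probabilistic local actions can be predicted by a source difference equation without any process computing that equation — can be illustrated with a toy simulation. The specific action and equation below are my own minimal example, not a protocol from the paper:

```python
import random

def simulate(n_processes, p, rounds, rng):
    """Each process applies one probabilistic local action per round:
    stay 'active' with probability p. No process knows the sequence
    equation; we only observe the emergent per-round fraction."""
    active = [True] * n_processes
    fractions = []
    for _ in range(rounds):
        active = [a and (rng.random() < p) for a in active]
        fractions.append(sum(active) / n_processes)
    return fractions

# The source sequence equation x_{t+1} = p * x_t (with x_0 = 1)
# predicts the emergent behavior round by round.
rng = random.Random(7)
observed = simulate(100_000, 0.5, 5, rng)
predicted = [0.5 ** (t + 1) for t in range(5)]
for o, q in zip(observed, predicted):
    print(round(o, 3), q)
```

With many processes, the observed fractions track the equation's trajectory closely, which is the probabilistic-prediction property the abstract describes.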
Harnessing Algorithm Bias: A Study of Selection Strategies and Evaluation for Portfolios of Algorithms
, 2006
"... Search algorithms are biased because many factors related to the algorithm design – such as representation, decision points, search control, memory usage, heuristic guidance, and stopping criteria – or the problem instance characteristics impact how a search algorithm performs. The variety of algori ..."
Abstract
Search algorithms are biased because many factors related to the algorithm design – such as representation, decision points, search control, memory usage, heuristic guidance, and stopping criteria – or the problem instance characteristics impact how a search algorithm performs. The variety of algorithms and their parameterized variants makes it difficult to select the most efficient algorithm for a given problem instance. It seems natural to apply learning to the algorithm selection problem of allocating computational resources among a portfolio of algorithms that may have complementary (or competing) search technologies. Such selection is called the portfolio strategy. This research exam studies the state of the art in portfolio strategies by examining five recent papers, listed below, from the AI community. The specific focus of the exam is to identify the key issues in the mechanism and evaluation of the portfolio strategy. The discussion of these papers will also include a summary of each author’s primary findings, the paper’s context within its community, the implications for the specific community where the paper is published, and the implications for the larger AI community. This set of papers is representative, but not exhaustive, of recent work in portfolios. All selected portfolios model the runtime behavior of various algorithms for combinatorial problems. The papers also
Sorting Feature Retention and Algorithm Selection through Input Sampling
"... There are often several algorithms to solve a particular problem, but these solutions can differ in complexity depending on the problem's input. For example, Knuth discusses 25 sorting algorithms that vary in performance depending on the input's features or the hardware's architecture ..."
Abstract
There are often several algorithms to solve a particular problem, but these solutions can differ in complexity depending on the problem's input. For example, Knuth discusses 25 sorting algorithms that vary in performance depending on the input's features or the hardware's architecture. Various features of the input determine the efficiency of a given algorithm, but these characteristics are often either unknown or more costly to enumerate than solving the original problem. Randomly sampling the input, though, can yield a much smaller problem that retains the general features of the much larger input, on which the candidate algorithms can be tested for algorithm selection. We show for the sorting problem that using a small sample of a large input not only retains the features of sortedness but also allows accurate ranking of algorithms for the large input. There is often, in fact, a net gain in performance even when considering the extra time for testing the algorithms on the samples.
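The selection-by-sampling scheme can be sketched end to end: run the candidate sorts on an order-preserving sample and pick the cheapest. Comparison counts stand in for wall-clock time here, and the two candidates (insertion sort, which excels on nearly sorted data, and merge sort, which is robust on random data) are my choice of illustration, not the paper's experimental setup:

```python
import random

def insertion_cost(seq):
    """Comparisons performed by insertion sort (proxy for runtime)."""
    a, cost = list(seq), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            cost += 1
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return cost

def merge_cost(seq):
    """Top-down merge sort; returns (sorted list, comparison count)."""
    if len(seq) <= 1:
        return list(seq), 0
    mid = len(seq) // 2
    left, lc = merge_cost(seq[:mid])
    right, rc = merge_cost(seq[mid:])
    merged, cost, i, j = [], lc + rc, 0, 0
    while i < len(left) and j < len(right):
        cost += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, cost

def select_sort(big_input, k, rng):
    """Rank candidate sorts on an order-preserving sample of the input."""
    idx = sorted(rng.sample(range(len(big_input)), k))
    sample = [big_input[i] for i in idx]
    costs = {"insertion": insertion_cost(sample),
             "merge": merge_cost(sample)[1]}
    return min(costs, key=costs.get)

rng = random.Random(3)
nearly_sorted = [i + rng.choice([-2, -1, 0, 1, 2]) for i in range(50_000)]
shuffled = list(range(50_000))
rng.shuffle(shuffled)
print(select_sort(nearly_sorted, 500, rng), select_sort(shuffled, 500, rng))
```

On the 500-element samples the selector picks insertion sort for the nearly sorted input and merge sort for the shuffled one — the same ranking that holds on the full 50,000-element inputs, at a tiny fraction of the testing cost.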
Algorithm Selection for the Graph Coloring Problem
"... We present an automated algorithm selection method based on machine learning for the graph coloring problem (GCP). For this purpose, we identify 78 features for this problem and evaluate the performance of six stateoftheart (meta)heuristics for the GCP. We use the obtained data to train several ..."
Abstract
We present an automated algorithm selection method based on machine learning for the graph coloring problem (GCP). For this purpose, we identify 78 features for this problem and evaluate the performance of six state-of-the-art (meta)heuristics for the GCP. We use the obtained data to train several classification algorithms that are applied to predict, for a new instance, the algorithm with the highest expected performance. To achieve better performance for the machine learning algorithms, we investigate the impact of parameters, and evaluate different data discretization and feature selection methods. Finally, we evaluate our approach, which exploits the existing GCP techniques and the automated algorithm selection, and compare it with existing heuristic algorithms. Experimental results show that the GCP solver based on machine learning outperforms previous methods on benchmark instances.
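The prediction step — a classifier trained on instance features that outputs the solver expected to perform best — can be sketched with a stdlib-only k-nearest-neighbour classifier. The training data, the two features (graph density and vertex count, a tiny subset of the paper's 78), and the solver labels are synthetic assumptions for illustration:

```python
import math

# Synthetic (features, best-solver) training pairs; labels and numbers
# are invented, not the paper's measured data.
training = [
    ((0.1, 500), "DSATUR"), ((0.15, 800), "DSATUR"), ((0.2, 300), "DSATUR"),
    ((0.7, 400), "TabuCol"), ((0.8, 600), "TabuCol"), ((0.9, 200), "TabuCol"),
]

def predict(features, k=3):
    """k-NN prediction of the best solver for a new GCP instance."""
    def dist(a, b):
        # Scale vertex count so both features contribute comparably.
        return math.hypot(a[0] - b[0], (a[1] - b[1]) / 1000)
    nearest = sorted(training, key=lambda t: dist(t[0], features))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(predict((0.75, 450)))
print(predict((0.12, 650)))
```

A real portfolio would extract the full feature set from the instance, normalise it, and use a trained model (the paper evaluates several classifiers); the mechanism of feature vector in, solver name out is the same.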
Algorithm Selection for Combinatorial Search Problems: A Survey
"... The Algorithm Selection Problem is concerned with selecting the best algorithm to solve a given problem on a casebycase basis. It has become especially relevant in the last decade, as researchers are increasingly investigating how to identify the most suitable existing algorithm for solving a prob ..."
Abstract
The Algorithm Selection Problem is concerned with selecting the best algorithm to solve a given problem on a case-by-case basis. It has become especially relevant in the last decade, as researchers are increasingly investigating how to identify the most suitable existing algorithm for solving a problem instead of developing new algorithms. This survey presents an overview of this work focusing on the contributions made in the area of combinatorial search problems, where Algorithm Selection techniques have achieved significant performance improvements. We unify and organise the vast literature according to criteria that determine Algorithm Selection systems in practice. The comprehensive classification of approaches identifies and analyses the different directions from which Algorithm Selection has been approached. This paper contrasts and compares different methods for solving the problem as well as ways of using these solutions. It closes by identifying directions of current and future research.