Results 11–20 of 57
Parameter adjustment based on performance prediction: Towards an instance-aware problem solver
 In: Technical Report MSR-TR-2005-125, Microsoft Research
, 2005
Cited by 12 (4 self)
Tuning an algorithm’s parameters for robust and high performance is a tedious and time-consuming task that often requires knowledge about both the domain and the algorithm of interest. Furthermore, the optimal parameter configuration to use may differ considerably across problem instances. In this report, we define and tackle the algorithm configuration problem, which is to automatically choose the optimal parameter configuration for a given algorithm on a per-instance basis. We employ an indirect approach that predicts algorithm runtime for the problem instance at hand and each (continuous) parameter configuration, and then simply chooses the configuration that minimizes the prediction. This approach is based on similar work by Leyton-Brown et al. [LBNS02, NLBD+04], who tackle the algorithm selection problem [Ric76] (given a problem instance, choose the best algorithm to solve it). While all previous studies of runtime prediction focused on tree search algorithms, we demonstrate that it is possible to fairly accurately predict the runtime of SAPS [HTH02], one of the best-performing stochastic local search algorithms for SAT. We also show that our approach automatically picks parameter configurations that speed up SAPS by an average factor of more than two when compared to its default parameter configuration. Finally, we introduce sequential Bayesian learning to the problem of runtime prediction, enabling an incremental learning approach and yielding very informative estimates of predictive uncertainty.
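The selection rule the abstract describes can be sketched in a few lines: predict runtime for every candidate configuration on the instance at hand and keep the predicted minimizer. The runtime model below is a hand-written stand-in invented for illustration (the report learns such a model from data), and the (alpha, rho) grid values are likewise only illustrative.

```python
# Per-instance configuration by predicted runtime: evaluate a runtime
# model on every candidate parameter configuration for the given
# instance and keep the predicted minimizer.

def predict_log_runtime(instance_features, config):
    # Stand-in for a learned regression model mapping
    # (instance features, parameter configuration) -> log runtime.
    # This specific formula is invented purely for illustration.
    alpha, rho = config
    hardness = sum(instance_features)
    return hardness * (1.0 + (alpha - 1.3) ** 2 + (rho - 0.8) ** 2)

def choose_config(instance_features, candidate_configs):
    """Pick the configuration with minimal predicted runtime."""
    return min(candidate_configs,
               key=lambda c: predict_log_runtime(instance_features, c))

# SAPS-like (alpha, rho) grid; the grid values are illustrative only.
candidates = [(a / 10, r / 10) for a in range(10, 20) for r in range(5, 11)]
best = choose_config([0.4, 0.9], candidates)  # -> (1.3, 0.8)
```

The point of the indirect approach is visible here: once a runtime model exists, configuration reduces to a cheap argmin over candidates, with no extra algorithm runs per instance.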
Discriminative Learning of Beam-Search Heuristics for Planning
 In: Proceedings of the International Joint Conference on Artificial Intelligence
, 2007
Cited by 9 (2 self)
We consider the problem of learning heuristics for controlling forward state-space beam search in AI planning domains. We draw on a recent framework for “structured output classification” (e.g. syntactic parsing) known as learning as search optimization (LaSO). The LaSO approach uses discriminative learning to optimize heuristic functions for search-based computation of structured outputs and has shown promising results in a number of domains. However, the search problems that arise in AI planning tend to be qualitatively very different from those considered in structured classification, which raises a number of potential difficulties in directly applying LaSO to planning. In this paper, we discuss these issues and describe a LaSO-based approach for discriminative learning of beam-search heuristics in AI planning domains. We give convergence results for this approach and present experiments in several benchmark domains. The results show that the discriminatively trained heuristic can outperform the one used by the planner FF and another recent non-discriminative learning approach.
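The core LaSO idea can be sketched as a perceptron-style weight update, applied whenever beam search loses every node on the target solution path: move the heuristic weights toward the target node's features and away from the beam average. The feature vectors and learning rate below are illustrative, not taken from the paper.

```python
def laso_update(weights, beam_features, target_features, lr=0.1):
    """Perceptron-style LaSO update: when no node on the target solution
    path survives in the beam, move the heuristic weights toward the
    target node's features and away from the average beam features."""
    avg = [sum(col) / len(beam_features) for col in zip(*beam_features)]
    return [w + lr * (t - a)
            for w, t, a in zip(weights, target_features, avg)]

# A beam holding two off-path nodes versus the desired on-path node:
w = laso_update([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0])
```

After the update, the on-path node scores higher under the learned heuristic, so it is more likely to survive pruning on the next search attempt.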
Learning an Approximation to Inductive Logic Programming Clause Evaluation
 In Proceedings of the 14th international
, 2004
Cited by 8 (1 self)
One challenge faced by many Inductive Logic Programming (ILP) systems is poor scalability to problems with large search spaces and many examples. Randomized search methods such as stochastic clause selection (SCS) and rapid random restarts (RRR) have proven somewhat successful at addressing this weakness. However, on datasets where hypothesis evaluation is computationally expensive, even these algorithms may take unreasonably long to discover a good solution. We attempt to improve the performance of these algorithms on such datasets by learning an approximation to ILP hypothesis evaluation. We generate a small set of hypotheses, uniformly sampled from the space of candidate hypotheses, and evaluate this set on actual data. These hypotheses and their corresponding evaluation scores serve as training data for learning an approximate hypothesis evaluator. We outline three techniques that make use of the trained evaluation-function approximator in order to reduce the computation required during an ILP hypothesis search. We test our approximate clause evaluation algorithm using the popular ILP system Aleph.
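The sample-then-approximate scheme can be sketched as fitting a cheap surrogate to a small set of truly evaluated hypotheses and then using it to screen candidates before paying for true evaluation. A nearest-neighbour table over numeric feature tuples stands in here for the learned approximator; the encoding and scoring function are invented for illustration.

```python
def train_surrogate(sampled, true_score):
    """Fit a cheap surrogate from a small uniformly sampled set of
    hypotheses (encoded here as numeric feature tuples) to their true
    evaluation scores; 1-nearest-neighbour lookup stands in for the
    learned evaluation-function approximator."""
    table = {h: true_score(h) for h in sampled}

    def approx(h):
        nearest = min(table, key=lambda s: sum((a - b) ** 2
                                               for a, b in zip(s, h)))
        return table[nearest]
    return approx

def screen(candidates, approx, keep):
    """Keep only the `keep` candidates with the best (lowest)
    approximate score; only these need true, expensive evaluation."""
    return sorted(candidates, key=approx)[:keep]

# Toy demo: true score is expensive in reality, cheap here.
surrogate = train_surrogate([(0, 0), (3, 3), (5, 5)],
                            lambda h: sum(x * x for x in h))
shortlist = screen([(5, 5), (0, 1), (4, 4)], surrogate, keep=1)
```

The saving comes from the ratio: many candidates pass through the cheap surrogate, while only the shortlist reaches the expensive evaluator.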
Feature selection methods for improving protein structure prediction with Rosetta
 In: Advances in Neural Information Processing Systems (NIPS)
, 2007
Cited by 7 (3 self)
Rosetta is one of the leading algorithms for protein structure prediction today. It is a Monte Carlo energy minimization method requiring many random restarts to find structures with low energy. In this paper we present a resampling technique for structure prediction of small alpha/beta proteins using Rosetta. From an initial round of Rosetta sampling, we learn properties of the energy landscape that guide a subsequent round of sampling toward lower-energy structures. Rather than attempt to fit the full energy landscape, we use feature selection methods—both L1-regularized linear regression and decision trees—to identify structural features that give rise to low energy. We then enrich these structural features in the second sampling round. Results are presented across a benchmark set of nine small alpha/beta proteins demonstrating that our methods seldom impair, and frequently improve, Rosetta’s performance.
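The feature-selection step can be sketched with a simple correlation filter standing in for the L1-regularized regression and decision trees the abstract names: rank binary structural features by how strongly their presence predicts low energy, and keep the most predictive ones for enrichment. The data below is a toy example, not Rosetta output.

```python
def low_energy_features(X, energies, k):
    """Rank binary structural features (columns of X) by Pearson
    correlation with energy and return the k features most associated
    with LOW energy (most negative correlation). A plain correlation
    filter is used here as a simple stand-in for the paper's
    L1-regularized regression / decision trees."""
    n = len(energies)
    mean_e = sum(energies) / n
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mean_c = sum(col) / n
        cov = sum((c - mean_c) * (e - mean_e) for c, e in zip(col, energies))
        sd_c = sum((c - mean_c) ** 2 for c in col) ** 0.5
        sd_e = sum((e - mean_e) ** 2 for e in energies) ** 0.5
        corr = cov / (sd_c * sd_e) if sd_c and sd_e else 0.0
        scores.append((corr, j))
    return [j for corr, j in sorted(scores)[:k]]

# Feature 0 is present exactly in the low-energy samples;
# feature 1 is uninformative.
picked = low_energy_features([[1, 0], [1, 1], [0, 0], [0, 1]],
                             [1.0, 1.0, 5.0, 5.0], 1)
```

Sampling in the second round then biases toward structures exhibiting the selected features, rather than trying to model the whole landscape.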
A memory-based RASH optimizer
 In: AAAI-06 Workshop on Heuristic Search, Memory Based Heuristics and Their Applications
, 2006
Cited by 7 (2 self)
This paper presents a memory-based Reactive Affine Shaker (MRASH) algorithm for global optimization. The Reactive Affine Shaker is an adaptive search algorithm based only on the function values. MRASH is an extension of RASH in which good starting points for RASH are suggested online by using Bayesian Locally Weighted Regression (BLWR). Both techniques use memory of the previous history of the search to guide future exploration, but in very different ways. RASH compiles the previous experience into a local search area where sample points are drawn, while locally weighted regression saves the entire previous history, to be mined extensively when an additional sample point is generated. Because of the high computational cost of the BLWR model, it is applied only to evaluate the potential of an initial point for a local search run. The experimental results, focused on the case in which the dominant computational cost is the evaluation of the target function f, show that MRASH is indeed capable of reaching good results with a smaller number of function evaluations.
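The start-point screening step can be sketched with a Gaussian-kernel weighted mean over the search history, used here as a simplified, non-Bayesian stand-in for BLWR: estimate the objective at each candidate start and launch the local search from the most promising one. The history, candidates, and bandwidth below are illustrative.

```python
import math

def lwr_estimate(history, x, bandwidth=1.0):
    """Locally weighted estimate of the objective at x: a Gaussian-kernel
    weighted mean over the stored history of (point, value) evaluations.
    Simplified stand-in for the Bayesian locally weighted regression
    used by MRASH (which also yields predictive uncertainty)."""
    pairs = [(math.exp(-sum((a - b) ** 2 for a, b in zip(p, x))
                       / (2.0 * bandwidth ** 2)), v)
             for p, v in history]
    total = sum(w for w, _ in pairs)
    return sum(w * v for w, v in pairs) / total

def best_start(history, candidates):
    """Pick the candidate start point with the lowest estimated value."""
    return min(candidates, key=lambda x: lwr_estimate(history, x))

# History says the region near 0 is good; the region near 10 is poor.
history = [((0.0,), 1.0), ((10.0,), 5.0)]
start = best_start(history, [(0.1,), (9.9,)])
```

Applying the model only to candidate start points, as the abstract notes, keeps the expensive regression out of the inner local-search loop.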
Generating SAT Local-Search Heuristics Using a GP Hyper-Heuristic Framework
Cited by 5 (0 self)
We present GPHH, a framework for evolving local-search 3-SAT heuristics based on GP. The aim is to obtain “disposable” heuristics which are evolved and used for a specific subset of instances of a problem. We test the heuristics evolved by GPHH against well-known local-search heuristics on a variety of benchmark SAT problems. Results are very encouraging.
A framework for online adaptive control of problem solving
 In: Proc. of the CP-2001 Workshop on On-Line Combinatorial Problem Solving and Constraint Programming, Paphos
, 2001
Cited by 5 (1 self)
The design of a problem solver for a particular problem depends on the problem type, the system resources, and the application requirements, as well as the specific problem instance. The difficulty of matching a solver to a problem can be ameliorated through online adaptive control of solving. In this approach, the choice of solver or problem representation and its parameters is made according to the problem structure, environment models, and dynamic performance information, and the rules or model underlying this decision are adapted dynamically. This paper presents a general framework for the adaptive control of solving and discusses the relationship of this framework both to adaptive techniques in control theory and to the existing adaptive-solving literature. Experimental examples are presented to illustrate the possible uses of solver control.
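One concrete instance of such online control is solver selection driven by accumulating performance feedback. The epsilon-greedy loop below is an illustrative reduction of the idea to its simplest form, not the paper's framework (which also adapts representations, parameters, and the decision rules themselves); the solver names and reward function are invented.

```python
import random

def adaptive_select(solvers, reward, rounds=200, eps=0.1, seed=0):
    """Epsilon-greedy online solver selection: mostly run the solver
    with the best observed mean performance, occasionally explore,
    and return the best solver after all rounds."""
    rng = random.Random(seed)
    totals = {s: 0.0 for s in solvers}
    counts = {s: 0 for s in solvers}
    for _ in range(rounds):
        if rng.random() < eps:
            s = rng.choice(solvers)  # explore
        else:                        # exploit (untried solvers first)
            s = max(solvers, key=lambda c: (totals[c] / counts[c]
                                            if counts[c] else float("inf")))
        totals[s] += reward(s)
        counts[s] += 1
    return max(solvers, key=lambda c: totals[c] / max(counts[c], 1))

# Hypothetical performance signal: solver "b" consistently does better.
chosen = adaptive_select(["a", "b"], lambda s: 1.0 if s == "b" else 0.2)
```

The framework in the paper generalizes this loop: the "reward" becomes dynamic performance information, and the decision rule itself is what gets adapted.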
Reactive Search Optimization: Learning while Optimizing
Cited by 5 (2 self)
The final purpose of Reactive Search Optimization (RSO) is to simplify life for the final user of optimization. While researchers enjoy designing algorithms, testing alternatives, tuning parameters and choosing solution schemes — in fact this is part of their daily life — the final users’ interests are different: solving a problem in the
Local Search Methods
, 2006
Cited by 4 (0 self)
Local search is one of the fundamental paradigms for solving computationally hard combinatorial problems, including the constraint satisfaction problem (CSP). It provides the basis for some of the most successful and versatile methods for solving the large and difficult problem instances encountered in many real-life applications. Despite impressive advances in systematic, complete search algorithms, local search methods in many cases represent the only feasible way for solving these large and complex instances. Local search algorithms are also naturally suited for dealing with the optimisation criteria arising in many practical applications. The basic idea underlying local search is to start with a randomly or heuristically generated candidate solution of a given problem instance, which may be infeasible, suboptimal or incomplete, and to iteratively improve this candidate solution by means of typically minor modifications. Different local search methods vary in the way in which improvements are achieved, and in particular, in the way in which situations are handled in which no direct improvement is possible. Most local search methods use randomisation to ensure that the search process does not
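The basic scheme just described can be sketched as a GSAT-style best-improvement local search for SAT: start from a candidate assignment and repeatedly flip the variable whose flip leaves the fewest unsatisfied clauses. This is a deterministic illustration only; as the text notes, practical methods add randomisation (and restarts) to escape local minima.

```python
def num_unsat(clauses, a):
    """Count clauses with no satisfied literal; literal i means
    variable i is True, -i means variable i is False (1-indexed)."""
    return sum(not any(a[abs(l) - 1] == (l > 0) for l in cl)
               for cl in clauses)

def local_search_sat(clauses, assignment, max_flips=100):
    """Best-improvement local search (GSAT-style): repeatedly flip the
    single variable whose flip minimizes the number of unsatisfied
    clauses, stopping at a satisfying assignment or the flip budget."""
    a = list(assignment)
    for _ in range(max_flips):
        if num_unsat(clauses, a) == 0:
            return a

        def after_flip(i):
            a[i] = not a[i]
            u = num_unsat(clauses, a)
            a[i] = not a[i]
            return u

        best = min(range(len(a)), key=after_flip)
        a[best] = not a[best]
    return a

# Tiny formula: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = local_search_sat([[1, 2], [-1, 3], [-2, -3]],
                         [False, False, False])
```

The two design axes the passage mentions map directly onto this sketch: how an improving move is chosen (here, greedy best-improvement) and what happens when no flip improves (here, nothing; real methods randomise or restart).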
Mechanism Design for Computationally Limited Agents
, 2004
Cited by 3 (0 self)
Design, Bounded Rationality, Resource Bounded Reasoning. First of all I would like to thank my adviser Tuomas Sandholm for all his support and patience during my years as a graduate student. He has provided me with a model of what a first-class researcher should be. Second, I would like to thank the members of my thesis committee, Avrim Blum, Andrew Moore, Craig Boutilier, and Mark Satterthwaite, for giving me valuable criticism, insight and guidance, which helped shape the content and presentation of this dissertation. There are many people who have played significant roles in my graduate career. I would particularly like to thank Sherry May for suggesting I try out both graduate school and computer science. Without her initial encouragement I would not be where I am today. I have enjoyed many hours discussing research problems with a large group of people. These discussions introduced me to new ideas and helped clarify various technical points. In particular, discussions with Vince Conitzer, Andrew Gilpin, Anton Likhodedov, Benoit Hudson, Shuchi Chawla, Jason Hartline, Marty Zinkevich, Paolo Santi, Felix Brandt and David Parkes have proved to be invaluable.