Results 1–5 of 5
Towards distributed algorithm portfolios
In DCAI 2008 – International Symposium on Distributed Computing and Artificial Intelligence, Advances in Soft Computing, 2008
Abstract

Cited by 2 (2 self)
Summary. In recent work we have developed an online algorithm selection technique, in which a model of algorithm performance is learned incrementally while being used. The resulting exploration-exploitation tradeoff is solved as a bandit problem. The candidate solvers are run in parallel on a single machine, as an algorithm portfolio, and computation time is shared among them according to their expected performances. In this paper, we extend our technique to the more interesting and practical case of multiple CPUs.
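The time-sharing scheme this abstract describes can be sketched as a bandit over solvers: computation is split into equal slices, and each slice is given to the solver a bandit policy selects from its observed performance. The sketch below uses plain UCB1; the function names and the per-slice reward signal are illustrative assumptions, not the authors' implementation.

```python
import math

def ucb1_portfolio(solvers, total_slices, slice_reward):
    """Share time slices among candidate solvers using UCB1.

    solvers      : list of solver identifiers (the bandit arms)
    total_slices : number of equal time slices to allocate
    slice_reward : slice_reward(solver) -> reward in [0, 1], e.g. some
                   measure of progress the solver made during its slice
    Returns how many slices each solver received.
    """
    counts = {s: 0 for s in solvers}   # slices given to each solver
    sums = {s: 0.0 for s in solvers}   # cumulative observed reward
    for t in range(1, total_slices + 1):
        if t <= len(solvers):
            arm = solvers[t - 1]       # initialise: run each solver once
        else:
            # UCB1 score: empirical mean plus an exploration bonus
            arm = max(solvers, key=lambda s: sums[s] / counts[s]
                      + math.sqrt(2.0 * math.log(t) / counts[s]))
        counts[arm] += 1
        sums[arm] += slice_reward(arm)
    return counts
```

With a stationary reward, the allocation concentrates on the solver with the highest expected per-slice progress while still probing the others occasionally.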
Efficient multistart strategies for local search algorithms
, 2009
Abstract

Cited by 1 (0 self)
Local search algorithms applied to optimization problems often suffer from getting trapped in a local optimum. The common solution for this deficiency is to restart the algorithm when no progress is observed. Alternatively, one can start multiple instances of a local search algorithm, and allocate computational resources (in particular, processing time) to the instances depending on their behavior. Hence, a multistart strategy has to decide (dynamically) when to allocate additional resources to a particular instance and when to start new instances. In this paper we propose multistart strategies motivated by works on multi-armed bandit problems and Lipschitz optimization with an unknown constant. The strategies continuously estimate the potential performance of each algorithm instance by supposing a convergence rate of the local search algorithm up to an unknown constant, and in every phase allocate resources to those instances that could converge to the optimum for a particular range of the constant. Asymptotic bounds are given on the performance of the strategies. In particular, we prove that at most a quadratic increase in the number of times the target function is evaluated is needed to achieve the performance of a local search algorithm started from the attraction region of the optimum. Experiments are provided using SPSA (Simultaneous Perturbation Stochastic Approximation) and k-means as local search algorithms, and the results indicate that the proposed strategies work well in practice, and, in all cases studied, need only logarithmically more evaluations of the target function as opposed to the theoretically suggested quadratic increase.
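The allocation idea in this abstract can be illustrated with a much-simplified sketch: each instance is scored optimistically as its best value so far plus an assumed remaining improvement C / n_i, where n_i is the number of evaluations it has used, and the next evaluation goes to the highest-scoring instance. The C / n_i term is a crude stand-in for the convergence-rate estimate with an unknown constant used in the paper; the hill-climbing step and all names here are illustrative assumptions.

```python
import random

def multistart(f, neighbor, starts, budget, C=1.0):
    """Bandit-style multistart for maximizing f.

    Each instance i is scored best_i + C / n_i (optimistic potential
    under an assumed C / n convergence rate); the next function
    evaluation is allocated to the highest-scoring instance, which
    then takes one hill-climbing step via `neighbor`.
    Returns the best function value found.
    """
    # one entry per instance: [current point, best value, evaluations used]
    inst = [[x, f(x), 1] for x in starts]
    for _ in range(budget):
        i = max(range(len(inst)),
                key=lambda j: inst[j][1] + C / inst[j][2])
        x, best, n = inst[i]
        x2 = neighbor(x)              # one local-search step
        v2 = f(x2)
        if v2 > best:                 # hill climbing: keep only improvements
            inst[i] = [x2, v2, n + 1]
        else:
            inst[i] = [x, best, n + 1]
    return max(e[1] for e in inst)
```

On a unimodal objective the optimistic score quickly concentrates the budget on the instance started nearest the optimum, while poor instances are starved after a few evaluations.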
Analyzing Bandit-based Adaptive Operator Selection Mechanisms
Author manuscript, AMAI – Special Issue on LION. DOI: 10.1007/s10472-010-9213-y
, 2010
Abstract
Several techniques have been proposed to tackle the Adaptive Operator Selection (AOS) issue in Evolutionary Algorithms. Some recent proposals are based on the Multi-Armed Bandit (MAB) paradigm: each operator is viewed as one arm of a MAB problem, and the rewards are mainly based on the fitness improvement brought by the corresponding operator to the individual it is applied to. However, the AOS problem is dynamic, whereas standard MAB algorithms are known to optimally solve the exploitation versus exploration tradeoff in static settings. An original dynamic variant of the standard MAB Upper Confidence Bound algorithm is proposed here, using a sliding time window to compute both its exploitation and exploration terms. In order to perform sound comparisons between AOS algorithms, artificial scenarios have been proposed in the literature. They are extended here toward smoother transitions between different reward settings. The resulting original testbed also includes a real evolutionary algorithm that is applied to the well-known Royal Road problem. It is used here to perform a thorough analysis of the behavior of AOS algorithms, to assess their sensitivity with respect to their own hyperparameters, and to propose a sound
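The sliding-window idea can be sketched directly: the empirical mean and play counts that feed the UCB score are computed over only the last W plays, so an operator whose reward drifts is forgotten rather than averaged over the whole history. This is a minimal sketch of a sliding-window UCB, not the paper's exact variant; all names and the reward interface are assumptions.

```python
import math
from collections import deque

def sliding_window_ucb(n_arms, rewards, horizon, window=50, scale=2.0):
    """Dynamic UCB: statistics are computed over the last `window` plays
    only, so the policy can track arms whose reward distribution drifts.

    rewards(t, arm) -> observed reward in [0, 1] at step t
    Returns the sequence of arms played.
    """
    history = deque(maxlen=window)   # (arm, reward) pairs in the window
    choices = []
    for t in range(horizon):
        counts = [0] * n_arms
        sums = [0.0] * n_arms
        for a, r in history:
            counts[a] += 1
            sums[a] += r
        if any(c == 0 for c in counts):
            arm = counts.index(0)    # play any arm unseen in the window
        else:
            arm = max(range(n_arms), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(scale * math.log(len(history)) / counts[a]))
        r = rewards(t, arm)
        history.append((arm, r))
        choices.append(arm)
    return choices
```

When the best arm switches mid-run, the stale reward-1 observations of the old arm age out of the window within about W steps, after which the policy locks onto the new best arm.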
Abstract
Preface. This is a revised version of the master thesis Algorithm Selection for the Graph Coloring Problem. In the following paragraph, we list the corrections compared to the original version. Insignificant typos and spelling errors are not marked explicitly. Notation: p. x, t. y means page x, line y from top; similarly, p. x, b. y means page x, line y from bottom. • p. 23, b. 8: Changed citation source to [109]. Note that this changes the enumeration of the remaining references. • p. 39, first subsection: We are using maximal cliques, not maximum cliques, as a graph feature. Acknowledgements. First of all, let me note that I don’t believe that many people will ever read this thesis. From my experience, I know that the acknowledgements in particular are one of the first chapters that everybody skips, for lack of time or simply lack of interest. Nevertheless, I would like to