Results 1 – 5 of 5
Sequential parameter optimization applied to self-adaptation for binary-coded evolutionary algorithms
Parameter Setting in Evolutionary Algorithms, Studies in Computational Intelligence, 2007
"... Summary. Adjusting algorithm parameters to a given problem is of crucial importance for performance comparisons as well as for reliable (first) results on previously unknown problems, or with new algorithms. This also holds for parameters controlling adaptability features, as long as the optimizatio ..."
Abstract

Cited by 6 (2 self)
Summary. Adjusting algorithm parameters to a given problem is of crucial importance for performance comparisons, as well as for reliable (first) results on previously unknown problems or with new algorithms. This also holds for parameters controlling adaptability features, as long as the optimization algorithm is not able to completely self-adapt itself to the posed problem and thereby get rid of all parameters. We present the recently developed sequential parameter optimization (SPO) technique, which reliably finds good parameter sets for stochastically disturbed algorithm output. SPO combines classical regression techniques and modern statistical approaches for deterministic algorithms such as Design and Analysis of Computer Experiments (DACE). Moreover, it is embedded in a twelve-step procedure that aims at conducting optimization experiments in a statistically sound manner, focusing on answering scientific questions. We apply SPO to a question that has not yet received much attention: is self-adaptation, as known from real-coded evolution strategies, useful when applied to binary-coded problems? Here, SPO enables obtaining parameters that result in good performance of self-adaptive mutation operators. It thereby allows for reliable comparison of modified and traditional evolutionary algorithms, finally allowing for well-founded conclusions concerning the usefulness of either technique.
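The core loop the abstract describes — repeatedly evaluating a stochastically disturbed objective and concentrating the search on promising parameter settings — can be illustrated with a heavily simplified sketch. This toy version replaces the regression/DACE surrogate with plain averaging of repeated runs and interval shrinking, and the objective `noisy_performance` is a made-up stand-in, not the paper's benchmark:

```python
import random

random.seed(0)

def noisy_performance(mutation_rate):
    # Stand-in for one stochastic EA run; the true optimum of this
    # toy objective is mutation_rate = 0.3.
    return (mutation_rate - 0.3) ** 2 + random.gauss(0, 0.01)

def spo_like_search(lo=0.0, hi=1.0, rounds=5, candidates=8, repeats=4):
    # Very simplified SPO-style loop: sample a design of candidate
    # settings, average repeated noisy evaluations per setting, keep
    # the incumbent best, and shrink the search interval around it.
    # (Real SPO fits a DACE/kriging model instead of shrinking blindly.)
    best = None
    for _ in range(rounds):
        design = [random.uniform(lo, hi) for _ in range(candidates)]
        scored = [
            (sum(noisy_performance(x) for _ in range(repeats)) / repeats, x)
            for x in design
        ]
        score, x = min(scored)
        if best is None or score < best[0]:
            best = (score, x)
        width = (hi - lo) / 2
        lo = max(0.0, best[1] - width / 2)
        hi = min(1.0, best[1] + width / 2)
    return best[1]
```

Averaging `repeats` runs per setting is what makes the comparison between candidate parameter sets meaningful despite the stochastic disturbance of the output.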
Heuristics for Generating Additive Spanners
, 2004
"... Given an undirected and unweighted graph G, the subgraph S is an additive spanner of G with delay d if the distance between any two vertices in S is no more than d greater than their distance in G. It is known that the problem of finding additive spanners of arbitrary graphs for any fixed value of d ..."
Abstract
Given an undirected and unweighted graph G, the subgraph S is an additive spanner of G with delay d if the distance between any two vertices in S is no more than d greater than their distance in G. It is known that the problem of finding additive spanners of arbitrary graphs with a minimum number of edges, for any fixed value of d, is NP-hard. Additive spanners are used as substructures for communication networks that are subject to design constraints such as minimizing the number of connections in the network, or permitting only a maximum number of connections at any one node. In this thesis, we consider the problem of constructing good additive spanners. We say that a spanner is “good” if it contains few edges, but not necessarily a minimum number of them. We present several algorithms which, given a graph G and a delay parameter d as input, produce a graph S which is an additive spanner of G with delay d. We evaluate each of these algorithms experimentally over a large set of input
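The defining condition — dist_S(u, v) ≤ dist_G(u, v) + d for every vertex pair — is straightforward to verify with breadth-first search, since both graphs are unweighted. A minimal checker (the example graphs are illustrative, not from the thesis):

```python
from collections import deque

def bfs_dist(adj, src):
    # Unweighted shortest-path distances from src via breadth-first search.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_additive_spanner(adj_g, adj_s, d):
    # True iff dist_S(u, v) <= dist_G(u, v) + d for every reachable pair.
    for u in adj_g:
        dg = bfs_dist(adj_g, u)
        ds = bfs_dist(adj_s, u)
        for v in dg:
            if ds.get(v, float("inf")) > dg[v] + d:
                return False
    return True

# A 4-cycle G and a spanning path S obtained by dropping edge (0, 3):
# any distance grows by at most 2, so S is an additive spanner with delay 2.
g = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
s = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

In the example, dist_S(0, 3) = 3 while dist_G(0, 3) = 1, so S satisfies the condition for d = 2 but not for d = 1.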
Algorithm Selection for the Graph Coloring Problem
"... We present an automated algorithm selection method based on machine learning for the graph coloring problem (GCP). For this purpose, we identify 78 features for this problem and evaluate the performance of six stateoftheart (meta)heuristics for the GCP. We use the obtained data to train several ..."
Abstract
We present an automated algorithm selection method based on machine learning for the graph coloring problem (GCP). For this purpose, we identify 78 features for this problem and evaluate the performance of six state-of-the-art (meta)heuristics for the GCP. We use the obtained data to train several classification algorithms that are applied to predict, for a new instance, the algorithm with the highest expected performance. To achieve better performance for the machine learning algorithms, we investigate the impact of parameters, and evaluate different data discretization and feature selection methods. Finally, we evaluate our approach, which exploits the existing GCP techniques and the automated algorithm selection, and compare it with existing heuristic algorithms. Experimental results show that the GCP solver based on machine learning outperforms previous methods on benchmark instances.
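The selection step itself — map an instance's feature vector to the solver expected to perform best — can be sketched with a tiny 1-nearest-neighbour classifier. The two features, the training instances, and the solver names below are illustrative assumptions, not the paper's 78-feature portfolio:

```python
import math

# Hypothetical training data: (edge density, average degree) of an
# instance, paired with the solver that performed best on it.
train = [
    ((0.10, 3.0), "DSATUR"),
    ((0.15, 4.0), "DSATUR"),
    ((0.70, 40.0), "TabuCol"),
    ((0.80, 55.0), "TabuCol"),
]

def select_algorithm(features):
    # 1-nearest-neighbour portfolio selection: predict the solver of the
    # training instance whose feature vector is closest to this one.
    return min(train, key=lambda t: math.dist(t[0], features))[1]
```

In practice one would normalize the features and use a stronger classifier, but the interface is the same: features in, predicted solver out.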