Results 1–10 of 14
Learning Evaluation Functions to Improve Optimization by Local Search
 Journal of Machine Learning Research, 2000
Cited by 56 (0 self)
Abstract
This paper describes algorithms that learn to improve search performance on large-scale optimization tasks. The main algorithm, Stage, works by learning an evaluation function that predicts the outcome of a local search algorithm, such as hill-climbing or Walksat, from features of states visited during search. The learned evaluation function is then used to bias future search trajectories toward better optima on the same problem. Another algorithm, X-Stage, transfers previously learned evaluation functions to new, similar optimization problems. Empirical results are provided on seven large-scale optimization domains: bin-packing, channel routing, Bayesian network structure-finding, radiotherapy treatment planning, cartogram design, Boolean satisfiability, and Boggle board setup.
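The two-phase idea above (predict a run's eventual outcome from state features, then hill-climb on the prediction) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy bitstring objective, the single hand-coded feature, and the least-squares fit are all assumptions made for brevity.

```python
import random

random.seed(0)
N = 20

def objective(s):
    # hypothetical rugged objective over bitstrings, to MINIMIZE
    ones = sum(s)
    return abs(ones - N) + 3 * (ones % 3)

def features(s):
    return [1.0, float(sum(s))]          # bias term + one hand-coded feature

def hillclimb(s, value):
    """First-improvement descent on `value`, recording visited features."""
    visited = [features(s)]
    improved = True
    while improved:
        improved = False
        for i in range(N):
            t = s[:]
            t[i] ^= 1
            if value(t) < value(s):
                s = t
                visited.append(features(s))
                improved = True
                break
    return s, visited

def fit(xs, ys):
    # least-squares fit of w for ys ~ xs . w (two features, normal equations)
    a = b = c = d = e = 0.0
    for (x0, x1), y in zip(xs, ys):
        a += x0 * x0; b += x0 * x1; c += x1 * x1; d += x0 * y; e += x1 * y
    det = a * c - b * b
    return ((c * d - b * e) / det, (a * e - b * d) / det)

# Phase 1: run plain hill-climbing on the objective, collect training pairs
# (features of visited state, eventual outcome of that run)
xs, ys = [], []
for _ in range(30):
    s0 = [random.randint(0, 1) for _ in range(N)]
    end, visited = hillclimb(s0, objective)
    outcome = objective(end)
    xs += visited
    ys += [outcome] * len(visited)

# Phase 2: the learned evaluation predicts the outcome; search on it instead
w0, w1 = fit(xs, ys)
learned = lambda s: w0 + w1 * sum(s)
s0 = [random.randint(0, 1) for _ in range(N)]
guided_end, _ = hillclimb(s0, learned)
print(objective(guided_end))
```

STAGE proper alternates the two phases repeatedly and uses richer feature sets; the sketch shows only one round of learning and one guided restart.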
Experiments with Parallel Graph Coloring Heuristics
 In Johnson & Trick, 1994
Cited by 22 (0 self)
Abstract
We report on experiments with a new hybrid graph coloring algorithm, which combines a parallel version of Morgenstern's S-Impasse algorithm [20] with exhaustive search. We contribute new test data arising in five different application domains, including register allocation and class scheduling. We test our algorithms both on this test data and on several types of randomly generated graphs. We compare our parallel implementation, which runs on the CM-5, with two simple heuristics, the Saturation algorithm of Brélaz [4] and the Recursive Largest First (RLF) algorithm of Leighton [18]. We also compare our results with previous work reported by Morgenstern [20] and Johnson et al. [13]. Our main results are as follows. On the randomly generated graphs, the performance of Hybrid is consistently better than the sequential algorithms, both in terms of speed and number of colorings produced. However, on large random graphs, our algorithms do not come close to the best colorings found ...
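The Saturation heuristic of Brélaz used as a baseline above (often called DSATUR) can be sketched as follows; the 5-cycle example graph and helper names are illustrative, not from the paper.

```python
def dsatur(adj):
    """Brelaz's Saturation heuristic: repeatedly color the uncolored vertex
    whose neighbors already use the most distinct colors (ties broken by
    degree), giving it the smallest color unused among its neighbors."""
    color = {}
    while len(color) < len(adj):
        def sat(v):  # saturation degree: distinct colors among neighbors
            return len({color[u] for u in adj[v] if u in color})
        v = max((v for v in adj if v not in color),
                key=lambda v: (sat(v), len(adj[v])))
        used = {color[u] for u in adj[v] if u in color}
        color[v] = min(c for c in range(len(adj)) if c not in used)
    return color

# usage: an odd cycle (C5) needs exactly 3 colors
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = dsatur(c5)
print(max(coloring.values()) + 1)   # → 3
```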
Hill-Climbing Finds Random Planted Bisections
 Proc. 12th Symposium on Discrete Algorithms (SODA 01), ACM Press, 2001
Cited by 11 (1 self)
Abstract
We analyze the behavior of hill-climbing algorithms for the minimum bisection problem on instances drawn from the "planted bisection" random graph model G_{n,p,q}, previously studied in [3, 4, 10, 12, 15, 9, 7]. This is one of the few problem distributions for which various popular heuristic methods, such as simulated annealing, have been proven to succeed. However, it has been open whether these sophisticated methods were necessary, or whether simpler heuristics would also work. Juels [15] made the first progress towards an answer by showing that simple hill-climbing does suffice for very wide separations between p and q.
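The setting above can be sketched concretely: generate a planted bisection instance (edge probability p inside the two halves, q across), then run a simple swap-based hill-climber on balanced partitions. This is an illustrative toy, not the paper's analyzed algorithm; the instance size and probabilities are arbitrary.

```python
import random

random.seed(1)

def cut_size(edges, side):
    return sum(1 for u, v in edges if side[u] != side[v])

def hillclimb_bisection(n, edges):
    """Hill-climbing over balanced partitions: swap a pair of vertices
    across the cut whenever doing so reduces the cut size."""
    side = [0] * (n // 2) + [1] * (n // 2)
    random.shuffle(side)
    best = cut_size(edges, side)
    improved = True
    while improved:
        improved = False
        for u in range(n):
            for v in range(u + 1, n):
                if side[u] == side[v]:
                    continue
                side[u], side[v] = side[v], side[u]
                c = cut_size(edges, side)
                if c < best:
                    best, improved = c, True
                else:  # undo a non-improving swap
                    side[u], side[v] = side[v], side[u]
    return side, best

# planted bisection G_{n,p,q}: dense inside the planted halves, sparse across
n, p, q = 20, 0.9, 0.05
plant = [0] * (n // 2) + [1] * (n // 2)
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if random.random() < (p if plant[u] == plant[v] else q)]
side, cut = hillclimb_bisection(n, edges)
print(cut)
```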
Multiagent Cooperative Search for Portfolio Selection
2001
Cited by 9 (1 self)
Abstract
... this paper because we assume throughout that the total initial wealth of all systems of agents is $1 ...
PERM: A Monte Carlo strategy for simulating polymers and other things
 In: Monte Carlo Approach to Biopolymers and Protein, 1998
Cited by 6 (3 self)
Abstract
... configurations from a given Gibbs-Boltzmann distribution. The method is not based on the Metropolis concept of establishing a Markov process whose stationary state is the wanted distribution. Instead, it starts off building instances according to a biased distribution, but corrects for this by cloning "good" and killing "bad" configurations. In doing so, it uses the fact that nontrivial problems in statistical physics are high dimensional. Therefore, instances are built step by step, and the final "success" of an instance can be guessed at an early stage. Using weighted samples, this is done so that the final distribution is strictly unbiased. In contrast to evolutionary algorithms, the cloning/killing is done without simultaneously keeping a large population in computer memory. We apply this in large-scale simulations of homopolymers near the theta and unmixing critical points. In addition we sketch other applications, notably to polymers in confined geometries and to randomly branched polymers. For theta polymers we confirm the very strong logarithmic corrections found in previous work. For critical unmixing we essentially confirm the Flory-Huggins mean field theory and the logarithmic corrections to it computed by Duplantier. We suggest that the latter are responsible for some apparent violations of mean field behavior. This concerns in particular the exponent for the chain length dependence of the critical density, which is 1/2 in Flory-Huggins theory but is claimed to be ≈ 0.38 in several experiments.
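The step-by-step grow/clone/kill idea can be sketched on the simplest example, self-avoiding walks on the square lattice. The clone/prune thresholds below (factor 2 against a running average) are illustrative simplifications of PERM, not the authors' tuned scheme; for short chains the Rosenbluth weights happen to be deterministic, so the estimate of c_3 (the number of 3-step self-avoiding walks, exactly 36) comes out exact.

```python
import random

random.seed(2)
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
N_MAX = 6
est = [0.0] * (N_MAX + 1)   # sum of weights observed at each chain length
cnt = [0] * (N_MAX + 1)     # number of samples observed at each chain length

def grow(walk, weight):
    """Grow one chain step by step with Rosenbluth weighting; clone
    overweight partial chains, prune underweight ones."""
    n = len(walk) - 1
    est[n] += weight                 # est[n]/tours estimates c_n unbiasedly
    cnt[n] += 1
    if n == N_MAX:
        return
    occ = set(walk)
    x, y = walk[-1]
    free = [(x + dx, y + dy) for dx, dy in STEPS
            if (x + dx, y + dy) not in occ]
    if not free:
        return                       # trapped chain: attrition
    w = weight * len(free)           # Rosenbluth weight update
    avg = est[n + 1] / cnt[n + 1] if cnt[n + 1] else w
    if w > 2.0 * avg:                # enrich: two independent half-weight clones
        grow(walk + [random.choice(free)], w / 2)
        grow(walk + [random.choice(free)], w / 2)
    elif w < 0.5 * avg:              # prune: kill half, double the survivors
        if random.random() < 0.5:
            grow(walk + [random.choice(free)], 2 * w)
    else:
        grow(walk + [random.choice(free)], w)

tours = 2000
for _ in range(tours):
    grow([(0, 0)], 1.0)
print(est[3] / tours)   # estimate of c_3 = 36
```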
Eliminating Incoherence from Subjective Estimates of Chance
 In: Proceedings of the 8th International Conference on the Principles of Knowledge Representation and Reasoning (KR), 2002
Cited by 6 (4 self)
Abstract
Human judgment is an essential source of Bayesian probabilities but is plagued by incoherence when complex or conditional events are involved. We consider a method for adjusting estimates of chance over Boolean events so as to render them probabilistically coherent. The method works by searching for a sparse distribution that approximates a target set of judgments. (We show that sparse distributions suffice for this purpose.) The feasibility of our method was tested by randomly generating sets of coherent and incoherent estimates of chance over 30 to 50 variables.
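The core idea (search for a joint distribution whose implied event probabilities best match the judged ones) can be sketched on two Boolean variables. The greedy mass-shifting search, the coarse step schedule, and the example estimates are illustrative assumptions, not the paper's method, which works with sparse distributions over many variables.

```python
# estimates over events of two Boolean variables A, B; incoherent,
# since P(A & B) cannot exceed P(B)
est = {"A": 0.7, "B": 0.5, "A&B": 0.6}

atoms = [(a, b) for a in (0, 1) for b in (0, 1)]   # joint truth assignments

def implied(p):
    """Event probabilities implied by a distribution p over the atoms."""
    pa = sum(w for (a, b), w in zip(atoms, p) if a)
    pb = sum(w for (a, b), w in zip(atoms, p) if b)
    pab = sum(w for (a, b), w in zip(atoms, p) if a and b)
    return pa, pb, pab

def error(p):
    pa, pb, pab = implied(p)
    return (pa - est["A"])**2 + (pb - est["B"])**2 + (pab - est["A&B"])**2

# greedy mass-shifting search over the probability simplex
p = [0.25] * 4
step = 0.1
while step > 1e-3:
    moved = False
    for i in range(4):
        for j in range(4):
            if i != j and p[j] >= step:
                q = list(p)
                q[i] += step
                q[j] -= step
                if error(q) < error(p):
                    p, moved = q, True
    if not moved:
        step /= 2
print(round(error(p), 3))
```

Any distribution over the atoms is coherent by construction, so the adjusted estimates `implied(p)` automatically satisfy constraints such as P(A & B) ≤ P(B).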
Retrospective analysis: Refinements of local search for satisfiability testing
 Proc. Fourth International Conference on Neural Networks and Applications (Neuro-Nîmes 91), EC2, 1995
Cited by 4 (2 self)
Abstract
Local search routines typically depend on parameters that control the search, such as how long to search before restarting. Optimizing these parameters improves performance and is important for a fair comparison of differing approaches. However, careful optimization is computationally expensive and has been infeasible for larger problem sizes. Here, a probabilistic method, retrospective parameter optimization, is presented. Retrospective analysis allows certain parameters to be tuned using previously collected runtime data. The method is applied to optimizing mean performance of Wsat on Random 3-SAT and scheduling problems by tuning the MaxFlips parameter. Evidence is provided that the optimal value of MaxFlips scales quadratically for Random 3-SAT. Further, we show that parallelizing Wsat leads to almost linear speedup on Random 3-SAT for a moderate number of processors. Finally, retrospective analysis is used to test refinements of Wsat, including an implicit propagation mechanism which improves performance on Sadeh's scheduling problems by exploiting their structure.
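The retrospective idea of reusing recorded runtime data can be sketched with a standard renewal argument: given flip counts from unbounded runs, the expected cost of a restart-at-cutoff policy can be estimated for every candidate cutoff without rerunning the solver. The run lengths below are invented for illustration; this is a sketch of the tuning principle, not the thesis's procedure.

```python
def expected_cost(run_lengths, cutoff):
    """Expected total flips to solve under a restart-after-`cutoff` policy,
    estimated retrospectively from flip counts of unbounded runs."""
    n = len(run_lengths)
    p = sum(1 for t in run_lengths if t <= cutoff) / n  # P(success in a try)
    if p == 0:
        return float("inf")
    mean_flips = sum(min(t, cutoff) for t in run_lengths) / n
    return mean_flips / p   # renewal argument: cost per try / success prob

# flip counts recorded from hypothetical unbounded Walksat runs
runs = [30, 40, 55, 60, 900, 1200]
best = min(range(30, 1201), key=lambda c: expected_cost(runs, c))
print(best, round(expected_cost(runs, best), 2))   # → 60 76.25
```

A small cutoff here (restart after 60 flips) beats running to completion, because a few very long runs dominate the mean: exactly the effect that makes MaxFlips worth tuning.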
Adaptive Methods for Netlist Partitioning
1997
Cited by 2 (0 self)
Abstract
An algorithm that remains in use at the core of many partitioning systems is the Kernighan-Lin algorithm and a variant, the Fiduccia-Mattheyses (FM) algorithm. To understand the FM algorithm we applied principles of data engineering, where visualization and statistical analysis are used to analyze runtime behavior. We identified two improvements to the algorithm which, without clustering or an improved heuristic function, bring the performance of the algorithm near that of more sophisticated algorithms. One improvement is based on the observation, explored empirically, that the full passes in the FM algorithm appear comparable to a stochastic local restart in the search. We motivate this observation with a discussion of recent improvements in Markov chain Monte Carlo methods in statistics. The other improvement is based on the observation that when an FM-like algorithm is run 20 times and the best run chosen, the performance trace of the algorithm on earlier runs is useful data for ...
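The "full pass" being discussed can be sketched as follows: every vertex is moved exactly once, always the unlocked vertex of highest gain (even when the gain is negative), and the best prefix of the move sequence is kept. This simplified sketch omits the balance constraint and the gain-bucket data structure of real FM; the example graph is two triangles joined by one edge.

```python
def fm_pass(adj, side):
    """One simplified Fiduccia-Mattheyses-style pass (no balance
    constraint): move every vertex once, highest gain first, and return
    the best partition seen along the way."""
    def cut(s):
        return sum(1 for u in adj for v in adj[u] if u < v and s[u] != s[v])
    s = dict(side)
    locked = set()
    best, best_s = cut(s), dict(s)
    while len(locked) < len(adj):
        def gain(v):  # reduction in cut size if v switches sides
            return sum(1 if s[u] != s[v] else -1 for u in adj[v])
        v = max((v for v in adj if v not in locked), key=gain)
        s[v] ^= 1                     # move v, even at negative gain
        locked.add(v)
        c = cut(s)
        if c < best:                  # remember the best prefix of moves
            best, best_s = c, dict(s)
    return best_s, best

# usage: two triangles joined by one edge; the optimal cut is 1
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
side = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}   # a poor starting partition
part, cutsize = fm_pass(adj, side)
print(cutsize)   # → 1
```

Allowing negative-gain moves and then rolling back to the best prefix is what lets a pass escape local minima, and is why a full pass can resemble a stochastic restart.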
A Study on Performance of the (1+1) Evolutionary Algorithm
 Foundations of Genetic Algorithms 7, 2003
Cited by 1 (1 self)
Abstract
The first contribution of this paper is a theoretical comparison of the (1+1) evolutionary algorithm ((1+1) EA) to other evolutionary algorithms in the case of a so-called monotone reproduction operator, which indicates that the (1+1) EA is an optimal search technique in this setting. After that we study the expected optimization time of the (1+1) EA and show two set covering problem families where it is superior to certain general-purpose exact algorithms. Finally, some pessimistic estimates of mutation operators in terms of upper bounds on evolvability are suggested for NP-hard optimization problems.
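For reference, the (1+1) EA itself is tiny: keep a single parent, flip each bit independently with probability 1/n, and accept the offspring if it is at least as fit. The OneMax objective below is the standard textbook example, not one of the paper's set covering families.

```python
import random

random.seed(3)

def one_plus_one_ea(n, fitness, generations):
    """(1+1) EA: standard bit mutation with rate 1/n, elitist acceptance."""
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        y = [b ^ (random.random() < 1.0 / n) for b in x]
        if fitness(y) >= fitness(x):   # accept offspring if at least as fit
            x = y
    return x

# usage on OneMax (maximize the number of ones); expected optimization
# time on OneMax is Theta(n log n), so 3000 generations is plenty for n = 30
n = 30
best = one_plus_one_ea(n, sum, 3000)
print(sum(best))
```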
Lengauer, T.: Parallel “Go with the Winners” Algorithms in the LogP Model
 In: Proceedings of the 11th International Parallel Processing Symposium, IEEE Computer Society Press, Los Alamitos, 1997
Cited by 1 (0 self)
Abstract
We parallelize the ‘Go with the Winners’ algorithm of Aldous and Vazirani [1] and analyze the resulting parallel algorithm in the LogP model [4]. The main issues in the analysis are load imbalances and communication delays. The result of the analysis is a practical algorithm which, under reasonable assumptions, achieves linear speedup. Finally, we analyze our algorithm for a concrete application: generating models of amorphous chemical structures.
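The underlying Aldous-Vazirani scheme can be sketched sequentially on their original setting, walking down a random tree: a population of particles descends in lockstep, and particles that hit a dead end are replaced by clones of randomly chosen survivors. This toy omits the paper's actual contribution (the parallelization and its LogP analysis); the branching probability 0.55 is an arbitrary illustrative choice.

```python
import random

random.seed(4)

exists = {}   # lazily generated random tree, memoized for consistency
def child_exists(node, c):
    key = (node, c)
    if key not in exists:
        exists[key] = random.random() < 0.55   # each child present w.p. 0.55
    return exists[key]

def gwtw(n_particles, depth):
    """'Go with the Winners': particles walk down a random tree; at each
    stage, dead particles are replaced by clones of random survivors."""
    pop = [()] * n_particles            # all particles start at the root
    for _ in range(depth):
        survivors = []
        for node in pop:
            kids = [node + (c,) for c in (0, 1) if child_exists(node, c)]
            if kids:
                survivors.append(random.choice(kids))
        if not survivors:
            return 0                    # every particle hit a dead end
        pop = [random.choice(survivors) for _ in range(n_particles)]
    return len(pop[0])                  # depth reached

depth_reached = gwtw(50, 30)
print(depth_reached)
```

Cloning keeps the whole population on branches that are still alive, which is why a moderate population reaches depths that a single restarting random walk would rarely attain.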