Results 1–10 of 22
Learning Evaluation Functions to Improve Optimization by Local Search
Journal of Machine Learning Research, 2000
Abstract

Cited by 59 (0 self)
This paper describes algorithms that learn to improve search performance on large-scale optimization tasks. The main algorithm, Stage, works by learning an evaluation function that predicts the outcome of a local search algorithm, such as hill-climbing or Walksat, from features of states visited during search. The learned evaluation function is then used to bias future search trajectories toward better optima on the same problem. Another algorithm, X-Stage, transfers previously learned evaluation functions to new, similar optimization problems. Empirical results are provided on seven large-scale optimization domains: bin-packing, channel routing, Bayesian network structure-finding, radiotherapy treatment planning, cartogram design, Boolean satisfiability, and Boggle board setup.
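The two-phase idea in this abstract — learn to predict the outcome hill-climbing will reach from features of the start state, then search on that prediction — can be sketched on a toy bit-string problem. Everything here (the objective, the single feature, the problem size) is an illustrative choice, not one of the paper's benchmarks:

```python
import random

random.seed(0)

N = 20  # toy bit-string length (our choice, not from the paper)

def objective(s):
    # Toy objective to maximize: number of equal adjacent bits.
    return sum(1 for i in range(N - 1) if s[i] == s[i + 1])

def feature(s):
    # A single cheap state feature, in the spirit of Stage's feature vectors.
    return float(sum(s))

def hillclimb(s, score):
    # First-improvement hill-climbing until a local optimum of `score`.
    improved = True
    while improved:
        improved = False
        for i in range(N):
            t = s[:]
            t[i] ^= 1
            if score(t) > score(s):
                s, improved = t, True
                break
    return s

# Phase 1: from random restarts, learn V(feature(start)) ~ quality of the
# local optimum hill-climbing reaches from `start`.
xs, ys = [], []
for _ in range(40):
    start = [random.randint(0, 1) for _ in range(N)]
    xs.append(feature(start))
    ys.append(float(objective(hillclimb(start, objective))))

# Least-squares fit of w0 + w1*x via the 2x2 normal equations.
n = len(xs)
sx, sxx = sum(xs), sum(x * x for x in xs)
sy, sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))
w1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
w0 = (sy - w1 * sx) / n

# Phase 2: hill-climb on the learned predictor to pick a promising state,
# then run the real objective's hill-climbing from there.
def predicted(s):
    return w0 + w1 * feature(s)

guess = hillclimb([random.randint(0, 1) for _ in range(N)], predicted)
best = objective(hillclimb(guess, objective))
```

In the real algorithm the two climbs alternate repeatedly on richer feature vectors; this sketch shows one round of the learn-then-bias loop.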
Experiments with Parallel Graph Coloring Heuristics
In (Johnson & Trick, 1994)
Abstract

Cited by 27 (0 self)
We report on experiments with a new hybrid graph coloring algorithm, which combines a parallel version of Morgenstern's S-Impasse algorithm [20] with exhaustive search. We contribute new test data arising in five different application domains, including register allocation and class scheduling. We test our algorithms both on this test data and on several types of randomly generated graphs. We compare our parallel implementation, which runs on the CM-5, with two simple heuristics, the Saturation algorithm of Brélaz [4] and the Recursive Largest First (RLF) algorithm of Leighton [18]. We also compare our results with previous work reported by Morgenstern [20] and Johnson et al. [13]. Our main results are as follows. • On the randomly generated graphs, the performance of Hybrid is consistently better than that of the sequential algorithms, both in speed and in the number of colorings produced. However, on large random graphs, our algorithms do not come close to the best colorings found ...
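One of the sequential baselines named above, Brélaz's Saturation algorithm (DSATUR), is simple enough to sketch: repeatedly color the uncolored vertex with the most distinctly colored neighbors (its saturation degree), breaking ties by degree. The tiny graph below is an illustrative example, not the paper's test data:

```python
def dsatur(adj):
    """Greedy DSATUR coloring of a graph given as {vertex: neighbor list}."""
    n = len(adj)
    color = [None] * n
    for _ in range(n):
        def key(v):
            # (saturation degree, degree): distinct neighbor colors first.
            sat = len({color[u] for u in adj[v] if color[u] is not None})
            return (sat, len(adj[v]))
        v = max((u for u in range(n) if color[u] is None), key=key)
        used = {color[u] for u in adj[v]}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A 5-cycle is odd, so it needs 3 colors; DSATUR finds a proper 3-coloring.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
colors = dsatur(adj)
```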
Hill-Climbing Finds Random Planted Bisections
Proc. 12th Symposium on Discrete Algorithms (SODA 01), ACM Press, 2001
Abstract

Cited by 17 (1 self)
We analyze the behavior of hill-climbing algorithms for the minimum bisection problem on instances drawn from the "planted bisection" random graph model, G(n, p, q), previously studied in [3, 4, 10, 12, 15, 9, 7]. This is one of the few problem distributions for which various popular heuristic methods, such as simulated annealing, have been proven to succeed. However, it has been open whether these sophisticated methods were necessary, or whether simpler heuristics would also work. Juels [15] made the first progress towards an answer by showing that simple hill-climbing does suffice for very wide separations between p and q.
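The setting can be sketched concretely: sample a graph from the planted model (edge probability p inside each half, q across), then run first-improvement hill-climbing over balanced vertex swaps. The parameters below are illustrative, chosen with a wide p–q separation; this is a sketch of the model and the heuristic, not the paper's analysis:

```python
import random

random.seed(1)

n, p, q = 24, 0.7, 0.1   # toy G(n, p, q) parameters
half = n // 2

# Plant the bisection: vertices 0..half-1 vs half..n-1.
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < (p if (i < half) == (j < half) else q)]

def cut(side):
    return sum(1 for i, j in edges if side[i] != side[j])

# Random balanced start.
side = [0] * half + [1] * half
random.shuffle(side)
start_cut = cut(side)

# First-improvement hill-climbing over balanced swaps (one vertex per side).
improved = True
while improved:
    improved = False
    cur = cut(side)
    for u in range(n):
        for w in range(n):
            if side[u] == 0 and side[w] == 1:
                side[u], side[w] = 1, 0   # try the swap
                if cut(side) < cur:
                    improved = True
                    break
                side[u], side[w] = 0, 1   # undo it
        if improved:
            break

final_cut = cut(side)
planted_cut = cut([0 if v < half else 1 for v in range(n)])
```

With a separation this wide the local optimum found typically is (or is close to) the planted cut, which is what the analysis above makes rigorous for simple hill-climbing.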
Multiagent Cooperative Search for Portfolio Selection
2001
Abstract

Cited by 11 (1 self)
… this paper because we assume throughout that the total initial wealth of all systems of agents is $1.
Eliminating Incoherence from Subjective Estimates of Chance
In: Proceedings of the 8th International Conference on the Principles of Knowledge Representation and Reasoning (KR), 2002
Abstract

Cited by 8 (5 self)
Human judgment is an essential source of Bayesian probabilities but is plagued by incoherence when complex or conditional events are involved. We consider a method for adjusting estimates of chance over Boolean events so as to render them probabilistically coherent. The method works by searching for a sparse distribution that approximates a target set of judgments. (We show that sparse distributions suffice for this purpose.) The feasibility of our method was tested by randomly generating sets of coherent and incoherent estimates of chance over 30 to 50 variables.
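The projection idea — replace incoherent estimates with the closest estimates induced by some genuine probability distribution over the atoms — can be illustrated by brute force on two variables. The target numbers and the grid search are hypothetical stand-ins; the paper's method instead searches for sparse distributions over many variables:

```python
from itertools import product

# Hypothetical incoherent judgments over Boolean variables A, B:
# coherence requires P(A and B) >= P(A) + P(B) - 1 = 0.3, but 0.2 was given.
targets = {'A': 0.7, 'B': 0.6, 'AB': 0.2}

def implied(p):
    # p = (P(AB), P(A,~B), P(~A,B), P(~A,~B)) over the four atoms.
    return {'A': p[0] + p[1], 'B': p[0] + p[2], 'AB': p[0]}

# Grid search over the simplex of atom probabilities for the closest
# coherent set of estimates (least squared error).
step = 0.02
grid = [round(i * step, 4) for i in range(int(1 / step) + 1)]
best, best_err = None, float('inf')
for p0, p1, p2 in product(grid, repeat=3):
    p3 = 1.0 - p0 - p1 - p2
    if p3 < -1e-9:
        continue  # outside the simplex
    est = implied((p0, p1, p2, p3))
    err = sum((est[k] - targets[k]) ** 2 for k in targets)
    if err < best_err:
        best, best_err = est, err
```

Any estimates produced this way are coherent by construction, since they come from an actual distribution over the atoms; the residual error measures how incoherent the original judgments were.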
PERM: A Monte Carlo strategy for simulating polymers and other things
 In Monte Carlo Approach to Biopolymers and Protein, 1998
Abstract

Cited by 6 (3 self)
… configurations from a given Gibbs-Boltzmann distribution. The method is not based on the Metropolis concept of establishing a Markov process whose stationary state is the wanted distribution. Instead, it starts off building instances according to a biased distribution, but corrects for this by cloning “good” and killing “bad” configurations. In doing so, it uses the fact that nontrivial problems in statistical physics are high-dimensional. Therefore, instances are built step by step, and the final “success” of an instance can be guessed at an early stage. Using weighted samples, this is done so that the final distribution is strictly unbiased. In contrast to evolutionary algorithms, the cloning/killing is done without simultaneously keeping a large population in computer memory. We apply this in large-scale simulations of homopolymers near the theta and unmixing critical points. In addition we sketch other applications, notably to polymers in confined geometries and to randomly branched polymers. For theta polymers we confirm the very strong logarithmic corrections found in previous work. For critical unmixing we essentially confirm the Flory-Huggins mean field theory and the logarithmic corrections to it computed by Duplantier. We suggest that the latter are responsible for some apparent violations of mean field behavior. This concerns in particular the exponent for the chain length dependence of the critical density, which is 1/2 in Flory-Huggins theory but is claimed to be ≈ 0.38 in several experiments.
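The grow/clone/prune mechanism described above can be sketched for self-avoiding walks on the square lattice, the textbook PERM application. The chain length, tour count, thresholds, and running-mean guide are crude illustrative choices, not the adaptive schedule used in serious simulations:

```python
import random

random.seed(2)

TARGET = 10     # chain length to grow (toy choice)
TOURS = 200     # number of growth tours started
samples = []    # Rosenbluth weights of chains that reach TARGET

def free_neighbors(walk):
    x, y = walk[-1]
    occ = set(walk)
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (x + dx, y + dy) not in occ]

def grow(walk, weight, avg):
    # avg[n]: crude running mean of weights seen at length n, used as the
    # pruning/enrichment threshold (a stand-in for PERM's adaptive thresholds).
    n = len(walk) - 1
    avg[n] = avg.get(n, weight)
    avg[n] += 0.1 * (weight - avg[n])
    if n == TARGET:
        samples.append(weight)
        return
    nbrs = free_neighbors(walk)
    if not nbrs:
        return  # attrition: the chain trapped itself
    w = weight * len(nbrs)          # Rosenbluth weight update
    step = random.choice(nbrs)
    if w > 2 * avg[n]:              # enrich: clone, halving the weight
        grow(walk + [step], w / 2, avg)
        grow(walk + [random.choice(nbrs)], w / 2, avg)
    elif w < 0.5 * avg[n]:          # prune: kill half the time, else reweight
        if random.random() < 0.5:
            grow(walk + [step], 2 * w, avg)
    else:
        grow(walk + [step], w, avg)

avg = {}
for _ in range(TOURS):
    grow([(0, 0)], 1.0, avg)

# The mean weight estimates the number of self-avoiding walks of length TARGET.
est = sum(samples) / TOURS
```

Note how cloning happens depth-first inside a single tour, so no large population is ever held in memory, which is exactly the contrast with evolutionary algorithms drawn in the abstract.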
Go with the winners for Graph Bisection
In Proc. 9th SODA, pp. 510–520, 1998
Abstract

Cited by 6 (0 self)
We analyze "Go with the winners" for graph bisection. We introduce a weaker version of expansion called "local expansion". We show that "Go with the winners" works well in any search space whose subgraphs with solutions at least as good as a certain threshold have local expansion, and where these subgraphs do not shrink by more than a polynomial factor when the threshold is incremented. We give a general technique for showing that solution spaces for random instances of problems have local expansion. We apply this technique to the minimum bisection problem for random graphs. We conclude that "Go with the winners" approximates the best solution in random graphs of certain densities with planted bisections in polynomial time, and finds the optimal solution in quasipolynomial time. Although other methods also solve this problem for the same densities, the set of tools we develop may be useful in the analysis of similar problems. In particular, our results easily extend to hypergraph b...
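The "Go with the winners" schedule itself — random-walk a population below a tightening threshold, and replace particles that die with clones of survivors — can be sketched on a toy landscape (bit strings with cost = number of ones, standing in for the cut size of a bisection). Population size, stage count, and walk length are arbitrary illustrative choices:

```python
import random

random.seed(3)

N, B, STAGES = 30, 16, 25   # bits, population size, threshold stages

def cost(s):
    # Toy objective to minimize (stands in for the bisection cut size).
    return sum(s)

# Start all B particles at random states.
pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(B)]
threshold = N
for _ in range(STAGES):
    threshold -= 1
    # Each particle random-walks, keeping only moves that respect the threshold.
    for s in pop:
        for _ in range(50):
            i = random.randrange(N)
            s[i] ^= 1
            if cost(s) > threshold:
                s[i] ^= 1  # undo the move
    # "Go with the winners": dead particles are replaced by clones of survivors.
    survivors = [s for s in pop if cost(s) <= threshold]
    if not survivors:
        break
    pop = [random.choice(survivors)[:] for _ in range(B)]

best = min(cost(s) for s in pop)
```

The local-expansion condition in the abstract is exactly what guarantees that, in each sub-threshold region, the random walks mix well enough for the surviving clones to remain representative.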
Retrospective analysis: Refinements of local search for satisfiability testing
Proc. Fourth International Conference on Neural Networks and Applications (NeuroNîmes 91), EC2, 1995
Abstract

Cited by 4 (2 self)
Local search routines typically depend on parameters that control the search, such as how long to search before restarting. Optimizing these parameters improves performance and is important for a fair comparison of differing approaches. However, careful optimization is computationally expensive and has been infeasible for larger problem sizes. Here, a probabilistic method, retrospective parameter optimization, is presented. Retrospective analysis allows certain parameters to be tuned using previously collected runtime data. The method is applied to optimizing the mean performance of Wsat on Random 3-SAT and scheduling problems by tuning the MaxFlips parameter. Evidence is provided that the optimal value of MaxFlips scales quadratically for Random 3-SAT. Further, we show that parallelizing Wsat leads to almost linear speedup on Random 3-SAT for a moderate number of processors. Finally, retrospective analysis is used to test refinements of Wsat, including an implicit propagation mechanism which improves performance on Sadeh's scheduling problems by exploiting their structure.
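The cutoff-tuning idea can be sketched with the standard renewal-theory estimate applied to a collection of recorded run lengths: restarting every `cutoff` flips costs, in expectation, the truncated mean run length divided by the per-run success probability. The data below are synthetic heavy-tailed stand-ins for archived Wsat run lengths, and this estimator is a textbook sketch, not the thesis's exact procedure:

```python
import random

random.seed(4)

# Synthetic "previously collected" run lengths (flips until a solution),
# heavy-tailed on purpose so that restarting actually helps.
runs = [int(random.paretovariate(1.2) * 100) for _ in range(2000)]

def expected_cost(cutoff):
    # Renewal argument: E[total flips with restarts at `cutoff`]
    #   = E[min(L, cutoff)] / P(L <= cutoff), estimated from the data.
    successes = sum(1 for l in runs if l <= cutoff)
    if successes == 0:
        return float('inf')
    mean_truncated = sum(min(l, cutoff) for l in runs) / len(runs)
    return mean_truncated / (successes / len(runs))

# Retrospective tuning: evaluate candidate MaxFlips values on the old data.
candidates = (200, 500, 1000, 5000, 20000)
best_cutoff = min(candidates, key=expected_cost)
```

The point of the retrospective approach is visible here: once the run lengths are recorded, evaluating a new cutoff costs a pass over the data rather than a new batch of search runs.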
Adaptive Methods for Netlist Partitioning
1997
Abstract

Cited by 2 (0 self)
An algorithm that remains in use at the core of many partitioning systems is the Kernighan-Lin algorithm and a variant, the Fiduccia-Mattheyses (FM) algorithm. To understand the FM algorithm, we applied principles of data engineering, where visualization and statistical analysis are used to analyze runtime behavior. We identified two improvements to the algorithm which, without clustering or an improved heuristic function, bring the performance of the algorithm near that of more sophisticated algorithms. One improvement is based on the observation, explored empirically, that the full passes in the FM algorithm appear comparable to a stochastic local restart in the search. We motivate this observation with a discussion of recent improvements in Markov Chain Monte Carlo methods in statistics. The other improvement is based on the observation that when an FM-like algorithm is run 20 times and the best run chosen, the performance trace of the algorithm on earlier runs is useful data for ...
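The "full pass" the abstract refers to can be sketched in simplified form: move each vertex at most once, always taking the highest-gain remaining move, record the whole move sequence, and keep the best prefix. This toy version omits FM's balance constraint and gain buckets, and the graph is a hypothetical example, not a netlist from the paper:

```python
# Toy graph standing in for a netlist (hypothetical example).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5), (1, 4)]
n = 6
side = [0, 0, 0, 1, 1, 1]   # initial two-way partition

def cut(s):
    return sum(1 for a, b in edges if s[a] != s[b])

locked = set()
history = [(cut(side), side[:])]   # cut size after each move in the pass
while len(locked) < n:
    def gain(v):
        # Gain of moving v: +1 per incident cut edge it would uncut,
        # -1 per incident uncut edge it would cut.
        g = 0
        for a, b in edges:
            if v in (a, b):
                other = b if v == a else a
                g += 1 if side[v] != side[other] else -1
        return g
    v = max((u for u in range(n) if u not in locked), key=gain)
    side[v] ^= 1        # make the move, even if its gain is negative
    locked.add(v)
    history.append((cut(side), side[:]))

# Keep the best prefix of the move sequence.
best_cut, best_side = min(history, key=lambda t: t[0])
```

Accepting negative-gain moves mid-pass and then rolling back to the best prefix is what gives a full pass its restart-like, hill-escaping character, which is the empirical observation the abstract builds on.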
Phase Diagram of Random Heteropolymers: Replica Approach and Application of a New Monte Carlo Algorithm.
J. Mol. Liq., 2000
Abstract

Cited by 2 (1 self)
… the collapse transition before the freezing seems to be predicted exactly by the annealed approximation, while not much can be said about the existence of the conjectured phase transition. The Monte Carlo method that we used, the Pruned-Enriched Rosenbluth Method (PERM), has proved to be very efficient. Its principles and its implementation are described in an appendix. 1. MODEL Proteins perform their biological activity when they are folded in a well-defined structure, named the native state, which is uniquely determined by their amino-acid sequence [1, 2]. This behavior attracted much interest in the statistical mechanics community in the last decade. From a theoretical point of view, there are two complementary sides of the protein folding problem: (Present address: Freie Universität Berlin, FB Chemie, Institut für Kristallographie, Takustr. 6, D-14195 Berlin.) 1. To explain the stability of the native s...