Results 1–6 of 6
"Squeaky Wheel" Optimization
, 1999
"... We describe a general approach to optimization which we term "Squeaky Wheel" Optimization (swo). In swo, a greedy algorithm is used to construct a solution which is then analyzed to find the trouble spots, i.e., those elements, that, if improved, are likely to improve the objective function scor ..."
Abstract

Cited by 68 (2 self)
We describe a general approach to optimization which we term "Squeaky Wheel" Optimization (SWO). In SWO, a greedy algorithm is used to construct a solution, which is then analyzed to find the trouble spots, i.e., those elements that, if improved, are likely to improve the objective function score. The results of the analysis are used to generate new priorities that determine the order in which the greedy algorithm constructs the next solution. This Construct/Analyze/Prioritize cycle continues until some limit is reached, or an acceptable solution is found. SWO can be viewed as operating on two search spaces: solutions and prioritizations. Successive solutions are only indirectly related, via the reprioritization that results from analyzing the prior solution. Similarly, successive prioritizations are generated by constructing and analyzing solutions. This "coupled search" has some interesting properties, which we discuss. We report encouraging experimental results on two ...
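The Construct/Analyze/Prioritize cycle described above can be sketched in a few lines. The toy problem (tasks with durations and deadlines, total lateness as the objective) and the names `swo`, `priority`, and `blame` below are illustrative choices for this sketch, not the paper's implementation:

```python
import random

def swo(tasks, rounds=20, seed=0):
    """Minimal Squeaky Wheel Optimization sketch on a toy scheduling problem.

    Each task is (id, duration, deadline). The greedy constructor places
    tasks back to back in priority order; the analyzer blames tasks that
    finish after their deadline; the prioritizer bumps their priority so
    the next construction handles them earlier.
    """
    rng = random.Random(seed)
    priority = {t[0]: 0.0 for t in tasks}
    best_cost, best_order = float("inf"), None
    for _ in range(rounds):
        # Construct: schedule in decreasing priority (ties broken randomly).
        order = sorted(tasks, key=lambda t: (-priority[t[0]], rng.random()))
        end, cost, blame = 0, 0, []
        for tid, dur, deadline in order:
            end += dur
            late = max(0, end - deadline)
            cost += late
            if late:                    # Analyze: this task is a trouble spot.
                blame.append((tid, late))
        if cost < best_cost:
            best_cost, best_order = cost, [t[0] for t in order]
        # Prioritize: the squeaky wheels get more grease next round.
        for tid, late in blame:
            priority[tid] += late
    return best_cost, best_order
```

Note that successive solutions are indeed only indirectly related here: nothing is copied from one schedule to the next, only the updated priorities.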
Mixed-initiative decision support in agent-based automated contracting
 In Proc. of the Fourth Int'l Conf. on Autonomous Agents
, 2000
"... ..."
Incomplete Tree Search using Adaptive Probing
, 2001
"... When not enough time is available to fully explore a search tree, different algorithms will visit different leaves. Depthfirst search and depthbounded discrepancy search, for example, make opposite assumptions about the distribution of good leaves. Unfortunately, it is rarely clear a priori which ..."
Abstract

Cited by 15 (2 self)
When not enough time is available to fully explore a search tree, different algorithms will visit different leaves. Depth-first search and depth-bounded discrepancy search, for example, make opposite assumptions about the distribution of good leaves. Unfortunately, it is rarely clear a priori which algorithm will be most appropriate for a particular problem. Rather than fixing strong assumptions in advance, we propose an approach in which an algorithm attempts to adjust to the distribution of leaf costs in the tree while exploring it. By sacrificing completeness, such flexible algorithms can exploit information gathered during the search using only weak assumptions. As an example, we show how a simple depth-based additive cost model of the tree can be learned online. Empirical analysis using a generic tree search problem shows that adaptive probing is competitive with systematic algorithms on a variety of hard trees and outperforms them when the node-ordering heuristic makes many mistakes. Results on Boolean satisfiability and two different representations of number partitioning confirm these observations. Adaptive probing combines the flexibility and robustness of local search with the ability to take advantage of constructive heuristics.
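The "depth-based additive cost model learned online" can be sketched as follows. The binary tree, the running-mean model `m`, and the fixed 10% exploration rate are illustrative assumptions for this sketch, not the paper's actual algorithm:

```python
import random

def adaptive_probe(edge_cost, depth, probes=200, seed=1):
    """Sketch of adaptive probing on a binary tree.

    edge_cost(d, child) gives the cost added by taking child (0 or 1) at
    depth d; a leaf's cost is the sum along its root-to-leaf path. We
    learn an additive model m[d][child] online, probe greedily with a
    little exploration, and keep the best leaf cost seen.
    """
    rng = random.Random(seed)
    m = [[0.0, 0.0] for _ in range(depth)]   # running mean cost per action
    n = [[0, 0] for _ in range(depth)]       # sample counts per action
    best = float("inf")
    for _ in range(probes):
        cost = 0.0
        for d in range(depth):
            # Explore occasionally (and until both children are sampled);
            # otherwise follow the learned model greedily.
            if rng.random() < 0.1 or n[d][0] == 0 or n[d][1] == 0:
                c = rng.randrange(2)
            else:
                c = 0 if m[d][0] <= m[d][1] else 1
            step = edge_cost(d, c)
            cost += step
            # Online mean update for the model at this depth.
            n[d][c] += 1
            m[d][c] += (step - m[d][c]) / n[d][c]
        best = min(best, cost)
    return best
```

Each probe is a root-to-leaf walk, so the method is incomplete by construction, matching the abstract's trade of completeness for flexibility.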
Random walks and neighborhood bias in oversubscribed scheduling
 In Multidisciplinary International Conference on Scheduling (MISTA05
, 2005
"... Abstract This paper presents new results showing that a very simple stochastic hill climbing algorithm is as good or better than more complex metaheuristic methods for solving an oversubscribed scheduling problem: scheduling communication contacts on the Air Force Satellite Control Network (AFSCN). ..."
Abstract

Cited by 3 (2 self)
This paper presents new results showing that a very simple stochastic hill climbing algorithm is as good or better than more complex metaheuristic methods for solving an oversubscribed scheduling problem: scheduling communication contacts on the Air Force Satellite Control Network (AFSCN). The empirical results also suggest that the best neighborhood construction choices produce a search that is largely a greedy random walk of the graph induced by the complete neighborhood.
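A minimal sketch of the kind of stochastic hill climbing the abstract describes, assuming a generic cost function and neighbor sampler; accepting equal-cost moves is what lets the search walk randomly across plateaus. The function names and the toy objective in the test are illustrative, not the paper's AFSCN formulation:

```python
import random

def stochastic_hill_climb(cost, neighbor, start, steps=1000, seed=0):
    """Sample one random neighbor per step and move to it whenever it is
    as good or better than the current solution; track the best seen.
    """
    rng = random.Random(seed)
    cur, cur_cost = start, cost(start)
    best, best_cost = cur, cur_cost
    for _ in range(steps):
        cand = neighbor(cur, rng)
        cand_cost = cost(cand)
        if cand_cost <= cur_cost:        # "as good or better": accept
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
    return best, best_cost
```

On a plateau every sampled equal-cost neighbor is accepted, so the trajectory degenerates into exactly the greedy random walk over the neighborhood graph that the abstract mentions.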
Stochastic tree search: Where to put the randomness
 Proceedings of the IJCAI-01 Workshop on Stochastic Search
"... In this short note, I argue against two commonlyheld biases. The first is that stochastic search is applicable only to improvement search over complete solutions. On the contrary, many problems have effective greedy heuristics for constructing solutions, making a treestructured search space more ap ..."
Abstract

Cited by 1 (0 self)
In this short note, I argue against two commonly held biases. The first is that stochastic search is applicable only to improvement search over complete solutions. On the contrary, many problems have effective greedy heuristics for constructing solutions, making a tree-structured search space more appropriate. The second is that stochastic tree search algorithms should explore the same space of decisions as systematic methods. Constructing search trees in the traditional manner, by choosing the default variable at the parent and valuing it differently at each child, makes sense for efficient complete search, but is not necessarily the best choice for incomplete methods. In an empirical study using the combinatorial optimization problem of number partitioning, I show that the opposite approach, choosing a different variable at each child and giving it the default value, can be a good choice for incomplete stochastic algorithms.

1 Stochastic Tree Search

A large number of papers have appeared in recent years (including at AI conferences such as IJCAI and AAAI) devoted to stochastic improvement search for optimization problems, in which an algorithm attempts to improve a complete but potentially suboptimal solution. Many of these ‘local search’ methods, such as tabu search or simulated annealing, are completely general and use as their only source of problem-specific information the ability to evaluate the objective function on a complete solution. Others, such as WalkSAT, take advantage of heuristic guidance in the form of a function that identifies variables that might be profitably changed. Improvement methods are often contrasted with complete search methods, which use techniques such as branch-and-bound or dynamic backtracking [Ginsberg, 1993] to systematically extend an empty solution in all possible ways, implicitly traversing a tree containing all possible solutions. When run to completion, such methods guarantee an optimal solution. But it ...
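The two branching schemes the note contrasts can be sketched abstractly. The helper names (`choose_var`, `values`, `unassigned`, `default_value`) are hypothetical stand-ins for a problem's heuristics, not identifiers from the paper:

```python
def children_traditional(state, choose_var, values):
    """Traditional branching: fix one (heuristically chosen, "default")
    variable at this node and create one child per value it can take.
    """
    var = choose_var(state)
    return [dict(state, **{var: v}) for v in values(var)]

def children_alternative(state, unassigned, default_value):
    """The note's alternative: create one child per remaining variable,
    assigning each its heuristically preferred (default) value.
    """
    return [dict(state, **{var: default_value(var, state)})
            for var in unassigned(state)]
```

With Boolean variables `a` and `b`, the traditional scheme branches on the values of `a` (children `{a: True}` and `{a: False}`), while the alternative scheme branches on which variable to set next (children `{a: True}` and `{b: True}`), so an incomplete sampler that mostly follows defaults explores a quite different space.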
SELFPLANNER: An Intelligent Web-based Calendar Application
"... This paper presents SELFPLANNER, a webbased intelligent calendar application that plans a user's tasks. The user enters her tasks along with task details, i.e. duration, release time and deadline, location, unary and binary constraints and preferences, and the application schedules the tasks and pr ..."
Abstract

Cited by 1 (0 self)
This paper presents SELFPLANNER, a web-based intelligent calendar application that plans a user's tasks. The user enters her tasks along with task details, i.e., duration, release time and deadline, location, unary and binary constraints and preferences, and the application schedules the tasks and presents the resulting plan using Google's Calendar application. Whenever new tasks arrive, the user may ask for incremental replanning, whereas attention is paid to keep the original plan as unchanged as possible. SELFPLANNER supports both non-preemptive and preemptive tasks, with additional types of constraints over the preemptive ones. It also assigns location references to the tasks, whereas travel times are taken into account while scheduling. The user may impose ordering constraints among the tasks. Unary preferences assign utilities to the tasks' domains, whereas binary preferences concern minimizing or maximizing the distance between pairs of tasks. SELFPLANNER solves the planning problem using an adaptation of the Squeaky Wheel Optimization framework.