Results 1–7 of 7
Principles of Metareasoning
Artificial Intelligence, 1991
"... In this paper we outline a general approach to the study of metareasoning, not in the sense of explicating the semantics of explicitly specified metalevel control policies, but in the sense of providing a basis for selecting and justifying computational actions. This research contributes to a devel ..."
Abstract

Cited by 162 (10 self)
In this paper we outline a general approach to the study of metareasoning, not in the sense of explicating the semantics of explicitly specified metalevel control policies, but in the sense of providing a basis for selecting and justifying computational actions. This research contributes to a developing attack on the problem of resource-bounded rationality, by providing a means for analysing and generating optimal computational strategies. Because reasoning about a computation without doing it necessarily involves uncertainty as to its outcome, probability and decision theory will be our main tools. We develop a general formula for the utility of computations, this utility being derived directly from the ability of computations to affect an agent's external actions. We address some philosophical difficulties that arise in specifying this formula, given our assumption of limited rationality. We also describe a methodology for applying the theory to particular problem-solving systems, a...
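The paper's central quantity, the utility of a computation, can be sketched as the expected improvement the computation brings to the agent's external action, net of its time cost. A minimal illustration under that reading, where all names and values are invented for the example and are not the paper's notation:

```python
def value_of_computation(current_best, candidate_utilities, time_cost):
    """Net utility of performing a computation, following the idea that a
    computation's utility derives from how it changes the agent's external
    action. All names here are illustrative, not the paper's notation.

    current_best: utility of the action the agent would take without computing.
    candidate_utilities: sampled utilities of the action the agent would take
        after the computation (uncertain, hence a distribution of samples).
    time_cost: cost of the time the computation consumes.
    """
    expected_after = sum(candidate_utilities) / len(candidate_utilities)
    # Compute only if the expected gain in action utility exceeds its cost.
    return expected_after - current_best - time_cost

# Expected post-computation utility 5.0, current action worth 4.0,
# time cost 0.5 -> net value 0.5, so the computation is worth doing.
print(value_of_computation(4.0, [4.5, 5.0, 5.5], 0.5))  # -> 0.5
```

A negative value signals that the agent should act now rather than think further, which is the decision the formula is meant to support.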
Automated Learning of Load-Balancing Strategies For A Distributed Computer System
1992
"... (or derived) decision metrics are exemplified by MinLoad, which denotes the least among all the Load values. ###################################################################################### SENDERSIDE RULES (s) Possibledestinations = { site: Load(site)  Reference(s) < d(s) } Destination = ..."
Abstract

Cited by 17 (4 self)
(or derived) decision metrics are exemplified by MinLoad, which denotes the least among all the Load values.

SENDER-SIDE RULES (s):
  Possible-destinations = { site : Load(site) - Reference(s) < d(s) }
  Destination = Random(Possible-destinations)
  IF Load(s) - Reference(s) > q1(s) THEN Send

RECEIVER-SIDE RULES (r):
  IF Load(r) < q2(r) THEN Receive

Figure 3. The load-balancing policy considered in this thesis.

The sender-side rules are applied by the load-balancing software at the site of arrival (s) of a task. Reference can be either 0 or MinLoad; the other parameters (d, q1, and q2) take non-negative floating-point values. A remote destination (r) is chosen randomly from Possible-destinations, a set of sites whose load index falls within a small neighborhood of Reference. If Possible-destinations is the empty set, or if the rule for sending fails, then the task is executed locally at s, its site of arrival; ot...
Algorithms for Combinatorial Optimization in Real Time and their Automated Refinement by Genetic Programming
University of Illinois at Urbana-Champaign, 1994
"... The goal of this research is to develop a systematic, integrated method of designing efficient search algorithms that solve optimization problems in real time. Search algorithms studied in this thesis comprise metacontrol and primitive search. The class of optimization problems addressed are called ..."
Abstract

Cited by 7 (1 self)
The goal of this research is to develop a systematic, integrated method of designing efficient search algorithms that solve optimization problems in real time. Search algorithms studied in this thesis comprise meta-control and primitive search. The class of problems addressed is combinatorial optimization, examples of which include many NP-hard scheduling and planning problems, and problems in operations-research and artificial-intelligence applications. The problems we have addressed have a well-defined problem objective and a finite set of well-defined problem constraints. In this research, we use state-space trees as problem representations. The approach we have undertaken in designing efficient search algorithms is an engineering approach and consists of two phases: (a) designing generic search algorithms, and (b) improving, by genetics-based machine learning methods, the parametric heuristics used in the search algorithms designed. Our approach is a systematic method that integrates domain knowledge, search techniques, and automated learning techniques for designing better search algorithms. Knowledge captured in designing one search algorithm can be carried over for designing new ones.
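The second phase, genetics-based refinement of a parametric heuristic, can be illustrated with a toy loop: a population of heuristic weights is scored by the quality of the solutions they lead to, and the better half is kept and mutated. The search-quality function and every constant below are invented stand-ins for the thesis's real-time search benchmarks:

```python
import random

def search_quality(w):
    """Stand-in for running a generic search algorithm with heuristic
    weight w and scoring the solution it finds (higher is better).
    In this toy model, quality peaks at w = 0.6."""
    return 1.0 - (w - 0.6) ** 2

def refine(population, generations=30, seed=0):
    """Genetics-based refinement: keep the better half of the population,
    refill it with mutated copies, and repeat."""
    rng = random.Random(seed)
    for _ in range(generations):
        population.sort(key=search_quality, reverse=True)
        survivors = population[: len(population) // 2]
        # Mutate survivors with Gaussian noise, clamped to [0, 1].
        children = [min(1.0, max(0.0, w + rng.gauss(0, 0.1)))
                    for w in survivors]
        population = survivors + children
    return max(population, key=search_quality)

rng0 = random.Random(1)
best = refine([rng0.uniform(0, 1) for _ in range(8)])
print(round(best, 2))
```

Because the best survivor is always retained, solution quality never degrades across generations, which mirrors the thesis's point that knowledge gained while designing one search algorithm carries over to the next.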
Capability Representations for Brokering: A Survey
Available from: www.aiai.ed.ac.uk/~oplan/cdl/cdlker.ps, 1999
"... In this article we review knowledge representation formalisms that lend themselves to the representation of capabilities of intelligent agents. The aim of representing capabilities is, of course, that we want to reason about them. The reasoning task we are most interested in is capability brokeri ..."
Abstract

Cited by 3 (0 self)
In this article we review knowledge representation formalisms that lend themselves to the representation of capabilities of intelligent agents. The aim of representing capabilities is, of course, that we want to reason about them. The reasoning task we are most interested in is capability brokering, i.e. the task of finding an agent whose capability can be used to address a given problem. Thus, the first area we review is agent cooperation and communication, from which the brokering problem originates.
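In its simplest form, capability brokering reduces to matching a problem's requirements against advertised capabilities. A toy sketch using a flat attribute-map representation; the agent names and attributes are invented, and the survey discusses far richer formalisms than this:

```python
# Each agent advertises its capability as a flat attribute map.
AGENTS = {
    "planner-1": {"task": "planning",   "domain": "logistics"},
    "scheduler": {"task": "scheduling", "domain": "job-shop"},
}

def broker(requirements, agents=AGENTS):
    """Return the names of agents whose advertised capability satisfies
    every requirement of the given problem."""
    return [name for name, cap in agents.items()
            if all(cap.get(k) == v for k, v in requirements.items())]

print(broker({"task": "scheduling"}))  # -> ['scheduler']
```

Real capability description languages must also handle partial matches, parameterized capabilities, and semantic mismatch between vocabularies, which is precisely what the richer formalisms surveyed in the article address.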
Learning to Recognize (Un)Promising Simulated Annealing Runs: Efficient Search Procedures for Job Shop Scheduling and Vehicle Routing
1997
"... Simulated Annealing (SA) procedures can potentially yield nearoptimal solutions to many difficult combinatorial optimization problems, though often at the expense of intensive computational efforts. The single most significant source of inefficiency in SA search is the inherent stochasticity of the ..."
Abstract
Simulated Annealing (SA) procedures can potentially yield near-optimal solutions to many difficult combinatorial optimization problems, though often at the expense of intensive computational efforts. The single most significant source of inefficiency in SA search is the inherent stochasticity of the procedure, typically requiring that the procedure be rerun a large number of times before a near-optimal solution is found. This paper describes a mechanism that attempts to learn the structure of the search space over multiple SA runs on a given problem. Specifically, probability distributions are dynamically updated over multiple runs to estimate at different checkpoints how promising an SA run appears to be. Based on this mechanism, two types of criteria are developed that aim at increasing search efficiency: (1) a cutoff criterion used to determine when to abandon unpromising runs and (2) restart criteria used to determine whether to start a fresh SA run or restart search in the middle of an earlier run. Experimental results obtained on a class of complex job shop scheduling problems show (1) that SA can produce high quality solutions for this class of problems, if run a large number of times, and (2) that our learning mechanism can significantly reduce the computation time required to find high quality solutions to these problems. The results also indicate that the closer one wants to be to the optimum, the larger the speedups. Similar results obtained on a smaller set of benchmark Vehicle Routing Problems with Time Windows (VRPTW) suggest that our learning mechanisms should help improve the efficiency of SA in a number of different domains.
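The cutoff criterion can be sketched as follows: across completed runs, remember what cost a run that eventually reached a good final solution had at a checkpoint, and abandon new runs that look clearly worse there. The single checkpoint and the simple threshold rule below are simplifying assumptions; the paper maintains probability distributions over several checkpoints:

```python
class CutoffLearner:
    """Toy version of the cutoff criterion: learn from completed SA runs
    what checkpoint cost a run that ended well tended to have."""

    def __init__(self, slack=1.1):
        self.good_checkpoint_costs = []
        self.slack = slack  # tolerance factor before abandoning a run

    def record(self, checkpoint_cost, final_cost, good_threshold):
        """After a run completes, remember its checkpoint cost if it
        ultimately reached a good final cost."""
        if final_cost <= good_threshold:
            self.good_checkpoint_costs.append(checkpoint_cost)

    def should_abandon(self, checkpoint_cost):
        """Abandon a live run whose checkpoint cost is clearly worse than
        that of any run that ended up good."""
        if not self.good_checkpoint_costs:
            return False  # nothing learned yet: let the run continue
        worst_good = max(self.good_checkpoint_costs)
        return checkpoint_cost > self.slack * worst_good

learner = CutoffLearner()
# Two completed runs: one that reached a good final cost, one that did not.
learner.record(checkpoint_cost=50.0, final_cost=10.0, good_threshold=12.0)
learner.record(checkpoint_cost=90.0, final_cost=30.0, good_threshold=12.0)
print(learner.should_abandon(80.0))  # -> True: worse than any good run was
print(learner.should_abandon(45.0))  # -> False: still looks promising
```

Abandoning a run early frees its computation time for a fresh run or a restart from an earlier promising state, which is exactly how the paper's mechanism converts learned structure into speedup.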