Results 1 – 10 of 225
UCPOP: A Sound, Complete, Partial Order Planner for ADL
1992
Abstract

Cited by 476 (24 self)
We describe the ucpop partial order planning algorithm which handles a subset of Pednault's ADL action representation. In particular, ucpop operates with actions that have conditional effects, universally quantified preconditions and effects, and with universally quantified goals. We prove ucpop is both sound and complete for this representation and describe a practical implementation that succeeds on all of Pednault's and McDermott's examples, including the infamous "Yale Stacking Problem" [McDermott 1991].
GSAT and Dynamic Backtracking
Journal of Artificial Intelligence Research, 1994
Abstract

Cited by 383 (15 self)
There has been substantial recent interest in two new families of search techniques. One family consists of nonsystematic methods such as gsat; the other contains systematic approaches that use a polynomial amount of justification information to prune the search space. This paper introduces a new technique that combines these two approaches. The algorithm allows substantial freedom of movement in the search space, but enough information is retained to ensure the systematicity of the resulting analysis. Bounds are given for the size of the justification database, and conditions are presented that guarantee that this database will be polynomial in the size of the problem in question.
1 INTRODUCTION The past few years have seen rapid progress in the development of algorithms for solving constraint-satisfaction problems, or csps. Csps arise naturally in subfields of AI from planning to vision, and examples include propositional theorem proving, map coloring and scheduling problems. The probl...
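The gsat procedure mentioned above is a greedy local search over truth assignments: repeatedly start from a random assignment and flip whichever variable satisfies the most clauses. A minimal sketch follows; the function name, clause encoding, and parameter defaults are illustrative, not taken from the paper:

```python
import random

def gsat(clauses, n_vars, max_tries=10, max_flips=100, seed=0):
    """Minimal GSAT sketch.  Clauses are lists of nonzero ints
    (DIMACS-style literals), e.g. [1, -2] means (x1 OR NOT x2)."""
    rng = random.Random(seed)

    def satisfied(assign):
        # Count clauses made true by the current assignment.
        return sum(any((lit > 0) == assign[abs(lit)] for lit in c)
                   for c in clauses)

    for _ in range(max_tries):
        # Random initial truth assignment, keyed by variable index.
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if satisfied(assign) == len(clauses):
                return assign
            # Greedily flip the variable whose flip satisfies the most clauses.
            best_v, best_score = None, -1
            for v in range(1, n_vars + 1):
                assign[v] = not assign[v]
                score = satisfied(assign)
                assign[v] = not assign[v]
                if score > best_score:
                    best_v, best_score = v, score
            assign[best_v] = not assign[best_v]
    return None  # no model found within the flip/try budget
```

Note that gsat is nonsystematic: restarts discard all justification information, which is exactly the gap the paper's hybrid technique addresses.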
Principles of Metareasoning
Artificial Intelligence, 1991
Abstract

Cited by 175 (10 self)
In this paper we outline a general approach to the study of metareasoning, not in the sense of explicating the semantics of explicitly specified metalevel control policies, but in the sense of providing a basis for selecting and justifying computational actions. This research contributes to a developing attack on the problem of resource-bounded rationality, by providing a means for analysing and generating optimal computational strategies. Because reasoning about a computation without doing it necessarily involves uncertainty as to its outcome, probability and decision theory will be our main tools. We develop a general formula for the utility of computations, this utility being derived directly from the ability of computations to affect an agent's external actions. We address some philosophical difficulties that arise in specifying this formula, given our assumption of limited rationality. We also describe a methodology for applying the theory to particular problem-solving systems, a...
Value-function approximations for partially observable Markov decision processes
Journal of Artificial Intelligence Research, 2000
Abstract

Cited by 156 (1 self)
Partially observable Markov decision processes (POMDPs) provide an elegant mathematical framework for modeling complex decision and planning problems in stochastic domains in which states of the system are observable only indirectly, via a set of imperfect or noisy observations. The modeling advantage of POMDPs, however, comes at a price: exact methods for solving them are computationally very expensive and thus applicable in practice only to very simple problems. We focus on efficient approximation (heuristic) methods that attempt to alleviate the computational problem and trade off accuracy for speed. We have two objectives here. First, we survey various approximation methods, analyze their properties and relations and provide some new insights into their differences. Second, we present a number of new approximation methods and novel refinements of existing techniques. The theoretical results are supported by experiments on a problem from the agent navigation domain.
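One of the simplest heuristics of the kind this survey covers is the QMDP approximation: value-iterate the underlying fully observable MDP, then act greedily on belief-weighted Q-values, as if all state uncertainty resolved after one step. The sketch below is a generic illustration under my own naming and data layout, not code from the paper:

```python
def qmdp_policy(T, R, gamma=0.95, iters=200):
    """QMDP heuristic sketch for a POMDP.  T[a][s][s2] is a transition
    probability, R[s][a] a reward; both names are illustrative."""
    n_states = len(R)
    n_actions = len(R[0])
    # Standard value iteration on the fully observable MDP.
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(iters):
        V = [max(row) for row in Q]
        Q = [[R[s][a] + gamma * sum(T[a][s][s2] * V[s2]
                                    for s2 in range(n_states))
              for a in range(n_actions)]
             for s in range(n_states)]

    def act(belief):
        # Pretend uncertainty vanishes after one step: score each
        # action by its expected Q-value under the belief.
        scores = [sum(belief[s] * Q[s][a] for s in range(n_states))
                  for a in range(n_actions)]
        return scores.index(max(scores))

    return act
```

QMDP is fast but ignores the value of information-gathering actions, which is one of the accuracy/speed trade-offs the survey analyzes.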
Temporal Planning with Continuous Change
1994
Abstract

Cited by 129 (10 self)
We present zeno, a least commitment planner that handles actions occurring over extended intervals of time. Deadline goals, metric preconditions, metric effects, and continuous change are supported. Simultaneous actions are allowed when their effects do not interfere. Unlike most planners that deal with complex languages, the zeno planning algorithm is sound and complete. The running code is a complete implementation of the formal algorithm, capable of solving simple problems (i.e., those involving fewer than a dozen steps).
Introduction We have built a least commitment planner, zeno, that handles actions occurring over extended intervals of time and whose preconditions and effects can be temporally quantified. These capabilities enable zeno to reason about deadline goals, piecewise-linear continuous change, external events and, to a limited extent, simultaneous actions. While other planners exist with some of these features, zeno is different because it is both sound and complete. As a...
Efficient Progressive Sampling
1999
Abstract

Cited by 106 (10 self)
Having access to massive amounts of data does not necessarily imply that induction algorithms must use them all. Samples often provide the same accuracy with far less computational cost. However, the correct sample size is rarely obvious. We analyze methods for progressive sampling: starting with small samples and progressively increasing them as long as model accuracy improves. We show that a simple, geometric sampling schedule is efficient in an asymptotic sense. We then explore the notion of optimal efficiency: what is the absolute best sampling schedule? We describe the issues involved in instantiating an "optimally efficient" progressive sampler. Finally, we provide empirical results comparing a variety of progressive sampling methods. We conclude that progressive sampling often is preferable to analyzing all data instances.
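The geometric schedule discussed above can be sketched in a few lines; `train_eval` and the convergence test below are illustrative stand-ins, not the paper's instrumentation:

```python
def progressive_sample(train_eval, n_total, n0=100, mult=2, tol=1e-3):
    """Geometric progressive sampling sketch: grow the sample size by a
    constant factor until accuracy stops improving or the data run out.
    `train_eval(n)` trains on n instances and returns accuracy in [0, 1].
    """
    n = n0
    prev_acc = -1.0
    while True:
        n = min(n, n_total)        # never sample beyond the dataset
        acc = train_eval(n)
        # Stop at convergence or once all instances are used.
        if n == n_total or acc - prev_acc <= tol:
            return n, acc
        prev_acc = acc
        n *= mult                  # geometric schedule: n0, 2*n0, 4*n0, ...
    
```

Because sample sizes grow geometrically, the total work is dominated by the last run, which is the asymptotic-efficiency argument the paper formalizes.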
Computer Go: an AI Oriented Survey
Artificial Intelligence, 2001
Abstract

Cited by 95 (20 self)
Since the beginning of AI, mind games have been studied as relevant application fields. Nowadays, some programs are better than human players in most classical games. Their results highlight the efficiency of AI methods that are now quite standard. Such methods are very useful to Go programs, but they do not enable a strong Go program to be built. The problems related to Computer Go require new AI problem-solving methods. Given the great number of problems and the diversity of possible solutions, Computer Go is an attractive research domain for AI. Prospective methods of programming the game of Go will probably be of interest in other domains as well. The goal of this paper is to present Computer Go by showing the links between existing studies on Computer Go and different AI-related domains: evaluation function, heuristic search, machine learning, automatic knowledge generation, mathematical morphology and cognitive science. In addition, this paper describes both the practical aspects of Go programming, such as program optimization, and various theoretical aspects such as combinatorial game theory, mathematical morphology, and Monte-Carlo methods.
B. Bouzy, T. Cazenave, 08/06/01
Depth-bounded Discrepancy Search
In Proceedings of IJCAI-97, 1997
Abstract

Cited by 84 (0 self)
Many search trees are impractically large to explore exhaustively. Recently, techniques like limited discrepancy search have been proposed for improving the chance of finding a goal in a limited amount of search. Depth-bounded discrepancy search offers such a hope. The motivation behind depth-bounded discrepancy search is that branching heuristics are more likely to be wrong at the top of the tree than at the bottom. We therefore combine one of the best features of limited discrepancy search, the ability to undo early mistakes, with the completeness of iterative deepening search. We show theoretically and experimentally that this novel combination outperforms existing techniques.
1 Introduction On backtracking, depth-first search explores decisions made against the branching heuristic (or "discrepancies"), starting with decisions made deep in the search tree. However, branching heuristics are more likely to be wrong at the top of the tree than at the bottom. We would like theref...
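The idea can be sketched as follows: iteration k permits branching against the heuristic only above depth k, and below that bound the best child is always taken. This is a simplified illustration with names of my choosing; unlike the paper's exact scheme, this naive version revisits some paths across iterations:

```python
def dds(root, goal, children, max_depth):
    """Depth-bounded discrepancy search, simplified sketch.
    `children(node)` returns children ordered best-first by the
    branching heuristic; `goal(node)` tests for a solution."""
    def probe(node, depth, bound):
        if goal(node):
            return node
        if depth >= max_depth:
            return None
        kids = children(node)
        if not kids:
            return None
        if depth >= bound:
            # Below the discrepancy bound: trust the heuristic.
            return probe(kids[0], depth + 1, bound)
        # Above the bound: discrepancies allowed, try children best-first.
        for kid in kids:
            found = probe(kid, depth + 1, bound)
            if found is not None:
                return found
        return None

    # Iteratively deepen the depth at which discrepancies are allowed.
    for bound in range(max_depth + 1):
        found = probe(root, 0, bound)
        if found is not None:
            return found
    return None
```

Early iterations thus correct mistakes near the root first, exactly where the heuristic is assumed least reliable.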
Near optimal hierarchical pathfinding
Journal of Game Development, 2004
Abstract

Cited by 78 (11 self)
The problem of pathfinding in commercial computer games has to be solved in real time, often under constraints of limited memory and CPU resources. The computational effort required to find a path, using a search algorithm such as A*, increases with the size of the search space. Hence, pathfinding on large maps can result in serious performance bottlenecks. This paper presents HPA* (Hierarchical Path-Finding A*), a hierarchical approach for reducing problem complexity in pathfinding on grid-based maps. This technique abstracts a map into linked local clusters. At the local level, the optimal distances for crossing each cluster are precomputed and cached. At the global level, clusters are traversed in a single big step. A hierarchy can be extended to more than two levels. Small clusters are grouped together to form larger clusters. Computing crossing distances for a large cluster uses distances computed for the smaller contained clusters. Our method is automatic and does not depend on a specific topology. Both random and real-game maps are successfully handled using no domain-specific knowledge. Our problem decomposition approach works very well in domains with a dynamically changing environment. The technique also has the advantage of simplicity and is easy to implement. If desired, more sophisticated, domain-specific algorithms can be plugged in for increased performance. The experimental results show a great reduction of the search effort. Compared to a highly-optimized A*, HPA* is shown to be up to 10 times faster, while finding paths that are within 1% of optimal.
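The global-level step described above, searching a small abstract graph whose edge costs were precomputed by local intra-cluster searches, can be sketched as follows. Building the cluster abstraction itself is elided here, the path-refinement step is omitted (only the abstract cost is returned), and all names are illustrative rather than the paper's:

```python
import heapq

def hpa_search(abstract_graph, connect, start, goal):
    """Global level of an HPA*-style search, simplified sketch.
    `abstract_graph` maps each entrance node to {neighbor: cost}
    (costs precomputed and cached by local searches inside clusters);
    `connect(cell)` returns {entrance: cost} links for a concrete
    start or goal cell within its own cluster."""
    # Temporarily link the start and goal cells into the abstract graph.
    graph = dict(abstract_graph)
    graph[start] = dict(connect(start))
    for entrance, cost in connect(goal).items():
        graph[entrance] = {**graph.get(entrance, {}), goal: cost}
    graph.setdefault(goal, {})

    # Plain Dijkstra over the (small) abstract graph.
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return None
```

The speedup comes from the abstract graph having orders of magnitude fewer nodes than the underlying grid, while the cached crossing costs keep the result near-optimal.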
Iterative Broadening
Artificial Intelligence, 1990
Abstract

Cited by 66 (6 self)
Conventional blind search techniques generally assume that the goal nodes for a given problem are distributed randomly along the fringe of the search tree. We argue that this is often invalid in practice and suggest that a more reasonable assumption is that decisions made at each point in the search carry equal weight. We go on to show that a new search technique called iterative broadening leads to orders-of-magnitude savings in the time needed to search a space satisfying this assumption; the basic idea is to search the space using artificial breadth cutoffs that are gradually increased until a goal is found. Both theoretical and experimental results are presented.
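The basic idea, depth-first search under an artificial breadth cutoff that is gradually increased, can be sketched as follows; helper names are illustrative, and the restart test assumes a uniform branching factor:

```python
def iterative_broadening(root, goal, children, max_depth):
    """Iterative broadening sketch: depth-first search that considers
    only the first b children at every node, restarting with a larger
    breadth cutoff b until a goal is found or b covers the full
    branching factor."""
    def dfs(node, depth, b):
        if goal(node):
            return node
        if depth == max_depth:
            return None
        for kid in children(node)[:b]:  # artificial breadth cutoff
            found = dfs(kid, depth + 1, b)
            if found is not None:
                return found
        return None

    b = 1
    while True:
        found = dfs(root, 0, b)
        if found is not None:
            return found
        if b >= max(len(children(root)), 1):
            # Cutoff already spans the branching factor: search was complete.
            return None
        b += 1
```

Early passes are cheap and already reach the full depth of the tree, so a goal reachable under a narrow cutoff is found long before an unrestricted depth-first search would reach it.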