Results 1 - 6 of 6
UCPOP: A Sound, Complete, Partial Order Planner for ADL
1992
Cited by 447 (24 self)
Abstract
We describe the ucpop partial order planning algorithm which handles a subset of Pednault's ADL action representation. In particular, ucpop operates with actions that have conditional effects, universally quantified preconditions and effects, and with universally quantified goals. We prove ucpop is both sound and complete for this representation and describe a practical implementation that succeeds on all of Pednault's and McDermott's examples, including the infamous "Yale Stacking Problem" [McDermott 1991].
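As an illustrative aside, a universally quantified conditional effect of the kind the abstract mentions can be sketched as follows. This is not UCPOP itself (which is a partial-order planner, not a forward executor); the briefcase domain, `move_briefcase` function, and atom encoding are all hypothetical stand-ins:

```python
# Sketch of an action with a universally quantified conditional effect
# (in the spirit of ADL). States are sets of ground atoms; the domain
# and names here are illustrative, not from the paper.

def move_briefcase(state, src, dst):
    """Move the briefcase from src to dst; everything inside moves too."""
    if ("at", "briefcase", src) not in state:
        raise ValueError("precondition failed: briefcase not at " + src)
    new = set(state)
    new.discard(("at", "briefcase", src))
    new.add(("at", "briefcase", dst))
    # Conditional effect: for all x, WHEN in(x, briefcase)
    # THEN at(x, dst) and not at(x, src).
    for atom in state:
        if atom[0] == "in" and atom[2] == "briefcase":
            x = atom[1]
            new.discard(("at", x, src))
            new.add(("at", x, dst))
    return new

state = {("at", "briefcase", "home"),
         ("in", "paycheck", "briefcase"),
         ("at", "paycheck", "home")}
print(move_briefcase(state, "home", "office"))
```

The point of the conditional effect is that the paycheck's location changes only because it happens to be in the briefcase; a planner handling ADL must reason about such effects when establishing and protecting subgoals.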
Efficient Progressive Sampling
1999
Cited by 96 (9 self)
Abstract
Having access to massive amounts of data does not necessarily imply that induction algorithms must use them all. Samples often provide the same accuracy with far less computational cost. However, the correct sample size is rarely obvious. We analyze methods for progressive sampling: starting with small samples and progressively increasing them as long as model accuracy improves. We show that a simple, geometric sampling schedule is efficient in an asymptotic sense. We then explore the notion of optimal efficiency: what is the absolute best sampling schedule? We describe the issues involved in instantiating an "optimally efficient" progressive sampler. Finally, we provide empirical results comparing a variety of progressive sampling methods. We conclude that progressive sampling often is preferable to analyzing all data instances.
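The geometric schedule described above can be sketched as follows. The stopping rule (stop when accuracy improves by at most `eps`) and the synthetic learning curve are illustrative stand-ins, not the paper's convergence-detection method:

```python
import math

def progressive_sample_sizes(n0, a, n_max):
    """Geometric schedule: n0, n0*a, n0*a**2, ... capped at n_max."""
    n = n0
    while n < n_max:
        yield n
        n = int(n * a)
    yield n_max

def progressive_sampling(accuracy, n0=100, a=2, n_max=100_000, eps=0.001):
    """Enlarge the sample until accuracy stops improving by more than eps.
    `accuracy(n)` stands in for training on n instances and measuring
    held-out accuracy (a hypothetical learning curve here)."""
    prev = -1.0
    for n in progressive_sample_sizes(n0, a, n_max):
        acc = accuracy(n)
        if acc - prev <= eps:
            return n, acc
        prev = acc
    return n, acc

# A stand-in learning curve that plateaus near 0.9.
curve = lambda n: 0.9 - 0.5 / math.sqrt(n)
print(progressive_sampling(curve))
```

With a geometric schedule, the total number of instances processed across all iterations is within a constant factor of the final sample size, which is the intuition behind the asymptotic-efficiency claim.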
Efficient memory-bounded search methods
Cited by 54 (0 self)
Abstract
Memory-bounded algorithms such as Korf's IDA* and Chakrabarti et al.'s MA* are designed to overcome the impractical memory requirements of heuristic search algorithms such as A*. It is shown that IDA* is inefficient when the heuristic function can take on a large number of values; this is a consequence of using too little memory. Two new algorithms are developed. The first, SMA*, simplifies and improves upon MA*, making the best use of all available memory. The second, Iterative Expansion (IE), is a simple recursive algorithm that uses linear space and incurs little overhead. Experiments indicate that both algorithms perform well.
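A minimal IDA* sketch makes the inefficiency noted above concrete: each iteration raises the f = g + h threshold to the smallest value that exceeded it, so with many distinct f-values there are many re-searching iterations. This is Korf's IDA*, not the paper's SMA* or IE; the graph and heuristic are hypothetical:

```python
# Minimal IDA* (linear memory via depth-first search under an f-bound).

def ida_star(start, goal, neighbors, h):
    def dfs(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None          # cutoff: report candidate next bound
        if node == goal:
            return f, path
        nxt = float("inf")
        for succ, cost in neighbors(node):
            if succ not in path:    # avoid cycles on the current path
                t, found = dfs(succ, g + cost, bound, path + [succ])
                if found is not None:
                    return t, found
                nxt = min(nxt, t)
        return nxt, None

    bound = h(start)
    while True:                     # one re-search per distinct f-bound
        bound, path = dfs(start, 0, bound, [start])
        if path is not None:
            return path
        if bound == float("inf"):
            return None

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1), ("d", 5)],
         "c": [("d", 2)], "d": []}
h = {"a": 3, "b": 2, "c": 2, "d": 0}
print(ida_star("a", "d", lambda n: graph[n], h.get))
```

SMA* avoids this re-expansion by keeping as many nodes in memory as will fit and dropping only the worst ones when memory runs out.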
Bidirectional Heuristic Search Reconsidered
 Journal of Artificial Intelligence Research
1997
Cited by 33 (2 self)
Abstract
The assessment of bidirectional heuristic search has been incorrect since it was first published more than a quarter of a century ago. For quite a long time, this search strategy did not achieve the expected results, and there was a major misunderstanding about the reasons behind it. Although there is still widespread belief that bidirectional heuristic search is afflicted by the problem of search frontiers passing each other, we demonstrate that this conjecture is wrong. Based on this finding, we present both a new generic approach to bidirectional heuristic search and a new approach to dynamically improving heuristic values that is feasible in bidirectional search only. These approaches are put into perspective with both the traditional and more recently proposed approaches in order to facilitate a better overall understanding. Empirical results of experiments with our new approaches show that bidirectional heuristic search can be performed very efficiently and also with limited mem...
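For readers unfamiliar with the setting, a bare-bones bidirectional search looks like the following: two frontiers grow from start and goal, and the search stops when they meet. This is a generic breadth-first sketch on a hypothetical graph, not the paper's new approach, and with this naive alternating expansion the connecting path is not guaranteed shortest in all graphs:

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Expand forward and backward frontiers alternately; stop where
    they meet and splice the two half-paths together."""
    if start == goal:
        return [start]
    parents = {start: None}          # search tree rooted at start
    children = {goal: None}          # search tree rooted at goal
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        for q, tree, other in ((qf, parents, children),
                               (qb, children, parents)):
            node = q.popleft()
            for succ in neighbors(node):
                if succ in tree:
                    continue
                tree[succ] = node
                if succ in other:    # the frontiers meet here
                    return _join(succ, parents, children)
                q.append(succ)
    return None

def _join(meet, parents, children):
    path, n = [], meet
    while n is not None:             # walk back to start
        path.append(n)
        n = parents[n]
    path.reverse()
    n = children[meet]
    while n is not None:             # walk forward to goal
        path.append(n)
        n = children[n]
    return path

graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(bidirectional_search("a", "d", graph.__getitem__))
```

The "frontiers passing each other" worry the abstract refutes concerns the heuristic variant, where each direction's frontier is guided toward the other end rather than expanded blindly as here.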
Average-case analysis of a search algorithm for estimating prior and posterior probabilities in Bayesian networks with extreme probabilities
1993
Cited by 30 (4 self)
Abstract
This paper provides a search-based algorithm for computing prior and posterior probabilities in discrete Bayesian networks. This is an "anytime" algorithm that at any stage can estimate the probabilities and give an error bound. Whereas the most popular Bayesian net algorithms exploit the structure of the network for efficiency, we exploit probability distributions for efficiency. The algorithm is most suited to the case where we have extreme (close to zero or one) probabilities, as is the case in many diagnostic situations where we are diagnosing systems that work most of the time, and for commonsense reasoning tasks where normality assumptions (allegedly) dominate. We give a characterisation of those cases where it works well, and discuss how well it can be expected to work on average.
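The anytime idea can be sketched on a tiny example: examine full joint assignments in decreasing probability, accumulate the mass consistent with the query, and bound the error by the mass not yet examined. With extreme probabilities, a few assignments carry almost all the mass, so the bounds tighten quickly. The two-node network A -> B and its tables below are hypothetical, and this enumeration is only the spirit of the approach, not the paper's search algorithm:

```python
from itertools import product

p_a = {True: 0.99, False: 0.01}                  # P(A)
p_b = {True: {True: 0.98, False: 0.02},          # P(B | A)
       False: {True: 0.05, False: 0.95}}

def anytime_query(k):
    """Bounds on P(B=true) after the k most probable joint assignments.
    Returns (lower, upper); upper - lower is the unexamined mass."""
    joints = sorted(
        ((p_a[a] * p_b[a][b], a, b)
         for a, b in product([True, False], repeat=2)),
        reverse=True)
    lower, mass = 0.0, 0.0
    for p, a, b in joints[:k]:
        mass += p
        if b:                        # assignment consistent with B=true
            lower += p
    return lower, lower + (1.0 - mass)

for k in range(1, 5):
    print(k, anytime_query(k))
```

Because P(A=true) and P(B=true | A=true) are extreme, the single most probable assignment already pins P(B=true) to within about 0.03, illustrating why the method suits diagnosis of mostly-working systems.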
Total-Order and Partial-Order Planning: A Comparative Analysis
 Journal of Artificial Intelligence Research
1994
Cited by 28 (2 self)
Abstract
For many years, the intuitions underlying partial-order planning were largely taken for granted. Only in the past few years has there been renewed interest in the fundamental principles underlying this paradigm. In this paper, we present a rigorous comparative analysis of partial-order and total-order planning by focusing on two specific planners that can be directly compared. We show that there are some subtle assumptions that underlie the widespread intuitions regarding the supposed efficiency of partial-order planning. For instance, the superiority of partial-order planning can depend critically upon the search strategy and the structure of the search space. Understanding the underlying assumptions is crucial for constructing efficient planners.