Results 1–10 of 49
The Fast Downward planning system
 Journal of Artificial Intelligence Research, 2006
Abstract

Cited by 347 (29 self)
Fast Downward is a classical planning system based on heuristic search. It can deal with general deterministic planning problems encoded in the propositional fragment of PDDL2.2, including advanced features like ADL conditions and effects and derived predicates (axioms). Like other well-known planners such as HSP and FF, Fast Downward is a progression planner, searching the space of world states of a planning task in the forward direction. However, unlike other PDDL planning systems, Fast Downward does not use the propositional PDDL representation of a planning task directly. Instead, the input is first translated into an alternative representation called multi-valued planning tasks, which makes many of the implicit constraints of a propositional planning task explicit. Exploiting this alternative representation, Fast Downward uses hierarchical decompositions of planning tasks for computing its heuristic function, called the causal graph heuristic, which is very different from traditional HSP-like heuristics based on ignoring negative interactions of operators. In this article, we give a full account of Fast Downward’s approach to solving multi-valued planning tasks. We extend our earlier discussion of the causal graph heuristic to tasks involving ...
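The causal graph this abstract refers to can be sketched in a few lines. The sketch below is an illustration under assumed data structures (operators as dicts of variable-to-value preconditions and effects), not Fast Downward's actual API: the graph has an arc from variable u to variable v whenever an operator that changes v mentions u in its preconditions or effects.

```python
# Illustrative sketch (assumed data format, not Fast Downward's API):
# an operator is a dict with "pre" and "eff" mappings from state
# variables to values in a multi-valued planning task.

def causal_graph(operators):
    """Arcs (u, v): some operator affecting v mentions u in its
    preconditions or effects (u != v)."""
    arcs = set()
    for op in operators:
        mentioned = set(op["pre"]) | set(op["eff"])
        for v in op["eff"]:
            arcs |= {(u, v) for u in mentioned if u != v}
    return arcs

# Toy logistics-style task: driving changes the truck's position;
# loading changes the package's position and depends on the truck's.
ops = [
    {"pre": {"truck": "A"}, "eff": {"truck": "B"}},
    {"pre": {"truck": "B", "package": "B"}, "eff": {"package": "truck"}},
]
assert causal_graph(ops) == {("truck", "package")}
```

Here the package variable causally depends on the truck variable but not vice versa; this is the kind of hierarchy the causal graph heuristic decomposes.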
The LAMA planner: guiding cost-based anytime planning with landmarks.
 Journal of Artificial Intelligence Research (JAIR), 2010
Abstract

Cited by 141 (5 self)
LAMA is a classical planning system based on heuristic forward search. Its core feature is the use of a pseudo-heuristic derived from landmarks, propositional formulas that must be true in every solution of a planning task. LAMA builds on the Fast Downward planning system, using finite-domain rather than binary state variables and multi-heuristic search. The latter is employed to combine the landmark heuristic with a variant of the well-known FF heuristic. Both heuristics are cost-sensitive, focusing on high-quality solutions in the case where actions have non-uniform cost. A weighted A* search is used with iteratively decreasing weights, so that the planner continues to search for plans of better quality until the search is terminated. LAMA showed the best performance among all planners in the sequential satisficing track of the International Planning Competition 2008. In this paper we present the system in detail and investigate which features of LAMA are crucial for its performance. We present individual results for some of the domains used at the competition, demonstrating good and bad cases for the techniques implemented in LAMA. Overall, we find that using landmarks improves performance, whereas the incorporation of action costs into the heuristic estimators proves not to be beneficial. We show that in some domains a search that ignores cost solves far more problems, raising the question of how to deal with action costs more effectively in the future. The iterated weighted A* search greatly improves results, and shows synergy effects with the use of landmarks.
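The anytime strategy described in this abstract (weighted A* rerun with decreasing weights, keeping the cheapest plan found so far) can be sketched generically. The code below is a simplified illustration on an abstract state space, not LAMA's implementation; the weight schedule and toy task are assumptions for the example.

```python
import heapq

def weighted_astar(start, is_goal, succ, h, w):
    """Best-first search with f = g + w * h; returns (cost, plan) or None."""
    frontier = [(w * h(start), 0, start, [])]
    best_g = {start: 0}
    while frontier:
        _, g, s, plan = heapq.heappop(frontier)
        if is_goal(s):
            return g, plan
        if g > best_g.get(s, float("inf")):
            continue  # stale queue entry
        for action, s2, cost in succ(s):
            g2 = g + cost
            if g2 < best_g.get(s2, float("inf")):
                best_g[s2] = g2
                heapq.heappush(frontier, (g2 + w * h(s2), g2, s2, plan + [action]))
    return None

def iterated_weighted_astar(start, is_goal, succ, h, weights=(5, 3, 2, 1)):
    """Anytime loop: rerun weighted A* with decreasing weights and keep
    the cheapest plan found so far."""
    best = None
    for w in weights:
        result = weighted_astar(start, is_goal, succ, h, w)
        if result is not None and (best is None or result[0] < best[0]):
            best = result
    return best

# Toy task: reach state 3 from 0; "+1" costs 1, "+2" costs 3.
succ = lambda s: [(a, s + d, c) for a, d, c in (("+1", 1, 1), ("+2", 2, 3)) if s + d <= 3]
best = iterated_weighted_astar(0, lambda s: s == 3, succ, lambda s: 3 - s)
assert best == (3, ["+1", "+1", "+1"])
```

With a high weight the search greedily takes the expensive "+2" step and finds a cost-4 plan quickly; the later low-weight iterations recover the optimal cost-3 plan, which is the anytime behavior the abstract describes.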
A Planning Heuristic Based on Causal Graph Analysis
Abstract

Cited by 119 (18 self)
In recent years, heuristic search methods for classical planning have achieved remarkable results. Their most successful representative, the FF algorithm, performs well over a wide spectrum of planning domains and still sets the state of the art for STRIPS planning. However, there are some planning domains in which algorithms like FF and HSP perform poorly because their relaxation method of ignoring the “delete lists” of STRIPS operators loses too much vital information. Planning domains which have many dead ends in the search space are especially problematic in this regard. In some domains, dead ends are readily found by the human observer yet remain undetected by all propositional planning systems we are aware of. We believe that this is partly because the STRIPS representation obscures the important causal structure of the domain, which is evident to humans. In this paper, we propose translating STRIPS problems to a planning formalism with multi-valued state variables in order to expose this underlying causal structure. Moreover, we show how this structure can be exploited by an algorithm for detecting dead ends in the search space and by a planning heuristic based on hierarchical problem decomposition. Our experiments show excellent overall performance on the benchmarks from the international planning competitions.
Landmarks revisited
 in: Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (AAAI-08), 2008
Abstract

Cited by 89 (15 self)
Landmarks for propositional planning tasks are variable assignments that must occur at some point in every solution plan. We propose a novel approach for using landmarks in planning by deriving a pseudo-heuristic and combining it with other heuristics in a search framework. The incorporation of landmark information is shown to improve success rates and solution qualities of a heuristic planner. We furthermore show how additional landmarks and orderings can be found using the information present in multi-valued state variable representations of planning tasks. Compared to previously published approaches, our landmark extraction algorithm provides stronger guarantees of correctness for the generated landmark orderings, and our novel use of landmarks during search solves more planning tasks and delivers considerably better solutions.
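The pseudo-heuristic idea shared by this paper and LAMA can be sketched as counting the landmarks not yet achieved along the search path, respecting the landmark orderings. The data structures below (landmarks as atoms, orderings as predecessor pairs) are illustrative assumptions, not the papers' implementation.

```python
# Hedged sketch of a landmark-counting pseudo-heuristic under assumed
# data structures: landmarks are atoms, orderings are (before, after) pairs.

def update_accepted(accepted, state, landmarks, orderings):
    """A landmark becomes accepted once it holds in the current state and
    all of its ordered predecessors are already accepted."""
    changed = True
    while changed:
        changed = False
        for lm in landmarks:
            if lm in accepted or lm not in state:
                continue
            predecessors = {a for (a, b) in orderings if b == lm}
            if predecessors <= accepted:
                accepted.add(lm)
                changed = True
    return accepted

def landmark_count(accepted, landmarks):
    # Path-dependent estimate, hence "pseudo-heuristic" rather than a
    # genuine function of the state alone.
    return len(landmarks - accepted)

landmarks = {"have-key", "door-open", "at-goal"}
orderings = {("have-key", "door-open"), ("door-open", "at-goal")}
accepted = update_accepted(set(), {"have-key", "door-open"}, landmarks, orderings)
assert landmark_count(accepted, landmarks) == 1  # only "at-goal" remains
```

Because acceptance depends on the path taken, not just the current state, the estimate is not a heuristic in the classical sense, which is exactly why the abstract calls it a pseudo-heuristic.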
Computational Aspects of Reordering Plans
 Journal of Artificial Intelligence Research, 1998
Abstract

Cited by 69 (0 self)
This article studies the problem of modifying the action ordering of a plan in order to optimise the plan according to various criteria. One of these criteria is to make a plan less constrained and the other is to minimize its parallel execution time. Three candidate definitions are proposed for the first of these criteria, constituting a sequence of increasing optimality guarantees. Two of these are based on deordering plans, which means that ordering relations may only be removed, not added, while the third one uses reordering, where arbitrary modifications to the ordering are allowed. It is shown that only the weakest one of the three criteria is tractable to achieve, the other two being NP-hard and even difficult to approximate. Similarly, optimising the parallel execution time of a plan is studied both for deordering and reordering of plans. In the general case, both of these computations are NP-hard. However, it is shown that optimal deorderings can be computed in polynomial time...
Concise finite-domain representations for PDDL planning tasks
 2009
Abstract

Cited by 63 (13 self)
We introduce an efficient method for translating planning tasks specified in the standard PDDL formalism into a concise grounded representation that uses finite-domain state variables instead of the straightforward propositional encoding. Translation is performed in four stages. Firstly, we transform the input task into an equivalent normal form expressed in a restricted fragment of PDDL. Secondly, we synthesize invariants of the planning task that identify groups of mutually exclusive propositions which can be represented by a single finite-domain variable. Thirdly, we perform an efficient relaxed reachability analysis using logic programming techniques to obtain a grounded representation of the input. Finally, we combine the results of the second and third stages to generate the final grounded finite-domain representation. The presented approach was originally implemented as part of the Fast Downward planning system for the 4th International Planning Competition (IPC4). Since then, it has been used in a number of other contexts with considerable success, and the use of concise finite-domain representations has become a common feature of state-of-the-art planners.
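The final encoding step can be sketched in miniature: each synthesized mutex group (propositions of which at most one can be true) becomes one finite-domain variable. The data format below is an illustrative assumption; the real translator also normalizes and grounds the PDDL input first.

```python
# Hedged sketch of finite-domain encoding from mutex groups.
# Input format is illustrative, not the translator's actual interface.

def encode_state(mutex_groups, true_props):
    """Map a propositional state to finite-domain assignments: each group's
    value is its (unique) true proposition, or "none" if all are false."""
    assignment = {}
    for var, group in enumerate(mutex_groups):
        holding = [p for p in group if p in true_props]
        if len(holding) > 1:
            raise ValueError("group is not actually mutex in this state")
        assignment[var] = holding[0] if holding else "none"
    return assignment

# One variable for the package's location instead of three propositions:
groups = [("at-A", "at-B", "in-truck")]
assert encode_state(groups, {"at-B"}) == {0: "at-B"}
assert encode_state(groups, set()) == {0: "none"}
```

A group of n mutually exclusive propositions thus collapses into a single variable with n+1 values, which is where the conciseness of the representation comes from.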
Structure and Complexity in Planning with Unary Operators
 Journal of Artificial Intelligence Research, 2003
Abstract

Cited by 53 (10 self)
Unary operator domains, i.e., domains in which operators have a single effect, arise naturally in many control problems. In its most general form, the problem of STRIPS planning in unary operator domains is known to be as hard as the general STRIPS planning problem: both are PSPACE-complete. However, unary operator domains induce a natural structure, called the domain's causal graph. This graph relates the preconditions and effect of each domain operator. Causal graphs were exploited by Williams and Nayak in order to analyze plan generation for one of the controllers in NASA's Deep Space One spacecraft. There, they utilized the fact that when this graph is acyclic, a serialization ordering over any subgoal can be obtained quickly. In this paper we conduct a comprehensive study of the relationship between the structure of a domain's causal graph and the complexity of planning in this domain. On the positive side, we show that a non-trivial polynomial-time plan generation algorithm exists for domains whose causal graph induces a polytree with a constant bound on its node in-degree. On the negative side, we show that even plan existence is hard when the graph is a directed-path singly connected DAG.
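The tractable case identified here is easy to test for a given causal graph: the underlying undirected graph must be acyclic (so the digraph is a polytree) and every node's in-degree bounded by a constant. A minimal sketch, using union-find to detect undirected cycles:

```python
# Illustrative check for the paper's tractable case: polytree causal
# graph with bounded in-degree. Not tied to any planner's actual API.

def is_bounded_polytree(nodes, arcs, k):
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    indegree = {v: 0 for v in nodes}
    for u, v in arcs:
        indegree[v] += 1
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # undirected cycle: not a polytree
        parent[ru] = rv
    return max(indegree.values(), default=0) <= k

# a -> c <- b is a polytree with in-degree 2 at c ...
assert is_bounded_polytree("abc", [("a", "c"), ("b", "c")], k=2)
assert not is_bounded_polytree("abc", [("a", "c"), ("b", "c")], k=1)
# ... while a -> b -> c plus a -> c has an undirected cycle.
assert not is_bounded_polytree("abc", [("a", "b"), ("b", "c"), ("a", "c")], k=2)
```

Note the check is structural only: passing it makes the domain fall into the paper's positive complexity class, but the plan generation algorithm itself is a separate matter.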
Course of action generation for cyber security using classical planning
 In Proc. of ICAPS’05, 2005
Abstract

Cited by 29 (1 self)
We report on the results of applying classical planning techniques to the problem of analyzing computer network vulnerabilities. Specifically, we are concerned with the generation of Adversary Courses of Action, which are extended sequences of exploits leading from some initial state to an attacker’s goal. In this application, we have demonstrated the generation of attack plans for a simple but realistic web-based document control system, with excellent performance compared to the prevailing state of the art in this area. In addition to the new capabilities gained in the area of vulnerability analysis, this implementation provided some insights into performance and modeling issues for classical planning systems, both specifically with regard to Metric-FF and other forward heuristic planners, and more generally for classical planning. To facilitate additional work in this area, the domain model on which this work was done will be made freely available. See the paper’s Conclusion for details.
Analyzing search topology without running any search: On the connection between causal graphs and h+
 Journal of Artificial Intelligence Research (JAIR), 2011
Abstract

Cited by 19 (3 self)
The ignoring-delete-lists relaxation is of paramount importance for both satisficing and optimal planning. In earlier work, it was observed that the optimal relaxation heuristic h+ has amazing qualities in many classical planning benchmarks, in particular pertaining to the complete absence of local minima. The proofs of this are hand-made, raising the question whether such proofs can be led automatically by domain analysis techniques. In contrast to earlier disappointing results (the analysis method has exponential runtime and succeeds only in two extremely simple benchmark domains), we herein answer this question in the affirmative. We establish connections between causal graph structure and h+ topology. This results in low-order polynomial-time analysis methods, implemented in a tool we call TorchLight. Of the 12 domains where the absence of local minima has been proved, TorchLight gives strong success guarantees in 8 domains. Empirically, its analysis exhibits strong performance in a further 2 of these domains, plus in 4 more domains where local minima may exist but are rare. In this way, TorchLight can distinguish “easy” domains from “hard” ones. By summarizing structural reasons for analysis failure, TorchLight also provides diagnostic output indicating domain aspects that may cause local minima.
Reducing Accidental Complexity in Planning Problems
Abstract

Cited by 17 (1 self)
Although even propositional STRIPS planning is a hard problem in general, many instances of the problem, including many of those commonly used as benchmarks, are easy. In spite of this, they are often hard to solve for domain-independent planners, because the encoding of the problem into a general problem specification formalism such as STRIPS hides structure that needs to be exploited to solve problems easily. We investigate the use of automatic problem transformations to reduce this “accidental” problem complexity. The main tool is abstraction: we identify a new, weaker condition under which abstraction is “safe”, in the sense that any solution to the abstracted problem can be refined to a concrete solution (in polynomial time, for most cases), and also show how different kinds of problem reformulations can be applied to create greater opportunities for such safe abstraction.