Results 1-10 of 35
Disjoint pattern database heuristics
Artificial Intelligence, 2002
"... We explore a method for computing admissible heuristic evaluation functions for search problems. It utilizes pattern databases (Culberson & Schaeffer, 1998), which are precomputed tables of the exact cost of solving various subproblems of an existing problem. Unlike standard pattern database heurist ..."
Cited by 102 (24 self)
Abstract
We explore a method for computing admissible heuristic evaluation functions for search problems. It utilizes pattern databases (Culberson & Schaeffer, 1998), which are precomputed tables of the exact cost of solving various subproblems of an existing problem. Unlike standard pattern database heuristics, however, we partition our problems into disjoint subproblems, so that the costs of solving the different subproblems can be added together without overestimating the cost of solving the original problem. Previously (Korf & Felner, 2002) we showed how to statically partition the sliding-tile puzzles into disjoint groups of tiles to compute an admissible heuristic, using the same partition for each state and problem instance. Here we extend the method and show that it applies to other domains as well. We also present another method for additive heuristics which we call dynamically partitioned pattern databases. Here we partition the problem into disjoint subproblems for each state of the search dynamically. We discuss the pros and cons of each of these methods and apply both methods to three different problem domains: the sliding-tile puzzles, the 4-peg Towers of Hanoi problem, and finding an optimal vertex cover of a graph. We find that in some problem domains, static partitioning is most effective, while in others dynamic partitioning is a better choice. In each of these problem domains, either statically partitioned or dynamically partitioned pattern database heuristics are the best known heuristics for the problem.
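The additive lookup the abstract describes can be sketched as follows; this is a minimal illustration with hypothetical two-tile groups and made-up cost tables, not the authors' code:

```python
# Minimal sketch (not the authors' code) of an additive heuristic built from
# disjoint pattern databases: because each database counts only the moves of
# its own group's tiles, per-group costs can be summed without
# overestimating the true solution cost.

def additive_heuristic(state, partitions, pdbs):
    """state: dict tile -> position; partitions: disjoint tile groups;
    pdbs: one lookup table per group, keyed by that group's tile positions."""
    total = 0
    for group, pdb in zip(partitions, pdbs):
        key = tuple(state[tile] for tile in group)  # where this group's tiles sit
        total += pdb[key]  # exact cost of solving this disjoint subproblem
    return total

# Toy illustration: hypothetical two-tile groups with made-up tables.
partitions = [(1, 2), (3, 4)]
pdbs = [{(1, 2): 0, (2, 1): 3}, {(3, 4): 0, (4, 3): 3}]
state = {1: 2, 2: 1, 3: 3, 4: 4}  # tiles 1 and 2 are swapped
print(additive_heuristic(state, partitions, pdbs))  # 3 + 0 = 3
```

The key property is disjointness: no move is counted by more than one table, so the sum stays a lower bound.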
Taming Numbers and Durations in the Model Checking Integrated Planning System
Journal of Artificial Intelligence Research, 2002
"... The Model Checking Integrated Planning System (MIPS) has shown distinguished performance in the second and third international planning competitions. With its objectoriented framework architecture MIPS clearly separates the portfolio of explicit and symbolic heuristic search exploration algorith ..."
Cited by 43 (10 self)
Abstract
The Model Checking Integrated Planning System (MIPS) has shown distinguished performance in the second and third international planning competitions. With its object-oriented framework architecture, MIPS clearly separates the portfolio of explicit and symbolic heuristic search exploration algorithms from different online and offline computed estimates and from the grounded planning problem representation.
Breadth-first heuristic search
Artificial Intelligence
"... Recent work shows that the memory requirements of bestfirst heuristic search can be reduced substantially by using a divideandconquer method of solution reconstruction. We show that memory requirements can be reduced even further by using a breadthfirst instead of a bestfirst search strategy. We ..."
Cited by 40 (7 self)
Abstract
Recent work shows that the memory requirements of best-first heuristic search can be reduced substantially by using a divide-and-conquer method of solution reconstruction. We show that memory requirements can be reduced even further by using a breadth-first instead of a best-first search strategy. We describe optimal and approximate breadth-first heuristic search algorithms that use divide-and-conquer solution reconstruction. Computational results show that they outperform other optimal and approximate heuristic search algorithms in solving domain-independent planning problems.
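The memory saving comes from expanding the graph one layer at a time. A toy sketch of that core idea (my own illustration, not the paper's algorithm): breadth-first expansion with f = g + h pruning against a known upper bound, detecting duplicates against only the current and previous layers, which suffices for undirected, unit-cost graphs:

```python
# Toy sketch (my illustration, not the paper's algorithm) of breadth-first
# heuristic search: expand layer by layer, prune nodes whose f = g + h
# exceeds a known upper bound, and check duplicates against only the
# current and previous layers (sufficient for undirected unit-cost graphs).

def layered_search(start, goal, successors, h, upper_bound):
    depth, frontier, prev_layer = 0, {start}, set()
    while frontier:
        if goal in frontier:
            return depth
        next_layer = set()
        for node in frontier:
            for succ in successors(node):
                if depth + 1 + h(succ) > upper_bound:  # f-value pruning
                    continue
                if succ not in frontier and succ not in prev_layer:
                    next_layer.add(succ)
        prev_layer, frontier = frontier, next_layer
        depth += 1
    return None

# Hypothetical search space: integers with +/-1 moves, goal 5 from 0.
print(layered_search(0, 5, lambda n: [n - 1, n + 1], lambda n: abs(5 - n), 5))  # 5
```

Keeping only two layers in memory at once, rather than a full Closed list, is what makes the breadth-first strategy cheaper than best-first search here.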
Compressing pattern databases
In Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI-04), 2004
"... A pattern database (PDB) is a heuristic function implemented as a lookup table that stores the lengths of optimal solutions for subproblem instances. Standard PDBs have a distinct entry in the table for each subproblem instance. In this paper we investigate compressing PDBs by merging several entrie ..."
Cited by 36 (17 self)
Abstract
A pattern database (PDB) is a heuristic function implemented as a lookup table that stores the lengths of optimal solutions for subproblem instances. Standard PDBs have a distinct entry in the table for each subproblem instance. In this paper we investigate compressing PDBs by merging several entries into one, thereby allowing the use of PDBs that exceed available memory in their uncompressed form. We introduce a number of methods for determining which entries to merge and discuss their relative merits. These vary from domain-independent approaches that allow any set of entries in the PDB to be merged, to more intelligent methods that take into account the structure of the problem. The choice of the best compression method is based on domain-dependent attributes. We present experimental results on a number of combinatorial problems, including the four-peg Towers of Hanoi problem, the sliding-tile puzzles, and the TopSpin puzzle. For the Towers of Hanoi, we show that the search time can be reduced by up to three orders of magnitude by using compressed PDBs compared to uncompressed PDBs of the same size. More modest improvements were observed for the other domains.
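The simplest merging scheme the abstract alludes to can be sketched like this (illustrative only, with made-up costs, not the paper's code): several entries share one compressed slot, and storing the minimum of the merged values keeps the heuristic admissible at the cost of being less informed.

```python
# Hedged sketch (illustrative only, not the paper's code): map several PDB
# entries to one slot and store the MINIMUM of the merged values, which
# keeps the heuristic admissible but weaker.

def compress_pdb(pdb, k):
    """pdb: list of exact subproblem costs indexed by pattern;
    k: how many consecutive entries share one compressed slot."""
    compressed = [float("inf")] * ((len(pdb) + k - 1) // k)
    for index, cost in enumerate(pdb):
        slot = index // k
        compressed[slot] = min(compressed[slot], cost)  # min preserves admissibility
    return compressed

def lookup(compressed, index, k):
    return compressed[index // k]

pdb = [0, 2, 4, 4, 6, 8]      # made-up exact costs for six patterns
small = compress_pdb(pdb, 2)  # half the memory: [0, 4, 6]
print(lookup(small, 1, 2))    # pattern 1 now returns 0 instead of the exact 2
```

Merging nearby entries works best when adjacent patterns have similar costs, which is why the paper's structure-aware merging methods can outperform arbitrary grouping.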
Structured duplicate detection in external-memory graph search
In Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI-04), 2004
"... We consider how to use external memory, such as disk storage, to improve the scalability of heuristic search in statespace graphs. To limit the number of slow disk I/O operations, we develop a new approach to duplicate detection in graph search that localizes memory references by partitioning the se ..."
Cited by 29 (11 self)
Abstract
We consider how to use external memory, such as disk storage, to improve the scalability of heuristic search in state-space graphs. To limit the number of slow disk I/O operations, we develop a new approach to duplicate detection in graph search that localizes memory references by partitioning the search graph based on an abstraction of the state space, and expanding the frontier nodes of the graph in an order that respects this partition. We demonstrate the effectiveness of this approach both analytically and empirically.
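The localization idea can be sketched as follows (my assumptions, not the paper's system): states are grouped by an abstraction function, and expanding one bucket of frontier nodes only touches the buckets its successors can map to, so only that small scope must be memory-resident.

```python
# Illustrative sketch (my assumptions, not the paper's system): duplicate
# detection for one bucket of frontier nodes only references the buckets
# its successors can fall into, so only that "scope" needs to be in RAM.

def expand_bucket(bucket_states, successors, abstract, resident_buckets):
    """resident_buckets: dict abstract-state -> set of stored states; assumed
    to cover every bucket reachable in one step from this bucket."""
    new_states = []
    for state in bucket_states:
        for succ in successors(state):
            scope = resident_buckets[abstract(succ)]  # localized memory reference
            if succ not in scope:  # duplicate check within one bucket only
                scope.add(succ)
                new_states.append(succ)
    return new_states

# Toy example: abstract integers by parity; expanding the even bucket only
# ever touches the odd bucket, so the rest could stay on disk.
resident = {0: {0, 2}, 1: set()}
print(expand_bucket([0, 2], lambda n: [n + 1], lambda n: n % 2, resident))  # [1, 3]
```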
Large-scale directed model checking LTL
In Model Checking Software (SPIN), 2006
"... Abstract. To analyze larger models for explicitstate model checking, directed model checking applies errorguided search, external model checking uses secondary storage media, and distributed model checking exploits parallel exploration on multiple processors. In this paper we propose an external, ..."
Cited by 22 (8 self)
Abstract
To analyze larger models for explicit-state model checking, directed model checking applies error-guided search, external model checking uses secondary storage media, and distributed model checking exploits parallel exploration on multiple processors. In this paper we propose an external, distributed and directed on-the-fly model checking algorithm to check general LTL properties in the model checker SPIN. Previous attempts were restricted to checking safety properties. The worst-case I/O complexity is bounded by O(sort(F · |R|)/p + l · scan(F · |S|)), where S and R are the sets of visited states and transitions in the synchronized product of the Büchi automata for the model and the property specification, F is the number of accepting states, l is the length of the shortest counterexample, and p is the number of processors. The algorithm we propose returns minimal lasso-shaped counterexamples and includes refinements for property-driven exploration.
Parallel External Directed Model Checking with Linear I/O
In VMCAI, 2006
"... In this paper we present Parallel External A*, a parallel variant of external memory directed model checking. As a model scales up, its successors generation becomes complex and, in turn, starts to impact the running time of the model checker. Probings of our external memory model checker IOHSF ..."
Cited by 20 (5 self)
Abstract
In this paper we present Parallel External A*, a parallel variant of external-memory directed model checking. As a model scales up, its successor generation becomes more complex and, in turn, starts to impact the running time of the model checker. Profiling of our external-memory model checker IO-HSF-SPIN revealed that in some cases about 70% of the whole running time was consumed by internal processing.
Memory-bounded A* graph search
In Proc. 15th International FLAIRS Conference, 2002
"... We describe a framework for reducing the space complexity of graph search algorithms such as A* that use Open and Closed lists to keep track of the frontier and interior nodes of the search space. We propose a sparse representation of the Closed list in which only a fraction of already expanded node ..."
Cited by 19 (4 self)
Abstract
We describe a framework for reducing the space complexity of graph search algorithms such as A* that use Open and Closed lists to keep track of the frontier and interior nodes of the search space. We propose a sparse representation of the Closed list in which only a fraction of already-expanded nodes need to be stored to perform the two functions of the Closed list: preventing duplicate search effort and allowing solution extraction. Our proposal is related to earlier work on search algorithms that do not use a Closed list at all [Korf and Zhang, 2000]. However, the approach we describe has several advantages that make it effective for a wider variety of problems.
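One way to picture a sparse Closed list is via relay layers: only every few layers of the search keep a stored ancestor pointer, and the full path is later rebuilt by re-searching between consecutive relay nodes. The sketch below is my own toy illustration of that bookkeeping, not the paper's algorithm, and for brevity it still keeps a full visited set; only the sparse pointer structure is the point.

```python
# Toy sketch (my illustration, not the paper's algorithm) of relay-layer
# bookkeeping for a sparse Closed list: only every relay_every-th layer is
# stored with an ancestor pointer; the path between relays is reconstructed
# later by re-searching. (This sketch keeps a full visited set for brevity.)
from collections import deque

def sparse_bfs(start, goal, successors, relay_every=2):
    queue = deque([(start, 0, start)])  # (node, depth, last relay ancestor)
    stored = {start: None}              # sparse Closed list: relay nodes only
    visited = {start}
    while queue:
        node, depth, relay = queue.popleft()
        if node == goal:
            return depth, relay  # rebuild the rest between relay nodes
        for succ in successors(node):
            if succ in visited:
                continue
            visited.add(succ)
            if (depth + 1) % relay_every == 0:
                stored[succ] = relay  # this layer is stored; becomes new relay
                queue.append((succ, depth + 1, succ))
            else:
                queue.append((succ, depth + 1, relay))
    return None

# Hypothetical chain 0 -> 1 -> 2 -> 3 -> 4: goal found at depth 4, with
# relay ancestors 4 -> 2 -> 0 recorded in `stored`.
print(sparse_bfs(0, 4, lambda n: [n + 1]))  # (4, 4)
```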
Beam-stack search: Integrating backtracking with beam search
In International Conference on Automated Planning and Scheduling (ICAPS), 2005
"... We describe a method for transforming beam search into a complete search algorithm that is guaranteed to find an optimal solution. Called beamstack search, the algorithm uses a new data structure, called a beam stack, that makes it possible to integrate systematic backtracking with beam search. The ..."
Cited by 19 (2 self)
Abstract
We describe a method for transforming beam search into a complete search algorithm that is guaranteed to find an optimal solution. Called beam-stack search, the algorithm uses a new data structure, called a beam stack, that makes it possible to integrate systematic backtracking with beam search. The resulting search algorithm is an anytime algorithm that finds a good, suboptimal solution quickly, like beam search, and then backtracks and continues to find improved solutions until convergence to an optimal solution. We describe a memory-efficient implementation of beam-stack search, called divide-and-conquer beam-stack search, as well as an iterative-deepening version of the algorithm. The approach is applied to domain-independent STRIPS planning, and computational results show its advantages.
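For context, plain beam search, the incomplete procedure that beam-stack search completes with systematic backtracking, can be sketched as follows; the toy graph and names here are my own illustration, not the paper's code:

```python
# Sketch of plain beam search, the incomplete procedure that beam-stack
# search augments with systematic backtracking over pruned nodes.
import heapq

def beam_search(start, goal, successors, h, width):
    layer, depth = [start], 0
    while layer:
        if goal in layer:
            return depth
        candidates = {s for n in layer for s in successors(n)}
        # keep only the `width` most promising successors; everything else
        # is pruned, which is why plain beam search can miss the optimum
        layer = heapq.nsmallest(width, candidates, key=h)
        depth += 1
    return None

# Hypothetical search space: reach 7 from 0 with +1/+2 moves, beam width 2.
print(beam_search(0, 7, lambda n: [n + 1, n + 2], lambda n: abs(7 - n), 2))  # 4
```

The beam stack of the paper records exactly which candidates each layer pruned, so the search can later backtrack to them and eventually prove optimality.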