Results 1 – 9 of 9
Memory-bounded A* graph search
 In Proc. 15th International FLAIRS Conference, 2002
"... We describe a framework for reducing the space complexity of graph search algorithms such as A* that use Open and Closed lists to keep track of the frontier and interior nodes of the search space. We propose a sparse representation of the Closed list in which only a fraction of already expanded node ..."
Abstract

Cited by 19 (4 self)
We describe a framework for reducing the space complexity of graph search algorithms such as A* that use Open and Closed lists to keep track of the frontier and interior nodes of the search space. We propose a sparse representation of the Closed list in which only a fraction of already expanded nodes need to be stored to perform the two functions of the Closed list: preventing duplicate search effort and allowing solution extraction. Our proposal is related to earlier work on search algorithms that do not use a Closed list at all [Korf and Zhang, 2000]. However, the approach we describe has several advantages that make it effective for a wider variety of problems.
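For reference, a minimal A* graph search illustrating the two roles of the Closed list that any sparse representation must preserve: duplicate detection and solution-path extraction. The grid example and unit-cost model are illustrative assumptions, not taken from the paper.

```python
import heapq
import itertools

def astar(start, goal, neighbors, h):
    """Plain A* with an Open heap and a Closed dict. The Closed list
    serves two purposes: (1) detecting duplicate states so they are not
    re-expanded at equal or higher cost, and (2) storing parent pointers
    so the solution path can be extracted at the end."""
    tie = itertools.count()  # tie-breaker so the heap never compares states
    open_list = [(h(start), 0, next(tie), start, None)]
    closed = {}  # state -> (g, parent)
    while open_list:
        _, g, _, state, parent = heapq.heappop(open_list)
        if state in closed and closed[state][0] <= g:
            continue  # duplicate: already expanded at no greater cost
        closed[state] = (g, parent)
        if state == goal:
            path = [state]  # walk parent pointers back to the start
            while closed[path[-1]][1] is not None:
                path.append(closed[path[-1]][1])
            return g, path[::-1]
        for nbr, step in neighbors(state):
            if nbr not in closed or closed[nbr][0] > g + step:
                heapq.heappush(open_list,
                               (g + step + h(nbr), g + step, next(tie), nbr, state))
    return None

# Illustrative use: 4-connected 5x5 grid, unit costs, Manhattan-distance
# heuristic (admissible and consistent for this problem).
def grid_neighbors(s):
    x, y = s
    return [((x + dx, y + dy), 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

goal = (2, 3)
cost, path = astar((0, 0), goal, grid_neighbors,
                   lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1]))
```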
Sweep A*: Space-efficient heuristic search in partially ordered graphs
 In Proceedings of the 15th IEEE International Conference on Tools with Artificial Intelligence, 2003
"... We describe a novel heuristic search algorithm, called Sweep A*, that exploits the regular structure of partially ordered graphs to substantially reduce the memory requirements of search. We show that it outperforms previous search algorithms in optimally aligning multiple protein or DNA sequences, ..."
Abstract

Cited by 17 (4 self)
We describe a novel heuristic search algorithm, called Sweep A*, that exploits the regular structure of partially ordered graphs to substantially reduce the memory requirements of search. We show that it outperforms previous search algorithms in optimally aligning multiple protein or DNA sequences, an important problem in bioinformatics. Sweep A* also promises to be effective for other search problems with similar structure.
Predicting the Performance of IDA* using Conditional Distributions
, 2010
"... Korf, Reid, and Edelkamp introduced a formula to predict the number of nodes IDA* will expandon a single iteration for a given consistent heuristic, and experimentally demonstrated that it could make very accurate predictions. In this paper we show that, in addition to requiring the heuristic to be ..."
Abstract

Cited by 8 (3 self)
Korf, Reid, and Edelkamp introduced a formula to predict the number of nodes IDA* will expand on a single iteration for a given consistent heuristic, and experimentally demonstrated that it could make very accurate predictions. In this paper we show that, in addition to requiring the heuristic to be consistent, their formula’s predictions are accurate only at levels of the brute-force search tree where the heuristic values obey the unconditional distribution that they defined and then used in their formula. We then propose a new formula that works well without these requirements, i.e., it can make accurate predictions of IDA*’s performance for inconsistent heuristics and even when the heuristic values at some level do not obey the unconditional distribution. In order to achieve this we introduce the conditional distribution of heuristic values, which is a generalization of their unconditional heuristic distribution. We also provide extensions of our formula that handle individual start states and the augmentation of IDA* with bidirectional pathmax (BPMX), a technique for propagating heuristic values when inconsistent heuristics are used. Experimental results demonstrate the accuracy of our new method and all its variations.
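The unconditional-distribution formula being generalized can be sketched as follows; this is a simplified form assuming a uniform brute-force branching factor b, and the function name and inputs are illustrative:

```python
def kre_prediction(d, b, P):
    """Sketch of the Korf-Reid-Edelkamp style prediction: the expected
    number of nodes expanded by one IDA* iteration with cost bound d is
    sum_{i=0..d} N_i * P(d - i), where N_i is the number of nodes at
    depth i of the brute-force tree (here N_i = b**i) and P(v) is the
    fraction of nodes with heuristic value <= v -- the unconditional
    heuristic distribution that the paper replaces with a conditional one."""
    return sum(b ** i * P(d - i) for i in range(d + 1))

# With a blind heuristic (h = 0 everywhere, so P(v) = 1 for all v >= 0),
# the prediction reduces to the size of the brute-force tree:
blind = kre_prediction(3, 2, lambda v: 1.0)   # 1 + 2 + 4 + 8 = 15
# With h >= 1 everywhere, the deepest layer is pruned away:
pruned = kre_prediction(3, 2, lambda v: 1.0 if v >= 1 else 0.0)  # 1 + 2 + 4 = 7
```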
Sequential and parallel algorithms for frontier A* with delayed duplicate detection
 In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI-06), 2006
"... We present sequential and parallel algorithms for Frontier A * (FA*) algorithm augmented with a form of Delayed Duplicate Detection (DDD). The sequential algorithm, FA*DDD, overcomes the leakback problem associated with the combination of FA * and DDD. The parallel algorithm, PFA*DDD, is a parall ..."
Abstract

Cited by 5 (0 self)
We present sequential and parallel algorithms for the Frontier A* (FA*) algorithm augmented with a form of Delayed Duplicate Detection (DDD). The sequential algorithm, FA*-DDD, overcomes the leak-back problem associated with the combination of FA* and DDD. The parallel algorithm, PFA*-DDD, is a parallel version of FA*-DDD that features a novel workload distribution strategy based on intervals. We outline an implementation of PFA*-DDD designed to run on a cluster of workstations. The implementation computes intervals at runtime that are tailored to fit the workload at hand. Because the implementation distributes the workload in a manner that is both automated and adaptive, it does not require the user to specify a workload mapping function, and, more importantly, it is applicable to arbitrary problems that may be irregular. We present the results of an experimental evaluation of the implementation where it is used to solve instances of the multiple sequence alignment problem on a cluster of workstations running on top of a commodity network. Results demonstrate that the implementation offers improved capability in addition to improved performance.
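The DDD idea can be sketched in a few lines; this is an in-memory stand-in for what is, in practice, a disk-based sort, and the names are illustrative:

```python
def ddd_merge(generated, closed):
    """Delayed duplicate detection (sketch): nodes are generated without
    any immediate duplicate check, then buffered, sorted, and merged in
    one pass against the already-expanded set. Sorting is what makes the
    real, disk-based version I/O-efficient; here it just groups
    duplicates together. `generated` is a list of (state, g) pairs and
    `closed` is the set of already-expanded states."""
    generated.sort()
    frontier = {}
    for state, g in generated:
        if state in closed:
            continue  # duplicate of an interior node: drop it
        if state not in frontier or g < frontier[state]:
            frontier[state] = g  # keep the cheapest copy of each state
    return frontier

# One layer of generated successors, deduplicated in a single pass:
layer = ddd_merge([("a", 3), ("b", 1), ("a", 2)], closed={"b"})
```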
An improved search algorithm for optimal multiple-sequence alignment
 Journal of Artificial Intelligence Research, 2005
"... Multiple sequence alignment (MSA) is a ubiquitous problem in computational biology. Although it is NPhard to find an optimal solution for an arbitrary number of sequences, due to the importance of this problem researchers are trying to push the limits of exact algorithms further. Since MSA can be c ..."
Abstract

Cited by 4 (0 self)
Multiple sequence alignment (MSA) is a ubiquitous problem in computational biology. Although it is NP-hard to find an optimal solution for an arbitrary number of sequences, due to the importance of this problem researchers are trying to push the limits of exact algorithms further. Since MSA can be cast as a classical path-finding problem, it is attracting a growing number of AI researchers interested in heuristic search algorithms as a challenge with actual practical relevance. In this paper, we first review two previous, complementary lines of research. Based on Hirschberg’s algorithm, dynamic programming needs O(kN^(k−1)) space to store both the search frontier and the nodes needed to reconstruct the solution path, for k sequences of length N. Best-first search, on the other hand, has the advantage of bounding the search space that has to be explored using a heuristic. However, it is necessary to maintain all explored nodes up to the final solution in order to prevent the search from re-expanding them at higher cost. Earlier approaches to reducing the Closed list are either incompatible with pruning methods for the Open list, or must retain at least the boundary of the Closed list …
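For the pairwise case (k = 2), the O(N^(k−1)) frontier reduces to keeping a single DP row. A sketch with a simple edit-distance-style cost model, chosen for illustration rather than taken from the paper's scoring scheme:

```python
def frontier_align_cost(a, b):
    """Dynamic-programming alignment cost keeping only one row of the
    table: O(len(b)) space, i.e. the O(N^(k-1)) frontier for k = 2.
    Unit mismatch and gap costs are an illustrative choice. Note this
    yields only the cost; recovering the alignment itself is what forces
    either extra storage or Hirschberg-style recomputation."""
    prev = list(range(len(b) + 1))         # row 0: aligning "" against b[:j]
    for i in range(1, len(a) + 1):
        cur = [i]                          # column 0: aligning a[:i] against ""
        for j in range(1, len(b) + 1):
            sub = prev[j - 1] + (0 if a[i - 1] == b[j - 1] else 1)
            cur.append(min(sub, prev[j] + 1, cur[j - 1] + 1))
        prev = cur
    return prev[-1]

cost = frontier_align_cost("ACGT", "AGT")  # one deletion suffices
```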
K-group A* for multiple sequence alignment with quasi-natural gap costs
 In Proceedings of the 16th IEEE International Conference on Tools with Artificial Intelligence (ICTAI-04), 2004
"... Alignment of multiple protein or DNA sequences is an important problem in Bioinformatics. Previous work has shown that the A * search algorithm can find optimal alignments for up to several sequences, and that a Kgroup generalization of A * can find approximate alignments for much larger numbers of ..."
Abstract

Cited by 1 (1 self)
Alignment of multiple protein or DNA sequences is an important problem in bioinformatics. Previous work has shown that the A* search algorithm can find optimal alignments for up to several sequences, and that a K-group generalization of A* can find approximate alignments for much larger numbers of sequences [6]. In this paper, we describe the first implementation of K-group A* that uses quasi-natural gap costs, the cost model used in practice by biologists. We also introduce a new method for computing gap-opening costs in profile alignment. Our results show that K-group A* can efficiently find optimal or close-to-optimal alignments for small groups of sequences and, for large numbers of sequences, can find higher-quality alignments than the widely used CLUSTAL family of approximate alignment tools. This demonstrates the benefits of A* in aligning sequences at the scale biologists typically compare, and suggests that K-group A* could become a practical tool for multiple sequence alignment.
FAHR: Focused A* Heuristic Recomputation
"... detect and correct large discrepancies between the heuristic costtogo estimate and the true cost function. In situations where these large discrepancies exist, the search may expend significant effort escaping from the “bowl ” of a local minimum. A * typically computes supporting data structures f ..."
Abstract
FAHR is designed to detect and correct large discrepancies between the heuristic cost-to-go estimate and the true cost function. In situations where these large discrepancies exist, the search may expend significant effort escaping from the “bowl” of a local minimum. A* typically computes supporting data structures for the heuristic once, prior to initiating the search. FAHR directs the search out of the bowl by recomputing parts of the heuristic function opportunistically as the search space is explored. FAHR may be used when the heuristic function is in the form of a pattern database. We demonstrate the effectiveness of the algorithm through experiments on a ground vehicle path-planning simulation.
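A pattern database of the kind FAHR recomputes can be sketched as an exact-distance table over an abstraction of the state space; the corridor abstraction below is an invented toy example, not from the paper:

```python
from collections import deque

def build_pdb(abstract_goal, abstract_neighbors):
    """Pattern-database construction (sketch): a breadth-first search
    backward from the abstracted goal records the exact cost-to-go of
    every abstract state. Looking up pdb[abstract(s)] then gives an
    admissible heuristic. This table is the kind of supporting data
    structure that A* normally builds once up front and that FAHR
    selectively rebuilds during search."""
    dist = {abstract_goal: 0}
    queue = deque([abstract_goal])
    while queue:
        s = queue.popleft()
        for t in abstract_neighbors(s):
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return dist

# Toy abstraction: project 2-D states (x, y) onto x in a width-5 corridor.
pdb = build_pdb(4, lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 4])
heuristic = lambda state: pdb[state[0]]   # h((x, y)) = abstract distance
```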
A Fast Method for Linear Space Pairwise Sequence Alignment
"... Abstract: Pairwise sequence alignment is an important technique for finding the optimal arrangement between two sequences. A basic dynamicprogramming strategy for sequence alignment needs O(mn) time and also O(mn) space. Hirschberg’s divideandconquer method reduces the required space to roughly 2 ..."
Abstract
Pairwise sequence alignment is an important technique for finding the optimal arrangement between two sequences. A basic dynamic-programming strategy for sequence alignment needs O(mn) time and also O(mn) space. Hirschberg’s divide-and-conquer method reduces the required space to roughly 2m (m ≤ n), but it also doubles the computing time. The FastLSA approach, on the other hand, adds extra rows and columns to generalize Hirschberg’s algorithm, reducing the number of recomputations at the cost of more memory. In this paper, we present an efficient linear-space algorithm, called the NFLSA algorithm, that achieves a lower recomputation ratio than FastLSA while using the same amount of memory. In our simulations, the NFLSA algorithm outperforms both Hirschberg’s algorithm and the FastLSA algorithm.
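The Hirschberg scheme that both FastLSA and NFLSA build on can be sketched as follows; unit edit costs are an illustrative assumption, and the real algorithms differ in scoring and in how many rows they cache:

```python
def last_row(a, b):
    """Cost of optimally aligning a against every prefix of b, computed
    in O(len(b)) space (unit mismatch and gap costs)."""
    row = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        new = [i]
        for j in range(1, len(b) + 1):
            sub = row[j - 1] + (0 if a[i - 1] == b[j - 1] else 1)
            new.append(min(sub, row[j] + 1, new[j - 1] + 1))
        row = new
    return row

def hirschberg(a, b):
    """Divide and conquer: split a in half, find the optimal split point
    of b by combining a forward and a reverse last-row pass (this double
    pass is the source of the roughly doubled computing time), then
    recurse. Space stays linear because only two rows exist at once."""
    if len(a) == 0:
        return "-" * len(b), b
    if len(b) == 0:
        return a, "-" * len(a)
    if len(a) == 1:
        j = max(b.find(a), 0)               # match the single char if possible
        return "-" * j + a + "-" * (len(b) - j - 1), b
    mid = len(a) // 2
    fwd = last_row(a[:mid], b)              # forward pass over the top half
    rev = last_row(a[mid:][::-1], b[::-1])  # reverse pass over the bottom half
    split = min(range(len(b) + 1), key=lambda j: fwd[j] + rev[len(b) - j])
    a1, b1 = hirschberg(a[:mid], b[:split])
    a2, b2 = hirschberg(a[mid:], b[split:])
    return a1 + a2, b1 + b2

aligned_a, aligned_b = hirschberg("ACGT", "AGT")
```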