Results 1-10 of 15
Searching with pattern databases
 Advances in Artificial Intelligence (Lecture Notes in Artificial Intelligence 1081)
, 1996
"... Abstract. The efficiency of A * searching depends on the quality of the lower bound estimates of the solution cost. Pattern databases enumerate all possible subgoals required by any solution, subject to constraints on the subgoal size. Each subgoal in the database provides a tight lower bound on the ..."
Abstract

Cited by 77 (10 self)
 Add to MetaCart
The efficiency of A* search depends on the quality of the lower-bound estimates of the solution cost. Pattern databases enumerate all possible subgoals required by any solution, subject to constraints on the subgoal size. Each subgoal in the database provides a tight lower bound on the cost of achieving it. For a given state in the search space, all possible subgoals are looked up, and the maximum cost over all lookups is the lower bound. For sliding-tile puzzles, the database enumerates all possible patterns containing N tiles and, for each one, stores a lower bound on the distance needed to move all N tiles into their correct final locations. For the 15-Puzzle, iterative-deepening A* with pattern databases (N=8) reduces the total number of nodes searched on a standard problem set of 100 positions by over 1000-fold.
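The pattern-database idea in this abstract can be sketched on a toy 2x2 sliding puzzle. The miniature state space, the BFS, and the max-over-lookups rule below are an illustration of the technique, not code from the paper:

```python
from collections import deque

# Toy 2x2 sliding puzzle ("3-puzzle"): a state is a tuple over the four
# cells, 0 is the blank, and the goal has tiles 1..3 in order.
GOAL = (1, 2, 3, 0)
ADJ = {0: (1, 2), 1: (0, 3), 2: (0, 3), 3: (1, 2)}  # grid adjacency

def successors(state):
    b = state.index(0)
    for n in ADJ[b]:
        s = list(state)
        s[b], s[n] = s[n], s[b]
        yield tuple(s)

def build_pdb(pattern):
    """Backward BFS from the goal over abstract states: tiles outside
    `pattern` become an indistinguishable wildcard (-1), so many concrete
    states collapse onto one abstract state, and the BFS depth of an
    abstract state is a lower bound on the cost of placing the pattern
    tiles."""
    def abstract(state):
        return tuple(t if t in pattern or t == 0 else -1 for t in state)
    dist = {abstract(GOAL): 0}
    queue = deque([abstract(GOAL)])
    while queue:
        s = queue.popleft()
        for t in successors(s):
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return dist, abstract

# Two databases over disjoint tile subsets; the heuristic takes the
# maximum cost over all lookups, as described in the abstract.
PDBS = [build_pdb({1}), build_pdb({2, 3})]

def h(state):
    return max(dist[abstract(state)] for dist, abstract in PDBS)

print(h((0, 1, 3, 2)))  # a position two moves from the goal
```

Because each database is the exact distance in an abstraction of the real puzzle, every lookup is admissible, and so is their maximum.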
Parallel Retrograde Analysis on a Distributed System
 In Supercomputing '95
, 1995
"... Retrograde Analysis (ra) is an AI search technique used to compute endgame databases, which contain optimal solutions for part of the search space of a game. ra has been applied successfully to several games, but its usefulness is restricted by the huge amount of cpu time and internal memory it requ ..."
Abstract

Cited by 18 (11 self)
 Add to MetaCart
Retrograde Analysis (RA) is an AI search technique used to compute endgame databases, which contain optimal solutions for part of the search space of a game. RA has been applied successfully to several games, but its usefulness is restricted by the huge amount of CPU time and internal memory it requires. We present a parallel distributed algorithm for RA that addresses these problems. RA is hard to parallelize efficiently because the communication overhead is potentially enormous. We show that the overhead can be reduced drastically using message combining. We implemented the algorithm on an Ethernet-based distributed system. For one example game (Awari), we computed a large database in 50 minutes on 64 processors, whereas one machine took 40 hours (a speedup of 48). An even larger database (computed in half a day) would have required 400 MByte of internal memory on a uniprocessor and would have computed for weeks. Keywords: game-tree search, retrograde analysis, distribute...
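As a miniature of the retrograde-analysis idea (nothing like the paper's distributed Awari computation), the sketch below labels positions of a toy subtraction game backwards from the terminal losses: a position is a WIN once some move reaches a LOSS, and a LOSS once every move reaches a WIN. The game and all names are illustrative:

```python
# Toy subtraction game: a pile of n stones, a move removes 1 or 2 stones,
# and the player who cannot move (facing an empty pile) loses.
N = 20
MOVES = (1, 2)

def predecessors(n):
    return [n + m for m in MOVES if n + m <= N]

# Seed with the terminal loss n = 0 and propagate values backwards.
value = {0: "LOSS"}
frontier = [0]
# Per-state count of still-unresolved successors, for the
# all-successors-are-wins rule.
unresolved = {n: sum(1 for m in MOVES if n - m >= 0) for n in range(N + 1)}
while frontier:
    s = frontier.pop()
    for p in predecessors(s):
        if p in value:
            continue
        if value[s] == "LOSS":
            value[p] = "WIN"        # some move reaches a lost position
            frontier.append(p)
        else:
            unresolved[p] -= 1
            if unresolved[p] == 0:  # every move reaches a won position
                value[p] = "LOSS"
                frontier.append(p)

print([n for n in range(N + 1) if value[n] == "LOSS"])  # multiples of 3
```

In a real endgame database the state space is vast, and the message combining discussed in the paper batches these predecessor updates between processors instead of sending one message per edge.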
A Compressed Breadth-First Search for Satisfiability
 Proc. 4th Workshop on Algorithm Engineering and Experiments
, 2002
"... Leading algorithms for Boolean satisfiability (SAT) are based on either a depthfirst tree traversal of the search space (the DLL procedure [6]) or resolution (the DP procedure [7]). In this work we introduce a variant of BreadthFirst Search (BFS) based on the ability of ZeroSuppressed Binary De ..."
Abstract

Cited by 17 (3 self)
 Add to MetaCart
Leading algorithms for Boolean satisfiability (SAT) are based on either a depth-first tree traversal of the search space (the DLL procedure [6]) or resolution (the DP procedure [7]). In this work we introduce a variant of Breadth-First Search (BFS) based on the ability of Zero-Suppressed Binary Decision Diagrams (ZDDs) to compactly represent sparse or structured collections of subsets.
A Performance Analysis of Transposition-Table-Driven Scheduling in Distributed Search
 IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
, 2002
"... This paper discusses a new workscheduling algorithm for parallel search of singleagent state spaces, called TranspositionTableDriven Work Scheduling, that places the transposition table at the heart of the parallel work scheduling. The scheme results in less synchronization overhead, less proce ..."
Abstract

Cited by 11 (6 self)
 Add to MetaCart
This paper discusses a new work-scheduling algorithm for parallel search of single-agent state spaces, called Transposition-Table-Driven Work Scheduling, that places the transposition table at the heart of the parallel work scheduling. The scheme results in less synchronization overhead, less processor idle time, and less redundant search effort. Measurements on a 128-processor parallel machine show that the scheme achieves close-to-linear speedups; for large problems the speedups are even superlinear due to better memory usage. On the same machine, the algorithm is 1.6 to 12.9 times faster than traditional work-stealing-based schemes.
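The core scheduling idea can be sketched sequentially: each state is "owned" by the processor selected by a hash of the state, and expanding a state sends its children to their owners' queues, so each transposition-table entry lives on exactly one processor and needs no locking. The toy state space and all names below are illustrative, not from the paper:

```python
import hashlib
from collections import deque

P = 4                                   # number of simulated processors
queues = [deque() for _ in range(P)]    # per-processor work queues
tables = [dict() for _ in range(P)]     # per-processor transposition tables

def owner(state):
    # Hash-partition the state space over the processors.
    return int(hashlib.sha256(repr(state).encode()).hexdigest(), 16) % P

def successors(state):
    # Toy single-agent state space: counters growing by 1 or 2 up to 10.
    return [state + 1, state + 2] if state < 9 else []

root = 0
queues[owner(root)].append((root, 0))

work = True
while work:                             # round-robin until all queues drain
    work = False
    for p in range(P):
        while queues[p]:
            work = True
            state, depth = queues[p].popleft()
            tt = tables[p]
            if state in tt and tt[state] <= depth:
                continue                # duplicate, seen at least as shallow
            tt[state] = depth
            for child in successors(state):
                # "Send" the child to its owner instead of searching it here.
                queues[owner(child)].append((child, depth + 1))

print(sum(len(t) for t in tables))      # distinct states, spread over owners
```

Because a state is only ever enqueued at its owner, duplicate detection is purely local; in the real distributed setting this is what removes the synchronization overhead of a shared table.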
ZRAM: A Library of Parallel Search Algorithms and Its Use in Enumeration and Combinatorial Optimization
, 1998
"... ..."
Transposition Table Driven Work Scheduling in Distributed Search
 In 16th National Conference on Artificial Intelligence (AAAI'99)
, 1999
"... This paper introduces a new scheduling algorithm for parallel singleagent search, transposition table driven work scheduling, that places the transposition table at the heart of the parallel work scheduling. The scheme results in less synchronization overhead, less processor idle time, and less ..."
Abstract

Cited by 8 (0 self)
 Add to MetaCart
This paper introduces a new scheduling algorithm for parallel single-agent search, transposition-table-driven work scheduling, that places the transposition table at the heart of the parallel work scheduling. The scheme results in less synchronization overhead, less processor idle time, and less redundant search effort. Measurements on a 128-processor parallel machine show that the scheme achieves nearly optimal performance and scales well. The algorithm performs a factor of 2.0 to 13.7 times better than traditional work-stealing-based schemes.
All the needles in a haystack: Can exhaustive search overcome combinatorial chaos?
 LECTURE NOTES IN COMPUTER SCIENCE
, 1995
"... For half a century since computers came into existence, the goal of finding elegant and efficient algorithms to solve "simple" (welldefined and wellstructured) problems has dominated algorithm design. Over the same time period, both processing and storage capacity of computers have increased rough ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
For half a century since computers came into existence, the goal of finding elegant and efficient algorithms to solve "simple" (well-defined and well-structured) problems has dominated algorithm design. Over the same time period, both the processing and storage capacity of computers have increased roughly by a factor of 10^6. The next few decades may well give us a similar rate of growth in raw computing power, due to various factors such as continuing miniaturization and parallel and distributed computing. If a quantitative change of orders of magnitude leads to qualitative changes, where will the latter take place? Many problems exhibit no detectable regular structure to be exploited; they appear "chaotic" and do not yield to efficient algorithms. Exhaustive search of large state spaces appears to be the only viable approach. We survey techniques for exhaustive search and typical combinatorial problems that have been solved, and present one case study in detail.
Exhaustive search, combinatorial optimization and enumeration: Exploring the potential of raw computing power
 In SOFSEM 2000, number 1963 in LNCS
, 2000
"... Abstract. For half a century since computers came into existence, the goal of finding elegant and efficient algorithms to solve “simple ” (welldefined and wellstructured) problems has dominated algorithm design. Over the same time period, both processingand storage capacity of computers have increa ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
For half a century since computers came into existence, the goal of finding elegant and efficient algorithms to solve “simple” (well-defined and well-structured) problems has dominated algorithm design. Over the same time period, both the processing and storage capacity of computers have increased by roughly a factor of a million. The next few decades may well give us a similar rate of growth in raw computing power, due to various factors such as continuing miniaturization and parallel and distributed computing. If a quantitative change of orders of magnitude leads to qualitative advances, where will the latter take place? Only empirical research can answer this question. Asymptotic complexity theory has emerged as a surprisingly effective tool for predicting run times of polynomial-time algorithms. For NP-hard problems, on the other hand, it yields overly pessimistic bounds. It asserts the nonexistence of algorithms that are efficient across an entire problem class, but ignores the fact that many instances, perhaps...
Symbolic Exploration in TwoPlayer Games: Preliminary Results
 In Proceedings of the Sixth International Conference on AI Planning and Scheduling (AIPS02) Workshop on Model Checking
, 2002
"... In this paper symbolic exploration with binary decision diagrams (BDDs) is applied to twoplayer games to improve main memory consumption for reachability analysis and gametheoretical classification, since BDDs provide a compact representation for large set of game positions. A number of examp ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
In this paper, symbolic exploration with binary decision diagrams (BDDs) is applied to two-player games to reduce main-memory consumption for reachability analysis and game-theoretical classification, since BDDs provide a compact representation for large sets of game positions. A number of examples are evaluated: Tic-Tac-Toe, Nim, Hex, and Four Connect. For Chess, we restrict consideration to the creation of endgame databases. The results are preliminary, but the study puts forth the idea that BDDs are widely applicable in game playing and provide a universal tool for people interested in quickly solving practical problems.
On Sliding Block Puzzles
"... A graph of a puzzle is obtained by associating each possible position with a vertex and by inserting edges between vertices iff the corresponding positions can be obtained from each other in one move. Computational methods for finding the vertices at maximum distance ffi from a vertex associated wi ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
A graph of a puzzle is obtained by associating each possible position with a vertex and by inserting edges between vertices iff the corresponding positions can be obtained from each other in one move. Computational methods for finding the vertices at maximum distance δ from a vertex associated with a goal position are presented. Solutions are given for small sliding block puzzles, and methods for obtaining upper and lower bounds on δ for large puzzles are considered. Old results are surveyed, and a new upper bound for the 24-puzzle is obtained: δ ≤ 210.

1. Introduction

In the early 1980s, it was impossible to avoid hearing about Rubik's cube, a puzzle that became very popular all over the world. Very soon, mathematicians became interested in this puzzle, and several books have been written on the subject (for example, [4]). Another popular, and much older, puzzle is the 15-puzzle, which was invented by Sam Loyd in the 19th century. This puzzle, and its variants, will be consid...
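For a puzzle small enough to enumerate, δ is just the maximum BFS depth from the goal vertex of the puzzle graph. A sketch for the 2x2 sliding puzzle (an illustrative miniature, not the paper's method for bounding large puzzles):

```python
from collections import deque

# BFS from the goal of the 2x2 sliding puzzle over its puzzle graph:
# vertices are positions, edges are single moves, and delta is the
# maximum distance of any reachable position from the goal.
GOAL = (1, 2, 3, 0)                      # 0 is the blank
ADJ = {0: (1, 2), 1: (0, 3), 2: (0, 3), 3: (1, 2)}  # grid adjacency

def successors(state):
    b = state.index(0)
    for n in ADJ[b]:
        s = list(state)
        s[b], s[n] = s[n], s[b]
        yield tuple(s)

dist = {GOAL: 0}
q = deque([GOAL])
while q:
    s = q.popleft()
    for t in successors(s):
        if t not in dist:
            dist[t] = dist[s] + 1
            q.append(t)

print(len(dist), max(dist.values()))   # 12 reachable positions, delta = 6
```

For the 24-puzzle the reachable graph has about 7.7 x 10^24 vertices, which is why the paper resorts to upper- and lower-bound arguments instead of exhaustive BFS.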