Results 1–10 of 33
Where the really hard problems are
, 1991
"... It is well known that for many NPcomplete problems, such as KSat, etc., typical cases are easy to solve; so that computationally hard cases must be rare (assuming P = NP). This paper shows that NPcomplete problems can be summarized by at least one "order parameter", and that the hard pr ..."
Abstract

Cited by 576 (1 self)
It is well known that for many NP-complete problems, such as K-SAT, etc., typical cases are easy to solve, so that computationally hard cases must be rare (assuming P ≠ NP). This paper shows that NP-complete problems can be summarized by at least one "order parameter", and that the hard problems occur at a critical value of such a parameter. This critical value separates two regions of characteristically different properties. For example, for K-colorability, the critical value separates overconstrained from underconstrained random graphs, and it marks the value at which the probability of a solution changes abruptly from near 0 to near 1. It is the high density of well-separated almost-solutions (local minima) at this boundary that causes search algorithms to "thrash". This boundary is a type of phase transition and we show that it is preserved under mappings between problems. We show that for some P problems either there is no phase transition or it occurs for bounded N (and so bounds the cost). These results suggest a way of deciding whether a problem is in P or NP and why they are different.
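For random 3-SAT the order parameter is the clause-to-variable ratio. A brute-force sketch (my own illustration, not code from the paper; function names are mine) that exhibits the abrupt drop in satisfiability near the critical ratio (≈ 4.27 for 3-SAT) might look like:

```python
import itertools
import random

def random_3sat(num_vars, num_clauses, rng):
    """Random 3-SAT instance: each clause picks 3 distinct variables,
    each negated with probability 1/2. Literal +v = var v true, -v = false."""
    clauses = []
    for _ in range(num_clauses):
        vs = rng.sample(range(1, num_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in vs])
    return clauses

def satisfiable(num_vars, clauses):
    """Brute-force satisfiability check -- feasible only for tiny num_vars."""
    for bits in itertools.product([False, True], repeat=num_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def sat_fraction(num_vars, ratio, trials=50, seed=0):
    """Fraction of random instances satisfiable at clause/variable ratio `ratio`."""
    rng = random.Random(seed)
    m = int(ratio * num_vars)
    return sum(satisfiable(num_vars, random_3sat(num_vars, m, rng))
               for _ in range(trials)) / trials

# As num_vars grows, sat_fraction drops sharply from ~1 to ~0 near ratio 4.27.
```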
Generalized best-first search strategies and the optimality of A*
 JOURNAL OF THE ACM
, 1985
"... This paper reports several properties of heuristic bestfirst search strategies whose scoring functions f depend on all the information available from each candidate path, not merely on the current cost g and the estimated completion cost h. It is shown that several known properties of A * retain t ..."
Abstract

Cited by 161 (12 self)
This paper reports several properties of heuristic best-first search strategies whose scoring functions f depend on all the information available from each candidate path, not merely on the current cost g and the estimated completion cost h. It is shown that several known properties of A* retain their form (with the min-max of f playing the role of the optimal cost), which helps establish general tests of admissibility and general conditions for node expansion for these strategies. On the basis of this framework the computational optimality of A*, in the sense of never expanding a node that can be skipped by some other algorithm having access to the same heuristic information that A* uses, is examined. A hierarchy of four optimality types is defined and three classes of algorithms and four domains of problem instances are considered. Computational performances relative to these algorithms and domains are appraised. For each class-domain combination, we then identify the strongest type of optimality that exists and the algorithm for achieving it. The main results of this paper relate to the class of algorithms that, like A*, return optimal solutions (i.e., are admissible) when all cost estimates are optimistic (i.e., h ≤ h*). On this class, A* is shown to be not optimal, and it is also shown that no optimal algorithm exists, but if the performance tests are confined to cases in which the estimates are also consistent, then A* is indeed optimal. Additionally, A* is also shown to be optimal over a subset of the latter class containing all best-first algorithms that are guided by path-dependent evaluation functions.
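As a point of reference for the notation, a minimal A* in Python (a generic sketch, not the generalized strategies studied in the paper): f(n) = g(n) + h(n), and an admissible h (h ≤ h*) guarantees the returned cost is optimal:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Minimal A*: f(n) = g(n) + h(n). `neighbors(n)` yields (next_node, edge_cost).
    With admissible h the returned cost is optimal; with consistent h no node
    is re-expanded with a worse g."""
    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry: a cheaper route to node was found
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None
```

For example, with the zero heuristic (trivially admissible) A* degenerates to uniform-cost search and still returns the cheapest path.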
Can't get no satisfaction
 AMERICAN SCIENTIST
, 1997
"... You are chief of protocol for the embassy ball. The crown prince instructs you either to invite Peru or to exclude Qatar. The queen asks you to invite either Qatar or Romania or both. The king, in a spiteful mood, wants to snub either Romania or Peru or both. Is there a guest list that will satisfy ..."
Abstract

Cited by 23 (0 self)
You are chief of protocol for the embassy ball. The crown prince instructs you either to invite Peru or to exclude Qatar. The queen asks you to invite either Qatar or Romania or both. The king, in a spiteful mood, wants to snub either Romania or Peru or both. Is there a guest list that will satisfy the whims of the entire royal family? This contrived little puzzle is an instance of a problem that lies near the root of theoretical computer science. It is called the satisfiability problem, or SAT, and it was the first member of the notorious class known as NP-complete problems. These are computational tasks that seem intrinsically hard, but after 25 years of effort no one has yet proved
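The puzzle is small enough to settle by enumerating all eight guest lists (a sketch of the 2-SAT encoding; the variable names are mine):

```python
from itertools import product

# One Boolean per country: True = invited.
# Crown prince: invite Peru OR exclude Qatar      -> (P or not Q)
# Queen:        invite Qatar OR Romania (or both) -> (Q or R)
# King:         snub Romania OR Peru (or both)    -> (not R or not P)
def royal_whims(P, Q, R):
    return (P or not Q) and (Q or R) and (not R or not P)

# Brute-force all 2^3 guest lists and keep the satisfying ones.
guest_lists = [dict(Peru=P, Qatar=Q, Romania=R)
               for P, Q, R in product([False, True], repeat=3)
               if royal_whims(P, Q, R)]
```

Two guest lists work: invite Romania alone, or invite Peru and Qatar while snubbing Romania.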
Phase Transitions and Backbones of 3-SAT and Maximum 3-SAT
In Proc. of 7th Int. Conf. on Principles and Practice of Constraint Programming (CP2001)
, 2001
"... Many realworld problems involve constraints that cannot be all satisfied. Solving an overconstrained problem then means to find solutions minimizing the number of constraints violated, which is an optimization problem. In this research, we study the behavior of the phase transitions and backbones o ..."
Abstract

Cited by 17 (4 self)
Many real-world problems involve constraints that cannot all be satisfied. Solving an overconstrained problem then means finding solutions that minimize the number of constraints violated, which is an optimization problem. In this research, we study the behavior of the phase transitions and backbones of constraint optimization problems. We first investigate the relationship between the phase transitions of Boolean satisfiability, or more precisely 3-SAT (a well-studied NP-complete decision problem), and the phase transitions of MAX 3-SAT (an NP-hard optimization problem). To bridge the gap between the easy-hard-easy phase transitions of 3-SAT and the easy-hard transitions of MAX 3-SAT, we analyze bounded 3-SAT, in which solutions of bounded quality, e.g., solutions with at most a constant number of constraints violated, are sufficient.
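The relationship between the decision and optimization problems can be made concrete with a brute-force sketch (my own illustration, not from the paper): the optimum of MAX-SAT is the smallest bound b for which bounded SAT answers yes, and b = 0 is plain satisfiability.

```python
import itertools

def min_violated(num_vars, clauses):
    """MAX-SAT by exhaustive search: the minimum number of clauses any
    assignment violates. The bound-b decision version asks whether this
    value is <= b; b = 0 recovers ordinary SAT. Literal +v = var v true."""
    best = len(clauses)
    for bits in itertools.product([False, True], repeat=num_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        violated = sum(not any(assign[abs(l)] == (l > 0) for l in c)
                       for c in clauses)
        best = min(best, violated)
    return best
```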
Incremental Search Algorithms for Real-Time Decision Making
Proceedings of the 2nd Artificial Intelligence Planning Systems Conference (AIPS-94)
, 1994
"... We propose incremental, realtime search as a general approach to realtime decision making. We model realtime decision making as incremental tree search with a limited number of node expansions between decisions. We show that the decision policy of moving toward the best frontier node is not optim ..."
Abstract

Cited by 14 (2 self)
We propose incremental, real-time search as a general approach to real-time decision making. We model real-time decision making as incremental tree search with a limited number of node expansions between decisions. We show that the decision policy of moving toward the best frontier node is not optimal, but nevertheless performs nearly as well as an expected-value-based decision policy. We also show that the real-time constraint causes difficulties for traditional best-first search algorithms. We then present a new approach that uses a separate heuristic function for choosing where to explore and which decision to make. Empirical results for random trees show that our new algorithm outperforms the traditional best-first search approach to real-time decision making, and that depth-first branch-and-bound performs nearly as well as the more complicated best-first variation. Introduction and Overview We are interested in the general problem of how to make real-time decisions. One example of...
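The "move toward the best frontier node" policy with a bounded expansion budget can be sketched as follows (my own minimal rendering, not the authors' algorithm; all names are mine): expand a limited number of nodes best-first, then commit to the first move on the path toward the best frontier node found.

```python
import heapq

def realtime_decision(state, neighbors, h, budget):
    """Expand up to `budget` nodes best-first (by heuristic h) from `state`,
    then return the first move toward the best frontier node seen -- the
    bounded-expansion decision policy discussed above, in sketch form."""
    frontier = [(h(state), state, None)]   # (h-value, node, first move from state)
    seen = {state}
    best_h, best_move = h(state), None
    expanded = 0
    while frontier and expanded < budget:
        _, node, first = heapq.heappop(frontier)
        expanded += 1
        for nxt in neighbors(node):
            if nxt in seen:
                continue
            seen.add(nxt)
            move = nxt if first is None else first
            heapq.heappush(frontier, (h(nxt), nxt, move))
            if h(nxt) < best_h:
                best_h, best_move = h(nxt), move
    return best_move
```

For instance, on the integer line with goal 10, a budget of 5 expansions from state 0 already commits the agent to stepping toward 1.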
PHA*: Finding the shortest path with A* in unknown physical environments
 JAIR
, 2004
"... We address the problem of nding the shortest path between two points in an unknown real physical environment, where a traveling agent must move around in the environment to explore unknown territory. Weintroduce the PhysicalA * algorithm (PHA*) for solving this problem. PHA * expands all the mandat ..."
Abstract

Cited by 13 (7 self)
We address the problem of finding the shortest path between two points in an unknown real physical environment, where a traveling agent must move around in the environment to explore unknown territory. We introduce the Physical-A* algorithm (PHA*) for solving this problem. PHA* expands all the mandatory nodes that A* would expand and returns the shortest path between the two points. However, due to the physical nature of the problem, the complexity of the algorithm is measured by the traveling effort of the moving agent and not by the number of generated nodes, as in standard A*. PHA* is presented as a two-level algorithm, such that its high level, A*, chooses the next node to be expanded and its low level directs the agent to that node in order to explore it. We present a number of variations for both the high-level and low-level procedures and evaluate their performance theoretically and experimentally. We show that the travel cost of our best variation is fairly close to the optimal travel cost, assuming that the mandatory nodes of A* are known in advance. We then generalize our algorithm to the multi-agent case, where a number of cooperative agents are designed to solve the problem. Specifically, we provide an experimental implementation for such a system. It should be noted that the problem addressed here is not a navigation problem, but rather a problem of finding the shortest path between two points for future usage.
KBFS: K-Best-First Search
 Annals of Mathematics and Artificial Intelligence
, 2003
"... We introduce a new algorithm, Kbestfirst search (KBFS), which is a generalization of the well known bestfirst search. In KBFS, each iteration simultaneously expands the K best nodes from the openlist (rather than just the best as in BFS). We claim that KBFS outperforms BFS in domains where t ..."
Abstract

Cited by 13 (1 self)
We introduce a new algorithm, K-best-first search (KBFS), which is a generalization of the well-known best-first search (BFS). In KBFS, each iteration simultaneously expands the K best nodes from the open list (rather than just the best one, as in BFS). We claim that KBFS outperforms BFS in domains where the heuristic function has large errors in estimating the real distance to the goal state or does not predict dead ends in the search tree. We present empirical results that confirm this claim and show that KBFS outperforms BFS by a factor of 15 on random trees with dead ends, and by factors of 2 and 7 on the Fifteen and Twenty-Four tile puzzles, respectively. KBFS also finds better solutions than BFS and hill-climbing for the number partitioning problem. KBFS is only appropriate for finding approximate solutions with inadmissible heuristic functions.
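In sketch form (my own minimal rendering, guided by the heuristic alone for brevity), the batch expansion that distinguishes KBFS from best-first search looks like this; k = 1 reduces to ordinary best-first search:

```python
import heapq

def kbfs(start, goal, neighbors, h, k=2):
    """K-best-first search sketch: each iteration removes the k best open
    nodes (here ranked by heuristic h) and expands all of them before the
    open list is consulted again. Returns a path, possibly suboptimal,
    since h is not assumed admissible."""
    open_heap = [(h(start), start, [start])]
    closed = set()
    while open_heap:
        batch = [heapq.heappop(open_heap)
                 for _ in range(min(k, len(open_heap)))]
        for _, node, path in batch:
            if node == goal:
                return path
            if node in closed:
                continue
            closed.add(node)
            for nxt in neighbors(node):
                if nxt not in closed:
                    heapq.heappush(open_heap, (h(nxt), nxt, path + [nxt]))
    return None
```

Expanding a batch rather than a single node is what hedges against a heuristic that misranks the single best node, e.g. by pointing into a dead end.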
ε-Transformation: Exploiting Phase Transitions to Solve Combinatorial Optimization Problems
 Artificial Intelligence
, 1994
"... It has been shown that there exists a transition in the averagecase complexity of tree search problems, from exponential to polynomial in the search depth. We develop a new method, called ffl transformation, which makes use of this complexity transition, to find a suboptimal solution. With a rando ..."
Abstract

Cited by 12 (3 self)
It has been shown that there exists a transition in the average-case complexity of tree search problems, from exponential to polynomial in the search depth. We develop a new method, called ε-transformation, which makes use of this complexity transition to find a suboptimal solution. With a random tree model, we show that the expected number of nodes expanded by branch-and-bound (BnB) using ε-transformation is at most cubic in the search depth, and that the error of the solution cost found, relative to the optimal solution cost, is a small constant. We also present an iterative version of ε-transformation that can be used to find both optimal and suboptimal goal nodes. Depth-first BnB (DFBnB) using iterative ε-transformation significantly improves upon truncated DFBnB on random trees with large branching factors and deep goal nodes, finding better solutions sooner on average. Our experiments on the asymmetric traveling salesman problem show that DFBnB using ε-transformati...
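The baseline here, depth-first branch-and-bound, prunes any partial path whose accumulated cost already matches the best complete solution found so far. A generic sketch (not the ε-transformation itself; names are mine):

```python
def dfbnb(root, children, edge_cost, is_goal):
    """Depth-first branch-and-bound on a tree: abandon any partial path
    whose accumulated cost already reaches the incumbent goal cost."""
    best = [float("inf"), None]  # [best cost found, corresponding path]

    def visit(node, g, path):
        if g >= best[0]:
            return  # pruned: cannot beat the incumbent
        if is_goal(node):
            best[0], best[1] = g, path
            return
        for child in children(node):
            visit(child, g + edge_cost(node, child), path + [child])

    visit(root, 0, [root])
    return best[0], best[1]
```

The ε-transformation idea described above changes the edge costs so that BnB on the transformed tree lands in the polynomial regime of the complexity transition; the pruning skeleton stays the same.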
An expected-cost analysis of backtracking and non-backtracking algorithms
In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI)
, 1991
"... Consider an infinite binary search tree in which the branches have independent random costs. Suppose that we must find an optimal (cheapest) or nearly optimal path from the root to a node at depth n. Karp and Pearl [1983] show that a boundedlookahead backtracking algorithm A2 usually finds a nearly ..."
Abstract

Cited by 9 (0 self)
Consider an infinite binary search tree in which the branches have independent random costs. Suppose that we must find an optimal (cheapest) or nearly optimal path from the root to a node at depth n. Karp and Pearl [1983] show that a bounded-lookahead backtracking algorithm A2 usually finds a nearly optimal path in linear expected time (when the costs take only the values 0 or 1). From this successful performance one might conclude that similar heuristics should be of more general use. But we find here equal success for a simpler non-backtracking bounded-lookahead algorithm, so the search model cannot support this conclusion. If, however, the search tree is generated by a branching process so that there is a possibility of nodes having no sons (or branches having prohibitive costs), then the non-backtracking algorithm is hopeless while the backtracking algorithm still performs very well. These results suggest the general guideline that backtracking becomes attractive when there is the possibility of "dead ends" or prohibitively costly outcomes.
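The dead-end guideline can be illustrated with a toy branching-process tree (my own sketch, not the A2 algorithm of Karp and Pearl): a non-backtracking walker fails at the first childless node, while a backtracking search succeeds whenever any full-depth path exists at all.

```python
import random

def random_tree(depth, p_child, rng):
    """Branching-process tree: each of two potential children exists
    independently with probability p_child. 'goal' marks a node at full
    depth; an empty child list is a dead end."""
    if depth == 0:
        return "goal"
    return [random_tree(depth - 1, p_child, rng)
            for _ in range(2) if rng.random() < p_child]

def greedy_no_backtrack(tree, rng):
    """Descend by picking a random child at each level; fail at the
    first dead end -- no ability to revise earlier choices."""
    while tree != "goal":
        if not tree:
            return False
        tree = rng.choice(tree)
    return True

def dfs_backtrack(tree):
    """Backtracking search: succeeds iff some root-to-full-depth path exists."""
    return tree == "goal" or any(dfs_backtrack(c) for c in tree)
```

Since backtracking succeeds on every tree the greedy walker does (and more), its success count over any sample of trees dominates the non-backtracking one, matching the guideline above.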