Results 1–8 of 8
Algorithms for the Satisfiability (SAT) Problem: A Survey
DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 1996
Abstract (Cited by 127, 3 self)
The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided design, computer-aided manufacturing, machine vision, databases, robotics, integrated circuit design, computer architecture design, and computer network design. Traditional methods treat SAT as a discrete, constrained decision problem. In recent years, many optimization methods, parallel algorithms, and practical techniques have been developed for solving SAT. In this survey, we present a general framework (an algorithm space) that integrates existing SAT algorithms into a unified perspective. We describe sequential and parallel SAT algorithms including variable splitting, resolution, local search, global optimization, mathematical programming, and practical SAT algorithms. We give performance evaluation of some existing SAT algorithms. Finally, we provide a set of practical applications of the sat...
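The local-search family the survey covers can be illustrated with a minimal GSAT-style greedy flip loop. This is a generic sketch, not a specific algorithm from the survey; the clause encoding (signed integers for literals) and the parameter names are choices made here for illustration:

```python
import random

def gsat(clauses, n_vars, max_flips=10000, max_tries=10, seed=0):
    """GSAT-style local search for SAT (illustrative sketch).
    clauses: list of clauses; each clause is a list of nonzero ints,
    where +v / -v denote the positive / negative literal of variable v."""
    rng = random.Random(seed)

    def num_sat(asn):
        # A literal l is satisfied when its sign matches the assignment.
        return sum(any((l > 0) == asn[abs(l)] for l in c) for c in clauses)

    for _ in range(max_tries):
        # Random restart: fresh random truth assignment.
        asn = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if num_sat(asn) == len(clauses):
                return asn
            # Greedy move: flip the variable whose flip satisfies
            # the most clauses.
            def gain(v):
                asn[v] = not asn[v]
                s = num_sat(asn)
                asn[v] = not asn[v]
                return s
            best = max(range(1, n_vars + 1), key=gain)
            asn[best] = not asn[best]
    return None
```

Recomputing `num_sat` per candidate flip is quadratic per step; real implementations keep incremental per-clause satisfaction counts, but the greedy structure is the same.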
Efficient Local Search with Conflict Minimization: A Case Study of the N-Queens Problem
IEEE Transactions on Knowledge and Data Engineering, 1994
Abstract (Cited by 27, 6 self)
Backtracking search is frequently applied to solve a constraint-based search problem, but it often suffers from exponential growth of computing time. We present an alternative to backtracking search: local search based on conflict minimization. We have applied this general search framework to study a benchmark constraint-based search problem, the n-queens problem. An efficient local search algorithm for the n-queens problem was implemented. This algorithm, running in linear time, does not backtrack at all. It is capable of finding a solution for extremely large n-queens problems. For example, on a workstation computer, it can find a solution for 3,000,000 queens in less than 55 seconds. Keywords: conflict minimization, local search, n-queens problem, non-backtracking search. (This research has been supported in part by University of Utah research fellowships, in part by the Research Council of Slovenia, and in part by ACM/IEEE academic scholarship awards.) ...
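The conflict-minimization idea can be sketched as a min-conflicts repair loop for n-queens. This is an illustrative reimplementation, not the paper's algorithm: the paper's linear-time method relies on careful bookkeeping of per-row and per-diagonal conflict counters, whereas this naive version recomputes conflicts from scratch and is only practical for modest n:

```python
import random

def min_conflicts_nqueens(n, max_steps=100000, seed=0):
    """Local search for n-queens via conflict minimization.
    queens[col] = row of the queen in that column (one queen per column,
    so column conflicts are impossible by construction)."""
    rng = random.Random(seed)
    queens = [rng.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        # Count queens attacking square (row, col), excluding column col.
        c = 0
        for other in range(n):
            if other == col:
                continue
            r = queens[other]
            if r == row or abs(r - row) == abs(other - col):
                c += 1
        return c

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, queens[c]) > 0]
        if not conflicted:
            return queens  # no queen is attacked: a solution
        col = rng.choice(conflicted)
        # Move this queen to a row minimizing conflicts, breaking
        # ties randomly to avoid deterministic cycles.
        best = min(conflicts(col, r) for r in range(n))
        queens[col] = rng.choice(
            [r for r in range(n) if conflicts(col, r) == best])
    return None  # step budget exhausted; caller may restart
```

As a local search, it can stall on a plateau, so a caller would typically retry with a different seed on a `None` result.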
Plan-Refinement Strategies and Search-Space Size
Proceedings of the European Conference on Planning, 1997
Abstract (Cited by 17, 5 self)
During the planning process, a planner may have many options for refinements to perform on the plan being developed. The planner's efficiency depends on how it chooses which refinement to do next. Recent studies have shown that several versions of the popular "least commitment" plan refinement strategy are often outperformed by a fewest alternatives first (FAF) strategy that chooses to refine the plan element that has the smallest number of alternative refinement options. In this paper, we examine the FAF strategy in more detail, to try to gain a better understanding of how well it performs and why. We present the following results:
- A refinement planner's search space is an AND/OR graph, and the planner "serializes" this graph by mapping it into an equivalent state-space graph. Different plan refinement strategies produce different serializations of the AND/OR graph.
- The sizes of different serializations of the AND/OR graph can differ by an exponential amount. A planner whose re...
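The FAF choice itself is simple to state in code. The data shape here (a mapping from open plan elements to their candidate refinements) is hypothetical, chosen only to make the selection rule concrete:

```python
def faf_choose(open_elements):
    """Fewest Alternatives First: pick the plan element with the
    smallest number of alternative refinement options.

    open_elements: dict mapping a plan element (e.g. an open goal or
    threat) to the list of refinement options available for it.
    (Hypothetical data shape for illustration.)"""
    return min(open_elements, key=lambda e: len(open_elements[e]))
```

For example, with `{"goal-a": ["op1", "op2", "op3"], "goal-b": ["op4"]}` the strategy refines `"goal-b"` first, since committing where fewest alternatives exist keeps the serialized search space small.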
Backtracking and Probing, 1993
Abstract (Cited by 6, 2 self)
We analyze two algorithms for solving constraint satisfaction problems. One of these algorithms, Probe Order Backtracking, has an average running time much faster than any previously analyzed algorithm for problems where solutions are common. Probe Order Backtracking uses a probing assignment (a preselected test assignment to unset variables) to help guide the search for a solution to a constraint satisfaction problem. If the problem is not satisfied when the unset variables are temporarily set to the probing assignment, the algorithm selects one of the relations that the probing assignment fails to satisfy and selects an unset variable from that relation. Then at each backtracking step it generates subproblems by setting the selected variable each possible way. It simplifies each subproblem, and tries the same technique on them. For random problems with v variables, t clauses, and probability p that a literal appears in a clause, the average time for Probe Order Backtracking is no m...
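The probing idea can be sketched on CNF-style constraints. This is a reconstruction from the abstract, not the authors' code: the all-true probing assignment, the signed-integer literal encoding, and the first-failing-clause selection rule are assumptions made for illustration (the simplification step is also omitted):

```python
def probe_backtrack(clauses, n_vars, probe=None, assignment=None):
    """Sketch of probe-order backtracking for CNF-style constraints.
    clauses: list of clauses, each a list of literals (+v or -v, 1-based).
    probe: preselected test assignment for unset variables
    (all-true by default, an assumption of this sketch).
    Returns a satisfying assignment dict or None."""
    if probe is None:
        probe = {v: True for v in range(1, n_vars + 1)}
    if assignment is None:
        assignment = {}

    def value(lit, asn):
        v = abs(lit)
        val = asn.get(v, probe[v])  # unset variables take probe values
        return val if lit > 0 else not val

    # Temporarily set all unset variables to the probing assignment.
    failing = [c for c in clauses
               if not any(value(l, assignment) for l in c)]
    if not failing:
        # The probe completes the partial assignment to a solution.
        full = dict(assignment)
        for v in range(1, n_vars + 1):
            full.setdefault(v, probe[v])
        return full
    # Pick an unset variable from a clause the probe fails to satisfy,
    # then split on it (backtracking step).
    clause = failing[0]
    unset = [abs(l) for l in clause if abs(l) not in assignment]
    if not unset:
        return None  # clause falsified by committed variables: dead end
    var = unset[0]
    for val in (True, False):
        result = probe_backtrack(clauses, n_vars, probe,
                                 {**assignment, var: val})
        if result is not None:
            return result
    return None
```

Each recursive call commits one more variable, so the recursion depth is at most the number of variables; the probe simply decides which variable to split on next.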
The Probability of Pure Literals
Abstract (Cited by 6, 1 self)
We describe an error in earlier probabilistic analyses of the pure literal heuristic as a procedure for solving k-SAT. All probabilistic analyses are in the constant degree model, in which a random instance C of k-SAT consists of m clauses selected independently and uniformly (with replacement) from the set of all k-clauses over n variables. We provide a new analysis for k = 2. Specifically, we show that, with probability approaching 1 as m goes to infinity, one can apply the pure literal rule repeatedly to a random instance of 2-SAT until the number of clauses is "small", provided n/m > 1. But if n/m < 1, with probability approaching 1, if the pure literal rule is applied as much as possible, then at least m^{1/5} clauses will remain. Keywords: 2-SAT, constant degree model, Davis-Putnam procedure, pure literal (heuristic), probability of a pure literal
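The pure literal rule under analysis can be sketched as repeated elimination. The signed-integer literal encoding is an encoding choice of this sketch, not the paper's:

```python
from collections import Counter

def pure_literal_reduce(clauses):
    """Repeatedly apply the pure literal rule: if a variable occurs with
    only one polarity, every clause containing that literal can be
    satisfied by setting the variable accordingly, so those clauses are
    removed. Returns the irreducible remaining clauses.
    clauses: list of clauses; each clause is a list of signed ints."""
    clauses = [list(c) for c in clauses]
    while True:
        counts = Counter(l for c in clauses for l in c)
        # A literal is pure when its complement never occurs.
        pure = {l for l in counts if -l not in counts}
        if not pure:
            return clauses
        clauses = [c for c in clauses if not any(l in pure for l in c)]
```

If the loop terminates with an empty list, the pure literal rule alone solved the instance; the paper's result concerns how many clauses survive this loop on random 2-SAT instances.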
Domain Independent Heuristics in Hybrid Algorithms for CSPs, 1994
Abstract (Cited by 5, 0 self)
Over the years a large number of algorithms have been discovered to solve instances of CSP problems. In a recent paper, Prosser [9] proposed a new approach to these algorithms by splitting them up into groups with identical forward (Backtracking, Backjumping, Conflict-Directed Backjumping) and backward (Backtracking, Backmarking, Forward Checking) moves. By combining the forward move of an algorithm from the first group and the backward move of an algorithm from the second group, he was able to develop four new hybrid algorithms: Backmarking with Backjumping (BMJ), Backmarking with Conflict-Directed Backjumping (BM-CBJ), Forward Checking with Backjumping (FC-BJ) and Forward Checking with Conflict-Directed Backjumping (FC-CBJ). Variable reordering heuristics have been suggested by, among others, Haralick [6] and Purdom [11, 14] to improve the standard CSP algorithms. They obtained both analytical and empirical results about the performance of these heuristics in their research. In this thes...
Probe Order Backtracking, 1997
Abstract (Cited by 4, 0 self)
The algorithm for constraint-satisfaction problems, Probe Order Backtracking, has an average running time much faster than any previously analyzed algorithm under conditions where solutions are common. The algorithm uses a probing assignment (a preselected test assignment to unset variables) to help guide the search for a solution. If the problem is not satisfied when the unset variables are temporarily set to the probing assignment, the algorithm selects one of the relations which is not satisfied by the probing assignment and selects an unset variable which affects the value of that relation. It then does a backtracking (splitting) step, where it generates subproblems by setting the selected variable each possible way. Each subproblem is simplified and then solved recursively. For random problems with v variables, t clauses, and probability p that a literal appears in a clause, the average time for Probe Order Backtracking is no more than v n when p ≥ (ln t)/v plus lower-order t...
Letavec and Ruggiero, The n-Queens Problem
Abstract
The n-queens problem, originally introduced in 1850 by Carl Gauss, may be stated as follows: find a placement of n queens on an n×n chessboard, such that no one queen can be taken by any other. While it has been well known that the solution to the n-queens problem is n, numerous solutions have been published since the original problem was proposed. Many of these solutions rely on providing a specific formula for placing queens or transposing smaller solution sets to provide solutions for larger values of n (Bernhardsson, 1991 and Hoffman et al., 1969). Empirical observations of smaller-size problems show that the number of solutions increases exponentially with increasing n (Sosič and Gu, 1994). Alternatively, search-based algorithms have been ...