Results 1–9 of 9
Algorithms for the Satisfiability (SAT) Problem: A Survey
 DIMACS Series in Discrete Mathematics and Theoretical Computer Science
, 1996
Abstract

Cited by 125 (3 self)
The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided design, computer-aided manufacturing, machine vision, databases, robotics, integrated circuit design, computer architecture design, and computer network design. Traditional methods treat SAT as a discrete, constrained decision problem. In recent years, many optimization methods, parallel algorithms, and practical techniques have been developed for solving SAT. In this survey, we present a general framework (an algorithm space) that integrates existing SAT algorithms into a unified perspective. We describe sequential and parallel SAT algorithms, including variable splitting, resolution, local search, global optimization, mathematical programming, and practical SAT algorithms. We give a performance evaluation of some existing SAT algorithms. Finally, we provide a set of practical applications of the SAT problem.
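The "variable splitting" family the survey lists can be illustrated with a minimal DPLL-style recursive search. This is a generic sketch, not any specific algorithm from the survey; clause representation and names are illustrative.

```python
# Minimal DPLL-style variable-splitting search (illustrative sketch).
# Clauses are frozensets of nonzero ints: a positive int is a variable,
# a negative int its negation.

def simplify(clauses, literal):
    """Assign `literal` True: drop satisfied clauses, strip its negation."""
    out = []
    for c in clauses:
        if literal in c:
            continue                      # clause satisfied
        if -literal in c:
            c = c - {-literal}            # falsified literal removed
            if not c:
                return None               # empty clause: conflict
        out.append(c)
    return out

def dpll(clauses):
    """Return True iff the clause set is satisfiable (exponential worst case)."""
    if not clauses:
        return True
    var = next(iter(clauses[0]))          # split on some variable
    for lit in (var, -var):
        reduced = simplify(clauses, lit)
        if reduced is not None and dpll(reduced):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
f = [frozenset(c) for c in ([1, 2], [-1, 2], [-2, 3])]
print(dpll(f))                                   # satisfiable
print(dpll([frozenset([1]), frozenset([-1])]))   # unsatisfiable
```

Splitting on a variable and recursing on both simplified subproblems is the shared skeleton of the sequential algorithms the survey classifies; resolution, local search, and the optimization formulations differ in what they do instead of (or before) the split.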
Breadth-first search 3-SAT algorithms for DNA computers
, 1996
Abstract

Cited by 16 (5 self)
This paper demonstrates that some practical 3-SAT algorithms on conventional computers can be implemented on a DNA computer as a polynomial-time breadth-first search procedure based only on the fundamental chemical operations identified by Adleman and Lipton's method. In particular, the Monien-Speckenmeyer algorithm, when implemented on DNA, becomes an algorithm whose running time increases significantly while its space decreases significantly. This paper also proposes a fast breadth-first search method with fixed split points. The running time is at most twice that of Lipton's. Although a theoretical analysis of the algorithm is yet to be done, simulations on a conventional computer suggest that the algorithm could significantly reduce the search space for 3-SAT in most cases. If the observation is correct, the algorithm would allow DNA computers to handle 3-SAT formulas of more than 120 variables, thereby doubling the limit given by Lipton.
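The pool-based breadth-first search the abstract describes can be sketched in software: keep a pool of partial assignments (as a DNA computation keeps a tube of strands), split the pool on one variable per round, and discard assignments that already falsify a clause. This is an illustrative sketch of the general idea, not the Monien-Speckenmeyer procedure or the paper's fixed-split-point method.

```python
# Breadth-first 3-SAT search over a pool of partial assignments (sketch).
# Clauses are tuples of nonzero ints; assignments are frozensets of literals.

def falsifies(assignment, clauses):
    """True if some clause has every one of its literals assigned False."""
    for clause in clauses:
        if all(-lit in assignment for lit in clause):
            return True
    return False

def bfs_3sat(clauses, num_vars):
    pool = [frozenset()]                    # one empty partial assignment
    for var in range(1, num_vars + 1):
        next_pool = []
        for a in pool:
            for lit in (var, -var):         # breadth-first split on `var`
                b = a | {lit}
                if not falsifies(b, clauses):
                    next_pool.append(b)
        pool = next_pool
    return bool(pool)                       # any surviving full assignment?

f = [(1, 2, 3), (-1, 2, 3), (1, -2, 3), (1, 2, -3)]
print(bfs_3sat(f, 3))   # True: e.g. all variables True satisfy f
```

The pool can grow exponentially in the worst case; the appeal of the DNA setting, and of the pruning studied in the paper, is that the pool is processed in parallel and shrunk early.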
Elimination Of Infrequent Variables Improves Average Case Performance Of Satisfiability Algorithms
 SIAM J. Comput
, 1991
Abstract

Cited by 16 (5 self)
We consider preprocessing a random instance I of CNF Satisfiability in order to remove infrequent variables (those which appear once or twice in an instance) from I. The model used to generate random instances is the popular random clause-size model with parameters n, the number of clauses, r, the number of Boolean variables from which clauses are composed, and p, the probability that a variable appears in a clause as a positive (or negative) literal. It is shown that exhaustive search over such preprocessed instances runs in polynomial average time over a significantly larger parameter space than has been shown for any other algorithm under the random clause-size model when n = r^ε, ε < 1, and pr < (ε r ln(r))^{1/2}. Specifically, the results are that random instances of Satisfiability are "easy" in the average case if n = r^ε, 2/3 > ε > 0, and pr < (ln(n)/4)^{1/3} r^{2/3-ε}; or n = r^ε, 1 > ε ≥ 2/3, and pr < (1 - ε - δ) ln(n)/ε for any δ > 0...
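The preprocessing idea can be sketched concretely: scan the instance for variables occurring once or twice and eliminate them, satisfying pure literals and resolving away a variable that occurs once in each polarity. The code below is a sketch in that spirit, not the paper's exact procedure; names and the elimination order are illustrative.

```python
# Eliminate infrequent variables (<= 2 occurrences) from a CNF instance.
# Clauses are frozensets of nonzero ints.

from collections import Counter

def eliminate_infrequent(clauses):
    clauses = [frozenset(c) for c in clauses]
    changed = True
    while changed:
        changed = False
        counts = Counter(lit for c in clauses for lit in c)
        for var in {abs(l) for l in counts}:
            pos, neg = counts[var], counts[-var]
            if pos + neg > 2:
                continue                     # not infrequent
            if pos == 0 or neg == 0:
                # Pure literal: satisfy it and drop its clauses.
                lit = var if pos else -var
                clauses = [c for c in clauses if lit not in c]
            else:
                # One occurrence of each polarity: resolve the two clauses.
                cp = next(c for c in clauses if var in c)
                cn = next(c for c in clauses if -var in c)
                if cp == cn:                 # tautological clause: drop it
                    clauses = [c for c in clauses if c != cp]
                else:
                    resolvent = (cp - {var}) | (cn - {-var})
                    clauses = [c for c in clauses if c not in (cp, cn)]
                    if resolvent:
                        clauses.append(resolvent)
            changed = True
            break                            # recompute counts after a change
    return clauses

f = [(1, 2), (-1, 3), (2, 3), (2, -3)]
print(eliminate_infrequent(f))   # variable 1 is resolved away
```

Both elimination steps preserve satisfiability, so exhaustive search on the reduced instance decides the original one; the paper's contribution is the average-case analysis of that combination.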
Average Case Results for Satisfiability Algorithms Under the Random Clause Width Model
 Annals of Mathematics and Artificial Intelligence
, 1995
Abstract

Cited by 9 (1 self)
In the probabilistic analysis of algorithms for the Satisfiability problem, the random clause-width model is one of the most popular for generating random instances. This model is parameterized, and it is not difficult to show that virtually the entire parameter space is covered by a collection of polynomial-time algorithms that find solutions to random instances with probability tending to 1 as instance size increases. But finding a collection of polynomial average time algorithms that cover the parameter space has proved much harder, and such results have spanned approximately ten years. However, it can now be said that virtually the entire parameter space is covered by polynomial average time algorithms. This paper relates dominant, exploitable properties of random formulas over the parameter space to mechanisms of polynomial average time algorithms. The probabilistic discussion of such properties is new; the main average-case results over the last ten years are reviewed.
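The generation model underlying these abstracts is easy to state in code: each of the 2r literals is placed in each of the n clauses independently with probability p, so clause widths vary and null clauses can occur. A minimal sketch, with parameter names following the abstracts:

```python
# Random clause-width (constant-probability) instance generator (sketch).
# Each of the 2r literals appears in each of n clauses independently
# with probability p; empty (null) clauses are possible by design.

import random

def random_instance(n, r, p, seed=0):
    rng = random.Random(seed)                       # seeded for repeatability
    lits = [v for i in range(1, r + 1) for v in (i, -i)]
    return [tuple(l for l in lits if rng.random() < p) for _ in range(n)]

inst = random_instance(n=5, r=4, p=0.3)
print(inst)   # 5 random clauses over variables 1..4
```

The three parameters n, r, p carve out the parameter space that the collections of average-case algorithms discussed here must jointly cover.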
Backtracking and Probing
, 1993
Abstract

Cited by 6 (2 self)
We analyze two algorithms for solving constraint satisfaction problems. One of these algorithms, Probe Order Backtracking, has an average running time much faster than any previously analyzed algorithm for problems where solutions are common. Probe Order Backtracking uses a probing assignment (a preselected test assignment to unset variables) to help guide the search for a solution to a constraint satisfaction problem. If the problem is not satisfied when the unset variables are temporarily set to the probing assignment, the algorithm selects one of the relations that the probing assignment fails to satisfy and selects an unset variable from that relation. Then, at each backtracking step, it generates subproblems by setting the selected variable each possible way. It simplifies each subproblem and tries the same technique on them. For random problems with v variables, t clauses, and probability p that a literal appears in a clause, the average time for Probe Order Backtracking is no more than...
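The probing idea described above can be sketched for CNF instances: temporarily complete the current partial assignment with a fixed probing assignment (all-True below), and only branch, on a variable of a violated clause, when the probe fails. This is an illustrative sketch under those assumptions, not the analyzed algorithm itself.

```python
# Probe-guided backtracking for CNF (illustrative sketch).
# Clauses are tuples of nonzero ints; `assignment` holds decided literals.

def probe_backtrack(clauses, assignment=frozenset()):
    def value(lit):
        """Truth value of `lit` under assignment + all-True probe."""
        if lit in assignment or -lit in assignment:
            return lit in assignment
        return lit > 0                         # probe: unset variables -> True
    violated = [c for c in clauses if not any(value(l) for l in c)]
    if not violated:
        return True                            # the probe satisfies everything
    # Branch on an unset variable of one clause the probe fails to satisfy.
    clause = violated[0]
    unset = [abs(l) for l in clause
             if l not in assignment and -l not in assignment]
    if not unset:
        return False                           # clause falsified outright
    var = unset[0]
    return any(probe_backtrack(clauses, assignment | {lit})
               for lit in (var, -var))

f = [(-1, 2), (-2, 3), (-3, -1)]
print(probe_backtrack(f))   # True: e.g. all variables False satisfy f
```

When solutions are common the probe often succeeds immediately, which is the intuition behind the fast average-case bound claimed in the abstract.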
The Probability of Pure Literals
Abstract

Cited by 6 (1 self)
We describe an error in earlier probabilistic analyses of the pure literal heuristic as a procedure for solving k-SAT. All probabilistic analyses are in the constant degree model, in which a random instance C of k-SAT consists of m clauses selected independently and uniformly (with replacement) from the set of all k-clauses over n variables. We provide a new analysis for k = 2. Specifically, we show that, with probability approaching 1 as m goes to infinity, one can apply the pure literal rule repeatedly to a random instance of 2-SAT until the number of clauses is "small", provided n/m > 1. But if n/m < 1, with probability approaching 1, if the pure literal rule is applied as much as possible, then at least m^{1/5} clauses will remain. Keywords: 2-SAT, constant degree model, Davis-Putnam Procedure, pure literal (heuristic), probability of a pure literal
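The operation being analyzed is simple to state: a literal is pure if its complement appears in no clause, and every clause containing a pure literal can be satisfied by setting that literal True and removed. A minimal sketch of applying the rule "as much as possible":

```python
# Repeated application of the pure literal rule (sketch).
# Clauses are tuples of nonzero ints.

def apply_pure_literals(clauses):
    clauses = [tuple(c) for c in clauses]
    while True:
        literals = {lit for c in clauses for lit in c}
        pure = {lit for lit in literals if -lit not in literals}
        if not pure:
            return clauses              # no pure literal left
        clauses = [c for c in clauses if not any(l in pure for l in c)]

f = [(1, 2), (-2, 3), (-2, -3)]
print(apply_pure_literals(f))   # all clauses eliminated: prints []
```

The paper's question is how far this iteration gets on a random 2-SAT instance, and the answer turns out to split at the density n/m = 1.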
Probe Order Backtracking
, 1997
Abstract

Cited by 4 (0 self)
The algorithm for constraint-satisfaction problems, Probe Order Backtracking, has an average running time much faster than any previously analyzed algorithm under conditions where solutions are common. The algorithm uses a probing assignment (a preselected test assignment to unset variables) to help guide the search for a solution. If the problem is not satisfied when the unset variables are temporarily set to the probing assignment, the algorithm selects one of the relations which is not satisfied by the probing assignment and selects an unset variable which affects the value of that relation. It then does a backtracking (splitting) step, where it generates subproblems by setting the selected variable each possible way. Each subproblem is simplified and then solved recursively. For random problems with v variables, t clauses, and probability p that a literal appears in a clause, the average time for Probe Order Backtracking is no more than v^n when p ≥ (ln t)/v, plus lower-order terms...
On the Occurrence of Null Clauses in Random Instances of Satisfiability
 Discrete Applied Mathematics
, 1989
Abstract
We analyze a popular probabilistic model for generating instances of Satisfiability. According to this model, each literal of a set L = {v_1, v̄_1, v_2, v̄_2, ..., v_r, v̄_r} of literals appears independently in each of n clauses with probability p. This model allows null clauses, and the frequency of occurrence of such clauses depends on the relationship between the parameters n, r, and p. If an instance contains a null clause it is trivially unsatisfiable. Several papers present polynomial average time results under this model when null clauses are numerous (e.g. [4,5]) but, until now, not all such cases have been covered by average-case efficient algorithms. In fact, a recent paper [2] shows that the average complexity of the pure literal rule is superpolynomial even when most random instances contain a null clause. We show here that a simple strategy based on locating null clauses in a given random input has polynomial average complexity if either n ≥ r^{.5} and pr < ln(n)/2...
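The check the strategy is built on is trivial to implement, and the quantity the abstract refers to (how often null clauses occur) has a closed form under the model: a clause is null exactly when all 2r literals are absent. A sketch, with illustrative names:

```python
# Null-clause detection and its probability under the model (sketch).
# A clause is a tuple of nonzero ints; an empty tuple is a null clause.

def has_null_clause(clauses):
    """An instance containing a null clause is trivially unsatisfiable."""
    return any(len(c) == 0 for c in clauses)

def clause_is_null_prob(r, p):
    """P(a given clause is null): each of the 2r literals is absent."""
    return (1 - p) ** (2 * r)

print(has_null_clause([(1, -2), ()]))    # True: trivially unsatisfiable
print(has_null_clause([(1, -2), (2,)]))  # False
```

With n clauses, the expected number of null clauses is n(1 - p)^{2r} ≈ n e^{-2pr}, which is why conditions like pr < ln(n)/2 make null clauses plentiful enough for this strategy to pay off on average.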
Some Interesting Research Directions in Satisfiability
, 2000
Abstract
Some reflections on past research in SAT algorithms are presented. Some of the more important goals of SAT research are stated, and results are given. Possible future SAT research topics are outlined.