Results 1-10 of 24
GRASP: A Search Algorithm for Propositional Satisfiability
 IEEE Transactions on Computers
, 1999
Abstract

Cited by 374 (36 self)
This paper introduces GRASP (Generic seaRch Algorithm for the Satisfiability Problem), a new search algorithm for Propositional Satisfiability (SAT). GRASP incorporates several search-pruning techniques that proved to be quite powerful on a wide variety of SAT problems. Some of these techniques are specific to SAT, whereas others are similar in spirit to approaches in other fields of Artificial Intelligence. GRASP is premised on the inevitability of conflicts during the search, and its most distinguishing feature is the augmentation of basic backtracking search with a powerful conflict analysis procedure. Analyzing conflicts to determine their causes enables GRASP to backtrack non-chronologically to earlier levels in the search tree, potentially pruning large portions of the search space. In addition, by "recording" the causes of conflicts, GRASP can recognize and preempt the occurrence of similar conflicts later on in the search. Finally, straightforward bookkeeping of the causality chains leading up to conflicts allows GRASP to identify assignments that are necessary for a solution to be found. Experimental results obtained from a large number of benchmarks indicate that application of the proposed conflict analysis techniques to SAT algorithms can be extremely effective for a large number of representative classes of SAT instances. Index Terms: satisfiability, search algorithms, conflict diagnosis, conflict-directed non-chronological backtracking, conflict-based equivalence, failure-driven assertions, unique implication points.
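The conflict-analysis idea in this abstract can be illustrated with a toy example. The sketch below is hypothetical and not GRASP's actual implementation: unit propagation records an "antecedent" clause for each implied variable, and on conflict, tracing antecedents back to the decision variables yields a learned clause that blocks the same conflict later.

```python
# Toy sketch (not GRASP's implementation) of conflict analysis:
# unit propagation records an antecedent clause per implied variable;
# on conflict, tracing antecedents back to decisions yields a learned clause.

def unit_propagate(clauses, assign, antecedent):
    """Propagate unit clauses to fixpoint; return a conflicting clause or None."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assign.get(abs(l)) == (l > 0) for l in clause):
                continue                     # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assign]
            if not unassigned:
                return clause                # every literal false: conflict
            if len(unassigned) == 1:
                lit = unassigned[0]          # forced (implied) assignment
                assign[abs(lit)] = lit > 0
                antecedent[abs(lit)] = clause
                changed = True
    return None

def learn_clause(conflict, antecedent, decisions):
    """Trace the implication graph back to decisions; negate their assignments."""
    seen, reasons = set(), set()
    stack = [abs(l) for l in conflict]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        if v in decisions:
            reasons.add(v)                   # a decision variable caused this
        else:
            stack.extend(abs(l) for l in antecedent.get(v, []) if abs(l) != v)
    return [-v if decisions[v] else v for v in sorted(reasons)]
```

For instance, with clauses [[-1, 2], [-1, 3], [-2, -3]] and the single decision x1 = True, propagation implies x2 and x3, the clause [-2, -3] becomes conflicting, and the learned clause is [-1].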
The Rhetorical Parsing, Summarization, and Generation of Natural Language Texts
, 1997
Abstract

Cited by 108 (9 self)
This thesis is an inquiry into the nature of the high-level, rhetorical structure of unrestricted natural language texts, computational means to enable its derivation, and two applications (in automatic summarization and natural language generation) that follow from the ability to build such structures automatically. The thesis proposes a first-order formalization of the high-level, rhetorical structure of text. The formalization assumes that text can be sequenced into elementary units; that discourse relations hold between textual units of various sizes; that some textual units are more important to the writer's purpose than others; and that trees are a good approximation of the abstract structure of text. The formalization also introduces a linguistically motivated compositionality criterion, which is shown to hold for the text structures that are valid. The thesis proposes, analyzes theoretically, and compares empirically four algorithms for determining the valid text structures of ...
The Probabilistic Analysis of a Greedy Satisfiability Algorithm
, 2002
Abstract

Cited by 72 (5 self)
Consider the following simple, greedy Davis-Putnam algorithm applied to a random 3-CNF formula of fixed density (clauses-to-variables ratio): Arbitrarily select and set to True a literal that appears in as many clauses as possible, irrespective of their size (and irrespective of the number of occurrences of the negation of the literal). Delete these clauses from the formula, and also delete the negation of this literal from any clause in which it appears. Repeat. If, however, unit clauses ever appear, then first repeatedly, and in any order, set the literals in them to True, deleting and shrinking clauses accordingly, until no unit clause remains. If at any step an empty clause appears, do not backtrack; just terminate the algorithm and report failure. A slight modification of this algorithm is rigorously and probabilistically analyzed in this paper. It is proved that for random formulas of n variables and density up to 3.42, it succeeds in producing a satisfying truth assignment with probability bounded away from zero as n approaches infinity. Therefore, the satisfiability threshold is at least 3.42.
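The greedy procedure described above is concrete enough to sketch directly. The toy rendering below follows the abstract's description (it is not the slightly modified variant the paper analyzes): satisfy a most-frequent literal, propagate unit clauses first, and abort without backtracking on an empty clause.

```python
# Toy rendering of the greedy Davis-Putnam heuristic described above:
# repeatedly satisfy a most-frequent literal, handling unit clauses first,
# with no backtracking (report failure on an empty clause).
from collections import Counter

def greedy_dp(clauses):
    """Return a set of literals set True, or None if an empty clause appears."""
    clauses = [list(c) for c in clauses]
    assignment = set()
    while clauses:
        units = [c[0] for c in clauses if len(c) == 1]
        if units:
            lit = units[0]                       # unit clauses first, any order
        else:
            counts = Counter(l for c in clauses for l in c)
            lit = counts.most_common(1)[0][0]    # most frequent literal
        assignment.add(lit)
        new_clauses = []
        for c in clauses:
            if lit in c:
                continue                         # clause satisfied: delete it
            shrunk = [l for l in c if l != -lit]  # delete the negated literal
            if not shrunk:
                return None                      # empty clause: report failure
            new_clauses.append(shrunk)
        clauses = new_clauses
    return assignment
```

For a satisfiable toy formula such as [[1, 2, 3], [-1, 2], [-2, 3]] the procedure returns a model; for the contradictory [[1], [-1]] it reports failure.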
Look-ahead versus look-back for satisfiability problems
 Third International Conference on Principles and Practice of Constraint Programming, Lecture Notes in Computer Science
, 1997
Abstract

Cited by 62 (1 self)
CNF propositional satisfiability (SAT) is a special kind of the more general Constraint Satisfaction Problem (CSP). While look-back techniques appear to be of little use for solving hard random SAT problems, it is supposed that they are necessary for solving hard structured SAT problems. In this paper, we propose a very simple DPL procedure called Satz which employs only look-ahead techniques: a variable ordering heuristic, forward consistency checking (unit propagation), and limited resolution before the search, where the heuristic is itself based on unit propagation. Satz compares favorably on random 3-SAT problems with three DPL procedures that are among the best in the literature for these problems. Furthermore, on a great number of problems in four well-known SAT benchmark suites, Satz matches or exceeds the performance of three other DPL procedures that are among the best in the literature for structured SAT problems. The comparative results suggest that a suitable exploitation of look-ahead techniques, while very simple and efficient for random SAT problems, may make it possible to do without sophisticated look-back techniques in a DPL procedure.
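A unit-propagation-based branching heuristic in the spirit of the one described above can be sketched as follows. The scoring rule below is illustrative only, not the paper's exact weighting: each free variable is scored by how much unit propagation simplifies the formula under each of its two values, and the procedure branches on the highest-scoring variable.

```python
# Hedged sketch of a unit-propagation-based branching heuristic (illustrative
# scoring, not Satz's exact heuristic): score each variable by how much unit
# propagation shrinks the formula under both of its values.

def propagate(clauses, lit):
    """Assert lit and unit-propagate; return simplified clauses, or None on conflict."""
    assigned, forced = set(), [lit]
    while forced:
        l = forced.pop()
        if -l in assigned:
            return None                      # conflicting forced literals
        if l in assigned:
            continue
        assigned.add(l)
        new = []
        for c in clauses:
            if l in c:
                continue                     # clause satisfied: drop it
            c = [x for x in c if x != -l]    # falsified literal: shrink clause
            if not c:
                return None                  # empty clause: conflict
            if len(c) == 1:
                forced.append(c[0])          # new unit: propagate further
            new.append(c)
        clauses = new
    return clauses

def pick_branch_variable(clauses):
    variables = {abs(l) for c in clauses for l in c}
    def score(v):
        pos, neg = propagate(clauses, v), propagate(clauses, -v)
        def reduction(r):
            return len(clauses) if r is None else len(clauses) - len(r)
        return reduction(pos) * reduction(neg)   # reward balanced simplification
    return max(variables, key=score)
```

The product of the two reductions rewards variables that simplify the formula under both values, a common design choice for look-ahead heuristics.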
The Taming of the (X)OR
 CL 2000
, 2000
Abstract

Cited by 56 (7 self)
Many key verification problems, such as bounded model checking, circuit verification, and logical cryptanalysis, are formalized with combined clausal and affine logic (i.e., clauses with xor as the connective) and cannot be efficiently (if at all) solved by using CNF-only provers. We present a decision procedure to efficiently decide such problems. The Gauss-DPLL procedure is a tight integration, in a unifying framework, of a Gauss-elimination procedure (for affine logic) and a Davis-Putnam-Logemann-Loveland procedure (for usual clause logic). The key idea, which distinguishes our approach from others, is the full interaction between the two parts, which makes it possible to maximize (deterministic) simplification by passing newly created unit or binary clauses around in either of these parts. We show the correctness and termination of Gauss-DPLL under very liberal assumptions.
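The Gauss-elimination half of such a procedure can be illustrated with a generic GF(2) solver (a hedged sketch; the paper's contribution is the tight integration with DPLL, which is not shown here). Each affine clause x1 xor ... xor xk = b is a row; Gauss-Jordan elimination exposes forced unit assignments that a DPLL part could then propagate, or a 0 = 1 row signaling inconsistency.

```python
# Generic Gauss-Jordan elimination over GF(2) for affine (xor) clauses.
# Each row (vars, parity) means: xor of vars equals parity.
# Returns the forced unit assignments, or None if the system is inconsistent.

def gauss_gf2(rows):
    solved = []                              # (pivot, vars, parity), Jordan-reduced
    for vars_, parity in rows:
        vars_ = set(vars_)
        for pivot, pvars, ppar in solved:
            if pivot in vars_:               # eliminate known pivots from this row
                vars_ ^= pvars
                parity ^= ppar
        if not vars_:
            if parity:
                return None                  # derived 0 = 1: inconsistent system
            continue                         # redundant row
        pivot = min(vars_)
        for i, (q, qvars, qpar) in enumerate(solved):
            if pivot in qvars:               # eliminate new pivot from old rows
                solved[i] = (q, qvars ^ vars_, qpar ^ parity)
        solved.append((pivot, vars_, parity))
    return [(p, par) for p, v, par in solved if len(v) == 1]
```

For example, the system {x1 xor x2 = 1, x2 = 0} yields the units x1 = 1 and x2 = 0, while {x1 xor x2 = 1, x2 xor x3 = 1, x1 xor x3 = 1} is detected as inconsistent.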
Clustering at the Phase Transition
 In Proc. of the 14th Nat. Conf. on AI
, 1997
Abstract

Cited by 39 (3 self)
Many problem ensembles exhibit a phase transition that is associated with a large peak in the average cost of solving the problem instances. However, this peak is not necessarily due to a lack of solutions: indeed, the average number of solutions is typically exponentially large. Here, we study this situation within the context of the satisfiability transition in random 3-SAT. We find that a significant subclass of instances emerges as we cross the phase transition. These instances are characterized by having about 85-95% of their variables occurring in unary prime implicates (UPIs), with their remaining variables being subject to few constraints. In such instances the models are not randomly distributed but all lie in a cluster that is exponentially large, yet still admits a simple description. Studying the effect of UPIs on the local search algorithm Wsat shows that these "single-cluster" instances are harder to solve, and we relate their appearance at the phase transition to the peak...
Supermodels and Robustness
 In AAAI/IAAI
, 1998
Abstract

Cited by 39 (4 self)
When search techniques are used to solve a practical problem, the solution produced is often brittle in the sense that small execution difficulties can have an arbitrarily large effect on the viability of the solution. The AI community has responded to this difficulty by investigating the development of "robust problem solvers" that are intended to be proof against this difficulty. We argue that robustness is best cast not as a property of the problem solver, but as a property of the solution. We introduce a new class of models for a logical theory, called supermodels, that captures this idea. Supermodels guarantee that the model in question is robust, and allow us to quantify the degree to which it is so. We investigate the theoretical properties of supermodels, showing that finding supermodels is typically of the same theoretical complexity as finding models. We provide a general way to modify a logical theory so that a model of the modified theory is a supermodel of the original. Ex...
On the Complexity of k-SAT
, 2001
Abstract

Cited by 38 (2 self)
The k-SAT problem is to determine if a given k-CNF has a satisfying assignment. It is a celebrated open question whether solving k-SAT requires exponential time for k ≥ 3. Here, exponential time means 2^(δn) for some δ > 0. In this paper, assuming that k-SAT requires exponential time for k ≥ 3, we show that the complexity of k-SAT increases as k increases. More precisely, for k ≥ 3, define s_k = inf{δ : there exists a 2^(δn) algorithm for solving k-SAT}. Define ETH (Exponential-Time Hypothesis) for k-SAT as follows: for k ≥ 3, s_k > 0. In this paper, we show that s_k is increasing infinitely often assuming ETH for k-SAT. Let s_∞ be the limit of s_k. We will in fact show that s_k ≤ (1 - d/k) s_∞ for some constant d > 0. We prove this result by bringing together the ideas of critical clauses and the Sparsification Lemma to reduce the satisfiability of a k-CNF to the satisfiability of a disjunction of 2^(εn) k'-CNFs in fewer variables, for some k' ≤ k and arbitrarily small ε > 0. We also show that such a disjunction can be computed in time 2^(εn) for arbitrarily small ε > 0.
A Spectral Technique for Random Satisfiable 3CNF Formulas
, 2002
Abstract

Cited by 31 (3 self)
Let I be a random 3-CNF formula generated by choosing a truth assignment φ for variables x_1, ..., x_n uniformly at random and including every clause with i literals set true by φ with probability p_i, independently. We show that for any 0 ≤ η_2, η_3 ≤ 1 there is a constant d_min so that for all d ≥ d_min a spectral algorithm similar to the graph coloring algorithm of [1] will find a satisfying assignment with high probability for p_1 = d/n², p_2 = ...
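The planted-instance model described above can be sketched as a small generator (parameter names are hypothetical; clauses with zero literals set true by φ are never kept, i.e. p_0 = 0, so the instance is satisfiable by construction).

```python
# Toy generator for a planted random 3-CNF: fix a hidden assignment phi,
# then keep each candidate clause with probability p_i, where i is the
# number of its literals that phi satisfies (clauses with i = 0 are skipped).
import itertools
import random

def planted_3cnf(n, p, seed=0):
    """p maps i in {1, 2, 3} to the keep-probability p_i. Returns (phi, clauses)."""
    rng = random.Random(seed)
    phi = {v: rng.choice([True, False]) for v in range(1, n + 1)}
    clauses = []
    for vs in itertools.combinations(range(1, n + 1), 3):
        for signs in itertools.product([1, -1], repeat=3):
            clause = [s * v for s, v in zip(signs, vs)]
            true_lits = sum(phi[abs(l)] == (l > 0) for l in clause)
            if true_lits and rng.random() < p[true_lits]:
                clauses.append(clause)       # keep with probability p_i
    return phi, clauses
```

Every clause that is kept has at least one literal satisfied by φ, so φ is always a model of the generated formula.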
Survey propagation: an algorithm for satisfiability
, 2002
Abstract

Cited by 30 (1 self)
We study the satisfiability of randomly generated formulas formed by M clauses of exactly K literals over N Boolean variables. For a given value of N, the problem is known to be most difficult when α = M/N is close to the experimental threshold α_c separating the region where almost all formulas are SAT from the region where all formulas are UNSAT. Recent results from a statistical physics analysis suggest that the difficulty is related to the existence of a clustering phenomenon of the solutions when α is close to (but smaller than) α_c. We introduce a new type of message-passing algorithm which makes it possible to efficiently find a satisfying assignment of the variables in this difficult region. This algorithm is iterative and composed of two main parts. The first is a message-passing procedure which generalizes the usual methods like Sum-Product or Belief Propagation: it passes messages that may be thought of as surveys over clusters of the ordinary messages. The second part uses the detailed probabilistic information obtained from the surveys in order to fix variables and simplify the problem. Eventually, the simplified problem that remains is solved by a conventional