Results 1 - 7 of 7
New methods for 3-SAT decision and worst-case analysis
Theoretical Computer Science, 1999
Abstract

Cited by 66 (12 self)
We prove the worst-case upper bound 1.5045^n for the time complexity of 3-SAT decision, where n is the number of variables in the input formula, introducing new methods for the analysis as well as new algorithmic techniques. We add new 2- and 3-clauses, called "blocked clauses", generalizing the extension rule of "Extended Resolution". Our methods for estimating the size of trees lead to a refined measure of formula complexity for 3-clause-sets and can also be applied to arbitrary trees. Keywords: 3-SAT, worst-case upper bounds, analysis of algorithms, Extended Resolution, blocked clauses, generalized autarkness. 1 Introduction In this paper we study the exponential part of the time complexity of 3-SAT decision and prove the worst-case upper bound 1.5044...^n, where n is the number of variables in the input formula, using new algorithmic methods as well as new methods for the analysis. These methods also deepen the already existing approaches in a systematic manner. The following results...
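The "blocked clause" condition mentioned in this abstract has a simple syntactic form: a clause C is blocked on one of its literals l (with respect to a clause set F) when every resolvent of C with a clause of F containing the complement of l is a tautology. A minimal sketch in Python, where the clause representation and function names are my own illustration, not from the paper:

```python
def is_tautology(clause):
    # a clause (set of integer literals) is a tautology if it contains
    # some literal together with its negation
    return any(-lit in clause for lit in clause)

def is_blocked(clause, lit, formula):
    """Check whether `clause` is blocked on literal `lit` w.r.t. `formula`.

    `clause` is a set of non-zero integers (DIMACS-style literals),
    `formula` an iterable of such sets.  `clause` is blocked on `lit`
    if every resolvent with a clause containing -lit is a tautology.
    """
    assert lit in clause
    for other in formula:
        if -lit in other:
            resolvent = (clause - {lit}) | (other - {-lit})
            if not is_tautology(resolvent):
                return False
    return True

# Example: F = {x1 v x2, -x1 v x2}; the clause (x1 v -x2) is blocked
# on x1, because its only resolvent on x1 is (-x2 v x2), a tautology.
F = [{1, 2}, {-1, 2}]
print(is_blocked({1, -2}, 1, F))   # True
```

Adding blocked clauses preserves satisfiability, which is why the paper can use them as extension steps in its branching algorithm.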
Algorithms for SAT/TAUT decision based on various measures
Information and Computation, 1999
Abstract

Cited by 11 (8 self)
We investigate algorithms deciding propositional tautologies for DNF and coNP-complete subclasses given by restrictions on the number of occurrences of literals. Especially polynomial use of resolution for reductions, in combination with a new combinatorial principle called the "Generalized Sign Principle", is studied. Upper bounds on time complexity are given with exponential part 2^{α·μ(F)}, where the measure μ(F) for a clause set F is either the number n(F) of variables, the number ℓ(F) of literal occurrences or the number k(F) of clauses. α is called a "power coefficient" for the class of formulas under consideration w.r.t. measure μ. Power coefficients are derived with the help of a method estimating the size of trees, which is also used to find "good" branching rules. Under natural conditions power coefficients α, β, γ for n, k, ℓ respectively fulfill α ≥ β ≥ γ. We obtain the following power coefficients: 0.1112 for DNF w.r.t. ℓ, and 0.3334 for DNF w.r.t. k. These result...
Worst-Case Study of Local Search for MAX-k-SAT
Discrete Applied Mathematics, 2003
Abstract

Cited by 7 (0 self)
During the past three years there was a considerable growth in the number of algorithms solving MAX-SAT and MAX-2-SAT in worst-case time of the order c^K, where c < 2 is a constant and K is the number of clauses of the input formula. However, similar bounds w.r.t. the number of variables instead of the number of clauses are not known.
Algorithms for k-SAT based on covering codes
2000
Abstract

Cited by 6 (0 self)
We show that for any k and ε, satisfiability of propositional formulas in k-CNF can be checked by a deterministic algorithm running in time poly(n) · (2k/(k+1) + ε)^n, where n is the number of variables in the input formula. This is the best known worst-case upper bound for deterministic k-SAT algorithms. Our algorithm can be viewed as a derandomized version of Schöning's recent algorithm [17], whose bound poly(n) · (2(k−1)/k)^n is the best known bound for probabilistic 3-SAT algorithms. The key point of our derandomization is the use of covering codes. Like Schöning's algorithm, our algorithm is quite simple. We show how to obtain slightly improved bounds by using a more thorough (but more intricate) version of the algorithm. For example, for 3-SAT the modified algorithm gives the bound poly(n) · 1.490^n.
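The probabilistic algorithm of Schöning that this paper derandomizes is itself very short: restart from a uniformly random assignment, then perform about 3n random local-repair steps. A hedged Python sketch of that random walk (the retry count and names are illustrative; this is the randomized version, not the covering-code derandomization):

```python
import random

def schoening_walk(formula, n, tries=500):
    """Random-walk k-SAT search in the spirit of Schöning's algorithm
    (a simplified illustrative sketch).

    `formula` is a list of clauses, each a list of DIMACS-style literals
    over variables 1..n.  Returns a satisfying assignment (dict mapping
    variable -> bool) or None if every try fails.
    """
    for _ in range(tries):
        # start from a uniformly random assignment
        a = {v: random.choice([False, True]) for v in range(1, n + 1)}
        for _ in range(3 * n):          # 3n local-repair steps
            unsat = [c for c in formula
                     if not any(a[abs(l)] == (l > 0) for l in c)]
            if not unsat:
                return a
            # flip a uniformly random variable of some unsatisfied clause
            lit = random.choice(random.choice(unsat))
            a[abs(lit)] = not a[abs(lit)]
    return None

# Tiny example: (x1 v x2) & (-x1 v x2) is satisfied whenever x2 = True.
model = schoening_walk([[1, 2], [-1, 2]], n=2)
```

The derandomization replaces the random restart points by the codewords of a covering code, so that every possible assignment is within small Hamming distance of some starting point.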
Worst-case time bounds for MAX-k-SAT w.r.t. the number of variables using local search
2000
Abstract

Cited by 5 (4 self)
During the past three years there was an explosion of algorithms solving MAX-SAT and MAX-2-SAT in worst-case time of the order c^K, where c < 2 is a constant and K is the number of clauses in the input formula. Such bounds w.r.t. the number of variables instead of the number of clauses are not known. Also, it was proved that approximate solutions for these problems (even beyond inapproximability ratios) can be obtained faster than exact solutions. However, the corresponding exponents still depended on the number of clauses in the input formula. In this paper we give a randomized (1 − ε)-approximation algorithm for MAX-k-SAT. This algorithm runs in time of the order c_{k,ε}^N, where N is the number of variables and c_{k,ε} < 2 is a constant depending on k and ε.
Density Condensation of Boolean Formulas
Sixth International Conference on the Theory and Applications of Satisfiability Testing, Santa Margherita Ligure, 2003
Abstract

Cited by 1 (0 self)
Background and Motivation. Conventional complexity theory gives us only asymptotic information and does not give us any information about the complexity of each individual instance. It is also true, however, that many of us feel that the complexity differs greatly from one instance to another. Instance complexity, denoted by ic(x : A), has thus been introduced [7, 9] as a measure of the complexity of an individual instance x for a decision problem A. ic(x : A) is defined as the length of the shortest program that gives the correct answer to the query "x ∈ A?" and that does not make any mistake for other inputs (although "don't know" answers are permitted). It is closely related to (at least, upper bounded by) Kolmogorov complexity, which is the length of the shortest program that computes x from the empty input. Under this new measure, each element in A can have a different instance complexity; some are easy and some are hard. Now it is obviously desirable if we can convert a hard instance into an easy one by reducing its instance complexity. More concretely, this can be done by designing a mapping (algorithm) δ such that for each instance x, (i) δ(x) ∈ A iff x ∈ A and (ii) ic(δ(x) : A) < ic(x : A). Note that determining the answer (yes/no) of an instance x is a special case of a complexity reduction, i.e., the complete reduction which converts x into a trivial instance whose answer is instantly known. Thus a...
Hard Formulas For SAT Local Search Algorithms
, 1998
Abstract
In 1992 B. Selman, H. Levesque and D. Mitchell proposed GSAT, a greedy local search algorithm for the Boolean satisfiability problem. Good performance of this algorithm and its modifications has been demonstrated by many experimental results. In 1993 I. P. Gent and T. Walsh proposed CSAT, a version of GSAT that almost does not use greediness. It has recently been proved that CSAT can find a satisfying assignment for a restricted class of formulas in time c^n, where c < 2 is a constant. In this paper we prove a lower bound of the order 2^n for GSAT and CSAT. Namely, we construct formulas F_n of n variables, such that GSAT or CSAT finds a satisfying assignment for F_n only if this assignment or one of its n neighbours is chosen as the initial assignment for the search. 1 Introduction In the past six years there has been an increased interest in local search algorithms for the Boolean satisfiability problem. Though this problem is NP-complete (see e.g. [4]), B. Selman, H. Leve...
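For readers unfamiliar with GSAT, the basic loop is: pick a random assignment, then greedily flip whichever variable most increases the number of satisfied clauses, restarting after a fixed number of flips. A minimal illustrative sketch, assuming a DIMACS-style clause encoding; the parameter defaults are mine and this omits the refinements studied in the experimental literature:

```python
import random

def gsat(formula, n, max_flips=100, max_tries=50):
    """GSAT-style greedy local search (illustrative sketch).

    `formula` is a list of clauses, each a list of DIMACS-style
    literals over variables 1..n.  Returns a satisfying assignment
    (dict mapping variable -> bool) or None if every try is exhausted.
    """
    def num_sat(a):
        # count clauses satisfied under assignment `a`
        return sum(any(a[abs(l)] == (l > 0) for l in c) for c in formula)

    for _ in range(max_tries):
        a = {v: random.choice([False, True]) for v in range(1, n + 1)}
        for _ in range(max_flips):
            if num_sat(a) == len(formula):
                return a
            # score each candidate flip by the number of clauses
            # satisfied afterwards, then take a best flip (ties at random)
            def score(v):
                a[v] = not a[v]
                s = num_sat(a)
                a[v] = not a[v]
                return s
            scores = {v: score(v) for v in a}
            best = max(scores.values())
            v = random.choice([u for u, s in scores.items() if s == best])
            a[v] = not a[v]
    return None

# (x1 v x2) & (-x1 v x2) & (x1 v -x2) is satisfied only by x1 = x2 = True.
model = gsat([[1, 2], [-1, 2], [1, -2]], n=2)
```

The hard formulas F_n constructed in this paper defeat exactly this greedy step: unless the search starts at (or next to) the satisfying assignment, the flip scores steer the walk away from it.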