Results 1-10 of 63
Which Problems Have Strongly Exponential Complexity?
 Journal of Computer and System Sciences
, 1998
Abstract

Cited by 120 (5 self)
For several NP-complete problems, there has been a progression of better but still exponential algorithms. In this paper, we address the relative likelihood of subexponential algorithms for these problems. We introduce a generalized reduction which we call Sub-Exponential Reduction Family (SERF) that preserves subexponential complexity. We show that Circuit-SAT is SERF-complete for all NP search problems, and that for any fixed k, k-SAT, k-Colorability, k-Set Cover, Independent Set, Clique, and Vertex Cover are SERF-complete for the class SNP of search problems expressible by second-order existential formulas whose first-order part is universal. In particular, subexponential complexity for any one of the above problems implies the same for all others. We also look at the issue of proving strongly exponential lower bounds for AC^0; that is, bounds of the form 2^{Ω(n)}. This problem is even open for depth-3 circuits. In fact, such a bound for depth-3 circuits with even l...
UnitWalk: A new SAT solver that uses local search guided by unit clause elimination
, 2002
Abstract

Cited by 61 (1 self)
In this paper we present a new randomized algorithm for SAT, i.e., the satisfiability problem for Boolean formulas in conjunctive normal form. Despite its simplicity, this algorithm performs well on many common benchmarks ranging from graph coloring problems to microprocessor verification.
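The unit clause elimination that guides UnitWalk can be sketched in isolation. The routine below is an illustrative, generic unit-propagation pass over DIMACS-style clauses (signed integers, where -v means the negation of variable v); it is not the authors' implementation:

```python
def unit_propagate(clauses):
    """Repeatedly assign variables forced by unit clauses.

    Clauses are lists of nonzero ints (DIMACS style: -v means "not v").
    Returns (assignment dict, simplified clauses), or (None, None)
    if propagation derives a contradiction.
    """
    assignment = {}
    clauses = [list(c) for c in clauses]
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            return assignment, clauses
        lit = units[0]
        var, val = abs(lit), lit > 0
        if assignment.get(var, val) != val:
            return None, None          # conflicting unit clauses
        assignment[var] = val
        new_clauses = []
        for c in clauses:
            if lit in c:
                continue               # clause satisfied, drop it
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None, None      # empty clause: contradiction
            new_clauses.append(reduced)
        clauses = new_clauses
```

On the formula (x1) ∧ (¬x1 ∨ x2) ∧ (¬x2 ∨ x3), propagation alone fixes all three variables; UnitWalk interleaves this kind of simplification with random flips.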
Measure and conquer: domination - a case study
 PROCEEDINGS OF THE 32ND INTERNATIONAL COLLOQUIUM ON AUTOMATA, LANGUAGES AND PROGRAMMING (ICALP 2005), SPRINGER LNCS
, 2005
Abstract

Cited by 47 (20 self)
Davis-Putnam-style exponential-time backtracking algorithms are the most common algorithms used for finding exact solutions of NP-hard problems. The analysis of such recursive algorithms is based on the bounded search tree technique: a measure of the size of the subproblems is defined; this measure is used to lower bound the progress made by the algorithm at each branching step. For the last 30 years the research on exact algorithms has been mainly focused on the design of more and more sophisticated algorithms. However, measures used in the analysis of backtracking algorithms are usually very simple. In this paper we stress that a more careful choice of the measure can lead to significantly better worst-case time analysis. As an example, we consider the minimum dominating set problem. The currently fastest algorithm for this problem has running time O(2^{0.850n}) on n-node graphs. By measuring the progress of the (same) algorithm in a different way, we refine the time bound to O(2^{0.598n}). A good choice of the measure can provide such a (surprisingly big) improvement; this suggests that the running time of many other exponential-time recursive algorithms is largely overestimated because of a “bad” choice of the measure.
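The bounded-search-tree analysis described above boils down to solving a recurrence over the chosen measure. As a minimal sketch (the helper name and bisection bounds are ours, not the paper's): if a branching rule replaces a problem of measure n by subproblems of measures n - d_1, ..., n - d_r, the search tree has size O(x^n), where x is the unique root of Σ_i x^{-d_i} = 1, computable by bisection:

```python
def branching_factor(decreases, lo=1.0, hi=4.0, iters=100):
    """Unique x >= 1 solving sum_i x**(-d_i) = 1, found by bisection.

    `decreases` lists how much the measure drops in each branch;
    the returned x is the base of the O(x^n) search-tree bound.
    (Helper name and the [lo, hi] bracket are illustrative choices.)
    """
    f = lambda x: sum(x ** -d for d in decreases)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > 1:      # x too small: the tree still grows faster
            lo = mid
        else:
            hi = mid
    return hi
```

For a rule that branches into two subproblems each one unit smaller, this yields 2; for decreases (1, 2) it yields the golden ratio, about 1.618. "Measure and conquer" keeps the algorithm fixed and redesigns the measure so that these decreases, and hence the base, improve.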
Improved Algorithms for 3-Coloring, 3-Edge-Coloring, and Constraint Satisfaction
, 2001
Abstract

Cited by 44 (3 self)
We consider worst-case time bounds for NP-complete problems including 3-SAT, 3-coloring, 3-edge-coloring, and 3-list-coloring. Our algorithms are based on a constraint satisfaction (CSP) formulation of these problems; 3-SAT is equivalent to (2,3)-CSP while the other problems above are special cases of (3,2)-CSP. We give a fast algorithm for (3,2)-CSP and use it to improve the time bounds for solving the other problems listed above. Our techniques involve a mixture of Davis-Putnam-style backtracking with more sophisticated matching and network-flow-based ideas.
Upper Bounds for Vertex Cover Further Improved
Abstract

Cited by 43 (17 self)
The problem instance of Vertex Cover consists of an undirected graph G = (V, E) and a positive integer k; the question is whether there exists a subset C ⊆ V of vertices with |C| ≤ k such that each edge in E has at least one of its endpoints in C. We improve two recent worst-case upper bounds for Vertex Cover. First, Balasubramanian et al. showed that Vertex Cover can be solved in time O(kn + 1.32472^k k^2), where n is the number of vertices in G. Afterwards, Downey et al. improved this to O(kn + 1.31951^k k^2). Bringing the exponential base significantly below 1.3, we present the new upper bound O(kn + 1.29175^k k^2).

1 Introduction
Vertex Cover is a problem of central importance in computer science:
- It was among the first NP-complete problems [7].
- There have been numerous efforts to design efficient approximation algorithms [3], but it is also known to be hard to approximate [1].
- It is of central importance in parameterized complexity theory and has one ...
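The bounds above refine the classic bounded search tree for Vertex Cover. A minimal sketch of that baseline (not the algorithm of the paper): for any edge (u, v), a cover must contain u or v, so branching on the two choices decides k-Vertex Cover in O(2^k · poly) time, which the cited refinements push down to bases like 1.29175:

```python
def vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover
    of size at most k.

    Textbook branching: pick any uncovered edge (u, v); either u or v
    must be in the cover, so try both, each time deleting the covered
    edges and spending one unit of the budget k.  Search tree size
    O(2^k) -- the crude baseline the improved bounds start from.
    """
    if not edges:
        return True                 # every edge is covered
    if k == 0:
        return False                # edges remain but budget exhausted
    u, v = edges[0]
    rest_u = [e for e in edges if u not in e]   # take u into the cover
    rest_v = [e for e in edges if v not in e]   # take v into the cover
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)
```

A triangle needs two vertices in any cover, so the sketch answers False for k = 1 and True for k = 2.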
Improved upper bounds for 3-SAT
 In 15th ACM-SIAM Symposium on Discrete Algorithms (SODA 2004). ACM and SIAM
Abstract

Cited by 38 (1 self)
The CNF Satisfiability problem is to determine, given a CNF formula F, whether or not there exists a satisfying assignment for F. If each clause of F contains at most k literals, then F is called a k-CNF formula and the problem is called k-SAT. For small k's, especially for k = 3, there exist many algorithms which run significantly faster than the trivial 2^n bound. The following list summarizes those algorithms, where a constant c means that the algorithm runs in time O(c^n). Roughly speaking, most algorithms are based on Davis-Putnam. [Sch99] is the first local search algorithm which gives a guaranteed performance for general instances, and [DGH+02], [HSSW02], [BS03] and [Rol03] follow up on Schöning's approach.

  3-SAT   4-SAT   5-SAT   6-SAT   type   ref.
  1.782   1.835   1.867   1.888   det.   [PPZ97]
On the Complexity of k-SAT
, 2001
Abstract

Cited by 37 (2 self)
The k-SAT problem is to determine if a given k-CNF has a satisfying assignment. It is a celebrated open question as to whether it requires exponential time to solve k-SAT for k ≥ 3. Here exponential time means 2^{δn} for some δ > 0. In this paper, assuming that, for k ≥ 3, k-SAT requires exponential time complexity, we show that the complexity of k-SAT increases as k increases. More precisely, for k ≥ 3, define s_k = inf{δ : there exists a 2^{δn} algorithm for solving k-SAT}. Define ETH (Exponential-Time Hypothesis) for k-SAT as follows: for k ≥ 3, s_k > 0. In this paper, we show that s_k is increasing infinitely often assuming ETH for k-SAT. Let s_∞ be the limit of s_k. We will in fact show that s_k ≤ (1 − d/k) s_∞ for some constant d > 0. We prove this result by bringing together the ideas of critical clauses and the Sparsification Lemma to reduce the satisfiability of a k-CNF to the satisfiability of a disjunction of 2^{εn} k′-CNFs in fewer variables, for some k′ ≤ k and arbitrarily small ε > 0. We also show that such a disjunction can be computed in time 2^{εn} for arbitrarily small ε > 0.
New WorstCase Upper Bounds for SAT
 Journal of Automated Reasoning
, 2000
Abstract

Cited by 35 (8 self)
In 1980 Monien and Speckenmeyer proved that satisfiability of a propositional formula consisting of K clauses (of arbitrary length) can be checked in time of the order 2^{K/3}. Recently Kullmann and Luckhardt proved the worst-case upper bound 2^{L/9}, where L is the length of the input formula. The algorithms leading to these bounds are based on the splitting method which goes back to the Davis-Putnam procedure. Transformation rules (pure literal elimination, unit propagation, etc.) constitute a substantial part of this method. In this paper we present a new transformation rule and two algorithms using this rule. We prove that these algorithms have the worst-case upper bounds 2^{0.30897K} and 2^{0.10299L}, respectively.
A new algorithm for optimal constraint satisfaction and its implications
, 2004
Abstract

Cited by 30 (1 self)
We present a novel method for exactly solving (in fact, counting solutions to) general constraint satisfaction optimization with at most two variables per constraint (e.g. MAX-2-CSP and MIN-2-CSP), which gives the first exponential improvement over the trivial algorithm; more precisely, it is a constant factor improvement in the base of the runtime exponent. In the case where constraints have arbitrary weights, there is a (1 + ε)-approximation with roughly the same runtime, modulo polynomial factors. Our algorithm may be used to count the number of optima in MAX-2-SAT and MAX-CUT instances in O(m^3 · 2^{ωn/3}) time, where ω < 2.376 is the matrix product exponent over a ring. This is the first known algorithm solving MAX-2-SAT and MAX-CUT in provably less than c^n steps in the worst case, for some c < 2; similar new results are obtained for related problems. Our main construction may also be used to show that any improvement in the runtime exponent of either k-clique solution (even when k = 3) or matrix multiplication over GF(2) would improve the runtime exponent for solving 2-CSP optimization. As a corollary, we prove that an n^{o(k)}-time k-clique algorithm implies SNP ⊆ DTIME[2^{o(n)}], for any k(n) ∈ o(√n / log n). Further extensions of our technique yield connections between the complexity of some (polynomial-time) high-dimensional geometry problems and that of some general NP-hard problems. For example, if there are sufficiently faster algorithms for computing the diameter of n points in ℓ_1, then there is a (2 − ε)^n algorithm for MAX-LIN. Such results may be construed as either lower bounds on these high-dimensional problems, or hope that better algorithms exist for more general NP-hard problems.
A probabilistic 3-SAT algorithm further improved
, 2002
Abstract

Cited by 30 (2 self)
In [Sch99], Schöning proposed a simple yet efficient randomized algorithm for solving the k-SAT problem. In the case of 3-SAT, the algorithm has an expected running time of poly(n) · (4/3)^n = O(1.3334^n) when given a formula F on n variables. This was, until now, the best known running time for an algorithm solving 3-SAT. In this paper, we describe an algorithm which improves upon this time bound by combining an improved version of the above randomized algorithm with other randomized algorithms. Our new expected time bound for 3-SAT is O(1.3302^n).
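Schöning's random-walk procedure that this line of work builds on can be sketched roughly as follows; the restart count and flip budget here are illustrative choices, not the tuned parameters of the papers:

```python
import random

def schoening_walk(clauses, n, tries=200, seed=0):
    """Randomized 3-SAT in the spirit of Schoening's algorithm (a sketch).

    Start each try from a uniformly random assignment; while some clause
    is unsatisfied, pick a violated clause and flip one of its variables
    at random.  Each try walks for about 3n flips; the analysis bounds
    the expected number of tries by roughly (4/3)^n, which is the source
    of the poly(n) * (4/3)^n expected running time quoted above.
    Clauses use DIMACS-style signed-integer literals; returns a
    satisfying assignment dict or None if every try fails.
    """
    rng = random.Random(seed)

    def satisfied(c, a):
        return any((lit > 0) == a[abs(lit)] for lit in c)

    for _ in range(tries):
        a = {v: rng.random() < 0.5 for v in range(1, n + 1)}
        for _ in range(3 * n + 1):
            unsat = [c for c in clauses if not satisfied(c, a)]
            if not unsat:
                return a
            lit = rng.choice(rng.choice(unsat))   # variable of a violated clause
            a[abs(lit)] = not a[abs(lit)]
    return None
```

The improvements surveyed in this entry keep this walk but change how the starting assignments are chosen and combine it with other randomized procedures.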