Measure and conquer: domination – a case study
Proceedings of the 32nd International Colloquium on Automata, Languages and Programming (ICALP 2005), Springer LNCS, 2005
Cited by 48 (20 self)
Davis-Putnam-style exponential-time backtracking algorithms are the most common algorithms used for finding exact solutions of NP-hard problems. The analysis of such recursive algorithms is based on the bounded search tree technique: a measure of the size of the subproblems is defined; this measure is used to lower bound the progress made by the algorithm at each branching step. For the last 30 years the research on exact algorithms has been mainly focused on the design of more and more sophisticated algorithms. However, measures used in the analysis of backtracking algorithms are usually very simple. In this paper we stress that a more careful choice of the measure can lead to significantly better worst-case time analysis. As an example, we consider the minimum dominating set problem. The currently fastest algorithm for this problem has running time O(2^(0.850n)) on n-node graphs. By measuring the progress of the (same) algorithm in a different way, we refine the time bound to O(2^(0.598n)). A good choice of the measure can provide such a (surprisingly big) improvement; this suggests that the running time of many other exponential-time recursive algorithms is largely overestimated because of a "bad" choice of the measure.
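The bounded-search-tree analysis described in this abstract boils down to solving a recurrence: if a branching step reduces the chosen measure by d_1, ..., d_r in its r subproblems, the running time is O(c^k) where c > 1 solves c^(-d_1) + ... + c^(-d_r) = 1. A minimal sketch of this standard computation (an illustrative helper, not code from the paper):

```python
def branching_number(decreases, tol=1e-9):
    """Smallest c > 1 with sum(c**-d for d in decreases) == 1, by bisection.

    `decreases` lists the measure drops in each branch of one branching step.
    """
    lo, hi = 1.0 + 1e-12, 4.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(mid ** -d for d in decreases) > 1:
            lo = mid  # c too small: the characteristic sum is still above 1
        else:
            hi = mid
    return hi

# Standard measure (count vertices): branching vector (1, 1) gives base 2.
print(round(branching_number([1, 1]), 4))   # 2.0
# A measure that certifies more progress in one branch, e.g. (1, 2),
# gives the golden-ratio base ~1.618 -- same algorithm, better bound.
print(round(branching_number([1, 2]), 4))   # 1.618
```

This is exactly the sense in which a "smarter" measure improves the analysis without touching the algorithm: it yields larger measure decreases per branch, hence a smaller root c.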
Deriving Filtering Algorithms from Constraint Checkers
Principles and Practice of Constraint Programming (CP'2004), volume 3258 of LNCS, 2004
Cited by 39 (6 self)
Abstract. This report deals with global constraints for which the set of solutions can be recognized by an extended finite automaton whose size is bounded by a polynomial in n, where n is the number of variables of the corresponding global constraint. By reformulating the automaton as a conjunction of signature and transition constraints we show how to systematically obtain a filtering algorithm. Under some restrictions on the signature and transition constraints this filtering algorithm achieves arc-consistency. An implementation based on some constraints as well as on the meta-programming facilities of SICStus Prolog is available. For a restricted class of automata we provide a filtering algorithm for the relaxed case, where the violation cost is the minimum number of variables to unassign in order to get back to a solution. Keywords: Constraint Programming,
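The starting point of the construction above is an automaton that recognizes the solution set of a global constraint. A minimal sketch of that recognition step, using a toy constraint ("no two consecutive 1s", a hypothetical example not taken from the paper); the paper's contribution is then to decompose such an automaton into signature and transition constraints to obtain filtering:

```python
def automaton_accepts(transitions, start, accepting, word):
    """Run a DFA given as a dict {(state, symbol): next_state} over `word`."""
    state = start
    for symbol in word:
        key = (state, symbol)
        if key not in transitions:
            return False  # missing transition: the constraint is violated
        state = transitions[key]
    return state in accepting

# Toy DFA for "no two consecutive 1s" over the alphabet {0, 1}.
delta = {("a", 0): "a", ("a", 1): "b", ("b", 0): "a"}
print(automaton_accepts(delta, "a", {"a", "b"}, [0, 1, 0, 1]))  # True
print(automaton_accepts(delta, "a", {"a", "b"}, [1, 1, 0]))     # False
```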
Tight lower bounds for certain parameterized NP-hard problems
Information and Computation, 2004
Cited by 37 (5 self)
Based on the framework of parameterized complexity theory, we derive tight lower bounds on the computational complexity for a number of well-known NP-hard problems. We start by proving a general result, namely that the parameterized weighted satisfiability problem on depth-t circuits cannot be solved in time n^(o(k)) m^(O(1)), where n is the circuit input length, m is the circuit size, and k is the parameter, unless the (t − 1)-st level W[t − 1] of the W-hierarchy collapses to FPT. By refining this technique, we prove that a group of parameterized NP-hard problems, including weighted sat, hitting set, set cover, and feature set, cannot be solved in time n^(o(k)) m^(O(1)), where n is the size of the universal set from which the k elements are to be selected and m is the instance size, unless the first level W[1] of the W-hierarchy collapses to FPT. We also prove that another group of parameterized problems, which includes weighted q-sat (for any fixed q ≥ 2), clique, independent set, and dominating set, cannot be solved in time n^(o(k)) unless all search problems in the syntactic class SNP, introduced by Papadimitriou and Yannakakis, are solvable in subexponential time. Note that all these parameterized problems have trivial algorithms of running time either n^k m^(O(1)) or O(n^k).
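The "trivial n^k algorithm" these lower bounds are measured against is simply exhaustive search over k-subsets. A minimal sketch for k-clique (illustrative code, not from the paper); the result above says that, under the stated complexity assumptions, nothing with an n^(o(k)) exponent can replace it:

```python
from itertools import combinations

def has_k_clique(adj, k):
    """Brute-force k-clique test in O(n^k) subsets.

    adj: dict mapping each vertex to the set of its neighbours.
    """
    vertices = list(adj)
    for subset in combinations(vertices, k):
        # A clique requires every pair in the subset to be adjacent.
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            return True
    return False

# A 4-cycle contains a 2-clique (an edge) but no triangle.
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(has_k_clique(cycle4, 2))  # True
print(has_k_clique(cycle4, 3))  # False
```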
A new algorithm for optimal constraint satisfaction and its implications
Alexander D. Scott, Mathematical Institute, University of Oxford, 2004
Cited by 33 (1 self)
We present a novel method for exactly solving (in fact, counting solutions to) general constraint satisfaction optimization with at most two variables per constraint (e.g. MAX-2-CSP and MIN-2-CSP), which gives the first exponential improvement over the trivial algorithm; more precisely, it is a constant factor improvement in the base of the runtime exponent. In the case where constraints have arbitrary weights, there is a (1 + ε)-approximation with roughly the same runtime, modulo polynomial factors. Our algorithm may be used to count the number of optima in MAX-2-SAT and MAX-CUT instances in O(m^3 · 2^(ωn/3)) time, where ω < 2.376 is the matrix product exponent over a ring. This is the first known algorithm solving MAX-2-SAT and MAX-CUT in provably less than c^n steps in the worst case, for some c < 2; similar new results are obtained for related problems. Our main construction may also be used to show that any improvement in the runtime exponent of either k-clique solution (even when k = 3) or matrix multiplication over GF(2) would improve the runtime exponent for solving 2-CSP optimization. As a corollary, we prove that an n^(o(k))-time k-clique algorithm implies SNP ⊆ DTIME[2^(o(n))], for any k(n) ∈ o(√n / log n). Further extensions of our technique yield connections between the complexity of some (polynomial-time) high-dimensional geometry problems and that of some general NP-hard problems. For example, if there are sufficiently faster algorithms for computing the diameter of n points in ℓ_1, then there is a (2 − ε)^n algorithm for MAX-LIN. Such results may be construed as either lower bounds on these high-dimensional problems, or hope that better algorithms exist for more general NP-hard problems.
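The k-clique / matrix-multiplication connection exploited above is easiest to see for k = 3: the number of triangles in a graph with adjacency matrix A is trace(A^3) / 6, so any fast matrix product counts triangles fast. A minimal sketch (naive cubic multiplication for clarity; the O(2^(ωn/3)) bound comes from substituting a fast matrix product):

```python
def mat_mul(A, B):
    """Naive O(n^3) product of two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def triangle_count(A):
    """trace(A^3) / 6 for a symmetric 0/1 adjacency matrix A.

    Each triangle is counted six times on the diagonal of A^3
    (3 starting vertices x 2 orientations).
    """
    A3 = mat_mul(mat_mul(A, A), A)
    return sum(A3[i][i] for i in range(len(A))) // 6

# K4 (complete graph on 4 vertices) contains C(4,3) = 4 triangles.
K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
print(triangle_count(K4))  # 4
```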
A measure & conquer approach for the analysis of exact algorithms
, 2007
Cited by 30 (7 self)
For more than 40 years Branch & Reduce exponential-time backtracking algorithms have been among the most common tools used for finding exact solutions of NP-hard problems. Despite that, the way to analyze such recursive algorithms is still far from producing tight worst-case running time bounds. Motivated by this, we use an approach that we call "Measure & Conquer" as an attempt to step beyond such limitations. The approach is based on the careful design of a non-standard measure of the subproblem size; this measure is then used to lower bound the progress made by the algorithm at each branching step. The idea is that a smarter measure may capture behaviors of the algorithm that a standard measure might not be able to exploit, and hence lead to a significantly better worst-case time analysis. In order to show the potentialities of Measure & Conquer, we consider two well-studied NP-hard problems: minimum dominating set and maximum independent set. For the first problem, we consider the current best algorithm, and prove (thanks to a better measure) a much tighter running time bound for it. For the second problem, we describe a new, simple algorithm, and show that its running time is competitive with the current best time bounds, achieved with far more complicated algorithms (and standard analysis). Our examples …
Measure and Conquer: A Simple O(2^(0.288n)) Independent Set Algorithm
Cited by 28 (2 self)
For more than 30 years Davis-Putnam-style exponential-time backtracking algorithms have been the most common tools used for finding exact solutions of NP-hard problems. Despite that, the way to analyze such recursive algorithms is still far from producing tight worst-case running time bounds. The "Measure and Conquer" approach is one of the recent attempts to step beyond such limitations. The approach is based on the choice of the measure of the subproblems recursively generated by the algorithm considered; this measure is used to lower bound the progress made by the algorithm at each branching step. A good choice of the measure can lead to a significantly better worst-case time analysis. In this paper we apply "Measure and Conquer" to the analysis of a very simple backtracking algorithm solving the well-studied maximum independent set problem. The result of the analysis is striking: the running time of the algorithm is O(2^(0.288n)), which is competitive with the current best time bounds obtained with far more complicated algorithms (and naive analysis). Our example shows that a good choice of the measure, made in the very first stages of exact algorithm design, can have a tremendous impact on the running time bounds achievable.
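A "very simple backtracking algorithm" for maximum independent set typically looks like the generic textbook sketch below (not necessarily the paper's exact algorithm): branch on a maximum-degree vertex v, either discarding v or taking v and deleting its closed neighbourhood N[v], with a reduction rule for degree ≤ 1. The Measure & Conquer contribution is entirely in how the recursion is charged, not in the code:

```python
def remove(adj, vs):
    """Return the subgraph of adj (dict vertex -> neighbour set) minus vs."""
    return {u: adj[u] - vs for u in adj if u not in vs}

def mis_size(adj):
    """Branch & Reduce computation of the maximum independent set size."""
    if not adj:
        return 0
    v = max(adj, key=lambda u: len(adj[u]))  # a maximum-degree vertex
    if len(adj[v]) <= 1:
        # Reduction: if the max degree is <= 1, v belongs to some MIS.
        return 1 + mis_size(remove(adj, {v} | adj[v]))
    # Branch: either v is out, or v is in (and all of N[v] is out).
    return max(mis_size(remove(adj, {v})),
               1 + mis_size(remove(adj, {v} | adj[v])))

# A 5-cycle has a maximum independent set of size 2.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(mis_size(cycle5))  # 2
```

With the standard measure (number of remaining vertices) the two branches give a weak branching vector; weighting vertices by degree lets the same recursion certify the O(2^(0.288n)) bound claimed above.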
Exact algorithms for treewidth and minimum fill-in
In Proceedings of the 31st International Colloquium on Automata, Languages and Programming (ICALP 2004), Lecture Notes in Comput. Sci., 2004
Cited by 26 (15 self)
We show that the treewidth and the minimum fill-in of an n-vertex graph can be computed in time O(1.8899^n). Our results are based on combinatorial proofs that an n-vertex graph has O(1.7087^n) minimal separators and O(1.8135^n) potential maximal cliques. We also show that for the class of AT-free graphs the running time of our algorithms can be reduced to O(1.4142^n).
Space and time complexity of exact algorithms: some open problems (invited talk)
Cited by 25 (0 self)
Abstract. We discuss open questions around worst-case time and space bounds for NP-hard problems. We are interested in exponential-time solutions for these problems with a relatively good worst-case behavior.