Exact algorithms for NP-hard problems: A survey
Combinatorial Optimization - Eureka, You Shrink!, LNCS
Cited by 118 (3 self)
Abstract. We discuss fast exponential-time solutions for NP-complete problems. We survey known results and approaches, we provide pointers to the literature, and we discuss several open problems in this area. The list of discussed NP-complete problems includes the travelling salesman problem, scheduling under precedence constraints, satisfiability, knapsack, graph coloring, independent sets in graphs, bandwidth of a graph, and many more.
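Among the approaches this survey covers for the travelling salesman problem is dynamic programming over vertex subsets. As an illustrative sketch (ours, not code from the paper), the classical Held-Karp recurrence solves TSP exactly in O(2^n · n^2) time:

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP via Held-Karp dynamic programming, O(2^n * n^2) time.
    dist[i][j] is the cost of travelling from city i to city j."""
    n = len(dist)
    # C[(S, j)] = cheapest cost of a path that starts at city 0, visits
    # every city in frozenset S exactly once, and ends at city j (j in S).
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j] for k in S - {j})
    full = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(C[(full, j)] + dist[j][0] for j in full)
```

The 2^n subsets dominate the running time, which is already far better than the (n−1)! naive enumeration of tours.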
Tight lower bounds for certain parameterized NP-hard problems
Proc. 19th Annual IEEE Conference on Computational Complexity (CCC’04)
, 2004
Cited by 41 (6 self)
Based on the framework of parameterized complexity theory, we derive tight lower bounds on the computational complexity for a number of well-known NP-hard problems. We start by proving a general result, namely that the parameterized weighted satisfiability problem on depth-t circuits cannot be solved in time n^{o(k)} · poly(m), where n is the circuit input length, m is the circuit size, and k is the parameter, unless the (t − 1)-st level W[t − 1] of the W-hierarchy collapses to FPT. By refining this technique, we prove that a group of parameterized NP-hard problems, including weighted sat, dominating set, hitting set, set cover, and feature set, cannot be solved in time n^{o(k)} · poly(m), where n is the size of the universal set from which the k elements are to be selected and m is the instance size, unless the first level W[1] of the W-hierarchy collapses to FPT. We also prove that another group of parameterized problems, which includes weighted q-sat (for any fixed q ≥ 2), clique, and independent set, cannot be solved in time n^{o(k)} unless all search problems in the syntactic class SNP, introduced by Papadimitriou and Yannakakis, are solvable in subexponential time. Note that all these parameterized problems have trivial algorithms of running time either n^k · poly(m) or O(n^k).
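The "trivial algorithms" the abstract closes with are plain enumeration. For hitting set, for instance, one tries every size-k subset of the universe and checks it against each set, giving n^k · poly(m) time. A minimal sketch (function name and input format are ours):

```python
from itertools import combinations

def hitting_set_brute_force(universe, sets, k):
    """Trivial n^k * poly(m) algorithm: try every size-k subset of the
    universe and test whether it intersects (hits) every set in the
    family. Returns a hitting set if one exists, else None."""
    for candidate in combinations(universe, k):
        c = set(candidate)
        if all(c & s for s in sets):
            return c
    return None
```

The lower bound discussed above says that, up to the W-hierarchy assumption, the n^{o(k)} part of this exponent cannot be improved.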
A new algorithm for optimal constraint satisfaction and its implications
Alexander D. Scott, Mathematical Institute, University of Oxford
, 2004
Cited by 34 (1 self)
We present a novel method for exactly solving (in fact, counting solutions to) general constraint satisfaction optimization with at most two variables per constraint (e.g. MAX-2-CSP and MIN-2-CSP), which gives the first exponential improvement over the trivial algorithm; more precisely, it is a constant-factor improvement in the base of the runtime exponent. In the case where constraints have arbitrary weights, there is a (1 + ε)-approximation with roughly the same runtime, modulo polynomial factors. Our algorithm may be used to count the number of optima in MAX-2-SAT and MAX-CUT instances in O(m^3 · 2^{ωn/3}) time, where ω < 2.376 is the matrix product exponent over a ring. This is the first known algorithm solving MAX-2-SAT and MAX-CUT in provably less than c^n steps in the worst case, for some c < 2; similar new results are obtained for related problems. Our main construction may also be used to show that any improvement in the runtime exponent of either k-clique solution (even when k = 3) or matrix multiplication over GF(2) would improve the runtime exponent for solving 2-CSP optimization. As a corollary, we prove that an n^{o(k)}-time k-clique algorithm implies SNP ⊆ DTIME[2^{o(n)}], for any k(n) ∈ o(√n / log n). Further extensions of our technique yield connections between the complexity of some (polynomial-time) high-dimensional geometry problems and that of some general NP-hard problems. For example, if there are sufficiently faster algorithms for computing the diameter of n points in ℓ_1, then there is a (2 − ε)^n algorithm for MAX-LIN. Such results may be construed as either lower bounds on these high-dimensional problems, or hope that better algorithms exist for more general NP-hard problems.
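The 2^{ωn/3} bound comes from reducing the optimization problem to a matrix-multiplication task on a graph built over partial assignments. The basic link between matrix products and subgraph detection can be illustrated by triangle counting, since trace(A^3) counts each triangle six times. A pure-Python sketch of that primitive (illustrative only; the paper applies it to an exponentially large derived graph):

```python
def count_triangles(adj):
    """Count triangles in an undirected graph given as a 0/1 adjacency
    matrix. trace(A^3) counts each triangle 6 times (once per ordered
    traversal); with fast matrix multiplication this runs in O(n^omega)."""
    n = len(adj)
    # a2 = A^2, computed naively here; a fast algorithm would use
    # Strassen-like multiplication to get the omega exponent.
    a2 = [[sum(adj[i][k] * adj[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    # trace(A^3) = sum_{i,j} (A^2)[i][j] * A[j][i]
    trace_a3 = sum(a2[i][j] * adj[j][i] for i in range(n) for j in range(n))
    return trace_a3 // 6
```

Any improvement in the exponent of this kind of product (even over GF(2)) would, per the abstract, transfer to 2-CSP optimization.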
Space and time complexity of exact algorithms: Some open problems
, 2004
Cited by 26 (0 self)
We discuss open questions around worst-case time and space bounds for NP-hard problems. We are interested in exponential-time solutions for these problems with relatively good worst-case behavior.
Strong computational lower bounds via parameterized complexity
, 2006
Cited by 16 (2 self)
We develop new techniques for deriving strong computational lower bounds for a class of well-known NP-hard problems. This class includes weighted satisfiability, dominating set, hitting set, set cover, clique, and independent set. For example, although a trivial enumeration can easily test in time O(n^k) if a given graph of n vertices has a clique of size k, we prove that unless an unlikely collapse occurs in parameterized complexity theory, the problem is not solvable in time f(k) · n^{o(k)} for any function f, even if we restrict the parameter values to be bounded by an arbitrarily small function of n. Under the same assumption, we prove that even if we restrict the parameter values k to be of the order Θ(µ(n)) for any reasonable function µ, no algorithm of running time n^{o(k)} can test if a graph of n vertices has a clique of size k. Similar strong lower bounds on the computational complexity are also derived for other NP-hard problems in the above class. Our techniques can be further extended to derive computational lower bounds on polynomial-time approximation schemes for NP-hard optimization problems. For example, we prove that the NP-hard distinguishing substring selection problem, for which a polynomial-time approximation scheme has recently been developed, has no polynomial-time approximation scheme of running time f(1/ε) · n^{o(1/ε)} for any function f unless an unlikely collapse occurs in parameterized complexity theory.
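The trivial O(n^k) enumeration the abstract contrasts against looks like this (an illustrative sketch assuming an adjacency-matrix input; names are ours):

```python
from itertools import combinations

def has_k_clique(adj, k):
    """Trivial O(n^k * k^2) test: enumerate every size-k vertex subset
    and check that all pairs are adjacent. The lower bound above says
    no f(k) * n^{o(k)} algorithm exists unless the W-hierarchy collapses."""
    n = len(adj)
    return any(all(adj[u][v] for u, v in combinations(S, 2))
               for S in combinations(range(n), k))
```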
On the possibility of faster SAT algorithms
Cited by 12 (1 self)
We describe reductions from the problem of determining the satisfiability of Boolean CNF formulas (CNF-SAT) to several natural algorithmic problems. We show that attaining any of the following bounds would improve the state of the art in algorithms for SAT:
• an O(n^{k−ε}) algorithm for k-Dominating Set, for any k ≥ 3,
• a (computationally efficient) protocol for 3-party set disjointness with o(m) bits of communication,
• an n^{o(d)} algorithm for d-SUM,
• an O(n^{2−ε}) algorithm for 2-SAT with m = n^{1+o(1)} clauses, where two clauses may have unrestricted length, and
• an O((n + m)^{k−ε}) algorithm for Horn-SAT with k unrestricted-length clauses.
One may interpret our reductions as new attacks on the complexity of SAT, or sharp lower bounds conditional on exponential hardness of SAT.
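The baseline that the first bullet asks to beat is the naive algorithm for k-Dominating Set: try all size-k vertex subsets with a straightforward domination check, roughly O(n^{k+1}) time. A sketch of that baseline (names and input format are ours):

```python
from itertools import combinations

def has_k_dominating_set(adj, k):
    """Naive check: does some size-k vertex set D dominate the graph,
    i.e. is every vertex either in D or adjacent to a member of D?
    Enumerating all C(n, k) subsets gives roughly O(n^{k+1}) time;
    an O(n^{k-eps}) algorithm for any k >= 3 would improve CNF-SAT."""
    n = len(adj)
    for D in combinations(range(n), k):
        dset = set(D)
        if all(v in dset or any(adj[v][u] for u in D) for v in range(n)):
            return True
    return False
```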
Algorithms and Resource Requirements for Fundamental Problems
, 2007
Cited by 10 (6 self)
no. DGE0234630. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of any sponsoring institution, the U.S. government or any other entity.
Public Key Cryptography from Different Assumptions
, 2008
Cited by 10 (2 self)
We construct a new public key encryption based on two assumptions:
1. One can obtain a pseudorandom generator with small locality by connecting the outputs to the inputs using any sufficiently good unbalanced expander.
2. It is hard to distinguish between a random graph that is such an expander and a random graph where a (planted) random logarithmic-sized subset S of the outputs is connected to fewer than |S| inputs.
The validity and strength of the assumptions raise interesting new algorithmic and pseudorandomness questions, and we explore their relation to the current state of the art.
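Assumption 1 describes a small-locality generator: each output bit depends only on the few seed bits its expander neighborhood selects. A toy sketch of evaluating such a generator follows; the XOR-plus-AND predicate and all names here are our illustrative choices, not the paper's actual construction:

```python
def local_prg(seed_bits, out_graph):
    """Toy small-locality PRG evaluation. out_graph[i] lists the seed
    positions wired to output i (in the paper, by an unbalanced
    expander). Each output bit applies a fixed predicate to its d
    neighbours; here, illustratively, XOR of the first d-2 bits XORed
    with the AND of the last two (requires d >= 2)."""
    out = []
    for nbrs in out_graph:
        x = [seed_bits[j] for j in nbrs]
        bit = 0
        for b in x[:-2]:
            bit ^= b
        bit ^= x[-2] & x[-1]
        out.append(bit)
    return out
```

The point of small locality is that each output bit is computable in constant time, while (conjecturally) the whole output remains pseudorandom when the wiring graph is a good expander.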
How hard is it to approximate the best Nash equilibrium?
, 2009
Cited by 10 (0 self)
The quest for a PTAS for Nash equilibrium in a two-player game seeks to circumvent the PPAD-completeness of an (exact) Nash equilibrium by finding an approximate equilibrium, and has emerged as a major open question in Algorithmic Game Theory. A closely related problem is that of finding an equilibrium maximizing a certain objective, such as the social welfare. This optimization problem was shown to be NP-hard by Gilboa and Zemel [Games and Economic Behavior 1989]. However, this NP-hardness is unlikely to extend to finding an approximate equilibrium, since the latter admits a quasi-polynomial time algorithm, as proved by Lipton, Markakis and Mehta [Proc. of 4th EC, 2003]. We show that this optimization problem, namely finding in a two-player game an approximate equilibrium achieving large social welfare, is unlikely to have a polynomial-time algorithm. One interpretation of our results is that the quest for a PTAS for Nash equilibrium should not extend to a PTAS for finding the best Nash equilibrium, which stands in contrast to certain algorithmic techniques used so far (e.g. sampling and enumeration). Technically, our result is a reduction from a notoriously difficult problem in modern combinatorics: finding a planted (but hidden) clique in a random graph G(n, 1/2). Our reduction starts from an instance with planted clique size k = O(log n). For comparison, the currently known algorithms due to Alon, Krivelevich and Sudakov [Random Struct. & Algorithms, 1998], and Krauthgamer and Feige [Random Struct. & Algorithms, 2000], are effective for a much larger clique size k = Ω(√n).
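The hidden-clique distribution the reduction starts from is easy to sample: draw G(n, 1/2), then force a random k-subset of vertices to be a clique. A generator sketch (names ours; the hardness lies in finding the planted set, not in producing it):

```python
import random
from itertools import combinations

def planted_clique_instance(n, k, seed=0):
    """Sample the planted-clique distribution: an Erdos-Renyi graph
    G(n, 1/2) with a clique forced onto a random k-subset of vertices.
    Returns (adjacency matrix, planted vertex set)."""
    rng = random.Random(seed)
    clique = set(rng.sample(range(n), k))
    adj = [[0] * n for _ in range(n)]
    for u, v in combinations(range(n), 2):
        # Clique pairs are always connected; other pairs with prob 1/2.
        if (u in clique and v in clique) or rng.random() < 0.5:
            adj[u][v] = adj[v][u] = 1
    return adj, clique
```

Known spectral algorithms recover the clique only when k = Ω(√n); the reduction above needs hardness already at k = O(log n).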
Finding small balanced separators
, 2006
Cited by 7 (0 self)
Let G be an n-vertex graph that has a vertex separator of size k that partitions the graph into connected components of size smaller than αn, for some fixed 2/3 ≤ α < 1. Such a separator is called an α-separator. Finding an α-separator of size at most k is NP-hard. Moreover, under reasonable complexity-theoretic assumptions, it is shown that this problem is not polynomially solvable even when k = O(log n). In this paper, we give a randomized algorithm that finds an α-separator of size k in the given graph, unless the graph contains an (α + ε)-separator of size strictly less than k, in which case our algorithm finds one such separator. For fixed ε, the running time of our algorithm is n^{O(1)} · 2^{O(k)}, which is polynomial for k = O(log n). For bounded-degree graphs (as well as for the case of finding balanced edge separators), we present a deterministic algorithm with similar running time. Our algorithm involves (among other things) a new concept that we call (ε, k)-samples. This is related to the notion of detection sets for network failures, introduced by Kleinberg [FOCS 2000]. Our proofs adapt and simplify techniques that were introduced by Kleinberg. As a byproduct, our proof improves the known bounds on the size of detection sets. We also show applications of (ε, k)-samples to problems in approximation algorithms and rigorous analysis of heuristics.
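While finding an α-separator is NP-hard, verifying a candidate is easy: delete the set and check that every remaining connected component has fewer than αn vertices. A verification sketch (names and input format are ours):

```python
def is_alpha_separator(adj, sep, alpha):
    """Check whether vertex set `sep` is an alpha-separator of the graph
    given by 0/1 adjacency matrix `adj`: after removing `sep`, every
    connected component must have fewer than alpha * n vertices."""
    n = len(adj)
    seen = set(sep)
    for start in range(n):
        if start in seen:
            continue
        # Flood-fill the component of `start` in G - sep.
        comp_size, stack = 0, [start]
        seen.add(start)
        while stack:
            v = stack.pop()
            comp_size += 1
            for u in range(n):
                if adj[v][u] and u not in seen:
                    seen.add(u)
                    stack.append(u)
        if comp_size >= alpha * n:
            return False
    return True
```

This O(n^2) check is the easy direction; the paper's n^{O(1)} · 2^{O(k)} algorithm is about searching for such a set.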