Results 1–10 of 82
Vertex Cover: Further Observations and Further Improvements
 Journal of Algorithms
, 1999
Abstract

Cited by 155 (15 self)
Recently, there has been increasing interest and progress in lowering the worst-case time complexity for well-known NP-hard problems, in particular for the Vertex Cover problem. In this paper, new properties of the Vertex Cover problem are identified and several simple new techniques are introduced, which lead to an improved algorithm of time O(kn + 1.271^k k^2) for the problem. Our algorithm also yields improvements on previous algorithms for the Independent Set problem on graphs of small degree.

1 Introduction
Many optimization problems from industrial applications are NP-hard. According to NP-completeness theory [10], these problems cannot be solved in polynomial time unless P = NP. However, this fact does not obviate the need to solve these problems, given their practical importance. There have been a number of approaches to attacking the NP-hardness of optimization problems, including approximation algorithms, heuristic algorithms, and average-time analysis. Recent...
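The O(kn + 1.271^k k^2) bound above comes from refined branching rules; the basic bounded-search-tree idea it improves on can be sketched as follows. This is a minimal 2^k-branching version for illustration only, not the paper's algorithm: every edge {u, v} must be covered by u or v, so we branch on both choices and decrement the budget k.

```python
# Minimal bounded-search-tree sketch for Vertex Cover: the classic
# O(2^k * m) branching, NOT the refined O(1.271^k) algorithm of the paper.

def vertex_cover(edges, k):
    """Return a vertex cover of size <= k as a set, or None if none exists."""
    if not edges:
        return set()        # nothing left to cover
    if k == 0:
        return None         # budget exhausted but edges remain
    u, v = next(iter(edges))  # any uncovered edge forces a branch
    for w in (u, v):
        rest = [e for e in edges if w not in e]  # edges covered by w vanish
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None

# A triangle needs 2 vertices in any cover.
triangle = [(1, 2), (2, 3), (1, 3)]
print(vertex_cover(triangle, 1))       # None
print(len(vertex_cover(triangle, 2)))  # 2
```

The refined algorithms in the paper shrink the branching tree further via degree-based case analysis and kernelization, which is where the 1.271^k base comes from.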
Exact algorithms for NP-hard problems: A survey
 Combinatorial Optimization – Eureka, You Shrink!, LNCS
Abstract

Cited by 118 (3 self)
Abstract. We discuss fast exponential-time solutions for NP-complete problems. We survey known results and approaches, we provide pointers to the literature, and we discuss several open problems in this area. The list of discussed NP-complete problems includes the travelling salesman problem, scheduling under precedence constraints, satisfiability, knapsack, graph coloring, independent sets in graphs, bandwidth of a graph, and many more.
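A standard example of the exact exponential-time algorithms such surveys cover is the Held–Karp dynamic program for the travelling salesman problem, which runs in O(2^n n^2) time instead of the naive O(n!). A self-contained sketch:

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp DP: exact TSP tour cost in O(2^n * n^2) time.
    dist is an n x n cost matrix; the tour starts and ends at city 0."""
    n = len(dist)
    # dp[(S, j)] = cheapest path from 0 visiting exactly the cities in
    # bitmask S and ending at city j (0 itself is never in S).
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            mask = sum(1 << j for j in S)
            for j in S:
                prev = mask ^ (1 << j)  # same set without the endpoint j
                dp[(mask, j)] = min(dp[(prev, i)] + dist[i][j]
                                    for i in S if i != j)
    full = (1 << n) - 2  # all cities except 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 1, 2, 9],
        [1, 0, 6, 4],
        [2, 6, 0, 3],
        [9, 4, 3, 0]]
print(held_karp(dist))  # 10, e.g. the tour 0-1-3-2-0
```

The dynamic program trades the factorial tour enumeration for a table over all 2^n subsets, the hallmark trade-off of the exact-algorithms field the survey describes.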
Finding Maximum Independent Sets in Sparse and General Graphs
, 1999
Abstract

Cited by 55 (1 self)
... − {v}); {v} ∪ SparseFindMIS(G − {v} − N(v)))
end
Figure 1: An algorithm that runs in time 2^0.114e on e-edge graphs

function FindMIS(G)
begin
  if G contains at most 5n/2 edges then return SparseFindMIS(G)
  if G contains an edge {u, v} such that N(u) ⊆ N(v) ∪ {v} then return FindMIS(G − {v})
  let d be the maximum degree of any vertex in G (* so d ≥ 6 *)
  if d ≥ 8 then
    let v be a degree-d vertex
    return max(FindMIS(G − ...
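The core branching step that FindMIS refines (either a maximum-degree vertex v is excluded, or it is included and all of N(v) is discarded) can be rendered in runnable form. This is a minimal illustration only, without the degree case analysis and SparseFindMIS subroutine that give the paper its bounds:

```python
def find_mis(adj):
    """Maximum independent set by branching on a maximum-degree vertex.
    adj: dict vertex -> set of neighbours. Exponential time; small graphs only."""
    if not adj:
        return set()
    v = max(adj, key=lambda u: len(adj[u]))
    if not adj[v]:  # all vertices isolated: take them all
        return set(adj)
    # Branch 1: v is excluded from the independent set.
    without = find_mis(remove(adj, {v}))
    # Branch 2: v is included, so every neighbour of v is excluded.
    with_v = {v} | find_mis(remove(adj, {v} | adj[v]))
    return max(without, with_v, key=len)

def remove(adj, drop):
    """Induced subgraph after deleting the vertices in drop."""
    return {u: nb - drop for u, nb in adj.items() if u not in drop}

# 5-cycle: the maximum independent set has size 2.
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(len(find_mis(cycle)))  # 2
```

The point of the paper's extra machinery is to make the worse of the two branches cheap enough, via degree bounds and a sparse-graph subroutine, that the recurrence solves to 2^0.114e rather than 2^n.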
Measure and conquer: domination – a case study
 Proceedings of the 32nd International Colloquium on Automata, Languages and Programming (ICALP 2005), Springer LNCS
, 2005
Abstract

Cited by 50 (20 self)
Davis-Putnam-style exponential-time backtracking algorithms are the most common algorithms used for finding exact solutions of NP-hard problems. The analysis of such recursive algorithms is based on the bounded search tree technique: a measure of the size of the subproblems is defined; this measure is used to lower bound the progress made by the algorithm at each branching step. For the last 30 years the research on exact algorithms has mainly focused on the design of more and more sophisticated algorithms. However, the measures used in the analysis of backtracking algorithms are usually very simple. In this paper we stress that a more careful choice of the measure can lead to a significantly better worst-case time analysis. As an example, we consider the minimum dominating set problem. The currently fastest algorithm for this problem has running time O(2^0.850n) on n-node graphs. By measuring the progress of the (same) algorithm in a different way, we refine the time bound to O(2^0.598n). A good choice of the measure can provide such a (surprisingly big) improvement; this suggests that the running time of many other exponential-time recursive algorithms is largely overestimated because of a "bad" choice of the measure.
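Bounds like O(2^0.850n) versus O(2^0.598n) both come from solving recurrences of the form T(μ) ≤ Σᵢ T(μ − δᵢ), where μ is the chosen measure: the base of the bound is the unique root x > 1 of Σᵢ x^(−δᵢ) = 1, and a cleverer measure yields larger effective decreases δᵢ. A small numerical sketch (the branching vectors below are illustrative, not the paper's case analysis):

```python
def branching_number(deltas):
    """Root x > 1 of sum(x**-d for d in deltas) == 1, found by bisection:
    the base c such that T(m) <= sum_i T(m - deltas[i]) solves to O(c^m)."""
    f = lambda x: sum(x ** -d for d in deltas) - 1.0  # decreasing in x
    lo, hi = 1.0, 2.0
    while f(hi) > 0:       # grow the bracket until the root is inside
        hi *= 2
    for _ in range(200):   # bisect to machine precision
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return hi

# Removing 1 unit of measure on each branch gives plain 2^n:
print(round(branching_number([1, 1]), 4))  # 2.0
# Removing 1 on one branch and 3 on the other is already much better:
print(round(branching_number([1, 3]), 4))  # 1.4656
```

Measure and conquer keeps the algorithm fixed and re-weights the measure (e.g. giving vertices weights depending on their degree), so the same branching step yields larger δᵢ and hence a smaller root.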
On the Complexity of k-SAT
, 2001
Abstract

Cited by 45 (3 self)
The k-SAT problem is to determine if a given k-CNF has a satisfying assignment. It is a celebrated open question whether it requires exponential time to solve k-SAT for k ≥ 3; here exponential time means 2^(δn) for some δ > 0. In this paper, assuming that k-SAT requires exponential time for k ≥ 3, we show that the complexity of k-SAT increases as k increases. More precisely, for k ≥ 3, define s_k = inf{δ : there exists a 2^(δn) algorithm for solving k-SAT}. Define ETH (Exponential-Time Hypothesis) for k-SAT as follows: for k ≥ 3, s_k > 0. We show that s_k is increasing infinitely often assuming ETH for k-SAT. Let s∞ be the limit of the s_k. We in fact show that s_k ≤ (1 − d/k) s∞ for some constant d > 0. We prove this result by bringing together the ideas of critical clauses and the Sparsification Lemma to reduce the satisfiability of a k-CNF to the satisfiability of a disjunction of 2^(εn) k′-CNFs in fewer variables, for some k′ ≥ k and arbitrarily small ε > 0. We also show that such a disjunction can be computed in time 2^(εn) for arbitrarily small ε > 0.
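To make the constants s_k concrete: the trivial algorithm tries all 2^n assignments, but even simple clause branching already gives a nontrivial upper bound on s_k, since branching on the (at most k) literals of one clause yields the recurrence T(n) ≤ T(n−1) + ... + T(n−k). A minimal sketch of that scheme (illustrative only; the paper's lower-bound machinery of critical clauses and sparsification is something else entirely):

```python
def ksat(clauses):
    """Clause branching for SAT. Pick an unsatisfied clause (l1, ..., lk)
    and branch on: l1 true; l1 false and l2 true; ...; which gives the
    recurrence T(n) <= T(n-1) + ... + T(n-k), already beating plain 2^n.
    Clauses are tuples of nonzero ints; -x means variable x negated."""
    if any(len(c) == 0 for c in clauses):
        return False  # an empty clause is unsatisfiable
    if not clauses:
        return True   # every clause satisfied
    clause = min(clauses, key=len)  # shortest clause: fewest branches
    fixed = []
    for lit in clause:
        # earlier literals of the clause are forced false, this one true
        if ksat(assign(clauses, fixed + [lit])):
            return True
        fixed.append(-lit)
    return False

def assign(clauses, lits):
    """Simplify under the partial assignment making every literal in lits true."""
    true = set(lits)
    out = []
    for c in clauses:
        if any(l in true for l in c):
            continue  # clause already satisfied
        out.append(tuple(l for l in c if -l not in true))  # drop false literals
    return out

print(ksat([(1, 2), (-1, 2), (1, -2)]))  # True (x1 = x2 = True works)
print(ksat([(1,), (-1,)]))               # False
```

The paper's point is the converse direction: lower bounds on the s_k relative to one another, conditional on ETH.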
A graph-theoretic algorithm for comparative modeling of protein structure
 J Mol Biol
, 1998
Abstract

Cited by 35 (10 self)
The rapidly increasing number of known protein structures has resulted in a situation where approximate structures corresponding to new sequences are often available from one of two
Load Balancing for Distributed Branch & Bound Algorithms
, 1992
Abstract

Cited by 35 (7 self)
In this paper, we present a new load balancing algorithm and its application to distributed branch & bound algorithms. We demonstrate the efficiency of this scheme by solving some NP-complete problems on a network of up to 256 Transputers. The parallelization of our branch & bound algorithm is fully distributed: every processor performs the same algorithm, but on a different part of the solution tree. In this case, it is necessary to distribute subproblems among the processors to achieve a well-balanced workload. We present a load balancing method which overcomes the problem of search overhead and idle times through an appropriate load model, and which avoids thrashing effects by a feedback control strategy. To show the performance of our strategy, we solved the Vertex Cover and the weighted Vertex Cover problems for graphs of up to 150 nodes, using highly efficient branch and bound algorithms. Although the computing times were very short on a 256-processor network, we were able to achieve a speedup ...
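The scheme described (identical workers, each expanding a different part of the search tree, with subproblems shipped to idle processors and pruning against a shared incumbent) can be caricatured with a shared work queue. This toy stand-in uses threads instead of Transputers and a central queue instead of neighbour-to-neighbour load balancing; `parallel_bnb_vc` is a hypothetical name, not the paper's system:

```python
import threading
import queue

def parallel_bnb_vc(edges, workers=4):
    """Toy shared-queue branch & bound for minimum Vertex Cover.
    Workers pull subproblems (remaining edges, partial-cover size), prune
    against the best bound found so far, and push both branches back."""
    best = {'size': len({v for e in edges for v in e})}  # trivial upper bound
    lock = threading.Lock()
    work = queue.Queue()
    work.put((tuple(edges), 0))

    def worker():
        while True:
            try:
                rest, used = work.get(timeout=0.1)
            except queue.Empty:
                return  # no pending work; any busy worker drains its own pushes
            if used >= best['size']:
                continue          # prune: cannot beat the incumbent
            if not rest:
                with lock:        # new incumbent found
                    best['size'] = min(best['size'], used)
                continue
            u, v = rest[0]
            for w in (u, v):      # edge {u, v} forces u or v into the cover
                work.put((tuple(e for e in rest if w not in e), used + 1))

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return best['size']

# Triangle: the minimum vertex cover has size 2.
print(parallel_bnb_vc([(1, 2), (2, 3), (1, 3)]))  # 2
```

What the toy omits is precisely the paper's contribution: with a central queue, contention and idle time dominate at 256 processors, which is why a distributed load model with feedback control is needed there.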
Measure and Conquer: A Simple O(2^0.288n) Independent Set Algorithm
Abstract

Cited by 33 (2 self)
For more than 30 years Davis-Putnam-style exponential-time backtracking algorithms have been the most common tools used for finding exact solutions of NP-hard problems. Despite that, the way such recursive algorithms are analyzed is still far from producing tight worst-case running time bounds. The "Measure and Conquer" approach is one of the recent attempts to step beyond such limitations. The approach is based on the choice of the measure of the subproblems recursively generated by the algorithm considered; this measure is used to lower bound the progress made by the algorithm at each branching step. A good choice of the measure can lead to a significantly better worst-case time analysis. In this paper we apply "Measure and Conquer" to the analysis of a very simple backtracking algorithm solving the well-studied maximum independent set problem. The result of the analysis is striking: the running time of the algorithm is O(2^0.288n), which is competitive with the current best time bounds obtained with far more complicated algorithms (and naive analysis). Our example shows that a good choice of the measure, made in the very first stages of exact algorithm design, can have a tremendous impact on the achievable running time bounds.
A measure & conquer approach for the analysis of exact algorithms
, 2007
Abstract

Cited by 29 (8 self)
For more than 40 years Branch & Reduce exponential-time backtracking algorithms have been among the most common tools used for finding exact solutions of NP-hard problems. Despite that, the way to analyze such recursive algorithms is still far from producing tight worst-case running time bounds. Motivated by this, we use an approach that we call "Measure & Conquer" as an attempt to step beyond such limitations. The approach is based on the careful design of a non-standard measure of the subproblem size; this measure is then used to lower bound the progress made by the algorithm at each branching step. The idea is that a smarter measure may capture behaviors of the algorithm that a standard measure might not be able to exploit, and hence lead to a significantly better worst-case time analysis. In order to show the potential of Measure & Conquer, we consider two well-studied NP-hard problems: minimum dominating set and maximum independent set. For the first problem, we consider the current best algorithm, and prove (thanks to a better measure) a much tighter running time bound for it. For the second problem, we describe a new, simple algorithm, and show that its running time is competitive with the current best time bounds, achieved with far more complicated algorithms (and standard analysis). Our examples