Results 1 - 10 of 194
Fixed Parameter Algorithms for Dominating Set and Related Problems on Planar Graphs
, 2002
"... We present an algorithm that constructively produces a solution to the k-dominating set problem for planar graphs in time O(c . To obtain this result, we show that the treewidth of a planar graph with domination number (G) is O( (G)), and that such a tree decomposition can be found in O( (G)n) time. ..."
Abstract
-
Cited by 112 (22 self)
- Add to MetaCart
We present an algorithm that constructively produces a solution to the k-dominating set problem for planar graphs in time O(c^√k · n). To obtain this result, we show that the treewidth of a planar graph with domination number γ(G) is O(√γ(G)), and that such a tree decomposition can be found in O(√γ(G) · n) time. The same technique can be used to show that the k-face cover problem (find a size-k set of faces that cover all vertices of a given plane graph) can be solved in O(c₁^√k · n) time for a constant c₁, where k is the size of the face cover set. Similar results can be obtained in the planar case for some variants of k-dominating set, e.g., k-independent dominating set and k-weighted dominating set.
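As a point of reference for the problem being parameterized, here is a minimal brute-force sketch in Python of k-dominating set. It is purely illustrative and is not the paper's O(c^√k · n) treewidth-based algorithm; the adjacency-dict representation and function names are our own:

```python
from itertools import combinations

def is_dominating_set(adj, candidate):
    """True if every vertex is in `candidate` or adjacent to a member of it."""
    dominated = set(candidate)
    for v in candidate:
        dominated.update(adj[v])
    return dominated == set(adj)

def k_dominating_set(adj, k):
    """Naive search for a dominating set of size at most k.  This brute
    force runs in time roughly O(n^k); the paper's point is that the
    planar case admits an O(c^sqrt(k) * n) algorithm instead."""
    vertices = list(adj)
    for size in range(1, k + 1):
        for candidate in combinations(vertices, size):
            if is_dominating_set(adj, candidate):
                return set(candidate)
    return None

# A 4-cycle: no single vertex dominates it, but many pairs do.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(k_dominating_set(adj, 2))  # -> {0, 1} (first size-2 set found)
```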
Polynomial-Time Data Reduction for DOMINATING SET
- Journal of the ACM
, 2004
"... Dealing with the NP-complete Dominating Set problem on graphs, we demonstrate the power of data reduction by preprocessing from a theoretical as well as a practical side. In particular, we prove that Dominating Set restricted to planar graphs has a so-called problem kernel of linear size, achiev ..."
Abstract
-
Cited by 64 (8 self)
- Add to MetaCart
(Show Context)
Dealing with the NP-complete Dominating Set problem on graphs, we demonstrate the power of data reduction by preprocessing from both a theoretical and a practical side. In particular, we prove that Dominating Set restricted to planar graphs has a so-called problem kernel of linear size, achieved by two simple and easy-to-implement reduction rules. Moreover, having implemented our reduction rules, first experiments indicate the impressive practical potential of these rules. Thus, this work opens up a new and promising way to cope with one of the most important problems in graph theory and combinatorial optimization.
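To give a concrete feel for what such a reduction rule looks like, here is a Python sketch in the spirit of the paper's first rule: partition the neighborhood of a single vertex and discard neighbors that can only be dominated from within it. The adjacency-dict representation, the simplified degree-1 gadget, and the name rule1 are our own assumptions, not the paper's exact formulation:

```python
def rule1(adj, v):
    """One application of a Rule-1-style reduction for Dominating Set,
    following the neighborhood-partition idea described in the paper
    (gadget construction simplified).  `adj` maps vertices to neighbor sets."""
    closed = adj[v] | {v}
    # N1: neighbors of v that also have a neighbor outside N[v].
    n1 = {u for u in adj[v] if adj[u] - closed}
    # N2: remaining neighbors of v adjacent to some vertex of N1.
    n2 = {u for u in adj[v] - n1 if adj[u] & n1}
    # N3: neighbors of v dominated only from within N[v].
    n3 = adj[v] - n1 - n2
    if not n3:
        return False  # rule does not apply at v
    # v belongs to some optimal dominating set: delete N2 and N3 and
    # attach a fresh degree-1 neighbor that forces v into the solution.
    removed = n2 | n3
    for u in removed:
        for w in adj[u] - removed:
            adj[w].discard(u)
    for u in removed:
        del adj[u]
    gadget = ("forced", v)
    adj[gadget] = {v}
    adj[v].add(gadget)
    return True

# Star with center "c": all leaves fall into N3, so the rule fires and
# shrinks the graph to the center plus its forcing gadget.
adj = {"c": {1, 2, 3, 4}, 1: {"c"}, 2: {"c"}, 3: {"c"}, 4: {"c"}}
print(rule1(adj, "c"), sorted(adj, key=str))
```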
Concise finite-domain representations for PDDL planning tasks
, 2009
"... We introduce an efficient method for translating planning tasks specified in the standard PDDL formalism into a concise grounded representation that uses finite-domain state variables instead of the straight-forward propositional encoding. Translation is performed in four stages. Firstly, we transfo ..."
Abstract
-
Cited by 63 (13 self)
- Add to MetaCart
(Show Context)
We introduce an efficient method for translating planning tasks specified in the standard PDDL formalism into a concise grounded representation that uses finite-domain state variables instead of the straightforward propositional encoding. Translation is performed in four stages. First, we transform the input task into an equivalent normal form expressed in a restricted fragment of PDDL. Second, we synthesize invariants of the planning task that identify groups of mutually exclusive propositions, each of which can be represented by a single finite-domain variable. Third, we perform an efficient relaxed reachability analysis using logic programming techniques to obtain a grounded representation of the input. Finally, we combine the results of the preceding stages to generate the final grounded finite-domain representation. The presented approach was originally implemented as part of the Fast Downward planning system for the 4th International Planning Competition (IPC4). Since then, it has been used in a number of other contexts with considerable success, and the use of concise finite-domain representations has become a common feature of state-of-the-art planners.
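The payoff of the invariant-synthesis stage is easiest to see in the encoding step: a group of mutually exclusive propositions collapses into one multi-valued variable. A minimal sketch, assuming a dictionary-based representation; the function name, the "<none>" padding value, and the binary fallback are illustrative, not the translator's actual data structures:

```python
def to_finite_domain(mutex_groups, propositions):
    """Sketch of the key encoding step: each group of mutually exclusive
    propositions becomes one finite-domain variable whose values are the
    group's propositions plus a '<none of these>' value (needed when the
    invariant only guarantees at-most-one proposition holds)."""
    variables = []
    covered = set()
    for group in mutex_groups:
        # One variable per invariant group; domain = group + <none>.
        variables.append({"domain": sorted(group) + ["<none>"]})
        covered.update(group)
    # Propositions in no group fall back to binary (true/false) variables.
    for p in sorted(propositions - covered):
        variables.append({"domain": [p, "not-" + p]})
    return variables

# Example: at(truck,A), at(truck,B), at(truck,C) are mutually exclusive,
# so they collapse into a single 4-valued variable.
groups = [{"at(truck,A)", "at(truck,B)", "at(truck,C)"}]
props = {"at(truck,A)", "at(truck,B)", "at(truck,C)", "fuel-low"}
for var in to_finite_domain(groups, props):
    print(var["domain"])
```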
A proposal for a heterogeneous cluster ScaLAPACK (dense linear solvers)
, 2001
"... In this paper, we study the implementation of dense linear algebra kernels, such as matrix multiplication or linear system solvers, on heterogeneous networks of workstations. The uniform block-cyclic data distribution scheme commonly used for homogeneous collections of processors limits the perform ..."
Abstract
-
Cited by 59 (24 self)
- Add to MetaCart
(Show Context)
In this paper, we study the implementation of dense linear algebra kernels, such as matrix multiplication or linear system solvers, on heterogeneous networks of workstations. The uniform block-cyclic data distribution scheme commonly used for homogeneous collections of processors limits the performance of these linear algebra kernels on heterogeneous grids to the speed of the slowest processor. We present and study more sophisticated data allocation strategies that balance the load on heterogeneous platforms with respect to the performance of the processors. When targeting unidimensional grids, the load-balancing problem can be solved rather easily. When targeting two-dimensional grids, which are the key to scalability and efficiency for numerical kernels, the problem turns out to be surprisingly difficult. We formally state the 2D load-balancing problem and prove its NP-completeness. Next, we introduce a data allocation heuristic, which turns out to be very satisfactory: Its practical usefulness is demonstrated by MPI experiments conducted with a heterogeneous network of workstations.
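As a taste of the "easy" unidimensional case mentioned above, here is a greedy Python sketch that hands each successive column block to the processor that would finish it first, so the load ends up proportional to speed. The speeds and function name are made up; this is our illustration, not the paper's heuristic:

```python
def allocate_1d(speeds, num_blocks):
    """Distribute num_blocks equal-size column blocks among processors
    with relative speeds `speeds`, greedily minimizing the makespan
    max_i(blocks_i / speeds_i).  A sketch of the easy 1D case only,
    not the paper's 2D heuristic."""
    blocks = [0] * len(speeds)
    for _ in range(num_blocks):
        # Give the next block to the processor that finishes it first.
        i = min(range(len(speeds)), key=lambda j: (blocks[j] + 1) / speeds[j])
        blocks[i] += 1
    return blocks

# Three workstations, one twice as fast: it receives half the blocks.
print(allocate_1d([2.0, 1.0, 1.0], 12))  # -> [6, 3, 3]
```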
Parameterized complexity and approximation algorithms
- Comput. J
, 2006
"... Approximation algorithms and parameterized complexity are usually considered to be two separate ways of dealing with hard algorithmic problems. In this paper, our aim is to investigate how these two fields can be combined to achieve better algorithms than what any of the two theories could offer. We ..."
Abstract
-
Cited by 56 (2 self)
- Add to MetaCart
(Show Context)
Approximation algorithms and parameterized complexity are usually considered to be two separate ways of dealing with hard algorithmic problems. In this paper, our aim is to investigate how these two fields can be combined to achieve better algorithms than either theory could offer on its own. We discuss the different ways parameterized complexity can be extended to approximation algorithms, survey results of this type, and propose directions for future research.
Matrix Multiplication on Heterogeneous Platforms
, 2001
"... this paper, we address the issue of implementing matrix multiplication on heterogeneous platforms. We target two different classes of heterogeneous computing resources: heterogeneous networks of workstations and collections of heterogeneous clusters. Intuitively, the problem is to load balance the ..."
Abstract
-
Cited by 53 (15 self)
- Add to MetaCart
In this paper, we address the issue of implementing matrix multiplication on heterogeneous platforms. We target two different classes of heterogeneous computing resources: heterogeneous networks of workstations and collections of heterogeneous clusters. Intuitively, the problem is to load-balance the work across resources of different speeds while minimizing the communication volume. We formally state this problem in a geometric framework and prove its NP-completeness. Next, we introduce a (polynomial) column-based heuristic, which turns out to be very satisfactory: we derive a theoretical performance guarantee for the heuristic and assess its practical usefulness through MPI experiments.
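To make the geometric framing concrete, the following Python sketch carves the unit square into processor rectangles column by column, each with area proportional to relative speed. The grouping of processors into columns is taken as given here, whereas choosing it well is precisely what a column-based heuristic must do; names and example speeds are ours:

```python
def column_partition(speed_columns):
    """Sketch of a column-based partition of the unit square: each
    column's width is proportional to its total speed, and each
    processor gets a slice of its column with height proportional to
    its share, so every rectangle's area matches the processor's
    relative speed.  Illustrative only, not the paper's heuristic."""
    total = sum(sum(col) for col in speed_columns)
    rects, x = [], 0.0
    for col in speed_columns:
        width = sum(col) / total
        y = 0.0
        for s in col:
            height = s / sum(col)
            rects.append((x, y, width, height))  # (x, y, width, height)
            y += height
        x += width
    return rects

# Four processors with speeds 3, 1 (first column) and 2, 2 (second).
for r in column_partition([[3.0, 1.0], [2.0, 2.0]]):
    print(tuple(round(v, 3) for v in r))
```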
A short proof that phylogenetic tree reconstruction by maximum likelihood is hard
- IEEE Trans Comput Biol and Bioinformatics
"... Maximum likelihood is one of the most widely used techniques to infer evolutionary histories. Although it is thought to be intractable, a proof of its hardness has been lacking. Here, we give a short proof that computing the maximum likelihood tree is NP-hard by exploiting a connection between likel ..."
Abstract
-
Cited by 48 (7 self)
- Add to MetaCart
(Show Context)
Maximum likelihood is one of the most widely used techniques to infer evolutionary histories. Although it is thought to be intractable, a proof of its hardness has been lacking. Here, we give a short proof that computing the maximum likelihood tree is NP-hard by exploiting a connection between likelihood and parsimony observed by Tuffley and Steel.
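The parsimony side of the Tuffley-Steel connection is computable by the classical Fitch algorithm. A minimal sketch for one character on a rooted binary tree, included only to make the object concrete; the tree encoding and names are ours, and this standard textbook routine is not taken from the paper:

```python
def fitch_score(tree, leaf_states):
    """Fitch's small-parsimony algorithm for one character: the minimum
    number of state changes on a fixed rooted binary tree that explains
    the observed leaf labels.  `tree` maps internal node -> (left, right);
    leaves appear only as keys of `leaf_states`."""
    score = 0

    def states(node):
        nonlocal score
        if node in leaf_states:
            return {leaf_states[node]}
        left, right = tree[node]
        a, b = states(left), states(right)
        if a & b:
            return a & b
        score += 1  # disjoint child sets force a state change
        return a | b

    states("root")
    return score

# Leaves A, A, C, G: the C and G leaves each force one change.
tree = {"root": ("x", "y"), "x": ("l1", "l2"), "y": ("l3", "l4")}
leaves = {"l1": "A", "l2": "A", "l3": "C", "l4": "G"}
print(fitch_score(tree, leaves))  # -> 2
```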
The Bidimensionality Theory and Its Algorithmic Applications
- Computer Journal
, 2005
"... This paper surveys the theory of bidimensionality. This theory characterizes a broad range of graph problems (‘bidimensional’) that admit efficient approximate or fixed-parameter solutions in a broad range of graphs. These graph classes include planar graphs, map graphs, bounded-genus graphs and gra ..."
Abstract
-
Cited by 47 (3 self)
- Add to MetaCart
This paper surveys the theory of bidimensionality. This theory characterizes a broad range of graph problems (‘bidimensional’) that admit efficient approximate or fixed-parameter solutions in a broad range of graphs. These graph classes include planar graphs, map graphs, bounded-genus graphs and graphs excluding any fixed minor. In particular, bidimensionality theory builds on the Graph Minor Theory of Robertson and Seymour by extending the mathematical results and building new algorithmic tools. Here, we summarize the known combinatorial and algorithmic results of bidimensionality theory with the high-level ideas involved in their proof; we describe the previous work on which the theory is based and/or extends; and we mention several remaining open problems.