Results 1–10 of 21
Parameterized complexity and approximation algorithms
 Comput. J.
, 2006
Cited by 56 (2 self)
Approximation algorithms and parameterized complexity are usually considered to be two separate ways of dealing with hard algorithmic problems. In this paper, our aim is to investigate how these two fields can be combined to achieve better algorithms than either of the two theories could offer on its own. We discuss the different ways parameterized complexity can be extended to approximation algorithms, survey results of this type, and propose directions for future research.
Fixed-parameter approximation: Conceptual framework and approximability results
 Algorithmica
, 2010
Cited by 29 (0 self)
The notion of fixed-parameter approximation is introduced to investigate the approximability of optimization problems within the framework of fixed-parameter computation. This work partially aims at enhancing the world of fixed-parameter computation in parallel with the conventional theory of computation, which includes both exact and approximate computations. In particular, it is proved that fixed-parameter approximability is closely related to the approximation of small-cost solutions in polynomial time. It is also demonstrated that many fixed-parameter intractable problems are not fixed-parameter approximable. On the other hand, fixed-parameter approximation appears to be a viable approach to solving some inapproximable yet important optimization problems. For instance, all problems in the class MAX SNP admit fixed-parameter approximation schemes in time O(2^{O((1−ε/O(1))k)} p(n)) for any small ε > 0.
Parameterizing above or below guaranteed values
 J. of Computer and System Sciences
Cited by 29 (3 self)
We consider new parameterizations of NP-optimization problems that have nontrivial lower and/or upper bounds on their optimum solution size. The natural parameter, we argue, is the quantity above the lower bound or below the upper bound. We show that for every problem in MAX SNP, the optimum value is bounded below by an unbounded function of the input size, and that the above-guarantee parameterization with respect to this lower bound is fixed-parameter tractable. We also observe that approximation algorithms give nontrivial lower or upper bounds on the solution size, and that the above- or below-guarantee question with respect to these bounds is fixed-parameter tractable for a subclass of NP-optimization problems. We then introduce the notion of 'tight' lower and upper bounds and exhibit a number of problems for which the above-guarantee and below-guarantee parameterizations with respect to a tight bound are fixed-parameter tractable or W-hard. We show that if we parameterize "sufficiently" above or below the tight bounds, then these parameterized versions are not fixed-parameter tractable unless P = NP, for a subclass of NP-optimization problems. We also list several directions to explore in this paradigm.
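The "above guarantee" idea can be made concrete with a classic example that is standard in this line of work (the concrete formula encoding below is an illustrative assumption, not taken from the paper): every CNF formula with m nonempty clauses has an assignment satisfying at least ⌈m/2⌉ of them, so the natural parameter for Max Sat is the number of clauses satisfied *above* m/2. A minimal Python sketch of the m/2 guarantee, assuming clauses are lists of nonzero integer literals:

```python
def satisfied(clauses, assignment):
    """Count clauses satisfied by an assignment.
    A literal v > 0 means variable v is true; v < 0 means it is false."""
    return sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def half_guarantee(clauses, variables):
    """Classic m/2 lower-bound argument: every nonempty clause is satisfied
    by the all-true assignment or by the all-false one, so the better of the
    two satisfies at least ceil(m/2) clauses."""
    all_true = {v: True for v in variables}
    all_false = {v: False for v in variables}
    return max(satisfied(clauses, all_true), satisfied(clauses, all_false))
```

An "above guarantee" question in the paper's sense then asks whether some assignment satisfies at least m/2 + k clauses, with k as the parameter.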
Backdoors to satisfaction
 The Multivariate Algorithmic Revolution and Beyond - Essays Dedicated to Michael R. Fellows on the Occasion of His 60th Birthday, volume 7370 of Lecture Notes in Computer Science
Parameterized approximability of the disjoint cycle problem
 Proc. ICALP 2007, Lecture Notes in Computer Science
, 2007
Cited by 11 (0 self)
We give an fpt approximation algorithm for the directed vertex-disjoint cycle problem. Given a directed graph G with n vertices and a positive integer k, the algorithm constructs a family of at least k/ρ(k) disjoint cycles of G if the graph G has a family of at least k disjoint cycles (and otherwise may still produce a solution, or just report failure). Here ρ is a computable function such that k/ρ(k) is nondecreasing and unbounded. The running time of our algorithm is polynomial. The directed vertex-disjoint cycle problem is hard for the parameterized complexity class W[1], and to the best of our knowledge our algorithm is the first fpt approximation algorithm for a natural W[1]-hard problem. Key words: approximation algorithms, fixed-parameter tractability, parameterized complexity theory.
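For contrast with the paper's k/ρ(k) guarantee, here is a hypothetical naive baseline, not the authors' algorithm: greedily find any directed cycle, delete its vertices, and repeat. This only illustrates the problem statement; the greedy strategy carries no approximation guarantee of the kind the paper proves.

```python
def find_cycle(adj):
    """Return the vertex list of some directed cycle in adj, or None.
    adj: dict vertex -> set of out-neighbours (every endpoint is a key)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in adj}
    parent = {}
    for s in adj:
        if color[s] != WHITE:
            continue
        color[s] = GRAY
        stack = [(s, iter(adj[s]))]
        while stack:
            v, it = stack[-1]
            advanced = False
            for w in it:
                if color[w] == GRAY:  # back edge: cycle w -> ... -> v -> w
                    cycle, x = [v], v
                    while x != w:
                        x = parent[x]
                        cycle.append(x)
                    return cycle
                if color[w] == WHITE:
                    color[w], parent[w] = GRAY, v
                    stack.append((w, iter(adj[w])))
                    advanced = True
                    break
            if not advanced:
                color[v] = BLACK
                stack.pop()
    return None

def greedy_disjoint_cycles(adj):
    """Repeatedly peel off a cycle and delete its vertices (no guarantee)."""
    adj = {v: set(ws) for v, ws in adj.items()}
    cycles = []
    while (c := find_cycle(adj)) is not None:
        cycles.append(c)
        for v in c:
            adj.pop(v)
        for ws in adj.values():
            ws.difference_update(c)
    return cycles
```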
Logic, Graphs, and Algorithms
, 2007
Cited by 11 (0 self)
Algorithmic meta-theorems are algorithmic results that apply to whole families of combinatorial problems, instead of just specific problems. These families are usually defined in terms of logic and graph theory. An archetypal algorithmic meta-theorem is Courcelle's Theorem [9], which states that all graph properties definable in monadic second-order logic can be decided in linear time on graphs of bounded tree width. This article is an introduction to the theory underlying such meta-theorems and a survey of the most important results in this area.
Completely inapproximable monotone and antimonotone parameterized problems
Cited by 6 (0 self)
We prove that weighted monotone/antimonotone circuit satisfiability has no fixed-parameter tractable approximation algorithm with any approximation ratio function ρ, unless FPT = W[1]. In particular, not having such an fpt-approximation algorithm implies that these problems have no polynomial-time approximation algorithms with ratio ρ(OPT) for any nontrivial function ρ.
Sub-exponential and FPT-time inapproximability of independent set and related problems
 In IPEC
, 2013
Parameterized algorithms for boxicity
 In: Proceedings of ISAAC 2010, LNCS 6506
Cited by 4 (1 self)
In this paper we initiate an algorithmic study of Boxicity, a combinatorially well-studied graph invariant, from the viewpoint of parameterized algorithms. The boxicity of an arbitrary graph G with vertex set V(G) and edge set E(G), denoted by box(G), is the minimum number of interval graphs on the same set of vertices such that the intersection of the edge sets of the interval graphs is E(G). In the Boxicity problem we are given a graph G together with a positive integer k, and asked whether box(G) is at most k. The problem is notoriously hard: it is known to be NP-complete even to determine whether the boxicity of a graph is at most two. This rules out any possibility of having an algorithm with running time |V(G)|^{O(f(k))}, where f is an arbitrary function depending on k alone (unless P = NP). Hence we look for other structural parameters, like "vertex cover number" and "max leaf number", and see their effect on the problem complexity.
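An equivalent reading of the definition, which is where the name comes from: box(G) ≤ k iff G is the intersection graph of axis-parallel boxes in R^k, each box being the product of one interval per interval graph. A minimal sketch of that check (the C4 box representation below is an illustrative example, not from the paper):

```python
from itertools import combinations

def intervals_overlap(a, b):
    """Closed intervals (lo, hi) intersect iff each starts before the other ends."""
    return a[0] <= b[1] and b[0] <= a[1]

def box_intersection_graph(boxes):
    """Edge set of the intersection graph of axis-parallel boxes.
    boxes: dict vertex -> tuple of k intervals (one per dimension).
    Two boxes intersect iff their intervals overlap in every dimension."""
    return {
        (u, v)
        for u, v in combinations(sorted(boxes), 2)
        if all(intervals_overlap(iu, iv) for iu, iv in zip(boxes[u], boxes[v]))
    }

# The 4-cycle 0-1-2-3-0 drawn as four boxes forming a square frame in R^2;
# C4 is not an interval graph, so one dimension would not suffice (boxicity 2).
c4 = {
    0: ((0, 3), (0, 1)),  # bottom side
    1: ((2, 3), (0, 3)),  # right side
    2: ((0, 3), (2, 3)),  # top side
    3: ((0, 1), (0, 3)),  # left side
}
```

Running `box_intersection_graph(c4)` recovers exactly the four cycle edges, and omits the two diagonals.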
Parameterized Approximation via Fidelity Preserving Transformations
 Proc. of ICALP (1) 2012
, 2012
Cited by 3 (0 self)
We motivate and describe a new parameterized approximation paradigm which studies the interaction between performance ratio and running time for any parameterization of a given optimization problem. As a key tool, we introduce the concept of an α-shrinking transformation, for α ≥ 1. Applying such a transformation to a parameterized problem instance decreases the parameter value, while preserving an approximation ratio of α (or α-fidelity). For example, it is well known that Vertex Cover cannot be approximated within any constant factor better than 2 [22] (under usual assumptions). Our parameterized α-approximation algorithm for k-Vertex Cover, parameterized by the solution size, has a running time of 1.273^{(2−α)k}, where the running time of the best FPT algorithm is 1.273^k [10]. Our algorithms define a continuous tradeoff between running times and approximation ratios, allowing practitioners to appropriately allocate computational resources. Moving even beyond the performance ratio, we call for a new type of approximative kernelization race. Our α-shrinking transformations can be used to obtain kernels which are smaller than the best known for a given problem. For the Vertex Cover problem we obtain a kernel size of 2(2 − α)k. The smaller "α-fidelity" kernels allow us to solve problem instances exactly more efficiently, while obtaining an approximate solution for the original instance. We show that such transformations exist for several fundamental problems, including Vertex Cover, d-Hitting Set, Connected Vertex Cover and Steiner Tree. We note that most of our algorithms are easy to implement and are therefore practical in use.
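To see the tradeoff quantitatively, one can plug a few values of α into the two bounds stated in the abstract (illustrative arithmetic only; the base 1.273 and the (2 − α)k exponent are taken from the text above):

```python
def exact_time(k, base=1.273):
    """Exponential part of the exact FPT bound, 1.273^k, for k-Vertex Cover."""
    return base ** k

def alpha_approx_time(k, alpha, base=1.273):
    """Exponential part of the alpha-approximate bound, 1.273^((2 - alpha) k)."""
    assert 1.0 <= alpha <= 2.0
    return base ** ((2.0 - alpha) * k)

# For k = 60: alpha = 1 recovers the exact bound, while alpha = 2 matches the
# trivial factor-2 approximation and the exponential part collapses to 1.
for alpha in (1.0, 1.5, 2.0):
    print(alpha, alpha_approx_time(60, alpha))
```

At α = 1.5 the exponent halves, which is the "continuous tradeoff" the abstract describes: every reduction in exactness buys an exponential reduction in running time.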