Results 1–10 of 323
On problems without polynomial kernels
 Lect. Notes Comput. Sci.
, 2007
Abstract

Cited by 65 (9 self)
Abstract. Kernelization is a strong and widely-applied technique in parameterized complexity. In a nutshell, a kernelization algorithm, or simply a kernel, is a polynomial-time transformation that transforms any given parameterized instance to an equivalent instance of the same problem, with size and parameter bounded by a function of the parameter in the input. A kernel is polynomial if the size and parameter of the output are polynomially bounded by the parameter of the input. In this paper we develop a framework which allows showing that a wide range of FPT problems do not have polynomial kernels. Our evidence relies on hypotheses made in the classical world (i.e. non-parametric complexity), and revolves around a new type of algorithm for classical decision problems, called a distillation algorithm, which might be of independent interest. Using the notion of distillation algorithms, we develop a generic lower-bound engine which allows us to show that a variety of FPT problems, fulfilling certain criteria, cannot have polynomial kernels unless the polynomial hierarchy collapses. These problems include k-Path, k-Cycle, k-Exact Cycle, k-Short Cheap Tour, k-Graph Minor Order Test, k-Cutwidth, k-Search Number, k-Pathwidth, k-Treewidth, k-Branchwidth, and several optimization problems parameterized by treewidth or clique-width.
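To make the definition quoted above concrete, here is a minimal sketch (our own illustration, not from the paper) of the classic Buss kernelization for k-Vertex Cover, which produces an equivalent instance with at most k² edges:

```python
def vertex_cover_kernel(edges, k):
    """Buss kernel for k-Vertex Cover: reduce (G, k) to an equivalent
    instance with at most k^2 edges, or return (None, 0) for a no-instance."""
    edges = set(map(frozenset, edges))
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        # Reduction rule: a vertex of degree > k must be in every cover
        # of size <= k, so take it into the cover and decrease the budget.
        high = next((v for v, d in degree.items() if d > k), None)
        if high is not None:
            edges = {e for e in edges if high not in e}
            k -= 1
            changed = True
    if k < 0 or len(edges) > k * k:
        return None, 0  # provably a no-instance
    return edges, k  # kernel: max degree <= k, so at most k^2 edges remain
```

For instance, on a triangle with k = 1 the rule never fires and the edge count exceeds k², so the kernel correctly reports a no-instance.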
Subexponential Parameterized Algorithms on Graphs of Bounded Genus and H-Minor-Free Graphs
, 2003
Abstract

Cited by 42 (12 self)
We introduce a new framework for designing fixed-parameter algorithms with subexponential running time. Our results apply to a broad family of graph problems, called bidimensional problems, which includes many domination and covering problems such as vertex cover, feedback vertex set, minimum maximal matching, dominating set, edge dominating set, clique-transversal set, and many others restricted to bounded-genus graphs. Furthermore, it is fairly straightforward to prove that a problem is bidimensional. In particular, our framework includes as special cases all problems previously known to have such subexponential algorithms. Previously, these algorithms applied to planar graphs, single-crossing-minor-free graphs, and/or map graphs; we extend these results to apply to bounded-genus graphs as well. In a parallel development of combinatorial results, we establish an upper bound on the treewidth (or branchwidth) of a bounded-genus graph that excludes some planar graph H as a minor. This bound depends linearly on the size |V(H)| of the excluded graph H and the genus g(G) of the graph G, and applies to and extends the graph-minors work of Robertson and Seymour. Building on these results...
Infeasibility of instance compression and succinct PCPs for NP
 Electronic Colloquium on Computational Complexity (ECCC)
Abstract

Cited by 33 (1 self)
The OR-SAT problem asks, given Boolean formulae φ1, ..., φm each of size at most n, whether at least one of the φi's is satisfiable. We show that there is no reduction from OR-SAT to any set A where the length of the output is bounded by a polynomial in n, unless NP ⊆ coNP/poly, and the Polynomial-Time Hierarchy collapses. This result settles an open problem proposed by Bodlaender et al. [4] and Harnik and Naor [15] and has a number of implications.
• A number of parametric NP problems, including Satisfiability, Clique, Dominating Set and Integer Programming, are not instance compressible or polynomially kernelizable unless NP ⊆ coNP/poly.
• Satisfiability does not have PCPs of size polynomial in the number of variables unless NP ⊆ coNP/poly.
• An approach of Harnik and Naor to constructing collision-resistant hash functions from one-way functions is unlikely to be viable in its present form.
• (Buhrman-Hitchcock) There are no subexponential-size hard sets for NP unless NP is in coNP/poly.
We also study probabilistic variants of compression, and show various results about and connections between these variants. To this end, we introduce a new strong derandomization hypothesis, the Oracle Derandomization Hypothesis, and discuss how it relates to traditional derandomization assumptions.
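The OR-SAT predicate itself is easy to state in code; the following brute-force sketch (our own encoding of clauses as lists of signed integers, small instances only) just makes the definition explicit. Note the abstract's theorem is about compressing m such formulas, not deciding them:

```python
from itertools import product

def satisfiable(cnf, n_vars):
    """Brute-force CNF satisfiability; a clause is a list of nonzero ints,
    where literal v is true iff variable |v| gets the value sign(v) > 0."""
    for assign in product([False, True], repeat=n_vars):
        if all(any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
               for clause in cnf):
            return True
    return False

def or_sat(formulas, n_vars):
    """OR-SAT: is at least one of the given CNF formulas satisfiable?"""
    return any(satisfiable(f, n_vars) for f in formulas)
```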
Locally excluding a minor
, 2007
Abstract

Cited by 33 (12 self)
We introduce the concept of locally excluded minors. Graph classes locally excluding a minor generalise the concept of excluded minor classes but also of graph classes with bounded local treewidth and graph classes with bounded expansion. We show that first-order model-checking is fixed-parameter tractable on any class of graphs locally excluding a minor. This strictly generalises analogous results by Flum and Grohe on excluded minor classes and Frick and Grohe on classes with bounded local treewidth. As an important consequence of the proof we obtain fixed-parameter algorithms for problems such as dominating or independent set on graph classes excluding a minor, where now the parameter is the size of the dominating set and the excluded minor. We also study graph classes with excluded minors, where the minor may grow slowly with the size of the graphs, and show that again, first-order model-checking is fixed-parameter tractable on any such class of graphs.
Complexity of Consistent Query Answering in Databases under Cardinality-Based and Incremental Repair Semantics
 In ICDT
, 2007
Abstract

Cited by 30 (9 self)
Abstract. Consistent Query Answering (CQA) is the problem of computing from a database the answers to a query that are consistent with respect to certain integrity constraints that the database, as a whole, may fail to satisfy. Consistent answers have been characterized as those that are invariant under certain minimal forms of restoration of the database consistency. In this paper we investigate algorithmic and complexity-theoretic issues of CQA under database repairs that minimally depart, wrt the cardinality of the symmetric difference, from the original database. Research on this kind of repairs has been suggested in the literature, but no systematic study had been done. Here we obtain the first tight complexity bounds. We also address, considering for the first time a dynamic scenario for CQA, the problem of incremental complexity of CQA, which naturally occurs when an originally consistent database becomes inconsistent after the execution of a sequence of update operations. Tight bounds on incremental complexity are provided for various semantics under denial constraints, e.g. (a) minimum tuple-based repairs wrt cardinality, (b) minimal tuple-based repairs wrt set inclusion, and (c) minimum numerical aggregation of attribute-based repairs. Fixed-parameter tractability is also investigated in this dynamic context, where the size of the update sequence becomes the relevant parameter.
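As a toy illustration of cardinality-based repairs (our own encoding, not the paper's formalism): under a single key constraint, the tuple-deletion repairs are the maximum-cardinality subsets with pairwise distinct key values, and the consistent answers are the tuples that survive in every repair:

```python
from itertools import combinations

def repairs_by_deletion(tuples, key_index):
    """All maximum-cardinality subsets whose key values are pairwise distinct."""
    for r in range(len(tuples), -1, -1):
        found = []
        for subset in combinations(tuples, r):
            keys = [t[key_index] for t in subset]
            if len(keys) == len(set(keys)):
                found.append(set(subset))
        if found:
            return found  # all repairs share the same (maximum) cardinality
    return [set()]

def consistent_answers(tuples, key_index, query):
    """A tuple is a consistent answer iff it satisfies `query` in every repair."""
    reps = repairs_by_deletion(tuples, key_index)
    candidates = set.intersection(*reps)
    return {t for t in candidates if query(t)}
```

For example, if two tuples share the key "a", each repair keeps exactly one of them, so neither is a consistent answer, while an unconflicted tuple is.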
A more effective linear kernelization for Cluster Editing
 Theor. Comput. Sci.
, 2009
Abstract

Cited by 30 (7 self)
Abstract. In the NP-hard Cluster Editing problem, we have as input an undirected graph G and an integer k ≥ 0. The question is whether we can transform G, by inserting and deleting at most k edges, into a cluster graph, that is, a union of disjoint cliques. We first confirm a conjecture by Michael Fellows [IWPEC 2006] that there is a polynomial-time kernelization for Cluster Editing that leads to a problem kernel with at most 6k vertices. More precisely, we present a cubic-time algorithm that, given a graph G and an integer k ≥ 0, finds a graph G′ and an integer k′ ≤ k such that G can be transformed into a cluster graph by at most k edge modifications iff G′ can be transformed into a cluster graph by at most k′ edge modifications, and the problem kernel G′ has at most 6k vertices. So far, only a problem kernel of 24k vertices was known. Second, we show that this bound for the number of vertices of G′ can be further improved to 4k. Finally, we consider the variant of Cluster Editing where the number of cliques that the cluster graph can contain is stipulated to be a constant d > 0. We present a simple kernelization for this variant leaving a problem kernel of at most (d + 2)k + d vertices.
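A useful fact behind such kernelizations (our own illustrative sketch, not the paper's algorithm): a graph is a cluster graph exactly when it contains no induced P3 (path on three vertices), so the obstructions that reduction rules act on can be found directly:

```python
from itertools import combinations

def find_p3(vertices, edges):
    """Return an induced P3 (u, v, w) with uv, vw edges and uw a non-edge,
    or None if the graph is already a union of disjoint cliques."""
    E = {frozenset(e) for e in edges}
    for v in vertices:
        nbrs = [u for u in vertices if frozenset((u, v)) in E]
        for u, w in combinations(nbrs, 2):
            if frozenset((u, w)) not in E:
                return (u, v, w)  # v is the middle vertex of the P3
    return None
```

Each edit that destroys such an obstruction either inserts the edge uw or deletes uv or vw, which is why the budget k bounds the kernel size.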
Parameterized complexity and approximation algorithms
 Comput. J.
, 2006
Abstract

Cited by 26 (1 self)
Approximation algorithms and parameterized complexity are usually considered to be two separate ways of dealing with hard algorithmic problems. In this paper, our aim is to investigate how these two fields can be combined to achieve better algorithms than either theory could offer on its own. We discuss the different ways parameterized complexity can be extended to approximation algorithms, survey results of this type, and propose directions for future research.
The discrete basis problem
, 2005
Abstract

Cited by 25 (9 self)
We consider the Discrete Basis Problem, which can be described as follows: given a collection of Boolean vectors, find a collection of k Boolean basis vectors such that the original vectors can be represented using disjunctions of these basis vectors. We show that the decision version of this problem is NP-complete and that the optimization version cannot be approximated within any finite ratio. We also study two variations of this problem, where the Boolean basis vectors must be mutually orthogonal. We show that one of these variations is closely related to the well-known Metric k-Median Problem in Boolean space. To solve these problems, two algorithms will be presented. One is designed for the variations mentioned above, and it is solely based on solving the k-median problem, while the other is a heuristic intended to solve the general Discrete Basis Problem. We will also study the results of extensive experiments made with these two algorithms with both synthetic and real-world data. The results are twofold: with the synthetic data the algorithms did rather well, but with the real-world data the results were not as good.
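The representation the problem asks for is just a componentwise OR of chosen basis vectors; this small sketch (our own 0/1 encoding, not the thesis' algorithm) spells out what "represented using disjunctions" means:

```python
def or_combine(basis_vectors):
    """Componentwise OR of a list of equal-length 0/1 vectors."""
    return [int(any(bits)) for bits in zip(*basis_vectors)]

def represents(data_vector, basis, selection):
    """Can `data_vector` be written as the OR of the selected basis vectors?"""
    chosen = [basis[i] for i in selection]
    return or_combine(chosen) == list(data_vector)
```

The Discrete Basis Problem then asks for k basis vectors such that every data vector passes this check for some selection (or, in the optimization version, with as few errors as possible).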
Incompressibility through Colors and IDs
Abstract

Cited by 24 (5 self)
In parameterized complexity each problem instance comes with a parameter k, and the parameterized problem is said to admit a polynomial kernel if there are polynomial-time preprocessing rules that reduce the input instance down to an instance with size polynomial in k. Many problems have been shown to admit polynomial kernels, but it is only recently that a framework for showing the non-existence of polynomial kernels for specific problems has been developed by Bodlaender et al. [6] and Fortnow and Santhanam [15]. With few exceptions, all known kernelization lower bound results have been obtained by directly applying this framework. In this paper we show how to combine these results with combinatorial reductions which use colors and IDs in order to prove kernelization lower bounds for a variety of basic problems. Below we give a summary of our main results. All our results are under the assumption that the polynomial hierarchy does not collapse to the third level.
• We show that the Steiner Tree problem parameterized by the number of terminals and solution size, and the Connected Vertex Cover and Capacitated Vertex Cover problems, do not admit a polynomial kernel. The latter two results are surprising because the closely related Vertex Cover problem admits a kernel of size 2k.