Results 1 - 10 of 47
Cross-Composition: A New Technique for Kernelization Lower Bounds
, 2011
Abstract

Cited by 40 (7 self)
We introduce a new technique for proving kernelization lower bounds, called cross-composition. A classical problem L cross-composes into a parameterized problem Q if an instance of Q with polynomially bounded parameter value can express the logical OR of a sequence of instances of L. Building on work by Bodlaender et al. (ICALP 2008) and using a result by Fortnow and Santhanam (STOC 2008), we show that if an NP-hard problem cross-composes into a parameterized problem Q, then Q does not admit a polynomial kernel unless the polynomial hierarchy collapses. Our technique generalizes and strengthens the recent techniques of using OR-composition algorithms and of transferring lower bounds via polynomial parameter transformations. We show its applicability by proving kernelization lower bounds for a number of important graph problems with structural (non-standard) parameterizations: e.g., Chromatic Number, Clique, and Weighted Feedback Vertex Set do not admit polynomial kernels with respect to the vertex cover number of the input graph unless the polynomial hierarchy collapses, contrasting the fact that these problems are trivially fixed-parameter tractable for this parameter. We have similar lower bounds for Feedback Vertex Set.
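The OR idea behind (cross-)composition has a classic concrete instance: for Clique parameterized by solution size k, the disjoint union of t input graphs, all sharing the same k, is a yes-instance exactly when some input is, and the parameter does not grow with t. A minimal sketch (function names are illustrative, and the brute-force clique check is only there to verify the OR property on toy inputs):

```python
from itertools import combinations

def has_clique(edges, n, k):
    """Brute-force check for a clique of size k (tiny graphs only)."""
    e = set(map(frozenset, edges))
    return any(all(frozenset(p) in e for p in combinations(c, 2))
               for c in combinations(range(n), k))

def or_compose(instances, k):
    """Disjoint union of graphs: the textbook OR-composition for k-Clique.
    All inputs share parameter k; a k-clique exists in the union iff one
    exists in some input, and the parameter k is unchanged."""
    edges, offset = [], 0
    for es, n in instances:
        edges += [(u + offset, v + offset) for u, v in es]
        offset += n
    return edges, offset, k
```

Note that a cross-composition in the papers' sense must compose arbitrarily many instances into one whose *parameter* stays polynomially bounded; the disjoint union achieves this trivially because k never changes.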
Kernelization of Packing Problems
, 2011
Abstract

Cited by 20 (2 self)
Kernelization algorithms are polynomial-time reductions from a problem to itself that guarantee their output to have a size not exceeding some bound. For example, d-Set Matching for integers d ≥ 3 is the problem of finding a matching of size at least k in a given d-uniform hypergraph, and it has kernels with O(k^d) edges. Recently, Bodlaender et al. [ICALP 2008], Fortnow and Santhanam [STOC 2008], and Dell and Van Melkebeek [STOC 2010] developed a framework for proving lower bounds on the kernel size for certain problems, under the complexity-theoretic hypothesis that coNP is not contained in NP/poly. Under the same hypothesis, we show lower bounds for the kernelization of d-Set Matching and other packing problems. Our bounds are tight for d-Set Matching: it does not have kernels with O(k^{d−ε}) edges for any ε > 0 unless the hypothesis fails. By reduction, this transfers to a bound of O(k^{d−1−ε}) for the problem of finding k vertex-disjoint cliques of size d in standard graphs. It is natural to ask for tight bounds on the kernel sizes of such graph packing problems. We make first progress in that direction by showing nontrivial kernels with O(k^{2.5}) edges for the problem of finding k vertex-disjoint paths of three edges each. This does not quite match the best lower bound of O(k^{2−ε}) that we can prove. Most of our lower bound proofs follow a general scheme that we discover: to exclude kernels of size O(k^{d−ε}) for a problem in d-uniform hypergraphs, one should reduce from a carefully chosen d-partite problem that is still NP-hard. As an illustration, we apply this scheme to the vertex cover problem, which allows us to replace the number-theoretic construction by Dell and Van Melkebeek [STOC 2010] with shorter elementary arguments.
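To make the problem statement concrete: a "matching of size k" in a hypergraph is a collection of k pairwise-disjoint hyperedges. A brute-force checker (purely illustrative; exponential in k, and unrelated to the kernelization results above):

```python
from itertools import combinations

def d_set_matching(hyperedges, k):
    """Does this hypergraph contain k pairwise-disjoint hyperedges?
    Exhaustive search over all k-subsets of edges; toy instances only."""
    return any(all(not (a & b) for a, b in combinations(m, 2))
               for m in combinations(hyperedges, k))
```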
New Limits to Classical and Quantum Instance Compression
 ELECTRONIC COLLOQUIUM ON COMPUTATIONAL COMPLEXITY, REPORT NO. 112
, 2012
Abstract

Cited by 19 (0 self)
Given an instance of a hard decision problem, a limited goal is to compress that instance into a smaller, equivalent instance of a second problem. As one example, consider the problem where, given Boolean formulas ψ1, ..., ψt, we must determine if at least one ψj is satisfiable. An OR-compression scheme for SAT is a polynomial-time reduction R that maps (ψ1, ..., ψt) to a string z, such that z lies in some "target" language L′ if and only if ∨j [ψj ∈ SAT] holds. (Here, L′ can be arbitrarily complex.) AND-compression schemes are defined similarly. A compression scheme is strong if |z| is polynomially bounded in n = maxj |ψj|, independent of t. Strong compression for SAT seems unlikely. Work of Harnik and Naor (FOCS '06/SICOMP '10) and Bodlaender, Downey, Fellows, and Hermelin (ICALP '08/JCSS '09) showed that the infeasibility of strong OR-compression for SAT would show limits to instance compression for a large number of natural problems. Bodlaender et al. also showed that the infeasibility of strong AND-compression for SAT would have consequences for a different list of problems. Motivated by this, Fortnow and Santhanam (STOC '08/JCSS '11) showed that if SAT is strongly OR-compressible, ...
Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal
Abstract

Cited by 19 (4 self)
The Odd Cycle Transversal problem (OCT) asks whether a given graph can be made bipartite by deleting at most k of its vertices. In a breakthrough result, Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an O(4^k · kmn) time algorithm for it, the first algorithm with polynomial runtime of uniform degree for every fixed k. It is known that this implies a polynomial-time compression algorithm that turns OCT instances into equivalent instances of size at most O(4^k), a so-called kernelization. Since then, the existence of a polynomial kernel for OCT, i.e., a kernelization with size bounded polynomially in k, has turned into one of the main open questions in the study of kernelization. Despite the impressive progress in the area, including the recent development of lower bound techniques (Bodlaender ...
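The decision problem itself is easy to state in code. A brute-force sketch (for intuition only: it tries every candidate deletion set, with none of the iterative-compression machinery behind the O(4^k · kmn) algorithm):

```python
from itertools import combinations

def is_bipartite(adj, verts):
    """2-color the subgraph induced on `verts` via depth-first search."""
    color = {}
    for s in verts:
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in verts:
                    continue            # deleted vertex: ignore
                if v not in color:
                    color[v] = 1 - color[u]
                    stack.append(v)
                elif color[v] == color[u]:
                    return False        # odd cycle found
    return True

def odd_cycle_transversal(edges, n, k):
    """Can deleting at most k vertices make the graph bipartite?
    Brute force over all deletion sets; fine only for toy instances."""
    adj = {u: set() for u in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return any(is_bipartite(adj, set(range(n)) - set(S))
               for r in range(k + 1)
               for S in combinations(range(n), r))
```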
Weak Compositions and Their Applications to Polynomial Lower Bounds for Kernelization
Abstract

Cited by 18 (2 self)
Abstract. We introduce a new form of composition called weak composition that allows us to obtain polynomial kernelization lower bounds for several natural parameterized problems. Let d ≥ 2 be some constant and let L1, L2 ⊆ {0, 1}* × N be two parameterized problems where the unparameterized version of L1 is NP-hard. Assuming coNP ⊈ NP/poly, our framework essentially states that composing t L1-instances, each with parameter k, into an L2-instance with parameter k′ ≤ t^{1/d} · k^{O(1)} implies that L2 does not have a kernel of size O(k^{d−ε}) for any ε > 0. We show two examples of weak composition and derive polynomial kernelization lower bounds for d-Bipartite Regular Perfect Code and d-Dimensional Matching, parameterized by the solution size k. By reduction, using linear parameter transformations, we then derive the following lower bounds for kernel sizes when the parameter is the solution size k (assuming coNP ⊈ NP/poly): – d-Set Packing, d-Set Cover, d-Exact Set Cover, Hitting Set with d-Bounded Occurrences, and Exact Hitting Set with d-Bounded Occurrences have no kernels of size O(k^{d−3−ε}) for any ε > 0. – Kd Packing and Induced K1,d Packing have no kernels of size O(k^{d−4−ε}) for any ε > 0. – d-Red-Blue Dominating Set and d-Steiner Tree have no kernels of sizes O(k^{d−3−ε}) and ...
Hitting forbidden minors: Approximation and kernelization
 IN PROCEEDINGS OF THE 8TH INTERNATIONAL SYMPOSIUM ON THEORETICAL ASPECTS OF COMPUTER SCIENCE (STACS 2011)
Abstract

Cited by 12 (6 self)
We study a general class of problems called F-Deletion problems. In an F-Deletion problem, we are asked whether a subset of at most k vertices can be deleted from a graph G such that the resulting graph does not contain as a minor any graph from the family F of forbidden minors. We obtain a number of algorithmic results on the F-Deletion problem when F contains a planar graph. We give
• a linear vertex kernel on graphs excluding the t-claw K1,t, the star with t leaves, as an induced subgraph, where t is a fixed integer;
• an approximation algorithm achieving an approximation ratio of O(log^{3/2} OPT), where OPT is the size of an optimal solution on general undirected graphs.
Finally, we obtain polynomial kernels for the case when F contains the graph θc as a minor for a fixed integer c. The graph θc consists of two vertices connected by c parallel edges. Even though this may appear to be a very restricted class of problems, it already encompasses well-studied problems such as Vertex Cover, Feedback Vertex Set, and Diamond Hitting Set. The generic kernelization algorithm is based on a non-trivial application of protrusion techniques, previously used only for problems on topological graph classes.
Vertex cover kernelization revisited: Upper and lower bounds for a refined parameter
 CoRR
Abstract

Cited by 11 (2 self)
Kernelization is a concept that enables the formal mathematical analysis of data reduction through the framework of parameterized complexity. Intensive research into the Vertex Cover problem has shown that there is a preprocessing algorithm which, given an instance (G, k) of Vertex Cover, outputs an equivalent instance (G′, k′) in polynomial time with the guarantee that G′ has at most 2k′ vertices (and thus O((k′)^2) edges) with k′ ≤ k. Using the terminology of parameterized complexity, we say that k-Vertex Cover has a kernel with 2k vertices. There is complexity-theoretic evidence that both 2k vertices and Θ(k^2) edges are optimal for the kernel size. In this paper we consider the Vertex Cover problem with a different parameter, the size fvs(G) of a minimum feedback vertex set for G. This refined parameter is structurally smaller than the parameter k associated to the vertex covering number vc(G), since fvs(G) ≤ vc(G) and the difference can be arbitrarily large. We give a kernel for Vertex Cover with a number of vertices that is cubic in fvs(G): an instance (G, X, k) of Vertex Cover, where X is a feedback vertex set for G, can be transformed in polynomial time into an equivalent instance (G′, X′, k′) such that k′ ≤ k, |X′| ≤ |X|, and, most importantly, |V(G′)| ≤ 2k and |V(G′)| ∈ O(|X′|^3). A similar result holds when the feedback vertex set X is not given along with the input. In sharp contrast, we show that the Weighted Vertex Cover problem does not have a polynomial kernel when parameterized by fvs(G) unless the polynomial hierarchy collapses to the third level (PH = Σ^p_3). Our work is one of the first examples of research in kernelization using a non-standard parameter, and shows that this approach can yield interesting computational insights. To obtain our results we make extensive use of the combinatorial structure of independent sets in forests.
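The 2k-vertex kernel cited above rests on LP/crown-decomposition machinery, but the general idea of a kernelization can be illustrated with the much simpler classical Buss rules, which shrink a k-Vertex Cover instance to O(k^2) edges in polynomial time. A sketch (not the algorithm of this paper):

```python
def buss_kernel(edges, k):
    """Classical Buss kernelization for k-Vertex Cover.
    Returns an equivalent reduced instance (edges', k'), or None if the
    input is provably a no-instance.
    Rule 1: a vertex of degree > k must be in every cover of size <= k.
    Rule 2: a graph with max degree <= k and > k^2 edges has no such cover."""
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        deg = {}
        for e in edges:
            for u in e:
                deg[u] = deg.get(u, 0) + 1
        for u, d in deg.items():
            if d > k:                       # Rule 1: put u in the cover
                edges = {e for e in edges if u not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:         # Rule 2
        return None
    return edges, k
```

The output has at most k^2 edges, so its size depends only on the parameter, which is exactly what a kernel is; the 2k-vertex and O(fvs^3)-vertex bounds discussed in the abstract need considerably more structure.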
Co-nondeterminism in compositions: A kernelization lower bound for a Ramsey-type problem
, 2012
Abstract

Cited by 10 (2 self)
Until recently, techniques for obtaining lower bounds for kernelization were among the most sought-after tools in the field of parameterized complexity. Now, after a strong influx of techniques, we are in the fortunate situation of having tools available that are even stronger than what has been required in their applications so far. Based on a result of Fortnow and Santhanam (STOC 2008, JCSS 2011), Bodlaender et al. (ICALP 2008, JCSS 2009) showed that, unless NP ⊆ coNP/poly, the existence of a deterministic polynomial-time composition algorithm, i.e., an algorithm which outputs an instance of bounded parameter value which is yes if and only if one of t input instances is yes, rules out the existence of polynomial kernels for a problem. Dell and van Melkebeek (STOC 2010) continued this line ...