Results 11–20 of 627
Complexity of Consistent Query Answering in Databases under Cardinality-Based and Incremental Repair Semantics
In ICDT, 2007
Cited by 41 (12 self)
Abstract. Consistent Query Answering (CQA) is the problem of computing from a database the answers to a query that are consistent with respect to certain integrity constraints that the database, as a whole, may fail to satisfy. Consistent answers have been characterized as those that are invariant under certain minimal forms of restoration of the database consistency. In this paper we investigate algorithmic and complexity-theoretic issues of CQA under database repairs that minimally depart, wrt the cardinality of the symmetric difference, from the original database. Research on this kind of repairs has been suggested in the literature, but no systematic study had been done. Here we obtain the first tight complexity bounds. We also address, considering for the first time a dynamic scenario for CQA, the problem of incremental complexity of CQA, which naturally occurs when an originally consistent database becomes inconsistent after the execution of a sequence of update operations. Tight bounds on incremental complexity are provided for various semantics under denial constraints, e.g. (a) minimum tuple-based repairs wrt cardinality, (b) minimal tuple-based repairs wrt set inclusion, and (c) minimum numerical aggregation of attribute-based repairs. Fixed-parameter tractability is also investigated in this dynamic context, where the size of the update sequence becomes the relevant parameter.
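To make the repair semantics concrete, here is a small brute-force sketch: for a single denial constraint (a key constraint over a made-up toy relation; the names, values, and query are illustrative, not from the paper), it enumerates the cardinality-minimal repairs and intersects the query answers over them.

```python
from itertools import combinations

# Toy relation Emp(name, salary) with the denial constraint
# "no two tuples may share a name" (a key constraint).
db = [("ann", 50), ("ann", 60), ("bob", 40)]

def consistent(instance):
    """Check the denial constraint: names are unique."""
    names = [t[0] for t in instance]
    return len(names) == len(set(names))

def cardinality_repairs(db):
    """All consistent sub-instances whose symmetric difference with db
    (here: the set of deleted tuples) has minimum cardinality."""
    best, repairs = len(db) + 1, []
    for r in range(len(db), -1, -1):          # larger subsets = fewer deletions
        for sub in combinations(db, r):
            if consistent(sub):
                deleted = len(db) - r
                if deleted < best:
                    best, repairs = deleted, [set(sub)]
                elif deleted == best:
                    repairs.append(set(sub))
    return repairs

def consistent_answers(db, query):
    """Tuples in the query answer on every cardinality-minimal repair."""
    answers = [query(rep) for rep in cardinality_repairs(db)]
    return set.intersection(*answers)

# Query: all names. "bob" survives every repair; so does "ann", since
# every minimal repair keeps exactly one of the two "ann" tuples.
print(consistent_answers(db, lambda inst: {t[0] for t in inst}))
```

The exponential enumeration is of course only for illustration; the paper's point is precisely the complexity of avoiding it.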
Improving Exhaustive Search Implies Superpolynomial Lower Bounds
2009
Cited by 40 (7 self)
The P vs NP problem arose from the question of whether exhaustive search is necessary for problems with short verifiable solutions. We do not know if even a slight algorithmic improvement over exhaustive search is universally possible for all NP problems, and to date no major consequences have been derived from the assumption that an improvement exists. We show that there are natural NP and BPP problems for which minor algorithmic improvements over the trivial deterministic simulation already entail lower bounds such as NEXP ⊄ P/poly and LOGSPACE ≠ NP. These results are especially interesting given that similar improvements have been found for many other hard problems. Optimistically, one might hope our results suggest a new path to lower bounds; pessimistically, they show that carrying out the seemingly modest program of finding slightly better algorithms for all search problems may be extremely difficult (if not impossible). We also prove unconditional superpolynomial time-space lower bounds for improving on exhaustive search: there is a problem verifiable with k(n)-length witnesses in O(n^a) time (for some a and some function k(n) ≤ n) that cannot be solved in k(n)^c · n^{a+o(1)} time and k(n)^c · n^{o(1)} space, for every c ≥ 1. While such problems can always be solved by exhaustive search in O(2^{k(n)} · n^a) time and O(k(n) + n^a) space, we can prove a superpolynomial lower bound in the parameter k(n) when space usage is restricted.
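The "trivial deterministic simulation" the abstract contrasts against can be sketched directly: try all 2^k witnesses against a polynomial-time verifier. The subset-sum verifier and instance below are illustrative stand-ins, not from the paper.

```python
from itertools import product

def verify(instance, witness):
    """Polynomial-time verifier: does the 0/1 witness select a subset
    of the numbers summing to the target?"""
    nums, target = instance
    return sum(x for x, bit in zip(nums, witness) if bit) == target

def exhaustive_search(instance, k):
    """O(2^k * poly(n)) time, O(k + poly(n)) space: one witness at a time."""
    for witness in product([0, 1], repeat=k):
        if verify(instance, witness):
            return witness
    return None

inst = ([3, 5, 7, 11], 12)            # is some subset summing to 12?
print(exhaustive_search(inst, 4))     # finds (0, 1, 1, 0): 5 + 7 = 12
```

The paper asks what follows if this 2^k factor can be beaten even slightly, for every such verifier.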
Width parameters beyond treewidth and their applications
In Computer Journal, 2007
Cited by 40 (0 self)
Besides the very successful concept of treewidth (see [Bodlaender, H. and Koster, A. (2007) Combinatorial optimisation on graphs of bounded treewidth. These are special issues on Parameterized Complexity]), many concepts and parameters measuring the similarity or dissimilarity of structures compared to trees have been introduced and studied over the past years. These concepts and parameters have proved to be useful tools in many applications, especially in the design of efficient algorithms. Our novel look at the contemporary developments of these 'width' parameters in combinatorial structures covers, besides traditional treewidth and the dynamic programming schemes derived from it, a number of other useful parameters such as branchwidth, rankwidth (cliquewidth), and hypertreewidth. In this contribution, we demonstrate how 'width' parameters of graphs and generalized structures (such as matroids or hypergraphs) can be used to improve the design of parameterized algorithms and the structural analysis in other applications on an abstract level.
The discrete basis problem
2005
Cited by 38 (13 self)
We consider the Discrete Basis Problem, which can be described as follows: given a collection of Boolean vectors, find a collection of k Boolean basis vectors such that the original vectors can be represented using disjunctions of these basis vectors. We show that the decision version of this problem is NP-complete and that the optimization version cannot be approximated within any finite ratio. We also study two variations of this problem; in one of them, the Boolean basis vectors must be mutually orthogonal. We show that the other variation is closely related to the well-known Metric k-Median Problem in Boolean space. To solve these problems, two algorithms are presented. One is designed for the variations mentioned above and is based solely on solving the k-median problem, while the other is a heuristic intended to solve the general Discrete Basis Problem. We also report the results of extensive experiments made with these two algorithms on both synthetic and real-world data. The results are twofold: with the synthetic data the algorithms did rather well, but with the real-world data the results were not as good.
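As a hedged sketch of what a solution to the Discrete Basis Problem must satisfy: each original vector equals the elementwise OR of some subset of the basis vectors. The verification below (toy data and the bitmask encoding are both my own choices, not from the paper) uses the fact that for OR-composition one may greedily include every basis vector contained in the target.

```python
# Vectors are ints used as bitmasks; bit i is component i.
def covers(vector, basis):
    """OR together every basis vector contained in `vector` (one that
    sets no position `vector` leaves at 0) and compare. Including all
    contained basis vectors is harmless under OR, so this greedy test
    is exact."""
    acc = 0
    for b in basis:
        if b & ~vector == 0:          # b fits inside vector
            acc |= b
    return acc == vector

data  = [0b1100, 0b0011, 0b1111]      # original Boolean vectors
basis = [0b1100, 0b0011]              # candidate basis, k = 2
print(all(covers(v, basis) for v in data))    # True: each row is a disjunction
print(covers(0b1000, basis))                  # False: no subset ORs to it
```

Finding the best basis, rather than checking one, is the inapproximable part the abstract refers to.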
A more effective linear kernelization for Cluster Editing
In Theor. Comput. Sci., 2009
Cited by 38 (10 self)
Abstract. In the NP-hard Cluster Editing problem, we have as input an undirected graph G and an integer k ≥ 0. The question is whether we can transform G, by inserting and deleting at most k edges, into a cluster graph, that is, a union of disjoint cliques. We first confirm a conjecture by Michael Fellows [IWPEC 2006] that there is a polynomial-time kernelization for Cluster Editing that leads to a problem kernel with at most 6k vertices. More precisely, we present a cubic-time algorithm that, given a graph G and an integer k ≥ 0, finds a graph G′ and an integer k′ ≤ k such that G can be transformed into a cluster graph by at most k edge modifications iff G′ can be transformed into a cluster graph by at most k′ edge modifications, and the problem kernel G′ has at most 6k vertices. So far, only a problem kernel of 24k vertices was known. Second, we show that this bound on the number of vertices of G′ can be further improved to 4k. Finally, we consider the variant of Cluster Editing where the number of cliques that the cluster graph can contain is stipulated to be a constant d > 0. We present a simple kernelization for this variant leaving a problem kernel of at most (d + 2)k + d vertices.
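The target class here has a handy characterization: a graph is a cluster graph iff it contains no induced P3 (vertices u, v, w with edges uv and vw but no edge uw), which is also the obstruction that kernelization rules for Cluster Editing work on. A minimal membership check, on made-up example graphs:

```python
from itertools import combinations

def is_cluster_graph(adj):
    """adj maps each vertex to its set of neighbors. A graph is a union
    of disjoint cliques iff no vertex has two non-adjacent neighbors."""
    for v in adj:
        for u, w in combinations(adj[v], 2):   # v is the middle of a would-be P3
            if w not in adj[u]:                # uv, vw present but uw missing
                return False
    return True

path = {1: {2}, 2: {1, 3}, 3: {2}}             # P3 itself: not a cluster graph
two_cliques = {1: {2}, 2: {1}, 3: {4}, 4: {3}} # K2 + K2: a cluster graph
print(is_cluster_graph(path), is_cluster_graph(two_cliques))  # False True
```

Each detected P3 forces at least one of its three edge slots to be edited, which is where the counting behind kernel bounds like 6k starts.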
A Duality between Clause Width and Clause Density for SAT
In IEEE Conference on Computational Complexity (CCC)
Cited by 37 (7 self)
We consider the relationship between the complexity of k-SAT and that of SAT restricted to formulas of constant density. Let s_k be the infimum of those δ such that k-SAT on n variables can be decided in time 2^{δn}, and let t_Δ be the infimum of those δ such that SAT on n variables and Δn clauses can be decided in time 2^{δn}. We show that lim_{k→∞} s_k = lim_{Δ→∞} t_Δ. So, for any δ, k-SAT can be solved in time 2^{δn} independent of k if and only if the same is true for SAT with any fixed density of clauses to variables. We derive some interesting consequences from this. For example, assuming that k-SAT is exponentially hard (that is, lim_{k→∞} s_k > 0), SAT of any fixed density can be solved in time whose exponent is strictly less than that for general SAT. We also give an improvement to the sparsification lemma of [12], showing that instances of k-SAT of density slightly more than exponential in k are almost the hardest instances of k-SAT. The previous result showed this for densities doubly exponential in k.
Subexponential parameterized algorithms
In Computer Science Review
Cited by 36 (17 self)
We give a review of a series of techniques and results on the design of subexponential parameterized algorithms for graph problems. The design of such algorithms usually consists of two main steps: first, find a branch (or tree) decomposition of the input graph whose width is bounded by a sublinear function of the parameter and, second, use this decomposition to solve the problem in time that is single exponential in this bound. The main tool for the first step is Bidimensionality Theory. Here we present the potential, but also the boundaries, of this theory. For the second step, we describe recent techniques associating the analysis of subexponential algorithms with combinatorial bounds related to Catalan numbers. As a result, we have 2^{O(√k)} · n^{O(1)} time algorithms for a wide variety of parameterized problems on graphs, where n is the size of the graph and k is the parameter.
Reflections on multivariate algorithmics and problem parameterization
In Proc. 27th STACS, 2010
Cited by 36 (21 self)
Research on parameterized algorithmics for NPhard problems has steadily grown over the last years. We survey and discuss how parameterized complexity analysis naturally develops into the field of multivariate algorithmics. Correspondingly, we describe how to perform a systematic investigation and exploitation of the “parameter space” of computationally hard problems.
A quadratic kernel for feedback vertex set
In Proc. 20th SODA, ACM/SIAM, 2009
Cited by 33 (3 self)
We prove that, given an undirected graph G on n vertices and an integer k, one can compute in time polynomial in n a graph G′ with at most 5k^2 + k vertices and an integer k′ such that G has a feedback vertex set of size at most k iff G′ has a feedback vertex set of size at most k′. This result improves on a previous O(k^11) kernel of Burrage et al. [6] and a more recent cubic kernel of Bodlaender [3]. This problem was communicated by Fellows in [5].
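Verifying a candidate feedback vertex set is straightforward (finding a small one is the hard part): remove S and test that the surviving edges form a forest. A union-find sketch on a made-up toy graph:

```python
def is_feedback_vertex_set(n, edges, s):
    """True iff deleting vertex set s from the n-vertex graph leaves a
    forest. Union-find detects a cycle among the surviving edges."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x
    for u, v in edges:
        if u in s or v in s:
            continue                            # edge removed along with s
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                        # surviving edge closes a cycle
        parent[ru] = rv
    return True

triangle_plus_tail = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(is_feedback_vertex_set(4, triangle_plus_tail, {0}))   # True
print(is_feedback_vertex_set(4, triangle_plus_tail, set())) # False
```

The kernel in the abstract shrinks the instance before any such search, so the expensive work runs on at most 5k^2 + k vertices.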
Fixed-parameter tractability of multicut parameterized by the size of the cutset
2011
Cited by 33 (5 self)
Given an undirected graph G, a collection {(s1, t1), ..., (sk, tk)} of pairs of vertices, and an integer p, the EDGE MULTICUT problem asks if there is a set S of at most p edges such that the removal of S disconnects every si from the corresponding ti. VERTEX MULTICUT is the analogous problem where S is a set of at most p vertices. Our main result is that both problems can be solved in time 2^{O(p^3)} · n^{O(1)}, i.e., they are fixed-parameter tractable parameterized by the size p of the cutset in the solution. By contrast, it is unlikely that an algorithm with running time of the form f(p) · n^{O(1)} exists for the directed version of the problem, as we show it to be W[1]-hard parameterized by the size of the cutset.
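Checking a candidate edge multicut is easy compared with finding one: delete the cut edges and confirm by BFS that every terminal pair is separated. A sketch on a made-up 4-cycle instance:

```python
from collections import deque, defaultdict

def is_multicut(edges, pairs, cut):
    """True iff removing the edges in `cut` disconnects every (s, t)
    terminal pair. Edges are undirected, given as vertex pairs."""
    cutset = {frozenset(e) for e in cut}
    adj = defaultdict(set)
    for u, v in edges:
        if frozenset((u, v)) not in cutset:
            adj[u].add(v)
            adj[v].add(u)
    def reachable(s, t):
        seen, queue = {s}, deque([s])
        while queue:
            x = queue.popleft()
            if x == t:
                return True
            for y in adj[x] - seen:
                seen.add(y)
                queue.append(y)
        return False
    return not any(reachable(s, t) for s, t in pairs)

edges = [(1, 2), (2, 3), (3, 4), (1, 4)]       # a 4-cycle
pairs = [(1, 3)]                               # separate vertex 1 from 3
print(is_multicut(edges, pairs, [(1, 2), (3, 4)]))  # True: both 1-3 paths cut
print(is_multicut(edges, pairs, [(1, 2)]))          # False: 1-4-3 survives
```

The paper's contribution is on the search side: bounding the search for such a set S by a function of |S| alone.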