Results 1–10 of 29
A Threshold of ln n for Approximating Set Cover
JOURNAL OF THE ACM, 1998
Abstract

Cited by 626 (6 self)
Given a collection F of subsets of S = {1, ..., n}, set cover is the problem of selecting as few subsets from F as possible such that their union covers S, and max k-cover is the problem of selecting k subsets from F such that their union has maximum cardinality. Both these problems are NP-hard. We prove that (1 - o(1)) ln n is a threshold below which set cover cannot be approximated efficiently, unless NP has slightly superpolynomial time algorithms. This closes the gap (up to low-order terms) between the approximation ratio achievable by the greedy algorithm (which is (1 - o(1)) ln n) and previous results of Lund and Yannakakis, which showed hardness of approximation within a ratio of (log_2 n)/2 ≈ 0.72 ln n. For max k-cover we show an approximation threshold of (1 - 1/e) (up to low-order terms), under the assumption that P ≠ NP.
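The (1 - o(1)) ln n upper bound mentioned in the abstract is attained by the classical greedy algorithm: repeatedly pick the subset covering the most still-uncovered elements. A minimal Python sketch, not from the paper (the function name and the toy instance are illustrative):

```python
def greedy_set_cover(universe, subsets):
    """Greedy set cover: repeatedly choose the subset covering the most
    uncovered elements; yields an O(ln n)-approximate cover."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the subset with the largest intersection with the uncovered part.
        best = max(subsets, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            raise ValueError("the given subsets do not cover the universe")
        chosen.append(best)
        uncovered -= best
    return chosen

# Toy instance: greedy covers {1,...,5} with two of the three subsets.
cover = greedy_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {2, 4}, {3, 4, 5}])
```

Stopping the same greedy loop after k iterations gives the standard (1 - 1/e)-approximation for max k-cover that the abstract refers to.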
Approximating the Domatic Number
Abstract

Cited by 64 (7 self)
A set of vertices in a graph is a dominating set if every vertex outside the set has a neighbor in the set. The domatic number problem is that of partitioning the vertices of a graph into the maximum number of disjoint dominating sets. Let n denote the number of vertices, δ the minimum degree, and Δ the maximum degree. We show that every graph has a domatic partition with (1 - o(1))(δ + 1)/ln n dominating sets, and moreover, that such a domatic partition can be found in polynomial time. This implies a (1 + o(1)) ln n approximation algorithm for domatic number, since the domatic number is always at most δ + 1. We also show this to be essentially best possible. Namely, extending the approximation hardness of set cover by combining multiprover protocols with zero-knowledge techniques, we show that for every ε > 0, a (1 - ε) ln n approximation implies that NP ⊆ DTIME(n^O(log log n)). This makes domatic number the first natural maximization problem (known to the authors) that is provably approximable to within polylogarithmic factors but no better. We also show that every graph has a domatic partition with (1 - o(1))(δ + 1)/ln Δ dominating sets, where the "o(1)" term goes to zero as Δ increases. This can be turned into an efficient algorithm that produces a domatic partition of Ω(δ/ln Δ) sets.
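To make the central object concrete: a domatic partition splits the vertex set into parts, each of which dominates the whole graph. A small Python checker, written here only for illustration (the adjacency-dict representation and function names are assumptions, not from the paper):

```python
def is_dominating(graph, part):
    """True if every vertex is in `part` or has a neighbor in `part`.
    `graph` maps each vertex to a list of its neighbors."""
    part = set(part)
    return all(v in part or part & set(graph[v]) for v in graph)

def is_domatic_partition(graph, parts):
    """True if `parts` partitions the vertex set into dominating sets."""
    parts = [set(p) for p in parts]
    vertices = set(graph)
    covers_all = set().union(*parts) == vertices
    disjoint = sum(len(p) for p in parts) == len(vertices)
    return covers_all and disjoint and all(is_dominating(graph, p) for p in parts)

# The 4-cycle: opposite corners {0,2} and {1,3} each dominate,
# so its domatic number is at least 2 (and at most δ + 1 = 3).
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```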
Algorithmic construction of sets for k-restrictions
ACM TRANSACTIONS ON ALGORITHMS, 2006
Abstract

Cited by 44 (2 self)
This work addresses k-restriction problems, which unify combinatorial problems of the following type: the goal is to construct a short list of strings in Σ^m that satisfies a given set of k-wise demands. For every k positions and every demand, there must be at least one string in the list that satisfies the demand at these positions. Problems of this form frequently arise in different fields in computer science. The standard approach for deterministically solving such problems is via almost k-wise independence or k-wise approximations for other distributions. We offer a generic algorithmic method that yields considerably smaller constructions. To this end, we generalize a previous work of Naor, Schulman and Srinivasan [18]. Among other results, we greatly enhance the combinatorial objects at the heart of their method, called splitters, and construct multi-way splitters, using a new discrete version of the topological Necklace Splitting Theorem [1]. We utilize our methods to show improved constructions for group testing [19] and generalized hashing [3], and an improved inapproximability result for Set-Cover under the assumption P ≠ NP.
The Cell Probe Complexity of Succinct Data Structures
In Automata, Languages and Programming, 30th International Colloquium (ICALP 2003), 2003
Abstract

Cited by 30 (0 self)
We show lower bounds in the cell probe model for the redundancy/query time tradeoff of solutions to static data structure problems.
Local Computations on Static and Dynamic Graphs
, 1995
Abstract

Cited by 21 (1 self)
The purpose of this paper is a study of computation that can be done locally in a dynamic distributed network. By locally we mean within time (or distance) independent of the size of the network, and by dynamic we mean that the underlying graph is not stable and links continuously fail and come up. One of the main contributions of this work is a definition of robustness, which captures the nature of an algorithm performing well in such an environment.
Parameterized complexity of cardinality constrained optimization problems
, 2006
Abstract

Cited by 16 (2 self)
We study the parameterized complexity of cardinality constrained optimization problems, i.e., optimization problems that require their solutions to contain specified numbers of elements to optimize solution values. For this purpose, we consider around 20 such optimization problems, as well as their parametric duals, that deal with various fundamental relations among vertices and edges in graphs. We have almost completely settled their parameterized complexity by giving either FPT algorithms or W[1]-hardness proofs. Furthermore, we obtain faster exact algorithms for several cardinality constrained optimization problems by transforming them into problems of finding maximum (minimum) weight triangles in weighted graphs.
Hardness of Set Cover with Intersection 1
, 2000
Abstract

Cited by 14 (0 self)
We consider a restricted version of the general Set Covering problem in which each set in the given set system intersects with any other set in at most one element. We show that the Set Covering problem with intersection 1 cannot be approximated within a o(log n) factor in random polynomial time unless NP ⊆ ZTIME(n^O(log log n)). We also observe that the main challenge in derandomizing this reduction lies in finding a hitting set for large-volume combinatorial rectangles satisfying certain intersection properties. These properties are not satisfied by current methods of hitting set construction.
Random separation: a new method for solving fixed-cardinality optimization problems
Proceedings of the 2nd International Workshop on Parameterized and Exact Computation (IWPEC 2006), 2006
Abstract

Cited by 14 (1 self)
We develop a new randomized method, random separation, for solving fixed-cardinality optimization problems on graphs, i.e., problems concerning solutions with exactly a fixed number k of elements (e.g., k vertices V′) that optimize solution values (e.g., the number of edges covered by V′). The key idea of the method is to partition the vertex set of a graph randomly into two disjoint sets, separating a solution from the rest of the graph into connected components, and then select appropriate components to form a solution. We can use universal sets to derandomize algorithms obtained from this method. This new method is versatile and powerful, as it can be used to solve a wide range of fixed-cardinality optimization problems for degree-bounded graphs, graphs of bounded degeneracy (a large family of graphs that contains degree-bounded graphs, planar graphs, graphs of bounded treewidth, and nontrivial minor-closed families of graphs), and even general graphs.
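The random partition step described above can be sketched in Python as follows (names and graph representation are illustrative assumptions, not the authors' code). Each vertex is colored green or red uniformly at random; on degree-bounded graphs, a fixed k-vertex solution turns green while its neighborhood turns red with probability 2^-O(k), so the solution survives as a union of green connected components:

```python
import random

def random_green_set(vertices):
    """Color each vertex green (included) or red, independently with prob. 1/2."""
    return {v for v in vertices if random.random() < 0.5}

def green_components(graph, green):
    """Connected components of the subgraph induced by the green vertices.
    `graph` maps each vertex to a list of its neighbors."""
    seen, components = set(), []
    for start in graph:
        if start in green and start not in seen:
            stack, comp = [start], set()
            while stack:  # depth-first search restricted to green vertices
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                seen.add(v)
                stack.extend(w for w in graph[v] if w in green)
            components.append(comp)
    return components
```

An algorithm built on this method would repeat the coloring 2^O(k) times (or iterate over a universal set of colorings, in the derandomized version) and examine combinations of green components as candidate solutions.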
Balanced families of perfect hash functions and their applications
 Proc. ICALP
, 2007
Abstract

Cited by 10 (3 self)
The construction of perfect hash functions is a well-studied topic. In this paper, this concept is generalized with the following definition. We say that a family of functions from [n] to [k] is a δ-balanced (n, k)-family of perfect hash functions if for every S ⊆ [n], |S| = k, the number of functions that are 1-1 on S is between T/δ and δT for some constant T > 0. The standard definition of a family of perfect hash functions requires that there be at least one function that is 1-1 on S, for each S of size k. In the new notion of balanced families, we require the number of 1-1 functions to be almost the same (taking δ to be close to 1) for every such S. Our main result is that for any constant δ > 1, a δ-balanced (n, k)-family of perfect hash functions of size 2^O(k log log k) log n can be constructed in time 2^O(k log log k) n log n. Using the technique of color-coding we can apply our explicit constructions to devise approximation algorithms for various counting problems in graphs. In particular, we exhibit a deterministic polynomial time algorithm for approximating both the number of simple paths of length k and the number of simple cycles of size k for any k ≤ O(log n / log log log n) in a graph with n vertices. The approximation is up to any fixed desirable relative error.
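In the color-coding technique mentioned above, vertices are colored with k colors; a fixed simple k-vertex path becomes colorful (all colors distinct) with probability k!/k^k, and colorful paths can then be counted exactly by dynamic programming over color subsets. The balanced hash families stand in for the random coloring to make that count deterministic. A Python sketch of the colorful-path DP, for illustration only (not the paper's implementation):

```python
def count_colorful_paths(graph, coloring, k):
    """Count directed traversals of simple k-vertex paths whose vertex colors
    are pairwise distinct. dp maps vertex v to {color set S: number of
    colorful paths ending at v that use exactly the colors in S}."""
    dp = {v: {frozenset([coloring[v]]): 1} for v in graph}
    for _ in range(k - 1):  # extend all paths by one vertex
        new = {v: {} for v in graph}
        for v in graph:
            for colors, count in dp[v].items():
                for w in graph[v]:
                    if coloring[w] not in colors:  # keep the path colorful
                        extended = colors | {coloring[w]}
                        new[w][extended] = new[w].get(extended, 0) + count
        dp = new
    return sum(count for v in graph for colors, count in dp[v].items()
               if len(colors) == k)

# Triangle with three distinct colors: all 6 ordered 3-vertex paths are colorful.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
paths = count_colorful_paths(triangle, {0: 0, 1: 1, 2: 2}, 3)
```

The DP runs in 2^k · poly(n) time per coloring; averaging such counts over a δ-balanced family of perfect hash functions is what yields the deterministic approximation claimed in the abstract.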
Guessing secrets efficiently via list decoding
 ACM Transactions on Algorithms
Abstract

Cited by 7 (0 self)
We consider the guessing secrets problem defined by Chung, Graham, and Leighton [CGL01]. This is a variant of the standard 20 questions game where the player has a set of k > 1 secrets from a universe of N possible secrets. The player is asked Boolean questions about the secret. For each question, the player picks one of the k secrets adversarially, and answers according to this secret. We present an explicit set of O(log N) questions together with an efficient (i.e., poly(log N) time) algorithm to solve the guessing secrets problem for the case of 2 secrets. This answers the main algorithmic question left unanswered by [CGL01]. The main techniques we use are small ε-biased spaces and the notion of list decoding. We also establish bounds on the number of questions needed to solve the k-secrets game for k > 2, and discuss how list decoding can be used to get partial information about the secrets; specifically, to find a small core of secrets that must intersect the actual set of k secrets.