Results 1–10 of 96
A Threshold of ln n for Approximating Set Cover
 Journal of the ACM, 1998
Abstract

Cited by 637 (5 self)
Given a collection F of subsets of S = {1, ..., n}, set cover is the problem of selecting as few subsets as possible from F such that their union covers S, and max k-cover is the problem of selecting k subsets from F such that their union has maximum cardinality. Both these problems are NP-hard. We prove that (1 − o(1)) ln n is a threshold below which set cover cannot be approximated efficiently, unless NP has slightly superpolynomial time algorithms. This closes the gap (up to low-order terms) between the ratio of approximation achievable by the greedy algorithm (which is (1 − o(1)) ln n) and previous results of Lund and Yannakakis, who showed hardness of approximation within a ratio of (log₂ n)/2 ≈ 0.72 ln n. For max k-cover we show an approximation threshold of (1 − 1/e) (up to low-order terms), under the assumption that P ≠ NP.
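The (1 − o(1)) ln n upper bound mentioned above is attained by the classical greedy algorithm: repeatedly pick the set covering the most still-uncovered elements. A minimal, self-contained sketch of that textbook greedy (the function name and toy instance are illustrative, not from the paper):

```python
def greedy_set_cover(universe, subsets):
    """Greedy set cover: repeatedly pick the subset covering the most
    still-uncovered elements. Guaranteed within H(n) <= ln(n) + 1 of optimal."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the subset with maximum overlap with the uncovered elements.
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("subsets do not cover the universe")
        chosen.append(best)
        uncovered -= best
    return chosen

# Toy instance: universe {1..5}; here greedy happens to find a 2-set cover.
cover = greedy_set_cover(
    {1, 2, 3, 4, 5},
    [frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({4, 5}), frozenset({1, 5})],
)
```

In general the greedy is only guaranteed to be within H(n) ≤ ln n + 1 of optimal, which is what makes the paper's (1 − o(1)) ln n lower bound tight up to low-order terms.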
Approximation Algorithms for Connected Dominating Sets
 Algorithmica, 1996
Abstract

Cited by 281 (9 self)
The dominating set problem in graphs asks for a minimum-size subset of vertices with the following property: each vertex is required to either be in the dominating set, or adjacent to some node in the dominating set. We focus on the question of finding a connected dominating set of minimum size, where the graph induced by vertices in the dominating set is required to be connected as well. This problem arises in network testing, as well as in wireless communication. Two polynomial-time algorithms that achieve approximation factors of O(H(Δ)) are presented, where Δ is the maximum degree and H is the harmonic function. This question also arises in relation to the traveling tourist problem, where one is looking for the shortest tour such that each vertex is either visited, or has at least one of its neighbors visited. We study a generalization of the problem in which the vertices have weights, and give an algorithm which achieves a performance ratio of 3 ln n. We also consider the ...
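The O(H(Δ)) guarantees build on the basic greedy for (plain, not necessarily connected) dominating sets, which can be sketched as follows (illustrative code, not the authors' algorithm for the connected variant):

```python
def greedy_dominating_set(adj):
    """Greedy dominating set on a graph given as {vertex: set_of_neighbors}.
    Repeatedly picks the vertex whose closed neighborhood (itself plus its
    neighbors) contains the most not-yet-dominated vertices; this yields an
    H(Delta + 1) approximation, Delta being the maximum degree."""
    undominated = set(adj)
    dom = []
    while undominated:
        v = max(adj, key=lambda u: len(undominated & ({u} | adj[u])))
        dom.append(v)
        undominated -= {v} | adj[v]
    return dom

# Star graph: the center dominates everything in one pick.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(greedy_dominating_set(star))  # → [0]
```

Making the chosen set connected as well (the paper's actual problem) requires extra work, e.g. growing the set so that each new vertex is adjacent to an already-chosen one.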
Constant-Time Distributed Dominating Set Approximation
 In Proc. of the 22nd ACM Symposium on Principles of Distributed Computing (PODC), 2003
Abstract

Cited by 112 (24 self)
Finding a small dominating set is one of the most fundamental problems of traditional graph theory. In this paper, we present a new fully distributed approximation algorithm based on LP relaxation techniques. For an arbitrary parameter k and maximum degree Δ, our algorithm computes a dominating set of expected size O(kΔ^(2/k) log Δ · |DS_OPT|) in O(k²) rounds, where each node has to send O(k²Δ) messages of size O(log Δ). This is the first algorithm which achieves a non-trivial approximation ratio in a constant number of rounds.
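For context, the standard LP relaxation of minimum dominating set that such LP-based techniques start from (a textbook formulation, not necessarily the paper's exact program) assigns each vertex v a fractional value x_v and requires every closed neighborhood to be fractionally covered:

```latex
\min \sum_{v \in V} x_v
\quad \text{s.t.} \quad \sum_{u \in N(v) \cup \{v\}} x_u \ge 1 \;\;\forall v \in V,
\qquad x_v \ge 0 \;\;\forall v \in V.
```

Rounding a (near-)optimal fractional solution of this LP, distributedly, is what produces the integral dominating set.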
Algorithmic construction of sets for k-restrictions
 ACM Transactions on Algorithms, 2006
Abstract

Cited by 42 (2 self)
This work addresses k-restriction problems, which unify combinatorial problems of the following type: The goal is to construct a short list of strings in Σ^m that satisfies a given set of k-wise demands. For every k positions and every demand, there must be at least one string in the list that satisfies the demand at these positions. Problems of this form frequently arise in different fields in Computer Science. The standard approach for deterministically solving such problems is via almost k-wise independence or k-wise approximations for other distributions. We offer a generic algorithmic method that yields considerably smaller constructions. To this end, we generalize a previous work of Naor, Schulman and Srinivasan [18]. Among other results, we greatly enhance the combinatorial objects at the heart of their method, called splitters, and construct multi-way splitters, using a new discrete version of the topological Necklace Splitting Theorem [1]. We utilize our methods to show improved constructions for group testing [19] and generalized hashing [3], and an improved inapproximability result for Set-Cover under the assumption P ≠ NP.
On the Power of Priority Algorithms for Facility Location and Set Cover
 In Proceedings of the 5th International Workshop on Approximation Algorithms for Combinatorial Optimization, 2002
Abstract

Cited by 20 (7 self)
We apply and extend the priority algorithm framework introduced by Borodin, Nielsen and Rackoff to define "greedy-like" algorithms for (uncapacitated) facility location problems and set cover. These problems have been the focus of extensive research from the point of view of approximation algorithms, and for both problems, greedy algorithms have been proposed and analyzed. The priority algorithm definitions are general enough to capture a broad class of algorithms that can be characterized as "greedy-like", while it is still possible to derive non-trivial lower bounds on the approximability of the problems. Our results are orthogonal to complexity considerations, and hence apply to algorithms that are not necessarily polynomial-time.
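As a rough illustration of the framework (a sketch using interval scheduling rather than facility location as the toy domain): a fixed-priority algorithm orders the input items once by a priority function and then makes one irrevocable accept/reject decision per item, based only on the items seen so far:

```python
def fixed_priority_algorithm(items, priority, decide):
    """Generic fixed-priority 'greedy-like' template: order all input items
    once by a priority function, then make an irrevocable decision on each
    item in that order. The decisions list is never revisited."""
    state = []  # (item, decision) pairs made so far, irrevocable
    for item in sorted(items, key=priority):
        state.append((item, decide(item, state)))
    return state

# Toy instantiation: classic interval scheduling, priority = right endpoint,
# accept an interval iff it does not overlap any already-accepted one.
intervals = [(1, 4), (3, 5), (4, 7), (6, 8)]

def no_overlap(iv, state):
    accepted = [i for i, ok in state if ok]
    return all(iv[0] >= j[1] or iv[1] <= j[0] for j in accepted)

result = fixed_priority_algorithm(intervals, priority=lambda iv: iv[1],
                                  decide=no_overlap)
```

The lower bounds in the paper hold for any algorithm expressible in this template (and its adaptive variant), regardless of running time.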
A general method for sensor planning in multi-sensor systems: Extension to random occlusion
2005
Abstract

Cited by 19 (1 self)
Abstract. Systems utilizing multiple sensors are required in many domains. In this paper, we specifically concern ourselves with applications where dynamic objects appear randomly and the system is employed to obtain some user-specified characteristics of such objects. For such systems, we deal with the tasks of determining measures for evaluating their performance and of determining good sensor configurations that would maximize such measures for better system performance. We introduce a constraint in sensor planning that has not been addressed earlier: visibility in the presence of random occluding objects. Two techniques are developed to analyze such visibility constraints: a probabilistic approach to determine “average” visibility rates, and a deterministic approach to address worst-case scenarios. Apart from this constraint, other important constraints to be considered include image resolution, field of view, capture orientation, and algorithmic constraints such as stereo matching and background appearance. Integration of such constraints is performed via the development of a probabilistic framework that allows one to reason about different occlusion events and integrates different multi-view capture and visibility constraints in a natural way. Integration of the thus obtained capture-quality measure across the region of interest yields a measure for the effectiveness of a sensor configuration, and maximization of such measure yields sensor configurations that are ...
Network Performance Anomaly Detection and Localization
Abstract

Cited by 19 (0 self)
Abstract—Detecting the occurrence and location of performance anomalies (e.g., high jitter or loss events) is critical to ensuring the effective operation of network infrastructures. In this paper we present a framework for detecting and localizing performance anomalies based on using an active probe-enabled measurement infrastructure deployed on the periphery of a network. Our framework has three components: an algorithm for detecting performance anomalies on a path, an algorithm for selecting which paths to probe at a given time in order to detect performance anomalies (where a path is defined as the set of links between two measurement nodes), and an algorithm for identifying the links that are causing an identified anomaly on a path (i.e., localizing). The problem of detecting an anomaly on a path is addressed by comparing probe-based measures of performance characteristics with performance guarantees for the network (e.g., SLAs). The path selection algorithm is designed to enable a tradeoff between ensuring that all links in a network are frequently monitored to detect performance anomalies and minimizing probing overhead. The localization algorithm is designed to use existing path measurement data in such a way as to minimize the number of paths necessary for additional probing in order to identify the link(s) responsible for an observed performance anomaly. We assess the feasibility of our framework and algorithms by implementing them in ns-2 and conducting a set of simulation-based experiments using several different network topologies. Our results show that our method is able to accurately detect and localize performance anomalies in a timely fashion and with lower probe and computational overheads than previously proposed methodologies.
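The path-selection component described above is naturally a covering problem: each candidate path "covers" the links it traverses. A sketch of the standard greedy set-cover heuristic applied to probe selection (illustrative only; the paths and link ids are made up, and the paper's algorithm additionally balances how frequently each link is monitored against probing overhead):

```python
def select_probe_paths(paths):
    """Pick a small set of paths whose links jointly cover every link.
    `paths` maps a path id to the set of link ids it traverses; this is the
    greedy set-cover heuristic applied to active-probing path selection."""
    all_links = set().union(*paths.values())
    uncovered, probes = set(all_links), []
    while uncovered:
        # Probe the path that covers the most still-unmonitored links.
        pid = max(paths, key=lambda p: len(uncovered & paths[p]))
        probes.append(pid)
        uncovered -= paths[pid]
    return probes

# Paths between measurement nodes, each given as the set of links it crosses.
paths = {"A-B": {1, 2}, "A-C": {1, 3}, "B-C": {2, 3, 4}}
probes = select_probe_paths(paths)
print(probes)
```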
Approximation algorithms for Euclidean group TSP
 In Automata, Languages and Programming: 32nd International Colloquium, ICALP 2005
Abstract

Cited by 18 (4 self)
Abstract. In the Euclidean group Traveling Salesman Problem (TSP), we are given a set of points P in the plane and a set of m connected regions, each containing at least one point of P. We want to find a tour of minimum length that visits at least one point in each region. This unifies the TSP with Neighborhoods and the Group Steiner Tree problem. We give a (9.1α + 1)-approximation algorithm for the case when the regions are disjoint α-fat objects of possibly varying size. This considerably improves the best results known, in this case, for both the group Steiner tree problem and the TSP with Neighborhoods problem. We also give the first O(1)-approximation algorithm for the problem with intersecting regions.
Optimal Positioning of Active and Passive Monitoring Devices
 CoNEXT 2005, Toulouse, France
Abstract

Cited by 16 (0 self)
Network measurement is essential for assessing performance issues and for identifying and locating problems. Two common strategies are the passive approach, which attaches specific devices to links in order to monitor the traffic that passes through the network, and the active approach, which generates explicit control packets in the network for measurements. One of the key issues in this domain is to minimize the overhead in terms of hardware, software, maintenance cost and additional traffic. In this paper, we study the problem of assigning tap devices for passive monitoring and beacons for active monitoring. Minimizing the number of devices and finding optimal strategic locations is a key issue, mandatory for deploying scalable monitoring platforms. We present a combinatorial view of the problem from which we derive complexity and approximability results, as well as efficient and versatile Mixed Integer Programming (MIP) formulations.
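A generic covering-style MIP of the kind referred to above (an illustrative template with hypothetical symbols — D is a set of candidate device locations with costs c_d, T a set of measurement targets — not the paper's exact formulations): binary y_d indicates whether a device is installed at location d, and every target must be covered by at least one installed device:

```latex
\min \sum_{d \in D} c_d\, y_d
\quad \text{s.t.} \quad \sum_{d \,:\, d \text{ covers } t} y_d \ge 1 \;\;\forall t \in T,
\qquad y_d \in \{0, 1\} \;\;\forall d \in D.
```

Relaxing y_d ∈ {0, 1} to 0 ≤ y_d ≤ 1 gives the LP bound typically used to assess such formulations.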