Results 1–10 of 12
A Bidding Protocol for Deploying Mobile Sensors
In Proceedings of IEEE ICNP, 2003
Abstract

Cited by 81 (6 self)
Adequate coverage is very important for sensor networks to fulfill sensing tasks. In many working environments, it is necessary to make use of mobile sensors to provide the required coverage. We propose to deploy a mix of mobile and static sensors to achieve a balance between sensor coverage and sensor cost. We design two bidding protocols to guide the movement of mobile sensors. In the protocols, static sensors detect coverage holes locally by using Voronoi diagrams, and bid mobile sensors to move. Mobile sensors accept the highest bids and heal the largest holes. Simulation results show that our protocols achieve suitable tradeoff between coverage and sensor cost.
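The bidding idea in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's protocol: hole detection here uses grid sampling as a stand-in for the paper's Voronoi-diagram test, and all names (`hole_bids`, `assign_mobiles`, `SENSING_RANGE`) are invented for the sketch.

```python
import math

SENSING_RANGE = 1.0  # assumed sensing radius for every sensor

def hole_bids(static_sensors, grid_points):
    """Each static sensor bids the number of nearby uncovered sample points.

    Grid sampling is a simplification: the paper detects holes locally
    with Voronoi diagrams instead.
    """
    bids = {}
    for p in grid_points:
        # A sample point is part of a hole if no static sensor covers it.
        if all(math.dist(p, s) > SENSING_RANGE for s in static_sensors):
            # Attribute the hole sample to the nearest static sensor.
            owner = min(static_sensors, key=lambda s: math.dist(p, s))
            bids[owner] = bids.get(owner, 0) + 1
    return bids

def assign_mobiles(mobile_sensors, bids):
    """Greedily send each mobile sensor toward the current highest bidder,
    so the largest detected holes are healed first."""
    plan = {}
    remaining = dict(bids)
    for m in mobile_sensors:
        if not remaining:
            break
        winner = max(remaining, key=remaining.get)
        plan[m] = winner       # mobile m moves toward this hole
        del remaining[winner]  # that hole is considered healed
    return plan
```

With two static sensors at (0, 0) and (4, 0) and samples along the segment between them, the uncovered middle produces bids and the single mobile sensor is routed to the highest bidder.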
The discoverability of the web
In WWW, 2007
Abstract

Cited by 24 (2 self)
Previous studies have highlighted the high arrival rate of new content on the web. We study the extent to which this new content can be efficiently discovered by a crawler. Our study has two parts. First, we study the inherent difficulty of the discovery problem using a maximum cover formulation, under an assumption of perfect estimates of likely sources of links to new content. Second, we relax this assumption and study a more realistic setting in which algorithms must use historical statistics to estimate which pages are most likely to yield links to new content. We recommend a simple algorithm that performs comparably to all approaches we consider. We measure the overhead of discovering new content, defined as the average number of fetches required to discover one new page. We show first that with perfect foreknowledge of where to explore for links to new content, it is possible to discover 90% of all new content with under 3% overhead, and 100% of new content with 9% overhead. But actual algorithms, which do not have access to perfect foreknowledge, face a more difficult task: one quarter of new content is simply not amenable to efficient discovery. Of the remaining three quarters, 80% of new content during a given week may be discovered with 160% overhead if content is recrawled fully on a monthly basis.
Approximation Algorithms for the Minimum-Length Corridor and Related Problems
2007
Abstract

Cited by 3 (2 self)
Given a rectangular boundary partitioned into rectangles, the Minimum-Length Corridor (MLCR) problem consists of finding a corridor of least total length. A corridor is a set of connected line segments, each of which must lie along the line segments that form the rectangular boundary and/or the boundary of the rectangles, and must include at least one point from the boundary of every rectangle and from the rectangular boundary. The MLCR problem has been shown to be NP-hard. In this paper we present the first polynomial time constant ratio approximation algorithm for the MLCR and MLCn problems. The MLCn problem is a generalization of the MLCR problem where the rectangles are rectilinear k-gons, for k ≤ n. We also present a polynomial time constant ratio approximation algorithm for the Group Traveling Salesperson Problem (GTSP) for a rectangle partitioned into rectilinear k-gons as in the MLCn problem.
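The problem definition above can be made concrete with a small feasibility checker. This is an illustrative sketch of the definitions only, not the paper's algorithm; the covering test checks segment endpoints rather than full segments, a simplification, and all function names are invented here.

```python
def corridor_length(segments):
    """Total length of a corridor given as axis-aligned segments
    ((ax, ay), (bx, by)); one coordinate difference is always zero."""
    return sum(abs(bx - ax) + abs(by - ay) for (ax, ay), (bx, by) in segments)

def on_boundary(point, rect):
    """True if point lies on the boundary of the axis-aligned rectangle
    rect = (x1, y1, x2, y2)."""
    x, y = point
    x1, y1, x2, y2 = rect
    on_vertical = x in (x1, x2) and y1 <= y <= y2
    on_horizontal = y in (y1, y2) and x1 <= x <= x2
    return on_vertical or on_horizontal

def is_feasible(segments, rects):
    """Checks the covering condition: every rectangle's boundary is touched
    by some segment endpoint (endpoints only -- a simplification of the
    'at least one point from every boundary' requirement)."""
    pts = {p for seg in segments for p in seg}
    return all(any(on_boundary(p, r) for p in pts) for r in rects)
```

An L-shaped corridor of two segments, for instance, has total length 3 and touches the shared boundary point (2, 0) of two adjacent unit-height rectangles.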
Approximating Full Steiner Tree in a Unit Disk Graph
2014
Abstract
Given an edge-weighted graph G = (V,E) and a subset R of V, a Steiner tree of G is a tree which spans all the vertices in R. A full Steiner tree is a Steiner tree which has all the vertices of R as its leaves. The full Steiner tree problem is to find a full Steiner tree of G with minimum weight. In this paper we present a 20-approximation algorithm for the full Steiner tree problem when G is a unit disk graph.
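The two definitions in this abstract, unit disk graph and full Steiner tree, can be sketched directly. This illustrates the problem setting only, not the paper's 20-approximation algorithm; the tuple-based representations are assumptions of the sketch.

```python
import math
from itertools import combinations

def unit_disk_graph(points, r=1.0):
    """Adjacency sets with an edge between every pair of points at
    Euclidean distance at most r (the unit disk graph of the points)."""
    edges = {p: set() for p in points}
    for p, q in combinations(points, 2):
        if math.dist(p, q) <= r:
            edges[p].add(q)
            edges[q].add(p)
    return edges

def is_full_steiner_tree(tree_edges, terminals):
    """A Steiner tree for terminal set R is *full* iff every terminal is a
    leaf, i.e. has degree exactly 1 in the tree."""
    degree = {}
    for u, v in tree_edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return all(degree.get(t, 0) == 1 for t in terminals)
```

For example, the path a–s–b is a full Steiner tree for terminals {a, b} (both are leaves, s is an internal Steiner vertex), but not for terminals {a, s}.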
Variants
2005
Abstract
Disclaimer: This note is for personal use only, so it may be short, nonsensical, and incomplete. However, if you find any mistakes, comments are welcome at