Results 1–10 of 11
Self-improving algorithms
in SODA '06: Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms
Cited by 26 (4 self)
Abstract: We investigate ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an arbitrary, unknown input distribution. We give such self-improving algorithms for sorting and computing Delaunay triangulations. The highlights of this work: (i) an algorithm to sort a list of numbers with optimal expected limiting complexity; and (ii) an algorithm to compute the Delaunay triangulation of a set of points with optimal expected limiting complexity. In both cases, the algorithm begins with a training phase during which it adjusts itself to the input distribution, followed by a stationary regime in which the algorithm settles into its optimized incarnation.
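The train-then-run scheme described in this abstract can be illustrated with a minimal 1-D sorting sketch. This is not the paper's algorithm: the function names and the quantile-bucket heuristic are illustrative assumptions, showing only the shape of a training phase followed by a stationary regime.

```python
import bisect
import random

def train_buckets(training_lists, num_buckets):
    """Training phase (illustrative): estimate bucket boundaries as
    empirical quantiles from sample inputs drawn from the unknown
    input distribution."""
    pool = sorted(x for lst in training_lists for x in lst)
    step = max(1, len(pool) // num_buckets)
    return pool[step::step][: num_buckets - 1]

def self_improving_sort(lst, boundaries):
    """Stationary regime: distribute items into the learned buckets,
    then sort each bucket. If the distribution was learned well,
    buckets stay small, so the expected work is low."""
    buckets = [[] for _ in range(len(boundaries) + 1)]
    for x in lst:
        buckets[bisect.bisect_left(boundaries, x)].append(x)
    out = []
    for b in buckets:
        out.extend(sorted(b))
    return out
```

The output is always correctly sorted regardless of how well the boundaries match the true distribution; only the running time depends on the fit.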
Traces of Finite Sets: Extremal Problems and Geometric Applications
1992
Cited by 13 (0 self)
Abstract: Given a hypergraph H and a subset S of its vertices, the trace of H on S is defined as H_S = {E ∩ S : E ∈ H}. The Vapnik–Chervonenkis dimension (VC-dimension) of H is the size of the largest subset S for which H_S has 2^|S| edges. Hypergraphs of small VC-dimension play a central role in many areas of statistics, discrete and computational geometry, and learning theory. We survey some of the most important results related to this concept, with special emphasis on (a) hypergraph-theoretic methods and (b) geometric applications.
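The definition above translates directly into a brute-force check; a small sketch (exponential, for tiny hypergraphs only — the function name is mine):

```python
from itertools import combinations

def vc_dimension(vertices, edges):
    """Brute-force VC-dimension of a hypergraph: the largest |S| such
    that the trace {E ∩ S : E in edges} contains all 2^|S| subsets,
    i.e. S is shattered."""
    edges = [frozenset(E) for E in edges]
    for d in range(len(vertices), 0, -1):
        for S in combinations(vertices, d):
            S = frozenset(S)
            trace = {E & S for E in edges}
            if len(trace) == 2 ** d:  # S is shattered
                return d
    return 0
```

For example, the hypergraph of all "intervals" on three ordered points has VC-dimension 2: no set of three points is shattered because the subset {1, 3} cannot be cut out without also taking 2.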
Fast almost-linear-sized nets for boxes in the plane
Cited by 4 (0 self)
Abstract: Let B be any set of n axis-aligned boxes in R^d, d ≥ 1. For any point p, we define the subset B_p of B as B_p = {B ∈ B : p ∈ B}. A box B in B_p is said to be stabbed by p. A subset N ⊆ B is a (1/c)-net ...
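The stabbing definition B_p = {B ∈ B : p ∈ B} is a plain containment test; a minimal sketch, assuming boxes are given as (lo, hi) corner tuples (my representation, not the paper's):

```python
def stabbed(boxes, p):
    """B_p: the boxes stabbed by point p. A box is a pair of corner
    tuples (lo, hi); p is inside iff it is between the corners in
    every coordinate (axis-aligned containment in R^d)."""
    return [(lo, hi) for (lo, hi) in boxes
            if all(l <= x <= h for l, x, h in zip(lo, p, hi))]
```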
ε-samples for kernels
in Proceedings 24th Annual ACM-SIAM Symposium on Discrete Algorithms, 2013
Cited by 2 (2 self)
Abstract: We study the worst-case error of kernel density estimates via subset approximation. A kernel density estimate of a distribution is the convolution of that distribution with a fixed kernel (e.g. the Gaussian kernel). Given a subset (i.e. a point set) of the input distribution, we can compare the kernel density estimate of the input distribution with that of the subset and bound the worst-case error. If the maximum error is ε, then this subset can be thought of as an ε-sample (a.k.a. an ε-approximation) of the range space defined with the input distribution as the ground set and the fixed kernel representing the family of ranges. Interestingly, in this case the ranges are not binary, but have a continuous range (for simplicity we focus on kernels with range [0, 1]); these allow for smoother notions of range spaces. It turns out that the use of this smoother family of range spaces has the added benefit of greatly decreasing the size required for ε-samples. For instance, in the plane the size is O((1/ε^{4/3}) log^{2/3}(1/ε)) for disks (based on VC-dimension arguments) but is only O((1/ε) √(log(1/ε))) for Gaussian kernels and for kernels with bounded slope that only affect a bounded domain. These bounds are accomplished by studying the discrepancy of these "kernel" range spaces, and here the improvement in bounds is even more pronounced. In the plane, we show the discrepancy is O(√(log n)) for these kernels, whereas for ...
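The error measure in this abstract — the maximum deviation between the KDE of the full set and the KDE of the subset — can be evaluated directly. A minimal 1-D sketch (the paper works in the plane; the query-grid approximation and function names are mine):

```python
import math

def gaussian_kde(points, q, sigma=1.0):
    """Kernel density estimate at query q: the average Gaussian
    kernel value over the point set (unnormalized kernel in [0, 1])."""
    return sum(math.exp(-((q - p) ** 2) / (2 * sigma ** 2))
               for p in points) / len(points)

def kernel_error(points, subset, queries, sigma=1.0):
    """Worst-case KDE discrepancy between the full set and the subset,
    measured over a grid of query points. If this is at most eps,
    the subset behaves as an eps-sample for the kernel range space."""
    return max(abs(gaussian_kde(points, q, sigma)
                   - gaussian_kde(subset, q, sigma))
               for q in queries)
```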
ε-Net Approach to Sensor k-Coverage
Cited by 1 (1 self)
Abstract: Wireless sensors rely on battery power, and in many applications it is difficult or prohibitive to replace them. Hence, in order to prolong the system's lifetime, some sensors can be kept inactive while others perform all the tasks. In this paper, we study the k-coverage problem of activating the minimum number of sensors to ensure that every point in the area is covered by at least k sensors. This ensures higher fault tolerance and robustness, and improves many operations, among them position detection and intrusion detection. The k-coverage problem is trivially NP-complete, and hence we can only provide approximation algorithms. In this paper, we present an algorithm based on an extension of the classical ε-net technique. This method gives an O(log M)-approximation, where M is the number of sensors in an optimal solution. We make no particular assumptions on the shape of the areas covered by each sensor, besides that they must be closed, connected, and without holes.
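The problem can be made concrete with a greedy sketch on a discretized model, where each sensor is abstracted as the set of points it covers (my abstraction; the paper works with continuous regions and an ε-net extension, not this plain greedy rule):

```python
def greedy_k_cover(points, sensors, k):
    """Greedy sketch for k-coverage: repeatedly activate the sensor
    that covers the most still-deficient points, until every point
    is covered by at least k active sensors. `sensors` maps a sensor
    id to the set of points it covers."""
    deficit = {p: k for p in points}     # remaining coverage needed
    remaining = dict(sensors)
    active = []
    while any(d > 0 for d in deficit.values()):
        if not remaining:
            raise ValueError("k-coverage infeasible with given sensors")
        # pick the sensor helping the most deficient points
        best = max(remaining,
                   key=lambda s: sum(1 for p in remaining[s] if deficit[p] > 0))
        if sum(1 for p in remaining[best] if deficit[p] > 0) == 0:
            raise ValueError("k-coverage infeasible with given sensors")
        for p in remaining[best]:
            if deficit[p] > 0:
                deficit[p] -= 1
        active.append(best)
        del remaining[best]
    return active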
Near-Linear Approximation Algorithms for Geometric Hitting Sets
in SCG'09, 2009
Cited by 1 (1 self)
Abstract: Given a set system (X, R), the hitting set problem is to find a smallest-cardinality subset H ⊆ X with the property that each range R ∈ R has a nonempty intersection with H. We present near-linear-time approximation algorithms for the hitting set problem under the following geometric settings: (i) R is a set of planar regions with small union complexity; (ii) R is a set of axis-parallel d-rectangles in R^d. In both cases X is either the entire d-dimensional space or a finite set of points in d-space. The approximation factors yielded by the algorithm are small; they are either the same as, or within an O(log n) factor of, the best factors known to be computable in polynomial time.
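As context for the approximation factors mentioned above, the classical greedy baseline for hitting sets (not the paper's near-linear algorithm, which relies on ε-net machinery) is a short sketch:

```python
def greedy_hitting_set(ranges):
    """Greedy hitting set: repeatedly pick the point contained in the
    most not-yet-hit ranges. This gives the classical O(log n)
    approximation for general set systems."""
    unhit = [set(R) for R in ranges]
    H = set()
    while unhit:
        counts = {}
        for R in unhit:
            for p in R:
                counts[p] = counts.get(p, 0) + 1
        best = max(counts, key=counts.get)  # most frequent point
        H.add(best)
        unhit = [R for R in unhit if best not in R]
    return H
```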
Tight Lower Bounds for the Size of Epsilon-Nets [Extended Abstract]
Abstract: According to a well-known theorem of Haussler and Welzl (1987), any range space of bounded VC-dimension admits an ε-net of size O((1/ε) log(1/ε)). Using probabilistic techniques, Pach and Woeginger (1990) showed that there exist range spaces of VC-dimension 2 for which the above bound is sharp. The only known range spaces of small VC-dimension, in which the ranges are geometric objects in some Euclidean space and the size of the smallest ε-nets is superlinear in 1/ε, were found by Alon (2010). In his examples, every ε-net is of size Ω((1/ε) g(1/ε)), where g is an extremely slowly growing function, related to the inverse Ackermann function. We show that there exist geometrically defined range spaces, already of VC-dimension 2, in which the size of the smallest ε-nets is Ω((1/ε) log(1/ε)). We also construct range spaces induced by axis-parallel rectangles in the plane, in which the size of the smallest ε-nets is Ω((1/ε) log log(1/ε)). By a theorem of Aronov, Ezra, and Sharir (2010), this bound is tight.
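The ε-net property itself is easy to check in the simplest geometric setting, intervals on the line (VC-dimension 2). A minimal sketch (function name mine): a sample is an ε-net iff no gap between consecutive sample points swallows an ε-fraction of the ground set.

```python
def is_eps_net(points, sample, eps):
    """Check the eps-net property for intervals on the line: every
    interval containing at least eps * len(points) ground points must
    contain a sample point. Any interval that misses the sample lies
    strictly between consecutive sample points, so checking the gaps
    suffices."""
    pts = sorted(points)
    bounds = [float("-inf")] + sorted(set(sample)) + [float("inf")]
    n = len(pts)
    for lo, hi in zip(bounds, bounds[1:]):
        missed = sum(1 for p in pts if lo < p < hi)  # points the net skips
        if missed >= eps * n:
            return False
    return True
```

For intervals, every (1/ε)-th point of the sorted ground set is already an ε-net of size about 1/ε; the lower bounds above show that for some geometric range spaces of the same VC-dimension, the extra log(1/ε) factor in the Haussler–Welzl bound cannot be removed.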