Results 1–10 of 14
Combinatorial Geometry
, 1995
Cited by 163 (26 self)
Abstract. Let P be a set of n points in R^d (where d is a small fixed positive integer), and let Γ be a collection of subsets of R^d, each of which is defined by a constant number of bounded-degree polynomial inequalities. We consider the following Γ-range searching problem: Given P, build a data structure for efficient answering of queries of the form, "Given a γ ∈ Γ, count (or report) the points of P lying in γ." Generalizing the simplex range searching techniques, we give a solution with nearly linear space and preprocessing time and with O(n^{1−1/b+δ}) query time, where d ≤ b ≤ 2d−3 and δ > 0 is an arbitrarily small constant. The actual value of b is related to the problem of partitioning arrangements of algebraic surfaces into cells with a constant description complexity. We present some of the applications of the Γ-range searching problem, including improved ray shooting among triangles in R³.
On Linear-Time Deterministic Algorithms for Optimization Problems in Fixed Dimension
, 1992
Cited by 91 (10 self)
We show that with recently developed derandomization techniques, one can convert Clarkson's randomized algorithm for linear programming in fixed dimension into a linear-time deterministic one. The constant of proportionality is d^{O(d)}, which is better than for previously known such algorithms. We show that the algorithm works in a fairly general abstract setting, which allows us to solve various other problems (such as finding the maximum volume ellipsoid inscribed into the intersection of n halfspaces) in linear time.
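As a deliberately naive illustration of the problem being solved (this is not the paper's derandomized algorithm, and the function name is hypothetical): in fixed dimension d = 2, the optimum of a feasible, bounded linear program lies at a vertex, i.e. at the intersection of two constraint boundaries, so an O(n³) brute force over all such vertices already solves it. The linear-time algorithms discussed in the abstract remove the polynomial dependence on n.

```python
def lp2d(c, halfplanes, eps=1e-9):
    """Maximize c·x subject to a·x <= b for each (a, b) in halfplanes.

    Brute force: enumerate all vertices (intersections of two constraint
    boundaries), keep the feasible one with the largest objective value.
    Returns (value, vertex), or None if no vertex is feasible (e.g. the
    feasible region is empty or unbounded in the direction of c).
    """
    best = None
    for i in range(len(halfplanes)):
        for j in range(i + 1, len(halfplanes)):
            (a1, b1), (a2, b2) = halfplanes[i], halfplanes[j]
            det = a1[0] * a2[1] - a1[1] * a2[0]
            if abs(det) < eps:
                continue  # parallel boundaries: no vertex
            # Cramer's rule for the 2x2 system a1·x = b1, a2·x = b2.
            x = ((b1 * a2[1] - b2 * a1[1]) / det,
                 (a1[0] * b2 - a2[0] * b1) / det)
            if all(a[0] * x[0] + a[1] * x[1] <= b + eps for a, b in halfplanes):
                val = c[0] * x[0] + c[1] * x[1]
                if best is None or val > best[0]:
                    best = (val, x)
    return best

# Maximize x + y over the unit square 0 <= x <= 1, 0 <= y <= 1.
square = [((1, 0), 1), ((-1, 0), 0), ((0, 1), 1), ((0, -1), 0)]
print(lp2d((1, 1), square))  # optimum 2 at vertex (1, 1)
```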
On Range Searching with Semialgebraic Sets
 DISCRETE COMPUT. GEOM
, 1994
Cited by 80 (22 self)
Let P be a set of n points in R^d (where d is a small fixed positive integer), and let Γ be a collection of subsets of R^d, each of which is defined by a constant number of bounded-degree polynomials. We consider the following Γ-range searching problem: Given P, build a data structure for efficient answering of queries of the form "Given a γ ∈ Γ, count (or report) the points of P lying in γ". Generalizing the simplex range searching techniques, we give a solution with nearly linear space and preprocessing time and with O(n^{1−1/b+δ}) query time, where d ≤ b ≤ 2d−3 and δ > 0 is an arbitrarily small constant. The actual value of b is related to the problem of partitioning arrangements of algebraic surfaces into constant-complexity cells. We present some of the applications of the Γ-range searching problem, including improved ray shooting among triangles in R³.
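As a concrete (hypothetical) illustration of the query being answered, not the paper's data structure, here is the naive linear-scan baseline: a semialgebraic range is a conjunction of polynomial inequalities, and a point lies in the range iff it satisfies all of them.

```python
def count_in_range(points, inequalities):
    """Count points p with g(p) <= 0 for every polynomial predicate g."""
    return sum(1 for p in points if all(g(p) <= 0 for g in inequalities))

# Example range: the unit disk in R^2, defined by one degree-2 inequality.
unit_disk = [lambda p: p[0] ** 2 + p[1] ** 2 - 1.0]

pts = [(0.0, 0.0), (0.5, 0.5), (2.0, 0.0), (0.0, -0.9)]
print(count_in_range(pts, unit_disk))  # 3 of the 4 points lie in the disk
```

This baseline answers each query in O(n) time; the point of the data structure in the abstract is to answer such queries in O(n^{1−1/b+δ}) time after near-linear preprocessing.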
TRACES OF FINITE SETS: EXTREMAL PROBLEMS AND GEOMETRIC APPLICATIONS
, 1992
Cited by 11 (0 self)
Given a hypergraph H and a subset S of its vertices, the trace of H on S is defined as H_S = {E ∩ S : E ∈ H}. The Vapnik–Chervonenkis dimension (VC-dimension) of H is the size of the largest subset S for which H_S has 2^|S| edges. Hypergraphs of small VC-dimension play a central role in many areas of statistics, discrete and computational geometry, and learning theory. We survey some of the most important results related to this concept with special emphasis on (a) hypergraph-theoretic methods and (b) geometric applications.
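The two definitions in the abstract are small enough to compute directly. A minimal sketch (function names and the example hypergraph are illustrative, not from the survey):

```python
from itertools import combinations

def trace(H, S):
    """Trace of hypergraph H (a list of edge sets) on vertex subset S."""
    return {frozenset(E & S) for E in H}

def vc_dimension(H, vertices):
    """Size of the largest S whose trace has all 2^|S| subsets of S."""
    dim = 0
    for k in range(1, len(vertices) + 1):
        if any(len(trace(H, set(S))) == 2 ** k
               for S in combinations(vertices, k)):
            dim = k
    return dim

# A small hypergraph on vertices {1, 2, 3} with five edges.
H = [set(), {1}, {2}, {1, 2}, {1, 2, 3}]
print(vc_dimension(H, [1, 2, 3]))  # {1, 2} is shattered, so the answer is 2
```

With only 5 edges, no 3-element set can be shattered (that would require 8 distinct traces), which caps the VC-dimension at 2 here.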
Tight Lower Bounds for the Size of Epsilon-Nets
Cited by 11 (1 self)
According to a well-known theorem of Haussler and Welzl (1987), any range space of bounded VC-dimension admits an ε-net of size O(…)
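The ε-net property whose size this paper lower-bounds can be checked directly for a toy range space. A hypothetical sketch (interval ranges on a point set, and all names, are illustrative): N ⊆ P is an ε-net if every range containing at least ε·|P| points of P contains a point of N.

```python
def is_eps_net(P, N, ranges, eps):
    """Check the ε-net property: every 'heavy' range is hit by N.

    Each range is a predicate p -> bool; a range is heavy if it contains
    at least eps * len(P) points of the ground set P.
    """
    threshold = eps * len(P)
    for r in ranges:
        heavy = sum(1 for p in P if r(p)) >= threshold
        if heavy and not any(r(q) for q in N):
            return False
    return True

# Ground set: 10 points on a line; ranges: three intervals [a, b].
P = list(range(10))
intervals = [(0, 4), (3, 7), (6, 9)]
ranges = [lambda p, a=a, b=b: a <= p <= b for a, b in intervals]

N = [2, 5, 8]  # hits every interval above
print(is_eps_net(P, N, ranges, eps=0.5))  # True
```

The theorem says a small N with this property always exists when the VC-dimension is bounded; the paper shows how small it can be made in the worst case.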
Combinatorial Optimization: A Survey
, 1993
Cited by 3 (0 self)
This paper is a chapter of the forthcoming Handbook of Combinatorics, to be published by North-Holland. It surveys the basic techniques and methods in combinatorial optimization. We organize our material according to the fundamental algorithmic techniques and illustrate them on problems to which these methods have been applied successfully. Special attention is given to approximation algorithms and fast (primal and dual) heuristics.
ε-Samples for Kernels
 Proceedings 24th Annual ACM-SIAM Symposium on Discrete Algorithms
, 2013
Cited by 2 (2 self)
We study the worst-case error of kernel density estimates via subset approximation. A kernel density estimate of a distribution is the convolution of that distribution with a fixed kernel (e.g. a Gaussian kernel). Given a subset (i.e. a point set) of the input distribution, we can compare the kernel density estimate of the input distribution with that of the subset and bound the worst-case error. If the maximum error is ε, then this subset can be thought of as an ε-sample (aka an ε-approximation) of the range space defined with the input distribution as the ground set and the fixed kernel representing the family of ranges. Interestingly, in this case the ranges are not binary, but have a continuous range (for simplicity we focus on kernels with range [0, 1]); these allow for smoother notions of range spaces. It turns out, the use of this smoother family of range spaces has an added benefit of greatly decreasing the size required for ε-samples. For instance, in the plane the size is O((1/ε^{4/3}) log^{2/3}(1/ε)) for disks (based on VC-dimension arguments) but is only O((1/ε)√log(1/ε)) for Gaussian kernels and for kernels with bounded slope that only affect a bounded domain. These bounds are accomplished by studying the discrepancy of these "kernel" range spaces, and here the improvement in bounds is even more pronounced. In the plane, we show the discrepancy is O(√log n) for these kernels, whereas for …
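A minimal sketch of the quantity the abstract bounds: the worst-case difference between the kernel density estimate of a full point set P and that of a subset Q. The one-dimensional Gaussian kernel, the fixed query grid, and all names here are illustrative assumptions, not the paper's construction.

```python
import math

def kde(points, x, bandwidth=1.0):
    """Kernel density estimate at x with a Gaussian kernel (range in [0, 1])."""
    return sum(math.exp(-((x - p) / bandwidth) ** 2)
               for p in points) / len(points)

def worst_case_error(P, Q, queries):
    """Max KDE discrepancy between P and its subset Q over the query grid."""
    return max(abs(kde(P, x) - kde(Q, x)) for x in queries)

P = [i / 10 for i in range(100)]      # 100 evenly spaced points on [0, 10)
Q = P[::10]                           # a 10-point subset (every 10th point)
queries = [i / 5 for i in range(50)]  # query grid on [0, 10)

eps = worst_case_error(P, Q, queries)
print(f"Q is an eps-sample of P for eps = {eps:.3f}")
```

If the computed ε is small, Q serves as an ε-sample for this kernel range space; the abstract's point is that such Q can be much smaller than VC-dimension arguments for binary ranges (e.g. disks) would suggest.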