Results 1–10 of 15
Nearest neighbor queries in metric spaces
Discrete Comput. Geom., 1997
Abstract

Cited by 112 (1 self)
Given a set S of n sites (points), and a distance measure d, the nearest neighbor searching problem is to build a data structure so that given a query point q, the site nearest to q can be found quickly. This paper gives data structures for this problem when the sites and queries are in a metric space. One data structure, D(S), uses a divide-and-conquer recursion. The other data structure, M(S, Q), is somewhat like a skip list. Both are simple and implementable. The data structures are analyzed when the metric space obeys a certain sphere-packing bound, and when the sites and query points are random and have distributions with an exchangeability property. This property implies, for example, that query point q is a random element of S ∪ {q}. Under these conditions, the preprocessing and space bounds for the algorithms are close to linear in n. They depend also on the sphere-packing bound, and on the logarithm of the distance ratio Υ(S) of S, the ratio of the distance between the farthest pair of points in S to the distance between the closest pair. The data structure M(S, Q) requires as input data an additional set Q, taken to be representative of the query points. The resource bounds of M(S, Q) have a dependence on the distance ratio of S ∪ Q. While M(S, Q) can return wrong answers, its failure probability can be bounded, and is decreasing in a parameter K. Here K ≤ |Q|/n is chosen when building M(S, Q). The expected query time for M(S, Q) is O((K log n) log Υ(S ∪ Q)), and the resource bounds increase linearly in K. The data structure D(S) has expected (log n)^O(1) query time, for fixed distance ratio. The preprocessing algorithm for M(S, Q) can be used to solve the all-nearest-neighbors problem for S in O(n (log n)^2 (log Υ(S))^2) expected time.
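The notions above can be made concrete with a short sketch (the helper names are illustrative, not the paper's code): a linear-scan query is the O(n) baseline both structures aim to beat, and the distance ratio Υ(S) that governs the bounds is just the farthest-pair/closest-pair quotient, computed here naively.

```python
import math

def nearest_neighbor(sites, q, d):
    """Linear scan: the O(n) baseline that the paper's structures
    D(S) and M(S, Q) are designed to beat."""
    return min(sites, key=lambda s: d(s, q))

def distance_ratio(sites, d):
    """Distance ratio Upsilon(S): farthest-pair distance divided by
    closest-pair distance, computed naively in O(n^2) time."""
    dists = [d(a, b) for i, a in enumerate(sites) for b in sites[i + 1:]]
    return max(dists) / min(dists)

# Any metric d works; Euclidean distance is used here for illustration.
euclid = math.dist
S = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0), (10.0, 10.0)]
print(nearest_neighbor(S, (2.5, 0.5), euclid))  # -> (3.0, 0.0)
print(distance_ratio(S, euclid))                # sqrt(200)/3, about 4.71
```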
Sample compression, learnability, and the Vapnik-Chervonenkis dimension
Machine Learning, 1995
Abstract

Cited by 66 (4 self)
Within the framework of PAC-learning, we explore the learnability of concepts from samples using the paradigm of sample compression schemes. A sample compression scheme of size k for a concept class C ⊆ 2^X consists of a compression function and a reconstruction function. The compression function receives a finite sample set consistent with some concept in C and chooses a subset of k examples as the compression set. The reconstruction function forms a hypothesis on X from a compression set of k examples. For any sample set of a concept in C, the compression set produced by the compression function must lead to a hypothesis consistent with the whole original sample set when it is fed to the reconstruction function. We demonstrate that the existence of a sample compression scheme of fixed size for a class C is sufficient to ensure that the class C is PAC-learnable. Previous work has shown that a class is PAC-learnable if and only if the Vapnik-Chervonenkis (VC) dimension of the class i...
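As a worked instance of the definition (an illustration, not from the paper): the class of closed intervals on the reals admits a compression scheme of size two — keep the outermost positive examples, and reconstruct the smallest interval containing them. Any sample consistent with some interval is then relabeled correctly by the reconstructed hypothesis.

```python
def compress(sample):
    """Compression function for the class of closed intervals on the
    reals: keep at most two examples, the outermost positives."""
    pos = [x for x, label in sample if label]
    if not pos:
        return []
    return [(min(pos), 1), (max(pos), 1)]

def reconstruct(comp):
    """Reconstruction function: the smallest closed interval containing
    the compression set (the empty hypothesis if the set is empty)."""
    if not comp:
        return lambda x: 0
    lo, hi = comp[0][0], comp[-1][0]
    return lambda x: int(lo <= x <= hi)

# A sample labeled consistently with the (unknown) concept [2, 7].
sample = [(1.0, 0), (2.5, 1), (4.0, 1), (6.9, 1), (8.0, 0)]
h = reconstruct(compress(sample))
assert all(h(x) == label for x, label in sample)  # consistency holds
```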
Snap Rounding Line Segments Efficiently in Two and Three Dimensions
1997
Abstract

Cited by 30 (4 self)
We study the problem of robustly rounding a set S of n line segments in R^2 using the snap rounding paradigm. In this paradigm each pixel containing an endpoint or intersection point is called "hot," and all segments intersecting a hot pixel are rerouted to pass through its center. We show that a snap-rounded approximation to the arrangement defined by S can be built in an output-sensitive fashion, and that this can be done without first determining all the intersecting pairs of segments in S. Specifically, we give a deterministic plane-sweep algorithm running in time O(n log n + Σ_{h∈H} |h| log n), where H is the set of hot pixels and |h| is the number of segments intersecting a hot pixel h ∈ H. We also give a simple randomized incremental construction whose expected running time matches that of our deterministic algorithm. The complexity of these algorithms is optimal up to polylogarithmic factors. This research is supported by NSF grant CCR-9625289 and by U.S. ARO grant DAAH04...
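A simplified sketch of the paradigm on a unit pixel grid (assumptions: intersections are found naively in O(n^2) time rather than output-sensitively, and each segment is snapped only at the hot pixels containing its own endpoints and intersections, whereas the full paradigm also reroutes through hot pixels a segment merely crosses):

```python
import math
from fractions import Fraction as F
from itertools import combinations

def seg_intersection(a, b, c, d):
    """Exact intersection point of closed segments ab and cd, or None
    (parallel and collinear pairs are ignored in this sketch)."""
    r = (b[0] - a[0], b[1] - a[1])
    s = (d[0] - c[0], d[1] - c[1])
    denom = r[0] * s[1] - r[1] * s[0]
    if denom == 0:
        return None
    t = F((c[0] - a[0]) * s[1] - (c[1] - a[1]) * s[0], denom)
    u = F((c[0] - a[0]) * r[1] - (c[1] - a[1]) * r[0], denom)
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (a[0] + t * r[0], a[1] + t * r[1])
    return None

def pixel(p):       # unit pixel containing point p
    return (math.floor(p[0]), math.floor(p[1]))

def center(px):     # center of a unit pixel
    return (px[0] + F(1, 2), px[1] + F(1, 2))

def snap_round(segments):
    # Hot pixels arise from endpoints and pairwise intersections.
    events = {s: list(s) for s in segments}
    for s1, s2 in combinations(segments, 2):
        p = seg_intersection(s1[0], s1[1], s2[0], s2[1])
        if p is not None:
            events[s1].append(p)
            events[s2].append(p)
    # Reroute each segment through the centers of the hot pixels that
    # contain its own events, ordered along the segment.
    rounded = {}
    for s, pts in events.items():
        a = s[0]
        along = lambda q: (q[0] - a[0]) ** 2 + (q[1] - a[1]) ** 2
        rounded[s] = sorted({center(pixel(p)) for p in pts}, key=along)
    return rounded

segs = [((0, 0), (4, 4)), ((0, 4), (4, 0))]
polylines = snap_round(segs)  # each segment becomes a polyline of centers
```

Exact rational arithmetic (`fractions.Fraction`) keeps the sketch robust, which is the point of snap rounding in the first place.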
Randomized External-Memory Algorithms for Some Geometric Problems
International Journal of Computational Geometry & Applications, 2001
Abstract

Cited by 26 (3 self)
We show that the well-known random incremental construction of Clarkson and Shor [14] can be adapted via gradations to provide efficient external-memory algorithms for some geometric problems. In particular, as the main result, we obtain an optimal randomized algorithm for the problem of computing the trapezoidal decomposition determined by a set of N line segments in the plane with K pairwise intersections, that requires Θ((N/B) log_{M/B}(N/B) + K/B) expected disk accesses, where M is the size of the available internal memory and B is the size of the block transfer. The approach is sufficiently general to obtain algorithms also for the problems of 3-d halfspace intersections, 2-d and 3-d convex hulls, 2-d abstract Voronoi diagrams and batched planar point location, which require an optimal expected number of disk accesses and are simpler than the ones previously known. The results extend to an external-memory model with multiple disks. Additionally, under reasonable conditions on the parameters N, M, B, these results can be notably simplified, yielding practical algorithms which still achieve optimal expected bounds.
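The gradation itself is skip-list-style random sampling, sketched below (an illustration with hypothetical names, not the paper's external-memory machinery, whose contribution is running the incremental construction level by level so that each level's work can be batched I/O-efficiently):

```python
import random

def gradation(items, p=0.5, seed=0):
    """Build a gradation S = S_0 ⊇ S_1 ⊇ ...: each level keeps each
    element of the previous level independently with probability p,
    stopping when a level comes out empty. The incremental construction
    then inserts the levels from sparsest to densest, so that all the
    work for one level can be batched into sequential disk passes."""
    rng = random.Random(seed)
    levels = [list(items)]
    while levels[-1]:
        levels.append([x for x in levels[-1] if rng.random() < p])
    return levels[:-1]  # drop the final empty level
```

With p = 1/2 the expected number of levels is logarithmic in |items|, mirroring the skip-list analysis.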
Derandomization in Computational Geometry
1996
Abstract

Cited by 18 (1 self)
We survey techniques for replacing randomized algorithms in computational geometry by deterministic ones with a similar asymptotic running time.

1 Randomized algorithms and derandomization

A rapid growth of knowledge about randomized algorithms stimulates research in derandomization, that is, replacing randomized algorithms by deterministic ones with as small a decrease in efficiency as possible. Related to the problem of derandomization is the question of reducing the number of random bits needed by a randomized algorithm while retaining its efficiency; derandomization can be viewed as the ultimate case. Randomized algorithms are also related to probabilistic proofs and constructions in combinatorics (which came first historically), whose development has similarly been accompanied by the effort to replace them by explicit, nonrandom constructions whenever possible. Derandomization of algorithms can be seen as part of an effort to map the power of randomness and explain its role. ...
Computing faces in segment and simplex arrangements
In Proc. 27th Annu. ACM Sympos. Theory Comput., 1995
Abstract

Cited by 16 (10 self)
For a set S of n line segments in the plane, we give the first work-optimal deterministic parallel algorithm for constructing their arrangement. It runs in O(log^2 n) time using O(n log n + k) work in the EREW PRAM model, where k is the number of intersecting line segment pairs, and provides a fairly simple divide-and-conquer alternative to the optimal sequential "plane-sweep" algorithm of Chazelle and Edelsbrunner. Moreover, our method can be used to output all k intersecting pairs while using only O(n) working space, which solves an open problem posed by Chazelle and Edelsbrunner. We also describe a sequential algorithm for computing a single face in an arrangement of n line segments that runs in O(n 2^{α(n)} log n) time, which improves on a previous O(n log^2 n) time algorithm. For collections of simplices in R^d, we give methods for constructing a set of m = O(n^{d-1} log^c n + k) cells of constant descriptive complexity that covers their arrangement, where c > 1 is a constant and k is the number of faces in the arrangement. The construction is performed sequentially in O(m) time, or in O(log n) time using O(m) work in the EREW PRAM model. The covering can be augmented to answer point location queries in O(log n) time. In addition to supplying the first parallel methods for these problems, we improve on the previous best sequential methods by reducing the query times (from O(log^2 n) in R^3 and O(log^3 n) in R^d, d > 3), and also the size and construction cost of the covering (from O(n^{d-1+ε} + k)).
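For contrast, the trivial baseline for reporting intersecting pairs needs only constant working space beyond the output, but Θ(n^2) time regardless of k; the algorithm above keeps linear space while being output-sensitive. A sketch of that baseline (proper crossings only; touching and collinear cases deliberately omitted):

```python
def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def properly_intersect(s, t):
    """Closed segments s and t cross at an interior point of both
    (touching and collinear overlaps are left out of this sketch)."""
    (a, b), (c, d) = s, t
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def report_pairs(segs):
    """Theta(n^2)-time enumeration of all intersecting pairs, using
    constant working space beyond the output list itself."""
    return [(i, j)
            for i in range(len(segs))
            for j in range(i + 1, len(segs))
            if properly_intersect(segs[i], segs[j])]

segs = [((0, 0), (4, 4)), ((0, 4), (4, 0)), ((5, 5), (6, 5))]
print(report_pairs(segs))  # -> [(0, 1)]
```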
I/O-Efficient Construction of Voronoi Diagrams
2002
Abstract

Cited by 11 (0 self)
We consider the problems of computing 2- and 3-d Voronoi diagrams for large data sets efficiently. We describe a cache-oblivious distribution data structure (buffer tree) that is the basis for the cache-oblivious implementation of a random incremental construction for geometric problems. We then apply this to the construction of 2- and 3-d Voronoi diagrams. We also describe a very simple variant of the standard random incremental construction based on the history DAG, which has optimal running time and is likely to be I/O-efficient because the pattern of insertions is also local (but we don't have theoretical bounds). Finally, we describe a practical variant that has been implemented, and present some experimental results.
Speculative Parallelization of a Randomized Incremental Convex Hull Algorithm
Proc. Int’l Workshop Computational Geometry and Applications, 2004
Abstract

Cited by 5 (4 self)
Finding the fastest algorithm to solve a problem is one of the main issues in computational geometry. Focusing only on worst-case analysis or asymptotic computations leads to the development of complex data structures or hard-to-implement algorithms. Randomized algorithms appear in this scenario as a very useful tool for obtaining easier implementations within a good expected time bound. However, parallel implementations of these algorithms are hard to develop and require an in-depth understanding of the language, the compiler and the underlying parallel computer architecture. In this paper we show how speculative parallelization techniques can be used to execute in parallel iterative algorithms such as randomized incremental constructions. We focus on the convex hull problem, and show that, using our speculative parallelization engine, the sequential algorithm can be automatically executed in parallel, obtaining speedups with as few as four processors, and reaching a 5.15x speedup with 28 processors.
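The kind of loop being speculatively parallelized can be sketched as follows (an illustrative randomized incremental hull, not the authors' implementation): for a random insertion order most iterations only read the current hull, and only the rare point that falls outside it forces a write to shared state — exactly the pattern speculation exploits.

```python
import random

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def monotone_chain(pts):
    """Convex hull (counterclockwise) via Andrew's monotone chain."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    hull = []
    for seq in (pts, pts[::-1]):            # lower hull, then upper hull
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        hull += chain[:-1]
    return hull

def inside(hull, p):
    n = len(hull)
    return n >= 3 and all(
        cross(hull[i], hull[(i + 1) % n], p) >= 0 for i in range(n))

def incremental_hull(points, seed=0):
    """Randomized incremental loop: points inside the current hull are
    read-only iterations (safe to run speculatively in parallel); a
    point outside triggers the rare update of shared state."""
    pts = list(points)
    random.Random(seed).shuffle(pts)
    hull = monotone_chain(pts[:3])
    for p in pts[3:]:
        if not inside(hull, p):
            hull = monotone_chain(hull + [p])
    return sorted(hull)
```

Recomputing the hull from its current vertices plus the new point is correct because conv(S ∪ {p}) = conv(vertices(conv(S)) ∪ {p}); a production version would update the hull chain locally instead.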
I/O-Optimal Computation of Segment Intersections
1999
Abstract

Cited by 2 (2 self)
We investigate the I/O-complexity of computing the trapezoidal decomposition defined by a set of N line segments in the plane. We present a randomized algorithm which solves this problem optimally, requiring O((N/B) log_{M/B}(N/B) + K/B) expected I/O operations, where K is the number of pairwise intersections, M is the size of the available internal memory and B is the size of the block transfer. The proposed algorithm requires an optimal expected number of internal operations. As a byproduct, the algorithm also solves the segment intersection problem with the same number of I/Os and internal operations.
On the Computational Requirements of Virtual Reality Systems
1997
Abstract

Cited by 1 (0 self)
The computational requirements of high-quality, real-time rendering exceed the limits of generally available computing power. However, illumination effects other than shadows are less noticeable in moving pictures. Shadows can be produced with the same techniques used for visibility computations; therefore the basic requirements of real-time rendering are transformations, preselection of the part of the scene to be displayed, and visibility computations. Transformations scale well, i.e., their time requirement grows linearly with the input size. Preselection, if implemented by the traditional way of polygon clipping, has a growth rate of N log N in the worst case, where N is the total number of edges in the scene. Visibility computations, exhibiting a quadratic growth rate, are the bottleneck from a theoretical point of view. Three approaches are discussed to speed up visibility computations: (i) reducing the expected running time to O(N log N); (ii) using approximation algorithms with ...
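As a concrete point of comparison (an illustration, not from the paper): image-space methods such as the z-buffer resolve visibility per pixel, replacing object-space pairwise comparisons with cost proportional to pixels times primitives. A minimal sketch over screen-space triangles:

```python
import math

def cross(ax, ay, bx, by):
    return ax * by - ay * bx

def rasterize(tris, w, h):
    """Minimal z-buffer: visibility resolved per pixel in
    O(pixels * triangles) time, instead of comparing every pair of
    primitives in object space (the quadratic bottleneck noted above).
    Each triangle is three (x, y, z) screen-space vertices; returns a
    grid of 1-based triangle ids (0 = background)."""
    depth = [[math.inf] * w for _ in range(h)]
    color = [[0] * w for _ in range(h)]
    for cid, (v0, v1, v2) in enumerate(tris, 1):
        (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = v0, v1, v2
        area = cross(x1 - x0, y1 - y0, x2 - x0, y2 - y0)
        if area == 0:           # skip degenerate triangles
            continue
        for j in range(h):
            for i in range(w):
                px, py = i + 0.5, j + 0.5
                # Barycentric coordinates of the pixel center.
                l0 = cross(x1 - px, y1 - py, x2 - px, y2 - py) / area
                l1 = cross(x2 - px, y2 - py, x0 - px, y0 - py) / area
                l2 = 1 - l0 - l1
                if l0 >= 0 and l1 >= 0 and l2 >= 0:
                    z = l0 * z0 + l1 * z1 + l2 * z2
                    if z < depth[j][i]:     # nearer surface wins
                        depth[j][i] = z
                        color[j][i] = cid
    return color

tris = [((0, 0, 1.0), (8, 0, 1.0), (0, 8, 1.0)),
        ((0, 0, 0.5), (4, 0, 0.5), (0, 4, 0.5))]
img = rasterize(tris, 8, 8)  # the nearer small triangle wins where they overlap
```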