Results 1-10 of 22
Fast randomized point location without preprocessing in two and three-dimensional Delaunay triangulations
 Computational Geometry: Theory and Applications
, 1999
Abstract

Cited by 58 (3 self)
This paper studies the point location problem in Delaunay triangulations without preprocessing and additional storage. The proposed procedure finds the query point by simply “walking through” the triangulation, after selecting a “good starting point” by random sampling. The analysis generalizes and extends a recent result for d = 2 dimensions by proving that this procedure takes expected time close to O(n^(1/(d+1))) for point location in Delaunay triangulations of n random points in d = 3 dimensions. Empirical results in both two and three dimensions show ...
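The “jump and walk” strategy summarized above can be sketched compactly. The following is a minimal illustration, not the paper's implementation: it assumes a 2D triangulation given as counter-clockwise vertex triples over a point list (here a small hand-built grid), jumps to whichever of a few sampled triangles has its centroid nearest the query, and then walks across edges using orientation tests.

```python
import random

def orient(a, b, p):
    # Sign of the cross product (b - a) x (p - a); > 0 means p lies left of a->b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def build_edge_map(tris):
    # Map each undirected edge to the triangles sharing it, so the walk can step across edges.
    edges = {}
    for t, (a, b, c) in enumerate(tris):
        for e in ((a, b), (b, c), (c, a)):
            edges.setdefault(frozenset(e), []).append(t)
    return edges

def walk(pts, tris, edges, start, q):
    # Straight walk: repeatedly cross an edge that separates the current triangle from q.
    t = start
    while True:
        a, b, c = tris[t]
        for u, v in ((a, b), (b, c), (c, a)):
            if orient(pts[u], pts[v], q) < 0:          # q is on the far side of edge (u, v)
                nbrs = [s for s in edges[frozenset((u, v))] if s != t]
                if not nbrs:
                    return None                        # hit the boundary: q is outside
                t = nbrs[0]
                break
        else:
            return t                                   # no separating edge: q is inside t

def jump_and_walk(pts, tris, edges, q, samples=3, rng=random):
    # "Jump": sample a few triangles and start from the one whose centroid is nearest to q.
    def centroid(t):
        a, b, c = (pts[i] for i in tris[t])
        return ((a[0] + b[0] + c[0]) / 3, (a[1] + b[1] + c[1]) / 3)
    start = min(rng.sample(range(len(tris)), samples),
                key=lambda t: (centroid(t)[0] - q[0]) ** 2 + (centroid(t)[1] - q[1]) ** 2)
    return walk(pts, tris, edges, start, q)

# A 3x3 grid of points triangulated into 8 counter-clockwise triangles.
pts = [(i, j) for j in range(3) for i in range(3)]
tris = []
for j in range(2):
    for i in range(2):
        v = j * 3 + i
        tris.append((v, v + 1, v + 4))      # lower-right triangle of the cell
        tris.append((v, v + 4, v + 3))      # upper-left triangle of the cell
edges = build_edge_map(tris)

t = jump_and_walk(pts, tris, edges, (1.25, 0.1))
print(t, tris[t])
```

In a Delaunay triangulation of random points, sampling roughly n^(d/(d+1)) starting candidates balances the jump and walk costs, which is where the O(n^(1/(d+1))) expected bound comes from.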
Nearly Optimal Expected-Case Planar Point Location
Abstract

Cited by 17 (5 self)
We consider the planar point location problem from the perspective of expected search time. We are given a planar polygonal subdivision S and for each polygon of the subdivision the probability that a query point lies within this polygon. The goal is to compute a search structure to determine which cell of the subdivision contains a given query point, so as to minimize the expected search time. This is a generalization of the classical problem of computing an optimal binary search tree for one-dimensional keys. In the one-dimensional case it has long been known that the entropy H of the distribution is the dominant term in the lower bound on the expected-case search time, and further there exist search trees achieving expected search times of at most H + 2. Prior to this work, there has been no known structure for planar point location with an expected search time better than 2H, and this result required strong assumptions on the nature of the query point distribution. Here we present a data structure whose expected search time is nearly equal to the entropy lower bound, namely H + o(H). The result holds for any polygonal subdivision in which the number of sides of each of the polygonal cells is bounded, and there are no assumptions on the query distribution within each cell. We extend these results to subdivisions with convex cells, assuming a uniform query distribution within each cell.
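For reference, the entropy invoked above is H = Σ_z p_z log₂(1/p_z), summed over the cells z of the subdivision. A small sketch (the four-cell probabilities are made up for illustration) computes the quantity that lower-bounds the expected number of comparisons:

```python
import math

def entropy(probs):
    # H = sum over cells of p * log2(1/p): the information-theoretic lower
    # bound on the expected number of comparisons of any search structure.
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Hypothetical query probabilities for a four-cell subdivision.
probs = [0.5, 0.25, 0.125, 0.125]
H = entropy(probs)
print(H)  # 1.75 for this distribution
```

In the one-dimensional setting cited above, an optimal binary search tree then answers queries in at most H + 2 expected comparisons; the paper's contribution is getting H + o(H) in the plane.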
On the Exact Worst Case Query Complexity of Planar Point Location
 In Proceedings of the Ninth Annual ACM-SIAM Symposium on Discrete Algorithms
, 1998
Abstract

Cited by 15 (0 self)
What is the smallest constant c so that planar point location queries can be answered in c log₂ n + o(log n) steps (i.e., point-line comparisons) in the worst case? In SODA '97, Goodrich, Orletsky, and Ramaiyer [6] showed that c = 2 is possible using linear space and conjectured this to be optimal. We disprove this conjecture and show that c = 1 can be achieved. Moreover, by giving upper and lower bounds, we show that without space restrictions the worst-case query complexity of planar point location differs from log₂ n + 2√(log₂ n) by at most an additive term of (1/2) log₂ log₂ n + O(1). For the case of linear space we show the query complexity to be bounded by log₂ n + 2√(log₂ n) + O(log^(1/4) n).
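The unit operation being counted here, a “point-line comparison,” is just the sign of a 2×2 determinant. A minimal sketch (the function name is ours) of this primitive:

```python
def side_of_line(a, b, q):
    # One "point-line comparison": the sign of the determinant
    # | b.x - a.x   q.x - a.x |
    # | b.y - a.y   q.y - a.y |
    # +1 if q lies left of the directed line a->b, -1 if right, 0 if on it.
    d = (b[0] - a[0]) * (q[1] - a[1]) - (b[1] - a[1]) * (q[0] - a[0])
    return (d > 0) - (d < 0)

print(side_of_line((0, 0), (1, 0), (0.5, 1)))    # above the x-axis: +1
print(side_of_line((0, 0), (1, 0), (0.5, -1)))   # below the x-axis: -1
```

Each level of a point-location search structure resolves the query with one such comparison, so the constant c in c log₂ n counts exactly these sign evaluations.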
Entropy-Preserving Cuttings and Space-Efficient Planar Point Location
 In Proceedings of the Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms
, 2001
Abstract

Cited by 14 (4 self)
Point location is the problem of preprocessing a planar polygonal subdivision S into a data structure in order to determine efficiently the cell of the subdivision that contains a given query point. Given the probabilities p_z that the query point lies within each cell z ∈ S, a natural question is how to design such a structure so as to minimize the expected-case query time. The entropy H of the probability distribution is the dominant term in the lower bound on the expected-case search time. Clearly the number of edges n of the subdivision is a lower bound on the space required. There is no known approach that simultaneously achieves the goals of H + o(H) query time and O(n) space. In this paper we introduce entropy-preserving cuttings and show how to use them to achieve query time H + o(H), using only O(n log n) space.
Efficient Expected-Case Algorithms for Planar Point Location
, 2000
Abstract

Cited by 13 (4 self)
Planar point location is among the most fundamental search problems in computational geometry. Although this problem has been heavily studied from the perspective of worst-case query time, there has been surprisingly little theoretical work on expected-case query time. We are given an n-vertex planar polygonal subdivision S satisfying some weak assumptions (satisfied, for example, by all convex subdivisions). We are to preprocess this into a data structure so that queries can be answered efficiently. We assume that the two coordinates of each query point are generated independently by a probability distribution also satisfying some weak assumptions (satisfied, for example, by the uniform distribution). In the decision tree model of computation, it is well-known from information theory that a lower bound on the expected number of comparisons is entropy(S). We provide two data structures, one of size O(n²) that can answer queries in 2 entropy(S) + O(1) expected number ...
Proximate point searching
 In Proceedings of the 14th Canadian Conference on Computational Geometry (CCCG)
, 2002
Abstract

Cited by 11 (5 self)
In the 2D point searching problem, the goal is to preprocess n points P = {p_1, ..., p_n} in the plane so that, for an online sequence of query points q_1, ..., q_m, it can be quickly determined which (if any) of the elements of P are equal to each query point q_i. This problem can be solved in O(log n) time by mapping the problem to one dimension. We present a data structure that is optimized for answering queries quickly when they are geometrically close to the previous successful query. Specifically, our data structure executes queries in time O(log d(q_{i−1}, q_i)), where d is some distance function between two points, and uses O(n log n) space. Our structure works with a variety of distance functions. In contrast, it is proved that, for some of the most intuitive distance functions d, it is impossible to obtain an O(log d(q_{i−1}, q_i)) running time, or any bound that is o(log n).
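The distance-sensitive bound O(log d(q_{i−1}, q_i)) is reminiscent of one-dimensional finger search, where the cost depends on the rank distance from the previous query. The paper's actual 2D structure is more involved; the sketch below only illustrates the flavor in one dimension, galloping outward from the index of the last successful query before binary searching.

```python
import bisect

def finger_search(a, finger, key):
    # Locate key in the sorted list a, galloping outward from index `finger`
    # (the previous successful query). Cost is O(log r) comparisons, where r
    # is the rank distance between the finger and the key's position.
    n, step = len(a), 1
    if key >= a[finger]:
        lo = finger
        while finger + step < n and a[finger + step] < key:
            lo = finger + step
            step *= 2
        hi = min(n, finger + step)
    else:
        hi = finger
        while finger - step > 0 and a[finger - step] > key:
            hi = finger - step
            step *= 2
        lo = max(0, finger - step)
    i = bisect.bisect_left(a, key, lo, hi)
    return i if i < n and a[i] == key else -1

a = list(range(0, 1000, 2))          # 0, 2, 4, ..., 998
prev = finger_search(a, 0, 500)      # first query: no useful finger yet
print(prev)                          # index 250
print(finger_search(a, prev, 504))   # nearby follow-up query: few probes
```

Successive queries that land near each other touch only a logarithmic-in-the-gap number of elements, which is the behavior the proximate-searching structure achieves for geometric distances.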
Optimal Planar Point Location
 In Proceedings of the Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms
, 2001
Abstract

Cited by 11 (2 self)
Given a fixed distribution of point location queries among the regions of a triangulation of the plane, a data structure is presented that achieves, within constant multiplicative factors, the entropy bound on the expected point location query time.
Linear-Time Triangulation of a Simple Polygon Made Easier Via Randomization
, 2000
Abstract

Cited by 10 (0 self)
We describe a randomized algorithm for computing the trapezoidal decomposition of a simple polygon. Its expected running time is linear in the size of the polygon. By a well-known and simple linear-time reduction, this implies a linear-time algorithm for triangulating a simple polygon. Our algorithm is considerably simpler than Chazelle's (1991) celebrated optimal deterministic algorithm and, hence, positively answers his question of whether a simpler randomized algorithm for the problem exists. The new algorithm can be viewed as a combination of Chazelle's algorithm and of non-optimal randomized algorithms due to Clarkson et al. (1991) and to Seidel (1991), with the essential innovation that sampling is performed on subchains of the initial polygonal chain, rather than on its edges. It is also essential, as in Chazelle's algorithm, to include a bottom-up preprocessing phase prior to the top-down construction phase.
Progressive TINs: Algorithms and Applications
 In Proceedings of the 5th ACM Workshop on Advances in Geographic Information Systems, Las Vegas
, 1997
Abstract

Cited by 10 (2 self)
Transmission of geographic data over the Internet, rendering at different resolutions/levels of detail, and processing at unnecessarily fine detail pose interesting challenges and opportunities. In this paper we explore the applicability to GIS of the notion of progressive meshes, introduced by Hoppe [13] in the field of computer graphics. In particular, we describe progressive TINs as an alternative to hierarchical TINs, design algorithms for solving GIS tasks such as selective refinement, point location, visibility or line-of-sight queries, and isoline/contour-line extraction, and provide empirical results which show that our algorithms are of considerable practical relevance. Moreover, the selective refinement data structure and refinement algorithm answer a question posed by Hoppe.
Two Topics in Applied Algorithmics
, 1998
Abstract

Cited by 6 (0 self)
This thesis examines two largely unrelated problems in applied algorithmics, motivated by the search for efficient geometric algorithms. In the first part of the thesis, we consider the problem of finding efficient parallel algorithms for heterogeneous parallel computers, i.e., parallel computers in which different processors have different computational potential. To this end, we define a formal computational model for heterogeneous systems and develop algorithms for commonly used communication operations. The result is that many existing parallel algorithms which use these communication operations can be adapted to our model with little or no modification. In the second part of the thesis we consider the problem of geometric models which allow for varying levels of detail. To this end, we extend the progressive mesh representation introduced by Hoppe. The main technical contribution of this part is an efficient scheme for refining only selected regions of a progressive mesh. Using ...