Results 11–20 of 35
An Experimental Analysis of Change Propagation in Dynamic Trees
, 2005
Abstract

Cited by 21 (13 self)
Change propagation is a technique for automatically adjusting the output of an algorithm to changes in the input. The idea behind change propagation is to track the dependences between data and function calls, so that, when the input changes, functions affected by that change can be re-executed to update the computation and the output. Change propagation makes it possible for a compiler to dynamize static algorithms. The practical effectiveness of change propagation, however, is not known. In particular, the cost of dependence tracking and change propagation may seem significant. The contributions of the paper are twofold. First, we present experimental evidence that change propagation performs well when compared to direct implementations of dynamic algorithms. We implement change propagation on tree contraction as a solution to the dynamic trees problem and present an experimental evaluation of the approach. As a second contribution, we present a library for dynamic trees that supports a general interface, and present an experimental evaluation considering a broad set of applications. The dynamic-trees library relies on change propagation to handle edge insertions/deletions. The applications that we consider include path queries, subtree queries, least-common-ancestor queries, maintenance of centers and medians of trees, nearest-marked-vertex queries, semi-dynamic minimum spanning trees, and the max-flow algorithm of Sleator and Tarjan.
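The dependence-tracking idea described in this abstract can be illustrated with a toy sketch. The `Cell`/`Computation` classes below are hypothetical illustrations, not the paper's implementation: each computation records the cells it reads, and setting a cell re-executes the computations that read it. (A real change-propagation system would also discard stale dependences and re-execute in a correct order.)

```python
# Hypothetical minimal sketch of change propagation: each Cell remembers
# which computations read it; when a cell's value changes, those readers
# are re-executed so the output stays consistent with the input.
class Cell:
    def __init__(self, value):
        self.value = value
        self.readers = set()          # computations that depend on this cell

    def get(self, reader=None):
        if reader is not None:
            self.readers.add(reader)  # record the dependence
        return self.value

    def set(self, value):
        if value != self.value:
            self.value = value
            for comp in list(self.readers):
                comp.rerun()          # propagate the change

class Computation:
    def __init__(self, fn):
        self.fn = fn
        self.result = None
        self.rerun()

    def rerun(self):
        # re-read the inputs, re-recording dependences along the way
        self.result = self.fn(self)

# Usage: total adjusts automatically when an input cell changes.
a, b = Cell(2), Cell(3)
total = Computation(lambda c: a.get(c) + b.get(c))
print(total.result)   # 5
a.set(10)
print(total.result)   # 13
```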
Linear-time reconstruction of Delaunay triangulations with applications
 In Proc. Annu. European Sympos. Algorithms, number 1284 in Lecture Notes Comput. Sci
, 1997
Abstract

Cited by 20 (3 self)
Many of the computational geometers' favorite data structures are planar graphs, canonically determined by a set of geometric data, that take Θ(n log n) time to compute. Examples include 2d Delaunay triangulations, trapezoidations of segments, constrained Voronoi diagrams, and 3d convex hulls. Given such a structure, one can determine a permutation of the data in O(n) time such that the data structure can be reconstructed from the permuted data in O(n) time by a simple incremental algorithm. As a consequence, one can permute a data file to "hide" a geometric structure, such as a terrain model based on the Delaunay triangulation of a set of sampled points, without disrupting other applications. One can even include "importance" in the ordering so that the incremental reconstruction produces approximate terrain models as the data is read or received. For the Delaunay triangulation, we can also handle input in degenerate position, even though the data structures may no longer be cano...
Methods for Achieving Fast Query Times in Point Location Data Structures
, 1997
Abstract

Cited by 20 (1 self)
Given a collection S of n line segments in the plane, the planar point location problem is to construct a data structure that can efficiently determine, for a given query point p, the first segment(s) in S intersected by vertical rays emanating out from p. It is well known that linear-space data structures can be constructed so as to achieve O(log n) query times. But applications, such as those common in geographic information systems, motivate a re-examination of this problem with the goal of improving query times further while also simplifying the methods needed to achieve such query times. In this paper we perform such a re-examination, focusing on the issues that arise in three different classes of point-location query sequences: • sequences that are reasonably uniform spatially and temporally (in which case the constant factors in the query times become critical), • sequences that are non-uniform spatially or temporally (in which case one desires data structures that adapt to s...
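For context on the vertical ray-shooting queries described above, a classic (and space-hungry) way to get O(log n) query time is a slab decomposition. The following is a minimal sketch, assuming non-crossing segments given with the left endpoint first; it is not one of the linear-space structures the paper studies, only an illustration of the query model.

```python
# Hedged sketch of slab-based vertical ray shooting (not the paper's method).
# Build: cut the plane into vertical slabs at segment endpoints; in each slab,
# store the crossing segments sorted by height. Space is O(n^2) in the worst
# case; query is two binary searches.
import bisect

def y_at(seg, x):
    """Height of segment seg at abscissa x."""
    (x1, y1), (x2, y2) = seg
    if x2 == x1:
        return y1
    t = (x - x1) / (x2 - x1)
    return y1 + t * (y2 - y1)

def build_slabs(segments):
    # segments: list of ((x1, y1), (x2, y2)) with x1 <= x2, pairwise non-crossing
    xs = sorted({p[0] for seg in segments for p in seg})
    slabs = []
    for xl, xr in zip(xs, xs[1:]):
        mid = (xl + xr) / 2
        crossing = [s for s in segments if s[0][0] <= xl and s[1][0] >= xr]
        crossing.sort(key=lambda s: y_at(s, mid))  # non-crossing => consistent order
        slabs.append(crossing)
    return xs, slabs

def ray_up_query(xs, slabs, p):
    """First segment hit by the upward vertical ray from p, or None."""
    px, py = p
    i = bisect.bisect_right(xs, px) - 1        # locate the slab containing px
    if i < 0 or i >= len(slabs):
        return None
    ys = [y_at(s, px) for s in slabs[i]]
    j = bisect.bisect_right(ys, py)            # first segment strictly above p
    return slabs[i][j] if j < len(slabs[i]) else None

segments = [((0, 0), (4, 0)), ((0, 2), (4, 2))]
xs, slabs = build_slabs(segments)
print(ray_up_query(xs, slabs, (2, 1)))   # the upper segment ((0, 2), (4, 2))
```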
Two-Point Euclidean Shortest Path Queries in the Plane (Extended Abstract)
, 1999
Abstract

Cited by 18 (2 self)
To appear in Proc. Tenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '99), January 17–19, 1999. Yi-Jen Chiang, Joseph S. B. Mitchell. We consider the two-point query version of the fundamental geometric shortest path problem: Given a set of h polygonal obstacles in the plane, having a total of n vertices, build a data structure such that for any two query points s and t we can efficiently determine the length, d(s, t), of a Euclidean shortest obstacle-avoiding path, π(s, t), from s to t. Additionally, our data structure should allow one to report the path π(s, t) in time proportional to its (combinatorial) size. We present various methods for solving this two-point query problem, including algorithms with o(n), O(log n + h), O(h log n), O(log² n) or optimal O(log n) query times, using polynomial-space data structures, with various tradeoffs between space and query time. While several results have been known for approximate two-point Euclidean shortest p...
Dynamization of the Trapezoid Method for Planar Point Location in Monotone Subdivisions
 INTERNATIONAL JOURNAL OF COMPUTATIONAL GEOMETRY AND APPLICATIONS
, 1992
Abstract

Cited by 16 (5 self)
We present a fully dynamic data structure for point location in a monotone subdivision, based on the trapezoid method. The operations supported are insertion and deletion of vertices and edges, and horizontal translation of vertices. Let n be the current number of vertices of the subdivision. Point location queries take O(log n) time, while updates take O(log² n) time (amortized for vertex insertion/deletion and worst-case for the others). The space requirement is O(n log n). This is the first fully dynamic point location data structure for monotone subdivisions that achieves optimal query time.
Average case analysis of dynamic geometric optimization
, 1995
Abstract

Cited by 13 (3 self)
We maintain the maximum spanning tree of a planar point set, as points are inserted or deleted, in O(log³ n) expected time per update in Mulmuley's average-case model of dynamic geometric computation. We use as subroutines dynamic algorithms for two other geometric graphs: the farthest neighbor forest and the rotating caliper graph related to an algorithm for static computation of point set widths and diameters. We maintain the former graph in expected time O(log² n) per update and the latter in expected time O(log n) per update. We also use the rotating caliper graph to maintain the diameter, width, and minimum enclosing rectangle of a point set in expected time O(log n) per update. A subproblem uses a technique for average-case orthogonal range search that may also be of interest.
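The rotating caliper technique mentioned above is easiest to see in its static form. Below is a minimal sketch computing the diameter of a point set: build the convex hull, then rotate an antipodal pointer around it. This static version is only background for the abstract; the paper's contribution is maintaining such quantities under dynamic updates.

```python
# Static rotating-calipers diameter (background sketch, not the dynamic
# structure from the paper). For each hull edge, advance the antipodal
# vertex while the supporting triangle area keeps growing.
import math

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns the hull in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def diameter(points):
    h = convex_hull(points)
    n = len(h)
    if n < 2:
        return 0.0
    if n == 2:
        return math.dist(h[0], h[1])
    best, j = 0.0, 1
    for i in range(n):
        ni = (i + 1) % n
        # rotate the caliper: advance j while the triangle against
        # edge (h[i], h[ni]) grows, i.e. h[j] moves away from the edge
        while cross(h[i], h[ni], h[(j + 1) % n]) > cross(h[i], h[ni], h[j]):
            j = (j + 1) % n
        best = max(best, math.dist(h[i], h[j]), math.dist(h[ni], h[j]))
    return best

print(diameter([(0, 0), (1, 0), (1, 1), (0, 1)]))  # ≈ 1.41421 (√2)
```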
Simplification Culling of Static and Dynamic Scene Graphs
, 1998
Abstract

Cited by 11 (5 self)
We present a new approach for simplifying large polygonal environments composed of hundreds or thousands of objects. Our algorithm represents the environment using a scene graph and automatically computes levels of detail (LOD) for each node in the graph. For drastic simplification, the algorithm uses hierarchical levels of detail (HLOD) to represent the simplified geometry of whole portions of the scene graph. When HLOD are rendered, the algorithm can ignore these portions, thereby performing simplification culling. For dynamic environments, HLOD are incrementally computed on the fly. The algorithm is applicable to all models and involves no user intervention. It generates high quality and drastic simplifications and has been applied to CAD models composed of hundreds of thousands of polygons. In practice, it achieves significant speedups in rendering large static and dynamic environments with little loss in image quality.
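The simplification-culling idea can be sketched as a scene-graph traversal. The `Node`/`render` names and the distance-based error projection below are hypothetical simplifications, not the paper's algorithm: when a node's HLOD proxy is accurate enough on screen, it is drawn and the whole subtree is skipped.

```python
# Hypothetical sketch of simplification culling with HLOD: each interior
# node carries a simplified proxy (HLOD) for its entire subtree, plus an
# object-space error bound for that proxy.
class Node:
    def __init__(self, geometry, hlod, hlod_error, children=()):
        self.geometry = geometry      # full-detail geometry of this node
        self.hlod = hlod              # simplified proxy for the whole subtree
        self.hlod_error = hlod_error  # object-space error of the proxy
        self.children = list(children)

def screen_error(node, distance):
    # crude projection of object-space error to screen space (assumption:
    # error shrinks linearly with viewing distance)
    return node.hlod_error / max(distance, 1e-9)

def render(node, distance, tolerance, out):
    if node.children and screen_error(node, distance) <= tolerance:
        out.append(node.hlod)         # proxy is good enough: cull the subtree
        return
    out.append(node.geometry)         # otherwise draw this node ...
    for child in node.children:
        render(child, distance, tolerance, out)   # ... and recurse

# Usage: far away, one proxy is drawn; up close, the full graph is traversed.
root = Node("root", "root-hlod", 1.0,
            [Node("a", "a-hlod", 0.5), Node("b", "b-hlod", 0.5)])
far, near = [], []
render(root, distance=100.0, tolerance=0.1, out=far)
render(root, distance=1.0, tolerance=0.1, out=near)
print(far)    # ['root-hlod']
print(near)   # ['root', 'a', 'b']
```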
Range Searching and Point Location among Fat Objects
 Journal of Algorithms
, 1994
Abstract

Cited by 10 (0 self)
We present a data structure that can store a set of disjoint fat objects in d-space such that point location and bounded-size range searching with arbitrarily shaped ranges can be performed efficiently. The structure can deal with either arbitrary (fat) convex objects or non-convex polytopes. The multi-purpose data structure supports point location and range searching queries in time O(log^(d-1) n) and requires O(n log^(d-1) n) storage, after O(n log^(d-1) n log log n) preprocessing. The data structure and query algorithm are rather simple.

1 Introduction

Fatness turns out to be an interesting phenomenon in computational geometry. Several papers present surprising combinatorial complexity reductions [3, 15, 22, 26, 32] and efficiency gains for algorithms [1, 4, 19, 28, 33] if the objects under consideration have a certain fatness. Fat objects are compact to some extent, rather than long and thin. Fatness is a realistic assumption, since in many practical instances of ...
Improved Construction of Vertical Decompositions of Three-Dimensional Arrangements
 In Proc. 18th Annu
, 2002
Abstract

Cited by 7 (3 self)
We present new results concerning the refinement of three-dimensional arrangements by vertical decompositions. First, we describe a new output-sensitive algorithm for computing the vertical decomposition of arrangements of n triangles in O(n log n + V log n) time, where V is the complexity of the decomposition. This improves significantly over the best previously known algorithms. Next, we propose an alternative, sparser refinement, which we call the partial vertical decomposition, which has the advantages that it produces fewer cells and requires lower-degree constructors. We adapt the output-sensitive algorithm to efficiently compute the partial decomposition as well. We implemented algorithms that construct the full and the partial decompositions, and we compare the two types theoretically and experimentally. The improved output-sensitive construction extends to the case of arrangements of n well-behaved surfaces with the same asymptotic running time. We also extended the implementation to the case of polyhedral surfaces; this can serve as the basis for a robust implementation of approximations of arrangements of general surfaces.