Results 1 - 6 of 6
Dynamic Trees and Dynamic Point Location
 In Proc. 23rd Annu. ACM Sympos. Theory Comput., 1991
Abstract

Cited by 46 (9 self)
This paper describes new methods for maintaining a point-location data structure for a dynamically-changing monotone subdivision S. The main approach is based on the maintenance of two interlaced spanning trees, one for S and one for the graph-theoretic planar dual of S. Queries are answered by using a centroid decomposition of the dual tree to drive searches in the primal tree. These trees are maintained via the link-cut trees structure of Sleator and Tarjan, leading to a scheme that achieves vertex insertion/deletion in O(log n) time, insertion/deletion of k-edge monotone chains in O(log n + k) time, and answers queries in O(log^2 n) time, with O(n) space, where n is the current size of subdivision S. The techniques described also allow for the dual operations expand and contract to be implemented in O(log n) time, leading to an improved method for spatial point location in a 3-dimensional convex subdivision. In addition, the interlaced-tree approach is applied to on-line point-lo...
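The centroid decomposition mentioned in this abstract is a standard technique: repeatedly remove a tree's centroid (a vertex whose removal leaves components of at most half the size), so that the resulting decomposition tree has O(log n) depth, which bounds the number of search steps per query. A minimal static sketch of the idea (the paper itself maintains this dynamically via link-cut trees, which this sketch does not attempt):

```python
def centroid_decomposition(adj):
    """Build the centroid tree of an undirected tree given as {node: [neighbors]}.

    Returns (parent, root): parent maps each node to its parent in the
    centroid tree (None for the root).  The centroid tree has O(log n)
    depth, which is what bounds the number of search steps per query.
    """
    removed = set()
    size = {}

    def compute_sizes(u, p):
        size[u] = 1
        for v in adj[u]:
            if v != p and v not in removed:
                compute_sizes(v, u)
                size[u] += size[v]

    def find_centroid(u, p, n):
        # Walk toward any component heavier than n // 2 until none exists.
        for v in adj[u]:
            if v != p and v not in removed and size[v] > n // 2:
                return find_centroid(v, u, n)
        return u

    parent = {}

    def decompose(entry, centroid_parent):
        compute_sizes(entry, None)
        c = find_centroid(entry, None, size[entry])
        parent[c] = centroid_parent
        removed.add(c)
        for v in adj[c]:
            if v not in removed:
                decompose(v, c)
        return c

    root = decompose(next(iter(adj)), None)
    return parent, root
```

For a path on 7 vertices 0-1-2-3-4-5-6, the centroid tree is rooted at 3, with 1 and 5 as its children: three levels instead of a depth-7 walk.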
Fully Dynamic Point Location in a Monotone Subdivision
, 1989
Abstract

Cited by 23 (7 self)
In this paper a dynamic technique for locating a point in a monotone planar subdivision, whose current number of vertices is n, is presented. The (complete set of) update operations are insertion of a point on an edge and of a chain of edges between two vertices, and their reverse operations. The data structure uses space O(n). The query time is O(log n), the time for insertion/deletion of a point is O(log n), and the time for insertion/deletion of a chain with k edges is O(log n + k), all worst-case. The technique is conceptually a special case of the chain method of Lee and Preparata and uses the same query algorithm. The emergence of full dynamic capabilities is afforded by a subtle choice of the chain set (separators), which induces a total order on the set of regions of the planar subdivision.
A Unified Approach to Dynamic Point Location, Ray Shooting, and Shortest Paths in Planar Maps
, 1992
Abstract

Cited by 20 (6 self)
We describe a new technique for dynamically maintaining the trapezoidal decomposition of a connected planar map M with n vertices, and apply it to the development of a unified dynamic data structure that supports point-location, ray-shooting, and shortest-path queries in M. The space requirement is O(n log n). Point-location queries take time O(log n). Ray-shooting and shortest-path queries take time O(log^3 n) (plus O(k) time if the k edges of the shortest path are reported in addition to its length). Updates consist of insertions and deletions of vertices and edges, and take O(log^3 n) time (amortized for vertex updates).
Dynamic and I/O-Efficient Algorithms for Computational Geometry and Graph Problems: Theoretical and Experimental Results
, 1995
Abstract

Cited by 18 (4 self)
As most important applications today are large-scale in nature, high-performance methods are becoming indispensable. Two promising computational paradigms for large-scale applications are dynamic and I/O-efficient computations. We give efficient dynamic data structures for several fundamental problems in computational geometry, including point location, ray shooting, shortest path, and minimum-link path. We also develop a collection of new techniques for designing and analyzing I/O-efficient algorithms for graph problems, and illustrate how these techniques can be applied to a wide variety of specific problems, including list ranking, Euler tour, expression-tree evaluation, least-common ancestors, connected and biconnected components, minimum spanning forest, ear decomposition, topological sorting, reachability, graph drawing, and visibility representation. Finally, we present an extensive experimental study comparing the practical I/O efficiency of four algorithms for the orthogonal s...
Dynamization of the Trapezoid Method for Planar Point Location
, 1991
Abstract

Cited by 15 (4 self)
We present a fully dynamic data structure for point location in a monotone subdivision, based on the trapezoid method. The operations supported are insertion and deletion of vertices and edges, and horizontal translation of vertices. Let n be the current number of vertices of the subdivision. Point-location queries take O(log n) time, while updates take O(log^2 n) time. The space requirement is O(n log n). This is the first fully dynamic point-location data structure for monotone subdivisions that achieves optimal query time.
Trace Size vs. Parallelism in Trace-and-Replay Debugging of Shared-Memory Programs
 In Languages and Compilers for Parallel Computing, LNCS
, 1993
Abstract

Cited by 4 (0 self)
Execution replay is a debugging strategy where a program is run over and over on an input that manifests bugs. For explicitly parallel shared-memory programs, execution replay requires the support of special tools, because these programs can be nondeterministic: their executions can differ from run to run on the same input. For such programs, executions must be traced before they can be replayed for debugging. We present improvements over our past work on an adaptive tracing strategy that records only a fraction of the execution's shared-memory references. Our past approach makes run-time tracing decisions by detecting and tracing exactly the non-transitive dynamic data dependences among the execution's shared data. Tracing the non-transitive dependences provides sufficient information for a replay. In this paper we show that tracing exactly these dependences is not necessary. Instead, we present two algorithms that introduce and trace artificial dependences among some events that are actually independent. These artificial dependences reduce trace size, but introduce additional event orderings that can reduce the amount of parallelism achievable during replay. We present one algorithm that always adds dependences guaranteed not to be on the critical path and thus do not slow replay. Another algorithm adds as many dependences as possible, slowing replay but reducing trace size further. Experiments show that we can improve the already high trace reduction of our past technique by up to two more orders of magnitude, without slowing replay. Our new techniques usually trace only 0.00025-0.2% of the shared-memory references, a 3-6 order-of-magnitude reduction over past techniques, which trace every access.
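The baseline idea this abstract builds on, tracing only non-transitive dependences, amounts to a transitive reduction of the DAG of events (program order plus shared-memory dependences): any dependence implied by another path need not be written to the trace. A naive sketch of that baseline, with hypothetical names (the paper's new contribution, the artificial-dependence algorithms, is not shown):

```python
def transitive_reduction(edges):
    """Drop every dependence edge that is implied by some other path.

    edges: list of (u, v) pairs forming a DAG of events.  Only the
    surviving edges need to be traced; the dropped ones are recoverable
    at replay time from the remaining orderings.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set())

    def reachable(src, dst, skip):
        # DFS from src to dst, ignoring the one edge `skip`.
        stack, seen = [src], set()
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            if u in seen:
                continue
            seen.add(u)
            stack.extend(w for w in adj[u] if (u, w) != skip)
        return False

    return [(u, v) for u, v in edges if not reachable(u, v, (u, v))]
```

For two threads with program-order edges a1->a2 and b1->b2 and dependences a1->b1, a2->b2, and a1->b2, the edge a1->b2 is dropped because it is implied by a1->a2->b2. This naive check is quadratic; the paper's point is precisely that cheaper on-line choices, even ones adding extra orderings, can shrink the trace further.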