Results 1–10 of 29
Dynamic Ray Shooting and Shortest Paths in Planar Subdivisions via Balanced Geodesic Triangulations
 J. Algorithms
, 1997
Abstract

Cited by 37 (3 self)
We give new methods for maintaining a data structure that supports ray shooting and shortest path queries in a dynamically changing connected planar subdivision S. Our approach is based on a new dynamic method for maintaining a balanced decomposition of a simple polygon via geodesic triangles. We maintain such triangulations by viewing their dual trees as balanced trees. We show that rotations in these trees can be implemented via a simple "diagonal swapping" operation performed on the corresponding geodesic triangles, and that edge insertion and deletion can be implemented on these trees using operations akin to the standard split and splice operations. We also maintain a dynamic point location structure on the geodesic triangulation, so that we may implement ray shooting queries by first locating the ray's endpoint and then walking along the ray from geodesic triangle to geodesic triangle until we hit the boundary of some region of S. The shortest path between two points in the same ...
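The paper's structure answers each ray-shooting query in polylogarithmic time. As a point of reference only, the semantics of the query can be fixed with a brute-force sketch that tests the ray against every subdivision edge and keeps the nearest hit; the function name and the tuple representation below are illustrative, not the paper's.

```python
def shoot_ray(origin, direction, segments):
    """Naive O(m) ray shooting: return the closest intersection of the
    ray origin + t*direction (t >= 0) with any of m segments, or None
    if the ray escapes.  Points are (x, y) tuples."""
    ox, oy = origin
    dx, dy = direction
    best_t, best_pt = None, None
    for (ax, ay), (bx, by) in segments:
        ex, ey = bx - ax, by - ay
        denom = dx * ey - dy * ex            # cross(d, e)
        if denom == 0:
            continue                         # ray parallel to segment
        # Solve origin + t*d = a + s*e for t >= 0 and 0 <= s <= 1.
        t = ((ax - ox) * ey - (ay - oy) * ex) / denom
        s = ((ax - ox) * dy - (ay - oy) * dx) / denom
        if t >= 0 and 0 <= s <= 1 and (best_t is None or t < best_t):
            best_t, best_pt = t, (ox + t * dx, oy + t * dy)
    return best_pt
```

The geodesic-triangulation structure replaces the linear scan by a walk through O(polylog) triangles per query.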
Finding maximal pairs with bounded gap
 Proceedings of the 10th Annual Symposium on Combinatorial Pattern Matching (CPM), volume 1645 of Lecture Notes in Computer Science
, 1999
Abstract

Cited by 26 (5 self)
A pair in a string is the occurrence of the same substring twice. A pair is maximal if the two occurrences of the substring cannot be extended to the left and right without making them different. The gap of a pair is the number of characters between the two occurrences of the substring. In this paper we present methods for finding all maximal pairs under various constraints on the gap. In a string of length n we can find all maximal pairs with gap in an upper and lower bounded interval in time O(n log n + z) where z is the number of reported pairs. If the upper bound is removed the time reduces to O(n+z). Since a tandem repeat is a pair where the gap is zero, our methods can be seen as a generalization of finding tandem repeats. The running time of our methods equals the running time of well known methods for finding tandem repeats.
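The definitions above can be pinned down with a brute-force enumerator; it is cubic in the worst case and stands in for the paper's O(n log n + z) suffix-tree method only to illustrate left-/right-maximality and the gap constraint (the function name and output format are ours).

```python
def maximal_pairs(s, lo, hi):
    """Brute-force list of maximal pairs (i, j, length) in s whose gap
    j - (i + length) lies in [lo, hi].  A pair is maximal if the two
    occurrences cannot be extended left or right and stay identical."""
    n = len(s)
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            if s[i] != s[j]:
                continue
            # Left-maximal: the characters just before the two
            # occurrences differ (or the left occurrence starts at 0).
            if i > 0 and s[i - 1] == s[j - 1]:
                continue
            # Extend to the longest common prefix of s[i:] and s[j:];
            # stopping at the first mismatch makes the pair right-maximal.
            k = 0
            while j + k < n and s[i + k] == s[j + k]:
                k += 1
            gap = j - (i + k)
            if lo <= gap <= hi:
                out.append((i, j, k))
    return out
```

With `lo = hi = 0` this reports exactly the tandem repeats, matching the generalization claimed in the abstract.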
Implementing Radixsort
 ACM Jour. of Experimental Algorithmics
, 1998
Abstract

Cited by 20 (1 self)
We present and evaluate several new optimization and implementation techniques for string sorting. In particular, we study a recently published radix sorting algorithm, Forward radixsort, that has a provably good worst-case behavior. Our experimental results indicate that radix sorting is considerably faster (often more than twice as fast) than comparison-based sorting methods. This is true even for small input sequences. We also show that it is possible to implement a radix sort with good worst-case running time without sacrificing average-case performance. Our implementations are competitive with the best previously published string sorting algorithms. Code, test data, and test results are available from the World Wide Web. 1. Introduction. Radix sorting is a simple and very efficient sorting method that has received too little attention. A common misconception is that a radix sorting algorithm either has to inspect all the characters of the input or use an inordinate amount of extra...
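The character-at-position bucketing that all MSD-style string radix sorts share can be sketched as follows. This is the textbook recursive variant, not the paper's Forward radixsort, and it makes no attempt at the paper's cache and worst-case optimizations.

```python
def msd_radixsort(strings):
    """Most-significant-digit radix sort for strings: bucket by the
    character at the current position, recurse per bucket, and emit
    exhausted strings first (they sort before any longer extension)."""
    def sort(bucket, pos):
        if len(bucket) <= 1:
            return bucket
        done, groups = [], {}
        for s in bucket:
            if len(s) == pos:
                done.append(s)           # string ends here: smallest
            else:
                groups.setdefault(s[pos], []).append(s)
        for ch in sorted(groups):        # visit buckets in char order
            done.extend(sort(groups[ch], pos + 1))
        return done
    return sort(list(strings), 0)
```

Each character of the input is inspected at most once per level of recursion, which is the intuition behind the misconception the introduction dispels.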
Discrete Loops and Worst Case Performance
 Computer Languages
, 1994
Abstract

Cited by 18 (7 self)
In this paper so-called discrete loops are introduced which narrow the gap between general loops (e.g. while- or repeat-loops) and for-loops. Although discrete loops can be used for applications that would otherwise require general loops, discrete loops are known to complete in any case. Furthermore it is possible to determine the number of iterations of a discrete loop, while this is trivial to do for for-loops and extremely difficult for general loops. Thus discrete loops form an ideal framework for determining the worst-case timing behavior of a program and they are especially useful in implementing real-time systems and proving such systems correct.
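Discrete loops are a language construct, so the following Python rendering is only an illustration of the key property: when the control variable follows a fixed discrete function (here, doubling), the iteration count is computable before the loop runs, which a general while-loop does not guarantee.

```python
import math

def discrete_loop_bound(n, start=1, step=lambda i: 2 * i):
    """Run i = start, step(start), step(step(start)), ... while i < n
    and count iterations.  For the doubling step the count is exactly
    ceil(log2(n / start)), known in advance -- the property discrete
    loops guarantee by construction."""
    iterations = 0
    i = start
    while i < n:
        i = step(i)
        iterations += 1
    return iterations

# predicted == measured for a doubling discrete loop
n = 1000
assert discrete_loop_bound(n) == math.ceil(math.log2(n))
```

A worst-case-execution-time analyzer can use the closed-form count where it would have to give up on an arbitrary while-loop.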
A Complete and Efficient Algorithm for the Intersection of a General and a Convex Polyhedron
 IN PROC. 3RD WORKSHOP ALGORITHMS DATA STRUCT
, 1993
Abstract

Cited by 17 (1 self)
A polyhedron is any set that can be obtained from the open half-spaces by a finite number of set complement and set intersection operations. We give an efficient and complete algorithm for intersecting two three-dimensional polyhedra, one of which is convex. The algorithm is efficient in the sense that its running time is bounded by the size of the inputs plus the size of the output times a logarithmic factor. The algorithm is complete in the sense that it can handle all inputs and requires no general position assumption. We also describe a novel data structure that can represent all three-dimensional polyhedra (the set of polyhedra representable by all previous data structures is not closed under the basic boolean operations).
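The full 3D algorithm is involved, but its core idea, that intersecting with a convex object is a sequence of half-space clips, already shows in the 2D analogue: clipping a polygon against each supporting half-plane of a convex polygon (the Sutherland-Hodgman step). All names below are ours.

```python
def clip_by_halfplane(poly, a, b, c):
    """Keep the part of polygon `poly` with a*x + b*y + c >= 0.
    `poly` is a counterclockwise list of (x, y) vertices."""
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        fp = a * p[0] + b * p[1] + c
        fq = a * q[0] + b * q[1] + c
        if fp >= 0:
            out.append(p)
        if (fp >= 0) != (fq >= 0):           # edge crosses the line
            t = fp / (fp - fq)
            out.append((p[0] + t * (q[0] - p[0]),
                        p[1] + t * (q[1] - p[1])))
    return out

def intersect_convex(poly, convex):
    """Intersect `poly` with a convex CCW polygon by clipping against
    each edge's inward half-plane in turn -- the 2D analogue of
    intersecting with a convex polyhedron."""
    for i in range(len(convex)):
        (x1, y1), (x2, y2) = convex[i], convex[(i + 1) % len(convex)]
        # Inward side of CCW edge (x1,y1)->(x2,y2) is its left side.
        poly = clip_by_halfplane(poly, y1 - y2, x2 - x1,
                                 x1 * y2 - x2 * y1)
        if not poly:
            break
    return poly
```

The paper's contribution is doing the analogous 3D computation output-sensitively and without general-position assumptions; this sketch ignores both concerns.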
Computing the quartet distance between evolutionary trees in time O(n log n)
 Algorithmica
, 2001
Abstract

Cited by 16 (5 self)
Evolutionary trees describing the relationship for a set of species are central in evolutionary biology, and quantifying differences between evolutionary trees is therefore an important task. The quartet distance is a distance measure between trees previously proposed by Estabrook, McMorris and Meacham. The quartet distance between two unrooted evolutionary trees is the number of quartet topology differences between the two trees, where a quartet topology is the topological subtree induced by four species. In this paper, we present an algorithm for computing the quartet distance between two unrooted evolutionary trees of n species, where all internal nodes have degree three, in time O(n log n). The previous best algorithm for the problem uses time O(n^2).
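The quartet distance is easy to state as a brute-force computation using the four-point condition: in a tree with positive edge lengths, the quartet {a,b,c,d} resolves as ab|cd exactly when d(a,b) + d(c,d) is the smallest of the three pairing sums. The O(n^4) sketch below (our names, unit edge lengths, adjacency-dict trees) is what the paper's O(n log n) algorithm improves on.

```python
from itertools import combinations

def quartet_topologies(adj, leaves):
    """Map each 4-leaf subset to its induced topology, chosen by the
    four-point condition on BFS (unit-length) leaf distances."""
    dist = {}
    for src in leaves:                      # all-pairs distances by BFS
        seen, frontier = {src: 0}, [src]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in seen:
                        seen[v] = seen[u] + 1
                        nxt.append(v)
            frontier = nxt
        for dst in leaves:
            dist[src, dst] = seen[dst]
    topo = {}
    for a, b, c, d in combinations(sorted(leaves), 4):
        pairings = [(dist[a, b] + dist[c, d], (a, b, c, d)),
                    (dist[a, c] + dist[b, d], (a, c, b, d)),
                    (dist[a, d] + dist[b, c], (a, d, b, c))]
        topo[a, b, c, d] = min(pairings)[1]  # smallest sum wins
    return topo

def quartet_distance(adj1, adj2, leaves):
    """Number of 4-leaf subsets whose topology differs between trees."""
    t1 = quartet_topologies(adj1, leaves)
    t2 = quartet_topologies(adj2, leaves)
    return sum(t1[q] != t2[q] for q in t1)
```

Because all internal nodes have degree three, every quartet is resolved and the minimum in the four-point test is strict.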
On the Exact Worst Case Query Complexity of Planar Point Location
 IN PROCEEDINGS OF THE NINTH ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS
, 1998
Abstract

Cited by 15 (0 self)
What is the smallest constant c so that planar point location queries can be answered in c·log₂ n + o(log n) steps (i.e. point-line comparisons) in the worst case? In SODA '97 Goodrich, Orletsky, and Ramaiyer [6] showed that c = 2 is possible using linear space and conjectured this to be optimal. We disprove this conjecture and show that c = 1 can be achieved. Moreover, by giving upper and lower bounds we show that without space restrictions the worst-case query complexity of planar point location differs from log₂ n + 2√(log₂ n) by at most an additive term of (1/2)·log₂ log₂ n + O(1). For the case of linear space we show the query complexity to be bounded by log₂ n + 2√(log₂ n) + O(log^{1/4} n).
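The query primitive counted here, a point-line comparison, is just an orientation test. A classical warm-up that uses O(log n) such comparisons is locating a point relative to a single convex polygon by binary search on the fan from one vertex; this is not the paper's subdivision structure, only an illustration of the cost model (names are ours).

```python
def locate_in_convex(poly, p):
    """Point-in-convex-polygon test with O(log n) point-line
    comparisons.  `poly` is a counterclockwise list of (x, y) vertices."""
    def side(a, b, q):                       # one point-line comparison
        return (b[0] - a[0]) * (q[1] - a[1]) - (b[1] - a[1]) * (q[0] - a[0])

    n = len(poly)
    # Reject points outside the angular fan spanned at poly[0].
    if side(poly[0], poly[1], p) < 0 or side(poly[0], poly[n - 1], p) > 0:
        return False
    lo, hi = 1, n - 1                        # binary-search the wedge
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if side(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    # One final comparison against the far edge of the wedge.
    return side(poly[lo], poly[lo + 1], p) >= 0
```

The paper's question is how small the constant in front of log₂ n can be made for point location in an arbitrary planar subdivision, not just one convex region.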
B-Trees with Relaxed Balance
 In Proceedings of the 9th International Parallel Processing Symposium
, 1993
Abstract

Cited by 13 (6 self)
B-trees with relaxed balance have been defined to facilitate fast updating on shared-memory asynchronous parallel architectures. To obtain this, rebalancing has been uncoupled from the updating such that extensive locking can be avoided in connection with updates. We analyze B-trees with relaxed balance, and prove that each update gives rise to at most ⌊log_a(N/2)⌋ + 1 rebalancing operations, where a is the degree of the B-tree, and N is the bound on its maximal size since it was last in balance. Assuming that the size of nodes is at least twice the degree, we prove that rebalancing can be performed in amortized constant time. So, in the long run, rebalancing is constant time on average, even if any particular update could give rise to logarithmic time rebalancing. We also prove that the amount of rebalancing done at any particular level decreases exponentially going from the leaves towards the root. This is important since the higher up in the tree a lock due to a rebalancing operat...
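The per-update bound ⌊log_a(N/2)⌋ + 1 from the abstract is easy to evaluate numerically, which gives a feel for how small it is in practice (the function name is ours):

```python
import math

def rebalancing_bound(a, N):
    """Upper bound from the paper on the number of rebalancing
    operations one update can trigger in a relaxed-balance B-tree of
    degree a whose size has been at most N since it was last balanced."""
    return math.floor(math.log(N / 2, a)) + 1

# A degree-100 tree holding a million keys needs at most 3
# rebalancing operations per update.
assert rebalancing_bound(100, 1_000_000) == 3
```

Since high-degree B-trees are shallow, the bound stays tiny even for very large N, which is why the amortized cost can be constant.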
Leap Forward Virtual Clock: An O(log log N) Fair Queuing Scheme with Guaranteed Delays and Throughput Fairness
, 1996
Abstract

Cited by 12 (0 self)
We describe an efficient fair queuing scheme, Leap Forward Virtual Clock, that provides end-to-end delay bounds almost identical to those of PGPS fair queuing, along with throughput fairness. Our scheme can be implemented with a worst-case time O(log log N) per packet (inclusive of sorting costs), which improves upon all previously known schemes that achieve guaranteed delay and throughput fairness. As its name suggests, our scheme is based on Zhang's virtual clock. While the original virtual clock scheme does not achieve throughput fairness, we can modify it with a simple leap forward mechanism that keeps the server clock from lagging too far behind the packet tags. We prove that our scheme guarantees a fair share of the available bandwidth to each of the backlogged users, while precisely matching the delay bounds of PGPS schemes. In order to improve computational efficiency, we introduce a "coarsened" version of our scheme in which all tags assume values from a set of O(N) integers. We then use "approximate sorting" and a finite-universe priority queue to achieve O(log log N) processing time per packet. We can show that the coarsening of tags increases the delay bound by a very small additive constant. Finally, our proofs are based on a dual version of the algorithm called Leap Backward, whose behavior is identical to the Leap Forward but which admits a simpler analysis.
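The virtual-clock tagging and the leap-forward idea can be sketched in a toy model. This is only an illustration of the mechanism described above; the O(log log N) priority queue, tag coarsening, and the precise lag bound of the paper are not modeled, and all names and the `max_lag` rule below are ours.

```python
class LeapForwardVirtualClock:
    """Toy sketch: virtual-clock stamping plus a leap-forward rule that
    keeps the server clock close behind the tags it serves."""

    def __init__(self, max_lag):
        self.clock = 0.0
        self.tags = {}                  # flow id -> last finish tag
        self.max_lag = max_lag

    def arrive(self, flow, length, rate):
        # Virtual-clock stamp: continue the flow's schedule or restart
        # it at the current virtual time, whichever is later.
        tag = max(self.clock, self.tags.get(flow, 0.0)) + length / rate
        self.tags[flow] = tag
        return tag

    def serve(self, tag):
        # Leap forward: never let the server clock lag more than
        # max_lag behind the tag being served, so an idle flow cannot
        # bank unbounded credit -- the fairness fix to plain virtual clock.
        self.clock = max(self.clock, tag - self.max_lag)
```

In plain virtual clock the server time can fall arbitrarily far behind the tags of busy flows, letting a returning idle flow monopolize the link; the leap in `serve` is what rules that out.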
Worst-Case Space and Time Complexity of Recursive Procedures
Abstract

Cited by 10 (4 self)
The purpose of this paper is to show that recursive procedures can be used for implementing real-time applications without harm, if a few conditions are met. These conditions ensure that upper bounds for space and time requirements can be derived at compile time. Moreover they are simple enough that many important recursive algorithms can be implemented, for example Mergesort or recursive tree-traversal algorithms. In addition,
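Mergesort is a good example of why such compile-time bounds are possible: its recursion depth, and hence its stack space, is a simple function of the input size. The instrumented sketch below (our code, not the paper's analysis) measures the depth and checks it against the ceil(log₂ n) bound a static analysis would derive.

```python
import math

def mergesort(xs, depth=0):
    """Mergesort that also reports its maximal recursion depth -- the
    quantity a compile-time space analysis would bound by
    ceil(log2 n) + 1 stack frames."""
    if len(xs) <= 1:
        return xs, depth
    mid = len(xs) // 2
    left, dl = mergesort(xs[:mid], depth + 1)
    right, dr = mergesort(xs[mid:], depth + 1)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # standard two-way merge
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, max(dl, dr)

data = list(range(1000, 0, -1))
out, depth = mergesort(data)
assert out == sorted(data)
assert depth <= math.ceil(math.log2(len(data)))
```

Because the subproblem size is cut in half at every call, the depth bound holds for every input of size n, which is exactly the kind of condition the paper requires of a real-time-safe recursive procedure.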