Results 11–20 of 27
Topology B-Trees and Their Applications
Abstract

Cited by 15 (0 self)
The well-known B-tree data structure provides a mechanism for dynamically maintaining balanced binary trees in external memory. We present an external-memory dynamic data structure for maintaining arbitrary binary trees. Our data structure, which we call the topology B-tree, is an external-memory analogue of the internal-memory topology tree data structure of Frederickson. It allows for dynamic expression evaluation and updates as well as various tree searching and evaluation queries. We show how to apply this data structure to a number of external-memory dynamic problems, including approximate nearest-neighbor searching and closest-pair maintenance. 1 Introduction The B-tree [8, 12, 14, 15] data structure is a very efficient and powerful way of maintaining balanced binary trees in external memory [1, 11, 13, 18, 19, 21, 22, 2]. Indeed, in his well-known survey paper [8], Comer calls B-trees "ubiquitous," for they are found in a host of different applications. Nevertheless, there ar...
Randomized Data Structures for the Dynamic Closest-Pair Problem
, 1993
Abstract

Cited by 10 (2 self)
We describe a new randomized data structure, the sparse partition, for solving the dynamic closest-pair problem. Using this data structure the closest pair of a set of n points in D-dimensional space, for any fixed D, can be found in constant time. If a frame containing all the points is known in advance, and if the floor function is available at unit cost, then the data structure supports insertions into and deletions from the set in expected O(log n) time and requires expected O(n) space. Here, it is assumed that the updates are chosen by an adversary who does not know the random choices made by the data structure. This method is more efficient than any deterministic algorithm for solving the problem in dimension D > 1. The data structure can be modified to run in O(log² n) expected time per update in the algebraic computation tree model of computation. Even this version is more efficient than the currently best known deterministic algorithm for D ≥ 2. 1 Introduction We ...
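The floor-function grid idea underlying this line of work can be sketched in a few lines. Below is a minimal static version (a sketch of the classic randomized grid scheme, not the dynamic sparse-partition structure itself; the 2-D restriction and function names are illustrative assumptions): points are inserted in random order into a grid whose cell width equals the closest distance seen so far, so each new point inspects only a 3×3 neighborhood of cells, and the grid is rebuilt whenever that distance shrinks.

```python
import math
import random

def closest_pair(points):
    """Expected O(n) randomized closest pair of 2-D points via a grid."""
    pts = points[:]
    random.shuffle(pts)
    d = math.dist(pts[0], pts[1])
    best = (pts[0], pts[1])

    def make_grid(prefix, width):
        grid = {}
        for p in prefix:
            cell = (math.floor(p[0] / width), math.floor(p[1] / width))
            grid.setdefault(cell, []).append(p)
        return grid

    if d == 0:
        return 0.0, best
    grid = make_grid(pts[:2], d)
    for i in range(2, len(pts)):
        p = pts[i]
        # cell width = d, so any point closer than d lies in the 3x3 block
        cx, cy = math.floor(p[0] / d), math.floor(p[1] / d)
        nd, nb = d, best
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for q in grid.get((cx + dx, cy + dy), []):
                    dq = math.dist(p, q)
                    if dq < nd:
                        nd, nb = dq, (p, q)
        if nd < d:                       # closest distance shrank: rebuild grid
            d, best = nd, nb
            if d == 0:
                return 0.0, best
            grid = make_grid(pts[:i + 1], d)
        else:                            # otherwise just file p into its cell
            grid.setdefault((cx, cy), []).append(p)
    return d, best
```

The random insertion order is what makes the expected number of rebuilds small; an adversary who fixes the point set in advance cannot force quadratic behavior.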
Computational Geometry
 in optimization of 2.5D and 3D NC surface machining. Computers in Industry
, 1996
Abstract

Cited by 9 (0 self)
Introduction Computational geometry evolved from the classical discipline of design and analysis of algorithms, and has received a great deal of attention in the last two decades since its inception in 1975 by M. Shamos [108]. It is concerned with the computational complexity of geometric problems that arise in various disciplines such as pattern recognition, computer graphics, computer vision, robotics, VLSI layout, operations research, statistics, etc. In contrast with the classical approach to proving mathematical theorems about geometry-related problems, this discipline emphasizes the computational aspect of these problems and attempts to exploit the underlying geometric properties where possible, e.g., the metric space, to derive efficient algorithmic solutions. The classical theorem, for instance, that a set S is convex if and only if for any 0 ≤ α ≤ 1 the convex combination αp + (1 − α ...
I/O-efficient well-separated pair decomposition and its applications
 In Proc. Annual European Symposium on Algorithms
, 2000
Abstract

Cited by 8 (1 self)
Abstract. We present an external memory algorithm to compute a well-separated pair decomposition (WSPD) of a given point set P in R^d in O(sort(N)) I/Os using O(N/B) blocks of external memory, where N is the number of points in P, and sort(N) denotes the I/O complexity of sorting N items. (Throughout this paper we assume that the dimension d is fixed.) We also show how to dynamically maintain the WSPD in O(log_B N) I/Os per insert or delete operation using O(N/B) blocks of external memory. As applications of the WSPD, we show how to compute a linear-size t-spanner for P within the same I/O and space bounds and how to solve the K-nearest neighbor and K-closest pair problems in O(sort(KN)) and O(sort(N + K)) I/Os using O(KN/B) and O((N + K)/B) blocks of external memory, respectively. Using the dynamic WSPD, we show how to dynamically maintain the closest pair of P in O(log_B N) I/Os per insert or delete operation using O(N/B) blocks of external memory.
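For intuition, the internal-memory WSPD construction that such papers adapt can be sketched with a split tree. This is an illustrative in-memory sketch under simple assumptions (2-D points, midpoint fair split, separation rule "box distance ≥ s · max box diameter"), not the authors' I/O-efficient algorithm:

```python
import math

class SplitNode:
    """Fair-split-tree node over a non-empty list of 2-D points."""
    def __init__(self, pts):
        self.pts = pts
        self.lo = (min(p[0] for p in pts), min(p[1] for p in pts))
        self.hi = (max(p[0] for p in pts), max(p[1] for p in pts))
        self.left = self.right = None
        if len(pts) > 1:
            # split the longer side of the bounding box at its midpoint
            axis = 0 if self.hi[0] - self.lo[0] >= self.hi[1] - self.lo[1] else 1
            mid = (self.lo[axis] + self.hi[axis]) / 2
            left = [p for p in pts if p[axis] <= mid]
            right = [p for p in pts if p[axis] > mid]
            if not right:                # degenerate: all points on the split line
                left, right = pts[:1], pts[1:]
            self.left, self.right = SplitNode(left), SplitNode(right)

    def diam(self):
        return math.hypot(self.hi[0] - self.lo[0], self.hi[1] - self.lo[1])

def box_dist(a, b):
    dx = max(a.lo[0] - b.hi[0], b.lo[0] - a.hi[0], 0)
    dy = max(a.lo[1] - b.hi[1], b.lo[1] - a.hi[1], 0)
    return math.hypot(dx, dy)

def ws_pairs(u, v, s):
    """Yield well-separated node pairs covering u.pts x v.pts exactly once."""
    if box_dist(u, v) >= s * max(u.diam(), v.diam()):
        yield (u.pts, v.pts)
    elif u.diam() <= v.diam():           # refine the "fatter" node
        yield from ws_pairs(u, v.left, s)
        yield from ws_pairs(u, v.right, s)
    else:
        yield from ws_pairs(u.left, v, s)
        yield from ws_pairs(u.right, v, s)

def wspd(points, s=2.0):
    """WSPD of `points`: every unordered point pair is covered exactly once."""
    out = []
    def rec(n):
        if n.left is not None:
            out.extend(ws_pairs(n.left, n.right, s))
            rec(n.left)
            rec(n.right)
    rec(SplitNode(points))
    return out
```

The "exactly once" coverage property is what makes the WSPD useful for spanners and closest-pair applications: each pair of points is represented by the unique well-separated pair found at its lowest common split.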
Dynamic compressed hyperoctrees with application to the N-body problem
 In Proc. 19th Conf
, 1999
Abstract

Cited by 8 (1 self)
Abstract. The hyperoctree is a popular data structure for organizing multidimensional point data. The main drawback of this data structure is that its size and the runtime of the operations it supports depend on the distribution of the points. Clarkson rectified the distribution dependency in the size of hyperoctrees by introducing compressed hyperoctrees. He presents an O(n log n) expected-time randomized algorithm to construct a compressed hyperoctree. In this paper, we give three deterministic algorithms to construct a compressed hyperoctree in O(n log n) time, for any fixed dimension d. We present O(log n) algorithms for point and cubic region searches, and for point insertions and deletions. We propose a solution to the N-body problem in O(n) time, given the tree. Our algorithms also reduce the runtime dependency on the number of dimensions.
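The compression idea is easy to see in two dimensions (where the hyperoctree is a quadtree). The sketch below is illustrative, not the paper's construction: chains of internal nodes with only one non-empty child are skipped by shrinking a cell straight down to the smallest quadtree cell that still splits its points, which bounds the tree size by O(n). It assumes distinct points inside the unit square.

```python
def build_compressed(points, x=0.0, y=0.0, size=1.0):
    """Compressed quadtree over distinct 2-D points in [x, x+size)^2.

    Returns a dict node {"cell": (x, y, size), "points": [...], "children": [...]}.
    Internal nodes always have >= 2 children, so the tree has O(n) nodes.
    """
    node = {"points": points, "children": []}
    if len(points) <= 1:
        node["cell"] = (x, y, size)
        return node
    # Compression: descend while all points share one quadrant, skipping
    # the chain of single-child cells an ordinary quadtree would create.
    while True:
        half = size / 2
        quads = {}
        for p in points:
            q = (p[0] >= x + half, p[1] >= y + half)
            quads.setdefault(q, []).append(p)
        if len(quads) > 1:
            break
        (qx, qy), _ = next(iter(quads.items()))
        x, y, size = x + qx * half, y + qy * half, half
    node["cell"] = (x, y, size)
    node["children"] = [
        build_compressed(pts, x + qx * half, y + qy * half, half)
        for (qx, qy), pts in quads.items()
    ]
    return node
```

Because every internal node genuinely splits its points, a set of n points yields exactly n leaves and at most n − 1 internal nodes, independent of how skewed the point distribution is.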
Well-separated pair decomposition for the unit-disk graph metric and its applications
 SIAM Journal on Computing
, 2003
Abstract

Cited by 8 (1 self)
Abstract. We extend the classic notion of well-separated pair decomposition [10] to the unit-disk graph metric: the shortest-path distance metric induced by the intersection graph of unit disks. We show that for the unit-disk graph metric of n points in the plane and for any constant c ≥ 1, there exists a c-well-separated pair decomposition with O(n log n) pairs, and the decomposition can be computed in O(n log n) time. We also show that for the unit-ball graph metric in k dimensions where k ≥ 3, there exists a c-well-separated pair decomposition with O(n^{2−2/k}) pairs, and the bound is tight in the worst case. We present applications of the well-separated pair decomposition in obtaining efficient algorithms for approximating the diameter, closest pair, nearest neighbor, center, median, and stretch factor, all under the unit-disk graph metric. Keywords: well-separated pair decomposition, unit-disk graph, approximation algorithm
Approximating Energy Efficient Paths in Wireless Multi-Hop Networks
, 2003
Abstract

Cited by 6 (3 self)
Given the positions of n sites in a radio network, we consider the problem of finding routes between any pair of sites that minimize energy consumption and do not use more than some constant number k of hops. Known exact algorithms for this problem required O(n log n) time per query pair (p, q). In this paper we relax the exactness requirement and compute only approximate (1 + ε) solutions, which allows us to guarantee constant query time using linear space and O(n log n) preprocessing time. The dependence on ε is polynomial in 1/ε. One tool...
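The exact problem the paper approximates can be stated as a small dynamic program. The sketch below is illustrative, not the paper's scheme: it uses the common radio-energy model in which a hop between u and v costs |uv|^delta (the exponent delta = 2 and the O(k·n²) all-pairs relaxation are assumptions of this sketch), and runs k rounds of Bellman-Ford-style relaxation to enforce the hop bound.

```python
import math

def min_energy_path(sites, s, t, k, delta=2.0):
    """Minimum total energy of an s -> t route using at most k hops.

    sites: list of (x, y) positions; s, t: indices into sites.
    Each round of relaxation allows one more hop, so after k rounds
    best[t] is the cheapest cost over all routes with <= k hops.
    """
    n = len(sites)
    cost = [[math.dist(sites[i], sites[j]) ** delta for j in range(n)]
            for i in range(n)]
    best = [math.inf] * n
    best[s] = 0.0
    for _ in range(k):   # k rounds of Bellman-Ford relaxation = <= k hops
        best = [min(best[j], min(best[i] + cost[i][j] for i in range(n)))
                for j in range(n)]
    return best[t]
```

With delta > 1 many short hops are cheaper than one long hop, which is exactly why the hop bound k is a meaningful constraint: without it the optimum degenerates toward arbitrarily many tiny hops.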
Lazy Algorithms for Dynamic Closest Pair with Arbitrary Distance Measures
 In Algorithm Engineering and Experiments Workshop (ALENEX)
, 2004
Abstract

Cited by 6 (0 self)
We propose novel lazy algorithms for the dynamic closest-pair problem with arbitrary distance measures. They use a simple delayed-computation mechanism to spare distance calculations. One of them is a lazy version of the FastPair algorithm recently proposed by Eppstein. Experimental results on a wide range of applications show that lazy FastPair consistently performs significantly better than FastPair.
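The delayed-computation idea can be illustrated with a stripped-down FastPair-style structure (an illustrative sketch, not the authors' implementation): every stored point caches a candidate nearest neighbor, and caches broken by deletions are repaired lazily, i.e. only when a closest-pair query actually inspects them.

```python
import math

class LazyClosestPair:
    """Dynamic closest pair with lazily repaired neighbor caches."""

    def __init__(self):
        self.pts = {}    # id -> (x, y)
        self.cand = {}   # id -> (distance, neighbor id); may go stale

    def _nearest(self, i):
        return min((math.dist(self.pts[i], self.pts[j]), j)
                   for j in self.pts if j != i)

    def insert(self, i, p):
        self.pts[i] = p
        for j in self.pts:
            if j == i:
                continue
            d = math.dist(p, self.pts[j])
            if i not in self.cand or d < self.cand[i][0]:
                self.cand[i] = (d, j)    # new point finds its nearest neighbor
            if j not in self.cand or d < self.cand[j][0]:
                self.cand[j] = (d, i)    # existing points may prefer the new one

    def delete(self, i):
        # Lazy: entries still pointing at i are left alone and fixed on query.
        del self.pts[i]
        self.cand.pop(i, None)

    def closest_pair(self):
        if len(self.pts) < 2:
            return None
        best = (math.inf, None, None)
        for i in list(self.pts):
            d, j = self.cand[i]
            if j not in self.pts:        # stale cache: neighbor was deleted
                self.cand[i] = self._nearest(i)
                d, j = self.cand[i]
            if d < best[0]:
                best = (d, i, j)
        return best
```

Deferring the repair is what spares distance calculations: a point whose cached neighbor was deleted pays for a rescan only if a query reaches it before another insertion happens to refresh its cache.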
Chromatic Nearest Neighbor Searching: A Query Sensitive Approach
, 1996
Abstract

Cited by 4 (2 self)
The nearest neighbor problem is that of preprocessing a set P of n data points in R^d so that, given any query point q, the closest point in P to q can be determined efficiently. In the chromatic nearest neighbor problem, each point of P is assigned a color, and the problem is to determine the color of the nearest point to the query point. More generally, given k ≥ 1, the problem is to determine the color occurring most frequently among the k nearest neighbors. The chromatic version of the nearest neighbor problem is used in many applications in pattern recognition and learning. In this paper we present a simple algorithm for solving the chromatic k-nearest-neighbor problem. We provide a query-sensitive analysis, which shows that if the color classes form spatially well separated clusters (as often happens in practice), then queries can be answered quite efficiently. We also allow the user to specify an error bound ε ≥ 0, and consider the same problem in the context of approximate ne...
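The problem being optimized is easy to state as a brute-force reference. The sketch below is a plain O(n log n)-per-query baseline that just makes the problem statement precise (the paper's contribution is a tree-based, query-sensitive search that avoids scanning all of P; the function name and data layout are assumptions of this sketch):

```python
import math
from collections import Counter

def chromatic_knn(data, q, k=1):
    """Color occurring most frequently among the k nearest neighbors of q.

    data: list of (point, color) pairs, where point is a coordinate tuple.
    Sorts all points by distance to q, then takes a majority vote among
    the first k. Ties are broken by insertion order via Counter.
    """
    neighbors = sorted(data, key=lambda pc: math.dist(pc[0], q))[:k]
    return Counter(color for _, color in neighbors).most_common(1)[0][0]
```

Note that only the winning color is needed, not the neighbors themselves; this is the slack a query-sensitive algorithm exploits when one color's cluster clearly dominates the neighborhood of q.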