Results 1–10 of 12
Closest-Point Problems in Computational Geometry
, 1997
Abstract

Cited by 65 (14 self)
This is the preliminary version of a chapter that will appear in the Handbook on Computational Geometry, edited by J.R. Sack and J. Urrutia. A comprehensive overview is given of algorithms and data structures for proximity problems on point sets in IR^D. In particular, the closest pair problem, the exact and approximate post-office problem, and the problem of constructing spanners are discussed in detail.

Contents:
1 Introduction
2 The static closest pair problem
2.1 Preliminary remarks
2.2 Algorithms that are optimal in the algebraic computation tree model
2.2.1 An algorithm based on the Voronoi diagram
2.2.2 A divide-and-conquer algorithm
2.2.3 A plane sweep algorithm
2.3 A deterministic algorithm that uses indirect addressing
2.3.1 The degraded grid ...
Coresets for k-Means and k-Median Clustering and their Applications
 In Proc. 36th Annu. ACM Sympos. Theory Comput
, 2003
Abstract

Cited by 47 (13 self)
In this paper, we show the existence of small coresets for the problems of computing k-median and k-means clustering for points in low dimension. In other words, we show that given a point set P in IR^d, one can compute a weighted set S ⊆ P, of size O(kε^{-d} log n), such that one can compute the k-median/k-means clustering on S instead of on P, and get a (1 + ε)-approximation.
An Optimal Algorithm for Closest Pair Maintenance
 Discrete Comput. Geom
, 1995
Abstract

Cited by 35 (0 self)
Given a set S of n points in k-dimensional space, and an L_t metric, the dynamic closest pair problem is defined as follows: find a closest pair of S after each update of S (the insertion or the deletion of a point). For fixed dimension k and fixed metric L_t, we give a data structure of size O(n) that maintains a closest pair of S in O(log n) time per insertion and deletion. The running time of the algorithm is optimal up to a constant factor because Ω(log n) is a lower bound, in the algebraic decision-tree model of computation, on the time complexity of any algorithm that maintains the closest pair (for k = 1). The algorithm is based on the fair-split tree. The constant factor in the update time is exponential in the dimension. We modify the fair-split tree to reduce it.

1 Introduction

The dynamic closest pair problem is one of the most well-studied proximity problems in computational geometry [6, 17-20, 22, 24-26, 28-31]. We are given a set S of n points in k-dimensional space...
Clustering Motion
 In Proc. 42nd Annu. IEEE Sympos. Found. Comput. Sci
, 2003
Abstract

Cited by 30 (5 self)
Given a set of moving points in IR^d, we show how to cluster them in advance, using a small number of clusters, so that at any time this static clustering is competitive with the optimal k-center clustering at that time. The advantage of this approach is that it avoids updating the clustering as time passes. We also show how to maintain this static clustering efficiently under insertions and deletions.
Fast Algorithms for Computing the Smallest k-Enclosing Disc
 In Proc. 11th Annu. European Sympos. Algorithms, volume 2832 of Lect. Notes in Comp. Sci
, 2003
Abstract

Cited by 16 (3 self)
We consider the problem of finding, for a given n-point set P in the plane and an integer k ≤ n, the smallest circle enclosing at least k points of P. We present a randomized algorithm that computes such a circle in O(nk) expected time, improving over previously known algorithms.
Randomized Data Structures for the Dynamic Closest-Pair Problem
, 1993
Abstract

Cited by 10 (2 self)
We describe a new randomized data structure, the sparse partition, for solving the dynamic closest-pair problem. Using this data structure, the closest pair of a set of n points in D-dimensional space, for any fixed D, can be found in constant time. If a frame containing all the points is known in advance, and if the floor function is available at unit cost, then the data structure supports insertions into and deletions from the set in expected O(log n) time and requires expected O(n) space. Here, it is assumed that the updates are chosen by an adversary who does not know the random choices made by the data structure. This method is more efficient than any deterministic algorithm for solving the problem in dimension D > 1. The data structure can be modified to run in O(log^2 n) expected time per update in the algebraic computation tree model of computation. Even this version is more efficient than the currently best known deterministic algorithm for D > 2.

1 Introduction

We ...
On Enumerating and Selecting Distances
 Int. J. Comput. Geom. Appl
, 1999
Abstract

Cited by 9 (2 self)
Given an n-point set, the problems of enumerating the k closest pairs and selecting the k-th smallest distance are revisited. For the enumeration problem, we give simpler randomized and deterministic algorithms with O(n log n + k) running time in any fixed-dimensional Euclidean space. For the selection problem, we give a randomized algorithm with running time O(n log n + n^{2/3} k^{1/3} log^{5/3} n). We also describe output-sensitive results for halfspace range counting that are of use in more general distance selection problems. None of our algorithms requires parametric search.

Keywords: distance enumeration, distance selection, closest pairs, range counting, randomized algorithms.

1 Introduction

Finding the closest pair of an n-point set has a long history in computational geometry (see [34] for a nice survey). In the plane, the problem can be solved in O(n log n) time using the Delaunay triangulation. In an arbitrary fixed dimension d, the first O(n log n) algorithm, based on di...
A reliable randomized algorithm for the . . .
, 1997
Abstract
The following two computational problems are studied:

Duplicate grouping: Assume that n items are given, each of which is labeled by an integer key from the set {0, ..., U - 1}. Store the items in an array of size n such that items with the same key occupy a contiguous segment of the array.

Closest pair: Assume that a multiset of n points in the d-dimensional Euclidean space is given, where d ≥ 1 is a fixed integer. Each point is represented as a d-tuple of integers in the range {0, ..., U - 1} (or of arbitrary real numbers). Find a closest pair, i.e., a pair of points whose distance is minimal over all such pairs.
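The input/output contract of the duplicate-grouping problem above can be illustrated with a minimal sketch. This is not the paper's algorithm (which works under stricter model assumptions); it only shows what a valid output looks like, using a plain Python dictionary as the grouping structure. The names `duplicate_grouping` and `key` are illustrative, not from the paper.

```python
def duplicate_grouping(items, key):
    """Arrange items so that items with equal keys occupy contiguous
    segments of the output array; the order between groups is arbitrary."""
    groups = {}
    for item in items:
        # Bucket each item under its integer key.
        groups.setdefault(key(item), []).append(item)
    out = []
    for bucket in groups.values():
        # Emitting each bucket as a block makes every key's items contiguous.
        out.extend(bucket)
    return out
```

For example, grouping `[(1, 'a'), (2, 'b'), (1, 'c')]` by the first component places the two key-1 items next to each other in the output.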
Chapter 1. The Power of Grids: Computing the Minimum Disk Containing k Points
Abstract
The Peace of Olivia. How sweet and peaceful it sounds! There the great powers noticed for the first time that the land of the Poles lends itself admirably to partition. – The Tin Drum, Günter Grass

In this chapter, we are going to discuss two basic geometric algorithms. The first one computes the closest pair among a set of n points in linear time. This is a beautiful and surprising result that exposes the computational power of using grids for geometric computation. Next, we discuss a simple algorithm for approximating the smallest enclosing ball that contains k points of the input. This at first looks like a bizarre problem, but it turns out to be a key ingredient in our later discussion.

1.1 Preliminaries

For a real positive number r and a point p = (x, y) in IR^2, define G_r(p) to be the grid point (⌊x/r⌋ r, ⌊y/r⌋ r). We call r the width of the grid G_r. Observe that G_r partitions the plane into square regions, which we call grid cells. Formally, for any i, j ∈ Z, the intersection of the halfplanes x ≥ ri, x < r(i + 1), y ≥ rj and y < r(j + 1) is said to be a grid cell. Further, we define a grid cluster as a block of 3 × 3 contiguous grid cells.

Note that every grid cell C of G_r has a unique ID; indeed, let p = (x, y) be any point in C, and consider the pair of integer numbers id_C = id(p) = (⌊x/r⌋, ⌊y/r⌋). Clearly, only points inside C are going to be mapped to id_C. This is very useful, since it lets us store a set P of points inside a grid efficiently. Indeed, given a point p, compute its id(p). We associate with each unique ID a data structure that stores all the points falling into this grid cell (of course, we do not maintain such data structures for grid cells which are empty). So, once we have computed id(p), we fetch the data structure associated with this cell by using hashing; namely, we store pointers to all those data structures in a hash table, where each such data structure is indexed by its unique ID.
Since the ids are integer numbers, we can do the hashing in constant time.
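The grid scheme above can be sketched directly. This is a minimal illustration, assuming a Python dictionary as the hash table and a list per non-empty cell; the names `grid_id` and `GridStore` are illustrative, not from the chapter.

```python
from collections import defaultdict
from math import floor

def grid_id(p, r):
    """Map a point p = (x, y) to the integer ID (i, j) of its grid cell
    of width r, i.e. (floor(x/r), floor(y/r))."""
    x, y = p
    return (floor(x / r), floor(y / r))

class GridStore:
    """Store points bucketed by grid cell; cell lookup is a hash access."""
    def __init__(self, r):
        self.r = r
        self.cells = defaultdict(list)  # only non-empty cells are kept

    def insert(self, p):
        self.cells[grid_id(p, self.r)].append(p)

    def cluster(self, p):
        """Return all points in the 3x3 block of cells centered at p's cell
        (the grid cluster of p)."""
        i, j = grid_id(p, self.r)
        out = []
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                out.extend(self.cells.get((i + di, j + dj), ()))
        return out
```

For instance, with width r = 1 the point (0.5, 0.5) lands in cell (0, 0), and its cluster also picks up points in the adjacent cell (1, 0) such as (1.5, 0.5), which is exactly the neighborhood one scans in the linear-time closest-pair algorithm.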