Results 1–10 of 16
Closest pair queries in spatial databases
In SIGMOD, 2000
Cited by 84 (10 self)
Closest-Point Problems in Computational Geometry, 1997
Abstract

Cited by 74 (14 self)
This is the preliminary version of a chapter that will appear in the Handbook on Computational Geometry, edited by J.-R. Sack and J. Urrutia. A comprehensive overview is given of algorithms and data structures for proximity problems on point sets in R^D. In particular, the closest pair problem, the exact and approximate post-office problem, and the problem of constructing spanners are discussed in detail. Contents: 1 Introduction; 2 The static closest pair problem; 2.1 Preliminary remarks; 2.2 Algorithms that are optimal in the algebraic computation tree model; 2.2.1 An algorithm based on the Voronoi diagram; 2.2.2 A divide-and-conquer algorithm; 2.2.3 A plane sweep algorithm; 2.3 A deterministic algorithm that uses indirect addressing; 2.3.1 The degraded grid ...
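The plane-sweep approach mentioned in the contents (Section 2.2.3) is compact enough to sketch. A minimal 2-D illustration of the idea, assuming distinct points given as (x, y) tuples; this is a sketch of the technique, not the chapter's exact presentation:

```python
import math
from bisect import bisect_left, insort

def closest_pair(points):
    """Plane-sweep closest pair in O(n log n): sweep left to right,
    keeping only points within the current best distance d of the
    sweep line, ordered by y-coordinate."""
    pts = sorted(points)          # sort by x (then y)
    best, pair = math.inf, None
    strip = []                    # active points as (y, x), sorted by y
    left = 0                      # leftmost input point still in the strip
    for p in pts:
        # evict points more than `best` behind the sweep line in x;
        # they can no longer form a closer pair with any future point
        while left < len(pts) and pts[left][0] < p[0] - best:
            strip.remove((pts[left][1], pts[left][0]))
            left += 1
        # only strip points whose y is within `best` of p's y can help
        lo = bisect_left(strip, (p[1] - best, -math.inf))
        for (y, x) in strip[lo:]:
            if y > p[1] + best:
                break
            d = math.dist(p, (x, y))
            if d < best:
                best, pair = d, ((x, y), p)
        insort(strip, (p[1], p[0]))
    return best, pair
```

The `strip.remove` call is linear per eviction; a balanced tree would make the whole sweep O(n log n) in the strict sense, but the sketch keeps the structure of the argument visible.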
On Enumerating and Selecting Distances
Int. J. Comput. Geom. Appl., 1999
Abstract

Cited by 11 (3 self)
Given an n-point set, the problems of enumerating the k closest pairs and selecting the k-th smallest distance are revisited. For the enumeration problem, we give simpler randomized and deterministic algorithms with O(n log n + k) running time in any fixed-dimensional Euclidean space. For the selection problem, we give a randomized algorithm with running time O(n log n + n^{2/3} k^{1/3} log^{5/3} n). We also describe output-sensitive results for halfspace range counting that are of use in more general distance selection problems. None of our algorithms requires parametric search. Keywords: distance enumeration, distance selection, closest pairs, range counting, randomized algorithms. 1 Introduction Finding the closest pair of an n-point set has a long history in computational geometry (see [34] for a nice survey). In the plane, the problem can be solved in O(n log n) time using the Delaunay triangulation. In an arbitrary fixed dimension d, the first O(n log n) algorithm, based on di...
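To fix ideas on the enumeration problem, here is the obvious quadratic baseline — a bounded heap over all pairs — which the paper improves to O(n log n + k). The function name and pair ordering are illustrative, not from the paper:

```python
import heapq
import math
from itertools import combinations

def k_closest_pairs(points, k):
    """Brute-force enumeration of the k closest pairs: scan all
    O(n^2) pairs, keeping the k smallest distances via a heap."""
    return heapq.nsmallest(k, combinations(points, 2),
                           key=lambda pair: math.dist(*pair))
```

`heapq.nsmallest` runs in O(n^2 log k) time here; the paper's point is that the n^2 term can be avoided entirely.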
Randomized Data Structures for the Dynamic Closest-Pair Problem, 1993
Abstract

Cited by 10 (2 self)
We describe a new randomized data structure, the sparse partition, for solving the dynamic closest-pair problem. Using this data structure, the closest pair of a set of n points in D-dimensional space, for any fixed D, can be found in constant time. If a frame containing all the points is known in advance, and if the floor function is available at unit cost, then the data structure supports insertions into and deletions from the set in expected O(log n) time and requires expected O(n) space. Here, it is assumed that the updates are chosen by an adversary who does not know the random choices made by the data structure. This method is more efficient than any deterministic algorithm for solving the problem in dimension D > 1. The data structure can be modified to run in O(log^2 n) expected time per update in the algebraic computation tree model of computation. Even this version is more efficient than the currently best known deterministic algorithm for D > 2.
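The role of the unit-cost floor function in such structures can be illustrated on the static problem. The following randomized sketch (my own simplification, not the paper's dynamic sparse partition) gets a distance estimate d from a random shuffle, buckets points into grid cells of side d, and compares only neighboring cells; it assumes at least two points with distinct coordinates:

```python
import math
import random
from collections import defaultdict
from itertools import product

def grid_closest_pair(points):
    """Static closest pair via a floor-function grid: d is a min over
    actual pairs, so it upper-bounds the true closest distance, and the
    true closest pair must fall in the same or adjacent cells of side d."""
    pts = list(points)
    random.shuffle(pts)
    # randomized estimate: minimum over consecutive points of the shuffle
    d = min(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
    if d == 0:                       # coincident points in the sample
        return 0.0
    grid = defaultdict(list)
    for p in pts:
        grid[(math.floor(p[0] / d), math.floor(p[1] / d))].append(p)
    best = d
    for (cx, cy), cell in grid.items():
        for dx, dy in product((-1, 0, 1), repeat=2):
            for q in grid.get((cx + dx, cy + dy), []):   # .get: no insert
                for p in cell:
                    if p is not q:
                        best = min(best, math.dist(p, q))
    return best
```

With a good estimate d, each cell holds O(1) points in expectation, so the scan is linear; the paper's contribution is making this kind of grid machinery dynamic under insertions and deletions.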
I/O-efficient well-separated pair decomposition and its applications
In Proc. Annual European Symposium on Algorithms, 2000
Abstract

Cited by 9 (1 self)
Abstract. We present an external memory algorithm to compute a well-separated pair decomposition (WSPD) of a given point set P in R^d in O(sort(N)) I/Os using O(N/B) blocks of external memory, where N is the number of points in P, and sort(N) denotes the I/O complexity of sorting N items. (Throughout this paper we assume that the dimension d is fixed.) We also show how to dynamically maintain the WSPD in O(log_B N) I/Os per insert or delete operation using O(N/B) blocks of external memory. As applications of the WSPD, we show how to compute a linear-size t-spanner for P within the same I/O and space bounds and how to solve the K-nearest neighbor and K-closest pair problems in O(sort(KN)) and O(sort(N + K)) I/Os using O(KN/B) and O((N + K)/B) blocks of external memory, respectively. Using the dynamic WSPD, we show how to dynamically maintain the closest pair of P in O(log_B N) I/Os per insert or delete operation using O(N/B) blocks of external memory.
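For readers unfamiliar with the primitive being externalized, here is a simplified internal-memory fair-split WSPD sketch in 2-D, assuming distinct points; the separation test (larger diameter times s at most the bounding-box distance) is a common simplification, and none of this reflects the paper's I/O-efficient machinery:

```python
import math

def _bbox(pts):
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return min(xs), min(ys), max(xs), max(ys)

def _diam(box):
    return math.hypot(box[2] - box[0], box[3] - box[1])

def _box_dist(a, b):
    dx = max(a[0] - b[2], b[0] - a[2], 0.0)
    dy = max(a[1] - b[3], b[1] - a[3], 0.0)
    return math.hypot(dx, dy)

def _split(pts):
    # fair split: cut the bounding box at the midpoint of its longest side
    x0, y0, x1, y1 = _bbox(pts)
    if x1 - x0 >= y1 - y0:
        mid = (x0 + x1) / 2
        return ([p for p in pts if p[0] <= mid],
                [p for p in pts if p[0] > mid])
    mid = (y0 + y1) / 2
    return ([p for p in pts if p[1] <= mid],
            [p for p in pts if p[1] > mid])

def _pairs(A, B, s):
    # emit (A, B) once s-well-separated, else split the fatter side
    da, db = _diam(_bbox(A)), _diam(_bbox(B))
    if max(da, db) * s <= _box_dist(_bbox(A), _bbox(B)):
        return [(A, B)]
    if da >= db:
        A1, A2 = _split(A)
        return _pairs(A1, B, s) + _pairs(A2, B, s)
    B1, B2 = _split(B)
    return _pairs(A, B1, s) + _pairs(A, B2, s)

def wspd(P, s=2.0):
    """Each unordered pair of distinct points ends up 'covered' by
    exactly one well-separated pair (A, B)."""
    if len(P) <= 1:
        return []
    A, B = _split(P)
    return _pairs(A, B, s) + wspd(A, s) + wspd(B, s)
```

Applications like the t-spanner follow by adding, for each pair (A, B), a single edge between one representative of A and one of B.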
Net and prune: A linear time algorithm for Euclidean distance problems
In Proc. 45th Annu. ACM Sympos. Theory Comput. (STOC), 2013
Abstract

Cited by 4 (2 self)
We provide a general framework for getting expected linear time constant factor approximations (and in many cases FPTASs) to several well-known problems in computational geometry, such as k-center clustering and farthest nearest neighbor. The new approach is robust to variations in the input problem, and yet it is simple, elegant and practical. In particular, many of these well-studied problems, which fit easily into our framework, either previously had no linear time approximation algorithm, or required rather involved algorithms and analysis. A short list of the problems we consider includes farthest nearest neighbor, k-center clustering, smallest disk enclosing k points, k-th largest distance, k-th smallest m-nearest neighbor distance, k-th heaviest edge in the MST and other spanning forest type problems, problems involving upward closed set systems, and more. Finally, we show how to extend our framework such that the linear running time bound holds with high probability.
Convex Hull of Points Lying on Lines in o(n log n) Time after Preprocessing
, 2011
Abstract

Cited by 4 (3 self)
Motivated by the desire to cope with data imprecision [31], we study methods for taking advantage of preliminary information about point sets in order to speed up the computation of certain structures associated with them. In particular, we study the following problem: given a set L of n lines in the plane, we wish to preprocess L such that later, upon receiving a set P of n points, each of which lies on a distinct line of L, we can construct the convex hull of P efficiently. We show that in quadratic time and space it is possible to construct a data structure on L that enables us to compute the convex hull of any such point set P in O(nα(n) log* n) expected time. If we further assume that the points are “oblivious” with respect to the data structure, the running time improves to O(nα(n)). The same result holds when L is a set of line segments (in general position). We present several extensions, including a tradeoff between space and query time and an output-sensitive algorithm. We also study the “dual problem”, where we show how to efficiently compute the (≤ k)-level of n lines in the plane, each of which is incident to a distinct point (given in advance). We complement our results by Ω(n log n) lower bounds under the algebraic computation tree model for several related problems, including sorting a set of points (according to, say, their x-order), each of which lies on a given line known in advance. Therefore, the convex hull problem under our setting is easier than sorting, contrary to the “standard” convex hull and sorting problems, in which the two problems require Θ(n log n) steps in the worst case (under the algebraic computation tree model).
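For contrast, the Θ(n log n) sorting-based baseline that the preprocessing circumvents is Andrew's monotone chain; a standard sketch, returning hull vertices in counterclockwise order with collinear points dropped:

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull: sort once, then build the
    lower and upper hulls with a stack; O(n log n) dominated by sorting."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()              # drop right turns and collinear points
            h.append(p)
        return h

    lower = half(pts)
    upper = half(reversed(pts))
    return lower[:-1] + upper[:-1]   # endpoints are shared; keep each once
```

The paper's result is that, once the lines are preprocessed, the hull of any point set lying on them can be found in close to linear time, beating this bound.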
Multimodal Partial Estimates Fusion
Abstract
Fusing partial estimates is a critical and common problem in many computer vision tasks such as part-based detection and tracking. It generally becomes complicated and intractable when there are a large number of multimodal partial estimates, and thus it is desirable to find an effective and scalable fusion method to integrate these partial estimates. This paper presents a novel and effective approach to fusing multimodal partial estimates in a principled way. In this new approach, fusion is related to a computational geometry problem of finding the minimum-volume orthotope, and an effective and scalable branch-and-bound search algorithm is designed to obtain the global optimal solution. Experiments on tracking articulated objects and occluded objects show the effectiveness of the proposed approach.
A Geometric Interpretation for Local Alignment-Free Sequence Comparison
Abstract
Local alignment-free sequence comparison arises in the context of identifying similar segments of sequences that may not be alignable in the traditional sense. We propose a randomized approximation algorithm that is both accurate and efficient. We show that under D2 and its important variant D2* as the similarity measure, local alignment-free comparison between a pair of sequences can be formulated as the problem of finding the maximum bichromatic dot product between two sets of points in high dimensions. We introduce a geometric framework that reduces this problem to that of finding the bichromatic closest pair (BCP), allowing the properties of the underlying metric to be leveraged. Local alignment-free sequence comparison can be solved by making a quadratic number of alignment-free substring comparisons. We show both theoretically and through empirical results on simulated data that our approximation algorithm requires a subquadratic number of such comparisons and trades only a small amount of accuracy to achieve this efficiency. Therefore, our algorithm can extend the current usage of alignment-free methods and can also be regarded as a substitute for local alignment algorithms in many biological studies. Key words: algorithms, alignment, dynamic programming, metagenomics.
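The primitive the reduction targets is the bichromatic closest pair (BCP): the closest red-blue pair across two point sets. As a reference point, the exhaustive quadratic version — exactly the cost the paper's subquadratic algorithm avoids; the function name is illustrative:

```python
import math

def bcp(red, blue):
    """Brute-force bichromatic closest pair: compare every red point
    against every blue point and keep the minimum-distance pair."""
    return min(((math.dist(r, b), r, b) for r in red for b in blue),
               key=lambda t: t[0])
```

Unlike the monochromatic closest pair, BCP only considers cross-color pairs, which is what makes it the right abstraction for comparing substrings of two different sequences.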