Results 1–10 of 35
External-Memory Computational Geometry
, 1993
Abstract

Cited by 121 (20 self)
In this paper, we give new techniques for designing efficient algorithms for computational geometry problems that are too large to be solved in internal memory, and we use these techniques to develop optimal and practical algorithms for a number of important large-scale problems. We discuss our algorithms primarily in the context of single processor/single disk machines, a domain in which they are not only the first known optimal results but also of tremendous practical value. Our methods also produce the first known optimal algorithms for a wide range of two-level and hierarchical multi-level memory models, including parallel models. The algorithms are optimal both in terms of I/O cost and internal computation.
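The two-level model this abstract assumes charges for block transfers between internal memory (capacity M items) and disk (blocks of B items) rather than CPU steps. A minimal sketch of that accounting, using the classic external merge sort as the example (the parameters and I/O counter are our illustration, not the paper's algorithms):

```python
import heapq

def external_merge_sort(data, M=8, B=2):
    """Sort `data` using in-memory runs of at most M items, counting block I/Os."""
    io = 0
    # Pass 1: read M items at a time, sort in memory, write sorted runs.
    runs = []
    for i in range(0, len(data), M):
        run = sorted(data[i:i + M])
        runs.append(run)
        io += 2 * -(-len(run) // B)   # read + write: ceil(len/B) blocks each way
    # Pass 2: k-way merge of the runs; one block per run fits in memory
    # when k <= M/B, which we assume for this sketch.
    merged = list(heapq.merge(*runs))
    io += 2 * -(-len(merged) // B)    # read all runs + write the output
    return merged, io

out, ios = external_merge_sort([5, 3, 9, 1, 7, 2, 8, 6, 4, 0], M=4, B=2)
```

With a constant number of passes the cost is O((n/B) log_{M/B}(n/B)) block transfers, the sorting bound that optimal external-memory geometry algorithms are measured against.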
Lower bounds for orthogonal range searching: I. the reporting case
 Journal of the ACM
, 1990
Abstract

Cited by 65 (4 self)
Abstract. We establish lower bounds on the complexity of orthogonal range reporting in the static case. Given a collection of n points in d-space and a box [a_1, b_1] × ··· × [a_d, b_d], report every point whose ith coordinate lies in [a_i, b_i], for each i = 1, ..., d. The collection of points is fixed once and for all and can be preprocessed. The box, on the other hand, constitutes a query that must be answered online. It is shown that on a pointer machine a query time of O(k + polylog(n)), where k is the number of points to be reported, can only be achieved at the expense of Ω(n (log n / log log n)^(d-1)) storage. Interestingly, these bounds are optimal in the pointer machine model, but they can be improved (ever so slightly) on a random access machine. In a companion paper, we address the related problem of adding up weights assigned to the points in the query box.
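The query-time target O(log n + k) is easy to achieve in one dimension, which makes the contrast with the d-dimensional storage lower bound concrete. A one-dimensional sketch (our illustration, not the paper's construction):

```python
import bisect

def preprocess(points):
    # O(n log n) preprocessing: just sort the points.
    return sorted(points)

def report(sorted_pts, a, b):
    """Report every point in [a, b] in O(log n + k) time."""
    lo = bisect.bisect_left(sorted_pts, a)   # first point >= a
    hi = bisect.bisect_right(sorted_pts, b)  # first point > b
    return sorted_pts[lo:hi]                 # the k reported points

pts = preprocess([7, 1, 4, 9, 3, 8])
```

The lower bound says that keeping this output-sensitive query time in d dimensions on a pointer machine forces the near-linear storage blowup of the form Ω(n (log n / log log n)^(d-1)).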
Authenticated Data Structures for Graph and Geometric Searching
 In CT-RSA
, 2001
Abstract

Cited by 49 (18 self)
Following in the spirit of data structure and algorithm correctness checking, authenticated data structures provide cryptographic proofs that their answers are as accurate as the author intended, even if the data structure is being maintained by a remote host. We present techniques for authenticating data structures that represent graphs and collections of geometric objects. We use a model where a data structure maintained by a trusted source is mirrored at distributed directories, with the directories answering queries made by users. When a user queries a directory, it receives a cryptographic proof in addition to the answer, where the proof contains statements signed by the source. The user verifies the proof trusting only the statements signed by the source. We show how to efficiently authenticate data structures for fundamental problems on networks, such as path and connectivity queries, and on geometric objects, such as intersection and containment queries.
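A common building block for this source/directory/user model is a hash tree: the source signs only the root digest, the untrusted directory answers a membership query with the leaf plus the sibling digests on its root path, and the user recomputes the root. A hedged sketch of that mechanism (the function names and the power-of-two leaf count are our simplifications; the paper's structures are richer):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """All levels of a Merkle tree over a power-of-two number of leaves."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, idx):
    """Sibling digests from leaf idx up to (but not including) the root."""
    proof = []
    for level in levels[:-1]:
        sib = idx ^ 1
        proof.append((level[sib], sib < idx))  # (digest, sibling-is-left?)
        idx //= 2
    return proof

def verify(root, leaf, proof):
    """Recompute the root from the answer leaf and the proof."""
    digest = h(leaf)
    for sib, sib_is_left in proof:
        digest = h(sib + digest) if sib_is_left else h(digest + sib)
    return digest == root

leaves = [b"a", b"b", b"c", b"d"]
levels = build_tree(leaves)
root = levels[-1][0]        # the only value the source needs to sign
```

The proof is O(log n) digests, so the directory's answer stays small even though the user trusts nothing but the signed root.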
Dynamic Trees and Dynamic Point Location
 In Proc. 23rd Annu. ACM Sympos. Theory Comput
, 1991
Abstract

Cited by 46 (11 self)
This paper describes new methods for maintaining a point-location data structure for a dynamically changing monotone subdivision S. The main approach is based on the maintenance of two interlaced spanning trees, one for S and one for the graph-theoretic planar dual of S. Queries are answered by using a centroid decomposition of the dual tree to drive searches in the primal tree. These trees are maintained via the link-cut trees structure of Sleator and Tarjan, leading to a scheme that achieves vertex insertion/deletion in O(log n) time, insertion/deletion of k-edge monotone chains in O(log n + k) time, and answers queries in O(log² n) time, with O(n) space, where n is the current size of subdivision S. The techniques described also allow for the dual operations expand and contract to be implemented in O(log n) time, leading to an improved method for spatial point location in a 3-dimensional convex subdivision. In addition, the interlaced-tree approach is applied to online point-lo...
Fractionally cascaded information in a sensor network
, 2004
Abstract

Cited by 41 (9 self)
We address the problem of distributed information aggregation and storage in a sensor network, where queries can be injected anywhere in the network. The principle we propose is that a sensor should know a “fraction” of the information from distant parts of the network, in an exponentially decaying fashion by distance. We show how a sampled scalar field can be stored in this distributed fashion, with only a modest amount of additional storage and network traffic. Our storage scheme makes neighboring sensors have highly correlated world views; this allows smooth information gradients and enables local search algorithms to work well. We study in particular how this principle of fractionally cascaded information can be exploited to answer range queries about the sampled field efficiently. Using local decisions only we are able to route the query to exactly the portions of the field where the sought information is stored. We provide a rigorous theoretical analysis showing that our scheme is close to optimal.
Categories and Subject Descriptors: H.3.3 [Information Systems]: information storage and retrieval, information search and retrieval; F.2.2 [Theory of Computation]: analysis of algorithms and problem complexity, nonnumerical algorithms and problems
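The "exponentially decaying by distance" principle can be made concrete on a one-dimensional line of sensors (our simplification for illustration, not the paper's exact scheme): a sensor keeps exact values for nearby cells and averages over dyadic blocks whose size doubles with distance, so its total storage is O(log n):

```python
def summaries(field, i):
    """What sensor i stores about the cells to its right: a list of
    (start, block_size, average) triples whose size doubles with distance.
    (The left side is symmetric and omitted for brevity.)"""
    n = len(field)
    out, size, pos = [], 1, i
    while pos < n:
        block = field[pos:pos + size]
        out.append((pos, len(block), sum(block) / len(block)))
        pos += size
        size *= 2
    return out

field = [float(v) for v in range(16)]
store = summaries(field, 0)   # O(log n) entries covering all 16 cells
```

Because adjacent sensors see almost the same dyadic blocks, their world views are highly correlated, which is what lets a range query descend smoothly toward the region that holds the exact data.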
Space Decomposition Techniques for Fast Layer-4 Switching
 Proceedings of Conference on Protocols for High Speed Networks
, 1999
Abstract

Cited by 39 (2 self)
Packet classification is the problem of matching each incoming packet at a router against a database of filters, which specify forwarding rules for the packets. The filters are a powerful and uniform way to implement new network services such as firewalls, Network Address Translation (NAT), Virtual Private Networks (VPN), and per-flow or class-based Quality of Service (QoS) guarantees. While several schemes have been proposed recently that can perform packet classification at high speeds, none of them achieves fast worst-case time for adding or deleting filters from the database. In this paper, we present a new scheme, based on space decomposition, whose search time is comparable to the best existing schemes, but which also offers fast worst-case filter update time. The three key ideas in this algorithm are as follows: (1) an innovative data structure based on quadtrees for a hierarchical representation of the recursively decomposed search space, (2) fractional cascading and precomputation to improve packet classification time, and (3) prefix partitioning to improve update time. Depending on the actual requirements of the system this algorithm is deployed in, a single parameter can be used to trade off search time for update time. Also, this algorithm is amenable to fast software and hardware implementation.
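Idea (1) can be sketched as a quadtree descent over a 2-D (src, dst) address space: filters that fully cover the current cell match immediately, and the remaining candidates are pushed down toward the quadrant containing the packet. The names, the tiny 16x16 space, and the brute-force leaf test are our simplifications, not the paper's data structure:

```python
def intersects(f, cell):
    a, b, c, d = f
    x0, y0, x1, y1 = cell
    return a < x1 and c > x0 and b < y1 and d > y0

def classify(filters, pkt, cell=(0, 0, 16, 16)):
    """Return ids of filters matching point pkt by quadtree descent.
    filters: {fid: (x0, y0, x1, y1)} half-open boxes inside the root cell."""
    x0, y0, x1, y1 = cell
    # Filters covering the whole cell match every packet routed here.
    hits = {fid for fid, (a, b, c, d) in filters.items()
            if a <= x0 and b <= y0 and c >= x1 and d >= y1}
    rest = {fid: f for fid, f in filters.items()
            if fid not in hits and intersects(f, cell)}
    if not rest or x1 - x0 == 1:      # leaf: test remaining filters directly
        hits |= {fid for fid, (a, b, c, d) in rest.items()
                 if a <= pkt[0] < c and b <= pkt[1] < d}
        return hits
    mx, my = (x0 + x1) // 2, (y0 + y1) // 2
    qx0, qx1 = (x0, mx) if pkt[0] < mx else (mx, x1)
    qy0, qy1 = (y0, my) if pkt[1] < my else (my, y1)
    return hits | classify(rest, pkt, (qx0, qy0, qx1, qy1))

filters = {1: (0, 0, 16, 16), 2: (4, 4, 8, 8), 3: (0, 0, 2, 2)}
```

In the real scheme the tree is precomputed and filters are stored at the cells they cover, so a lookup is a single root-to-leaf walk; fractional cascading (idea 2) then shares search results between the levels of that walk.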
Counting and Reporting Red/Blue Segment Intersections
 CVGIP: Graph. Models Image Process
, 1993
Abstract

Cited by 28 (3 self)
We simplify the red/blue segment intersection algorithm of Chazelle et al.: Given sets of n disjoint red and n disjoint blue segments, we count red/blue intersections in O(n log n) time using O(n) space or report them in additional time proportional to their number. Our algorithm uses a plane sweep to presort the segments; then it operates on a list of slabs that efficiently stores a single level of a segment tree. With no dynamic memory allocation, low pointer overhead, and mostly sequential memory reference, our algorithm performs well even with inadequate physical memory.
1 Introduction. Geographic information systems frequently organize map data into various layers. Users can make custom maps by overlaying roads, political boundaries, soil types, or whatever features are of interest to them. The ARC/INFO system [8] is organized around this model; even a relatively inexpensive database like the Digital Chart of the World [9] contains seventeen layers, several with sublayers. A...
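For scale, the problem itself reduces to a crossing test per red/blue pair. A brute-force O(n²) illustration using the standard orientation predicate (our baseline for comparison, not the paper's O(n log n) sweep):

```python
def orient(a, b, c):
    """Twice the signed area of triangle abc (positive = counterclockwise)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crosses(s, t):
    """Proper intersection of segments s and t (endpoints in general position)."""
    a, b = s
    c, d = t
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def count_red_blue(red, blue):
    return sum(crosses(r, b) for r in red for b in blue)

red = [((0, 0), (4, 4)), ((0, 4), (4, 0))]
blue = [((0, 2), (4, 2))]
```

The sweep replaces the all-pairs loop by processing segments in sorted order within slabs, which is what brings the count down to O(n log n) with mostly sequential memory access.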
Hierarchical representations of collections of small rectangles
 ACM Computing Surveys
, 1988
Abstract

Cited by 24 (1 self)
A tutorial survey is presented of hierarchical data structures for representing collections of small rectangles. Rectangles are often used as an approximation of shapes for which they serve as the minimum rectilinear enclosing object. They arise in applications in cartography as well as very large-scale integration (VLSI) design rule checking. The different data structures are discussed in terms of how they support the execution of queries involving proximity relations. The focus is on intersection and subset queries. Several types of representations are described. Some are designed for use with the plane-sweep paradigm, which works well for static collections of rectangles. Others are oriented toward dynamic collections. In this case, one representation reduces each rectangle to a point in a higher multidimensional space and treats the problem as one involving point data. The other representation is area based; that is, it depends on the physical extent of each rectangle.
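The rectangle-as-point representation mentioned above is compact enough to show directly: an axis-aligned rectangle [x0, x1] × [y0, y1] becomes the 4-d point (x0, x1, y0, y1), and "intersects query Q" becomes an orthogonal range condition on that point. A minimal sketch (the linear scan stands in for whatever multidimensional point structure is actually used):

```python
def to_point(rect):
    """Map rectangle ((x0, x1), (y0, y1)) to the 4-d point (x0, x1, y0, y1)."""
    (x0, x1), (y0, y1) = rect
    return (x0, x1, y0, y1)

def intersecting(points, q):
    """All stored rectangles (as 4-d points) intersecting query rectangle q:
    the 4-d range x0 <= qx1, x1 >= qx0, y0 <= qy1, y1 >= qy0."""
    (qx0, qx1), (qy0, qy1) = q
    return [p for p in points
            if p[0] <= qx1 and p[1] >= qx0 and p[2] <= qy1 and p[3] >= qy0]

pts = [to_point(r) for r in [((0, 2), (0, 2)), ((3, 5), (3, 5))]]
```

Once the problem is phrased this way, any point-based range-searching structure (range trees, k-d trees, grid files) applies unchanged, which is exactly the appeal of the transformation.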
Efficiently approximating polygonal paths in three and higher dimensions
 Algorithmica
, 1998
Abstract

Cited by 23 (5 self)
Abstract. We present efficient algorithms for solving polygonal-path approximation problems in three and higher dimensions. Given an n-vertex polygonal curve P in R^d, d ≥ 3, we approximate P by another polygonal curve P′ of m ≤ n vertices in R^d such that the vertex sequence of P′ is an ordered subsequence of the vertices of P. The goal is either to minimize the size m of P′ for a given error tolerance ε (called the min-# problem), or to minimize the deviation error ε between P and P′ for a given size m of P′ (called the min-ε problem). Our techniques enable us to develop efficient near-quadratic-time algorithms in three dimensions and subcubic-time algorithms in four dimensions for solving the min-# and min-ε problems. We discuss extensions of our solutions to d-dimensional space, where d > 4, and for the L1 and L∞ metrics. Key Words. Curve approximation, parametric searching.
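The standard graph formulation behind min-# algorithms is short enough to sketch: add a shortcut edge (i, j) whenever every intermediate vertex lies within ε of segment p_i p_j, then take a fewest-edges path from the first vertex to the last. Shown here in the plane with a naive O(n³) check for brevity; the paper's contribution is making this efficient in dimension ≥ 3:

```python
import math

def seg_dist(p, a, b):
    """Distance from point p to segment ab."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    t = 0.0 if dx == dy == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def min_sharp(P, eps):
    """Min-# approximation: fewest-vertex subsequence within tolerance eps."""
    n = len(P)
    hops = [math.inf] * n
    hops[0] = 0
    prev = [-1] * n
    for j in range(1, n):
        for i in range(j):
            # shortcut (i, j) is valid if all skipped vertices are within eps
            if all(seg_dist(P[k], P[i], P[j]) <= eps for k in range(i + 1, j)):
                if hops[i] + 1 < hops[j]:
                    hops[j], prev[j] = hops[i] + 1, i
    out, j = [], n - 1
    while j != -1:
        out.append(P[j])
        j = prev[j]
    return out[::-1]

path = [(0, 0), (1, 0.1), (2, -0.1), (3, 0), (4, 5)]
approx = min_sharp(path, 0.2)
```

The min-ε problem is then typically solved by searching over candidate ε values (parametric searching, as the key words indicate) with the min-# procedure as the decision oracle.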
Computing partial sums in multidimensional arrays
 In Proc. of the ACM Symp. on Computational Geometry
, 1989
Abstract

Cited by 20 (0 self)
1 Introduction. The central theme of this paper is the complexity of the partial-sum problem: Given a d-dimensional array A with n entries in a semigroup and a d-rectangle q = [a_1, b_1] × ··· × [a_d, b_d], compute the sum σ(A, q) = Σ_{p ∈ q} A(p) ...
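When the semigroup is actually a group (subtraction allowed), the problem has a textbook O(n)-space, O(2^d)-query solution via precomputed prefix sums and inclusion-exclusion; it is the semigroup restriction that makes the problem studied here hard. The group-case baseline for d = 2:

```python
def prefix_sums(A):
    """S[i][j] = sum of A[0..i-1][0..j-1] (one extra row/column of zeros)."""
    rows, cols = len(A), len(A[0])
    S = [[0] * (cols + 1) for _ in range(rows + 1)]
    for i in range(rows):
        for j in range(cols):
            S[i + 1][j + 1] = A[i][j] + S[i][j + 1] + S[i + 1][j] - S[i][j]
    return S

def partial_sum(S, a1, b1, a2, b2):
    """Sum over the rectangle [a1, b1] x [a2, b2] (inclusive indices),
    by inclusion-exclusion on four prefix sums."""
    return S[b1 + 1][b2 + 1] - S[a1][b2 + 1] - S[b1 + 1][a2] + S[a1][a2]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
S = prefix_sums(A)
```

Without inverses the subtractions above are unavailable, and the query/storage trade-off analyzed in the paper takes over.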