Results 1 – 8 of 8
Cache-Oblivious B-Trees, 2000
Cited by 139 (22 self)
Abstract. This paper presents two dynamic search trees attaining near-optimal performance on any hierarchical memory. The data structures are independent of the parameters of the memory hierarchy, e.g., the number of memory levels, the block-transfer size at each level, and the relative speeds of memory levels. The performance is analyzed in terms of the number of memory transfers between two memory levels with an arbitrary block-transfer size of B; this analysis can then be applied to every adjacent pair of levels in a multilevel memory hierarchy. Both search trees match the optimal search bound of Θ(1 + log_{B+1} N) memory transfers. This bound is also achieved by the classic B-tree data structure on a two-level memory hierarchy with a known block-transfer size B. The first search tree supports insertions and deletions in Θ(1 + log_{B+1} N) amortized memory transfers, which matches the B-tree's worst-case bounds. The second search tree supports scanning S consecutive elements optimally in Θ(1 + S/B) memory transfers and supports insertions and deletions in Θ(1 + log_{B+1} N + (log² N)/B) amortized memory transfers, matching the performance of the B-tree for B = Ω(log N log log N).
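The block-size-independent search bound in this line of work comes from storing a balanced tree in the recursive van Emde Boas layout, so a root-to-leaf path crosses O(log_{B+1} N) size-B blocks for every B simultaneously. A minimal sketch over a heap-indexed complete binary tree (the function names are ours, not the paper's):

```python
def veb_layout(levels, root=1):
    """Heap indices of a complete binary tree with `levels` levels,
    listed in van Emde Boas (recursively blocked) memory order."""
    if levels == 1:
        return [root]
    top = levels // 2              # levels in the top recursive subtree
    bot = levels - top             # levels in each bottom subtree
    order = veb_layout(top, root)
    for j in range(2 ** top):      # roots of the 2**top bottom subtrees
        order += veb_layout(bot, root * 2 ** top + j)
    return order

def search_transfers(pos, levels, B, leaf):
    """Distinct size-B blocks touched on the root-to-leaf search path,
    given pos: heap index -> position in the vEB memory order."""
    node, blocks = 1, {pos[1] // B}
    for step in range(levels - 2, -1, -1):
        node = 2 * node + ((leaf >> step) & 1)  # descend by the leaf's bits
        blocks.add(pos[node] // B)
    return len(blocks)

pos = {v: i for i, v in enumerate(veb_layout(16))}   # N = 2**16 - 1 keys
worst = max(search_transfers(pos, 16, 16, leaf) for leaf in range(2 ** 15))
# `worst` stays a handful of blocks for B = 16, versus roughly 13 for
# the naive breadth-first layout, where each deep level is a new block.
```

The layout never consults B, which is exactly the cache-oblivious property: the same memory order is near-optimal for every level of the hierarchy at once.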
Fully Dynamic Delaunay Triangulation in Logarithmic Expected Time per Operation, 1991
Cited by 36 (6 self)
The Delaunay Tree is a hierarchical data structure that was introduced in [6] and analyzed in [7,4]. For a given set of sites S in the plane and an order of insertion for these sites, the Delaunay Tree stores all the successive Delaunay triangulations. As proved before, the Delaunay Tree supports the insertion of a site in logarithmic expected time and linear expected space when the insertion sequence is randomized.
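The "randomize the insertion order, get logarithmic expected cost" phenomenon is easiest to see in a one-dimensional analogue: a plain unbalanced binary search tree built in random order has O(log n) expected node depth, for the same reason the Delaunay Tree's point-location walk through the history of triangulations stays short. A small illustration (an analogy only, not the Delaunay Tree itself):

```python
import random

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert_depth(root, key):
    """Insert key into an unbalanced BST; return (root, depth of new node)."""
    if root is None:
        return Node(key), 0
    node, depth = root, 0
    while True:
        depth += 1
        if key < node.key:
            if node.left is None:
                node.left = Node(key)
                return root, depth
            node = node.left
        else:
            if node.right is None:
                node.right = Node(key)
                return root, depth
            node = node.right

random.seed(0)
keys = list(range(4096))
random.shuffle(keys)            # the randomized insertion sequence
root, total = None, 0
for k in keys:
    root, d = insert_depth(root, k)
    total += d
avg = total / len(keys)
# avg lands near 2 ln n ≈ 16.6 for n = 4096; inserting in sorted order
# instead would give average depth n/2 ≈ 2048.
```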
I/O-Efficient Dynamic Planar Point Location
Cited by 29 (17 self)
We present the first provably I/O-efficient dynamic data structure for point location in a general planar subdivision. Our structure uses O(N/B) disk blocks to store a subdivision of size N, where B is the disk block size. Queries can be answered in ... I/Os in the worst case, and insertions and deletions can be performed in ... and ... I/Os amortized, respectively. Previously, an I/O-efficient dynamic point location structure was known only for monotone subdivisions. Part of our data structure...
Efficient Cross-Trees for External Memory, 1998
Cited by 21 (1 self)
We describe efficient methods for organizing and maintaining large multidimensional data sets in external memory. This is particularly important as access to external memory is currently several orders of magnitude slower than access to main memory, and current technology advances are likely to make this gap even wider. We focus particularly on multidimensional data sets that must be kept simultaneously sorted under several total orderings: these orderings may be defined by the user, and may also be changed dynamically by the user throughout the lifetime of the data structures, according to the application at hand. Besides standard insertions and deletions of data, our proposed solution can efficiently perform split and concatenate operations on the whole data sets according to any ordering. This allows the user: (1) to dynamically rearrange any ordering of a segment of data in time that is faster than recomputing the new ordering from scratch; (2) to efficiently answer queries rel...
Efficient Splitting and Merging Algorithms for Order Decomposable Problems, 1997
Cited by 11 (2 self)
Let S be a set whose items are sorted with respect to d ≥ 1 total orders ≤_1, ..., ≤_d, and which is subject to dynamic operations, such as insertions of a single item, deletions of a single item, and split and concatenate operations performed according to any chosen order ≤_i (1 ≤ i ≤ d). This generalizes to dimension d > 1 the notion of concatenable data structures, such as 2-3-trees, which support splits and concatenates under a single total order. The main contribution of this paper is a general and novel technique for solving order decomposable problems on S, which yields new and efficient concatenable data structures for dimension d > 1. By using our technique we maintain S with the following time bounds: O(log n) for the insertion or the deletion of a single item, where n is the number of items currently in S; O(n^{1-1/d}) for splits and concatenates along any order, and for rectangular range queries. The space required is O(n). We provide several applications of ...
Cache-Oblivious B-Trees
We present dynamic search-tree data structures that perform well in the setting of a hierarchical memory (including various levels of cache, disk, etc.), but do not depend on the number of memory levels, the block sizes and number of blocks at each level, or the relative speeds of memory access. In particular, between any pair of levels in the memory hierarchy, where transfers between the levels are done in blocks of size B, our data structures match the optimal search bound of Θ(log_B N) memory transfers. This bound is also achieved by the classic B-tree data structure, but only when the block size B is known, which in practice requires careful tuning on each machine platform. One of our data structures supports insertions and deletions in Θ(log_B N) amortized memory transfers, which matches the B-tree's worst-case bounds. We augment this structure to support scans optimally in Θ(N/B) memory transfers. In this second data structure, insertions and deletions require Θ(log_B N + (log² N)/B) amortized memory transfers. Thus, we match the performance of the B-tree for B = Ω(log N log log N).
Dynamic Dictionaries in Constant Worst-Case Time
We introduce a technique to maintain a set of n elements from a universe of size u with membership and insert/delete operations, so that elements are associated with r-bit satellite data. We achieve constant worst-case time for all the operations, at the price of spending u + o(u) + O(nr + n log log log u) bits of space. Only the variant where the space is of the form O(nr + n log u) was exhaustively explored before, yet in that case existing lower bounds prevent achieving constant worst-case times. As a byproduct, we improve a folklore data structure for initializing an array of n elements in constant time, by reducing its space requirement from 2n log n to n + o(n) bits.
Key words: Algorithms and data structures, succinct data structures, dynamic perfect hashing, dynamic dictionaries with satellite information.
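The folklore structure referred to works by cross-indexing a stack of written positions, so a slot can prove it has been initialized without the array ever being cleared. A sketch of that classic 2n log n-bit version (the paper's improvement to n + o(n) bits is not attempted here; in Python the backing lists are allocated eagerly, so only the logic, not the O(1) allocation, carries over):

```python
class InitArray:
    """Folklore constant-time-initializable array: O(1) read and write,
    no up-front pass to clear the n cells."""
    def __init__(self, n, default=0):
        self.default = default
        self.data = [None] * n   # values; conceptually uninitialized garbage
        self.idx = [0] * n       # idx[i]: claimed position of i in `stack`
        self.stack = [0] * n     # stack[:count] lists the written slots
        self.count = 0

    def _written(self, i):
        # Slot i is initialized iff its stack pointer is in range and
        # the stack entry points back at i (garbage fails this check).
        j = self.idx[i]
        return j < self.count and self.stack[j] == i

    def read(self, i):
        return self.data[i] if self._written(i) else self.default

    def write(self, i, v):
        if not self._written(i):     # first write: register i on the stack
            self.idx[i] = self.count
            self.stack[self.count] = i
            self.count += 1
        self.data[i] = v

a = InitArray(8, default=-1)
a.write(3, 42)
a.write(0, 7)
```

The two index arrays of log n-bit entries are where the 2n log n bits in the abstract come from; the values themselves are extra.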