Results 1–10 of 14
Lower bounds for Union-Split-Find related problems on random access machines
, 1994
Cited by 49 (3 self)
We prove Ω(√(log log n)) lower bounds on the random access machine complexity of several dynamic, partially dynamic and static data structure problems, including the union-split-find problem, dynamic prefix problems and one-dimensional range query problems. The proof techniques include a general technique using perfect hashing for reducing static data structure problems (with a restriction on the size of the structure) into partially dynamic data structure problems (with no such restriction), thus providing a way to transfer lower bounds. We use a generalization of a method due to Ajtai for proving the lower bounds on the static problems, but describe the proof in terms of communication complexity, revealing a striking similarity to the proof used by Karchmer and Wigderson for proving lower bounds on the monotone circuit depth of connectivity.

1 Introduction and summary of results

In this paper we give lower bounds for the complexity of implementing several dynamic and sta...
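As a rough illustration of the interval formulation of union-split-find (an assumed formulation for this sketch; the paper works in the formal RAM model, and the class and method names here are invented), a sorted list of interval left endpoints supports all three operations:

```python
from bisect import bisect_right, insort

class IntervalUnionSplitFind:
    """Partition of {1, ..., n} into consecutive intervals,
    represented by the sorted list of interval left endpoints."""
    def __init__(self, n):
        self.n = n
        self.bounds = [1]          # one interval covering everything

    def find(self, x):
        # left endpoint of the interval containing x
        return self.bounds[bisect_right(self.bounds, x) - 1]

    def split(self, x):
        # make x the left endpoint of a new interval
        if self.find(x) != x:
            insort(self.bounds, x)

    def union(self, x):
        # merge the interval containing x with its right neighbour
        i = bisect_right(self.bounds, x)   # index of the neighbour's left endpoint
        if i < len(self.bounds):
            self.bounds.pop(i)
```

Each operation does O(log n) comparisons via binary search (the list insertions are O(n) moves; the point of the lower bound is that no RAM data structure can make all three operations constant-time).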
A General Lower Bound on the I/O-Complexity of Comparison-based Algorithms
 In Proc. Workshop on Algorithms and Data Structures, LNCS 709
, 1993
Cited by 32 (11 self)
We show a general relationship between the number of comparisons and the number of I/O-operations needed to solve a given problem. This relationship enables one to show lower bounds on the number of I/O-operations needed to solve a problem whenever a lower bound on the number of comparisons is known. We use the result to show lower bounds on the I/O-complexity of a number of problems where known techniques only give trivial bounds. Among these are the problem of removing duplicates from a multiset, a problem of great importance in e.g. relational database systems, and the problem of determining the mode (the most frequently occurring element) of a multiset. We develop algorithms for these problems in order to show that the lower bounds are tight.
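Both problems reduce to a sort followed by a single scan; in the external-memory setting the sort is the I/O-expensive part. A minimal in-memory sketch (with `sorted()` standing in for an I/O-efficient external sort; function names are invented):

```python
def remove_duplicates(multiset):
    # sort, then one scan keeping each distinct value once
    out, prev = [], object()
    for x in sorted(multiset):
        if x != prev:
            out.append(x)
            prev = x
    return out

def mode(multiset):
    # sort, then one scan tracking the longest run of equal elements
    best, best_run, run, prev = None, 0, 0, object()
    for x in sorted(multiset):
        run = run + 1 if x == prev else 1
        prev = x
        if run > best_run:
            best, best_run = x, run
    return best
```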
Exponential structures for efficient cache-oblivious algorithms
 In Proceedings of the 29th International Colloquium on Automata, Languages and Programming
, 2002
Cited by 20 (3 self)
Abstract. We present cache-oblivious data structures based upon exponential structures. These data structures perform well on a hierarchical memory but do not depend on any parameters of the hierarchy, including the block sizes and number of blocks at each level. The problems we consider are searching, partial persistence and planar point location. On a hierarchical memory where data is transferred in blocks of size B, some of the results we achieve are:
– We give a linear-space data structure for dynamic searching that supports searches and updates in optimal O(log_B N) worst-case I/Os, eliminating amortization from the result of Bender, Demaine, and Farach-Colton (FOCS ’00). We also consider finger searches and updates and batched searches.
– We support partially-persistent operations on an ordered set, namely, we allow searches in any previous version of the set and updates to the latest version of the set (an update creates a new version of the set). All operations take an optimal O(log_B(m + N)) amortized I/Os, where N is the size of the version being searched/updated, and m is the number of versions.
– We solve the planar point location problem in linear space, taking optimal O(log_B N) I/Os for point location queries, where N is the number of line segments specifying the partition of the plane. The preprocessing requires O((N/B) log_{M/B} N) I/Os, where M is the size of the ‘inner’ memory.
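The core cache-oblivious idea behind the O(log_B N) search bound can be illustrated with the classic static van Emde Boas layout (a standard textbook construction, not the paper's exponential structures; all names here are invented): a perfect binary search tree is stored with the top half of its levels first, then each bottom subtree contiguously, so any block size B automatically captures Θ(log B) consecutive levels per transfer.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def build_perfect_bst(keys):
    # keys sorted, len(keys) == 2**h - 1 for some h
    if not keys:
        return None
    m = len(keys) // 2
    return Node(keys[m], build_perfect_bst(keys[:m]), build_perfect_bst(keys[m+1:]))

def roots_at_depth(node, d):
    if d == 0:
        return [node]
    return roots_at_depth(node.left, d - 1) + roots_at_depth(node.right, d - 1)

def veb_order(root, height):
    # top half of the levels first, then each bottom subtree as one contiguous block
    if height == 1:
        return [root]
    top = height // 2
    order = veb_order(root, top)
    for r in roots_at_depth(root, top):
        order += veb_order(r, height - top)
    return order

def flatten(sorted_keys):
    height = len(sorted_keys).bit_length()      # 2**height - 1 keys
    order = veb_order(build_perfect_bst(sorted_keys), height)
    pos = {id(n): i for i, n in enumerate(order)}
    keys = [n.key for n in order]
    left = [pos[id(n.left)] if n.left else -1 for n in order]
    right = [pos[id(n.right)] if n.right else -1 for n in order]
    return keys, left, right

def search(keys, left, right, key):
    i = 0                                       # the root is always laid out first
    while i != -1:
        if key == keys[i]:
            return True
        i = left[i] if key < keys[i] else right[i]
    return False
```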
Adapting Radix Sort to the Memory Hierarchy
 In ALENEX, Workshop on Algorithm Engineering and Experimentation
, 2000
Cited by 15 (2 self)
this paper, we focus on one such: the integer sorting algorithm least significant bit (LSB) radix sort. LSB radix sort sorts w-bit integer keys with an r-bit radix in O(⌈w/r⌉(n + 2^r)) ...
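The ⌈w/r⌉ bucketing passes behind that bound can be sketched in a few lines (a generic illustration of textbook LSB radix sort, not the paper's memory-hierarchy-tuned implementation; names are invented):

```python
def lsb_radix_sort(keys, w, r):
    """Sort w-bit non-negative integers with an r-bit radix:
    ceil(w/r) stable bucketing passes over 2**r buckets,
    O(ceil(w/r) * (n + 2**r)) time in total."""
    mask = (1 << r) - 1
    passes = -(-w // r)                 # ceil(w/r)
    for p in range(passes):
        shift = p * r
        buckets = [[] for _ in range(1 << r)]
        for k in keys:                  # stable: appends preserve order
            buckets[(k >> shift) & mask].append(k)
        keys = [k for b in buckets for k in b]
    return keys
```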
External Memory Value Iteration
, 2007
Cited by 4 (1 self)
We propose a unified approach to disk-based search for deterministic, nondeterministic, and probabilistic (MDP) settings. We provide the design of an external Value Iteration algorithm that performs at most O(l_G · scan(|E|) + t_max · sort(|E|)) I/Os, where l_G is the length of the largest back-edge in the breadth-first search graph G having |E| edges, t_max is the maximum number of iterations, and scan(n) and sort(n) are the I/O complexities for externally scanning and sorting n items. The new algorithm is evaluated over large instances of known benchmark problems. As shown, the proposed algorithm is able to solve very large problems that do not fit into the available RAM and are thus out of reach for other exact algorithms.
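The computation being externalized is the textbook Bellman backup. A minimal in-memory sketch of plain value iteration (only to show the update the external algorithm reorganizes into scans and sorts; the `transitions(s, a)` interface is an invented assumption, not the paper's API):

```python
def value_iteration(states, actions, transitions, gamma=0.9, eps=1e-10):
    """V(s) = max_a sum_s' P(s'|s,a) * (R(s,a,s') + gamma * V(s')).
    transitions(s, a) yields (next_state, probability, reward) triples."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(sum(p * (r + gamma * V[s2]) for s2, p, r in transitions(s, a))
                       for a in actions(s))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V
```

The external version's difficulty is precisely that `V` cannot be held in RAM, so each sweep must be realized by sorting and scanning edge lists on disk.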
Sorting n Objects With a k-Sorter
 IEEE Transactions on Computers
, 1990
Cited by 4 (0 self)
A k-sorter is a device that sorts k objects in unit time. We define the complexity of an algorithm that uses a k-sorter as the number of applications of the k-sorter. In this measure, the complexity of sorting n objects is between n log n/(k log k) and 4n log n/(k log k), up to first order terms in n and k.

1 Introduction

Although there has been much work on sorting a fixed number of elements with special hardware, we have not seen any results on how to use special-purpose sorting devices when one wants to sort more data than the sorting device was designed for. Eventually, coprocessors that sort a fixed number of elements may become as common as coprocessors that perform floating point operations. Thus it becomes important to be able to write software that effectively takes advantage of a k-sorter. (Supported in part by National Science Foundation grants CCR-8808949 and CCR-8958528 and by graduate fellowships from the NSF and the Fannie and John Hertz Foundation.) This research was com...
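The cost measure is easy to model: wrap the device, count applications. The naive strategy below (sort chunks of k, then k-way merge extracting one element per application) is deliberately suboptimal, using roughly n log n / log k applications, a factor k above the n log n/(k log k) bound the paper proves achievable; all names here are invented for illustration.

```python
class KSorter:
    """Model of a hardware k-sorter; counts applications."""
    def __init__(self, k):
        self.k = k
        self.applications = 0

    def sort(self, items):
        assert len(items) <= self.k
        self.applications += 1
        return sorted(items)

def merge_runs(runs, ks):
    # naive merge of at most k sorted runs: one k-sorter call per output element
    idx = [0] * len(runs)
    out, total = [], sum(len(r) for r in runs)
    while len(out) < total:
        heads = [(r[idx[j]], j) for j, r in enumerate(runs) if idx[j] < len(r)]
        val, j = ks.sort(heads)[0]      # smallest current head
        out.append(val)
        idx[j] += 1
    return out

def ksort(items, ks):
    k = ks.k
    # phase 1: one application per chunk of k
    runs = [ks.sort(items[i:i + k]) for i in range(0, len(items), k)]
    # phase 2: repeated k-way merge passes
    while len(runs) > 1:
        runs = [merge_runs(runs[g:g + k], ks) for g in range(0, len(runs), k)]
    return runs[0] if runs else []
```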
I/O-Efficient Join Algorithms for Temporal, Spatial, and Constraint Databases
, 1996
Cited by 3 (0 self)
We examine I/O-efficient algorithms for join problems arising in spatial, temporal, and constraint databases. Along with retrieval (implemented by hashing or indexing), the join is one of the most I/O-intensive operations in database systems. The join problem in many data models can be defined as the intersection between two sets of orthogonal rectangles in d dimensions. In this paper, we present new I/O-efficient algorithms for the d-dimensional join problem in one, two, and three dimensions, and then generalize our algorithms to arbitrary higher dimensions. Let N be the total number of rectangles in the two sets to be joined, M the total amount of memory available, B the disk block size, and T the total number of pairs in the output of the join. Define n = N/B, m = M/B, and t = T/B. For one and two dimensions, we provide I/O-optimal join algorithms that run in O(n log_m n + t) I/O operations, and that are simple and practical and have direct applications to temporal and spat...
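In one dimension the rectangles are intervals, and the join is the classic sort-and-sweep: merge the two interval lists by left endpoint, keeping each side's still-open intervals active. A minimal in-memory sketch (the paper's contribution is doing this I/O-efficiently on disk; names are invented):

```python
def interval_join(A, B):
    """Report every pair (a, b), a in A, b in B, of overlapping closed intervals."""
    A, B = sorted(A), sorted(B)
    out, active_a, active_b = [], [], []
    i = j = 0
    while i < len(A) or j < len(B):
        if j >= len(B) or (i < len(A) and A[i][0] <= B[j][0]):
            lo, hi = A[i]; i += 1
            # drop B-intervals that ended before this one starts
            active_b = [b for b in active_b if b[1] >= lo]
            out += [((lo, hi), b) for b in active_b]
            active_a.append((lo, hi))
        else:
            lo, hi = B[j]; j += 1
            active_a = [a for a in active_a if a[1] >= lo]
            out += [(a, (lo, hi)) for a in active_a]
            active_b.append((lo, hi))
    return out
```

Each overlapping pair is reported exactly once, when the later-starting interval of the pair is swept.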
Memory Reference Locality in Binary Search Trees
, 1995
Cited by 2 (1 self)
Balanced binary search trees are widely used main memory index structures. They provide logarithmic cost for searching, insertion, and deletion, and efficient ordered scanning of keys. Long-term trends in computer technology have emphasized the effect of memory reference locality on algorithm performance. For example, the search performance of large structurally equivalent binary trees can double if nodes are located optimally in memory relative to each other. Unfortunately the traditional Random Access Memory (RAM) model cannot distinguish algorithms with good memory reference locality from algorithms with poor memory reference locality. We therefore define a new ...