Results 1–10 of 19
The buffer tree: A new technique for optimal I/O algorithms
 University of Aarhus
, 1995
Lower bounds for Union-Split-Find related problems on random access machines
, 1994
Abstract

Cited by 51 (3 self)
We prove Ω(√(log log n)) lower bounds on the random access machine complexity of several dynamic, partially dynamic and static data structure problems, including the union-split-find problem, dynamic prefix problems and one-dimensional range query problems. The proof techniques include a general technique using perfect hashing for reducing static data structure problems (with a restriction on the size of the structure) to partially dynamic data structure problems (with no such restriction), thus providing a way to transfer lower bounds. We use a generalization of a method due to Ajtai for proving the lower bounds on the static problems, but describe the proof in terms of communication complexity, revealing a striking similarity to the proof used by Karchmer and Wigderson for proving lower bounds on the monotone circuit depth of connectivity.

1 Introduction and summary of results

In this paper we give lower bounds for the complexity of implementing several dynamic and sta...
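As a point of reference for the problem being bounded, the interval variant of union-split-find can be kept in memory as a sorted list of interval boundaries. This is an illustrative sketch of the problem's three operations, not the paper's construction; the class name and representation are assumptions:

```python
import bisect

class UnionSplitFind:
    """Maintain a partition of {0, ..., n-1} into consecutive intervals.
    Each interval is named by its smallest element (its left boundary)."""

    def __init__(self, n):
        self.n = n             # size of the universe
        self.boundaries = [0]  # sorted left boundaries of the intervals

    def find(self, x):
        # Name of the interval containing x: the largest boundary <= x.
        i = bisect.bisect_right(self.boundaries, x) - 1
        return self.boundaries[i]

    def split(self, x):
        # Make x the left boundary of a new interval (if it isn't already).
        i = bisect.bisect_left(self.boundaries, x)
        if i == len(self.boundaries) or self.boundaries[i] != x:
            self.boundaries.insert(i, x)

    def union(self, x):
        # Merge the interval starting at boundary x > 0 with its predecessor.
        i = bisect.bisect_left(self.boundaries, x)
        if 0 < i < len(self.boundaries) and self.boundaries[i] == x:
            del self.boundaries[i]
```

Each operation here costs an O(log n) binary search plus an O(n) list shift; the paper's point is that even far cleverer RAM data structures cannot beat the stated lower bound per operation.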
A General Lower Bound on the I/O-Complexity of Comparison-based Algorithms
 In Proc. Workshop on Algorithms and Data Structures, LNCS 709
, 1993
Abstract

Cited by 38 (11 self)
We show a general relationship between the number of comparisons and the number of I/O operations needed to solve a given problem. This relationship enables one to show lower bounds on the number of I/O operations needed to solve a problem whenever a lower bound on the number of comparisons is known. We use the result to show lower bounds on the I/O complexity of a number of problems where known techniques give only trivial bounds. Among these are the problem of removing duplicates from a multiset, a problem of great importance in, e.g., relational database systems, and the problem of determining the mode (the most frequently occurring element) of a multiset. We develop algorithms for these problems in order to show that the lower bounds are tight.
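For concreteness, both multiset problems mentioned above have simple comparison-based solutions whose comparison counts this kind of result translates into I/O lower bounds. A sort-and-scan sketch (illustrative, not the paper's algorithms):

```python
def remove_duplicates(items):
    """Comparison-based duplicate removal: sort, then keep one
    representative per run of equal elements."""
    out = []
    for x in sorted(items):
        if not out or out[-1] != x:
            out.append(x)
    return out

def mode(items):
    """Comparison-based mode: sort, then scan for the longest run
    of equal elements."""
    best, best_len = None, 0
    run, run_len = None, 0
    for x in sorted(items):
        run_len = run_len + 1 if x == run else 1
        run = x
        if run_len > best_len:
            best, best_len = x, run_len
    return best
```

Both routines are dominated by the comparison-based sort, which is exactly the step the paper's comparison-to-I/O relationship bounds from below in external memory.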
Exponential structures for efficient cache-oblivious algorithms
 In Proceedings of the 29th International Colloquium on Automata, Languages and Programming
, 2002
Abstract

Cited by 20 (2 self)
Abstract. We present cache-oblivious data structures based upon exponential structures. These data structures perform well on a hierarchical memory but do not depend on any parameters of the hierarchy, including the block sizes and number of blocks at each level. The problems we consider are searching, partial persistence and planar point location. On a hierarchical memory where data is transferred in blocks of size B, some of the results we achieve are:
– We give a linear-space data structure for dynamic searching that supports searches and updates in optimal O(log_B N) worst-case I/Os, eliminating amortization from the result of Bender, Demaine, and Farach-Colton (FOCS '00). We also consider finger searches and updates and batched searches.
– We support partially-persistent operations on an ordered set, namely, we allow searches in any previous version of the set and updates to the latest version of the set (an update creates a new version of the set). All operations take an optimal O(log_B (m + N)) amortized I/Os, where N is the size of the version being searched/updated, and m is the number of versions.
– We solve the planar point location problem in linear space, taking optimal O(log_B N) I/Os for point location queries, where N is the number of line segments specifying the partition of the plane. The preprocessing requires O((N/B) log_{M/B} N) I/Os, where M is the size of the 'inner' memory.
Adapting Radix Sort to the Memory Hierarchy
 In ALENEX, Workshop on Algorithm Engineering and Experimentation
, 2000
Abstract

Cited by 16 (3 self)
In this paper, we focus on one such: the integer sorting algorithm least significant bit (LSB) radix sort. LSB radix sort sorts w-bit integer keys with an r-bit radix in O(⌈w/r⌉(n + 2^r)) time ...
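A minimal in-memory version of the algorithm matches the stated O(⌈w/r⌉(n + 2^r)) bound: ⌈w/r⌉ stable counting passes, each touching n keys and 2^r buckets. The parameter names w and r follow the abstract; the code is a sketch, not the paper's cache-tuned implementation:

```python
def lsb_radix_sort(keys, w, r):
    """Sort non-negative w-bit integers using an r-bit radix:
    ceil(w/r) stable bucket passes from least to most significant digit."""
    mask = (1 << r) - 1
    passes = (w + r - 1) // r          # ceil(w / r)
    for p in range(passes):
        shift = p * r
        buckets = [[] for _ in range(1 << r)]   # 2**r buckets
        for k in keys:
            buckets[(k >> shift) & mask].append(k)  # stable append
        keys = [k for b in buckets for k in b]
    return keys
```

Stability of each pass is what makes the digit-by-digit approach correct: keys equal in the current digit keep the order established by earlier (less significant) passes.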
External Memory Value Iteration
, 2007
Abstract

Cited by 5 (2 self)
We propose a unified approach to disk-based search for deterministic, non-deterministic, and probabilistic (MDP) settings. We provide the design of an external Value Iteration algorithm that performs at most O(l_G · scan(|E|) + t_max · sort(|E|)) I/Os, where l_G is the length of the largest back edge in the breadth-first search graph G having |E| edges, t_max is the maximum number of iterations, and scan(n) and sort(n) are the I/O complexities for externally scanning and sorting n items. The new algorithm is evaluated over large instances of known benchmark problems. As shown, the proposed algorithm is able to solve very large problems that do not fit into the available RAM and are thus out of reach for other exact algorithms.
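The in-memory counterpart of the external algorithm is standard value iteration over an MDP's Bellman equations. A compact sketch, where the transition encoding (state → action → outcome triples) is an illustrative assumption rather than the paper's representation:

```python
def value_iteration(transitions, gamma=0.95, eps=1e-6, t_max=1000):
    """transitions: dict mapping state -> action -> list of
    (probability, next_state, reward) triples.
    Repeats Bellman backups until the largest update falls below eps
    or t_max iterations have run; returns the state-value dict."""
    V = {s: 0.0 for s in transitions}
    for _ in range(t_max):
        delta = 0.0
        for s, actions in transitions.items():
            best = max(
                sum(p * (rew + gamma * V[s2]) for p, s2, rew in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    return V
```

The external variant replaces the random accesses to V with scans and sorts of the edge file, which is where the scan(|E|) and sort(|E|) terms in the stated I/O bound come from.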
I/O-Efficient Join Algorithms for Temporal, Spatial, and Constraint Databases, Bell Labs Tech Report
, 1996
Graph Algorithms for Multicores with Multilevel Caches
, 2009
Abstract

Cited by 2 (0 self)
Historically, the primary model of computation employed in the design and analysis of algorithms has been the sequential RAM model. However, recent developments in computer architecture have reduced the efficacy of the sequential RAM model for algorithmic development. In response, theoretical computer scientists have developed models of computation which better reflect these modern architectures. In this project, we consider a variety of graph problems on parallel, cache-efficient, and multicore models of computation. We introduce each model by defining the analysis of algorithms on these models. Then, for each model, we present current results for the problems of prefix sums, list ranking, various tree problems, connected components, and minimum spanning tree. Finally, we present our novel results, which include the multicore-oblivious extension of current results on a private-cache multicore model to a more general multi-level multicore ...