Results 11 – 14 of 14
A Generalization of Binomial Queues
Information Processing Letters, 1996
Abstract

Cited by 2 (0 self)
We give a generalization of binomial queues involving an arbitrary sequence (m_k), k = 0, 1, 2, ..., of integers greater than one. Different sequences lead to different worst-case bounds for the priority queue operations, allowing the user to adapt the data structure to the needs of a specific application. Examples include the first priority queue to combine a sublogarithmic worst-case bound for Meld with a sublinear worst-case bound for Delete min.

Keywords: Data structures; Meldable priority queues.

1 Introduction. The binomial queue, introduced in 1978 by Vuillemin [14], is a data structure for meldable priority queues. In meldable priority queues, the basic operations are insertion of a new item into a queue, deletion of the item having minimum key in a queue, and melding of two queues into a single queue. The binomial queue is one of many data structures which support these operations at a worst-case cost of O(log n) for a queue of n items. Theoretical [2] and empirical [9] evidence i...
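As context, the classical binomial queue that this paper generalizes can be sketched in a few lines: a queue is a list of heap-ordered trees of distinct ranks, and Meld works like binary-counter addition, linking equal-rank trees. This is a minimal illustrative sketch (all names are ours, and it fixes m_k = 2 for every k rather than implementing the paper's generalized construction):

```python
class Node:
    """Root of a binomial tree: a rank-r tree has 2^r nodes."""
    def __init__(self, key):
        self.key = key
        self.rank = 0
        self.children = []  # kept in increasing rank order

def link(a, b):
    """Combine two trees of equal rank; the smaller root wins (min-heap)."""
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    a.rank += 1
    return a

def meld(h1, h2):
    """Meld two queues, analogous to adding two binary counters."""
    buckets = {}
    for t in h1 + h2:
        while t.rank in buckets:          # a "carry": two trees share a rank
            t = link(t, buckets.pop(t.rank))
        buckets[t.rank] = t
    return [buckets[r] for r in sorted(buckets)]

def insert(h, key):
    return meld(h, [Node(key)])

def delete_min(h):
    t = min(h, key=lambda n: n.key)       # the minimum is always some root
    rest = [n for n in h if n is not t]
    return t.key, meld(rest, t.children)
```

With at most one tree per rank, a queue of n items has O(log n) roots, giving the O(log n) worst-case bounds quoted above; varying how many equal-rank trees a link consumes is roughly the degree of freedom the paper's generalization exposes.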
On sorting, heaps, and minimum spanning trees
 Algorithmica
Abstract

Cited by 1 (1 self)
Let A be a set of size m. Obtaining the first k ≤ m elements of A in ascending order can be done in optimal O(m + k log k) time. We present Incremental Quicksort (IQS), an algorithm (online on k) which incrementally gives the next smallest element of the set, so that the first k elements are obtained in optimal expected time for any k. Based on IQS, we present the Quickheap (QH), a simple and efficient priority queue for main and secondary memory. Quickheaps are comparable with classical binary heaps in simplicity, yet are more cache-friendly. This makes them an excellent alternative for a secondary memory implementation. We show that the expected amortized CPU cost per operation over a Quickheap of m elements is O(log m), and this translates into O((1/B) log(m/M)) I/O cost with main memory size M and block size B, in a cache-oblivious fashion. As a direct application, we use our techniques to implement classical Minimum Spanning Tree (MST) algorithms. We use IQS to implement Kruskal’s MST algorithm and QHs to implement Prim’s. Experimental results show that IQS, QHs, external QHs, and our Kruskal’s and Prim’s MST variants are competitive, and in many cases better in practice than current state-of-the-art alternative (and much more sophisticated) implementations.
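The incremental sorting idea can be sketched as quickselect that remembers its pivots on a stack, so later extractions reuse earlier partitioning work. A minimal sketch (function names are ours, not the paper's code):

```python
import random

def partition(A, lo, hi, p):
    """Partition A[lo..hi] around A[p]; return the pivot's final index."""
    A[p], A[hi] = A[hi], A[p]
    pivot, i = A[hi], lo
    for j in range(lo, hi):
        if A[j] < pivot:
            A[i], A[j] = A[j], A[i]
            i += 1
    A[i], A[hi] = A[hi], A[i]
    return i

def incremental_sort(A):
    """Yield the elements of A in ascending order, lazily (the IQS idea)."""
    A = list(A)
    S = [len(A)]                 # stack of pivot positions; sentinel len(A)
    for idx in range(len(A)):
        while S[-1] > idx:       # partition until a pivot lands at idx
            p = random.randrange(idx, S[-1])
            S.append(partition(A, idx, S[-1] - 1, p))
        S.pop()                  # A[idx] is now the next smallest element
        yield A[idx]
```

Taking only the first k values, e.g. with itertools.islice(incremental_sort(A), k), achieves the expected O(m + k log k) bound quoted above, since only the left part of the pivot stack is ever refined.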
Rank-Pairing Heaps
Abstract
Abstract. We introduce the rank-pairing heap, a heap (priority queue) implementation that combines the asymptotic efficiency of Fibonacci heaps with much of the simplicity of pairing heaps. Unlike all other heap implementations that match the bounds of Fibonacci heaps, our structure needs only one cut and no other structural changes per key decrease; the trees representing the heap can evolve to have arbitrary structure. Our initial experiments indicate that rank-pairing heaps perform almost as well as pairing heaps on typical input sequences and better on worst-case sequences.
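For context, the pairing heap whose simplicity the abstract refers to can be sketched as follows; deletion uses the classic two-pass combining of the root's children. This is an illustrative sketch of the pairing-heap baseline, not of the rank-pairing structure itself:

```python
class PNode:
    def __init__(self, key):
        self.key = key
        self.children = []

def link(a, b):
    """Make the larger-keyed root a child of the smaller (min-heap order)."""
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    return a

def insert(root, key):
    return link(root, PNode(key))

def delete_min(root):
    """Two-pass scheme: link children in pairs, then fold right to left."""
    kids = root.children
    paired = [link(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
              for i in range(0, len(kids), 2)]
    new_root = None
    for t in reversed(paired):
        new_root = link(new_root, t)
    return root.key, new_root
```

Every operation here reduces to the single `link` primitive; the rank-pairing heap's contribution is adding rank information to such trees so that decrease-key needs only one cut while matching Fibonacci-heap bounds.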
I/O-Efficient Batched Union-Find and Its . . .
Abstract
Despite extensive study over the last four decades and numerous applications, no I/O-efficient algorithm is known for the union-find problem. In this paper we present an I/O-efficient algorithm for the batched (offline) version of the union-find problem. Given any sequence of N mixed union and find operations, where each union operation joins two distinct sets, our algorithm uses O(SORT(N)) = O((N/B) log_{M/B}(N/B)) I/Os, where M is the memory size and B is the disk block size. This bound is asymptotically optimal in the worst case. If there are union operations that join a set with itself, our algorithm uses O(SORT(N) + MST(N)) I/Os, where MST(N) is the number of I/Os needed to compute the minimum spanning tree of a graph with N edges. We also describe a simple and practical O(SORT(N) log(N/M))-I/O algorithm, which we have implemented.

The main motivation for our study of the union-find problem arises from problems in terrain analysis. A terrain can be abstracted as a height function defined over R^2, and many problems that deal with such functions require a union-find data structure. With the emergence of modern mapping technologies, huge amounts of data are being generated that are too large to fit in memory, so I/O-efficient algorithms are needed to process this data efficiently. In this paper, we study two terrain analysis problems that benefit from a union-find data structure: (i) computing topological persistence and (ii) constructing the contour tree. We give the first O(SORT(N))-I/O algorithms for these two problems, assuming that the input terrain is represented as a triangular mesh with N vertices.

Finally, we report some preliminary experimental results, showing that our algorithms give order-of-magnitude improvement over previous methods on large data sets that do not fit in memory.
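For contrast, the standard in-memory union-find (union by rank plus path halving) is only a few lines, but its pointer-chasing access pattern is exactly what makes it hard to run I/O-efficiently on disk-resident data. A baseline sketch:

```python
class UnionFind:
    """Classic in-memory disjoint sets: union by rank, path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False          # x and y were already in the same set
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```

Each find may touch parent cells scattered across the array, so on data larger than memory every step can cost an I/O; the batched algorithm described above avoids this random access by processing the whole operation sequence offline with sorting.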