Results 1 - 5 of 5
Rank-Pairing Heaps
Abstract

Cited by 5 (2 self)
Abstract. We introduce the rank-pairing heap, a heap (priority queue) implementation that combines the asymptotic efficiency of Fibonacci heaps with much of the simplicity of pairing heaps. Unlike all other heap implementations that match the bounds of Fibonacci heaps, our structure needs only one cut and no other structural changes per key decrease; the trees representing the heap can evolve to have arbitrary structure. Our initial experiments indicate that rank-pairing heaps perform almost as well as pairing heaps on typical input sequences and better on worst-case sequences.
Violation heaps: A better substitute for Fibonacci heaps
, 2008
Abstract

Cited by 3 (0 self)
We give a priority queue that achieves the same amortized bounds as Fibonacci heaps. Namely, find-min requires O(1) worst-case time, insert, meld and decrease-key require O(1) amortized time, and delete-min requires O(log n) amortized time. Our structure is simple and promises a more efficient practical behavior compared to any other known Fibonacci-like heap.
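For orientation, the operation set listed in this abstract can be exercised on a plain binary heap built on Python's heapq module. This is not the violation heap itself, only a baseline sketch with weaker bounds: insert and delete-min cost O(log n) rather than O(1) amortized, and meld costs O(n) rather than O(1).

```python
import heapq

class BinaryHeap:
    """Baseline binary heap (NOT a violation heap): illustrates the
    interface from the abstract with weaker, non-Fibonacci bounds."""

    def __init__(self, items=()):
        self._h = list(items)
        heapq.heapify(self._h)          # O(n) bottom-up heapify

    def insert(self, key):
        heapq.heappush(self._h, key)    # O(log n), vs O(1) amortized

    def find_min(self):
        return self._h[0]               # O(1) worst case

    def delete_min(self):
        return heapq.heappop(self._h)   # O(log n) amortized and worst case

    def meld(self, other):
        # Melding two binary heaps costs O(n) via re-heapify,
        # vs O(1) amortized for Fibonacci-like heaps.
        self._h += other._h
        heapq.heapify(self._h)
```

The gap in the meld and insert bounds is exactly what Fibonacci-like structures such as the violation heap close.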
A back-to-basics empirical study of priority queues
, 2013
Abstract

Cited by 1 (0 self)
The theory community has proposed several new heap variants in the recent past which have remained largely untested experimentally. We take the field back to the drawing board, with straightforward implementations of both classic and novel structures using only standard, well-known optimizations. We study the behavior of each structure on a variety of inputs, including artificial workloads, workloads generated by running algorithms on real map data, and workloads from a discrete event simulator used in recent systems networking research. We provide observations about which characteristics are most correlated to performance. For example, we find that the L1 cache miss rate appears to be strongly correlated with wall-clock time. We also provide observations about how the input sequence affects the relative performance of the different heap variants. For example, we show (both theoretically and in practice) that certain random insertion-deletion sequences are degenerate and can lead to misleading results. Overall, our findings suggest that while the conventional wisdom holds in some cases, it is sorely mistaken in others.
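The random insertion-deletion sequences the authors mention can be generated in spirit with a small harness. The function names and the specific warm-up-then-alternate shape below are illustrative assumptions, not the paper's actual workloads; with uniform random keys, such alternating sequences tend to be easy cases, since delete-min often removes a recently inserted key.

```python
import heapq
import random

def random_insert_delete_workload(n_warmup, n_ops, seed=0):
    """Hypothetical workload: n_warmup random inserts, then alternate
    one random insert with one delete-min (heap size stays constant)."""
    rng = random.Random(seed)
    ops = [("insert", rng.random()) for _ in range(n_warmup)]
    for _ in range(n_ops):
        ops.append(("insert", rng.random()))
        ops.append(("delete_min", None))
    return ops

def run_on_binary_heap(ops):
    """Replay a workload on a binary heap; return final heap and
    the sequence of deleted minima (for sanity-checking a run)."""
    h, popped = [], []
    for op, key in ops:
        if op == "insert":
            heapq.heappush(h, key)
        else:
            popped.append(heapq.heappop(h))
    return h, popped
```

Replaying the same operation sequence against each heap variant, as the study does, is what makes cross-structure timings comparable.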
Hollow Heaps
Abstract
We introduce the hollow heap, a very simple data structure with the same amortized efficiency as the classical Fibonacci heap. All heap operations except delete and delete-min take O(1) time, worst case as well as amortized; delete and delete-min take O(log n) amortized time. Hollow heaps are by far the simplest structure to achieve this. Hollow heaps combine two novel ideas: the use of lazy deletion and reinsertion to do decrease-key operations, and the use of a dag (directed acyclic graph) instead of a tree or set of trees to represent a heap. Lazy deletion produces hollow nodes (nodes without items), giving the data structure its name.
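The lazy-deletion idea behind hollow heaps' decrease-key can be illustrated on an ordinary binary heap using the well-known entry-invalidation pattern: instead of moving a node, mark its old entry stale ("hollow") and reinsert the item with the smaller key. The LazyHeap class below is a hypothetical sketch of that idea only, not the authors' dag-based structure or its bounds.

```python
import heapq

class LazyHeap:
    """Decrease-key via lazy deletion + reinsertion on a binary heap.
    Stale entries play the role of hollow nodes: they stay in the
    structure and are discarded only when they surface at the top."""

    def __init__(self):
        self._h = []       # entries: [key, item, live_flag]
        self._entry = {}   # item -> its current live entry

    def insert(self, item, key):
        entry = [key, item, True]
        self._entry[item] = entry
        heapq.heappush(self._h, entry)

    def decrease_key(self, item, new_key):
        self._entry[item][2] = False   # old entry becomes "hollow"
        self.insert(item, new_key)     # reinsert under the smaller key

    def delete_min(self):
        while True:                    # skip hollow entries lazily
            key, item, live = heapq.heappop(self._h)
            if live:
                del self._entry[item]
                return item, key
```

The popping loop amortizes the cleanup of hollow entries against the decrease-key operations that created them, which is the same accounting flavor the abstract describes.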
A note on meldable heaps relying on data-structural bootstrapping
Abstract
Abstract. We introduce a meldable heap which guarantees the worst-case cost of O(1) for find-min, insert, and meld with at most 0, 3, and 3 element comparisons for the respective operations; and the worst-case cost of O(lg n) with at most 3 lg n + O(1) element comparisons for delete. Our data structure is asymptotically optimal and nearly constant-factor optimal with respect to the comparison complexity of all the meldable-heap operations. Furthermore, the data structure is also simple and elegant.