Results 1–8 of 8
Practical In-Place Mergesort
, 1996
Abstract

Cited by 10 (3 self)
Two in-place variants of the classical mergesort algorithm are analysed in detail. The first, straightforward variant performs at most N log₂ N + O(N) comparisons and 3N log₂ N + O(N) moves to sort N elements. The second, more advanced variant requires at most N log₂ N + O(N) comparisons and εN log₂ N moves, for any fixed ε > 0 and any N ≥ N(ε). In theory, the second one is superior to advanced versions of heapsort. In practice, due to the overhead of index manipulation, our fastest in-place mergesort still runs about 50 per cent slower than bottom-up heapsort. However, our implementations are practical compared to mergesort algorithms based on in-place merging. Key words: sorting, mergesort, in-place algorithms. CR Classification: F.2.2
An In-Place Sorting with O(n log n) Comparisons and O(n) Moves
 In Proc. 44th Annual IEEE Symposium on Foundations of Computer Science
, 2003
Abstract

Cited by 9 (0 self)
Abstract. We present the first in-place algorithm for sorting an array of size n that performs, in the worst case, at most O(n log n) element comparisons and O(n) element transports. This solves a longstanding open problem, stated explicitly, e.g., in [J.I. Munro and V. Raman, Sorting with minimum data movement, J. Algorithms, 13, 374–93, 1992], of whether there exists a sorting algorithm that matches the asymptotic lower bounds on all computational resources simultaneously.
Performance Engineering Case Study: Heap Construction
 WAE, LNCS
, 1999
Abstract

Cited by 7 (3 self)
In this paper we study, both analytically and experimentally, the performance of programs that construct a binary heap [Williams 1964] in a hierarchical memory system. In particular, we consider the largest memory level that is too small to fit the whole heap; we call that particular level simply the cache. It should, however, be emphasized that our analysis is valid for the memory levels below this cache as well, provided that all our assumptions are fulfilled. We let B denote the size of the blocks transferred between the cache and the memory level above it, and M the capacity of the cache, both measured in elements being manipulated.
The Ultimate Heapsort
 In Proceedings of the Computing: the 4th Australasian Theory Symposium, Australian Computer Science Communications
, 1998
Abstract

Cited by 5 (0 self)
A variant of Heapsort, named Ultimate Heapsort, is presented that sorts n elements in-place in Θ(n log₂(n+1)) worst-case time by performing at most n log₂ n + Θ(n) key comparisons and n log₂ n + Θ(n) element moves. The secret behind Ultimate Heapsort is that it occasionally transforms the heap it operates with into a two-layer heap, which keeps small elements at the leaves. Basically, Ultimate Heapsort is like Bottom-Up Heapsort but, due to the two-layer heap property, an element taken from a leaf has to be moved towards the root only O(1) levels on average. Let a[1..n] be an array of n elements, each consisting of a key and some information associated with this key. This array is a (maximum) heap if, for all i ∈ {2, ..., n}, the key of element a[⌊i/2⌋] is larger than or equal to that of element a[i]. That is, a heap is a pointer-free representation of a left-complete binary tree, where the elements stored are partially ordered according to their keys. Ele...
An extended truth about heaps
Abstract
Abstract. We describe a number of alternative implementations of the heap functions, which are part of the C++ standard library, and provide a thorough experimental evaluation of their performance. In our benchmarking framework the heap functions are implemented using the same set of utility functions, the utility functions use the same set of policy functions, and for each implementation alternative only the utility functions need be modified. This way the programs become homogeneous and the underlying methods can be compared fairly. Our benchmarks show that the conflicting results in earlier experimental studies are mainly due to test arrangements. No heapifying approach is universally the best for all kinds of inputs and ordering functions, but bottom-up heapifying performs well for most kinds of inputs and ordering functions. We examine several approaches that improve the worst-case performance and make the heap functions even more trustworthy.
unknown title
, 2006
"... Theoretical and practical efficiency of priority queues ..."
Additional Key Words and Phrases: Sorting in-place
Abstract
Abstract. We present the first in-place algorithm for sorting an array of size n that performs, in the worst case, at most O(n log n) element comparisons and O(n) element transports. This solves a longstanding open problem, stated explicitly, for example, in Munro and Raman [1992], of whether there exists a sorting algorithm that matches the asymptotic lower bounds on all computational resources simultaneously.
Multiway Blockwise In-Place Merging
Abstract
 Add to MetaCart
Abstract. We present an algorithm for asymptotically efficient multiway blockwise in-place merging. Given an array A containing sorted subsequences A1, ..., Ak of respective lengths n1, ..., nk, where n1 + ··· + nk = n, we assume that extra k·s elements (so-called buffer elements) are positioned at the very end of array A, and that the lengths n1, ..., nk are positive integer multiples of some parameter s (i.e., multiples of a given block length s). The number of input sequences k is a fixed constant parameter, not dependent on the lengths of the input sequences. Our algorithm then merges the subsequences A1, ..., Ak into a single sorted sequence, performing Θ(n·log k) + O((n/s)²) + O(s·log s) element comparisons and 3·n + O(s·log s) element moves. For s = ⌈n^(2/3)/(log n)^(1/3)⌉, this gives an algorithm performing Θ(n·log k) + O((n·log n)^(2/3)) comparisons and 3·n + O((n·log n)^(2/3)) moves. That is, our algorithm runs in linear time, with an asymptotically optimal number of comparisons and with the number of moves independent of the number of input sequences. Moreover, our algorithm is "almost in-place": it requires only k extra blocks of size s = o(n).