Results 1 – 7 of 7
Implementing HEAPSORT with n log n − 0.9n and QUICKSORT with n log n + 0.2n Comparisons
 ACM Journal of Experimental Algorithmics
, 2002
"... With refinements to the WEAKHEAPSORT... ..."
Performance study of improved HeapSort algorithm and other sorting algorithms on different platforms
 Int. J. Comput. Sci. Network Secur.
, 2008
Abstract

Cited by 2 (0 self)
Today there are several efficient algorithms that cope with the popular task of sorting. This paper, titled Comparative Performance Study of Improved Heap Sort Algorithm and other Sorting Algorithms, presents a comparison between classical sorting algorithms and the improved heap sort algorithm. To obtain experimental data to sustain these comparisons, three representative algorithms were chosen (classical heap sort, quick sort, and merge sort). The improved heap sort algorithm was compared with experimental data for the classical algorithms on two different platforms, leading to the final conclusions.
Pushing the Limits in Sequential Sorting
 Proceedings of the 4th International Workshop on Algorithm Engineering (WAE 2000)
, 2000
Abstract

Cited by 2 (0 self)
With refinements to the WEAKHEAPSORT algorithm we establish the general and practically relevant sequential sorting algorithm RELAXEDWEAKHEAPSORT, executing exactly n⌈log n⌉ − 2^⌈log n⌉ + 1 ≤ n log n − 0.9n comparisons on any given input. The number of transpositions is bounded by n plus the number of comparisons. Experiments show that RELAXEDWEAKHEAPSORT only requires O(n) extra bits. Even if this space is not available, with QUICKWEAKHEAPSORT we propose an efficient QUICKSORT variant with n log n + 0.2n + o(n) comparisons on average. Furthermore, we present data showing that WEAKHEAPSORT, RELAXEDWEAKHEAPSORT and QUICKWEAKHEAPSORT beat other performant QUICKSORT and HEAPSORT variants even for moderate values of n.
An extended truth about heaps
Abstract
Abstract. We describe a number of alternative implementations for the heap functions, which are part of the C++ standard library, and provide a thorough experimental evaluation of their performance. In our benchmarking framework the heap functions are implemented using the same set of utility functions, the utility functions using the same set of policy functions, and for each implementation alternative only the utility functions need be modified. This way the programs become homogeneous and the underlying methods can be compared fairly. Our benchmarks show that the conflicting results in earlier experimental studies are mainly due to test arrangements. No heapifying approach is universally the best for all kinds of inputs and ordering functions, but bottom-up heapifying performs well for most kinds of inputs and ordering functions. We examine several approaches that improve the worst-case performance and make the heap functions even more trustworthy.
Cache-Oblivious Searching and Sorting, Master's Thesis
, 2003
Abstract
Algorithms that use multi-layered memory hierarchies efficiently have traditionally relied on detailed knowledge of the characteristics of memory systems. The cache-oblivious approach changed this in 1999 by making it possible to design memory-efficient algorithms for hierarchical memory systems without such detailed knowledge. As a consequence, one single implementation of a cache-oblivious algorithm is efficient on any memory hierarchy. The purpose of the thesis is to investigate the behavior of cache-oblivious searching and sorting algorithms through constant-factor analysis and benchmarking. Cache-oblivious algorithms are analyzed in the ideal-cache model, which is an abstraction of real memory systems. We investigate the assumptions of the model in order to determine the accuracy of cache-complexity bounds derived by use of the model. We derive the constant factors of the cache complexities of cache-oblivious, cache-aware, and traditional searching and sorting algorithms in the ideal-cache model. The constant factors of the work complexities of the algorithms are derived in the pure-C cost model. The analyses are verified through benchmarking of implementations of all algorithms. For the searching algorithms, our constant-factor analysis predicts the benchmark results quite precisely, considering both memory performance and work complexity. For the more complex sorting algorithms our results show the same pattern, though the similarities between predicted and measured performance are not as significant. Furthermore, we develop a new algorithm that lays out a cache-oblivious static search tree in memory in linear time, which is an improvement of the algorithms known so far. We conclude that by combining the ideal-cache model and the pure-C model, the relative performance of programs can be predicted quite precisely, provided that the analysis is carefully done.
Nordic Journal of Computing 10 (2003), 238–262. NAVIGATION PILES WITH APPLICATIONS TO SORTING, PRIORITY QUEUES, AND PRIORITY DEQUES
Abstract
Abstract. A data structure, named a navigation pile, is described and exploited in the implementation of a sorting algorithm, a priority queue, and a priority deque. When carrying out these tasks, a linear number of bits is used in addition to the elements manipulated, and extra space for a sublinear number of elements is allocated if the grow and shrink operations are to be supported. Our viewpoint is to allow little extra space, make a low number of element moves, and still keep the efficiency in the number of element comparisons and machine instructions. In spite of the low memory consumption, the worst-case bounds for the number of element comparisons, element moves, and machine instructions are close to the absolute minimum.
Experimental evaluation of local heaps
Abstract
Abstract. In this paper we present a cache-aware realization of a priority queue, named a local heap, which is a slight modification of a standard binary heap. The new data structure is cache efficient, has a small computational overhead, and achieves a good worst-case performance with respect to the number of element comparisons and the number of element moves. We show both theoretically and experimentally that the data structure is competitive with a standard binary heap, provided that the number of elements stored is not small.