Results 1–10 of 17
Derivation of Randomized Sorting and Selection Algorithms
, 1993
"... In this paper we systematically derive randomized algorithms (both sequential and parallel) for sorting and selection from basic principles and fundamental techniques like random sampling. We prove several sampling lemmas which will find independent applications. The new algorithms derived here are ..."
Abstract

Cited by 28 (24 self)
In this paper we systematically derive randomized algorithms (both sequential and parallel) for sorting and selection from basic principles and fundamental techniques such as random sampling. We prove several sampling lemmas that will find independent applications. The new algorithms derived here are the most efficient known. Among other results, we obtain an efficient algorithm for sequential sorting. The problem of sorting has attracted so much attention because of its vital importance. Sorting with as few comparisons as possible while keeping the storage minimal is a long-standing open problem, referred to as 'minimum storage sorting' [10] in the literature. The previously best known minimum storage sorting algorithm is due to Frazer and McKellar [10]; the expected number of comparisons made by this algorithm is n log n + O(n log log n). The algorithm we derive in this paper makes only an expected n log n + O(n ω(n)) comparisons, for any function ω(n) that tends to infinity. A variant of this algorithm makes no more than n log n + O(n log log n) comparisons on any input of size n with overwhelming probability. We also prove high-probability bounds for several randomized algorithms for which only expected bounds had been proven so far.
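The random-sampling technique this abstract refers to, drawing a small random sample whose sorted elements partition the input into nearly equal buckets, can be illustrated with a minimal samplesort sketch. This is my own illustration, not the paper's algorithm; the function and parameter names are invented for the example:

```python
import bisect
import random

def samplesort(a, sample_size=None):
    """Sort a list via random sampling: a sorted random sample acts
    as a set of splitters that divide the input into small buckets,
    each of which is then sorted independently."""
    n = len(a)
    if n <= 32:
        return sorted(a)
    if sample_size is None:
        sample_size = max(1, int(n ** 0.5))
    # Draw and sort a random sample to serve as splitters.
    splitters = sorted(random.sample(a, sample_size))
    buckets = [[] for _ in range(sample_size + 1)]
    for x in a:
        # Place x into the bucket delimited by the splitters.
        buckets[bisect.bisect_left(splitters, x)].append(x)
    out = []
    for b in buckets:
        out.extend(sorted(b))  # each bucket is small with high probability
    return out
```

With high probability each bucket holds roughly n/sample_size elements, which is what keeps the total comparison count close to the information-theoretic minimum in sampling-based sorters.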
Achieving Range-Free Localization Beyond Connectivity
"... Wireless sensor networks have been proposed for many locationdependent applications. In such applications, the requirement of low system cost prohibitsmany rangebased methods for sensor node localization; on the other hand, rangefree localization depending only on connectivity may underutilize th ..."
Abstract

Cited by 18 (4 self)
Wireless sensor networks have been proposed for many location-dependent applications. In such applications, the requirement of low system cost prohibits many range-based methods for sensor node localization; on the other hand, range-free localization depending only on connectivity may underutilize the proximity information embedded in neighborhood sensing. In response to the above limitations, this paper presents a range-free approach to capturing a relative distance between 1-hop neighboring nodes from their neighborhood orderings, which serve as unique high-dimensional location signatures for nodes in the network. With little overhead, the proposed design can be conveniently applied as a transparent supporting layer for many state-of-the-art connectivity-based localization solutions to achieve better positioning accuracy. We implemented our design with three well-known localization algorithms and tested it in two types of outdoor testbed experiments: an 850-foot-long linear network with 54 MICAz motes, and a regular 2D network covering an area of 10,000 square feet with 49 motes. Results show that our design helps eliminate estimation ambiguity with sub-hop resolution, and reduces localization errors by as much as 35%. In addition, extensive simulations reveal an interesting feature of robustness for our design under unevenly distributed radio propagation path loss, and confirm its effectiveness for large-scale networks.
Practical In-Place Mergesort
, 1996
"... Two inplace variants of the classical mergesort algorithm are analysed in detail. The first, straightforward variant performs at most N log 2 N + O(N ) comparisons and 3N log 2 N + O(N ) moves to sort N elements. The second, more advanced variant requires at most N log 2 N + O(N ) comparisons and & ..."
Abstract

Cited by 10 (3 self)
Two in-place variants of the classical mergesort algorithm are analysed in detail. The first, straightforward variant performs at most N log₂ N + O(N) comparisons and 3N log₂ N + O(N) moves to sort N elements. The second, more advanced variant requires at most N log₂ N + O(N) comparisons and εN log₂ N moves, for any fixed ε > 0 and any N > N(ε). In theory, the second one is superior to advanced versions of heapsort. In practice, due to the overhead of the index manipulation, our fastest in-place mergesort is still about 50 per cent slower than bottom-up heapsort. However, our implementations are practical compared to mergesort algorithms based on in-place merging. Key words: sorting, mergesort, in-place algorithms. CR Classification: F.2.2
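For reference against these bounds, a plain bottom-up mergesort with an auxiliary O(N) buffer performs at most N⌈log₂ N⌉ comparisons; the in-place variants described above trade the buffer away while staying near this comparison count. The following is a baseline sketch of my own, not one of the paper's variants:

```python
def mergesort_count(a):
    """Bottom-up mergesort with an auxiliary buffer. Returns the
    sorted list and the number of key comparisons performed, as a
    baseline for the ~N log2 N + O(N) comparison bound."""
    a = list(a)
    n = len(a)
    comparisons = 0
    buf = a[:]  # O(n) auxiliary buffer (this is what in-place variants avoid)
    width = 1
    while width < n:
        # Merge adjacent sorted runs of length `width` into `buf`.
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            i, j, k = lo, mid, lo
            while i < mid and j < hi:
                comparisons += 1
                if a[j] < a[i]:
                    buf[k] = a[j]; j += 1
                else:
                    buf[k] = a[i]; i += 1
                k += 1
            # Copy whichever run has elements left.
            buf[k:hi] = a[i:mid] if i < mid else a[j:hi]
        a, buf = buf, a  # output of this pass becomes input of the next
        width *= 2
    return a, comparisons
```

Each of the ⌈log₂ N⌉ passes costs at most N − 1 comparisons, which gives the stated bound.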
Sorting in-place with a worst-case complexity of n log n − 1.3n + O(log n) comparisons and εn log n + O(1) transports
 LNCS
, 1992
"... First we present a new variant of Mergesort, which needs only 1.25n space, because it uses space again, which becomes available within the current stage. It does not need more comparisons than classical Mergesort. The main result is an easy to implement method of iterating the procedure inplace s ..."
Abstract

Cited by 8 (0 self)
First we present a new variant of Mergesort that needs only 1.25n space, because it reuses space that becomes available within the current stage. It does not need more comparisons than classical Mergesort. The main result is an easy-to-implement method of iterating the procedure in-place, starting by sorting 4/5 of the elements. Hereby we can keep the additional transport costs linear and only very few comparisons are lost, so that n log n − 0.8n comparisons suffice. We show that we can improve the number of comparisons if, before starting the algorithm, we sort blocks of constant length with Merge-Insertion. Another improvement is to start the iteration with a better version that needs only (1+ε)n space and again additional O(n) transports. The result is that we can improve this theoretically up to n log n − 1.3289n comparisons in the worst case, which is close to the theoretical lower bound of n log n − 1.443n. The total number of transports in all these versions can be reduced to εn log n + O(1) for any ε > 0.
The Ultimate Heapsort
 In Proceedings of the Computing: the 4th Australasian Theory Symposium, Australian Computer Science Communications
, 1998
"... . A variant of Heapsortnamed Ultimate Heapsortis presented that sorts n elements inplace in \Theta(n log 2 (n+ 1)) worstcase time by performing at most n log 2 n + \Theta(n) key comparisons and n log 2 n + \Theta(n) element moves. The secret behind Ultimate Heapsort is that it occasionally ..."
Abstract

Cited by 5 (0 self)
A variant of Heapsort, named Ultimate Heapsort, is presented that sorts n elements in-place in Θ(n log₂(n+1)) worst-case time by performing at most n log₂ n + Θ(n) key comparisons and n log₂ n + Θ(n) element moves. The secret behind Ultimate Heapsort is that it occasionally transforms the heap it operates on into a two-layer heap, which keeps small elements at the leaves. Basically, Ultimate Heapsort is like Bottom-Up Heapsort but, due to the two-layer heap property, an element taken from a leaf has to be moved towards the root only O(1) levels on average. Let a[1..n] be an array of n elements, each consisting of a key and some information associated with this key. This array is a (maximum) heap if, for all i ∈ {2, …, n}, the key of element a[⌊i/2⌋] is larger than or equal to that of element a[i]. That is, a heap is a pointer-free representation of a left-complete binary tree, where the elements stored are partially ordered according to their keys. Ele...
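The heap condition quoted here translates directly into code. The sketch below is my own (simulating the abstract's 1-based indexing over a 0-based Python list) and shows only the classical max-heap machinery, not the paper's two-layer construction:

```python
def is_max_heap(a):
    """Check the quoted condition: for all 1-based i in {2,...,n},
    the key at position floor(i/2) is >= the key at position i."""
    n = len(a)
    for i in range(2, n + 1):
        if a[i // 2 - 1] < a[i - 1]:
            return False
    return True

def sift_down(a, i, n):
    """Restore the heap condition below 1-based position i by moving
    a[i] towards the leaves (classical top-down sift-down)."""
    while 2 * i <= n:
        child = 2 * i                      # 1-based left child
        if child < n and a[child - 1] < a[child]:
            child += 1                     # right child is larger
        if a[i - 1] >= a[child - 1]:
            break
        a[i - 1], a[child - 1] = a[child - 1], a[i - 1]
        i = child

def build_max_heap(a):
    """Bottom-up construction: sift down every internal node."""
    n = len(a)
    for i in range(n // 2, 0, -1):
        sift_down(a, i, n)
```

The pointer-free layout mentioned in the abstract is exactly this: parent/child positions are computed by index arithmetic (⌊i/2⌋, 2i, 2i+1) instead of stored pointers.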
Performance study of improved HeapSort algorithm and other sorting algorithms on different platforms
 Int. J. Comput. Sci. Network Secur
, 2008
"... Today there are several efficient algorithms that cope with the popular task of sorting. This paper titled Comparative Performance Study of Improved Heap Sort Algorithm and other sorting Algorithms presents a comparison between classical sorting algorithms and improved heap sort algorithm. To have s ..."
Abstract

Cited by 2 (0 self)
Today there are several efficient algorithms that cope with the popular task of sorting. This paper presents a comparison between classical sorting algorithms and an improved heapsort algorithm. To obtain experimental data to sustain these comparisons, three representative algorithms were chosen (classical heapsort, quicksort, and mergesort). The improved heapsort algorithm was compared with experimental data for the classical algorithms on two different platforms, leading to the final conclusions.
RSD: A Metric for Achieving Range-Free Localization beyond Connectivity
"... Abstract—Wireless sensor networks have been considered as a promising tool for many locationdependent applications. In such deployments, the requirement of low system cost prohibits many rangebased methods for sensor node localization; on the other hand, rangefree approaches depending only on rad ..."
Abstract

Cited by 2 (0 self)
Wireless sensor networks have been considered a promising tool for many location-dependent applications. In such deployments, the requirement of low system cost prohibits many range-based methods for sensor node localization; on the other hand, range-free approaches depending only on radio connectivity may underutilize the proximity information embedded in neighborhood sensing. In response to these limitations, this paper introduces a proximity metric called RSD to capture the distance relationships among 1-hop neighboring nodes in a range-free manner. With little overhead, RSD can be conveniently applied as a transparent supporting layer for state-of-the-art connectivity-based localization solutions to achieve better accuracy. We implemented RSD with three well-known algorithms and evaluated them using two outdoor test beds: an 850-foot-long linear network with 54 MICAz motes, and a regular 2D network covering an area of 10,000 square feet with 49 motes. Results show that our design helps eliminate estimation ambiguity with a sub-hop resolution, and reduces localization errors by as much as 35 percent. In addition, simulations confirm its effectiveness for large-scale networks and reveal an interesting feature of robustness under unevenly distributed radio path loss.
A New Data Structure for Heapsort with Improved Number of Comparisons (Extended Abstract)
"... Abstract. In this paper we present a new data structure for implementing heapsort algorithm for pairs of which can be simultaneously stored and processed in a single register. Since time complexity of Carlsson type variants of heapsort has already achieved a leading coefficient of 1, concretely nlg ..."
Abstract

Cited by 1 (0 self)
In this paper we present a new data structure for implementing the heapsort algorithm for pairs of elements that can be simultaneously stored and processed in a single register. Since the time complexity of Carlsson-type variants of heapsort has already achieved a leading coefficient of 1, concretely n lg n + n lg lg n, and lower bound theory asserts that no comparison-based in-place sorting algorithm can sort n data in fewer than ⌈lg(n!)⌉ ≈ n lg n − 1.44n comparisons on average, any improvement in the number of comparisons can only be achieved in the lower-order terms. Our new data structure yields an improvement in the linear term of the time complexity function, irrespective of the variant of the heapsort algorithm used. This improvement is important in the context that some variants of the heapsort algorithm, for example weak heapsort (although not in-place), are near optimal, being away from the theoretical bound on the number of comparisons by only 1.54n.
A Publication of The Science and Information Organization From the Desk of Managing
"... With monthly feature peerreviewed articles and technical contributions, the Journal's content is dynamic, innovative, thoughtprovoking and directly beneficial to the readers in their work. The number of submissions have increased dramatically over the last issues. Our ability to accommodate th ..."
Abstract
With monthly featured peer-reviewed articles and technical contributions, the Journal's content is dynamic, innovative, thought-provoking and directly beneficial to the readers in their work. The number of submissions has increased dramatically over the last issues. Our ability to accommodate this growth is due in large part to the terrific work of our Editorial Board. Some of the papers have an introductory character, some of them provide highly desired extensions of a particular method, and some of them even introduce completely new approaches to computer science research in a very efficient manner. This diversity was strongly desired and should contribute to evoking a picture of this field at large. As a consequence, only 29% of the received articles have finally been accepted for publication. With respect to all the contributions, we are happy to have assembled researchers whose names are linked to the particular manuscripts they are discussing. Therefore, this issue may be used by the reader not just to get an introduction to the methods but also to the people behind them, who have been pivotal in the promotion of the respective research. With such future issues in mind, we hope to establish a regular outlet for contributions and new findings in the field of computer science and applications. Therefore, IJACSA in general could serve as a reliable resource for everybody loosely or tightly attached to this field of science. And if only a single young researcher is inspired by this issue to contribute in the future to solving some of the problems sketched
An extended truth about heaps
"... Abstract. We describe a number of alternative implementations for the heap functions, which are part of the C++ standard library, and provide a through experimental evaluation of their performance. In our benchmarking framework the heap functions are implemented using the same set of utility functio ..."
Abstract
We describe a number of alternative implementations for the heap functions, which are part of the C++ standard library, and provide a thorough experimental evaluation of their performance. In our benchmarking framework the heap functions are implemented using the same set of utility functions, the utility functions using the same set of policy functions, and for each implementation alternative only the utility functions need be modified. This way the programs become homogeneous and the underlying methods can be compared fairly. Our benchmarks show that the conflicting results in earlier experimental studies are mainly due to test arrangements. No heapifying approach is universally the best for all kinds of inputs and ordering functions, but bottom-up heapifying performs well for most kinds of inputs and ordering functions. We examine several approaches that improve the worst-case performance and make the heap functions even more trustworthy.
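The bottom-up heapifying that the benchmarks favour first descends along the larger children all the way to a leaf, spending only one comparison per level, and only then climbs back up to find where the displaced element belongs. The sketch below is my own illustrative Python version with invented function names; the paper's actual implementations are C++ policy-based code:

```python
def leaf_search(a, i, n):
    """Follow the path of larger children from position i to a leaf
    in a 0-based max-heap of size n (one comparison per level with
    two children)."""
    j = i
    while 2 * j + 2 < n:                  # node j has two children
        j = 2 * j + 1 if a[2 * j + 1] >= a[2 * j + 2] else 2 * j + 2
    if 2 * j + 1 < n:                     # node j has only a left child
        j = 2 * j + 1
    return j

def bottom_up_sift_down(a, i, n):
    """Re-establish the heap property at position i: descend to a
    leaf first, then climb back to the place where a[i] belongs."""
    value = a[i]
    j = leaf_search(a, i, n)
    while a[j] < value:                   # climb until a value >= a[i]
        j = (j - 1) // 2                  # (terminates at i, where a[i] == value)
    # Rotate the values on the path between i and j up by one level.
    tmp = a[j]
    a[j] = value
    while j > i:
        j = (j - 1) // 2
        a[j], tmp = tmp, a[j]

def bottom_up_heapsort(a):
    """In-place heapsort driven by the bottom-up sift-down."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # build the max-heap
        bottom_up_sift_down(a, i, n)
    for end in range(n - 1, 0, -1):       # repeatedly extract the maximum
        a[0], a[end] = a[end], a[0]
        bottom_up_sift_down(a, 0, end)
```

The saving over top-down sift-down is that the element sunk from the root is usually small, so its final position lies near the leaves; searching for it from the leaf upwards costs only a few extra comparisons instead of two comparisons per level all the way down.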