Results 1 – 9 of 9
Parallel Priority Queues
, 1991
Abstract

Cited by 15 (1 self)
This paper introduces the Parallel Priority Queue (PPQ) abstract data type. A PPQ stores a set of integer-valued items and provides operations such as insertion of n new items or deletion of the n smallest ones. Algorithms for realizing PPQ operations on an n-processor CREW PRAM are based on two new data structures, the n-Bandwidth-Heap (nH) and the n-Bandwidth Leftist-Heap (nL), which are obtained as extensions of the well-known sequential binary heap and leftist heap, respectively. Using these structures, it is shown that insertion of n new items into a PPQ of m elements can be performed in parallel time O(h + log n), where h = log(m/n), while deletion of the n smallest items can be performed in time O(h + log log n).
Keywords: Data structures, parallel algorithms, analysis of algorithms, heaps, PRAM model.
This work has been partly supported by the Ministero della Pubblica Istruzione of Italy and by the C.N.R. project "Sistemi Informatici e Calcolo Parallelo".
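As a point of reference for the PPQ interface only, here is a minimal sequential sketch in Python. The class name `BatchPriorityQueue` is ours, and `heapq` gives none of the parallel bounds stated above; this shows only what the operations do, not how the n-Bandwidth-Heap achieves them.

```python
import heapq

# Hypothetical sequential sketch of the PPQ interface; the paper's
# n-Bandwidth-Heap performs these operations in parallel, which this
# sketch does not attempt to reproduce.
class BatchPriorityQueue:
    def __init__(self, items=()):
        self.heap = list(items)
        heapq.heapify(self.heap)

    def insert_batch(self, items):
        # Insert n new items; the paper does this in O(h + log n) parallel time.
        for x in items:
            heapq.heappush(self.heap, x)

    def delete_min_batch(self, n):
        # Remove and return the n smallest items; O(h + log log n) in the paper.
        return [heapq.heappop(self.heap) for _ in range(min(n, len(self.heap)))]

ppq = BatchPriorityQueue([5, 1, 9, 3])
ppq.insert_batch([2, 8])
print(ppq.delete_min_batch(3))  # → [1, 2, 3]
```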
Line-segment intersection made in-place
, 2007
Abstract

Cited by 8 (2 self)
We present a space-efficient algorithm for reporting all k intersections induced by a set of n line segments in the plane. Our algorithm is an in-place variant of Balaban’s algorithm and, in the worst case, runs in O(n log² n + k) time using O(1) extra words of memory in addition to the space used for the input to the algorithm.
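For comparison, the naive O(n²) baseline that such algorithms improve upon can be sketched in a few lines. This is not Balaban's algorithm; it is only the brute-force reference, using the standard orientation test (the collinear-overlap corner cases are omitted for brevity).

```python
from itertools import combinations

def orient(p, q, r):
    # Sign of the cross product (q-p) x (r-p): >0 left turn, <0 right, 0 collinear.
    return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])

def segments_intersect(s, t):
    # Proper intersection test via orientations; a full implementation
    # would also handle collinear and endpoint-touching cases.
    (a, b), (c, d) = s, t
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def report_intersections(segments):
    # O(n^2) baseline; Balaban's algorithm and its in-place variant
    # achieve O(n log^2 n + k) as stated in the abstract.
    return [(s, t) for s, t in combinations(segments, 2)
            if segments_intersect(s, t)]

segs = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((3, 0), (3, 2))]
print(len(report_intersections(segs)))  # → 1
```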
Optimal incremental sorting
 In Proc. 8th Workshop on Algorithm Engineering and Experiments (ALENEX)
, 2006
Abstract

Cited by 7 (5 self)
Let A be a set of size m. Obtaining the first k ≤ m elements of A in ascending order can be done in optimal O(m + k log k) time. We present an algorithm (online on k) which incrementally gives the next smallest element of the set, so that the first k elements are obtained in optimal time for any k. We also give a practical algorithm with the same complexity on average, which improves in practice on the existing online algorithm. As a direct application, we use our technique to implement Kruskal’s Minimum Spanning Tree algorithm, where our solution is competitive with the best current implementations. We finally show that our technique can be applied to several other problems, such as obtaining an interval of the sorted sequence and implementing heaps.
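The lazy-quicksort idea behind this line of work can be sketched as a generator: partition around a pivot, recurse only into the left side, and remember pending pivots on a stack. This is a sketch in the spirit of the paper's online algorithm (randomized, expected-time), not its exact worst-case-optimal version.

```python
import random

def _partition(A, lo, hi):
    # Lomuto partition of A[lo:hi] around a random pivot; returns its index.
    k = random.randrange(lo, hi)
    A[k], A[hi-1] = A[hi-1], A[k]
    pivot, store = A[hi-1], lo
    for j in range(lo, hi-1):
        if A[j] < pivot:
            A[j], A[store] = A[store], A[j]
            store += 1
    A[store], A[hi-1] = A[hi-1], A[store]
    return store

def incremental_sort(A):
    # Lazy quicksort: yields A's elements in ascending order, partitioning
    # only as far as needed for the next element, so the first k elements
    # cost O(m + k log k) expected time overall.
    stack = [len(A)]      # positions of pivots already in final place
    for i in range(len(A)):
        while stack[-1] > i:
            stack.append(_partition(A, i, stack[-1]))
        stack.pop()       # the pivot at position i is the next smallest
        yield A[i]

gen = incremental_sort([4, 1, 3, 2, 5])
print([next(gen) for _ in range(3)])  # → [1, 2, 3]
```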
Optimal cache-oblivious implicit dictionaries
 In Proceedings of the 30th International Colloquium on Automata, Languages and Programming
, 2003
Abstract

Cited by 7 (2 self)
We consider the issues of implicitness and cache-obliviousness in the classical dictionary problem for n distinct keys over an unbounded and ordered universe. One finding in this paper closes the long-standing open problem about the existence of an optimal implicit dictionary over an unbounded universe. Another finding is motivated by the antithetic features of the implicit and cache-oblivious models in data structures. We show how to blend their best qualities, achieving O(log n) time and O(log_B n) block transfers for searching and for amortized updating, while using just n memory cells like sorted arrays and heaps. As a result, we avoid wasting space and provide fast data access at any level of the memory hierarchy.
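The "just n memory cells" constraint is easiest to see in the simplest implicit dictionary, a plain sorted array: all structure is encoded by the permutation of the keys. The sketch below (class name ours) shows why this baseline falls short of the paper's bounds: search is O(log n), but each update shifts O(n) cells.

```python
import bisect

# A sorted array as an implicit dictionary: exactly n cells, no pointers.
# Searching is O(log n), but updates cost O(n) element moves -- the cost
# the paper's structure avoids while keeping n cells and O(log_B n)
# block transfers.
class SortedArrayDict:
    def __init__(self):
        self.keys = []          # the n memory cells

    def search(self, key):
        i = bisect.bisect_left(self.keys, key)
        return i < len(self.keys) and self.keys[i] == key

    def insert(self, key):
        bisect.insort(self.keys, key)   # O(n) shifting

    def delete(self, key):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            del self.keys[i]            # O(n) shifting

d = SortedArrayDict()
for k in [7, 2, 9]:
    d.insert(k)
print(d.search(7), d.search(5))  # → True False
```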
Lower and Upper Bounds on Obtaining History Independence
, 2003
Abstract

Cited by 7 (0 self)
History independent data structures, presented by Micciancio, are data structures that possess a strong security property: even if an intruder manages to get a copy of the data structure, the memory layout of the structure yields no additional information on the data structure beyond its content. In particular, the history of operations applied on the structure is not visible in its memory layout. Naor and Teague proposed a stronger notion of history independence in which the intruder may break into the system several times without being noticed and still obtain no additional information from reading the memory layout of the data structure.
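The property can be illustrated with the folklore canonical-layout example (not one of the constructions analyzed in the paper): keep the set in its unique sorted layout, so the memory image depends only on the current contents, never on the order of past operations.

```python
# Minimal illustration of history independence via a canonical layout:
# two instances holding the same set have identical memory images,
# regardless of the operation histories that produced them.
class HistoryIndependentSet:
    def __init__(self):
        self.cells = []

    def insert(self, x):
        if x not in self.cells:
            self.cells.append(x)
            self.cells.sort()       # restore the unique sorted layout

    def delete(self, x):
        if x in self.cells:
            self.cells.remove(x)    # sorted order is preserved

a, b = HistoryIndependentSet(), HistoryIndependentSet()
for x in [3, 1, 2]:
    a.insert(x)
b.insert(2); b.insert(1); b.insert(5); b.delete(5); b.insert(3)
print(a.cells == b.cells)  # → True
```

Note that this toy version is slow (O(n) per update); the interesting question the paper addresses is what history independence costs for efficient structures.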
In-place algorithms for computing (layers of) maxima
 In: Proceedings of the 10th Scandinavian Workshop on Algorithm Theory (SWAT ’06)
, 2006
Abstract

Cited by 6 (1 self)
We describe space-efficient algorithms for solving problems related to finding maxima among points in two and three dimensions. Our algorithms run in optimal O(n log n) time and occupy only constant extra space in addition to the space needed for representing the input.
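The two-dimensional version of the problem has a classic O(n log n) sweep solution, sketched below. This sketch uses O(n) extra space for the output list, so it only matches the paper's time bound, not its constant-extra-space guarantee.

```python
def maxima_2d(points):
    # A point is maximal if no other point dominates it in both coordinates.
    # Sweep right-to-left by x, keeping the largest y seen so far:
    # O(n log n) time from the sort.
    result, best_y = [], float('-inf')
    for x, y in sorted(points, reverse=True):
        if y > best_y:
            result.append((x, y))
            best_y = y
    return result[::-1]   # restore ascending x order

pts = [(1, 4), (2, 2), (3, 3), (4, 1)]
print(maxima_2d(pts))  # → [(1, 4), (3, 3), (4, 1)]
```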
Optimal space-time dictionaries over an unbounded universe with flat implicit trees
, 2003
Abstract

Cited by 2 (1 self)
In the classical dictionary problem, a set of n distinct keys over an unbounded and ordered universe is maintained under insertions and deletions of individual keys while supporting search operations. An implicit dictionary has the additional constraint of occupying only the space required to store the n keys, that is, exactly n contiguous words in total. All that is known is the starting position of the memory segment hosting the keys; the rest of the information is implicitly encoded by a suitable permutation of the keys. This paper describes the flat implicit tree, which is the first implicit dictionary requiring O(log n) time per search and update operation.
Comparative Performance Study of Improved Heap Sort Algorithm on Different Hardware
Abstract

Cited by 1 (0 self)
Problem statement: Several efficient algorithms have been developed to cope with the popular task of sorting. Improved heap sort is a new variant of heap sort. The basic idea of the new algorithm is similar to the classical heap sort algorithm, but it builds the heap in another way. The improved heap sort algorithm requires n log n − 0.788928n comparisons in the worst case and n log n − n comparisons in the average case. This algorithm uses only one comparison at each node. Hardware has an impact on the performance of an algorithm. Since improved heap sort is a new algorithm, its performance on different hardware needs to be measured. Approach: In this comparative study the mathematical results of improved heap sort were verified experimentally on different hardware. To have experimental data to sustain this comparison, five representative hardware platforms were chosen, the code was executed, and execution times were recorded to verify and analyze the performance. Results: The impact of hardware on the performance of the improved heap sort algorithm was shown. The algorithm's performance also varied across datasets. Conclusion: The improved heap sort algorithm performed better than traditional heap sort on different hardware, and on certain hardware it performed best.
Key words: Complexity, performance of algorithms, sorting
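For reference, classic heapsort is sketched below; its sift-down step makes two comparisons per level (find the larger child, then compare it with the sifted item), which is the count the improved variant reduces to one per node. This is the classical algorithm, not the improved one from the paper.

```python
def heapsort(A):
    # Classic in-place heapsort on a list, for comparison-count reference.
    def sift_down(A, i, n):
        while 2*i + 1 < n:
            c = 2*i + 1
            if c + 1 < n and A[c+1] > A[c]:   # comparison 1: pick larger child
                c += 1
            if A[i] >= A[c]:                  # comparison 2: parent vs child
                break
            A[i], A[c] = A[c], A[i]
            i = c
    n = len(A)
    for i in range(n//2 - 1, -1, -1):         # build the max-heap bottom-up
        sift_down(A, i, n)
    for end in range(n - 1, 0, -1):           # repeatedly extract the maximum
        A[0], A[end] = A[end], A[0]
        sift_down(A, 0, end)
    return A

print(heapsort([5, 2, 8, 1, 9]))  # → [1, 2, 5, 8, 9]
```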
On sorting, heaps, and minimum spanning trees
 Algorithmica
Abstract

Cited by 1 (1 self)
Let A be a set of size m. Obtaining the first k ≤ m elements of A in ascending order can be done in optimal O(m + k log k) time. We present Incremental Quicksort (IQS), an algorithm (online on k) which incrementally gives the next smallest element of the set, so that the first k elements are obtained in optimal expected time for any k. Based on IQS, we present the Quickheap (QH), a simple and efficient priority queue for main and secondary memory. Quickheaps are comparable with classical binary heaps in simplicity, yet are more cache-friendly. This makes them an excellent alternative for a secondary memory implementation. We show that the expected amortized CPU cost per operation over a Quickheap of m elements is O(log m), and this translates into O((1/B) log(m/M)) I/O cost with main memory size M and block size B, in a cache-oblivious fashion. As a direct application, we use our techniques to implement classical Minimum Spanning Tree (MST) algorithms. We use IQS to implement Kruskal’s MST algorithm and QHs to implement Prim’s. Experimental results show that IQS, QHs, external QHs, and our Kruskal’s and Prim’s MST variants are competitive, and in many cases better in practice than current state-of-the-art alternative (and much more sophisticated) implementations.
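The Kruskal application is easy to see in outline: Kruskal's algorithm usually stops after examining only a prefix of the edges sorted by weight, so replacing the up-front sort with an incremental sort avoids sorting edges that are never examined. The sketch below uses a plain `sorted()` where the paper plugs in IQS.

```python
# Compact Kruskal's MST with union-find (path halving).
# Replacing sorted(edges) with an incremental sort, as the paper does
# with IQS, avoids sorting edges past the early-exit point.
def kruskal(n, edges):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    mst, total = [], 0
    for w, u, v in sorted(edges):           # IQS would yield these lazily
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
            if len(mst) == n - 1:
                break                       # early exit: the win for IQS
    return total, mst

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]
print(kruskal(4, edges)[0])  # → 6
```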