Results 1–10 of 23
Fast Priority Queues for Cached Memory
 ACM Journal of Experimental Algorithmics
, 1999
Abstract

Cited by 46 (7 self)
This paper advocates the adaptation of external memory algorithms to this purpose. This idea and the practical issues involved are exemplified by engineering a fast priority queue suited to external memory and cached memory that is based on k-way merging. It improves previous external memory algorithms by constant factors crucial for transferring it to cached memory. Running in the cache hierarchy of a workstation, the algorithm is at least two times faster than optimized implementations of binary heaps and 4-ary heaps for large inputs.
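As an aside, the k-way merging primitive underlying this priority queue can be sketched with a size-k heap of run cursors (a minimal Python illustration, not the paper's cache-tuned sequence heap):

```python
import heapq

def k_way_merge(runs):
    """Merge k sorted runs using a binary heap of size k.

    Each heap entry is (current value, run index, position in run),
    so ties fall back to the run index and the heap never compares
    the payload lists themselves.
    """
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)
        out.append(val)
        if j + 1 < len(runs[i]):
            heapq.heappush(heap, (runs[i][j + 1], i, j + 1))
    return out
```

Each run is scanned strictly sequentially, which is what makes this access pattern friendly to caches and external memory.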
Derivation of Randomized Sorting and Selection Algorithms, in Parallel Algorithm Derivation And Program Transformation, edited by
, 1993
Abstract

Cited by 22 (18 self)
In this paper we systematically derive randomized algorithms (both sequential and parallel) for sorting and selection from basic principles and fundamental techniques like random sampling. We prove several sampling lemmas which will find independent applications. The new algorithms derived here are the most efficient known. Among other results, we obtain an efficient algorithm for sequential sorting. The problem of sorting has attracted so much attention because of its vital importance. Sorting with as few comparisons as possible while keeping the storage size minimal is a long-standing open problem, referred to as ‘the minimum storage sorting’ [10] in the literature. The previously best known minimum storage sorting algorithm is due to Frazer and McKellar [10]. The expected number of comparisons made by this algorithm is n log n + O(n log log n). The algorithm we derive in this paper makes only an expected n log n + O(n ω(n)) comparisons, for any function ω(n) that tends to infinity. A variant of this algorithm makes no more than n log n + O(n log log n) comparisons on any input of size n with overwhelming probability. We also prove high-probability bounds for several randomized algorithms for which only expected bounds had been proven so far.
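The random-sampling idea the paper builds on can be illustrated by a simple randomized selection routine (a sketch using a single random pivot, not the paper's sample-based refinement):

```python
import random

def select(a, k):
    """Return the k-th smallest element of a (0-based), expected O(n) time.

    A random pivot splits the input; only the side containing the
    k-th element is recursed on.
    """
    if len(a) == 1:
        return a[0]
    pivot = random.choice(a)
    lo = [x for x in a if x < pivot]
    hi = [x for x in a if x > pivot]
    eq = len(a) - len(lo) - len(hi)  # elements equal to the pivot
    if k < len(lo):
        return select(lo, k)
    if k < len(lo) + eq:
        return pivot
    return select(hi, k - len(lo) - eq)
```

Sampling several candidates and taking their median as the pivot, as the paper's lemmas analyze, tightens the split at a small extra cost.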
Practical In-Place Mergesort
, 1996
Abstract

Cited by 10 (3 self)
Two in-place variants of the classical mergesort algorithm are analysed in detail. The first, straightforward variant performs at most N log₂ N + O(N) comparisons and 3N log₂ N + O(N) moves to sort N elements. The second, more advanced variant requires at most N log₂ N + O(N) comparisons and εN log₂ N moves, for any fixed ε > 0 and any N > N(ε). In theory, the second one is superior to advanced versions of heapsort. In practice, due to the overhead in index manipulation, our fastest in-place mergesort is still about 50 per cent slower than bottom-up heapsort. However, our implementations are practical compared to mergesort algorithms based on in-place merging. Key words: sorting, mergesort, in-place algorithms. CR Classification: F.2.2
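For reference, the classical bottom-up merging scheme these in-place variants refine looks like this (a sketch that uses O(N) scratch space per merge, which is exactly what the paper's variants avoid):

```python
def bottom_up_mergesort(a):
    """Sort list a by merging runs of width 1, 2, 4, ...

    Each element takes part in about log2 N merge passes, giving
    roughly N log2 N comparisons, but the scratch list `merged`
    makes this an out-of-place algorithm.
    """
    n = len(a)
    width = 1
    while width < n:
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            merged = []
            i, j = lo, mid
            while i < mid and j < hi:
                if a[i] <= a[j]:
                    merged.append(a[i]); i += 1
                else:
                    merged.append(a[j]); j += 1
            merged.extend(a[i:mid])
            merged.extend(a[j:hi])
            a[lo:hi] = merged
        width *= 2
    return a
```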
High-speed high-security signatures
Abstract

Cited by 10 (4 self)
Abstract. This paper shows that a $390 mass-market quad-core 2.4GHz Intel Westmere (Xeon E5620) CPU can create 109000 signatures per second and verify 71000 signatures per second on an elliptic curve at a 2^128 security level. Public keys are 32 bytes, and signatures are 64 bytes. These performance figures include strong defenses against software side-channel attacks: there is no data flow from secret keys to array indices, and there is no data flow from secret keys to branch conditions.
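The side-channel defense described here, no secret-dependent branches or array indices, can be illustrated with a branch-free conditional select (an illustration of the style only; Python integers carry no real constant-time guarantees):

```python
def ct_select(bit, x, y):
    """Return x if bit == 1 else y, without branching on bit.

    -bit is all-ones in two's complement when bit == 1 and all-zeros
    when bit == 0, so masking picks one operand with no `if`.
    """
    mask = -bit
    return (x & mask) | (y & ~mask)
```

Real implementations apply the same masking idea in constant-time machine code, so timing and branch predictors reveal nothing about the secret bit.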
An In-Place Sorting with O(n log n) Comparisons and O(n) Moves
 In Proc. 44th Annual IEEE Symposium on Foundations of Computer Science
, 2003
Abstract

Cited by 9 (0 self)
Abstract. We present the first in-place algorithm for sorting an array of size n that performs, in the worst case, at most O(n log n) element comparisons and O(n) element transports. This solves a long-standing open problem, stated explicitly, e.g., in [J.I. Munro and V. Raman, Sorting with minimum data movement, J. Algorithms, 13, 374–93, 1992], of whether there exists a sorting algorithm that matches the asymptotic lower bounds on all computational resources simultaneously.
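To see why the combination is hard: selection sort already achieves O(n) element transports, but at the cost of Θ(n²) comparisons (a textbook illustration, not the paper's algorithm):

```python
def selection_sort(a):
    """Sort a in place with at most n - 1 swaps, i.e. O(n) moves.

    Every pass scans the unsorted suffix for its minimum, so the
    comparison count is quadratic even though data movement is linear.
    """
    n = len(a)
    for i in range(n - 1):
        m = min(range(i, n), key=a.__getitem__)  # index of the minimum
        if m != i:
            a[i], a[m] = a[m], a[i]
    return a
```

The open problem was achieving the linear move bound without giving up the n log n comparison bound or the O(1) extra space.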
Enumerating Solutions to P(a) + Q(b) = R(c) + S(d)
, 1999
Abstract

Cited by 9 (0 self)
Let p, q, r, s be polynomials with integer coefficients. This paper presents a fast method, using very little temporary storage, to find all small integers (a, b, c, d) satisfying p(a) + q(b) = r(c) + s(d). Numerical results include all small solutions to a ; all small solutions to a ; and the smallest positive integer that can be written in 5 ways as a sum of two coprime cubes.
Sorting in-place with a worst case complexity of n log n − 1.3n + O(log n) comparisons and εn log n + O(1) transports
 LNCS
, 1992
Abstract

Cited by 8 (0 self)
First we present a new variant of Mergesort, which needs only 1.25n space because it reuses space that becomes available within the current stage. It does not need more comparisons than classical Mergesort. The main result is an easy-to-implement method of iterating the procedure in-place, starting by sorting 4/5 of the elements. Thereby we can keep the additional transport costs linear, and only very few comparisons are lost, so that n log n − 0.8n comparisons are needed. We show that we can improve the number of comparisons if we sort blocks of constant length with MergeInsertion before starting the algorithm. Another improvement is to start the iteration with a better version, which needs only (1+ε)n space and again additional O(n) transports. The result is that we can improve this theoretically up to n log n − 1.3289n comparisons in the worst case. This is close to the theoretical lower bound of n log n − 1.443n. The total number of transports in all these versions can be reduced to εn log n + O(1) for any ε > 0.
Graphs for Metric Space Searching
, 2008
Abstract

Cited by 8 (6 self)
The problem of Similarity Searching consists in finding the elements from a set which are similar to a given query under some criterion. If the similarity is expressed by means of a metric, the problem is called Metric Space Searching. In this thesis we present new methodologies to solve this problem using graphs G(V,E) to represent the metric database. In G, the set V corresponds to the objects from the metric space and E to a small subset of edges from V × V, whose weights are computed according to the metric of the space under consideration. In particular, we study k-nearest neighbor graphs (knngs). The knng is a weighted graph connecting each element from V (that is, each object from the metric space) to its k nearest neighbors. We develop algorithms both to construct knngs in general metric spaces, and to use
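The knng itself is straightforward to define by brute force (a quadratic-time sketch; the thesis is precisely about constructing it far more cheaply in general metric spaces):

```python
import heapq

def knng(points, k, dist):
    """Build a k-nearest-neighbor graph by exhaustive comparison.

    Maps each point index to the indices of its k closest other
    points, using O(n^2) distance evaluations.
    """
    graph = {}
    for i, p in enumerate(points):
        cand = [(dist(p, q), j) for j, q in enumerate(points) if j != i]
        graph[i] = [j for _, j in heapq.nsmallest(k, cand)]
    return graph
```

Any function satisfying the metric axioms works as `dist`; the triangle inequality is what the thesis's cheaper constructions exploit to prune candidates.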
Optimal incremental sorting
 In Proc. 8th Workshop on Algorithm Engineering and Experiments (ALENEX)
, 2006
Abstract

Cited by 7 (5 self)
Let A be a set of size m. Obtaining the first k ≤ m elements of A in ascending order can be done in optimal O(m + k log k) time. We present an algorithm (online on k) which incrementally gives the next smallest element of the set, so that the first k elements are obtained in optimal time for any k. We also give a practical algorithm with the same complexity on average, which improves in practice on the existing online algorithm. As a direct application, we use our technique to implement Kruskal’s Minimum Spanning Tree algorithm, where our solution is competitive with the best current implementations. We finally show that our technique can be applied to several other problems, such as obtaining an interval of the sorted sequence and implementing heaps.
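A simple stand-in for such an online algorithm is a binary heap: heapify in O(m), then each successive minimum costs O(log m), giving O(m + k log m) for the first k elements (close to, but not matching, the paper's O(m + k log k) optimum):

```python
import heapq

def incremental_sort(a):
    """Yield the elements of a in ascending order, one at a time.

    Heap construction is O(m); each yielded element costs one
    O(log m) pop, so stopping after k elements costs O(m + k log m).
    """
    h = list(a)
    heapq.heapify(h)
    while h:
        yield heapq.heappop(h)
```

The caller simply stops consuming the generator once enough elements have been seen, paying nothing for the rest.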
On the Performance of WEAKHEAPSORT
, 2000
Abstract

Cited by 6 (2 self)
Dutton (1993) presents a further HEAPSORT variant called WEAKHEAPSORT, which also contains a new data structure for priority queues. The sorting algorithm and the underlying data structure are analyzed, showing that WEAKHEAPSORT is the best HEAPSORT variant and that it has a lot of nice properties. It is shown that the worst case number of comparisons is n⌈log n⌉ − 2^⌈log n⌉ + n − ⌈log n⌉ ≤ n log n + 0.1n, and that weak heaps can be generated with n − 1 comparisons. A double-ended priority queue based on weak heaps can be generated in n + ⌈n/2⌉ − 2 comparisons. Moreover, examples for the worst and the best case of WEAKHEAPSORT are presented, the number of weak heaps on {1, ..., n} is determined, and experiments on the average case are reported.