Results 1–10 of 11
Asymptotically Efficient In-Place Merging
 – Theoretical Computer Science
Abstract

Cited by 14 (3 self)
Two linear-time algorithms for in-place merging are presented. Both algorithms perform at most m(t+1) + n/2^t + o(m) comparisons, where m and n are the sizes of the input sequences, m ≤ n, and t = ⌊log₂(n/m)⌋. The first algorithm is for unstable merging and it carries out no more than 3(n+m) + o(m) element moves. The second algorithm is for stable merging and it accomplishes at most 5n + 12m + o(m) moves. Key words: in-place algorithms, merging, sorting. A preliminary and weaker version of this work appeared in Proceedings of the 20th Symposium on Mathematical Foundations of Computer Science, Lecture Notes in Computer Science 969, Springer-Verlag, Berlin/Heidelberg (1995), 211–220. Supported by the Slovak Grant Agency for Science under contract 1/4376/97 (Project "Combinatorial Structures and Complexity of Algorithms") and partially by the Danish Natural Science Research Council under contracts 9400952 (Project "Computational Algorithmics") and 9701414 (Project "Experimental Algorithmics"). Preprint submitted to Elsevier, December 19, 1995.
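As a quick numeric illustration of the comparison bound above (a sketch, not from the paper: the helper name is ours, and the o(m) term is deliberately dropped, so this gives only the leading terms):

```python
def merge_comparison_bound(m, n):
    """Leading terms of the bound m*(t+1) + n/2**t with t = floor(log2(n/m)).

    The o(m) lower-order term from the paper is omitted. Requires 1 <= m <= n.
    """
    assert 1 <= m <= n
    # floor(log2(n/m)) equals floor(log2(n//m)) because powers of two are integers
    t = (n // m).bit_length() - 1
    return m * (t + 1) + n / 2 ** t

# Equal-length inputs: t = 0, so the bound is ~ m + n (the classic merge cost).
print(merge_comparison_bound(4, 4))    # 8.0
# Very unequal inputs: the bound degrades gracefully toward binary-search-like cost.
print(merge_comparison_bound(2, 16))   # 10.0
```

Note how for m = n the formula collapses to m + n, matching the textbook merge, while for m ≪ n the m(t+1) term behaves like repeated binary searching.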
Practical In-Place Mergesort
, 1996
Abstract

Cited by 10 (3 self)
Two in-place variants of the classical mergesort algorithm are analysed in detail. The first, straightforward variant performs at most N log₂ N + O(N) comparisons and 3N log₂ N + O(N) moves to sort N elements. The second, more advanced variant requires at most N log₂ N + O(N) comparisons and εN log₂ N moves, for any fixed ε > 0 and any N ≥ N(ε). In theory, the second one is superior to advanced versions of heapsort. In practice, due to the overhead in the index manipulation, our fastest in-place mergesort is still about 50 per cent slower than bottom-up heapsort. However, our implementations are practical compared to mergesort algorithms based on in-place merging. Key words: sorting, mergesort, in-place algorithms. CR Classification: F.2.2
Radix sorting with no extra space
 In Proceedings of the 15th European Symposium on Algorithms
, 2007
Abstract

Cited by 7 (0 self)
It is well known that n integers in the range [1, n^c] can be sorted in O(n) time in the RAM model using radix sorting. More generally, integers in any range [1, U] can be sorted in O(n √(log log n)) time [5]. However, these algorithms use O(n) words of extra memory. Is this necessary? We present a simple, stable, integer sorting algorithm for words of size O(log n), which works in O(n) time and uses only O(1) words of extra memory on a RAM model. This is the integer sorting case most useful in practice. We extend this result with the same bounds to the case when the keys are read-only, which is of theoretical interest. Another interesting question is the case of arbitrary c. Here we present a black-box transformation from any RAM sorting algorithm to a sorting algorithm which uses only O(1) extra space and has the same running time. This settles the complexity of in-place sorting in terms of the complexity of sorting.
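For contrast, here is the classical bucket-based LSD radix sort the abstract starts from, written as a short sketch (function name and digit width are our choices): the per-pass bucket lists are exactly the O(n) words of extra memory that the paper eliminates.

```python
def radix_sort(keys, word_bits=16, digit_bits=4):
    """Stable LSD radix sort of non-negative integers < 2**word_bits.

    Uses O(n) extra memory for the buckets -- the overhead the
    constant-space algorithm in the paper above avoids.
    """
    mask = (1 << digit_bits) - 1
    for shift in range(0, word_bits, digit_bits):
        buckets = [[] for _ in range(1 << digit_bits)]
        for k in keys:
            # append preserves input order within a bucket, so each pass is stable
            buckets[(k >> shift) & mask].append(k)
        keys = [k for b in buckets for k in b]
    return keys

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

Stability of each counting pass is what makes the least-significant-digit ordering correct overall.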
InSitu, Stable Merging by way of the Perfect Shuffle.
, 1999
Abstract

Cited by 1 (0 self)
We introduce a novel approach to the classical problem of in-situ, stable merging, where "in-situ" means the use of no more than O(log² n) bits of extra memory for lists of size n. Shuffle-merge reduces the merging problem to the problem of realising the "perfect shuffle" permutation, that is, the exact interleaving of two equal-length lists. The algorithm is recursive, using a logarithmic number of variables, and so does not use absolutely minimum storage, i.e., a fixed number of variables. A simple method of realising the perfect shuffle uses one extra bit per element, and so is not in-situ. We show that the perfect shuffle can be attained using absolutely minimum storage and in linear time, at the expense of doubling the number of moves, relative to the simple method. We note that there is a worst case for Shuffle-merge requiring time Ω(n log n), where n is the sum of the lengths of the input lists. We also present an analysis of a variant of Shuffle-merge which uses a ...
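The "simple method" the abstract mentions, one extra bit per element, can be sketched as a cycle-following perfect shuffle: each cycle of the permutation is traced once, and the bit array records which positions are done. This is our illustrative sketch of that simple (deliberately not in-situ) method, not the authors' minimum-storage algorithm.

```python
def perfect_shuffle(a):
    """Interleave the two halves of a (even length) in place, except for
    one extra bit per element (the `done` array) marking finished positions."""
    m = len(a)
    assert m % 2 == 0
    n = m // 2
    # destination of index i: first half goes to even slots, second to odd
    dest = lambda i: 2 * i if i < n else 2 * (i - n) + 1
    done = [False] * m
    for start in range(m):
        if done[start]:
            continue
        i, val = start, a[start]
        while True:                     # follow one permutation cycle
            j = dest(i)
            a[j], val = val, a[j]       # drop val at its slot, pick up occupant
            done[j] = True
            i = j
            if i == start:
                break
    return a

print(perfect_shuffle([1, 2, 3, 4, 5, 6]))   # [1, 4, 2, 5, 3, 6]
```

The in-situ version in the paper removes the `done` array by identifying cycle leaders arithmetically, at the cost of roughly twice as many moves.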
Supervised by
, 2011
Abstract
Discrete geometry is to classical geometry what language is to thought, i.e., an imperfect means of representing reality. It took centuries for language to evolve to the point of being almost capable of faithfully describing our thought. Today discrete geometry tries to do the same thing with continuous geometry. Continuous geometry is a mathematical model that cannot be correctly or exactly reproduced in the real world or in computer science. A simple example is the famous number π. The theoretical mathematical model supposes an exact value of this number; however, the representation of a circle on the ground with a rope, or on a sheet of paper with a compass, can only give an approximate value of π, whatever the diameter of the circle, the size of the rope, or the precision of the compass. In computer science, for any approximation of π used during computations, the results will always be approximations. Today, one of the biggest challenges in computer science is to find new methods so that computers can represent reality as faithfully as possible. Regarding geometry, we strongly believe that these methods belong to discrete geometry.
In-Place Merging Algorithms
, 2004
Abstract
In this report we consider the problem of merging two sorted lists of m and n keys each in-place. We survey known techniques for this problem, focusing on correctness and the attributes of stability and practicality. We demonstrate a class of unstable in-place merge algorithms, based on block rearrangement and internal buffering, that actually fails to merge in the presence of sufficiently many duplicate keys of a given value. We show four relatively simple block-sorting techniques that can be used to correct these algorithms. In addition, we show relatively simple and robust techniques that perform a stable local block merge followed by a stable block sort to create a merge. Our internal merge is based on Kronrod's method of internal buffering and block partitioning. Using a block size of O(√(m+n)) we achieve a complexity of no more than 1.5(m+n) + O(√(m+n) lg(m+n)) comparisons and 4(m+n) + O(√(m+n) lg(m+n)) data moves. Using a block size of O((m+n)/lg(m+n)) gives a complexity of no more than ...
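To make the problem statement concrete: the sketch below is a stable in-place merge by binary search and block rotation, in the style of Dudzinski and Dydek's recursive method. It is not the report's Kronrod-style buffered merge (it spends O(n log n) comparisons in the worst case and logarithmic stack depth rather than O(1) extra space), but it shows what "merging two sorted runs without an output buffer" looks like.

```python
import bisect

def _reverse(a, i, j):
    """Reverse a[i:j] in place."""
    j -= 1
    while i < j:
        a[i], a[j] = a[j], a[i]
        i, j = i + 1, j - 1

def _rotate(a, lo, mid, hi):
    """Exchange blocks a[lo:mid] and a[mid:hi] in place via three reversals."""
    _reverse(a, lo, mid)
    _reverse(a, mid, hi)
    _reverse(a, lo, hi)

def merge_inplace(a, lo, mid, hi):
    """Stably merge the sorted runs a[lo:mid] and a[mid:hi] in place."""
    if lo >= mid or mid >= hi:
        return
    if mid - lo >= hi - mid:
        i = (lo + mid) // 2                       # pivot from the larger left run
        j = bisect.bisect_left(a, a[i], mid, hi)  # split right run around pivot
        _rotate(a, i, mid, j)
        k = i + (j - mid)                         # pivot's final position
        merge_inplace(a, lo, i, k)
        merge_inplace(a, k + 1, j, hi)
    else:
        j = (mid + hi) // 2                       # pivot from the larger right run
        i = bisect.bisect_right(a, a[j], lo, mid)
        _rotate(a, i, mid, j + 1)
        k = i + (j + 1 - mid)                     # one past pivot's final position
        merge_inplace(a, lo, i, k - 1)
        merge_inplace(a, k, j + 1, hi)

a = [1, 3, 5, 2, 4, 6]
merge_inplace(a, 0, 3, 6)
print(a)   # [1, 2, 3, 4, 5, 6]
```

`bisect_left`/`bisect_right` are chosen so that equal keys from the first run always land before equal keys from the second, which is exactly the stability property the report is concerned with.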
Abstract
, 2005
Abstract
Questions about order versus disorder in systems and models have been fascinating scientists over the years. In Computer Science, order is intimately related to sorting, commonly meant as the task of arranging keys in increasing or decreasing order with respect to an underlying total order relation. The sorted organization is amenable to searching a set of n keys, since each search requires Θ(log n) comparisons in the worst case, which is optimal if the cost of a single comparison can be considered a constant. Nevertheless, we prove that disorder implicitly provides more information than order does. For the general case of searching an array of multidimensional keys, whose comparison cost is proportional to their length (and hence cannot be considered a constant), we demonstrate that "suitable" disorder gives better bounds than those derivable by using the natural lexicographic order. We start out from previous work done by Andersson, Hagerup, Håstad and Petersson [SIAM Journal on Computing, 30(2), 2001], who proved that k log log n ...
Optimal In-Place Sorting of Vectors and Records
Abstract
Abstract. We study the problem of determining the complexity of optimal comparison-based in-place sorting when the key length, k, is not a constant. We present the first algorithm for lexicographically sorting n keys in O(nk + n log n) time using O(1) auxiliary data locations, which is simultaneously optimal in time and space.
On the Competitiveness of Linear Search
 In Proceedings of the 8th Annual European Symposium on Algorithms
, 2000
Abstract
We re-examine off-line techniques for linear search. Under a reasonable model of computation, a method is given to perform off-line linear search at an amortized cost proportional to the entropy of the request sequence. It follows that no on-line technique can guarantee an amortized cost matching that which one could obtain if given the request sequence in advance, i.e., there is no competitive linear search algorithm. 1 Introduction. This is a paper about the competitiveness of algorithms, that is, the extent to which an algorithm can receive and immediately process a sequence of queries almost as well as could be done if all queries were given in advance and a more global scheme could be developed for their processing. On-line algorithms and competitiveness have been a major focus in the theory of query processing over the past 10 or 15 years ([1], [2], [6], [7]). Borodin and El-Yaniv [2], in particular, give an excellent treatment of the topic. At the heart of proving anything about "optimal" perf...