Results 1–10 of 10
Space-efficient planar convex hull algorithms
 Proc. Latin American Theoretical Informatics
, 2002
Cited by 20 (1 self)
A space-efficient algorithm is one in which the output is given in the same location as the input and only a small amount of additional memory is used by the algorithm. We describe four space-efficient algorithms for computing the convex hull of a planar point set.
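The space-efficient idea can be sketched with a monotone-chain-style scan that keeps the hull in a prefix of the input array, swapping discarded points behind it. This is an illustrative sketch of the general technique under those assumptions, not one of the paper's four algorithms; note that Python's sort itself is not constant-space.

```python
def upper_hull_inplace(pts):
    """Compute the upper hull of pts in place.

    pts is a list of (x, y) tuples; afterwards pts[:h] holds the upper
    hull from left to right, and h is returned.  Apart from the sort,
    only O(1) extra variables are used.  Illustrative sketch only, not
    the paper's algorithms.
    """
    pts.sort()  # by x, then y

    def cross(o, a, b):
        # > 0 if o->a->b turns counterclockwise (left)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    h = 0  # invariant: pts[:h] is the upper hull of the points seen so far
    for i in range(len(pts)):
        # pop hull points that would make a left or straight turn
        while h >= 2 and cross(pts[h - 2], pts[h - 1], pts[i]) >= 0:
            h -= 1
        # move the new hull point into the prefix; the displaced point
        # was already processed, so losing its position is harmless
        pts[h], pts[i] = pts[i], pts[h]
        h += 1
    return h
```

The same prefix trick with a second, mirrored pass yields the lower hull and hence the full convex hull.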
Asymptotically Efficient In-Place Merging
 Theoretical Computer Science
Cited by 14 (3 self)
Two linear-time algorithms for in-place merging are presented. Both algorithms perform at most m(t+1) + n/2^t + o(m) comparisons, where m and n are the sizes of the input sequences, m ≤ n, and t = ⌊log₂(n/m)⌋. The first algorithm is for unstable merging and carries out no more than 3(n+m) + o(m) element moves. The second algorithm is for stable merging and accomplishes at most 5n + 12m + o(m) moves. Key words: in-place algorithms, merging, sorting. A preliminary and weaker version of this work appeared in Proceedings of the 20th Symposium on Mathematical Foundations of Computer Science, Lecture Notes in Computer Science 969, Springer-Verlag, Berlin/Heidelberg (1995), 211-220.
Practical In-Place Mergesort
, 1996
Cited by 10 (3 self)
Two in-place variants of the classical mergesort algorithm are analysed in detail. The first, straightforward variant performs at most N log₂ N + O(N) comparisons and 3N log₂ N + O(N) moves to sort N elements. The second, more advanced variant requires at most N log₂ N + O(N) comparisons and εN log₂ N moves, for any fixed ε > 0 and any N > N(ε). In theory, the second variant is superior to advanced versions of heapsort. In practice, due to the overhead of index manipulation, our fastest in-place mergesort is still about 50 per cent slower than bottom-up heapsort. However, our implementations are practical compared to mergesort algorithms based on in-place merging. Key words: sorting, mergesort, in-place algorithms. CR Classification: F.2.2
Fast Stable Merging And Sorting In Constant Extra Space
, 1990
Cited by 8 (0 self)
In an earlier research paper [HL1], we presented a novel, yet straightforward linear-time algorithm for merging two sorted lists in a fixed amount of additional space. Constant of proportionality estimates and empirical testing reveal that this procedure is reasonably competitive with merge routines free to squander unbounded additional memory, making it particularly attractive whenever space is a critical resource. In this paper, we devise a relatively simple strategy by which this efficient merge can be made stable, and extend our results in a nontrivial way to the problem of stable sorting by merging. We also derive upper bounds on our algorithms' constants of proportionality, suggesting that in some environments (most notably external file processing) their modest runtime premiums may be more than offset by the dramatic space savings achieved.
Radix sorting with no extra space
 In Proceedings of the 15th European Symposium on Algorithms
, 2007
Cited by 7 (0 self)
It is well known that n integers in the range [1, n^c] can be sorted in O(n) time in the RAM model using radix sorting. More generally, integers in any range [1, U] can be sorted in O(n √(log log n)) time [5]. However, these algorithms use O(n) words of extra memory. Is this necessary? We present a simple, stable, integer sorting algorithm for words of size O(log n), which works in O(n) time and uses only O(1) words of extra memory on a RAM model. This is the integer sorting case most useful in practice. We extend this result with the same bounds to the case when the keys are read-only, which is of theoretical interest. Another interesting question is the case of arbitrary c. Here we present a black-box transformation from any RAM sorting algorithm to a sorting algorithm which uses only O(1) extra space and has the same running time. This settles the complexity of in-place sorting in terms of the complexity of sorting.
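For contrast, the textbook LSD radix sort below shows exactly where the O(n) words of extra memory come from: the per-pass output buffer. This is the standard baseline the paper improves on, not the paper's in-place algorithm.

```python
def radix_sort(a, word_bits=32, radix_bits=8):
    """Classic stable LSD radix sort for non-negative integers.

    Runs in O(n) time for word-sized keys, but every pass allocates an
    n-element output buffer -- the O(n) words of extra memory that an
    in-place radix sort avoids.  Textbook baseline, not the paper's
    algorithm.
    """
    mask = (1 << radix_bits) - 1
    for shift in range(0, word_bits, radix_bits):
        count = [0] * (mask + 1)
        for x in a:                      # histogram of the current digit
            count[(x >> shift) & mask] += 1
        total = 0
        for d in range(mask + 1):        # prefix sums -> start offsets
            count[d], total = total, total + count[d]
        out = [0] * len(a)               # the O(n)-word auxiliary buffer
        for x in a:                      # stable scatter by current digit
            d = (x >> shift) & mask
            out[count[d]] = x
            count[d] += 1
        a[:] = out
    return a
```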
Optimal in-place planar convex hull algorithms
 Proceedings of Latin American Theoretical Informatics (LATIN 2002), volume 2286 of Lecture Notes in Computer Science
, 2002
Cited by 5 (2 self)
An in-place algorithm is one in which the output is given in the same location as the input and only a small amount of additional memory is used by the algorithm. In this paper we describe three in-place algorithms for computing the convex hull of a planar point set. All three algorithms are optimal, some more so than others...
In-place suffix sorting
 In Proc. 34th Int. Colloq. Automata, Languages, and Programming
, 2007
Cited by 3 (0 self)
Given string T = T[1,..., n], the suffix sorting problem is to lexicographically sort the suffixes T[i,..., n] for all i. This problem is central to the construction of suffix arrays and trees, with many applications in string processing, computational biology and compression. A bottleneck in these applications is the amount of workspace needed to perform suffix sorting beyond the space needed to store the input as well as the output; emphasis is placed even on the constant c in the cn-space algorithms known for this problem. The best previous result [5] takes O(nv + n log n) time and O(n/√v) extra space, for any v ∈ [1, √n], for strings over a general alphabet. We improve this substantially and present the first known in-place suffix sorting algorithm. Our algorithm takes O(n log n) time using O(1) workspace and is optimal in the worst case for the general alphabet.
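A naive baseline makes the workspace issue concrete: sorting the suffix start positions with a comparison sort produces the suffix array, but each comparison may scan O(n) characters and the slicing allocates extra space, far from the paper's O(n log n) time and O(1) workspace. Illustrative only.

```python
def suffix_array(t):
    """Naive suffix sorting: sort suffix start positions lexicographically.

    Worst-case O(n^2 log n) time, and the key function materialises each
    suffix as an O(n) slice -- the kind of workspace overhead the paper's
    in-place algorithm eliminates.
    """
    return sorted(range(len(t)), key=lambda i: t[i:])
```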
Stable minimum storage merging by symmetric comparisons
 Algorithms  ESA 2004. Volume 3221 of Lecture Notes in Computer Science
, 2004
Cited by 2 (2 self)
We introduce a new stable minimum storage algorithm for merging that needs O(m log(n/m + 1)) element comparisons, where m and n are the sizes of the input sequences with m ≤ n. According to the lower bound for merging, our algorithm is asymptotically optimal regarding the number of comparisons. The presented algorithm rearranges the elements to be merged by rotations, where the areas to be rotated are determined by a simple principle of symmetric comparisons. This style of minimum storage merging is novel and looks promising. Our algorithm has a short and transparent definition. Experimental work has shown that it is very efficient and so might be of high practical interest.
On optimal and efficient in-place merging
 SOFSEM 2006. Volume 3831 of Lecture Notes in Computer Science
, 2006
Cited by 2 (2 self)
We introduce a new stable in-place merging algorithm that needs O(m log(n/m + 1)) comparisons and O(m + n) assignments. According to the lower bounds for merging, our algorithm is asymptotically optimal regarding the number of comparisons as well as assignments. The stable algorithm is developed in a modular style out of an unstable kernel, for which we give a definition in pseudocode. The literature so far describes several similar algorithms, but merely as sophisticated theoretical models without any reasoning about their practical value. We report specific benchmarks and show that our algorithm is, for almost all input sequences, faster than the efficient minimum storage algorithm by Dudzinski and Dydek. The proposed algorithm can be effectively used in practice.
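Rotation-based minimum storage merging can be sketched as the recursive scheme in the style of Dudzinski and Dydek that this line of work benchmarks against: split the larger run at its midpoint, binary-search that pivot in the other run, rotate the two inner blocks past each other, and recurse on both sides. A sketch of that comparison baseline, not the authors' own algorithm.

```python
from bisect import bisect_left, bisect_right

def _reverse(a, i, j):
    """Reverse a[i:j] in place."""
    j -= 1
    while i < j:
        a[i], a[j] = a[j], a[i]
        i, j = i + 1, j - 1

def _rotate(a, lo, mid, hi):
    """Exchange blocks a[lo:mid] and a[mid:hi] by three reversals."""
    _reverse(a, lo, mid)
    _reverse(a, mid, hi)
    _reverse(a, lo, hi)

def merge_inplace(a, lo, mid, hi):
    """Stable minimum-storage merge of sorted runs a[lo:mid] and a[mid:hi].

    The pivot (midpoint of the larger run) is rotated into its final
    position k, then both remaining subproblems are merged recursively.
    Uses O(log(m+n)) stack and no buffer.
    """
    if mid - lo == 0 or hi - mid == 0:
        return
    if mid - lo >= hi - mid:
        i = (lo + mid) // 2                    # pivot from the left run
        j = bisect_left(a, a[i], mid, hi)      # keep equal keys stable
        _rotate(a, i, mid, j)
        k = i + (j - mid)                      # pivot's final position
        merge_inplace(a, lo, i, k)
        merge_inplace(a, k + 1, j, hi)
    else:
        j = (mid + hi) // 2                    # pivot from the right run
        i = bisect_right(a, a[j], lo, mid)     # keep equal keys stable
        _rotate(a, i, mid, j + 1)
        k = i + (j - mid)                      # pivot's final position
        merge_inplace(a, lo, i, k)
        merge_inplace(a, k + 1, j + 1, hi)
```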