Results 1 – 9 of 9
Space-efficient planar convex hull algorithms
 Proc. Latin American Theoretical Informatics
, 2002
Abstract

Cited by 20 (1 self)
A space-efficient algorithm is one in which the output is given in the same location as the input and only a small amount of additional memory is used by the algorithm. We describe four space-efficient algorithms for computing the convex hull of a planar point set.
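For context, a standard (not space-efficient) way to compute a planar convex hull is Andrew's monotone chain, which sorts the points and builds the hull with O(n) extra working space; the algorithms described above produce the same output while permuting the input array in place. A minimal sketch, with the function name and tuple point representation chosen for illustration (not taken from the paper):

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: O(n log n) time, O(n) extra space.
    Returns hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:  # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # drop the last point of each chain (it repeats the other chain's start)
    return lower[:-1] + upper[:-1]
```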
Space-Efficient Algorithms for Computing the Convex Hull of a Simple Polygonal Line in Linear Time
Abstract

Cited by 15 (2 self)
We present space-efficient algorithms for computing the convex hull of a simple polygonal line in-place, in linear time. It turns out that the problem is as hard as stable partition, i.e., if there were a truly simple solution then stable partition would also have a truly simple solution, and vice versa. Nevertheless, we present a simple self-contained solution that uses O(log n) space, and indicate how to improve it to O(1) space with the same techniques used for stable partition. If the points inside the convex hull can be discarded, then there is a truly simple solution that uses a single call to stable partition, and even that call can be spared if only extreme points are desired (and not their order). If the polygonal line is closed, then the problem admits a very simple solution which does not call for stable partitioning at all.
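The stable-partition problem the abstract reduces to asks to rearrange an array so that elements satisfying a predicate precede those that do not, with both groups keeping their original relative order; the hard part is doing this with O(1) extra space. With O(n) extra space it is trivial, which is why only the in-place version serves as a hardness benchmark. A sketch of the easy out-of-place version (not the paper's code):

```python
def stable_partition(a, pred):
    """Stable partition with O(n) extra space: items satisfying pred come
    first, and each group preserves its original relative order. The in-place
    O(1)-extra-space version, which the reduction above concerns, is much harder."""
    return [x for x in a if pred(x)] + [x for x in a if not pred(x)]
```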
An In-Place Sorting with O(n log n) Comparisons and O(n) Moves
 In Proc. 44th Annual IEEE Symposium on Foundations of Computer Science
, 2003
Abstract

Cited by 9 (0 self)
Abstract. We present the first in-place algorithm for sorting an array of size n that performs, in the worst case, at most O(n log n) element comparisons and O(n) element transports. This solves a long-standing open problem, stated explicitly, e.g., in [J.I. Munro and V. Raman, Sorting with minimum data movement, J. Algorithms, 13, 374–93, 1992], of whether there exists a sorting algorithm that matches the asymptotic lower bounds on all computational resources simultaneously.
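To see why matching both bounds simultaneously was open: classical algorithms hit one bound but not the other. Selection sort is in-place and performs at most n − 1 swaps (O(n) element moves) but Θ(n²) comparisons, while heapsort achieves O(n log n) comparisons at the cost of Θ(n log n) moves. A small instrumented sketch of the selection-sort side of that trade-off (counter names are illustrative):

```python
def selection_sort_with_counts(a):
    """Selection sort in place: Theta(n^2) comparisons but at most n - 1
    swaps, i.e., O(n) element moves -- one extreme of the trade-off that the
    result above shows can be avoided entirely."""
    comparisons = swaps = 0
    n = len(a)
    for i in range(n - 1):
        m = i  # index of the minimum of a[i:]
        for j in range(i + 1, n):
            comparisons += 1
            if a[j] < a[m]:
                m = j
        if m != i:
            a[i], a[m] = a[m], a[i]
            swaps += 1
    return comparisons, swaps
```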
On the Performance of WEAK-HEAPSORT
, 2000
Abstract

Cited by 6 (2 self)
Dutton (1993) presents a further HEAPSORT variant called WEAK-HEAPSORT, which also contains a new data structure for priority queues. The sorting algorithm and the underlying data structure are analyzed, showing that WEAK-HEAPSORT is the best HEAPSORT variant and that it has a lot of nice properties. It is shown that the worst-case number of comparisons is n⌈log n⌉ − 2^⌈log n⌉ + n − ⌈log n⌉ ≤ n log n + 0.1n, and that weak heaps can be generated with n − 1 comparisons. A double-ended priority queue based on weak heaps can be generated in n + ⌈n/2⌉ − 2 comparisons. Moreover, examples for the worst and the best case of WEAK-HEAPSORT are presented, the number of weak heaps on {1,...,n} is determined, and experiments on the average case are reported.
Optimal in-place planar convex hull algorithms
 Proceedings of Latin American Theoretical Informatics (LATIN 2002), volume 2286 of Lecture Notes in Computer Science
, 2002
Abstract

Cited by 5 (2 self)
An in-place algorithm is one in which the output is given in the same location as the input and only a small amount of additional memory is used by the algorithm. In this paper we describe three in-place algorithms for computing the convex hull of a planar point set. All three algorithms are optimal, some more so than others...
The Ultimate Heapsort
 In Proceedings of the Computing: the 4th Australasian Theory Symposium, Australian Computer Science Communications
, 1998
Abstract

Cited by 4 (0 self)
A variant of Heapsort, named Ultimate Heapsort, is presented that sorts n elements in place in Θ(n log₂(n+1)) worst-case time by performing at most n log₂ n + Θ(n) key comparisons and n log₂ n + Θ(n) element moves. The secret behind Ultimate Heapsort is that it occasionally transforms the heap it operates with into a two-layer heap which keeps small elements at the leaves. Basically, Ultimate Heapsort is like Bottom-Up Heapsort but, due to the two-layer-heap property, an element taken from a leaf has to be moved towards the root only O(1) levels, on average. Let a[1..n] be an array of n elements, each consisting of a key and some information associated with this key. This array is a (maximum) heap if, for all i ∈ {2,...,n}, the key of element a[⌊i/2⌋] is larger than or equal to that of element a[i]. That is, a heap is a pointer-free representation of a left-complete binary tree, where the elements stored are partially ordered according to their keys. Ele...
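The heap condition quoted above (with 1-based indexing, a[⌊i/2⌋] ≥ a[i] for all i ∈ {2,...,n}) can be checked directly. A small sketch on a 0-based Python list, with the helper name chosen for illustration:

```python
def is_max_heap(a):
    """Check the heap condition from the abstract: with 1-based indexing,
    a[i // 2] >= a[i] for all i in {2,...,n}. Since `a` is a 0-based Python
    list, 1-based element i lives at a[i - 1]."""
    n = len(a)
    return all(a[(i // 2) - 1] >= a[i - 1] for i in range(2, n + 1))
```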
Multiway Blockwise In-Place Merging
Abstract
Abstract. We present an algorithm for asymptotically efficient multiway blockwise in-place merging. Given an array A containing sorted subsequences A1,..., Ak of respective lengths n1,..., nk, where ∑_{i=1}^{k} n_i = n, we assume that extra k·s elements (so-called buffer elements) are positioned at the very end of array A, and that the lengths n1,..., nk are positive integer multiples of some parameter s (i.e., multiples of a given block of length s). The number of input sequences k is a fixed constant parameter, not dependent on the lengths of the input sequences. Then our algorithm merges the subsequences A1,..., Ak into a single sorted sequence, performing Θ(n·log k) + O((n/s)²) + O(s·log s) element comparisons and 3·n + O(s·log s) element moves. For s = ⌈n^{2/3}/(log n)^{1/3}⌉, this gives an algorithm performing Θ(n·log k) + O((n·log n)^{2/3}) comparisons and 3·n + O((n·log n)^{2/3}) moves. That is, our algorithm runs in linear time, with an asymptotically optimal number of comparisons and with the number of moves independent of the number of input sequences. Moreover, our algorithm is "almost in-place": it requires only k extra blocks of size s = o(n).
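For contrast with the in-place blockwise algorithm above, the textbook way to merge k sorted runs uses a heap and a separate output buffer, achieving O(n log k) comparisons but O(n) extra space; the paper's contribution is getting a comparable comparison count with roughly 3n moves and only o(n) extra space. A standard-library sketch of the ordinary, not-in-place version:

```python
import heapq

def kway_merge(runs):
    """Heap-based k-way merge: O(n log k) comparisons for n total elements,
    but writes the result to a separate O(n) output list rather than merging
    in place."""
    return list(heapq.merge(*runs))
```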
Generating the Communications Infrastructure for Module-based Dynamic Reconfiguration of FPGAs
, 2008
Abstract
I would like to thank my supervisor, Dr. Oliver Diessel, for his unwavering support in this project. His supervision was exemplary, he made the entire experience of getting a PhD rich and full, and he encouraged me to think critically in ways I would not have previously imagined. I would also like to thank my co-supervisor Prof. Sri Parameswaran for his excellent insights. I would like to thank my wife, Molly Hu, for all her support and understanding throughout the pursuit of my degree, especially at the most critical moments. I would also like to thank my mother, who encouraged me to pursue my PhD and supported me throughout. I would like to thank all of my fellow PhD students on the 5th floor in the Architecture Group, especially Jorgen Peddersen, who, besides being the best friend one might have, also encouraged me to think. The rest of my fellow students Jeremy Chan, Krutartha Patel, Anjelo Ambrose, Carol He, Michael Chong all made my research experience in UNSW Sydney the best anyone could have. Last but not least I would like to thank the Australian Government for the Australian Postgraduate
Additional Key Words and Phrases: Sorting in-place
Abstract
Abstract. We present the first in-place algorithm for sorting an array of size n that performs, in the worst case, at most O(n log n) element comparisons and O(n) element transports. This solves a long-standing open problem, stated explicitly, for example, in Munro and Raman [1992], of whether there exists a sorting algorithm that matches the asymptotic lower bounds on all computational resources simultaneously.