Results 1 - 3 of 3
Space-efficient planar convex hull algorithms
Proc. Latin American Theoretical Informatics, 2002
"... A spaceefficient algorithm is one in which the output is given in the same location as the input and only a small amount of additional memory is used by the algorithm. We describe four spaceefficient algorithms for computing the convex hull of a planar point set. ..."
Abstract

Cited by 20 (1 self)
A space-efficient algorithm is one in which the output is given in the same location as the input and only a small amount of additional memory is used by the algorithm. We describe four space-efficient algorithms for computing the convex hull of a planar point set.
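To illustrate the in-place idea the abstract describes (this is a rough sketch, not one of the paper's four algorithms), the lower hull of a sorted point set can be built in a prefix of the input list itself, using only O(1) memory beyond the list:

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b; > 0 means a
    counter-clockwise turn at a when walking o -> a -> b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def inplace_lower_hull(pts):
    """Sketch of a space-efficient lower-hull computation: the hull is
    written into the prefix pts[:h] of the input list, overwriting points
    that have been popped and are no longer needed. Returns the hull size h.
    Note this destroys the input, so it computes only the lower hull; the
    paper's full algorithms, which produce the whole hull, are more involved."""
    pts.sort()  # in-place lexicographic sort, O(1) extra space conceptually
    h = 0       # stack top: pts[:h] holds the hull built so far
    for i in range(len(pts)):
        p = pts[i]
        # Pop while the last two hull points and p fail to make a
        # counter-clockwise turn (clockwise or collinear).
        while h >= 2 and cross(pts[h - 2], pts[h - 1], p) <= 0:
            h -= 1
        pts[h] = p  # safe: h <= i, and pts[h] was already consumed or popped
        h += 1
    return h
```

The overwrite at `pts[h] = p` is the space-saving step: because the stack of hull candidates occupies exactly the prefix `pts[:h]` and `h` never exceeds `i`, the slot being written always holds a point that was already pushed and later discarded.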
Optimal in-place planar convex hull algorithms
Proceedings of Latin American Theoretical Informatics (LATIN 2002), volume 2286 of Lecture Notes in Computer Science, 2002
"... An inplace algorithm is one in which the output is given in the same location as the input and only a small amount of additional memory is used by the algorithm. In this paper we describe three inplace algorithms for computing the convex hull of a planar point set. All three algorithms are optima ..."
Abstract

Cited by 5 (2 self)
An in-place algorithm is one in which the output is given in the same location as the input and only a small amount of additional memory is used by the algorithm. In this paper we describe three in-place algorithms for computing the convex hull of a planar point set. All three algorithms are optimal, some more so than others...
External Duplicate Deletion with Large Main Memories
, 1993
"... An external duplicate deletion algorithm is here developed, which makes an extensive use of hashing. With the current large main memories, a twophase version of the algorithm is sufficient in most practical situations. The first phase deletes part of the duplicates at once, and divides the rest of ..."
Abstract

Cited by 1 (0 self)
An external duplicate deletion algorithm is developed here which makes extensive use of hashing. With current large main memories, a two-phase version of the algorithm is sufficient in most practical situations. The first phase deletes part of the duplicates at once and divides the rest of the elements into mutually disjoint subfiles. These are then processed separately in the second phase. Experiments show that the new algorithm performs about the same number of disk I/Os as the traditional sort-merge technique, but the number of comparisons is considerably smaller. The CPU time is only about half of the sort-merge time in most cases.
1 Introduction
Deletion of duplicates (here DD for short) from a multiset is one of the fundamental operations in database applications. The relations of a relational database are mathematical sets, by definition, and thus free of duplicates (FOD for short). Although many relational DBMSs are not too strict about this issue, an early DD may be...
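The two phases described in the abstract can be sketched as follows. This is a simplified in-memory simulation under assumed details (partition count, memory limit, and the spill mechanism are illustrative, not taken from the paper):

```python
import hashlib

def external_dedup(records, num_partitions=4, memory_limit=3):
    """Sketch of two-phase hash-based duplicate deletion.
    Phase 1: records that fit in the in-memory table are deduplicated
    immediately; the rest are partitioned into disjoint 'subfiles' by
    hash value (lists standing in for files on disk).
    Phase 2: each subfile is deduplicated independently, which is safe
    because identical records always hash to the same subfile."""
    seen = set()                                  # in-memory hash table
    output = []
    subfiles = [[] for _ in range(num_partitions)]
    for rec in records:
        if rec in seen:
            continue                              # duplicate deleted at once
        if len(seen) < memory_limit:
            seen.add(rec)
            output.append(rec)
        else:
            # Memory full: spill to the subfile chosen by the hash bucket.
            h = int(hashlib.sha1(rec.encode()).hexdigest(), 16)
            subfiles[h % num_partitions].append(rec)
    # Phase 2: the subfiles are mutually disjoint, so each one is
    # processed on its own with a fresh (small) table.
    for sub in subfiles:
        local = set()
        for rec in sub:
            if rec not in local:
                local.add(rec)
                output.append(rec)
    return output
```

The key invariant is that copies of the same record can never end up in different subfiles, so no cross-subfile comparison is ever needed; this is what lets the second phase run on each subfile separately.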