Results 11–20 of 20
On optimal and efficient in place merging
SOFSEM 2006, Volume 3831 of Lecture Notes in Computer Science, 2006
"... Abstract. We introduce a new stable in place merging algorithm that needs O(m log ( n +1)) comparisons and O(m+n) assignments. According m to the lower bounds for merging our algorithm is asymptotically optimal regarding the number of comparisons as well as assignments. The stable algorithm is devel ..."
Abstract

Cited by 2 (2 self)
 Add to MetaCart
(Show Context)
Abstract. We introduce a new stable in-place merging algorithm that needs O(m log(n/m + 1)) comparisons and O(m+n) assignments. According to the lower bounds for merging, our algorithm is asymptotically optimal with respect to both the number of comparisons and the number of assignments. The stable algorithm is developed in a modular style out of an unstable kernel, for which we give a definition in pseudocode. The literature so far describes several similar algorithms, but merely as sophisticated theoretical models without any reasoning about their practical value. We report specific benchmarks and show that our algorithm is, for almost all input sequences, faster than the efficient minimum-storage algorithm by Dudzinski and Dydek. The proposed algorithm can be effectively used in practice.
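For orientation, the rotation-based minimum-storage merging scheme that this abstract benchmarks against (in the spirit of Dudzinski and Dydek) can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm: the function names are ours, and the rotation here uses Python slicing (which allocates a temporary), whereas a true minimum-storage version would rotate via three reversals.

```python
from bisect import bisect_left, bisect_right

def rotate(a, f, m, l):
    """Rotate a[f:l] so a[m:l] comes before a[f:m] (slicing sketch; a
    strictly in-place version would use the three-reversal trick)."""
    a[f:l] = a[m:l] + a[f:m]

def sym_merge(a, f, m, l):
    """Stably merge the sorted runs a[f:m] and a[m:l] using rotations.
    O(m log(n/m + 1)) comparisons via binary searches; recursive."""
    if m - f == 0 or l - m == 0:
        return
    if m - f == 1:                          # single left element: binary insert
        rotate(a, f, m, bisect_left(a, a[f], m, l))
        return
    if l - m == 1:                          # single right element: binary insert
        rotate(a, bisect_right(a, a[m], f, m), m, l)
        return
    if m - f >= l - m:                      # split the larger run at its middle,
        p = f + (m - f) // 2                # locate the pivot in the other run
        q = bisect_left(a, a[p], m, l)
    else:
        q = m + (l - m) // 2
        p = bisect_right(a, a[q], f, m)
    new_m = p + (q - m)
    rotate(a, p, m, q)                      # bring the two halves together
    sym_merge(a, f, p, new_m)               # recurse on both sub-problems
    sym_merge(a, new_m, q, l)
```

The `bisect_left`/`bisect_right` asymmetry keeps the merge stable: on ties, left-run elements always end up before right-run elements.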
In-Situ, Stable Merging by way of the Perfect Shuffle
, 1999
"... We introduce a novel approach to the classical problem of insitu, stable merging, where "insitu" means the use of no more than O(log 2 n) bits of extra memory for lists of size n. Shufflemerge reduces the merging problem to the problem of realising the "perfect shuffle" permu ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
(Show Context)
We introduce a novel approach to the classical problem of in-situ, stable merging, where "in-situ" means the use of no more than O(log² n) bits of extra memory for lists of size n. Shuffle-merge reduces the merging problem to the problem of realising the "perfect shuffle" permutation, that is, the exact interleaving of two equal-length lists. The algorithm is recursive, using a logarithmic number of variables, and so does not use absolutely minimum storage, i.e., a fixed number of variables. A simple method of realising the perfect shuffle uses one extra bit per element, and so is not in-situ. We show that the perfect shuffle can be attained using absolutely minimum storage and in linear time, at the expense of doubling the number of moves relative to the simple method. We note that there is a worst case for shuffle-merge requiring time Ω(n log n), where n is the sum of the lengths of the input lists. We also present an analysis of a variant of shuffle-merge which uses a ...
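The "simple method" the abstract mentions, realising the perfect shuffle with one extra visited bit per element, can be sketched as a cycle-leader permutation. This is our own illustration of that baseline, not the paper's minimum-storage construction:

```python
def perfect_shuffle(a):
    """Interleave the two halves of a (even length) in place:
    [x0..x_{n-1}, y0..y_{n-1}] -> [x0, y0, x1, y1, ...].
    Uses one 'visited' bit per element, so it is NOT in-situ in the
    abstract's sense; it is the simple baseline the paper improves on."""
    n2 = len(a)
    assert n2 % 2 == 0, "perfect shuffle needs an even-length list"
    n = n2 // 2
    # destination of the element currently at index j
    dest = lambda j: 2 * j if j < n else 2 * (j - n) + 1
    visited = [False] * n2              # the "one extra bit per element"
    for start in range(n2):
        if visited[start]:
            continue
        j, val = start, a[start]
        while True:                     # follow one cycle of the permutation
            d = dest(j)
            a[d], val = val, a[d]
            visited[d] = True
            j = d
            if j == start:
                break
```

Shuffle-merge then builds a full stable merge on top of this permutation, which is where the recursion and the Ω(n log n) worst case arise.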
Enhanced Generic Key-Address Mapping Sort Algorithm
"... Various methods, such as addresscalculation sort, distribution counting sort, radix sort, and bucket sort, adopt the values being sorted to improve sorting efficiency, but require extra storage space. This work presents a specific keyaddress mapping sort implementation. The proposed algorithm has ..."
Abstract
 Add to MetaCart
(Show Context)
Various methods, such as address-calculation sort, distribution counting sort, radix sort, and bucket sort, exploit the values being sorted to improve sorting efficiency, but require extra storage space. This work presents a specific key-address mapping sort implementation. The proposed algorithm has the advantages of linear average-time performance and no requirement for linked-list data structures, and avoids the tedious second round of sorting required by other content-based sorting algorithms, such as Groupsort. The key-address mapping function employed in the proposed algorithm can fit data in any specific distribution when the mapping function is carefully designed. The cases of uniformly and normally distributed data are explored herein to demonstrate the effectiveness of the proposed key-address mapping functions. Although computing the average and the standard deviation adds overhead to our sorting algorithm, the empirical results indicate that the proposed sorting algorithm is still faster than both Quicksort and Groupsort for lists comprising 1,000 to 2,000,000 positive integers. The proposed algorithm adopts a valid key-address mapping function for uniformly distributed data, and a desirable approximation of the cumulative distribution function by a cubic polynomial for normally distributed data.
A Key-Address Mapping Sort Algorithm
"... Abstract: Various methods, such as addresscalculation sort, distribution counting sort, radix sort, and bucket sort, adopt the values being sorted to improve sorting efficiency, but require extra storage space. This work presents a specific keyaddress mapping sort implementation. The proposed alg ..."
Abstract
 Add to MetaCart
(Show Context)
Abstract: Various methods, such as address-calculation sort, distribution counting sort, radix sort, and bucket sort, exploit the values being sorted to improve sorting efficiency, but require extra storage space. This work presents a specific key-address mapping sort implementation. The proposed algorithm has the advantages of linear average-time performance and no requirement for linked-list data structures, and avoids the tedious second round of sorting required by other content-based sorting algorithms, such as Groupsort. The key-address mapping function employed in the proposed algorithm can fit data in any specific distribution when the mapping function is carefully designed. The case of uniformly distributed data is explored herein to demonstrate the effectiveness of the proposed key-address mapping functions. Although computing the average and the standard deviation adds overhead to our sorting algorithm, the empirical results indicate that the proposed sorting algorithm is still faster than both Quicksort and Groupsort for lists comprising 1,000 to 1,600,000 positive integers.
Parallel Methods for Solving Fundamental File Rearrangement Problems
, 1990
"... We present parallel algorithms for the elementary binary set operations that, given an EREW PRAM with k processors, operate on two sorted lists of total length n in O(n=k + log n) time and O(k) extra space, and are thus timespace optimal for any value of k n=(log n). Our methods are stable, requir ..."
Abstract
 Add to MetaCart
We present parallel algorithms for the elementary binary set operations that, given an EREW PRAM with k processors, operate on two sorted lists of total length n in O(n/k + log n) time and O(k) extra space, and are thus time-space optimal for any value of k ≤ n/log n. Our methods are stable, require no information other than a record's key, and do not modify records as they execute.

1. Introduction. The design and analysis of optimal parallel file rearrangement algorithms has long been a topic of widespread attention. The vast majority of the published literature has concentrated on the search for algorithms that are time optimal, that is, those that achieve optimal speedup (see, for example, [AS]). Unfortunately, space management issues have often taken a back seat in these efforts, leaving those who seek to implement optima...
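The binary set operations in question are the usual union, intersection, and symmetric difference of sorted lists. As background (our sketch, not the paper's PRAM algorithm), the sequential two-pointer version of one of them looks like this; the parallel variants partition this linear scan across k processors:

```python
def sorted_union(xs, ys):
    """Union of two sorted lists, treating each list as a sorted set:
    elements appearing in both lists are emitted once. Linear time,
    the sequential analogue of the O(n/k + log n) parallel operation."""
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] < ys[j]:
            out.append(xs[i]); i += 1
        elif xs[i] > ys[j]:
            out.append(ys[j]); j += 1
        else:                      # equal keys: keep one copy, advance both
            out.append(xs[i]); i += 1; j += 1
    out.extend(xs[i:])             # flush whichever list remains
    out.extend(ys[j:])
    return out
```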
Space-Efficient Planar Convex Hull Algorithms
"... A spaceefficient algorithm is one in which the output is given in the same location as the input and only a small amount of additional memory is used by the algorithm. We describe four spaceefficient algorithms for computing the convex hull of a planar point set. ..."
Abstract
 Add to MetaCart
A space-efficient algorithm is one in which the output is given in the same location as the input and only a small amount of additional memory is used by the algorithm. We describe four space-efficient algorithms for computing the convex hull of a planar point set.
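For context on the underlying problem, here is a standard O(n log n) planar convex hull (Andrew's monotone chain). This is background only, not one of the paper's four space-efficient variants: it builds the hull in O(n) auxiliary space rather than in the input array:

```python
def convex_hull(pts):
    """Return the convex hull of a list of (x, y) points in
    counter-clockwise order, each vertex listed once."""
    pts = sorted(set(pts))                 # sort by x, then y; drop duplicates
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(points):
        h = []
        for p in points:
            # pop while the last two hull points and p fail to turn left
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    lower = half(pts)
    upper = half(pts[::-1])
    return lower[:-1] + upper[:-1]         # endpoints shared, so drop them
```

A space-efficient variant of this scan keeps the partial hull in a prefix of the (sorted) input array itself instead of the auxiliary lists used here.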
In-Place Merging Algorithms
, 2004
"... In this report we consider the problem of merging two sorted lists of m and n keys each inplace. We survey known techniques for this problem, focussing on correctness and the attributes of Stability and Practicality. We demonstrate a class of unstable inplace merge algorithms that uses block rearr ..."
Abstract
 Add to MetaCart
(Show Context)
In this report we consider the problem of merging two sorted lists of m and n keys each in-place. We survey known techniques for this problem, focussing on correctness and the attributes of stability and practicality. We demonstrate a class of unstable in-place merge algorithms, based on block rearrangement and internal buffering, that actually fails to merge in the presence of sufficiently many duplicate keys of a given value. We show four relatively simple block-sorting techniques that can be used to correct these algorithms. In addition, we show relatively simple and robust techniques that perform a stable local block merge followed by a stable block sort to create a merge. Our internal merge is based on Kronrod's method of internal buffering and block partitioning. Using a block size of O(√(m+n)) we achieve a complexity of no more than 1.5(m+n) + O(√(m+n) lg(m+n)) comparisons and 4(m+n) + O(√(m+n) lg(m+n)) data moves. Using a block size of O((m+n)/lg(m+n)) gives complexity of no more than ...
Parallel Benchmarks and Comparison-Based Computing
, 1995
"... Nonnumeric algorithms have been largely ignored in parallel benchmarking suites. Prior studies have concentrated mainly on the computational speed of processors within very regular and structured numeric codes. In this paper, we survey the current state of nonnumeric benchmark algorithms and inves ..."
Abstract
 Add to MetaCart
Non-numeric algorithms have been largely ignored in parallel benchmarking suites. Prior studies have concentrated mainly on the computational speed of processors within very regular and structured numeric codes. In this paper, we survey the current state of non-numeric benchmark algorithms and investigate the use of in-place merging as a suitable candidate for this role. In-place merging enjoys several important advantages, including the scalability of efficient memory utilization, the generality of comparison-based computing, and the representativeness of near-random data access patterns. Experimental results over several families of parallel architectures are presented. A preliminary version of a portion of this paper was presented at the International Conference on Parallel Computing held in Gent, Belgium, in September 1995. This research has been supported in part by the National Science Foundation under grant CDA-9115428 and by the Office of Naval Research under contract N00014...
Caches and Algorithms
, 1996
"... In presenting this dissertation in partial ful llment of the requirements for the Doctoral degree at the University ofWashington, I agree that the Library shall make its copies freely available for inspection. I further agree that extensive copying of this dissertation is allowable only for scholar ..."
Abstract
 Add to MetaCart
In presenting this dissertation in partial fulfillment of the requirements for the Doctoral degree at the University of Washington, I agree that the Library shall make its copies freely available for inspection. I further agree that extensive copying of this dissertation is allowable only for scholarly purposes, consistent with "fair use" as prescribed in the U.S. Copyright Law. Requests for copying or reproduction of this dissertation may be referred to University Microfilms, 1490 Eisenhower Place,