Results 1–10 of 12
Improving the Query Performance of High-Dimensional Index Structures by Bulk Load Operations
 In Proc. Conference on Extending Database Technology, LNCS 1377, 1998
Abstract
Cited by 53 (12 self)
In this paper, we propose a new bulk-loading technique for high-dimensional indexes, which represent an important component of multimedia database systems. Since it is very inefficient to construct an index for a large amount of data by dynamic insertion of single objects, there is an increasing interest in bulk-loading techniques. In contrast to previous approaches, our technique exploits a priori knowledge of the complete data set to improve both construction time and query performance. Our algorithm operates in a manner similar to the Quicksort algorithm and has an average runtime complexity of O(n log n). We additionally improve the query performance by optimizing the shape of the bounding boxes, by completely avoiding overlap, and by clustering the pages on disk.
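The Quicksort-like, top-down construction the abstract describes can be sketched roughly as follows. This is a hypothetical illustration, not the paper's code: the names and PAGE_CAPACITY are my assumptions, and for brevity each level sorts along the split dimension, whereas the paper's O(n log n) average comes from partitioning around the median Quickselect-style.

```python
PAGE_CAPACITY = 4   # illustrative page size, not the paper's value

def bounding_box(points):
    """Axis-aligned bounding box of a list of equal-dimension points."""
    dims = len(points[0])
    return tuple((min(p[d] for p in points), max(p[d] for p in points))
                 for d in range(dims))

def bulk_load(points):
    """Recursively split at the median of the widest dimension; return a
    list of (bounding_box, points) leaf pages whose boxes do not overlap
    (assuming distinct coordinates along each split dimension)."""
    if len(points) <= PAGE_CAPACITY:
        return [(bounding_box(points), points)]
    dims = len(points[0])
    # split along the dimension of largest extent to keep boxes cube-like
    split_dim = max(range(dims),
                    key=lambda d: max(p[d] for p in points) - min(p[d] for p in points))
    points = sorted(points, key=lambda p: p[split_dim])
    mid = len(points) // 2
    return bulk_load(points[:mid]) + bulk_load(points[mid:])
```

Because the split uses a priori knowledge of the whole data set, sibling pages never overlap, which is one of the query-time optimizations the abstract cites.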
Delayed Consistency And Its Effects On The Miss Rate Of Parallel Programs
 In Supercomputing '91 Proceedings, 1991
Abstract
Cited by 37 (4 self)
In cache-based multiprocessors a protocol must maintain coherence among replicated copies of shared writable data. In delayed consistency protocols the effects of outgoing and incoming invalidations or updates are delayed. Delayed coherence can reduce processor blocking time as well as the effects of false sharing. In this paper, we introduce several implementations of delayed consistency for cache-based systems in the framework of a weakly-ordered consistency model. A performance comparison of the delayed protocols with the corresponding On-the-Fly (non-delayed) consistency protocol is made, through execution-driven simulations of four parallel algorithms. The results show that, for parallel programs in which false sharing is a problem, significant reductions in the data miss rate of parallel programs can be obtained with just a small increase in the cost and complexity of the cache system.
Quickselect and the Dickman function
 Combinatorics, Probability and Computing, 2000
Abstract
Cited by 25 (1 self)
We show that the limiting distribution of the number of comparisons used by Hoare's quickselect algorithm when given a random permutation of n elements for finding the mth smallest element, where m = o(n), is the Dickman function. The limiting distribution of the number of exchanges is also derived.

1 Quickselect

Quickselect is one of the simplest and most efficient algorithms in practice for finding specified order statistics in a given sequence. It was invented by Hoare [19] and uses the usual partitioning procedure of quicksort: choose first a partitioning key, say x; regroup the given sequence into two parts corresponding to elements whose values are less than and larger than x, respectively; then decide, according to the size of the smaller subgroup, in which part to continue recursively, or stop if x is the desired order statistic; see Figure 1 for an illustration in terms of binary search trees. For more details, see Guibas [15] and Mahmoud [26]. This algorithm, although inefficient in the worst case, has linear mean when given a sequence of n independent and identically distributed continuous random variables, or equivalently, when given a random permutation of n elements, where, here and throughout this paper, all n! permutations are equally likely. Let C_{n,m} denote the number of comparisons used by quickselect for finding the mth smallest element in a random permutation, where the first partitioning stage uses n - 1 comparisons. Knuth [23] was the first to show, by a differencing argument, that E(C_{n,m}) = 2(n + 3 + (n+1)H_n - (m+2)H_m - (n+3-m)H_{n+1-m}), 1 <= m <= n, where H_m = sum_{1<=k<=m} 1/k. A more transparent asymptotic approximation is E(C_{n,m}) ~ n f(m/n), where f(α) := 2(1 - α log α - (1 - α) log(1 - α)) ...
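The partitioning procedure described above can be sketched as a few lines of code. This is a minimal illustration assuming the classic first-element pivot; the function names are mine, and the counter charges n - 1 comparisons per partitioning stage, matching the model in the abstract.

```python
def quickselect(seq, m):
    """Return the m-th smallest element (1-indexed) of seq together with
    the number of key comparisons used."""
    comparisons = 0

    def select(a, m):
        nonlocal comparisons
        pivot = a[0]                      # first element as partitioning key
        comparisons += len(a) - 1         # one comparison per remaining element
        smaller = [x for x in a[1:] if x < pivot]
        larger = [x for x in a[1:] if x >= pivot]
        if m <= len(smaller):
            return select(smaller, m)     # answer lies in the smaller part
        if m == len(smaller) + 1:
            return pivot                  # pivot is the desired order statistic
        return select(larger, m - len(smaller) - 1)

    return select(list(seq), m), comparisons
```

For m fixed while n grows (the m = o(n) regime studied in the paper), most of the work is the first few partitioning passes, which is why the total stays linear on average.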
Transitional Behaviors of the Average Cost of Quicksort With Median-of-(2t + 1)
, 2001
Abstract
Cited by 12 (6 self)
A fine analysis is given of the transitional behavior of the average cost of quicksort with median-of-three. Asymptotic formulae are derived for the stepwise improvement of the average cost of quicksort when iterating median-of-three k rounds, for all possible values of k. The methods used are general enough to apply to quicksort with median-of-(2t + 1) and to explain in a precise manner the transitional behaviors of the average cost from insertion sort to quicksort proper. Our results also imply nontrivial bounds on the expected height, "saturation level", and width of a random locally balanced binary search tree.
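Median-of-(2t + 1) pivot selection can be illustrated with a short sketch; this is not the paper's analysis, just the scheme it analyzes, with an assumed random sample and an illustrative function name. Taking t = 1 gives the familiar median-of-three.

```python
import random

def quicksort_median(a, t=1):
    """Quicksort taking the median of a random sample of 2t + 1 elements
    as the pivot; larger t sharpens the split at extra sampling cost."""
    if len(a) <= 2 * t + 1:
        return sorted(a)                  # tiny inputs: any base sort works
    sample = random.sample(a, 2 * t + 1)
    pivot = sorted(sample)[t]             # the sample median
    smaller = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    larger = [x for x in a if x > pivot]
    return quicksort_median(smaller, t) + equal + quicksort_median(larger, t)
```

As t grows the pivot concentrates around the true median, which is the transition from plain quicksort toward the better-balanced behavior the abstract quantifies round by round.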
On the adaptiveness of quicksort
 In: Workshop on Algorithm Engineering & Experiments, SIAM, 2005
Abstract
Cited by 12 (1 self)
Quicksort was first introduced in 1961 by Hoare. Many variants have been developed, the best of which are among the fastest generic sorting algorithms available, as testified by the choice of Quicksort as the default sorting algorithm in most programming libraries. Some sorting algorithms are adaptive, i.e., they have a complexity analysis which is better for inputs which are nearly sorted, according to some specified measure of presortedness. Quicksort is not among these, as it uses Ω(n log n) comparisons even when the input is already sorted. However, in this paper we demonstrate empirically that the actual running time of Quicksort is adaptive with respect to the presortedness measure Inv. Differences close to a factor of two are observed between instances with low and high Inv values. We then show that for the randomized version of Quicksort, the number of element swaps performed is provably adaptive with respect to the measure Inv. More precisely, we prove that randomized Quicksort performs expected O(n(1 + log(1 + Inv/n))) element swaps, where Inv denotes the number of inversions in the input sequence. This result provides a theoretical explanation for the observed behavior, and gives new insights into the behavior of the Quicksort algorithm. We also give some empirical results on the adaptive behavior of Heapsort and Mergesort.
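The swap-count adaptivity is easy to observe with a small experiment. The sketch below is an ordinary randomized Quicksort with a Hoare-style partition (not the paper's code) instrumented to count exchanges: on an already sorted input of distinct keys (Inv = 0) the partition never crosses an out-of-place pair, so no swaps occur at all, while shuffled inputs incur many.

```python
import random

def quicksort_swaps(a):
    """Randomized in-place Quicksort (Hoare partition); returns the
    number of element swaps performed while sorting a."""
    swaps = 0

    def sort(lo, hi):
        nonlocal swaps
        if lo >= hi:
            return
        pivot = a[random.randint(lo, hi)]
        i, j = lo, hi
        while i <= j:
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                if i < j:                 # only genuine exchanges are counted
                    a[i], a[j] = a[j], a[i]
                    swaps += 1
                i += 1
                j -= 1
        sort(lo, j)
        sort(i, hi)

    sort(0, len(a) - 1)
    return swaps
```

Comparing the returned counts on inputs with increasing numbers of inversions reproduces, in miniature, the O(n(1 + log(1 + Inv/n))) behavior the paper proves.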
Three New Sorting Algorithms Based on a Distribute-Collect Paradigm
 Deakin University, School of Computing & Mathematics, Technical Report TR C93/18 (Computing Series), 1993
Abstract
Cited by 2 (2 self)
Three new sorting algorithms, called StackSort, DeqSort and MinMaxSort, are described. They are of interest for the following reasons: they are adaptive sorting algorithms; they are comparison-based general sorting algorithms; they do not put any restriction on the type of keys; they use linked lists, so the moves or exchanges of data required in algorithms using arrays are unnecessary and the desired order is obtained by adjustment of the pointers; they provide examples of interesting applications of data structures such as stacks, queues, and singly and doubly-linked lists. A new and improved variation of the well-known Natural MergeSort, called here SublistMergeSort, is also presented. These algorithms are compared with one another and with other well-known sorting algorithms such as InsertionSort and HeapSort in terms of the times required to sort various input lists. Input lists of various sizes and degrees of 'presortedness' were used in the comparison tests. It has been demonstrated that the performance of DeqSort and MinMaxSort is better than InsertionSort and comparable to HeapSort, and that StackSort's performance is better than InsertionSort in most cases.
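The report's exact algorithms are not reproduced in this listing; the hypothetical sketch below illustrates only the general distribute-collect idea using stacks: distribute each key onto a stack whose top dominates it, then collect by repeatedly popping the smallest exposed top. Inputs rich in descending order need very few stacks, which is one way such a scheme becomes adaptive.

```python
def distribute_collect_sort(keys):
    """Distribute keys onto stacks, each kept non-increasing from bottom
    to top, then collect by popping the smallest exposed top."""
    stacks = []
    for x in keys:
        for s in stacks:
            if s[-1] >= x:            # this stack can take x without breaking order
                s.append(x)
                break
        else:
            stacks.append([x])        # no stack fits: open a new one
    # each top is its stack's minimum, so the smallest top is the global minimum
    out = []
    while any(stacks):
        s = min((s for s in stacks if s), key=lambda s: s[-1])
        out.append(s.pop())
    return out
```

Like the paper's algorithms, this moves no data once distributed; the order emerges purely from where keys were linked.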
Circular Sort: A New Improved Version of Insertion Sort Algorithm
Abstract
Cited by 1 (1 self)
A new sorting algorithm, called here Circular Sort, is described. It is an improved version of the conventional Insertion Sort. It is of interest for the following reasons: it retains all the favourable features of Insertion Sort and at the same time removes the main unfavourable feature of Insertion Sort; it is an adaptive sorting algorithm; it is a comparison based general sorting algorithm; it does not put any restriction on the type of keys; it provides an example of an interesting application of a circular list. The Circular Sort algorithm is compared with well known algorithms such as Insertion Sort, Heap Sort; and also with some newer algorithms such as DeqSort, MinMaxSort and SublistMergeSort in terms of times required to sort various input lists. Input lists of various sizes and degrees of 'presortedness' were used in the comparison tests. It has been demonstrated that the performance of Circular Sort is better than Insertion Sort and comparable to Heap Sort, DeqSort, MinMaxSo...
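The paper's Circular Sort itself is not shown here; the sketch below is a hypothetical reading of the idea the abstract describes: keys live in a circular doubly-linked list, so an insertion adjusts pointers instead of shifting array elements, and the scan starts from whichever end of the circle (minimum or maximum) is nearer to the new key. Numeric keys are assumed for the nearness test; all names are mine.

```python
class _Node:
    __slots__ = ("key", "prev", "next")
    def __init__(self, key):
        self.key = key
        self.prev = self.next = self      # a fresh node is its own circle

def _insert_before(node, cur):
    node.prev, node.next = cur.prev, cur
    cur.prev.next = node
    cur.prev = node

def circular_sort(keys):
    """Sort by linking each key into a circular doubly-linked list."""
    head = None                           # node holding the current minimum
    for k in keys:
        node = _Node(k)
        if head is None:
            head = node
        elif k <= head.key:               # new minimum goes before head
            _insert_before(node, head)
            head = node
        elif k >= head.prev.key:          # new maximum: before head == after max
            _insert_before(node, head)
        elif k - head.key <= head.prev.key - k:   # nearer the minimum: scan forward
            cur = head.next
            while cur.key < k:
                cur = cur.next
            _insert_before(node, cur)
        else:                             # nearer the maximum: scan backward
            cur = head.prev
            while cur.key > k:
                cur = cur.prev
            _insert_before(node, cur.next)
    out, cur = [], head
    for _ in range(len(keys)):            # one walk of the circle, from the minimum
        out.append(cur.key)
        cur = cur.next
    return out
```

The first two branches handle the cases insertion sort is worst at, and the two scans halve the expected walk, which is roughly the advantage the abstract claims over plain Insertion Sort.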
B.Tech. Project Report Part II
, 2002
Abstract
In this paper we present two schemes to reduce the disorder of given elements and thus improve the performance of adaptive merge sorting. Adaptive sorting algorithms utilize the presortedness present in a given sequence. In the first scheme, the amount of presortedness present in a sequence is probabilistically increased by using a swapping technique that requires little computation. In the second scheme, alternate ascending and descending sequences present in the input are merged to decrease the disorder. In both cases the analysis depends on a beautiful result about the average behaviour of permutations which is stated and proved in the paper.
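The second scheme, merging the pre-existing ascending and descending sequences, can be illustrated roughly as follows. The function names are mine, and heapq.merge performs a single k-way merge here for brevity, whereas an adaptive mergesort would typically merge runs pairwise in rounds.

```python
from heapq import merge

def ascending_runs(seq):
    """Split seq into maximal ascending and descending runs, reversing
    the descending ones so every yielded run is ascending."""
    run, ascending = [seq[0]], None
    for x in seq[1:]:
        if ascending is None:             # run direction fixed by its 2nd element
            ascending = x >= run[-1]
        if (x >= run[-1]) == ascending:
            run.append(x)
        else:
            yield run if ascending else run[::-1]
            run, ascending = [x], None
    yield run if ascending in (None, True) else run[::-1]

def run_merge_sort(seq):
    """Sort by merging the natural runs of seq; the fewer the runs
    (i.e. the less the disorder), the cheaper the merge."""
    return list(merge(*ascending_runs(seq))) if seq else []
```

An input that alternates a few long ascending and descending stretches collapses into a handful of runs, so almost all of its presortedness is exploited.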
Check Sort: A New Improved 'Intelligent' Version of Circular Sort Algorithm
Abstract
A new sorting algorithm, called here Check Sort, is described. It is a new improved, intelligent version of the Circular Sort algorithm. It is of interest for the following reasons: it retains all the favourable features of Circular Sort and at the same time removes its main unfavourable feature, namely use of extra space, by carrying out the sorting "in situ"; it is an intelligent sorting algorithm which first finds whether the input data is in roughly increasing, decreasing or random order and then applies a suitable sorting strategy; it is a comparison based general sorting algorithm; it does not put any restriction on the type of keys; it provides an example of an interesting application of a circular list. The Check Sort algorithm is compared with well known algorithms such as Insertion Sort, Heap Sort, Quick Sort; and also with some newer algorithms such as DeqSort, MinMaxSort, SublistMergeSort and Circular Sort in terms of times required to sort various input lists. Input lists ...
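Check Sort itself is not reproduced here; the hypothetical sketch below shows only the detect-then-dispatch idea from the abstract, with made-up thresholds and without the in-situ circular-list machinery: probe whether the input is roughly increasing, roughly decreasing, or random, then pick a strategy suited to that order.

```python
def insertion_sort(a):
    """Plain insertion sort; fast when the input is nearly in order."""
    a = list(a)
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def check_and_sort(keys):
    """Probe the rough order of the input, then dispatch a strategy."""
    n = len(keys)
    if n < 2:
        return list(keys)
    asc = sum(keys[i] <= keys[i + 1] for i in range(n - 1)) / (n - 1)
    if asc > 0.9:                         # roughly increasing
        return insertion_sort(keys)
    if asc < 0.1:                         # roughly decreasing: reverse first
        return insertion_sort(list(reversed(keys)))
    return sorted(keys)                   # random order: general-purpose sort
```

The single O(n) checking pass is cheap relative to the sort, so the dispatch costs little even when it concludes the input is random.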