Results 1–10 of 43
Time bounds for selection
 JCSS
, 1973
Cited by 470 (6 self)
The number of comparisons required to select the ith smallest of n numbers is shown to be at most a linear function of n by analysis of a new selection algorithm, PICK. Specifically, no more than 5.4305n comparisons are ever required. This bound is improved for extreme values of i, and a new lower bound on the requisite number of comparisons is also proved.
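PICK itself is more intricate (that is how it reaches the 5.4305n bound), but the core idea of guaranteeing a good pivot can be sketched with the now-standard median-of-medians scheme. The following Python sketch is illustrative, not the paper's exact algorithm:

```python
def select(xs, i):
    """Return the i-th smallest element (0-indexed) of xs in worst-case O(n) time."""
    xs = list(xs)
    if len(xs) <= 5:
        return sorted(xs)[i]
    # Take the median of each group of 5, then recursively take the median
    # of those medians: the resulting pivot is guaranteed to discard a
    # constant fraction of the input, which keeps the recursion linear.
    medians = [sorted(xs[j:j + 5])[len(xs[j:j + 5]) // 2]
               for j in range(0, len(xs), 5)]
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in xs if x < pivot]
    n_eq = xs.count(pivot)
    if i < len(lo):
        return select(lo, i)
    if i < len(lo) + n_eq:
        return pivot
    return select([x for x in xs if x > pivot], i - len(lo) - n_eq)
```

The grouping into fives is what bounds the comparison count by a linear function of n; the paper's refinements tighten the constant.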
Fast computation of database operations using graphics processors
 Proc. of ACM SIGMOD
, 2004
Cited by 113 (15 self)
We present new algorithms for performing fast computation of several common database operations on commodity graphics processors. Specifically, we consider operations such as conjunctive selections, aggregations, and semilinear queries, which are essential computational components of typical database, data warehousing, and data mining applications. While graphics processing units (GPUs) have been designed for fast display of geometric primitives, we utilize the inherent pipelining and parallelism, single-instruction, multiple-data (SIMD) capabilities, and vector processing functionality of GPUs for evaluating boolean predicate combinations and semilinear queries on attributes and executing database operations efficiently. Our algorithms take into account some of the limitations of the programming model of current GPUs and perform no data rearrangements. Our algorithms have been implemented on a programmable GPU (e.g., NVIDIA’s GeForce FX 5900) and applied to databases consisting of up to a million records. We have compared their performance with an optimized implementation of CPU-based algorithms. Our experiments indicate that the graphics processor available on commodity computer systems is an effective coprocessor for performing database operations.
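As a rough CPU-side analogue of the paper's approach (hypothetical toy table; the GPU specifics are omitted), a conjunctive selection can be evaluated predicate by predicate into boolean bitmaps that are then combined, so no records are rearranged:

```python
# Hypothetical toy table stored column-wise (as attribute arrays), the
# layout the GPU algorithms operate on.
price = [10.0, 55.5, 30.0, 80.0, 42.0]
qty   = [3,    7,    1,    9,    5]

# Conjunctive selection "price > 25 AND qty >= 5": each predicate is
# evaluated into a boolean bitmap, and the bitmaps are combined.
# No records are moved; the bitmap alone identifies the result.
bm_price = [p > 25 for p in price]
bm_qty   = [q >= 5 for q in qty]
bitmap   = [a and b for a, b in zip(bm_price, bm_qty)]

selected = [i for i, bit in enumerate(bitmap) if bit]
print(selected)  # → [1, 3, 4]
```

On a GPU the per-element predicate tests run in parallel across fragments; the control flow above is the sequential equivalent.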
Optimal Sampling Strategies in Quicksort and Quickselect
 Proc. of the 25th International Colloquium (ICALP'98), volume 1443 of LNCS
, 1998
Cited by 35 (5 self)
It is well known that the performance of quicksort can be substantially improved by selecting the median of a sample of three elements as the pivot of each partitioning stage. This variant is easily generalized to samples of size s = 2k + 1. For large samples the partitions are better, as the median of the sample gives a more accurate estimate of the median of the array to be sorted, but the number of additional comparisons and exchanges to find the median of the sample also increases. We show that the optimal sample size to minimize the average total cost of quicksort (which includes both comparisons and exchanges) is s = a·√n + o(√n). We also give a closed expression for the constant factor a, which depends on the median-finding algorithm and the costs of elementary comparisons and exchanges. The result above holds in most situations, unless the cost of an exchange exceeds by far the cost of a comparison. In that particular case, it is better to select not the median of...
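The strategy being analyzed can be sketched as follows; the paper's optimal choice lets the sample size grow with √n, while this illustrative Python version fixes s = 3:

```python
import random

def quicksort(xs, s=3):
    """Quicksort using the median of a random sample of s elements as pivot."""
    if len(xs) <= s:
        return sorted(xs)
    # Median of a sample of s = 2k + 1 elements; larger samples give
    # better-balanced partitions at the cost of more median-finding work.
    pivot = sorted(random.sample(xs, s))[s // 2]
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    return quicksort(lo, s) + eq + quicksort(hi, s)
```

The trade-off the paper quantifies is visible here: sorting the sample costs extra comparisons, but a more central pivot shrinks the recursive subproblems.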
Analysis of Hoare's Find Algorithm with Median-of-three Partition
 Random Structures & Algorithms
, 1997
Cited by 25 (2 self)
Hoare’s FIND algorithm can be used to select the jth element out of a file of n elements. It bears a remarkable similarity to Quicksort; in each pass of the algorithm, a pivot element is used to split the file into two subfiles, and recursively the algorithm proceeds with the subfile that contains the sought element. As in Quicksort, different strategies for selecting the pivot are reasonable. In this paper, we consider the Median-of-three version, where the pivot element is chosen as the median of a random sample of three elements. Establishing some hypergeometric differential equations, we find explicit formulae for both the average number of passes and comparisons. We compare these results with the corresponding ones for the basic partition strategy. © 1997 John Wiley & Sons, Inc. Random Struct. Alg., 10, 143–156 (1997).
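The procedure under analysis, FIND with a median-of-three pivot, can be sketched in Python; the pass counter mirrors the quantity whose average the paper derives (an illustrative sketch, not the paper's exact formulation):

```python
import random

def find(xs, j):
    """Hoare's FIND with a median-of-three pivot; returns (j-th smallest, passes)."""
    xs, passes = list(xs), 0
    while len(xs) > 3:
        passes += 1  # one partitioning pass over the current subfile
        pivot = sorted(random.sample(xs, 3))[1]  # median of a random sample of three
        lo = [x for x in xs if x < pivot]
        n_eq = xs.count(pivot)
        if j < len(lo):
            xs = lo                            # sought element is in the left subfile
        elif j < len(lo) + n_eq:
            return pivot, passes               # pivot is the sought element
        else:
            j -= len(lo) + n_eq
            xs = [x for x in xs if x > pivot]  # right subfile
    return sorted(xs)[j], passes
```

Unlike Quicksort, only one subfile is processed after each pass, which is why the analysis tracks passes rather than full recursion trees.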
On the probabilistic worst-case time of "FIND"
 Algorithmica
, 2001
Cited by 17 (0 self)
We analyze the worst-case number of comparisons T_n of Hoare’s selection algorithm FIND when the input is a random permutation, and the worst case is measured with respect to the rank k. We give a new short proof that T_n/n tends to a limit distribution, and provide new bounds for the limiting distribution.
An improved master theorem for divide-and-conquer recurrences
 In Automata, languages and programming
, 1997
Cited by 15 (2 self)
This paper presents new theorems to analyze divide-and-conquer recurrences, which improve on similar earlier theorems in several respects. In particular, these theorems provide more information, free us almost completely from technicalities like floors and ceilings, and cover a wider set of toll functions and weight distributions, stochastic recurrences included.
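As a concrete instance of the kind of recurrence such theorems cover, consider T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + n, whose solution is Θ(n log n); a small Python check (illustrative, not from the paper) confirms the growth rate numerically, floors and ceilings included:

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def T(n):
    """T(n) = T(floor(n/2)) + T(ceil(n/2)) + n, with toll function n."""
    if n <= 1:
        return 1
    return T(n // 2) + T(n - n // 2) + n

# Master-theorem-style prediction: T(n) = Theta(n log n), so the ratio
# below settles toward a constant as n grows.
for k in (10, 14, 18):
    n = 2 ** k
    print(n, T(n) / (n * math.log2(n)))
```

For powers of two the recurrence solves exactly to T(n) = n·log2(n) + n, so the printed ratios tend to 1.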
Distributional convergence for the number of symbol comparisons used by QuickSort
, 2012
Cited by 13 (4 self)
Most previous studies of the sorting algorithm QuickSort have used the number of key comparisons as a measure of the cost of executing the algorithm. Here we suppose that the n independent and identically distributed (iid) keys are each represented as a sequence of symbols from a probabilistic source and that QuickSort operates on individual symbols, and we measure the execution cost as the number of symbol comparisons. Assuming only a mild "tameness" condition on the source, we show that there is a limiting distribution for the number of symbol comparisons after normalization: first centering by the mean and then dividing by n. Additionally, under a condition that grows more restrictive as p increases, we have convergence of moments of orders p and smaller. In particular, we have convergence in distribution and convergence of moments of every order whenever the source is memoryless, i.e., whenever each key is generated as an infinite string of iid symbols. This is somewhat surprising: even for the classical model that each key is an iid string of unbiased ("fair") bits, the mean exhibits periodic fluctuations of order n.
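The cost model can be made concrete with a small sketch (illustrative Python, not the paper's analysis): keys are strings, and every comparison of two keys is charged one unit per symbol examined:

```python
def symbol_cmp(a, b, counter):
    """Three-way compare of string keys, charging one unit per symbol examined."""
    for x, y in zip(a, b):
        counter[0] += 1
        if x != y:
            return -1 if x < y else 1
    return (len(a) > len(b)) - (len(a) < len(b))  # shared prefix: shorter key first

def quicksort(keys, counter):
    """QuickSort whose cost is measured in symbol comparisons, not key comparisons."""
    if len(keys) <= 1:
        return keys
    pivot = keys[len(keys) // 2]
    lo = [k for k in keys if symbol_cmp(k, pivot, counter) < 0]
    eq = [k for k in keys if symbol_cmp(k, pivot, counter) == 0]
    hi = [k for k in keys if symbol_cmp(k, pivot, counter) > 0]
    return quicksort(lo, counter) + eq + quicksort(hi, counter)

cost = [0]
print(quicksort(["banana", "apple", "band", "ant"], cost), cost[0])
```

Keys with long shared prefixes (here "banana" and "band") make a single key comparison cost several symbol comparisons, which is exactly the effect the paper's source model captures.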
Perfect Simulation of Vervaat Perpetuities
, 908
Cited by 12 (0 self)
We use coupling into and from the past to sample perfectly, in a simple and provably fast fashion, from the Vervaat family of perpetuities. The family includes the Dickman distribution, which arises both in number theory and in the analysis of the Quickselect algorithm, which was the motivation for our work.
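For illustration only (this is a naive approximation, not the authors' exact coupling-based sampler): the Dickman member of the Vervaat family satisfies the distributional fixed point X =d U(1 + X) with U uniform on (0, 1), and simply iterating that recursion gives approximate draws:

```python
import random

def approx_dickman(iters=60):
    """Approximate one Dickman draw by iterating the fixed point X <- U * (1 + X)."""
    x = 0.0
    for _ in range(iters):
        x = random.random() * (1.0 + x)
    return x

random.seed(1)
samples = [approx_dickman() for _ in range(100_000)]
print(sum(samples) / len(samples))  # the Dickman distribution has mean 1
```

The point of the paper is precisely that coupling from the past removes the approximation error such forward iteration leaves behind, yielding exact samples.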
Analysis of the expected number of bit comparisons required by Quickselect
 Algorithmica
Cited by 11 (6 self)
When algorithms for sorting and searching are applied to keys that are represented as bit strings, we can quantify the performance of the algorithms not only in terms of the number of key comparisons required by the algorithms but also in terms of the number of bit comparisons. Some of the standard sorting and searching algorithms have been analyzed with respect to key comparisons but not with respect to bit comparisons. In this paper, we investigate the expected number of bit comparisons required by Quickselect (also known as Find). We develop exact and asymptotic formulae for the expected number of bit comparisons required to find the smallest or largest key by Quickselect and show that the expectation is asymptotically linear with respect to the number of keys. Similar results are obtained for the average case. For finding keys of arbitrary rank, we derive an exact formula for the expected number of bit comparisons that (using rational arithmetic) requires only finite summation (rather than such operations as numerical integration) and use it to compute the expectation for each target rank. AMS 2000 subject classifications. Primary 68W40; secondary 68P10, 60C05. Key words and phrases. Quickselect, Find, searching algorithms, asymptotics, average-case analysis, key comparisons, bit comparisons.
Exponential bounds for the running time of a selection algorithm
 Journal of Computer and System Sciences
, 1984
Cited by 10 (1 self)
Hoare’s selection algorithm for finding the kth-largest element in a set of n elements is shown to use C comparisons where (i) E(C^p) ≤ A_p n^p for some constant A_p > 0 and all p ≥ 1; (ii) P(C/n ≥ u) ≤ (3/4)^{u(1+o(1))} as u → ∞. Exact values for the "A_p" and "o(1)" terms are given.