Results 1–10 of 38
Time bounds for selection
JCSS, 1973
"... The number of comparisons required to select the ith smallest of n numbers is shown to be at most a linear function of n by analysis of a new selection algorithmPICK. Specifically, no more than 5.4305 n comparisons are ever required. This bound is improved for extreme values of i, and a new lower ..."
Cited by 377 (6 self)

The number of comparisons required to select the ith smallest of n numbers is shown to be at most a linear function of n by analysis of a new selection algorithm, PICK. Specifically, no more than 5.4305n comparisons are ever required. This bound is improved for extreme values of i, and a new lower bound on the requisite number of comparisons is also proved.
Optimal Sampling Strategies in Quicksort and Quickselect
Proc. of the 25th International Colloquium (ICALP '98), Volume 1443 of LNCS, 1998
"... It is well known that the performance of quicksort can be substantially improved by selecting the median of a sample of three elements as the pivot of each partitioning stage. This variant is easily generalized to samples of size s = 2k + 1. For large samples the partitions are better as the median ..."
Cited by 29 (4 self)

It is well known that the performance of quicksort can be substantially improved by selecting the median of a sample of three elements as the pivot of each partitioning stage. This variant is easily generalized to samples of size s = 2k + 1. For large samples the partitions are better, as the median of the sample gives a more accurate estimate of the median of the array to be sorted, but the number of additional comparisons and exchanges needed to find the median of the sample also increases. We show that the optimal sample size to minimize the average total cost of quicksort (which includes both comparisons and exchanges) is s = a·√n + o(√n). We also give a closed expression for the constant factor a, which depends on the median-finding algorithm and the costs of elementary comparisons and exchanges. The result above holds in most situations, unless the cost of an exchange exceeds by far the cost of a comparison. In that particular case, it is better to select not the median of...
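A minimal sketch of the sampling variant described above, assuming a list-based out-of-place partition and a hypothetical parameter `s` for the sample size (this is not the paper's implementation, which analyzes in-place partitioning costs):

```python
import random

def quicksort(a, s=3):
    """Quicksort with the pivot chosen as the median of a random
    sample of s = 2k+1 elements; s=3 is the classical
    median-of-three variant."""
    if len(a) <= s:
        return sorted(a)
    sample = random.sample(a, s)      # sample s positions from a
    pivot = sorted(sample)[s // 2]    # median of the sample
    lo = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    hi = [x for x in a if x > pivot]
    return quicksort(lo, s) + eq + quicksort(hi, s)
```

Larger `s` buys more balanced partitions at the price of more work per stage, which is exactly the trade-off the s = a·√n result quantifies.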
Analysis of Hoare's Find Algorithm with Median-of-Three Partition
Random Structures & Algorithms, 1997
"... ABSTRACT: Hoare’s FIND algorithm can be used to select the jth element out of a file of n elements. It bears a remarkable similarity to Quicksort; in each pass of the algorithm, a pivot element is used to split the file into two subfiles, and recursively the algorithm proceeds with the subfile that ..."
Cited by 24 (2 self)

Hoare’s FIND algorithm can be used to select the jth element out of a file of n elements. It bears a remarkable similarity to Quicksort; in each pass of the algorithm, a pivot element is used to split the file into two subfiles, and recursively the algorithm proceeds with the subfile that contains the sought element. As in Quicksort, different strategies for selecting the pivot are reasonable. In this paper, we consider the Median-of-three version, where the pivot element is chosen as the median of a random sample of three elements. Establishing some hypergeometric differential equations, we find explicit formulae for both the average number of passes and comparisons. We compare these results with the corresponding ones for the basic partition strategy. © 1997 John Wiley & Sons, Inc. Random Struct. Alg., 10, 143–156 (1997).
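A sketch of the Median-of-three variant of FIND, counting passes as in the analysis above; the list-based partition and the `(value, passes)` return shape are illustrative assumptions, not the paper's setup:

```python
import random

def find(a, j):
    """Hoare's FIND with a median-of-three pivot: select the j-th
    smallest (0-indexed) element, recursing only into the subfile
    that contains it. Returns (value, passes), where passes counts
    partitioning stages."""
    passes = 0
    while True:
        if len(a) <= 3:
            return sorted(a)[j], passes
        pivot = sorted(random.sample(a, 3))[1]  # median of a sample of 3
        passes += 1
        lo = [x for x in a if x < pivot]
        eq = [x for x in a if x == pivot]
        if j < len(lo):
            a = lo                              # sought element is below the pivot
        elif j < len(lo) + len(eq):
            return pivot, passes                # the pivot itself has rank j
        else:
            j -= len(lo) + len(eq)
            a = [x for x in a if x > pivot]     # sought element is above the pivot
```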
On the probabilistic worst-case time of "FIND"
Algorithmica, 2001
"... We analyze the worstcase number of comparisons Tn of Hoare’s selection algorithm find when the input is a random permutation, and worst case is measured with respect to the rank k. We give a new short proof that Tn/n tends to a limit distribution, and provide new bounds for the limiting distributi ..."
Cited by 15 (0 self)

We analyze the worst-case number of comparisons Tn of Hoare’s selection algorithm FIND when the input is a random permutation and the worst case is measured with respect to the rank k. We give a new short proof that Tn/n tends to a limit distribution, and provide new bounds for the limiting distribution.
An improved master theorem for divide-and-conquer recurrences
In Automata, Languages and Programming, 1997
"... Abstract. This paper presents new theorems to analyze divideandconquer recurrences, which improve other similar ones in several aspects. In particular, these theorems provide more information, free us almost completely from technicalities like floors and ceilings, and cover a wider set of toll fun ..."
Cited by 12 (2 self)

This paper presents new theorems to analyze divide-and-conquer recurrences, which improve on similar ones in several aspects. In particular, these theorems provide more information, free us almost completely from technicalities like floors and ceilings, and cover a wider set of toll functions and weight distributions, stochastic recurrences included.
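As a toy instance of the kind of recurrence such theorems handle, the classic T(n) = 2T(⌊n/2⌋) + n can be checked numerically against its Θ(n log n) solution (illustrative only; this is not one of the paper's theorems):

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def T(n):
    """Divide-and-conquer recurrence with toll function t(n) = n and
    two half-size subproblems: T(n) = 2*T(n//2) + n, T(1) = 0.
    Master-theorem prediction: T(n) = Theta(n log n)."""
    if n <= 1:
        return 0
    return 2 * T(n // 2) + n

# For n = 2^k the recurrence solves exactly to T(n) = k * 2^k,
# so T(n) / (n * log2(n)) is exactly 1 at powers of two.
for k in (10, 15, 20):
    n = 2 ** k
    print(n, T(n) / (n * math.log2(n)))
```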
Distributional convergence for the number of symbol comparisons used by QuickSort
2012
"... Most previous studies of the sorting algorithm QuickSort have used the number of key comparisons as a measure of the cost of executing the algorithm. Here we suppose that the n independent and identically distributed (iid) keys are each represented as a sequence of symbols from a probabilistic sourc ..."
Cited by 10 (3 self)

Most previous studies of the sorting algorithm QuickSort have used the number of key comparisons as a measure of the cost of executing the algorithm. Here we suppose that the n independent and identically distributed (iid) keys are each represented as a sequence of symbols from a probabilistic source and that QuickSort operates on individual symbols, and we measure the execution cost as the number of symbol comparisons. Assuming only a mild “tameness” condition on the source, we show that there is a limiting distribution for the number of symbol comparisons after normalization: first centering by the mean and then dividing by n. Additionally, under a condition that grows more restrictive as p increases, we have convergence of moments of orders p and smaller. In particular, we have convergence in distribution and convergence of moments of every order whenever the source is memoryless, i.e., whenever each key is generated as an infinite string of iid symbols. This is somewhat surprising: even for the classical model that each key is an iid string of unbiased (“fair”) bits, the mean exhibits periodic fluctuations of order n.
Analysis of the expected number of bit comparisons required by Quickselect
 Algorithmica
"... When algorithms for sorting and searching are applied to keys that are represented as bit strings, we can quantify the performance of the algorithms not only in terms of the number of key comparisons required by the algorithms but also in terms of the number of bit comparisons. Some of the standard ..."
Cited by 9 (3 self)

When algorithms for sorting and searching are applied to keys that are represented as bit strings, we can quantify the performance of the algorithms not only in terms of the number of key comparisons required by the algorithms but also in terms of the number of bit comparisons. Some of the standard sorting and searching algorithms have been analyzed with respect to key comparisons but not with respect to bit comparisons. In this paper, we investigate the expected number of bit comparisons required by Quickselect (also known as Find). We develop exact and asymptotic formulae for the expected number of bit comparisons required to find the smallest or largest key by Quickselect and show that the expectation is asymptotically linear with respect to the number of keys. Similar results are obtained for the average case. For finding keys of arbitrary rank, we derive an exact formula for the expected number of bit comparisons that (using rational arithmetic) requires only finite summation (rather than such operations as numerical integration) and use it to compute the expectation for each target rank. AMS 2000 subject classifications: primary 68W40; secondary 68P10, 60C05. Key words and phrases: Quickselect, Find, searching algorithms, asymptotics, average-case analysis, key comparisons, bit comparisons.
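A sketch of this cost model, assuming distinct fixed-width keys given as bit strings; the helper `bit_compare` and the one-element `counter` list are illustrative devices, not the paper's notation:

```python
def bit_compare(x, y, counter):
    """Compare two keys given as bit strings, charging one unit per
    individual bit (symbol) comparison. Returns -1, 0, or 1."""
    for i in range(min(len(x), len(y))):
        counter[0] += 1
        if x[i] != y[i]:
            return -1 if x[i] < y[i] else 1
    return (len(x) > len(y)) - (len(x) < len(y))

def quickselect(keys, j, counter):
    """Quickselect driven by bit_compare on distinct keys, so that
    counter[0] accumulates the total number of bit comparisons."""
    pivot = keys[0]
    lo, hi = [], []
    for k in keys[1:]:
        (lo if bit_compare(k, pivot, counter) < 0 else hi).append(k)
    if j < len(lo):
        return quickselect(lo, j, counter)
    if j == len(lo):
        return pivot
    return quickselect(hi, j - len(lo) - 1, counter)
```

For fixed-width binary encodings, lexicographic order on the bit strings coincides with numeric order, so the selection result matches ordinary Quickselect while the counter records the finer-grained cost.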
Exponential bounds for the running time of a selection algorithm
Journal of Computer and System Sciences, 1984
"... Hoare’s selection algorithm for finding the &hlargest element in a set of n elements is shown to use C comparisons where (i) E(P) < A,n ” for some constant A,> 0 and all p> 1; (ii) P(C/n) u) < (i)“(‘+“(‘) ’ asum. Exact values for the “A p ” and “o ( 1) ” terms are given. 1. ..."
Cited by 9 (1 self)

Hoare’s selection algorithm for finding the kth-largest element in a set of n elements is shown to use C comparisons where (i) E(C^p) ≤ A_p n^p for some constant A_p > 0 and all p ≥ 1; (ii) P(C/n ≥ u) ≤ (1/2)^{u(1+o(1))} as u → ∞. Exact values for the constants A_p and the o(1) term are given.
On the Number of Descendants and Ascendants in Random Search Trees
1997
"... We consider here the probabilistic analysis of the number of descendants and the number of ascendants of a given internal node in a random search tree. The performance of several important algorithms on search trees is closely related to these quantities. For instance, the cost of a successful searc ..."
Cited by 7 (2 self)

We consider here the probabilistic analysis of the number of descendants and the number of ascendants of a given internal node in a random search tree. The performance of several important algorithms on search trees is closely related to these quantities. For instance, the cost of a successful search is proportional to the number of ascendants of the sought element. On the other hand, the probabilistic behavior of the number of descendants is relevant for the analysis of paged data structures and for the analysis of the performance of quicksort, when recursive calls are not made on small subfiles. We also consider the number of ascendants and descendants of a random node in a random search tree, i.e., the grand averages of the quantities mentioned above. We address these questions for standard binary search trees and for locally balanced search trees. These search trees were introduced by Poblete and Munro and are binary search trees such that each subtree of size 3 is balanced; in oth...
Perfect Simulation of Vervaat Perpetuities
"... Abstract. We use coupling into and from the past to sample perfectly in a simple and provably fast fashion from the Vervaat family of perpetuities. The family includes the Dickman distribution, which arises both in number theory and in the analysis of the Quickselect algorithm, which was the motivat ..."
Cited by 6 (0 self)

We use coupling into and from the past to sample perfectly, in a simple and provably fast fashion, from the Vervaat family of perpetuities. The family includes the Dickman distribution, which arises both in number theory and in the analysis of the Quickselect algorithm, which was the motivation for our work.