Results 1–7 of 7
Quickselect and Dickman function
Combinatorics, Probability and Computing, 2000
Abstract

Cited by 29 (1 self)
We show that the limiting distribution of the number of comparisons used by Hoare's quickselect algorithm, when given a random permutation of n elements for finding the mth smallest element, where m = o(n), is the Dickman function. The limiting distribution of the number of exchanges is also derived.

1 Quickselect

Quickselect is one of the simplest and, in practice, most efficient algorithms for finding specified order statistics in a given sequence. It was invented by Hoare [19] and uses the usual partitioning procedure of quicksort: first choose a partitioning key, say x; regroup the given sequence into two parts containing the elements whose values are less than and larger than x, respectively; then decide, according to the size of the smaller subgroup, in which part to continue recursively, or stop if x is the desired order statistic; see Figure 1 for an illustration in terms of binary search trees. For more details, see Guibas [15] and Mahmoud [26]. This algorithm, although inefficient in the worst case, has linear mean cost when given a sequence of n independent and identically distributed continuous random variables, or equivalently, when given a random permutation of n elements, where, here and throughout this paper, all n! permutations are equally likely. Let C_{n,m} denote the number of comparisons used by quickselect for finding the mth smallest element in a random permutation, where the first partitioning stage uses n − 1 comparisons. Knuth [23] was the first to show, by a differencing argument, that E(C_{n,m}) = 2(n + 3 + (n + 1)H_n − (m + 2)H_m − (n + 3 − m)H_{n+1−m}), where H_m = Σ_{1≤k≤m} 1/k. A more transparent asymptotic approximation is E(C_{n,m}) ∼ 2n(1 − α ln α − (1 − α) ln(1 − α)) when m ∼ αn. * Part of the work of this author was done while he was visiting School of C...
On the probabilistic worst-case time of "FIND"
Algorithmica, 2001
Abstract

Cited by 17 (0 self)
We analyze the worst-case number of comparisons T_n of Hoare's selection algorithm FIND when the input is a random permutation, and the worst case is measured with respect to the rank k. We give a new short proof that T_n/n tends to a limit distribution, and provide new bounds for the limiting distribution.
Average Case Analysis Of Priority Trees: A Structure For Priority Queue Administration
Abstract

Cited by 1 (0 self)
Priority trees (p-trees) are a certain variety of binary trees of size n constructed from permutations of the numbers 1, …, n. In this paper we analyse several parameters depending on n (the size) and j (a number between 1 and n), such as the length of the left path (connecting the root and the leftmost leaf), the height of node j (i.e., its distance from the root), the number of left edges on the path from the root to node j, the number of descendants of node j, the number of key comparisons when inserting an element between j and j + 1, the number of key comparisons when cutting a p-tree into two p-trees, and the number of nodes with 0, 1 or 2 children. Methodologically, recursions are set up according to a fundamental decomposition of the family A of p-trees (using auxiliary quantities B and C); using generating functions, these lead to systems of differential equations that can be solved explicitly with some effort. The quantities of interest can then be identified as coefficie...
A Gaussian limit process for optimal FIND algorithms
, 2013
Abstract

Cited by 1 (1 self)
We consider versions of the FIND algorithm where the pivot element used is the median of a subset chosen uniformly at random from the data. For the median selection we assume that subsamples of size asymptotic to c · n^α are chosen, where 0 < α ≤ 1/2, c > 0 and n is the size of the data set to be split. We consider the complexity of FIND as a process in the rank to be selected, measured by the number of key comparisons required. After normalization we show weak convergence of the complexity to a centered Gaussian process as n → ∞, which depends on α. The proof relies on a contraction argument for probability distributions on càdlàg functions. We also identify the covariance function of the Gaussian limit process and discuss path and tail properties. AMS 2010 subject classifications. Primary 60F17, 68P10; secondary 60G15, 60C05, 68Q25. Key words. FIND algorithm, Quickselect, complexity, key comparisons, functional limit theorem,
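The sampled-median pivot rule described above can be sketched as follows (a minimal Python sketch, assuming distinct keys; the defaults c = 1 and α = 1/2, and the function name, are illustrative choices, not values from the paper):

```python
import random

def find_sampled_median(a, m, c=1.0, alpha=0.5):
    """Return the m-th smallest element (1-indexed) of the list a.
    The pivot is the median of a uniform random subsample of size
    about c * n**alpha, as in the FIND variant described above."""
    n = len(a)
    if n == 1:
        return a[0]
    s = max(1, min(n, round(c * n ** alpha)))
    if s % 2 == 0:
        s -= 1                      # odd sample size -> unique median
    pivot = sorted(random.sample(a, s))[s // 2]
    smaller = [x for x in a if x < pivot]
    larger = [x for x in a if x > pivot]
    j = len(smaller) + 1            # rank of the pivot within a
    if j == m:
        return pivot
    if m < j:
        return find_sampled_median(smaller, m, c, alpha)
    return find_sampled_median(larger, m - j, c, alpha)
```

The returned value is correct for any random choices; only the number of key comparisons, the quantity analysed in the paper, depends on the sampling.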
The Number of Symbol Comparisons in
Abstract
Abstract. We revisit the classical QuickSort and QuickSelect algorithms, under a complexity model that fully takes into account the elementary comparisons between symbols composing the records to be processed. Our probabilistic models belong to a broad category of information sources that encompasses memoryless (i.e., independent-symbols) and Markov sources, as well as many unbounded-correlation sources. We establish that, under our conditions, the average-case complexity of QuickSort is O(n log² n) [rather than O(n log n), classically], whereas that of QuickSelect remains O(n). Explicit expressions for the implied constants are provided by our combinatorial–analytic methods.
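In the symbol-comparison model sketched above, a single key comparison between two distinct words is charged with the number of symbols examined rather than unit cost; a minimal illustration (the accounting, longest common prefix plus one, is a standard convention, and the function names are mine, not the paper's):

```python
def lcp(u, v):
    """Length of the longest common prefix of two words."""
    k = 0
    for a, b in zip(u, v):
        if a != b:
            break
        k += 1
    return k

def symbol_cost(u, v):
    """Symbol comparisons needed to decide the order of two distinct
    words: one position past their longest common prefix (word ends
    are treated as a sentinel symbol)."""
    return lcp(u, v) + 1
```

The total symbol cost of a sorting or selection run is then the sum of symbol_cost over all key comparisons the algorithm performs.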
OPTIMAL SAMPLING STRATEGIES IN QUICKSORT AND QUICKSELECT ∗
Abstract
Abstract. It is well known that the performance of quicksort can be improved by selecting the median of a sample of elements as the pivot of each partitioning stage. For large samples the partitions are better, but the number of additional comparisons and exchanges needed to find the median of the sample also increases. We show in this paper that the optimal sample size to minimize the average total cost of quicksort, as a function of the size n of the current subarray, is a·√n + o(√n). We give a closed expression for a, which depends on the selection algorithm and the costs of elementary comparisons and exchanges. Moreover, we show that selecting the medians of the samples as pivots is not the best strategy when exchanges are much more expensive than comparisons. We also apply the same ideas and techniques to the analysis of quickselect and obtain similar results.
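The sampling strategy analysed above can be sketched as follows (a minimal Python sketch; const stands in for the paper's constant a, and its default is an illustrative choice, not the optimal value derived there):

```python
import math
import random

def quicksort_sampled(a, const=1.0):
    """Quicksort with the median of a random sample of size about
    const * sqrt(n) as the pivot of each partitioning stage."""
    n = len(a)
    if n <= 1:
        return list(a)
    s = max(1, min(n, round(const * math.sqrt(n))))
    if s % 2 == 0:
        s -= 1                      # odd sample size -> unique median
    pivot = sorted(random.sample(a, s))[s // 2]
    smaller = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    larger = [x for x in a if x > pivot]
    return (quicksort_sampled(smaller, const) + equal
            + quicksort_sampled(larger, const))
```

Growing the sample with √n keeps the per-stage median-finding overhead of lower order than the partitioning cost, which is what makes this sample size optimal in the paper's cost model.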