Results 1–10 of 14
Quickselect and Dickman function
 Combinatorics, Probability and Computing
, 2000
Cited by 25 (1 self)
Abstract
We show that the limiting distribution of the number of comparisons used by Hoare's quickselect algorithm when given a random permutation of n elements for finding the mth smallest element, where m = o(n), is the Dickman function. The limiting distribution of the number of exchanges is also derived. 1 Quickselect. Quickselect is one of the simplest and most efficient algorithms in practice for finding specified order statistics in a given sequence. It was invented by Hoare [19] and uses the usual partitioning procedure of quicksort: first choose a partitioning key, say x; regroup the given sequence into two parts corresponding to elements whose values are less than and larger than x, respectively; then decide, according to the size of the smaller subgroup, which part to continue recursively, or stop if x is the desired order statistic; see Figure 1 for an illustration in terms of binary search trees. For more details, see Guibas [15] and Mahmoud [26]. This algorithm, although inefficient in the worst case, has linear mean when given a sequence of n independent and identically distributed continuous random variables, or equivalently, when given a random permutation of n elements, where, here and throughout this paper, all n! permutations are equally likely. Let C_{n,m} denote the number of comparisons used by quickselect for finding the mth smallest element in a random permutation, where the first partitioning stage uses n − 1 comparisons. Knuth [23] was the first to show, by a differencing argument, that E(C_{n,m}) = 2(n + 3 + (n + 1)H_n − (m + 2)H_m − (n + 3 − m)H_{n+1−m}), where H_m = Σ_{1≤k≤m} 1/k. A more transparent asymptotic approximation is ...
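The partitioning-and-recurse procedure described in this abstract can be sketched as follows; this is a minimal illustrative version using the first element as the partitioning key, not the authors' instrumented implementation:

```python
def quickselect(seq, m):
    """Return the m-th smallest element (1-indexed) of seq.

    Hoare-style selection: pick a partitioning key, split the remaining
    elements into those smaller and those not smaller, then recurse into
    whichever part contains the sought rank, or stop at the key itself.
    """
    pivot = seq[0]  # first element as the partitioning key
    smaller = [x for x in seq[1:] if x < pivot]
    larger = [x for x in seq[1:] if x >= pivot]
    if m <= len(smaller):
        return quickselect(smaller, m)
    if m == len(smaller) + 1:
        return pivot  # the key is the desired order statistic
    return quickselect(larger, m - len(smaller) - 1)
```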
Trickledown processes and their boundaries
, 2012
Cited by 14 (5 self)
Abstract
It is possible to represent each of a number of Markov chains as an evolving sequence of connected subsets of a directed acyclic graph that grow in the following way: initially, all vertices of the graph are unoccupied, particles are fed in one-by-one at a distinguished source vertex, successive particles proceed along directed edges according to an appropriate stochastic mechanism, and each particle comes to rest once it encounters an unoccupied vertex. Examples include the binary and digital search tree processes, the random recursive tree process and generalizations of it arising from nested instances of Pitman's two-parameter Chinese restaurant process, tree-growth models associated with Mallows' φ model of random permutations and with Schützenberger's noncommutative q-binomial theorem, and a construction due to Luczak and Winkler that grows uniform random binary trees in a Markovian manner. We introduce a framework that encompasses such Markov chains, and we characterize their asymptotic behavior by analyzing in detail their Doob–Martin compactifications, Poisson boundaries and tail σ-fields.
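As a concrete instance of the growth mechanism described above, the binary search tree process can be viewed as a trickle-down: each new particle (a key) enters at the source (the root) and proceeds along directed edges, left or right by comparison, until it comes to rest at an unoccupied vertex. A minimal sketch of that one example, not the paper's general framework:

```python
class Node:
    """An occupied vertex of the binary tree; both children start unoccupied."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def trickle_down(root, key):
    """Feed one particle in at the source; it moves left if its key is
    smaller than the resident key, right otherwise, and comes to rest
    at the first unoccupied vertex it encounters."""
    if root is None:
        return Node(key)  # unoccupied vertex: the particle stops here
    if key < root.key:
        root.left = trickle_down(root.left, key)
    else:
        root.right = trickle_down(root.right, key)
    return root
```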
Partial Quicksort and . . .
Cited by 9 (0 self)
Abstract
Partial Quicksort sorts the l smallest elements in a list of length n. We provide a complete running time analysis for this combination of Find and Quicksort. Further, we give some optimal adapted versions, called Partition Quicksort, with an asymptotic running time c₁ l ln l + c₂ l + n + o(n). The constant c₁ can be as small as the information-theoretic lower bound log₂ e.
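The combination of Find and Quicksort described here can be sketched as quicksort that prunes recursion into any subarray lying entirely beyond rank l; this is an illustrative Lomuto-partition version, not the paper's optimized Partition Quicksort:

```python
def partial_quicksort(a, l, lo=0, hi=None):
    """Rearrange a in place so that a[:l] holds the l smallest
    elements in sorted order. The right subarray is recursed into
    only while it can still contain ranks below l."""
    if hi is None:
        hi = len(a)
    if hi - lo <= 1:
        return a
    # Lomuto partition with a[lo] as pivot
    pivot = a[lo]
    i = lo
    for j in range(lo + 1, hi):
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[lo], a[i] = a[i], a[lo]
    partial_quicksort(a, l, lo, i)   # left part always contains ranks < l
    if i + 1 < l:                    # right part only if ranks < l remain there
        partial_quicksort(a, l, i + 1, hi)
    return a
```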
Partitioning schemes for quicksort and quickselect
, 2003
Cited by 1 (0 self)
Abstract
We introduce several modifications of the partitioning schemes used in Hoare's quicksort and quickselect algorithms, including ternary schemes which identify keys less than or greater than the pivot. We give estimates for the numbers of swaps made by each scheme. Our computational experiments indicate that ternary schemes allow quickselect to identify all keys equal to the selected key at little additional cost. Key words. Sorting, selection, quicksort, quickselect, partitioning.
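A ternary partitioning scheme of the kind this abstract discusses splits the keys into three blocks: less than, equal to, and greater than the pivot. A minimal sketch using the classic Dutch-national-flag pass (illustrative only; the paper's schemes differ in their swap accounting):

```python
def ternary_partition(a, pivot):
    """Three-way partition of a in place: keys < pivot, keys == pivot,
    keys > pivot. Returns (lt, gt) so that a[lt:gt] is the equal block,
    letting quickselect report all keys equal to the selected key."""
    lt, i, gt = 0, 0, len(a)
    while i < gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif a[i] > pivot:
            gt -= 1
            a[i], a[gt] = a[gt], a[i]
        else:
            i += 1
    return lt, gt
```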
Partial quicksort and quickpartitionsort
, 2009
Cited by 1 (1 self)
Abstract
Partial Quicksort sorts the l smallest elements in a list of length n. We provide a complete running time analysis for this combination of Find and Quicksort. Further, we give some optimal adapted versions, called Partition Quicksort, with an asymptotic running time c₁ l ln l + c₂ l + n + o(n). The constant c₁ can be as small as the information-theoretic lower bound log₂ e.
A Gaussian limit process for optimal FIND algorithms
, 2013
Cited by 1 (1 self)
Abstract
We consider versions of the FIND algorithm where the pivot element used is the median of a subset chosen uniformly at random from the data. For the median selection we assume that subsamples of size asymptotic to c · n^α are chosen, where 0 < α ≤ 1/2, c > 0 and n is the size of the data set to be split. We consider the complexity of FIND as a process in the rank to be selected, measured by the number of key comparisons required. After normalization we show weak convergence of the complexity to a centered Gaussian process as n → ∞, which depends on α. The proof relies on a contraction argument for probability distributions on càdlàg functions. We also identify the covariance function of the Gaussian limit process and discuss path and tail properties. AMS 2010 subject classifications. Primary 60F17, 68P10; secondary 60G15, 60C05, 68Q25. Key words. FIND algorithm, Quickselect, complexity, key comparisons, functional limit theorem,
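The FIND variant analyzed here can be sketched as follows: at each stage the pivot is the median of a uniformly random subsample of size about c · n^α. This is an illustrative sketch in which the parameter names c and alpha follow the abstract; everything else (function name, rounding to an odd sample size) is an assumption of the sketch:

```python
import random

def find_with_sampled_median(a, m, c=1.0, alpha=0.5):
    """FIND (quickselect) where each pivot is the median of a uniform
    random subsample of size ~ c * n**alpha of the current data."""
    n = len(a)
    if n == 1:
        return a[0]
    k = max(1, min(n, round(c * n ** alpha)))
    if k % 2 == 0:
        k -= 1  # keep the sample size odd so the median is unique
    sample = random.sample(a, k)
    pivot = sorted(sample)[k // 2]
    smaller = [x for x in a if x < pivot]
    larger = [x for x in a if x > pivot]
    equal = n - len(smaller) - len(larger)
    if m <= len(smaller):
        return find_with_sampled_median(smaller, m, c, alpha)
    if m <= len(smaller) + equal:
        return pivot
    return find_with_sampled_median(larger, m - len(smaller) - equal, c, alpha)
```

The returned value is deterministic (the correct mth smallest element); only the number of comparisons along the way is random.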
On the Variance of Quickselect
, 2005
Abstract
Quickselect with median-of-three is routinely used as the method of choice for selection of the mth element out of n in general-purpose libraries such as the C++ Standard Template Library. Its average behavior is fairly well understood and has been shown to outperform that of the standard variant, which chooses a random pivot on each stage. However, no results were previously known about the variance of the median-of-three variant, other than for the number of comparisons made when the rank m of the sought element is given by a uniform random variable. Here, we consider the variance of the number of comparisons made by quickselect with median-of-three and other quickselect variants when selecting the mth element for m/n → α as n → ∞. We also investigate the behavior of proportion-from-s sampling as s → ∞.
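The median-of-three pivot rule mentioned above is simple to state: the pivot is the median of the first, middle, and last keys of the current subarray. A minimal sketch of just the pivot selection (illustrative; library implementations pick it in place without sorting a tuple):

```python
def median_of_three(a, lo, hi):
    """Return the median of a[lo], a[(lo+hi)//2] and a[hi], i.e. the
    pivot used by the median-of-three quickselect variant on a[lo:hi+1]."""
    mid = (lo + hi) // 2
    return sorted((a[lo], a[mid], a[hi]))[1]
```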
QUICKSELECT revisited by Uwe Rösler, Mathematisches Seminar, Christian-Albrechts-Universität zu Kiel
OPTIMAL SAMPLING STRATEGIES IN QUICKSORT AND QUICKSELECT
Abstract. It is well known that the performance of quicksort can be improved by selecting the median of a sample of elements as the pivot of each partitioning stage. For large samples the partitions are better, but the amount of additional comparisons and exchanges to find the median of the sample also increases. We show in this paper that the optimal sample size to minimize the average total cost of quicksort, as a function of the size n of the current subarray, is a·√n + o(√n). We give a closed expression for a, which depends on the selection algorithm and the costs of elementary comparisons and exchanges. Moreover, we show that selecting the medians of the samples as pivots is not the best strategy when exchanges are much more expensive than comparisons. We also apply the same ideas and techniques to the analysis of quickselect and get similar results.