Results 1 – 5 of 5
Optimal Sampling Strategies in Quicksort and Quickselect
 PROC. OF THE 25TH INTERNATIONAL COLLOQUIUM (ICALP 98), VOLUME 1443 OF LNCS, 1998
Abstract

Cited by 28 (4 self)
It is well known that the performance of quicksort can be substantially improved by selecting the median of a sample of three elements as the pivot of each partitioning stage. This variant is easily generalized to samples of size s = 2k + 1. For large samples the partitions are better, as the median of the sample is a more accurate estimate of the median of the array to be sorted, but the number of additional comparisons and exchanges needed to find the median of the sample also increases. We show that the optimal sample size to minimize the average total cost of quicksort (which includes both comparisons and exchanges) is s = a·√n + o(√n). We also give a closed expression for the constant factor a, which depends on the median-finding algorithm and the costs of elementary comparisons and exchanges. The result above holds in most situations, unless the cost of an exchange far exceeds the cost of a comparison. In that particular case, it is better to select not the median of...
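As a rough illustration of the sampling scheme this abstract describes, the sketch below selects the pivot as the median of a random sample of s elements (s odd). The helper names, the Lomuto partition, and the direct sort of tiny subarrays are our own illustrative choices, not details from the paper.

```python
import random

def median_of_sample(a, lo, hi, s):
    """Hypothetical helper: index of the median of a random sample of
    s elements from a[lo:hi] (s assumed odd), found by sorting the
    sampled indices by value."""
    idx = random.sample(range(lo, hi), s)
    idx.sort(key=lambda i: a[i])
    return idx[s // 2]

def quicksort(a, lo=0, hi=None, s=3):
    """Quicksort using the median of a sample of size s as the pivot
    of each partitioning stage."""
    if hi is None:
        hi = len(a)
    if hi - lo <= s:
        a[lo:hi] = sorted(a[lo:hi])         # tiny subarray: sort directly
        return
    p = median_of_sample(a, lo, hi, s)
    a[p], a[hi - 1] = a[hi - 1], a[p]       # move pivot to the end
    pivot, store = a[hi - 1], lo
    for i in range(lo, hi - 1):             # Lomuto partition
        if a[i] < pivot:
            a[i], a[store] = a[store], a[i]
            store += 1
    a[store], a[hi - 1] = a[hi - 1], a[store]
    quicksort(a, lo, store, s)
    quicksort(a, store + 1, hi, s)
```

Larger s (e.g. s = 5, 7, ...) gives better-balanced partitions at the cost of more work per pivot selection, which is exactly the trade-off the paper optimizes.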
Analysis of Quickfind with small subfiles
Abstract

Cited by 2 (0 self)
In this paper we investigate variants of the well-known Hoare's Quickfind algorithm for the selection of the jth element out of n, when recursion stops for subfiles whose size is below a predefined threshold and a simpler algorithm is run instead. We provide estimates for the combined number of passes, comparisons and exchanges, for both the basic Quickfind and median-of-three Quickfind. In each case, we consider two policies for the small subfiles: insertion sort and selection sort, but the analysis could easily be adapted to alternative policies. We obtain the average cost of each of these variants and compare them with the costs of the standard variants, which do not use a cutoff. We also give the best explicit cutoff bound for each of the variants.
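A minimal sketch of the small-subfile policy described above, with insertion sort as the fallback. The `cutoff=10` value is purely illustrative; the paper derives the best explicit cutoff bound, which this sketch does not attempt to reproduce.

```python
def insertion_sort(a, lo, hi):
    """Sort a[lo:hi] in place by straight insertion."""
    for i in range(lo + 1, hi):
        v, j = a[i], i
        while j > lo and a[j - 1] > v:
            a[j] = a[j - 1]
            j -= 1
        a[j] = v

def quickfind(a, j, cutoff=10):
    """Return the j-th smallest element (0-based) of list a, mutating a.
    Subfiles below `cutoff` are handled by insertion sort instead of
    further partitioning, as in the small-subfile policy above."""
    lo, hi = 0, len(a)
    while hi - lo > cutoff:
        pivot, store = a[hi - 1], lo
        for i in range(lo, hi - 1):          # Lomuto partition on last element
            if a[i] < pivot:
                a[i], a[store] = a[store], a[i]
                store += 1
        a[store], a[hi - 1] = a[hi - 1], a[store]
        if j == store:
            return a[store]
        if j < store:
            hi = store
        else:
            lo = store + 1
    insertion_sort(a, lo, hi)                # small subfile: simpler algorithm
    return a[j]
```

The cutoff saves the per-call overhead of partitioning on subfiles where a quadratic method is cheaper in practice; the paper quantifies exactly where that crossover lies.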
The Analysis of Find or Perpetuities on Cadlag Functions
 DISCRETE MATHEMATICS AND THEORETICAL COMPUTER SCIENCE (SUBM.)
Abstract
In the running-time analysis of the algorithm Find and versions of it, there appear as limiting distributions solutions of stochastic fixed-point equations of the form
K(µ) =_D ∑ …
where =_D denotes equality in distribution.
, 2007
Abstract
In the running-time analysis of the algorithm Find and versions of it, there appear as limiting distributions solutions of stochastic fixed-point equations of the form …
A Gaussian limit process for optimal FIND algorithms
, 2013
Abstract
We consider versions of the FIND algorithm where the pivot element used is the median of a subset chosen uniformly at random from the data. For the median selection we assume that subsamples of size asymptotic to c·n^α are chosen, where 0 < α ≤ 1/2, c > 0 and n is the size of the data set to be split. We consider the complexity of FIND as a process in the rank to be selected, measured by the number of key comparisons required. After normalization we show weak convergence of the complexity to a centered Gaussian process as n → ∞, which depends on α. The proof relies on a contraction argument for probability distributions on càdlàg functions. We also identify the covariance function of the Gaussian limit process and discuss path and tail properties. AMS 2010 subject classifications. Primary 60F17, 68P10; secondary 60G15, 60C05, 68Q25. Key words. FIND algorithm, Quickselect, complexity, key comparisons, functional limit theorem,
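The pivot rule analyzed in this abstract can be sketched as follows: each partitioning step draws a random subsample of size roughly c·n^α and uses its median as the pivot. The defaults c = 1.0 and α = 0.5 are illustrative choices only, and the function name is our own; the paper studies the resulting comparison complexity, not this particular implementation.

```python
import random

def find_rank(a, j, c=1.0, alpha=0.5):
    """Return the j-th smallest element (0-based) of sequence a.
    Each pivot is the median of a random subsample of size about
    c * n**alpha (clamped to an odd value), following the sampling
    scheme described in the abstract."""
    a = list(a)
    lo, hi = 0, len(a)
    while True:
        n = hi - lo
        if n == 1:
            return a[lo]
        s = min(max(1, int(c * n ** alpha)), n)
        if s % 2 == 0:
            s -= 1                              # keep the sample size odd
        sample = sorted(random.sample(range(lo, hi), s),
                        key=lambda i: a[i])
        p = sample[s // 2]                      # index of the sample median
        a[p], a[hi - 1] = a[hi - 1], a[p]       # move pivot to the end
        pivot, store = a[hi - 1], lo
        for i in range(lo, hi - 1):             # Lomuto partition
            if a[i] < pivot:
                a[i], a[store] = a[store], a[i]
                store += 1
        a[store], a[hi - 1] = a[hi - 1], a[store]
        if j == store:
            return a[store]
        if j < store:
            hi = store
        else:
            lo = store + 1
```

With α = 1/2 the subsample size grows like √n, the boundary case of the paper's 0 < α ≤ 1/2 regime; smaller α gives cheaper pivot selection but less accurate medians.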