Results 1–10 of 27
Analysis of Shellsort and related algorithms
 ESA ’96: Fourth Annual European Symposium on Algorithms
, 1996
"... This is an abstract of a survey talk on the theoretical and empirical studies that have been done over the past four decades on the Shellsort algorithm and its variants. The discussion includes: upper bounds, including linkages to numbertheoretic properties of the algorithm; lower bounds on Shellso ..."
Abstract

Cited by 26 (0 self)
This is an abstract of a survey talk on the theoretical and empirical studies that have been done over the past four decades on the Shellsort algorithm and its variants. The discussion includes: upper bounds, including linkages to number-theoretic properties of the algorithm; lower bounds on Shellsort and Shellsort-based networks; average-case results; proposed probabilistic sorting networks based on the algorithm; and a list of open problems. 1 Shellsort The basic Shellsort algorithm is among the earliest sorting methods to be discovered (by D. L. Shell in 1959 [36]) and is among the easiest to implement, as exhibited by the following C code for sorting an array a[l], ..., a[r]: shellsort(itemType a[], int l, int r) { int i, j, h; itemType v; ...
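The quoted code breaks off after the declarations. A minimal completion in the style of Sedgewick's C code, using the common 3h+1 increment sequence (1, 4, 13, 40, ...); the survey itself discusses many increment sequences, so this particular choice is only illustrative:

```c
typedef int itemType;

/* Shellsort for a[l..r] using the 3h+1 increment sequence. */
void shellsort(itemType a[], int l, int r) {
    int i, j, h;
    itemType v;
    for (h = 1; h <= (r - l) / 9; h = 3 * h + 1)
        ;                               /* find the largest starting increment */
    for (; h > 0; h /= 3)               /* h-sort for each decreasing increment */
        for (i = l + h; i <= r; i++) {  /* insertion sort with stride h */
            v = a[i];
            j = i;
            while (j >= l + h && v < a[j - h]) {
                a[j] = a[j - h];
                j -= h;
            }
            a[j] = v;
        }
}
```

The final pass runs with h = 1, i.e. a plain insertion sort, which guarantees the output is sorted; the earlier passes exist only to make that last pass cheap.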
Lower Bounds for Shellsort
 In Proceedings of the 33rd Annual IEEE Symposium on Foundations of Computer Science
, 1997
"... We show lower bounds on the worstcase complexity of Shellsort. In particular, we give a fairly simple proof of an \Omega\Gamma n lg 2 n=(lg lg n) 2 ) lower bound for the size of Shellsort sorting networks, for arbitrary increment sequences. We also show an identical lower bound for the running ..."
Abstract

Cited by 13 (4 self)
We show lower bounds on the worst-case complexity of Shellsort. In particular, we give a fairly simple proof of an Ω(n lg^2 n / (lg lg n)^2) lower bound for the size of Shellsort sorting networks, for arbitrary increment sequences. We also show an identical lower bound for the running time of Shellsort algorithms, again for arbitrary increment sequences. Our lower bounds establish an almost tight tradeoff between the running time of a Shellsort algorithm and the length of the underlying increment sequence. Proposed running head: Lower Bounds for Shellsort. Contact author: Prof. Greg Plaxton, Department of Computer Science, University of Texas at Austin, Austin, Texas 78712-1188. 1 Introduction Shellsort is a classical sorting algorithm introduced by Shell in 1959 [15]. The algorithm is based on a sequence H = h_0, ..., h_{m-1} of positive integers called an increment sequence. An input file A = A[0], ..., A[n-1] of elements is sorted by performing an ...
Randomized Shellsort: A simple oblivious sorting algorithm
 In Proceedings of the 21st ACM-SIAM Symposium on Discrete Algorithms (SODA)
, 2010
"... In this paper, we describe a randomized Shellsort algorithm. This algorithm is a simple, randomized, dataoblivious version of the Shellsort algorithm that always runs in O(n log n) time and succeeds in sorting any given input permutation with very high probability. Taken together, these properties ..."
Abstract

Cited by 12 (2 self)
In this paper, we describe a randomized Shellsort algorithm. This algorithm is a simple, randomized, data-oblivious version of the Shellsort algorithm that always runs in O(n log n) time and succeeds in sorting any given input permutation with very high probability. Taken together, these properties imply applications in the design of new efficient privacy-preserving computations based on the secure multi-party computation (SMC) paradigm. In addition, by a trivial conversion of this Monte Carlo algorithm to its Las Vegas equivalent, one gets the first version of Shellsort with a running time that is provably O(n log n) with very high probability.
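The algorithm is built from data-oblivious compare-exchange operations applied between regions of the array that are matched up by random permutations. A sketch of that core primitive (the full algorithm's schedule of region pairs, offsets n/2, n/4, ..., 1, and repetition counts is not reproduced here):

```c
#include <stdlib.h>

/* Data-oblivious compare-exchange: which comparison is made never depends
   on the data, only on the (possibly random) index schedule. */
void compare_exchange(int a[], int i, int j) {
    if (a[i] > a[j]) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}

/* One region compare-exchange in the spirit of randomized Shellsort: match
   the len positions starting at s1 against a random permutation of the len
   positions starting at s2, and compare-exchange each matched pair once. */
void region_compare_exchange(int a[], int s1, int s2, int len) {
    int *pi = malloc((size_t)len * sizeof(int));
    for (int k = 0; k < len; k++) pi[k] = k;
    for (int k = len - 1; k > 0; k--) {   /* Fisher-Yates shuffle */
        int r = rand() % (k + 1);
        int t = pi[k]; pi[k] = pi[r]; pi[r] = t;
    }
    for (int k = 0; k < len; k++)
        compare_exchange(a, s1 + k, s2 + pi[k]);
    free(pi);
}
```

Because each index participates in exactly one compare-exchange per region pass, one pass already guarantees that the overall minimum ends up in the first region and the overall maximum in the second, whatever matching was drawn.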
A Lower Bound on the AverageCase Complexity of Shellsort
, 1999
"... We give a general lower bound on the averagecase complexity of Shellsort: the average number of datamovements (and comparisons) made by a ppass Shellsort for any incremental sequence is \Omega\Gamma pn 1+1=p ) for every p. The proof is an example of the use of Kolmogorov complexity (the incompr ..."
Abstract

Cited by 10 (6 self)
We give a general lower bound on the average-case complexity of Shellsort: the average number of data movements (and comparisons) made by a p-pass Shellsort for any incremental sequence is Ω(p n^(1+1/p)) for every p. The proof is an example of the use of Kolmogorov complexity (the incompressibility method) in the analysis of algorithms. 1 Introduction The question of a nontrivial general lower bound (or upper bound) on the average complexity of Shellsort (due to D. L. Shell [14]) has been open for about four decades [5, 13]. We present such a lower bound for p-pass Shellsort for every p. Shellsort sorts a list of n elements in p passes using a sequence of increments h_1, ..., h_p. In the k-th pass the main list is divided into h_k separate sublists of length ⌈n/h_k⌉, where the i-th sublist consists of the elements at positions j, where j mod h_k = i - 1, of the main list (i = 1, ..., h_k). Every sublist is sorted using a straightforward insertion sort. The effi...
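The pass structure described above can be sketched directly: h-sorting the array is the same as insertion-sorting each of the h interleaved sublists. A minimal 0-indexed sketch with hypothetical helper names (the last increment must be 1 for the result to be fully sorted):

```c
/* One pass of a p-pass Shellsort: h-sort the array, i.e. insertion-sort each
   of the h sublists a[i], a[i+h], a[i+2h], ... simultaneously. */
void h_sort(int a[], int n, int h) {
    for (int i = h; i < n; i++) {       /* gapped insertion sort */
        int v = a[i], j = i;
        while (j >= h && a[j - h] > v) { a[j] = a[j - h]; j -= h; }
        a[j] = v;
    }
}

/* p-pass Shellsort for a given increment sequence h_1, ..., h_p. */
void shellsort_p(int a[], int n, const int incs[], int p) {
    for (int k = 0; k < p; k++)
        h_sort(a, n, incs[k]);
}
```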
Oblivious RAM Revisited
"... We reinvestigate the oblivious RAM concept introduced by Goldreich and Ostrovsky, which enables a client, that can store locally only a constant amount of data, to store remotely n data items, and access them while hiding the identities of the items which are being accessed. Oblivious RAM is often c ..."
Abstract

Cited by 9 (0 self)
We reinvestigate the oblivious RAM concept introduced by Goldreich and Ostrovsky, which enables a client that can store locally only a constant amount of data to store n data items remotely and access them while hiding the identities of the items being accessed. Oblivious RAM is often cited as a powerful tool, which can be used, for example, for search on encrypted data or for preventing cache attacks. However, oblivious RAM is also commonly considered to be impractical due to its overhead, which is asymptotically efficient but quite high: each data request is replaced by O(log^4 n) requests, or by O(log^3 n) requests where the constant in the "O" notation is a few thousand. In addition, O(n log n) external memory is required in order to store the n data items. We redesign the oblivious RAM protocol using modern tools, namely Cuckoo hashing and a new oblivious sorting algorithm. The resulting protocol uses only O(n) external memory, and replaces each data request by only O(log^2 n) requests (with a small constant). This analysis is validated by experiments that we ran. Keywords: Secure two-party computation, oblivious RAM.
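For contrast with the O(log^2 n)-request protocol, the trivial oblivious baseline scans the entire store on every request: the access pattern reveals nothing about the target, but each access costs O(n). A hypothetical sketch, with plaintext integers standing in for encrypted blocks:

```c
/* Trivial oblivious read: touch every one of the n slots so the server-visible
   access pattern is independent of `target`. The select is branch-free so even
   local control flow does not depend on the secret index. */
int oblivious_read(const int store[], int n, int target) {
    int result = 0;
    for (int i = 0; i < n; i++) {
        int hit = (i == target);
        result = hit * store[i] + (1 - hit) * result;
    }
    return result;
}
```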
On the adaptiveness of quicksort
 In: Workshop on Algorithm Engineering & Experiments, SIAM
, 2005
"... Quicksort was first introduced in 1961 by Hoare. Many variants have been developed, the best of which are among the fastest generic sorting algorithms available, as testified by the choice of Quicksort as the default sorting algorithm in most programming libraries. Some sorting algorithms are adapti ..."
Abstract

Cited by 8 (1 self)
Quicksort was first introduced in 1961 by Hoare. Many variants have been developed, the best of which are among the fastest generic sorting algorithms available, as testified by the choice of Quicksort as the default sorting algorithm in most programming libraries. Some sorting algorithms are adaptive, i.e., they have a complexity analysis which is better for inputs which are nearly sorted, according to some specified measure of presortedness. Quicksort is not among these, as it uses Ω(n log n) comparisons even when the input is already sorted. However, in this paper we demonstrate empirically that the actual running time of Quicksort is adaptive with respect to the presortedness measure Inv. Differences close to a factor of two are observed between instances with low and high Inv value. We then show that for the randomized version of Quicksort, the number of element swaps performed is provably adaptive with respect to the measure Inv. More precisely, we prove that randomized Quicksort performs expected O(n(1 + log(1 + Inv/n))) element swaps, where Inv denotes the number of inversions in the input sequence. This result provides a theoretical explanation for the observed behavior, and gives new insights on the behavior of the Quicksort algorithm. We also give some empirical results on the adaptive behavior of Heapsort and Mergesort.
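The measure Inv used above is the number of inversions, i.e., pairs i < j with a[i] > a[j]. It can be computed in O(n log n) time by a merge sort that counts cross-inversions; a sketch (note it sorts its input as a side effect):

```c
#include <stdlib.h>
#include <string.h>

/* Count inversions in a[lo..hi) while merge-sorting that range. */
static long long merge_count(int *a, int *tmp, int lo, int hi) {
    if (hi - lo < 2) return 0;
    int mid = (lo + hi) / 2;
    long long inv = merge_count(a, tmp, lo, mid) + merge_count(a, tmp, mid, hi);
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi) {
        if (a[i] <= a[j]) tmp[k++] = a[i++];
        else { inv += mid - i; tmp[k++] = a[j++]; } /* a[i..mid) all exceed a[j] */
    }
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi) tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof(int));
    return inv;
}

/* Inv of a[0..n); the array is left sorted afterwards. */
long long count_inversions(int *a, int n) {
    int *tmp = malloc((size_t)n * sizeof(int));
    long long inv = merge_count(a, tmp, 0, n);
    free(tmp);
    return inv;
}
```

Inv ranges from 0 (sorted input) to n(n-1)/2 (reversed input), which is the scale on which the O(n(1 + log(1 + Inv/n))) swap bound interpolates.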
On the Performance of WEAK-HEAPSORT
, 2000
"... . Dutton #1993# presents a further HEAPSORT variant called WEAKHEAPSORT, which also contains a new data structure for priority queues. The sorting algorithm and the underlying data structure are analyzed showing that WEAKHEAPSORT is the best HEAPSORT variant and that it has a lot of nice propert ..."
Abstract

Cited by 6 (2 self)
Dutton (1993) presents a further HEAPSORT variant called WEAK-HEAPSORT, which also contains a new data structure for priority queues. The sorting algorithm and the underlying data structure are analyzed, showing that WEAK-HEAPSORT is the best HEAPSORT variant and that it has a lot of nice properties. It is shown that the worst-case number of comparisons is n⌈log n⌉ - 2^⌈log n⌉ + n - ⌈log n⌉ ≤ n log n + 0.1n, and that weak heaps can be generated with n - 1 comparisons. A double-ended priority queue based on weak heaps can be generated in n + ⌈n/2⌉ - 2 comparisons. Moreover, examples for the worst and the best case of WEAK-HEAPSORT are presented, the number of weak heaps on {1, ..., n} is determined, and experiments on the average case are reported.
P.: The average-case complexity of Shellsort
 Lecture Notes in Computer Science 1644
, 1999
"... We prove a general lower bound on the averagecase complexity of Shellsort: the average number of datamovements (and comparisons) made by a ppass Shellsort for 1 1+ any incremental sequence is Ω(pn p) for all p ≤ log n. Using similar arguments, we analyze the averagecase complexity of several oth ..."
Abstract

Cited by 6 (2 self)
We prove a general lower bound on the average-case complexity of Shellsort: the average number of data movements (and comparisons) made by a p-pass Shellsort for any incremental sequence is Ω(p n^(1+1/p)) for all p ≤ log n. Using similar arguments, we analyze the average-case complexity of several other sorting algorithms.
The worst case in Shellsort and related algorithms
 Journal of Algorithms
, 1993
"... Abstract. We show that sorting a sufficiently long list of length N using Shellsort with m increments (not necessarily decreasing) requires at least N 1+c/ √ m comparisons in the worst case, for some constant c> 0. For m ≤ (log N / log log N) 2 we obtain an upper bound of the same form. We also prov ..."
Abstract

Cited by 5 (0 self)
Abstract. We show that sorting a sufficiently long list of length N using Shellsort with m increments (not necessarily decreasing) requires at least N^(1+c/√m) comparisons in the worst case, for some constant c > 0. For m ≤ (log N / log log N)^2 we obtain an upper bound of the same form. We also prove that Ω(N (log N / log log N)^2) comparisons are needed regardless of the number of increments. Our approach is general enough to apply to other sorting algorithms, including Shakersort, for which an even stronger result is proved.
Asymptotic Complexity from Experiments? A Case Study for Randomized Algorithms
 In Proceedings of the 4th Workshop on Algorithm Engineering (WAE'00)
, 2000
"... In the analysis of algorithms we are usually interested in obtaining closed form expressions for their complexity, or at least asymptotic expressions in O()notation. Unfortunately, there are fundamental reasons why we cannot obtain such expressions from experiments. This paper explains how we can ..."
Abstract

Cited by 4 (2 self)
In the analysis of algorithms we are usually interested in obtaining closed-form expressions for their complexity, or at least asymptotic expressions in O()-notation. Unfortunately, there are fundamental reasons why we cannot obtain such expressions from experiments. This paper explains how we can at least come close to this goal using the scientific method. Besides the traditional role of experiments as a source of preliminary ideas for theoretical analysis, experiments can test falsifiable hypotheses obtained by incomplete theoretical analysis. Asymptotic behavior can also be deduced from stronger hypotheses which have been induced from experiments. As long as a complete mathematical analysis is impossible, well-tested hypotheses may have to take their place. Several ...