Results 1–10 of 20
Randomized Shellsort: A simple oblivious sorting algorithm
 In Proceedings of the 21st ACM-SIAM Symposium on Discrete Algorithms (SODA)
, 2010
Abstract

Cited by 12 (2 self)
In this paper, we describe a randomized Shellsort algorithm. This algorithm is a simple, randomized, data-oblivious version of the Shellsort algorithm that always runs in O(n log n) time and succeeds in sorting any given input permutation with very high probability. Taken together, these properties imply applications in the design of new efficient privacy-preserving computations based on the secure multi-party computation (SMC) paradigm. In addition, by a trivial conversion of this Monte Carlo algorithm to its Las Vegas equivalent, one gets the first version of Shellsort with a running time that is provably O(n log n) with very high probability.
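The "simple, randomized, data-oblivious" pass structure described above can be sketched roughly as follows. This is a simplified illustration with hypothetical names, not the published algorithm, which adds further "shaking" and "brick" passes; it assumes the input length is a power of two and sorts only with high probability.

```python
import random

def compare_exchange(a, i, j):
    # Data-oblivious primitive: always reads a[i] and a[j] and
    # conditionally swaps; the access pattern never depends on values.
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def region_compare(a, lo1, lo2, size, rounds=4):
    # Compare-exchange two equal-size regions through several random
    # perfect matchings between their positions.
    for _ in range(rounds):
        matching = list(range(size))
        random.shuffle(matching)
        for k in range(size):
            compare_exchange(a, lo1 + k, lo2 + matching[k])

def randomized_shellsort(a):
    # Simplified pass structure: for each geometrically shrinking
    # offset, compare adjacent regions left-to-right, then
    # right-to-left. Assumes len(a) is a power of two.
    n = len(a)
    offset = n // 2
    while offset >= 1:
        for lo in range(0, n - offset, offset):
            region_compare(a, lo, lo + offset, offset)
        for lo in range(n - 2 * offset, -1, -offset):
            region_compare(a, lo, lo + offset, offset)
        offset //= 2
    return a
```

Note that the sequence of index pairs touched is independent of the data, which is exactly the obliviousness the abstract refers to; only the random matchings vary.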
A Lower Bound on the Average-Case Complexity of Shellsort
, 1999
Abstract

Cited by 10 (6 self)
We give a general lower bound on the average-case complexity of Shellsort: the average number of data movements (and comparisons) made by a p-pass Shellsort for any incremental sequence is Ω(p·n^(1+1/p)) for every p. The proof is an example of the use of Kolmogorov complexity (the incompressibility method) in the analysis of algorithms. 1 Introduction. The question of a nontrivial general lower bound (or upper bound) on the average complexity of Shellsort (due to D.L. Shell [14]) has been open for about four decades [5, 13]. We present such a lower bound for p-pass Shellsort for every p. Shellsort sorts a list of n elements in p passes using a sequence of increments h_1, …, h_p. In the kth pass the main list is divided into h_k separate sublists of length ⌈n/h_k⌉, where the ith sublist consists of the elements at positions j with j mod h_k = i − 1 of the main list (i = 1, …, h_k). Every sublist is sorted using a straightforward insertion sort. The effi...
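The pass structure just described — h_k interleaved sublists, each insertion-sorted — is the classic Shellsort. A minimal sketch (function name is illustrative; Shell's original halving increments are assumed when no sequence is given):

```python
def shellsort(a, increments=None):
    """p-pass Shellsort: in the pass with increment h, every h-th
    element forms a sublist, and each sublist is insertion-sorted."""
    n = len(a)
    if increments is None:
        # Shell's original sequence: n//2, n//4, ..., 1.
        increments = []
        h = n // 2
        while h >= 1:
            increments.append(h)
            h //= 2
    for h in increments:
        # A single h-gapped insertion sort handles all h interleaved
        # sublists of the current pass at once.
        for i in range(h, n):
            x = a[i]
            j = i
            while j >= h and a[j - h] > x:
                a[j] = a[j - h]
                j -= h
            a[j] = x
    return a
```

Any increment sequence ending in 1 yields a correct sort; the choice of sequence only affects the number of data movements, which is what the lower bound above constrains.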
Oblivious RAM Revisited
Abstract

Cited by 9 (0 self)
We reinvestigate the oblivious RAM concept introduced by Goldreich and Ostrovsky, which enables a client that can store locally only a constant amount of data to store remotely n data items and access them while hiding the identities of the items being accessed. Oblivious RAM is often cited as a powerful tool, which can be used, for example, for search on encrypted data or for preventing cache attacks. However, oblivious RAM is also commonly considered to be impractical due to its overhead, which is asymptotically efficient but quite high: each data request is replaced by O(log^4 n) requests, or by O(log^3 n) requests where the constant in the "O" notation is a few thousand. In addition, O(n log n) external memory is required in order to store the n data items. We redesign the oblivious RAM protocol using modern tools, namely Cuckoo hashing and a new oblivious sorting algorithm. The resulting protocol uses only O(n) external memory and replaces each data request by only O(log^2 n) requests (with a small constant). This analysis is validated by experiments that we ran. Keywords: Secure two-party computation, oblivious RAM.
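One of the "modern tools" mentioned, Cuckoo hashing, can be sketched as a toy (non-oblivious) table: two arrays, two hash functions, and lookups that probe at most two slots. The class name and hash choice are illustrative assumptions, not the protocol's construction, which layers obliviousness on top of this structure.

```python
class CuckooHashTable:
    """Toy cuckoo hash table; assumes distinct keys."""
    def __init__(self, capacity=16):
        self.cap = capacity
        self.tables = [[None] * capacity, [None] * capacity]

    def _hash(self, which, key):
        # Two loosely independent hash functions (toy choice).
        return hash((which, key)) % self.cap

    def lookup(self, key):
        # Worst-case O(1): each key has exactly two possible homes.
        for t in (0, 1):
            slot = self.tables[t][self._hash(t, key)]
            if slot is not None and slot[0] == key:
                return slot[1]
        return None

    def insert(self, key, value):
        item, t = (key, value), 0
        for _ in range(2 * self.cap):  # bounded eviction chain
            i = self._hash(t, item[0])
            if self.tables[t][i] is None:
                self.tables[t][i] = item
                return
            # Slot occupied: evict its occupant, which must move to
            # its home in the other table.
            item, self.tables[t][i] = self.tables[t][i], item
            t = 1 - t
        raise RuntimeError("eviction cycle; a real table would rehash")
```

The two-probe lookup is what makes cuckoo hashing attractive here: a fixed, small number of accesses per request is easy to hide behind an oblivious access pattern.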
On the adaptiveness of quicksort
 In: Workshop on Algorithm Engineering & Experiments, SIAM
, 2005
Abstract

Cited by 8 (1 self)
Quicksort was first introduced in 1961 by Hoare. Many variants have been developed, the best of which are among the fastest generic sorting algorithms available, as testified by the choice of Quicksort as the default sorting algorithm in most programming libraries. Some sorting algorithms are adaptive, i.e., they have a complexity analysis which is better for inputs that are nearly sorted, according to some specified measure of presortedness. Quicksort is not among these, as it uses Ω(n log n) comparisons even when the input is already sorted. However, in this paper we demonstrate empirically that the actual running time of Quicksort is adaptive with respect to the presortedness measure Inv. Differences close to a factor of two are observed between instances with low and high Inv values. We then show that for the randomized version of Quicksort, the number of element swaps performed is provably adaptive with respect to the measure Inv. More precisely, we prove that randomized Quicksort performs expected O(n(1 + log(1 + Inv/n))) element swaps, where Inv denotes the number of inversions in the input sequence. This result provides a theoretical explanation for the observed behavior and gives new insights into the behavior of the Quicksort algorithm. We also give some empirical results on the adaptive behavior of Heapsort and Mergesort.
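The two quantities the bound relates — Inv and the swap count — can be observed directly with an instrumented randomized Quicksort. A minimal sketch with illustrative names; the Lomuto partition scheme is an assumption, since the abstract does not specify one:

```python
import random

def inversions(a):
    # Inv: number of pairs i < j with a[i] > a[j].
    # O(n^2) counter, fine for illustration.
    n = len(a)
    return sum(a[i] > a[j] for i in range(n) for j in range(i + 1, n))

def quicksort_swaps(a):
    """Randomized in-place quicksort; returns the number of element
    swaps performed, the quantity the adaptivity bound concerns."""
    swaps = 0
    def sort(lo, hi):
        nonlocal swaps
        if hi - lo < 2:
            return
        p = random.randrange(lo, hi)   # random pivot choice
        if p != hi - 1:
            a[p], a[hi - 1] = a[hi - 1], a[p]
            swaps += 1
        pivot, store = a[hi - 1], lo
        for i in range(lo, hi - 1):    # Lomuto partition
            if a[i] < pivot:
                if i != store:
                    a[i], a[store] = a[store], a[i]
                    swaps += 1
                store += 1
        if store != hi - 1:
            a[store], a[hi - 1] = a[hi - 1], a[store]
            swaps += 1
        sort(lo, store)
        sort(store + 1, hi)
    sort(0, len(a))
    return swaps
```

Comparing `quicksort_swaps` on a nearly sorted input (small Inv) against a shuffled one (Inv ≈ n²/4) exhibits the adaptivity the paper proves in expectation.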
P.: The average-case complexity of Shellsort
 Lecture Notes in Computer Science 1644
, 1999
Abstract

Cited by 6 (2 self)
We prove a general lower bound on the average-case complexity of Shellsort: the average number of data movements (and comparisons) made by a p-pass Shellsort for any incremental sequence is Ω(p·n^(1+1/p)) for all p ≤ log n. Using similar arguments, we analyze the average-case complexity of several other sorting algorithms.
Asymptotic Complexity from Experiments? A Case Study for Randomized Algorithms
 In Proceedings of the 4th Workshop on Algorithm Engineering (WAE'00)
, 2000
Abstract

Cited by 4 (2 self)
In the analysis of algorithms we are usually interested in obtaining closed-form expressions for their complexity, or at least asymptotic expressions in O()-notation. Unfortunately, there are fundamental reasons why we cannot obtain such expressions from experiments. This paper explains how we can at least come close to this goal using the scientific method. Besides the traditional role of experiments as a source of preliminary ideas for theoretical analysis, experiments can test falsifiable hypotheses obtained by incomplete theoretical analysis. Asymptotic behavior can also be deduced from stronger hypotheses which have been induced from experiments. As long as a complete mathematical analysis is impossible, well-tested hypotheses may have to take their place. Several ...
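One concrete way to test a falsifiable asymptotic hypothesis of this kind is to divide a measured cost by the conjectured growth function and check whether the ratio stabilizes as n grows. A small sketch, using comparison counts as a machine-independent stand-in for running time; the names and the choice of merge sort as the subject are illustrative assumptions:

```python
import math

def mergesort_comparisons(a):
    """Merge sort instrumented to count key comparisons."""
    count = 0
    def sort(xs):
        nonlocal count
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = sort(xs[:mid]), sort(xs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            count += 1
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged
    sort(list(a))
    return count

def ratio_table(ns):
    # Hypothesis check: if the cost is Theta(n log n), these ratios
    # should approach a constant as n grows.
    return [(n, mergesort_comparisons(range(n, 0, -1)) / (n * math.log2(n)))
            for n in ns]
```

Here the hypothesis "comparisons grow like n log n" predicts a flat ratio curve; a systematically drifting ratio would falsify it, which is exactly the role of experiments the abstract describes.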
Bureaucratic protocols for secure two-party sorting, selection, and permuting
 In 5th ACM Symposium on Information, Computer and Communications Security (ASIACCS)
, 2010
Abstract

Cited by 4 (1 self)
In this paper, we introduce a framework for secure two-party (S2P) computations, which we call bureaucratic computing, and we demonstrate its efficiency by designing practical S2P computations for sorting, selection, and random permutation. In a nutshell, the main idea behind bureaucratic computing is to design data-oblivious algorithms that push all knowledge and influence of input values down to small black-box circuits, which are simulated using Yao's garbled paradigm. The practical benefit of this approach is that it maintains the zero-knowledge features of secure two-party computations while avoiding the significant computational overheads that come from trying to apply Yao's garbled paradigm to anything other than simple two-input ...
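The idea of pushing all data dependence into a small compare-exchange "black box" can be illustrated in the clear (without any garbling) with a data-oblivious sorting network such as odd-even transposition sort. A hedged sketch with hypothetical names; in the S2P setting the box would be a small garbled circuit rather than plain code:

```python
def compare_exchange(a, i, j):
    # The small "black box": reads both cells, writes both cells,
    # with an identical memory trace regardless of the data.
    lo = min(a[i], a[j])
    hi = max(a[i], a[j])
    a[i], a[j] = lo, hi

def oblivious_sort(a):
    # Odd-even transposition sort: a fixed, input-independent
    # sequence of compare-exchange calls (n rounds of alternating
    # parity suffice), so the access pattern leaks nothing about
    # the values -- the data-oblivious property the framework needs.
    n = len(a)
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):
            compare_exchange(a, i, i + 1)
    return a
```

All control flow here depends only on n, never on the keys; the values influence the computation only inside `compare_exchange`, which is the "bureaucratic" separation the abstract describes.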
Stochastic Analysis of Shell Sort
 Algorithmica
, 1999
Abstract

Cited by 2 (1 self)
We analyze the Shell Sort algorithm under the usual random permutation model. Using empirical distribution functions, we recover Louchard's result that the running time of the first stage of (2,1)-Shell Sort has a limiting distribution given by the area under the absolute Brownian bridge. The analysis extends to (h,1)-Shell Sort, where we find a limiting distribution given by the sum of areas under correlated absolute Brownian bridges. A variation of (h,1)-Shell Sort which is slightly more efficient is presented and its asymptotic behavior analyzed. Research supported in part by National Science Foundation grant DMS-9532039 and NIAID grant 2R01 AI29196804. AMS 1980 subject classifications. Primary: 62E17; secondary 65D20. Key words and phrases: Brownian bridge, empirical process, keys, permutation, sorting. 1. Introduction. Shell Sort is an algor...
Sorting Large Records On A Cell Broadband Engine
Abstract

Cited by 2 (2 self)
We consider the sorting of a large number of multi-field records on the Cell Broadband Engine. We show that our method, which generates runs using a 2-way merge and then merges these runs using a 4-way merge, outperforms previously proposed sort methods that use either comb sort or bitonic sort for run generation followed by a 2-way odd-even merging of runs. Interestingly, the best performance is achieved by using scalar memory-copy instructions rather than vector instructions.
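The 4-way run-merging step can be sketched generically as a k-way merge driven by a small min-heap; with k = 4 this matches the merge fan-in described above. An illustrative scalar version only, not the paper's SPE-tuned implementation:

```python
import heapq

def kway_merge(runs):
    """Merge k sorted runs in a single pass using a k-entry min-heap.
    Heap entries are (value, run index, position) so ties on value
    break deterministically by run index."""
    heap = []
    for r, run in enumerate(runs):
        if run:
            heapq.heappush(heap, (run[0], r, 0))
    out = []
    while heap:
        val, r, i = heapq.heappop(heap)
        out.append(val)
        if i + 1 < len(runs[r]):
            heapq.heappush(heap, (runs[r][i + 1], r, i + 1))
    return out
```

A higher merge fan-in means fewer passes over the data, which is the motivation for preferring a 4-way merge over repeated 2-way merging of runs.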
An Enhancement of Major Sorting Algorithms
, 2008
Abstract

Cited by 2 (0 self)
Abstract: One of the fundamental issues in computer science is ordering a list of items. Although a huge number of sorting algorithms exist, the sorting problem continues to attract a great deal of research, because efficient sorting is important for optimizing the use of other algorithms. This paper presents two new sorting algorithms, the enhanced selection sort and enhanced bubble sort algorithms. Enhanced selection sort is an enhancement of selection sort that makes it slightly faster and stable. Enhanced bubble sort is an enhancement of both the bubble sort and selection sort algorithms, with O(n lg n) complexity instead of the O(n^2) of bubble sort and selection sort. The two new algorithms are analyzed, implemented, tested, and compared, and the results are promising.