Results 1–8 of 8
Optimal and Sublogarithmic Time Randomized Parallel Sorting Algorithms
SIAM JOURNAL ON COMPUTING, 1989
Abstract

Cited by 62 (12 self)
We assume a parallel RAM model which allows both concurrent reads and concurrent writes of a global memory. Our main result is an optimal randomized parallel algorithm for INTEGER SORT (i.e., for sorting n integers in the range [1, n]). Our algorithm costs only logarithmic time and is the first known to be optimal: the product of its time and processor bounds is upper bounded by a linear function of the input size. We also give a deterministic sublogarithmic time algorithm for prefix sum. In addition, we present a sublogarithmic time algorithm for obtaining a random permutation of n elements in parallel. Finally, we present sublogarithmic time algorithms for GENERAL SORT and INTEGER SORT. Our sublogarithmic GENERAL SORT algorithm is also optimal.
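The INTEGER SORT problem this abstract refers to, sorting n integers drawn from [1, n], has a simple linear-work sequential baseline (counting sort) against which the paper's parallel algorithm is measured. A minimal sketch of that baseline only; it is not the paper's PRAM algorithm:

```python
def integer_sort(keys, n):
    """Counting sort for keys in [1, n]: O(n) sequential work.

    This is the sequential baseline an optimal parallel INTEGER SORT
    must match in total work (time x processors), not the randomized
    PRAM algorithm described in the abstract.
    """
    counts = [0] * (n + 1)
    for k in keys:
        counts[k] += 1          # tally each key
    out = []
    for v in range(1, n + 1):
        out.extend([v] * counts[v])  # emit keys in sorted order
    return out
```

Optimality in the abstract's sense means the parallel algorithm's processor-time product stays within a constant factor of this O(n) work.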
Using Difficulty of Prediction to Decrease Computation: Fast Sort, Priority Queue and Convex Hull on Entropy Bounded Inputs
Abstract

Cited by 17 (4 self)
There has recently been an upsurge of interest in the Markov model, and in more general stationary ergodic stochastic distributions, in the theoretical computer science community (e.g., see [Vitter, Krishnan, FOCS91], [Karlin, Philips, Raghavan, FOCS92], [Raghavan92] for the use of Markov models for online algorithms, e.g., caching and prefetching). Their results used the fact that compressible sources are predictable (and vice versa), and showed that online algorithms can improve their performance by prediction. Actual page access sequences are in fact somewhat compressible, so their predictive methods can be of benefit. This paper investigates the interesting idea of decreasing computation by using learning in the opposite way, namely to determine the difficulty of prediction. That is, we will approximately learn the input distribution, and then improve the performance of the computation when the input is not too predictable, rather than the reverse. To our knowledge,
An Optimal Selection Algorithm
, 1986
Abstract

Cited by 1 (0 self)
We give an optimal parallel algorithm for selection on the EREW PRAM. It requires a linear number of operations and O(log n log* n) time.
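The selection problem this abstract addresses (finding the k-th smallest of n keys) has a classical randomized sequential solution with expected linear work. A minimal sketch of that sequential analogue, for orientation only; the paper's contribution is achieving the same linear work bound on an EREW PRAM in O(log n log* n) time:

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (0-indexed) of a.

    Classical randomized selection with expected O(n) work; this is the
    sequential baseline, not the EREW PRAM algorithm of the abstract.
    """
    a = list(a)
    while True:
        pivot = random.choice(a)
        lo = [x for x in a if x < pivot]   # strictly smaller keys
        hi = [x for x in a if x > pivot]   # strictly larger keys
        eq = len(a) - len(lo) - len(hi)    # copies equal to the pivot
        if k < len(lo):
            a = lo                          # answer lies among the small keys
        elif k < len(lo) + eq:
            return pivot                    # pivot is the k-th smallest
        else:
            k -= len(lo) + eq               # discard small and equal keys
            a = hi
```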
Parallel Randomized Algorithms for Selection, Sorting and Convex Hulls
 ILLUSTRATION OF REIF MACROS
, 2000
Optimal and Sublogarithmic Time
Abstract
We assume a parallel RAM model which allows both concurrent reads and concurrent writes of a global memory. Our main result is an optimal randomized parallel algorithm for INTEGER SORT (i.e., for sorting n integers in the range [1, n]). Our algorithm costs only logarithmic time and is the first known to be optimal: the product of its time and processor bounds is upper bounded by a linear function of the input size. We also give a deterministic sublogarithmic time algorithm for prefix sum. In addition, we present a sublogarithmic time algorithm for obtaining a random permutation of n elements in parallel. Finally, we present sublogarithmic time algorithms for GENERAL SORT and INTEGER SORT. Our sublogarithmic GENERAL SORT algorithm is also optimal.
Using Learning and Difficulty of Prediction to Decrease Computation: A Fast Sort and Priority Queue on Entropy Bounded Inputs
Abstract
There has recently been an upsurge of interest in the Markov model, and in more general stationary ergodic stochastic distributions, in the theoretical computer science community (e.g., see [Vitter, Krishnan, FOCS91], [Karlin, Philips, Raghavan, FOCS92], [Raghavan92] for the use of Markov models for online algorithms, e.g., caching and prefetching). Their results used the fact that compressible sources are predictable (and vice versa), and showed that online algorithms can improve their performance by prediction. Actual page access sequences are in fact somewhat compressible, so their predictive methods can be of benefit. This paper investigates the interesting idea of decreasing computation by using learning in the opposite way, namely to determine the difficulty of prediction. That is, we will approximately learn the input distribution, and then improve the performance of the computation when the input is not too predictable, rather than the reverse. To our knowledge, this is the first case of a computational problem where we do not assume any particular fixed input distribution and yet computation is decreased when the input is less predictable, rather than the reverse. We concentrate our investigation on a basic computational problem, sorting, and a basic data structure problem, maintaining a priority queue. We present the first known sorting and priority queue algorithms whose complexity depends on the binary entropy H ≤ 1 of the input keys, where we assume that the input keys are generated from an unknown but arbitrary stationary ergodic source. That is, we assume that each of the input keys can be arbitrarily long, but has entropy H. Note that H can be
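The entropy H this abstract ties the running time to can be estimated empirically from the frequencies of observed keys. A minimal sketch of such an empirical estimate, assuming i.i.d. samples for simplicity (the paper itself works with general stationary ergodic sources, which this sketch does not capture):

```python
import math
from collections import Counter

def empirical_entropy(keys):
    """Empirical Shannon entropy, in bits per key, of a key sequence.

    A plug-in estimate from symbol frequencies; a crude stand-in for
    learning the input distribution as described in the abstract.
    """
    counts = Counter(keys)
    total = len(keys)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())
```

A perfectly predictable sequence (one repeated key) has entropy 0, while a uniform binary sequence has entropy 1 bit per key, matching the H ≤ 1 regime the abstract mentions.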
Random Sampling Techniques and Parallel Algorithms Design
, 2003
Abstract
3.1.1 Randomized Algorithms. The technique of randomizing an algorithm to improve its efficiency was introduced in 1976, independently by Rabin and by Solovay & Strassen. Since then, this idea has been used to solve a myriad of computational problems
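The 1976 results alluded to here were randomized primality tests. A minimal Fermat-test sketch of the underlying idea, random witnesses that expose compositeness with high probability; this is a simplified illustration, not Rabin's or Solovay–Strassen's exact test:

```python
import random

def probably_prime(n, trials=20):
    """Fermat primality test: randomized, one-sided error.

    A prime always passes; a composite usually fails some random
    witness (Carmichael numbers can fool this simplified test, which
    is why Rabin and Solovay-Strassen use stronger witnesses).
    """
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:   # Fermat's little theorem violated
            return False            # a is a witness: n is composite
    return True                     # n is probably prime
```

Each independent trial shrinks the error probability, which is the hallmark of this style of randomization.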
unknown title
Abstract
Let P denote the processor bound and T denote the time bound of a parallel algorithm for a given problem; the product PT is clearly lower bounded by the minimum sequential time, Ts, required to solve this problem. We say a parallel algorithm is optimal if PT = O(Ts). Discovering optimal parallel algorithms for sorting both general and integer keys remained an open problem for a long time. Reischuk [25] proposed a randomized parallel algorithm that used n synchronous PRAM processors to sort n general keys in O(log n) time. This algorithm, however, is impractical owing to its large word-length requirements. Reif and Valiant [24] presented a randomized sorting algorithm that ran on a fixed-connection network called cube-connected cycles (CCC). This algorithm employed n processors to sort n general keys in time O(log n). Since Ω(n log n) is a sequential lower bound for this problem, their algorithm is indeed optimal. Simultaneously, Ajtai, Komlós, and Szemerédi [4] discovered a deterministic parallel algorithm for sorting n general keys in time O(log n) using a sorting network of O(n log n) processors. Later, Leighton [17] showed that this algorithm could be modified to run in O(log n) time on an n-node fixed-connection
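The optimality criterion PT = O(Ts) described above can be made concrete with the two sorting bounds the passage cites. A small sketch with illustrative numbers (the constant factors are hypothetical; only the asymptotic shapes come from the text):

```python
import math

def work(p, t):
    """Processor-time product PT of a parallel algorithm."""
    return p * t

# Comparison sorting n general keys: sequential bound Ts = n log n.
n = 1 << 20
ts = n * math.log2(n)

# n processors, O(log n) time (as for Reif-Valiant on the CCC):
# PT = n log n, matching Ts, hence work-optimal.
optimal_pt = work(n, math.log2(n))

# O(n log n) processors, O(log n) time (as for the AKS sorting
# network): PT = n log^2 n, a log n factor of extra work.
aks_pt = work(n * math.log2(n), math.log2(n))

assert optimal_pt == ts   # PT = O(Ts): optimal
assert aks_pt > ts        # PT exceeds Ts: not work-optimal
```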