Results 1–3 of 3
Simple Fast Parallel Hashing by Oblivious Execution
AT&T Bell Laboratories, 1994
"... A hash table is a representation of a set in a linear size data structure that supports constanttime membership queries. We show how to construct a hash table for any given set of n keys in O(lg lg n) parallel time with high probability, using n processors on a weak version of a crcw pram. Our algo ..."
Abstract

Cited by 3 (1 self)
A hash table is a representation of a set in a linear-size data structure that supports constant-time membership queries. We show how to construct a hash table for any given set of n keys in O(lg lg n) parallel time with high probability, using n processors on a weak version of a CRCW PRAM. Our algorithm uses a novel approach of hashing by "oblivious execution", based on probabilistic analysis, to circumvent the parity lower-bound barrier at the near-logarithmic time level. The algorithm is simple and is sketched by the following:

1. Partition the input set into buckets by a random polynomial of constant degree.
2. For t := 1 to O(lg lg n) do:
   (a) Allocate M_t memory blocks, each of size K_t.
   (b) Let each bucket select a block at random, and try to injectively map its keys into the block using a random linear function. Buckets that fail carry on to the next iteration.

The crux of the algorithm is a careful a priori selection of the parameters M_t and K_t. The algorithm uses only O(lg lg...
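The two-step sketch above can be illustrated by a sequential stand-in for the parallel rounds. The block schedule below (M_t equal to n blocks, quadratically sized blocks so a random linear map is injective with constant probability) is an illustrative assumption, not the paper's carefully chosen parameters, and the whole routine is a sketch rather than the paper's PRAM algorithm:

```python
import random

def build_hash_table(keys, seed=0):
    """Sequential sketch of the bucket-then-block scheme.

    Step 1 partitions the keys into buckets with a random linear hash
    (standing in for the paper's random constant-degree polynomial).
    Step 2 iterates: every unresolved bucket picks a block at random and
    tries to place its keys injectively with a random linear function;
    buckets that fail (internal collision, or block already taken this
    round) retry in the next round.
    """
    rng = random.Random(seed)
    n = len(keys)
    p = 2_147_483_647          # prime modulus larger than any key (assumed)
    # Step 1: partition the input set into about n buckets.
    a, b = rng.randrange(1, p), rng.randrange(p)
    buckets = {}
    for k in keys:
        buckets.setdefault(((a * k + b) % p) % max(n, 1), []).append(k)

    table = {}                 # (round, block, slot) -> key
    pending = list(buckets.values())
    t = 0
    while pending:
        t += 1
        M_t = max(n, 1)                               # blocks this round (guess)
        K_t = 2 * max(len(B) for B in pending) ** 2   # quadratic block size (guess)
        taken, failed = set(), []
        for B in pending:
            blk = rng.randrange(M_t)
            a2, b2 = rng.randrange(1, p), rng.randrange(p)
            slots = [((a2 * k + b2) % p) % K_t for k in B]
            if blk not in taken and len(set(slots)) == len(B):
                taken.add(blk)                        # bucket resolved
                for k, s in zip(B, slots):
                    table[(t, blk, s)] = k
            else:
                failed.append(B)                      # carry on to next iteration
        pending = failed
    return table
```

Because each committed bucket maps its keys injectively into a private block, every key lands in exactly one table slot; the point the abstract stresses is that in the real algorithm the sequence M_t, K_t is fixed a priori, so the rounds can run obliviously.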
Converting High Probability into Nearly-Constant Time, with Applications to Parallel Hashing
, 1991
"... ) Yossi Matias Uzi Vishkin University of Maryland & TelAviv University Abstract We present a new paradigm for efficient randomized parallel algorithms that needs O(log n) time, where O(x) means `O(x) expected'. It leads to: (1) constructing a perfect hash function for n elements in O(l ..."
Abstract
Yossi Matias and Uzi Vishkin, University of Maryland & Tel-Aviv University.

We present a new paradigm for efficient randomized parallel algorithms that needs Ô(log n) time, where Ô(x) means "O(x) expected". It leads to:

(1) constructing a perfect hash function for n elements in Ô(log n log(log n)) time and O(n) operations;
(2) an algorithm for generating a random permutation in Ô(log n) time using n processors, or in Ô(log n log(log n)) time and O(n) operations; and
(3) an efficient optimizer: consider a parallel algorithm that runs in t time using p processors; since at each time unit some of the processors may be idle, we let x, the total number of actual operations, be the sum over all non-idle processors at every time unit; assuming the algorithm belongs to a certain kind, it can be adapted to run in Ô(t + log n log(log n)) time (additive overhead!) using x/(t + log n log(log n)) processors.

We also get an optimal integer sorting algorithm. Given...
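The random-permutation result, item (2), rests on a dart-throwing idea that is easy to sketch sequentially: each element repeatedly picks a random cell in an array with constant-factor slack until it lands in an empty one, and reading off the occupied cells yields the permutation. The round structure below mirrors the parallel algorithm, where each round resolves a constant fraction of the elements, but this is a sequential illustration under assumed parameters (array size 2n), not the authors' PRAM code:

```python
import random

def random_permutation(n, seed=0):
    """Dart-throwing sketch: throw n elements into 2n cells, retrying
    collisions round by round, then compact the occupied cells."""
    rng = random.Random(seed)
    m = 2 * n                  # constant-factor slack (assumed) keeps
                               # each throw's collision probability below 1/2
    cells = [None] * m
    pending = list(range(n))
    while pending:
        nxt = []
        for x in pending:
            c = rng.randrange(m)
            if cells[c] is None:
                cells[c] = x   # element claims an empty cell
            else:
                nxt.append(x)  # collided: retry in the next round
        pending = nxt
    # Compacting the occupied cells in order gives the permutation.
    return [x for x in cells if x is not None]
```

Since each element ends up in a cell chosen uniformly among those empty at its turn, the relative order of the elements is a uniformly random permutation; in the parallel setting the number of rounds is what drives the Ô(log n) bound quoted above.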