Results 11–20 of 31
Dynamic Generation of Discrete Random Variates
, 1997
"... We present and analyze efficient new algorithms for generating a random variate distributed according to a dynamically changing set of weights. The base version of each algorithm generates the discrete random variate in O(log N) expected time and updates a weight in O(2 time in the worst case. ..."
Abstract

Cited by 13 (6 self)
We present and analyze efficient new algorithms for generating a random variate distributed according to a dynamically changing set of weights. The base version of each algorithm generates the discrete random variate in O(log* N) expected time and updates a weight in O(2^{log* N}) time in the worst case. We then show how to reduce the update time to O(log* N) amortized expected time. The algorithms are simple, practical, and easy to implement. We show how to apply our techniques to a recent lookup table technique in order to obtain expected constant time in the worst case for generation and update, with no assumptions made about the input. We give parallel algorithms for parallel generation and update having optimal processor-time product. We also apply our techniques to obtain an efficient dynamic algorithm for maintaining ε-heaps of elements; each query is required to return an element whose value is within a relative factor ε of the maximal element value. For ε = 1/polylog(n), each query, insertion, or deletion takes O(log log log n) time.
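As a point of reference, the O(log N)-per-operation baseline that such structures improve on can be implemented with a Fenwick (binary indexed) tree over the weights. This is a simpler stand-in for illustration, not the paper's algorithm; all names below are our own:

```python
import random

class DynamicSampler:
    """Fenwick-tree sampler over N weights: O(log N) generation and
    O(log N) weight updates. A simpler baseline than the paper's
    structures, which push generation to O(log* N) expected time."""

    def __init__(self, n):
        self.n = n
        self.tree = [0.0] * (n + 1)   # 1-indexed partial sums

    def update(self, i, delta):
        """Add `delta` to the weight of item i (0-indexed)."""
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i

    def total(self):
        s, i = 0.0, self.n
        while i:
            s += self.tree[i]
            i -= i & -i
        return s

    def sample(self):
        """Draw item i with probability weight_i / total_weight."""
        u = random.random() * self.total()
        pos, step = 0, 1
        while step * 2 <= self.n:
            step *= 2
        while step:                    # descend the implicit tree
            nxt = pos + step
            if nxt <= self.n and self.tree[nxt] <= u:
                u -= self.tree[nxt]
                pos = nxt
            step >>= 1
        return pos                     # 0-indexed item with positive weight
```

Both operations walk one root-to-leaf path of the implicit binary tree, hence the logarithmic bounds.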
Randomized Data Structures for the Dynamic Closest-Pair Problem
, 1993
"... We describe a new randomized data structure, the sparse partition, for solving the dynamic closestpair problem. Using this data structure the closest pair of a set of n points in Ddimensional space, for any fixed D, can be found in constant time. If a frame containing all the points is known in adv ..."
Abstract

Cited by 10 (2 self)
We describe a new randomized data structure, the sparse partition, for solving the dynamic closest-pair problem. Using this data structure, the closest pair of a set of n points in D-dimensional space, for any fixed D, can be found in constant time. If a frame containing all the points is known in advance, and if the floor function is available at unit cost, then the data structure supports insertions into and deletions from the set in expected O(log n) time and requires expected O(n) space. Here, it is assumed that the updates are chosen by an adversary who does not know the random choices made by the data structure. This method is more efficient than any deterministic algorithm for solving the problem in dimension D > 1. The data structure can be modified to run in O(log^2 n) expected time per update in the algebraic computation tree model of computation. Even this version is more efficient than the currently best known deterministic algorithm for D > 2.
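A minimal illustration of why a unit-cost floor function matters: the classical randomized incremental grid method (insert points in random order, keep a grid whose cell side equals the current minimum distance) finds the closest pair of distinct points in expected linear time. This static sketch is not the paper's dynamic sparse-partition structure:

```python
import math
import random

def closest_pair_grid(points):
    """Expected linear-time closest pair for distinct 2D points: the floor
    function buckets points into a grid whose cell side equals the current
    minimum distance (randomized incremental, Rabin-style sketch)."""
    pts = points[:]
    random.shuffle(pts)

    def cell(p, size):
        return (math.floor(p[0] / size), math.floor(p[1] / size))

    d = math.dist(pts[0], pts[1])      # current minimum distance
    best = (pts[0], pts[1])
    grid = {}

    def rebuild(k, size):
        grid.clear()
        for q in pts[:k]:
            grid.setdefault(cell(q, size), []).append(q)

    rebuild(2, d)
    for k in range(2, len(pts)):
        p = pts[k]
        cx, cy = cell(p, d)
        dmin = d
        # any point closer than d to p lies in one of the 9 adjacent cells
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for q in grid.get((cx + dx, cy + dy), []):
                    dq = math.dist(p, q)
                    if dq < dmin:
                        dmin, best = dq, (p, q)
        if dmin < d:
            d = dmin
            rebuild(k + 1, d)          # minimum shrank: regrid, including p
        else:
            grid.setdefault((cx, cy), []).append(p)
    return best, d
```

The random insertion order makes a rebuild rare enough that the total expected work stays linear; a deterministic order can force quadratic rebuilding.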
Streaming and fully dynamic centralized algorithms for constructing and maintaining sparse spanners
 In International Colloquium on Automata, Languages and Programming
, 2007
"... Abstract. We present a streaming algorithm for constructing sparse spanners and show that our algorithm outperforms significantly the stateoftheart algorithm for this task [20]. Specifically, the processing timeperedge of our algorithm is drastically smaller than that of the algorithm of [20], ..."
Abstract

Cited by 10 (2 self)
We present a streaming algorithm for constructing sparse spanners and show that it significantly outperforms the state-of-the-art algorithm for this task [20]. Specifically, the processing time per edge of our algorithm is drastically smaller than that of the algorithm of [20], and all other efficiency parameters of our algorithm are no greater (and some of them are strictly smaller) than the respective parameters of the state-of-the-art algorithm. We also devise a fully dynamic centralized algorithm maintaining sparse spanners. This algorithm has a very small incremental update time and a nontrivial decremental update time. To our knowledge, this is the first fully dynamic centralized algorithm for maintaining sparse spanners that provides nontrivial bounds on both incremental and decremental update time for a wide range of the stretch parameter t.
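For context, the classical offline greedy t-spanner construction (add an edge only if the spanner's current distance between its endpoints exceeds t times the edge weight) can be sketched as below; the paper's streaming and dynamic algorithms avoid exactly this kind of expensive per-edge shortest-path computation:

```python
import heapq
from collections import defaultdict

def greedy_spanner(n, edges, t):
    """Classical greedy t-spanner (an offline baseline, not the paper's
    streaming algorithm). edges: list of (weight, u, v) with 0 <= u, v < n."""
    adj = defaultdict(list)
    spanner = []

    def dist(s, goal, limit):
        # Dijkstra over the current spanner, pruned at distance `limit`
        dd = {s: 0.0}
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dd.get(u, float("inf")):
                continue
            if u == goal:
                return d
            for v, w in adj[u]:
                nd = d + w
                if nd <= limit and nd < dd.get(v, float("inf")):
                    dd[v] = nd
                    heapq.heappush(pq, (nd, v))
        return float("inf")

    for w, u, v in sorted(edges):          # process edges by increasing weight
        if dist(u, v, t * w) > t * w:      # not yet t-approximated: keep edge
            spanner.append((w, u, v))
            adj[u].append((v, w))
            adj[v].append((u, w))
    return spanner
```

The invariant is that every discarded edge is stretched by at most a factor t inside the spanner, which is what makes the output a t-spanner.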
Faster Suffix Tree Construction with Missing Suffix Links
 In Proceedings of the Thirty-Second Annual Symposium on Theory of Computing
, 2000
"... We consider suffix tree construction for situations with missing suffix links. Two examples of such situations are suffix trees for parameterized strings and suffix trees for 2D arrays. These trees also have the property that the node degrees may be large. We add a new backpropagation component to ..."
Abstract

Cited by 10 (1 self)
We consider suffix tree construction for situations with missing suffix links. Two examples of such situations are suffix trees for parameterized strings and suffix trees for 2D arrays. These trees also have the property that the node degrees may be large. We add a new back-propagation component to McCreight's algorithm and also give a high-probability perfect hashing scheme to cope with large degrees. We show that these two features enable construction of suffix trees for general situations with missing suffix links in O(n) time, with high probability. This gives the first randomized linear-time algorithm for constructing suffix trees for parameterized strings.
Deamortized Cuckoo Hashing: Provable Worst-Case Performance and Experimental Results
"... Cuckoo hashing is a highly practical dynamic dictionary: it provides amortized constant insertion time, worst case constant deletion time and lookup time, and good memory utilization. However, with a noticeable probability during the insertion of n elements some insertion requires Ω(log n) time. Whe ..."
Abstract

Cited by 10 (3 self)
Cuckoo hashing is a highly practical dynamic dictionary: it provides amortized constant insertion time, worst-case constant deletion and lookup time, and good memory utilization. However, with a noticeable probability, during the insertion of n elements some insertion requires Ω(log n) time. Whereas such an amortized guarantee may be suitable for some applications, in other applications (such as high-performance routing) this is highly undesirable. Kirsch and Mitzenmacher (Allerton ’07) proposed a deamortization of cuckoo hashing using queueing techniques that preserve its attractive properties. They demonstrated a significant improvement to the worst-case performance of cuckoo hashing via experimental results, but left open the problem of constructing a scheme with provable properties. In this work we present a deamortization of cuckoo hashing that provably guarantees constant worst-case operations. Specifically, for any sequence of polynomially many operations, with overwhelming probability over the randomness of the initialization phase, each operation is performed in constant time. In addition, we present a general approach for proving that the performance guarantees are preserved when using hash functions with limited independence.
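The amortized behaviour being deamortized is visible in a textbook two-table cuckoo dictionary: lookups and deletions probe exactly two slots, but an insertion may trigger a long eviction chain or, rarely, a full rehash. This is an illustrative sketch of the standard scheme, not the paper's queue-based one:

```python
import random

class CuckooHash:
    """Textbook two-table cuckoo hashing: worst-case O(1) lookup, amortized
    O(1) insert with occasional long eviction chains -- exactly the spikes
    a queue-based deamortization is designed to remove. For illustration
    only; assumes a low load factor so rehashing terminates quickly."""

    def __init__(self, size=101, max_kicks=32):
        self.size = size
        self.max_kicks = max_kicks
        self.tables = [[None] * size, [None] * size]
        self.seeds = [random.randrange(2**31), random.randrange(2**31)]

    def _h(self, i, key):
        return hash((self.seeds[i], key)) % self.size

    def lookup(self, key):
        return any(self.tables[i][self._h(i, key)] == key for i in (0, 1))

    def insert(self, key):
        if self.lookup(key):
            return
        for _ in range(self.max_kicks):
            for i in (0, 1):
                slot = self._h(i, key)
                # place key, evict whatever occupied the slot
                key, self.tables[i][slot] = self.tables[i][slot], key
                if key is None:
                    return
        self._rehash()        # eviction chain too long: fresh hash functions
        self.insert(key)

    def _rehash(self):
        old = [k for t in self.tables for k in t if k is not None]
        self.seeds = [random.randrange(2**31), random.randrange(2**31)]
        self.tables = [[None] * self.size, [None] * self.size]
        for k in old:
            self.insert(k)
```

Each displaced key is bounced to its alternate table until a free slot is found, which is why a single unlucky insertion can cost Ω(log n) moves.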
PRAM Programming: Theory vs. Practice
 IN PROCEEDINGS OF 6TH EUROMICRO WORKSHOP ON PARALLEL AND DISTRIBUTED PROCESSING
, 1997
"... In this paper we investigate the practical viability of PRAM programming within the BSP framework. We argue that there is a necessity for PRAM computations in situations where the problem exhibits poor data locality. We introduce a C++ PRAM simulator that is built on top of the Oxford BSP Toolset, B ..."
Abstract

Cited by 9 (1 self)
In this paper we investigate the practical viability of PRAM programming within the BSP framework. We argue that PRAM computations are necessary in situations where the problem exhibits poor data locality. We introduce a C++ PRAM simulator that is built on top of the Oxford BSP Toolset, BSPlib, and provide a succinct PRAM language. Our approach achieves simplicity of programming over direct-mode BSP programming for a reasonable overhead cost. We objectively compare optimised BSP algorithms with PRAM algorithms implemented with our C++ PRAM library and provide encouraging experimental results for the latter style of programming.
Balanced Allocation on Graphs
 In Proc. 7th Symposium on Discrete Algorithms (SODA
, 2006
"... It is well known that if n balls are inserted into n bins, with high probability, the bin with maximum load contains (1 + o(1))log n / loglog n balls. Azar, Broder, Karlin, and Upfal [1] showed that instead of choosing one bin, if d ≥ 2 bins are chosen at random and the ball inserted into the least ..."
Abstract

Cited by 9 (2 self)
It is well known that if n balls are inserted into n bins, then with high probability the bin with maximum load contains (1 + o(1)) log n / log log n balls. Azar, Broder, Karlin, and Upfal [1] showed that if, instead of choosing one bin, d ≥ 2 bins are chosen at random and the ball is inserted into the least loaded of the d bins, the maximum load drops drastically to log log n / log d + O(1). In this paper, we study the two-choice balls-and-bins process when balls are not allowed to choose any two random bins, but only bins that are connected by an edge in an underlying graph. We show that for n balls and n bins, if the graph is almost regular with degree n^ε, where ε is not too small, the previous bounds on the maximum load continue to hold. Precisely, the maximum load is ...
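The graph-restricted process is easy to simulate: one natural formalization has each ball pick a uniformly random edge of the graph and go to the less-loaded endpoint. The function below is a sketch of that process; the graph and parameters are illustrative choices, not the paper's:

```python
import random

def graph_two_choice(n, edges, balls):
    """Two-choice allocation restricted to a graph: each ball picks a
    uniformly random edge and lands in its less-loaded endpoint
    (ties broken at random). One natural model of the paper's process."""
    load = [0] * n
    for _ in range(balls):
        u, v = random.choice(edges)
        if load[u] < load[v] or (load[u] == load[v] and random.random() < 0.5):
            load[u] += 1
        else:
            load[v] += 1
    return load

# On the complete graph the process reduces to the classical d = 2 scheme
# of Azar et al.; the paper shows the same max-load bound survives on much
# sparser (roughly n^eps-regular) graphs when eps is not too small.
n = 8
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
load = graph_two_choice(n, edges, 8 * n)
```

Running this with a sparse almost-regular graph instead of the complete graph is exactly the experiment the paper's bounds speak to.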
Improved Optimal Shared Memory Simulations, and the Power of Reconfiguration
 In Proceedings of the 3rd Israel Symposium on Theory of Computing and Systems
"... We present timeprocessor optimal randomized algorithms for simulating a shared memory machine (EREW PRAM) on a distributed memory machine (DMM). The first algorithm simulates each step of an nprocessor EREW PRAM on an nprocessor DMM with O( log log n log log log n ) delay with high probability. ..."
Abstract

Cited by 8 (6 self)
We present time-processor optimal randomized algorithms for simulating a shared memory machine (EREW PRAM) on a distributed memory machine (DMM). The first algorithm simulates each step of an n-processor EREW PRAM on an n-processor DMM with O(log log n / log log log n) delay with high probability. This simulation is work optimal and can be made time-processor optimal. The best previous optimal simulations require O(log log n) delay. We also study reconfigurable DMMs, which are a "complete network version" of the well-studied reconfigurable meshes. We show an algorithm that simulates each step of an n-processor EREW PRAM on an n-processor reconfigurable DMM with only O(log n) delay with high probability. We further show how to make this simulation time-processor optimal.
Backyard Cuckoo Hashing: Constant Worst-Case Operations with a Succinct Representation
, 2010
"... The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee constanttime operations in the worst case with high probability, and in terms of space consumption ..."
Abstract

Cited by 7 (3 self)
The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update and lookup time, there are known constructions that guarantee constant-time operations in the worst case with high probability, and in terms of space consumption there are known constructions that use essentially optimal space. In this paper we settle two fundamental open problems: • We construct the first dynamic dictionary that enjoys the best of both worlds: we present a two-level variant of cuckoo hashing that stores n elements using (1 + ϵ)n memory words and guarantees constant-time operations in the worst case with high probability. Specifically, for any ϵ = Ω((log log n / log n)^{1/2}) and for any sequence of polynomially many operations, with high probability over the randomness of the initialization phase, all operations are performed in constant time, independent of ϵ. The construction is based on augmenting cuckoo hashing with a “backyard” that handles a large fraction of the elements, together with a deamortized perfect hashing scheme for eliminating the dependency on ϵ.
Simple Fast Parallel Hashing by Oblivious Execution
 AT&T Bell Laboratories
, 1994
"... A hash table is a representation of a set in a linear size data structure that supports constanttime membership queries. We show how to construct a hash table for any given set of n keys in O(lg lg n) parallel time with high probability, using n processors on a weak version of a crcw pram. Our algo ..."
Abstract

Cited by 4 (2 self)
A hash table is a representation of a set in a linear-size data structure that supports constant-time membership queries. We show how to construct a hash table for any given set of n keys in O(lg lg n) parallel time with high probability, using n processors on a weak version of a CRCW PRAM. Our algorithm uses a novel approach of hashing by "oblivious execution", based on probabilistic analysis, to circumvent the parity lower-bound barrier at the near-logarithmic time level. The algorithm is simple and is sketched by the following:
1. Partition the input set into buckets by a random polynomial of constant degree.
2. For t := 1 to O(lg lg n) do:
   (a) Allocate M_t memory blocks, each of size K_t.
   (b) Let each bucket select a block at random, and try to injectively map its keys into the block using a random linear function. Buckets that fail carry on to the next iteration.
The crux of the algorithm is a careful a priori selection of the parameters M_t and K_t. The algorithm uses only O(lg lg...
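The two-phase outline above can be followed sequentially: hash keys into buckets with a random constant-degree polynomial, then let each bucket retry random linear maps into its own block until one is injective. The block size chosen below (quadratic in the bucket size, FKS-style) is an illustrative stand-in for the paper's carefully tuned M_t and K_t:

```python
import random

def oblivious_hash(keys, p=2_147_483_647):
    """Sequential sketch of the abstract's scheme for nonnegative integer
    keys < p: a random degree-2 polynomial mod a prime partitions the keys
    into buckets, then each bucket retries random linear functions until it
    maps injectively into its block. Block sizing here is an illustrative
    FKS-style choice, not the paper's M_t / K_t parameters."""
    n = len(keys)
    a, b, c = (random.randrange(1, p) for _ in range(3))
    buckets = {}
    for k in keys:                       # phase 1: constant-degree polynomial
        buckets.setdefault((a * k * k + b * k + c) % p % n, []).append(k)

    table = {}
    for t, bucket in enumerate(buckets.values()):
        # quadratic block size makes a random linear map injective w.h.p.
        size = 2 * len(bucket) ** 2 + 1
        while True:                      # phase 2: retry until injective
            u, v = random.randrange(1, p), random.randrange(p)
            slots = {(u * k + v) % p % size for k in bucket}
            if len(slots) == len(bucket):
                for k in bucket:
                    table[k] = (t, (u * k + v) % p % size)
                break
    return table
```

The parallel version runs all buckets' retries simultaneously and obliviously, which is where the O(lg lg n) rounds and the careful choice of block counts and sizes come in.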