Results 11 - 20 of 50
Rigorous Time/Space Tradeoffs for Inverting Functions
SIAM Journal on Computing, 2000
Cited by 20 (1 self)
Abstract:
We provide rigorous time/space tradeoffs for inverting any function. Given a function f, we give a time/space tradeoff of TS^2 = N^3 q(f), where q(f) is the probability that two random elements (taken with replacement) are mapped to the same image under f. We also give a more general tradeoff, TS^3 = N^3, that can invert any function at any point.
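As an illustrative sketch (not from the paper), the collision probability q(f) that appears in the tradeoff can be computed exactly for a small function by tallying image multiplicities:

```python
from collections import Counter

def collision_probability(f, domain):
    """q(f): probability that two elements drawn uniformly at random
    (with replacement) from the domain collide under f."""
    n = len(domain)
    counts = Counter(f(x) for x in domain)
    # Pr[f(X) = f(Y)] for independent uniform X, Y over the domain.
    return sum(c * c for c in counts.values()) / (n * n)

# Extremes: a constant function always collides; a permutation collides
# only when the same element is drawn twice.
const = lambda x: 0
ident = lambda x: x
domain = range(16)
print(collision_probability(const, domain))  # 1.0
print(collision_probability(ident, domain))  # 0.0625 (= 1/16)
```

Functions with many collisions (large q(f)) are, per the tradeoff above, easier to invert for a given time/space budget.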
Strongly history-independent hashing with applications
In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science, 2007
Cited by 20 (5 self)
Abstract:
We present a strongly history-independent (SHI) hash table that supports search in O(1) worst-case time, and insert and delete in O(1) expected time using O(n) data space. This matches the bounds for dynamic perfect hashing, and improves on the best previous results by Naor and Teague on history-independent hashing, which were either weakly history independent, or only supported insertion and search (no delete), each in O(1) expected time. The results can be used to construct many other SHI data structures. We show straightforward constructions for SHI ordered dictionaries: for n keys from {1, ..., n^k}, searches take O(log log n) worst-case time and updates (insertions and deletions) O(log log n) expected time; for keys in the comparison model, searches take O(log n) worst-case time and updates O(log n) expected time. We also describe a SHI data structure for the order-maintenance problem. It supports comparisons in O(1) worst-case time, and updates in O(1) expected time. All structures use O(n) data space.
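The defining property of strong history independence, a memory representation determined only by the current contents, can be illustrated with a toy sketch (mine, not the paper's construction; it pays O(n) per update rather than the paper's O(1)):

```python
import bisect

class CanonicalSet:
    """Toy strongly history-independent (SHI) set: the memory layout is
    always the sorted list of current elements, a canonical representation
    that reveals nothing about the order of past inserts and deletes.
    Illustrates only the SHI property, not the paper's efficiency bounds."""

    def __init__(self):
        self._mem = []

    def insert(self, x):
        i = bisect.bisect_left(self._mem, x)
        if i == len(self._mem) or self._mem[i] != x:
            self._mem.insert(i, x)

    def delete(self, x):
        i = bisect.bisect_left(self._mem, x)
        if i < len(self._mem) and self._mem[i] == x:
            self._mem.pop(i)

    def memory(self):
        return tuple(self._mem)

# Two very different histories, one identical memory representation.
a, b = CanonicalSet(), CanonicalSet()
for x in (3, 1, 2):
    a.insert(x)
b.insert(2); b.insert(9); b.delete(9); b.insert(1); b.insert(3)
assert a.memory() == b.memory() == (1, 2, 3)
```

An observer who sees only the final memory learns the set {1, 2, 3} and nothing about the insertion of 9 or the order of operations.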
Efficient PRAM Simulation on a Distributed Memory Machine
In Proceedings of the Twenty-Fourth ACM Symposium on Theory of Computing, 1992
Cited by 17 (0 self)
Abstract:
We present algorithms for the randomized simulation of a shared memory machine (PRAM) on a Distributed Memory Machine (DMM). In a PRAM, memory conflicts occur only through concurrent access to the same cell, whereas the memory of a DMM is divided into modules, one for each processor, and concurrent accesses to the same module create a conflict. The delay of a simulation is the time needed to simulate a parallel memory access of the PRAM. Any general simulation of an m-processor PRAM on an n-processor DMM will necessarily have delay at least m/n. A randomized simulation is called time-processor optimal if the delay is O(m/n) with high probability. Using a novel simulation scheme based on hashing, we obtain a time-processor optimal simulation with delay O(log log(n) log*(n)). The best previous simulations use a simpler scheme based on hashing and have much larger delay: Θ(log(n)/log log(n)) for the simulation of an n-processor PRAM on an n-processor DMM, and Θ(log(n)) in the case ...
On the Cost-Effectiveness and Realization of the Theoretical PRAM Model
Sonderforschungsbereich 124, VLSI Entwurfsmethoden und Parallelität, Universität Saarbrücken, 1991
Cited by 17 (0 self)
Abstract:
Today's parallel computers provide good support for problems that can be easily embedded in the machines' topologies with regular and sparse communication patterns, but they show poor performance on problems that do not satisfy these conditions. A general-purpose parallel computer should guarantee good performance on most parallelizable problems and should allow users to program without special knowledge about the underlying architecture. Access to memory cells should be fast for local and non-local cells and should not depend on the access pattern. A theoretical model that reaches this goal is the PRAM, but it was thought to be very expensive in terms of constant factors. Our goal is to show that the PRAM is a realistic approach for a general-purpose architecture for any class of algorithms. To do that, we sketch a measure of cost-effectiveness that allows one to determine constant factors in costs and speed of machines. This measure is based on the price/performance ratio and can be compu...
History-Independent Cuckoo Hashing
Cited by 16 (4 self)
Abstract:
Cuckoo hashing is an efficient and practical dynamic dictionary. It provides expected amortized constant update time, worst-case constant lookup time, and good memory utilization. Various experiments have demonstrated that cuckoo hashing is highly suitable for modern computer architectures and distributed settings, and offers significant improvements compared to other schemes. In this work we construct a practical history-independent dynamic dictionary based on cuckoo hashing. In a history-independent data structure, the memory representation at any point in time yields no information on the specific sequence of insertions and deletions that led to its current content, other than the content itself. Such a property is significant when preventing unintended leakage of information, and was also found useful in several algorithmic settings. Our construction enjoys most of the attractive properties of cuckoo hashing. In particular, no dynamic memory allocation is required, updates are performed in expected amortized constant time, and membership queries are performed in worst-case constant time. Moreover, with high probability, the lookup procedure queries only two memory entries, which are independent and can be queried in parallel. The approach underlying our construction is to enforce a canonical memory representation on cuckoo hashing. That is, up to the initial randomness, each set of elements has a unique memory representation.
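A minimal sketch of plain cuckoo hashing (the base scheme, not the paper's history-independent variant) shows the two-table, two-probe structure the abstract refers to; the hash functions and parameters here are illustrative choices:

```python
import random

class CuckooHash:
    """Minimal cuckoo hash table: two tables and two hash functions;
    every key resides in one of its two candidate slots, so a lookup
    probes at most two cells (which can be read in parallel)."""

    def __init__(self, size=16, max_kicks=32):
        self.size, self.max_kicks = size, max_kicks
        self.tables = [[None] * size, [None] * size]
        # Illustrative hash functions: Python's hash salted by random seeds.
        self.seeds = [random.random(), random.random()]

    def _slot(self, i, key):
        return hash((self.seeds[i], key)) % self.size

    def lookup(self, key):
        return any(self.tables[i][self._slot(i, key)] == key for i in (0, 1))

    def insert(self, key):
        if self.lookup(key):
            return True
        i = 0
        for _ in range(self.max_kicks):
            s = self._slot(i, key)
            if self.tables[i][s] is None:
                self.tables[i][s] = key
                return True
            # Slot occupied: evict the resident and try to place it in
            # its other table on the next round (the "cuckoo" step).
            key, self.tables[i][s] = self.tables[i][s], key
            i ^= 1
        return False  # kick chain too long; a real table would rehash

random.seed(42)
table = CuckooHash(size=128)
inserted = [k for k in range(16) if table.insert(k)]
assert all(table.lookup(k) for k in inserted)
assert not table.lookup(10**9)
```

The paper's contribution is to make the placement canonical, so the final table contents depend only on the key set and the initial randomness, not on insertion order; this sketch does not have that property.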
Faster Suffix Tree Construction with Missing Suffix Links
In Proceedings of the Thirty-Second Annual Symposium on the Theory of Computing, 2000
Cited by 15 (1 self)
Abstract:
We consider suffix tree construction for situations with missing suffix links. Two examples of such situations are suffix trees for parameterized strings and suffix trees for 2D arrays. These trees also have the property that the node degrees may be large. We add a new back-propagation component to McCreight's algorithm and also give a high-probability perfect hashing scheme to cope with large degrees. We show that these two features enable construction of suffix trees for general situations with missing suffix links in O(n) time, with high probability. This gives the first randomized linear-time algorithm for constructing suffix trees for parameterized strings.
Implementing the Hierarchical PRAM on the 2D Mesh: Analyses and Experiments
1995
Cited by 12 (2 self)
Abstract:
We investigate aspects of the performance of the EREW instance of the Hierarchical PRAM (H-PRAM) model, a recursively partitionable PRAM, on the 2D mesh architecture via analysis and simulation experiments. Since one of the ideas behind the H-PRAM is to systematically exploit locality in order to negate the need for expensive communication hardware, and thus promote cost-effective scalability, our design decisions are based on minimizing implementation costs. The Peano indexing scheme is used as a simple and natural means of allowing the dynamic, recursive partitioning of the mesh into arbitrarily sized submeshes, as required by the H-PRAM. We show that for any submesh, the ratio of the largest Manhattan distance between two nodes of the submesh to that of the square mesh with an identical number of processors is at most 3/2, thereby demonstrating the locality-preserving properties of the Peano scheme for arbitrary partitions of the mesh. We provide matching analytical and experimenta...
Some Open Questions Related to Cuckoo Hashing
Cited by 12 (4 self)
Abstract:
The purpose of this brief note is to describe recent work in the area of cuckoo hashing, including a clear description of several open problems, with the hope of spurring further research.
Contention Resolution in Hashing-Based Shared Memory Simulations
2000
Cited by 11 (3 self)
Abstract:
In this paper we study the problem of simulating shared memory on the distributed memory machine (DMM). Our approach uses multiple copies of shared memory cells, distributed among the memory modules of the DMM via universal hashing. The main aim is to design strategies that resolve contention at the memory modules. Extending results and methods from random graphs and very fast randomized algorithms, we present new simulation techniques that enable us to improve the previously best results exponentially. In particular, we show that an n-processor CRCW PRAM can be simulated by an n-processor DMM with delay O(log log log n · log* n), with high probability. Next we describe a general technique that can be used to turn these simulations into time-processor optimal ones, in the case of EREW PRAMs to be simulated. We obtain a time-processor optimal simulation of an (n log log log n · log* n)-processor EREW PRAM on an n-processor DMM with delay O(log log log n · log* n), with high probability. When an (n log log log n · log* n)-processor CRCW PRAM is simulated, the delay is only a log* n factor larger. We further demonstrate that the simulations presented cannot be significantly improved using our techniques. We show an Ω(log log log n / log log log log n) lower bound on the expected delay for a class of PRAM simulations, called topological simulations, that covers all previously known simulations as well as the simulations presented in this paper.
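The "multiple copies placed by universal hashing" idea can be sketched in toy form (an illustration of the general approach, not the paper's contention-resolution strategies; a reader's majority of copies always intersects any writer's majority, which is what makes reading from a subset of copies safe):

```python
import random

# Each shared memory cell is stored as C copies, placed in modules chosen
# by independent hash functions from a universal (multiply-mod-prime) class.
P = (1 << 61) - 1          # a Mersenne prime, larger than any key used
C, MODULES = 3, 8          # 3 copies spread over 8 memory modules

random.seed(0)
coeffs = [(random.randrange(1, P), random.randrange(P)) for _ in range(C)]

def module_of(cell, j):
    """Module holding copy j of `cell` (universal hash h_j)."""
    a, b = coeffs[j]
    return ((a * cell + b) % P) % MODULES

def write(memory, cell, value):
    # This sketch updates every copy; the copies-based simulations only
    # need a majority, which reduces contention at busy modules.
    for j in range(C):
        memory[module_of(cell, j)][cell] = value

def read(memory, cell):
    vals = [memory[module_of(cell, j)].get(cell) for j in range(C)]
    return max(set(vals), key=vals.count)   # majority vote over copies

memory = [dict() for _ in range(MODULES)]
write(memory, 42, "hello")
assert read(memory, 42) == "hello"
```

Universal hashing guarantees that, for any fixed set of cells, the copies spread evenly across modules with high probability, which is the starting point for the delay bounds quoted above.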
Linear Hash Functions
1999
Cited by 10 (0 self)
Abstract:
Consider the set H of all linear (or affine) transformations between two vector spaces over a finite field F. We study how good H is as a class of hash functions; namely, we consider hashing a set S of size n into a range having the same cardinality n by a randomly chosen function from H, and look at the expected size of the largest hash bucket. H is a universal class of hash functions for any finite field, but with respect to our measure different fields behave differently. If the
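To make the experiment concrete, here is a hedged sketch (not from the paper) that hashes n keys into n buckets with a uniformly random linear map over GF(2) and reports the largest bucket; all parameter choices are illustrative:

```python
import random
from collections import Counter

def random_linear_map(rows, cols):
    """A uniformly random linear map GF(2)^cols -> GF(2)^rows,
    represented row-wise as `rows` bitmasks."""
    return [random.getrandbits(cols) for _ in range(rows)]

def apply_map(M, x):
    # Output bit i is the inner product <row_i, x> over GF(2),
    # i.e. the parity of the bitwise AND.
    return sum((bin(row & x).count("1") & 1) << i for i, row in enumerate(M))

random.seed(7)
bits = 10                                  # hash n = 2**10 keys into n buckets
S = random.sample(range(1 << 16), 1 << bits)
M = random_linear_map(bits, 16)
buckets = Counter(apply_map(M, x) for x in S)
print("largest bucket:", max(buckets.values()))
```

The paper's question is precisely how this "largest bucket" statistic behaves in expectation, and how the answer depends on the underlying field.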