Results 1–3 of 3
Backyard Cuckoo Hashing: Constant Worst-Case Operations with a Succinct Representation
, 2010
Abstract

Cited by 9 (5 self)
The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee constant-time operations in the worst case with high probability, and in terms of space consumption there are known constructions that use essentially optimal space. In this paper we settle two fundamental open problems: • We construct the first dynamic dictionary that enjoys the best of both worlds: we present a two-level variant of cuckoo hashing that stores n elements using (1+ϵ)n memory words, and guarantees constant-time operations in the worst case with high probability. Specifically, for any ϵ = Ω((log log n / log n)^{1/2}) and for any sequence of polynomially many operations, with high probability over the randomness of the initialization phase, all operations are performed in constant time which is independent of ϵ. The construction is based on augmenting cuckoo hashing with a “backyard” that handles a large fraction of the elements, together with a de-amortized perfect hashing scheme for eliminating the dependency on ϵ.
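As background for the two-level variant described above, here is a minimal sketch of standard cuckoo hashing in Python: two tables, two hash functions, and eviction on collision. The class name, capacities, and seeded-tuple hashing are illustrative assumptions, not the paper's construction.

```python
import random

class CuckooHashTable:
    """Sketch of standard cuckoo hashing: each key has one candidate
    slot in each of two tables, so a lookup probes at most two slots."""

    def __init__(self, capacity=64, max_kicks=32):
        self.size = capacity
        self.tables = [[None] * capacity, [None] * capacity]
        # Illustrative hash functions: Python's hash of a (seed, key) tuple.
        self.seeds = [random.randrange(1 << 30) for _ in range(2)]
        self.max_kicks = max_kicks

    def _slot(self, i, key):
        return hash((self.seeds[i], key)) % self.size

    def lookup(self, key):
        # Worst-case constant time: at most two probes.
        return any(self.tables[i][self._slot(i, key)] == key for i in (0, 1))

    def insert(self, key):
        if self.lookup(key):
            return True
        i = 0
        for _ in range(self.max_kicks):
            pos = self._slot(i, key)
            if self.tables[i][pos] is None:
                self.tables[i][pos] = key
                return True
            # Slot occupied: evict its occupant and try to re-place the
            # evicted key in the other table on the next iteration.
            key, self.tables[i][pos] = self.tables[i][pos], key
            i = 1 - i
        return False  # kick chain too long; a full implementation rehashes
```

A plain implementation rehashes with fresh functions when an insertion fails; roughly speaking, the paper's scheme instead routes elements through a "backyard" structure so that every operation stays constant time in the worst case.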
Dynamic external hashing: The limit of buffering
 In Proc. ACM Symposium on Parallelism in Algorithms and Architectures
, 2009
Abstract

Cited by 8 (3 self)
Hash tables are one of the most fundamental data structures in computer science, in both theory and practice. They are especially useful in external memory, where their query performance approaches the ideal cost of just one disk access. Knuth [16] gave an elegant analysis showing that with some simple collision resolution strategies such as linear probing or chaining, the expected average number of disk I/Os of a lookup is merely 1 + 1/2^{Ω(b)}, where each I/O can read and/or write a disk block containing b items. Inserting a new item into the hash table also costs 1 + 1/2^{Ω(b)} I/Os, which is again almost the best one can do if the hash table is entirely stored on disk. However, this requirement is unrealistic, since any algorithm operating on an external hash table must have some internal memory (at least Ω(1) blocks) to work with. The availability of a small internal memory buffer can dramatically reduce the amortized insertion cost to o(1) I/Os for many external memory data structures. In this paper we study the inherent query-insertion tradeoff of external hash tables in the presence of a memory buffer. In particular, we show that for any constant c > 1, if the expected average successful query cost is targeted at 1 + O(1/b^c) I/Os, then it is not possible to support insertions in less than 1 − O(1/b^{(c−1)/6}) I/Os amortized, which means that the memory buffer is essentially useless. If, however, the query cost is relaxed to 1 + O(1/b^c) I/Os for any constant c < 1, there is a simple dynamic hash table with o(1) insertion cost. Categories and Subject Descriptors: F.2.3 [Analysis of algorithms and problem complexity]: Tradeoffs between complexity measures; E.2 [Data storage]: hash-table representations
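The 1 + 1/2^{Ω(b)} lookup cost quoted above can be illustrated with a toy simulation of external chaining: items hash uniformly into primary blocks of capacity b, and a bucket's overflow spills into chained blocks. The uniform-hashing model and all parameters below are assumptions for illustration, not the paper's exact setup.

```python
import random

def avg_lookup_ios(n_blocks=2000, b=32, load=0.8, seed=1):
    """Toy model of external chaining: load*n_blocks*b items hash
    uniformly into n_blocks primary blocks of capacity b.  An item that
    lands in chain position j of its bucket costs j I/Os to find.
    Returns the average successful-lookup cost in I/Os."""
    rng = random.Random(seed)
    n_items = int(load * n_blocks * b)
    counts = [0] * n_blocks
    for _ in range(n_items):
        counts[rng.randrange(n_blocks)] += 1
    # The i-th item of a bucket (0-based) sits in chain block i // b,
    # so reading it costs i // b + 1 block I/Os.
    total_ios = sum(i // b + 1 for c in counts for i in range(c))
    return total_ios / n_items
```

At load 0.8 with b = 32 the average comes out only slightly above one I/O; increasing b drives the excess toward zero, in line with the 1 + 1/2^{Ω(b)} behaviour.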
On the k-orientability of random graphs
, 2009
Abstract

Cited by 1 (0 self)
Let G(n, m) be an undirected random graph with n vertices and m multi-edges that may include loops, where each edge is realized by choosing its two vertices independently and uniformly at random with replacement from the set of all n vertices. The random graph G(n, m) is said to be k-orientable, where k ≥ 2 is an integer, if there exists an orientation of the edges such that the maximum out-degree is at most k. Let c_k = sup{c : G(n, cn) is k-orientable w.h.p.}. We prove that for k large enough, 1 − 2k exp(−k + 1 + e^{−k/4}) < c_k/k < 1 − exp(…
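For a concrete (non-random) multigraph, k-orientability as defined above can be decided by a standard max-flow reduction; this is textbook machinery, not the paper's probabilistic argument, and it works for any k ≥ 1 even though the abstract restricts to k ≥ 2. A sketch:

```python
from collections import deque

def is_k_orientable(n, edges, k):
    """Decide whether the multigraph on vertices 0..n-1 with the given
    edge list (loops allowed) admits an orientation of maximum
    out-degree <= k.  Reduction: source -> edge node (cap 1) -> each of
    its two endpoint nodes (cap 1) -> sink (cap k).  The graph is
    k-orientable iff the max flow saturates all |edges| edge nodes."""
    m = len(edges)
    SRC, SNK = 0, 1
    enode = lambda e: 2 + e          # one network node per edge
    vnode = lambda v: 2 + m + v      # one network node per vertex
    cap = {}
    adj = [[] for _ in range(2 + m + n)]

    def add_arc(u, v, c):
        if (u, v) not in cap:        # create forward + residual arcs once
            adj[u].append(v)
            adj[v].append(u)
            cap[(u, v)] = cap[(v, u)] = 0
        cap[(u, v)] += c

    for e, (a, b) in enumerate(edges):
        add_arc(SRC, enode(e), 1)
        add_arc(enode(e), vnode(a), 1)
        add_arc(enode(e), vnode(b), 1)
    for v in range(n):
        add_arc(vnode(v), SNK, k)

    flow = 0                         # Edmonds-Karp: BFS augmenting paths
    while True:
        parent = {SRC: None}
        queue = deque([SRC])
        while queue and SNK not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if SNK not in parent:
            return flow == m
        path, v = [], SNK
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[a] for a in path)
        for (u, v) in path:          # push flow, open residual arcs
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug
```

For example, a triangle is 1-orientable (orient the cycle), while a triangle plus a parallel edge is not, since 4 edges cannot all get distinct out-vertices among 3 vertices.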