Results 1 – 6 of 6
Cuckoo hashing
Journal of Algorithms, 2001
Abstract
Cited by 126 (6 self)
We present a simple dictionary with worst case constant lookup time, equaling the theoretical performance of the classic dynamic perfect hashing scheme of Dietzfelbinger et al. (Dynamic perfect hashing: Upper and lower bounds. SIAM J. Comput., 23(4):738–761, 1994). The space usage is similar to that of binary search trees, i.e., three words per key on average. Besides being conceptually much simpler than previous dynamic dictionaries with worst case constant lookup time, our data structure is interesting in that it does not use perfect hashing, but rather a variant of open addressing where keys can be moved back in their probe sequences. An implementation inspired by our algorithm, but using weaker hash functions, is found to be quite practical. It is competitive with the best known dictionaries having an average case (but no nontrivial worst case) guarantee.
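As an illustrative sketch of the scheme described above (two tables, two hash functions, lookups probing at most one cell per table, and insertions that kick keys back along their probe sequences), a minimal Python version might look as follows. The multiplicative hash functions here are simple placeholders, far weaker than the universal families the paper's analysis assumes:

```python
# Minimal cuckoo hash table for non-None integer keys (illustrative
# sketch only; the hash functions are weak placeholders).
class CuckooHash:
    def __init__(self, capacity=16):
        self.cap = capacity
        self.t1 = [None] * capacity  # table probed with h1
        self.t2 = [None] * capacity  # table probed with h2

    def _h1(self, key):
        return (key * 2654435761) % self.cap

    def _h2(self, key):
        return (key * 40503) % self.cap

    def lookup(self, key):
        # Worst-case constant time: at most two probes.
        return self.t1[self._h1(key)] == key or self.t2[self._h2(key)] == key

    def insert(self, key):
        if self.lookup(key):
            return
        cur = key
        for _ in range(32):  # bounded kick-out loop
            i = self._h1(cur)
            cur, self.t1[i] = self.t1[i], cur  # place cur, evict occupant
            if cur is None:
                return
            j = self._h2(cur)
            cur, self.t2[j] = self.t2[j], cur  # evicted key tries table 2
            if cur is None:
                return
        self._rehash(cur)  # probable cycle: grow tables and reinsert

    def _rehash(self, pending):
        old = [k for k in self.t1 + self.t2 if k is not None] + [pending]
        self.cap *= 2
        self.t1 = [None] * self.cap
        self.t2 = [None] * self.cap
        for k in old:
            self.insert(k)
```

A production version would also track the load factor and use stronger (e.g., tabulation-based) hash functions, since the constant-time lookup guarantee is unconditional but the expected constant-time insertion depends on the hash family.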
Integer Priority Queues with Decrease Key in . . .
STOC '03, 2003
Abstract
Cited by 29 (2 self)
We consider Fibonacci-heap-style integer priority queues supporting insert and decrease-key operations in constant time. We present a deterministic linear-space solution that, with n integer keys, supports delete in O(log log n) time. If the integers are in the range [0, N), we can also support delete in O(log log N) time. Even for the special case of monotone priority queues, where the minimum has to be nondecreasing, the best previous bounds on delete were O((log n)^(1/(3−ε))) and O((log N)^(1/(4−ε))). These previous bounds used both randomization and amortization. Our new bounds are deterministic, worst-case, with no restriction to monotonicity, and exponentially faster. As a classical application, for a directed graph with n nodes and m edges with nonnegative integer weights, we get single-source shortest paths in O(m + n log log n) time, or O(m + n log log C) if C is the maximal edge weight. The latter solves an open problem of Ahuja, Mehlhorn, Orlin, and ...
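The shortest-path application mentioned above can be illustrated with a textbook binary-heap Dijkstra. Note this sketch runs in O((n + m) log n), not the paper's O(m + n log log n) bound, but it shows exactly where the priority queue's delete-min and decrease-key costs enter:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths for nonnegative integer weights.

    adj: {u: [(v, w), ...]} adjacency lists with weights w >= 0.
    Uses lazy deletion (re-push with a smaller key) in place of an
    explicit decrease-key operation.
    """
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry superseded by a smaller key
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

With the paper's queue, each of the n delete-mins costs O(log log n) and each of the m relaxations costs O(1), which is how the O(m + n log log n) total arises.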
Dynamic Ordered Sets with Exponential Search Trees
Combination of results presented in FOCS 1996, STOC 2000, and SODA, 2001
Abstract
Cited by 26 (1 self)
We introduce exponential search trees as a novel technique for converting static polynomial-space search structures for ordered sets into fully dynamic linear-space data structures. This leads to an optimal bound of O(√(log n / log log n)) for searching and updating a dynamic set of n integer keys in linear space. Here searching an integer y means finding the maximum key in the set which is smaller than or equal to y. This problem is equivalent to the standard textbook problem of maintaining an ordered set (see, e.g., Cormen, Leiserson, Rivest, and Stein: Introduction to Algorithms, 2nd ed., MIT Press, 2001). The best previous deterministic linear-space bound was O(log n / log log n), due to Fredman and Willard from STOC 1990. No better deterministic search bound was known using polynomial space.
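The search operation defined above (the predecessor-or-equal of y) is easy to state on a static sorted array, which may help fix the problem the tree solves dynamically. This sketch gives the O(log n) static version via `bisect`; the exponential search tree achieves O(√(log n / log log n)) per operation on a set that also supports insertions and deletions:

```python
import bisect

def predecessor(sorted_keys, y):
    """Return the maximum key in sorted_keys that is <= y, or None.

    sorted_keys must be in ascending order. bisect_right finds the
    first position whose key exceeds y, so the slot just before it
    holds the answer.
    """
    i = bisect.bisect_right(sorted_keys, y)
    return sorted_keys[i - 1] if i > 0 else None
```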
Dybvig. Generation-friendly eq hash tables
Workshop on Scheme and Functional Programming, 2007
Abstract
Cited by 1 (0 self)
Eq hash tables, which support arbitrary objects as keys, distinguish keys via pointer comparison and often employ hash functions that utilize the address of the object. When compacting garbage collectors move garbage-collected objects, the addresses of such objects may change, thus invalidating the hash function computation. A common solution is to rehash all of the entries in an eq hash table on the first access to the table after a collection. For a simple stop-and-copy garbage collector, which moves every element of a hash table, the rehashing overhead is proportional to the amount of work done by the collector, and so the cost of rehashing adds only constant overhead. Generational copying collectors, however, may move few or none of the entries, so rehashing a large table may cost considerably more than the garbage collection run that caused the rehash. In other words, such rehashing is not "generation friendly." In this paper, we describe an efficient, generation-friendly mechanism for implementing eq hash tables. The amount of work required for rehashing is proportional to the work performed by the collector, as only objects that actually move during a collection are rehashed. The collector supports eq hash tables and their variants via a simple new type of object, a transport link cell, the handling of which by the collector is nearly trivial.
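The naive rehash-everything scheme criticized above can be sketched in Python; everything here is a hypothetical illustration, with `id()` standing in for the object's heap address (CPython does not actually move objects, so the invalidation never fires in practice):

```python
# Hypothetical sketch of an address-hashed eq table with the naive
# post-collection strategy: rehash every entry, even if the collector
# moved few or none of the keys.
class NaiveEqTable:
    def __init__(self):
        self.buckets = {}  # address-derived hash -> chain of (key, value)

    def _addr(self, key):
        return id(key)  # stand-in for the garbage-collected address

    def put(self, key, value):
        self.buckets.setdefault(self._addr(key), []).append((key, value))

    def get(self, key):
        for k, v in self.buckets.get(self._addr(key), []):
            if k is key:  # eq semantics: pointer comparison
                return v
        return None

    def rehash_all(self):
        # Invoked on first access after a collection. Cost is O(table
        # size) regardless of how many keys the collector actually
        # moved -- the "generation unfriendly" behavior.
        old, self.buckets = self.buckets, {}
        for chain in old.values():
            for k, v in chain:
                self.put(k, v)
```

The paper's transport link cells instead make the collector itself flag exactly the moved keys, so rehashing work matches collector work.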
Randomized Signature Sort: Implementation & Performance Analysis
Abstract
Recently the bounds for integer sorting have improved considerably, from comparison sorting to [1] for deterministic algorithms, or to ... for a radix sort algorithm in space that depends only on the number of input integers. Andersson et al. [2] presented signature sort, running in expected linear time and space, which nevertheless performs much worse in practice than randomized quicksort. We showed earlier in [14] that the performance of signature sort can be enhanced using hashing and bitwise operators. This paper gives an implementation of that idea, and we then compare the performance of the algorithm with the existing randomized signature sort and randomized quicksort.
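As background for the radix-sort alternative mentioned above (not the paper's signature sort), a standard least-significant-digit radix sort on nonnegative integers looks like this; the 8-bit digit width is an arbitrary choice:

```python
def radix_sort(keys, bits=8):
    """Stable LSD radix sort of nonnegative integers.

    Makes one counting pass per `bits`-wide digit, so the total work
    is O(n * ceil(w / bits)) for w-bit keys -- linear in n when the
    word size is fixed.
    """
    if not keys:
        return keys
    mask = (1 << bits) - 1
    max_key = max(keys)
    shift = 0
    while shift == 0 or (max_key >> shift) > 0:
        buckets = [[] for _ in range(mask + 1)]
        for k in keys:
            buckets[(k >> shift) & mask].append(k)  # stable scatter
        keys = [k for b in buckets for k in b]      # gather in order
        shift += bits
    return keys
```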