Results

### On Worst Case Robin-Hood Hashing

- SIAM J. Computing, 2004

Abstract

We consider open addressing hashing and implement it using the Robin Hood strategy; that is, in case of collision, the element that has traveled the farthest stays in the slot. We hash ∼ αn elements into a table of size n, where each probe is independent and uniformly distributed over the table and α < 1 is a constant. Let Mₙ be the maximum search time over all elements in the table. We show that with probability tending to one, Mₙ ∈ [log₂ log n + σ, log₂ log n + τ] for some constants σ, τ depending only on α. This is an exponential improvement over the maximum search time under the standard FCFS (first come, first served) collision strategy, and virtually matches the performance of multiple-choice hash methods.
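The Robin Hood rule the abstract describes can be sketched in a few lines. This is a minimal illustration using linear probing for concreteness (the paper analyzes independent uniform probes); `robin_hood_insert` and the `(key, distance)` slot layout are choices made here, not the paper's notation:

```python
def robin_hood_insert(table, key):
    """Insert `key` into an open-addressing table using the Robin Hood rule:
    on collision, the element that has traveled farther from its home slot
    keeps the slot, and the other continues probing.
    Slots hold (key, distance_from_home) pairs or None.
    Assumes the table is not full (load factor < 1)."""
    n = len(table)
    idx = hash(key) % n
    dist = 0  # how far the incoming key has traveled from its home slot
    while True:
        if table[idx] is None:
            table[idx] = (key, dist)
            return
        resident_key, resident_dist = table[idx]
        if resident_dist < dist:
            # The resident is "richer" (closer to home): evict it and
            # continue the probe sequence on its behalf.
            table[idx] = (key, dist)
            key, dist = resident_key, resident_dist
        idx = (idx + 1) % n
        dist += 1


table = [None] * 16
for k in (0, 16, 32):  # all three keys share home slot 0
    robin_hood_insert(table, k)
```

After the three colliding inserts the keys occupy consecutive slots with displacements 0, 1, 2 — no single element is pushed disproportionately far, which is the intuition behind the doubly logarithmic maximum search time.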

### Maximum Displacement for Linear Probing Hashing

Abstract

In this paper we study the maximum displacement for linear probing hashing. We use the standard probabilistic model together with the insertion policy known as First-Come-First-Served (FCFS). The results are asymptotic in nature and focus on dense hash tables; that is, the number of occupied cells n and the size of the hash table m tend to infinity with ratio n/m → 1. We present distributions and moments for the size of the maximum displacement, as well as for the number of items with displacement larger than some critical value. This is done via process convergence of the (appropriately normalized) length of the largest block of consecutive occupied cells as the total number of occupied cells n varies.
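The quantity the paper analyzes — the maximum displacement under FCFS linear probing — can be observed empirically with a small simulation. This is an illustrative sketch under the standard model (each item's home cell uniform and independent), not the paper's analysis; the function name and parameters are chosen here:

```python
import random


def fcfs_max_displacement(m, n, seed=0):
    """Hash n items into a table of m cells with FCFS linear probing:
    each item gets a uniform random home cell and, on collision, walks
    forward (wrapping around) to the first empty cell. The first-arriving
    item always keeps its cell. Returns the maximum displacement observed.
    Assumes n <= m."""
    rng = random.Random(seed)
    occupied = [False] * m
    max_disp = 0
    for _ in range(n):
        idx = rng.randrange(m)  # home cell
        disp = 0
        while occupied[idx]:
            idx = (idx + 1) % m
            disp += 1
        occupied[idx] = True
        max_disp = max(max_disp, disp)
    return max_disp
```

Running this for increasing load n/m → 1 shows the maximum displacement growing with the length of the largest block of consecutive occupied cells, the object whose process convergence drives the paper's results.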

### Coherent Parallel Hashing

, 2011

Abstract

(a) The flower image is a 3820 × 3820 image (14.5 million pixels) and contains 3.7 million non-white pixels. The coordinates of these pixels are shown as colors in (b). We store the image in a hash table under a 0.99 load factor: the hash table contains only 3.73 million entries. The pixel coordinates are used as keys for hashing. (c) The table obtained with a typical randomizing hash function: keys are randomly spread and all coherence is lost. (d) Our spatially coherent hash table, built in parallel on the GPU. The table is built in 15 ms on a GeForce GTX 480, and the image is reconstructed from the hash in 3.5 ms. The visible structures are due to preserved coherence. This translates to faster access, as neighboring threads perform similar operations and access nearby memory. (e) Neighboring keys are kept together during probing, thereby improving the coherence of memory accesses of neighboring threads. Recent spatial hashing schemes hash millions of keys in parallel, compacting sparse spatial data in small hash tables while still allowing for fast access from the GPU. Unfortunately, available schemes suffer from two drawbacks: multiple runs of the construction process are often required before success, and the random nature of the hash functions decreases access performance. We introduce a new parallel hashing scheme which reaches high
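The core idea — neighboring keys should land in neighboring table slots so that neighboring threads touch nearby memory — can be illustrated with a deliberately simplified sequential sketch. This is not the paper's GPU construction; the order-preserving base-slot formula and linear probing below are assumptions made for illustration only:

```python
def coherent_insert(table, key, domain_size):
    """Illustrative coherence-preserving hash insert (a simplification of
    the idea, not the paper's scheme): the base slot scales linearly with
    the key, so keys that are close in the sparse domain get close base
    slots, and linear probing keeps colliding neighbors adjacent.
    Assumes 0 <= key < domain_size and the table is not full."""
    m = len(table)
    base = key * m // domain_size  # order-preserving base slot
    idx = base
    while table[idx] is not None:
        idx = (idx + 1) % m  # linear probing keeps neighbors together
    table[idx] = key


# Sparse keys from a domain of 256, packed into 8 slots (high load factor).
table = [None] * 8
for key in (0, 1, 2, 100, 101):
    coherent_insert(table, key, 256)
```

With a randomizing hash the five keys would scatter across the table; here the two clusters of neighboring keys occupy two runs of adjacent slots, which is the access-coherence property the abstract argues speeds up GPU reads.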