Results 1–5 of 5
THE MAXIMUM DISPLACEMENT FOR LINEAR PROBING HASHING, 2008
Abstract

Cited by 2 (1 self)
In this paper we study the maximum displacement for linear probing hashing. We use the standard probabilistic model together with the insertion policy known as First-Come-First-Served. The results are asymptotic in nature and focus on dense hash tables; that is, the number of occupied cells n and the size of the hash table m tend to infinity with ratio n/m → 1. We present distributions and moments for the size of the maximum displacement, as well as for the number of items with displacement larger than some critical value. This is done via process convergence of the (appropriately normalized) length of the largest block of consecutive occupied cells as the total number of occupied cells n varies.
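The First-Come-First-Served policy studied above is easy to state in code. The following Python sketch (the identity-mod hash, the seed, and the table parameters are illustrative assumptions, not from the paper) fills a dense table with n/m = 0.99 and records the maximum displacement:

```python
import random

def insert_fcfs(table, key):
    """First-Come-First-Served linear probing: the new key probes forward until
    it finds an empty cell; earlier arrivals never move. Returns the displacement."""
    m = len(table)
    h = key % m  # hypothetical hash function: identity mod table size
    d = 0
    while table[(h + d) % m] is not None:
        d += 1
    table[(h + d) % m] = key
    return d

random.seed(42)
m, n = 1000, 990                     # dense table: n/m = 0.99, close to 1
table = [None] * m
displacements = [insert_fcfs(table, k) for k in random.sample(range(10**6), n)]
max_displacement = max(displacements)
```

Repeating the experiment over many random key sets would give an empirical view of the maximum-displacement distribution the paper characterizes analytically.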
Efficient data structures for sparse network representation
Abstract
Modern-day computers are characterized by a striking contrast between the processing power of the CPU and the latency of main memory accesses. If the data processed is both large compared to processor caches and sparse or high-dimensional in nature, as is commonly the case in complex network research, the main memory latency can become a performance bottleneck. In this article, we present a cache-efficient data structure, a variant of a linear probing hash table, for representing edge sets of such networks. The performance benchmarks show that it is, indeed, quite superior to its commonly used counterparts in this application. In addition, its memory footprint only exceeds the absolute minimum by a small constant factor. The practical usability of our approach has been well demonstrated in the study of very large real-world networks.
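As a rough illustration of the idea (not the authors' cache-tuned implementation), a linear probing hash table can hold an edge set in one flat array, so collision probes scan contiguous memory; the class name and fixed capacity below are assumptions for the sketch:

```python
class EdgeSet:
    """Open-addressing (linear probing) set of undirected edges kept in a single
    flat list, so probe sequences touch consecutive slots. Illustrative sketch:
    no resizing, so capacity is assumed to comfortably exceed the edge count."""

    def __init__(self, capacity=1024):
        self.slots = [None] * capacity
        self.size = 0

    def _key(self, u, v):
        # Canonical order so (u, v) and (v, u) denote the same undirected edge.
        return (u, v) if u <= v else (v, u)

    def add(self, u, v):
        k = self._key(u, v)
        i = hash(k) % len(self.slots)
        while self.slots[i] is not None:
            if self.slots[i] == k:
                return            # edge already present
            i = (i + 1) % len(self.slots)
        self.slots[i] = k
        self.size += 1

    def __contains__(self, edge):
        k = self._key(*edge)
        i = hash(k) % len(self.slots)
        while self.slots[i] is not None:
            if self.slots[i] == k:
                return True
            i = (i + 1) % len(self.slots)
        return False

es = EdgeSet()
es.add(1, 2)
es.add(2, 1)   # same undirected edge, deduplicated
es.add(3, 4)
```

A cache-conscious C implementation would additionally pack the pairs into a contiguous integer array; the Python version only shows the probing logic.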
INDIVIDUAL DISPLACEMENTS IN HASHING WITH COALESCED CHAINS
Abstract
We study the asymptotic distribution of the displacements in hashing with coalesced chains, for both late-insertion and early-insertion. Asymptotic formulas for means and variances follow. The method uses Poissonization and some stochastic calculus.
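For readers unfamiliar with the scheme, a minimal sketch of the late-insertion variant (identity-mod hash and table size are illustrative assumptions) looks like this: a colliding key goes to the highest empty cell and is linked onto the end of its chain, so chains from different hash addresses may coalesce:

```python
def coalesced_insert(keys, m):
    """Late-insertion coalesced hashing: keys land at their hash address if it
    is free; otherwise they occupy the highest empty cell and are appended to
    the chain rooted at their hash address. Assumes len(keys) <= m."""
    table = [None] * m   # stored keys
    nxt = [-1] * m       # chain links (-1 = end of chain)
    free = m - 1         # highest cell not yet known to be occupied
    for k in keys:
        h = k % m        # hypothetical hash function
        if table[h] is None:
            table[h] = k
            continue
        i = h
        while nxt[i] != -1:              # walk to the end of the chain
            i = nxt[i]
        while table[free] is not None:   # find the highest empty cell
            free -= 1
        table[free] = k
        nxt[i] = free                    # late insertion: link at chain's end
    return table, nxt

# Three keys that all hash to cell 0 in a table of size 5.
table, nxt = coalesced_insert([0, 5, 10], 5)
```

Early-insertion differs only in where the new key is spliced into the chain (immediately after the hash cell rather than at the end).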
Distributional Analysis of the Parking Problem and Robin Hood Linear Probing Hashing with Buckets, 2009
Abstract
This paper presents the first distributional analysis of both a parking problem and a linear probing hashing scheme with buckets of size b. The exact distribution of the cost of successful searches for a bα-full table is obtained, and moments and asymptotic results are derived. With the use of the Poisson transform, distributional results are also obtained for tables of size m with n elements. A key element in the analysis is the use of a new family of numbers, called Tuba Numbers, that satisfies a recurrence resembling that of the Bernoulli numbers. These numbers may prove helpful in studying recurrences involving truncated generating functions, as well as in other problems related to buckets.
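The Robin Hood variant of linear probing named in the title can be sketched as follows (bucket size b = 1 and the identity-mod hash are simplifying assumptions; the paper treats general buckets): on a collision, the key currently farther from its home slot keeps the cell, and the other continues probing:

```python
def robin_hood_insert(table, key, m):
    """Robin Hood linear probing (bucket size 1): at each occupied slot, compare
    displacements; the "poorer" key (larger displacement) keeps the slot and the
    "richer" one is evicted and probes on. Assumes the table is not full."""
    d = 0  # current displacement of the key being placed
    while True:
        i = (key % m + d) % m            # hypothetical hash: key mod m
        if table[i] is None:
            table[i] = key
            return
        resident = table[i]
        resident_d = (i - resident % m) % m
        if resident_d < d:               # resident is richer: evict it
            table[i] = key
            key, d = resident, resident_d
        d += 1                           # continue probing with the evicted key

m = 8
table = [None] * m
for k in [0, 8, 16, 1]:                  # 0, 8, 16 all hash to slot 0
    robin_hood_insert(table, k, m)
```

Robin Hood does not change the total displacement, only its distribution: it equalizes search costs, which is what makes the variance analysis in the paper attractive.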
Analysis of Linear Time Sorting Algorithms, 2008 (doi:10.1093/comjnl/bxm097)
Abstract
We derive CPU time formulae for the two simplest linear-time sorting algorithms, linear probing sort and bucket sort, as a function of the load factor, and show agreement with experimentally measured CPU times. This allows us to compute optimal load factors for each algorithm, whose values have previously been identified only approximately in the literature. We also present a simple model of cache latency and apply it not only to linear probing sort and bucket sort, where the bulk of the latency is due to random access, but also to the log-linear algorithm quicksort, where the access is primarily sequential, and again show agreement with experimental CPU times. With minor modifications, our model also fits CPU times previously reported by LaMarca and Ladner for radix sort, and by Rahman and Raman for most-significant-digit radix sort, Flashsort1, and memory-tuned quicksort. Keywords: linear-time sorting algorithms; cache latency; linear probing sort; bucket sort; Flashsort1; radix sort; quicksort
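Bucket sort's dependence on the load factor, the quantity analysed above, can be sketched as follows (uniform keys in [0, 1) and the default load factor are assumptions for illustration, not the paper's measured optimum):

```python
import random

def bucket_sort(xs, load_factor=0.5):
    """Bucket sort for keys assumed uniform in [0, 1). The load factor is the
    expected number of keys per bucket: fewer keys per bucket means cheaper
    within-bucket sorting but more memory and cache traffic."""
    n = len(xs)
    nb = max(1, int(n / load_factor))    # number of buckets
    buckets = [[] for _ in range(nb)]
    for x in xs:
        buckets[int(x * nb)].append(x)   # x < 1 guarantees index < nb
    out = []
    for b in buckets:
        out.extend(sorted(b))            # stand-in for per-bucket insertion sort
    return out

random.seed(0)
data = [random.random() for _ in range(200)]
result = bucket_sort(data)
```

Timing this function across a range of `load_factor` values is the kind of experiment whose cost the paper models analytically, including the cache-latency term.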