Results 1–7 of 7
Dynamic Perfect Hashing: Upper and Lower Bounds
, 1990
Abstract

Cited by 127 (13 self)
The dynamic dictionary problem is considered: provide an algorithm for storing a dynamic set, allowing the operations insert, delete, and lookup. A dynamic perfect hashing strategy is given: a randomized algorithm for the dynamic dictionary problem that takes O(1) worst-case time for lookups and O(1) amortized expected time for insertions and deletions; it uses space proportional to the size of the set stored. Furthermore, lower bounds for the time complexity of a class of deterministic algorithms for the dictionary problem are proved. This class encompasses realistic hashing-based schemes that use linear space. Such algorithms have amortized worst-case time complexity Ω(log n) for a sequence of n insertions and ...
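The two-level construction underlying this line of work can be made concrete with a sketch. The following is an illustrative static (FKS-style) table in Python, not the paper's dynamic algorithm: the hash family, the retry rule, and the bucket sizing are simplifications chosen for brevity.

```python
import random

# Illustrative sketch (not the paper's algorithm): a static two-level
# hash table in the FKS style that dynamic perfect hashing builds on.
# Level one hashes keys into n buckets; each bucket of size b gets its
# own collision-free table of size b*b, found by retrying random
# parameters of a universal family h(x) = ((a*x + c) % p) % m.

P = (1 << 31) - 1  # a prime larger than any key used below

def make_hash(m):
    a = random.randrange(1, P)
    c = random.randrange(P)
    return lambda x: ((a * x + c) % P) % m

def build_perfect(keys):
    n = max(1, len(keys))
    top = make_hash(n)
    buckets = [[] for _ in range(n)]
    for k in keys:
        buckets[top(k)].append(k)
    tables = []
    for b in buckets:
        size = len(b) ** 2
        while True:  # expected O(1) retries when the table is |b|^2 cells
            h = make_hash(size) if size else (lambda x: 0)
            slots = [None] * max(size, 1)
            ok = True
            for k in b:
                i = h(k)
                if slots[i] is not None:
                    ok = False
                    break
                slots[i] = k
            if ok:
                tables.append((h, slots))
                break
    return top, tables

def lookup(struct, key):
    top, tables = struct
    h, slots = tables[top(key)]
    return slots[h(key)] == key  # O(1) worst-case: two hash evaluations

random.seed(7)
s = build_perfect([3, 19, 42, 77, 1001])
assert lookup(s, 42) and not lookup(s, 5)
```

The dynamic scheme adds rebuilding rules on insert/delete so that this O(1) lookup guarantee survives updates.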
Chernoff-Hoeffding Bounds for Applications with Limited Independence
 SIAM J. Discrete Math
, 1993
Abstract

Cited by 104 (10 self)
Chernoff-Hoeffding bounds are fundamental tools used in bounding the tail probabilities of the sums of bounded and independent random variables. We present a simple technique which gives slightly better bounds than these, and which more importantly requires only limited independence among the random variables, thereby importing a variety of standard results to the case of limited independence for free. Additional methods are also presented, and the aggregate results are sharp and provide a better understanding of the proof techniques behind these bounds. They also yield improved bounds for various tail probability distributions and enable improved approximation algorithms for job-shop scheduling. The "limited independence" result implies that a reduced amount of randomness and weaker sources of randomness are sufficient for randomized algorithms whose analyses use the Chernoff-Hoeffding bounds, e.g., the analysis of randomized algorithms for random sampling and oblivious packet routi...
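As a quick numeric illustration of the kind of tail bound involved (the classical fully-independent Hoeffding bound, not the paper's limited-independence strengthening), one can check Pr[S/n − 1/2 ≥ t] ≤ exp(−2nt²) for a sum S of n fair coin flips:

```python
import math
import random

# Empirically check the Hoeffding tail bound for n fair coin flips.
# The parameters n, t, trials are arbitrary choices for the demo.
random.seed(1)
n, t, trials = 100, 0.15, 20000

exceed = 0
for _ in range(trials):
    s = sum(random.randrange(2) for _ in range(n))
    if s / n - 0.5 >= t:
        exceed += 1

empirical = exceed / trials
bound = math.exp(-2 * n * t * t)  # exp(-4.5), about 0.011
assert empirical <= bound  # the bound dominates the observed tail
```

The paper's point is that bounds of this shape continue to hold when the coin flips are only k-wise independent, which is what lets derandomized or low-randomness algorithms reuse the standard analyses.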
Closed Hashing is Computable and Optimally Randomizable with Universal Hash Functions
Abstract

Cited by 6 (1 self)
Universal hash functions that exhibit c log n-wise independence are shown to give a performance in double hashing, uniform hashing and virtually any reasonable generalization of double hashing that has an expected probe count of 1/(1 − α) + O(1/n) for the insertion of the αn-th item into a table of size n, for any fixed α < 1. This performance is optimal. These results are derived from a novel formulation that overestimates the expected probe count by underestimating the presence of local items already inserted into the hash table, and from a very sharp analysis of the underlying stochastic structures formed by colliding items. Analogous bounds are attained for the expected r-th moment of the probe count, for any fixed r, and linear probing is also shown to achieve a performance with universal hash functions that is equivalent to the fully random case. Categories and Subject Descriptors: E.1 [Data]: Data Structures: arrays; tables; E.2 [Data]: Data Storage Representations: ha...
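For concreteness, classical double hashing insertion looks as follows. This toy sketch uses ad-hoc arithmetic in place of the c log n-wise independent hash families the paper analyzes, and the probe count it returns is the quantity whose expectation the abstract bounds:

```python
# Toy double hashing insert; returns the number of probes used.
# The two hash values are illustrative stand-ins, not a real family.

def insert(table, key):
    n = len(table)  # n should be prime so every step size cycles the table
    h1 = key % n
    h2 = 1 + (key // n) % (n - 1)   # step size in [1, n-1], never 0
    for i in range(n):
        slot = (h1 + i * h2) % n
        if table[slot] is None or table[slot] == key:
            table[slot] = key
            return i + 1
    raise RuntimeError("table full")

t = [None] * 13
counts = [insert(t, k) for k in (5, 18, 31, 7, 20)]
# 5, 18 and 31 share the home slot 5, so later keys need extra probes
assert counts[0] == 1 and max(counts) > 1
```

As the table fills to a load factor α, the expected value of this probe count approaches 1/(1 − α), which is what the abstract calls optimal.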
Double Hashing is Computable and Randomizable with Universal Hash Functions
Abstract

Cited by 3 (1 self)
Universal hash functions that exhibit c log n-wise independence are shown to give a performance in double hashing and virtually any reasonable generalization of double hashing that has an expected probe count of 1/(1 − alpha) + epsilon for the insertion of the (alpha n)-th item into a table of size n, for any fixed alpha < 1 and epsilon > 0. This performance is within epsilon of optimal. These results are derived from a novel formulation that overestimates the expected probe count by underestimating the presence of partial items already inserted into the hash table, and from a sharp analysis of the underlying stochastic structures formed by colliding items.
On the statistical dependencies of coalesced hashing and their implications for both full and limited independence (Extended Abstract)
Abstract
Alan Siegel. 1 Summary. This paper gives the first optimal bounds for coalesced hashing schemes in the case of limited randomness, and thereby establishes the analytic performance of these schemes in a model that supports formal randomized computation. As a byproduct of this work, we attain a much simpler analysis of coalesced hashing schemes, which provides more information about the statistics of the underlying processes. We present the generating functions that govern the chain distribution and probe performance for coalesced hashing schemes, including asymptotic formulations when cellars are used. 2 Background. In coalesced hashing, a sequence of distinct keys D = (x_1, x_2, ..., x_{α(C+n)}) that belong to a universe [0..m] is stored in a two-part table T[−C..n−1]. A hash function h, which maps [0..m] into the probe region [0..n−1], is used to insert a key x as follows. If table slot T[h(x)] is vacant, x is then stored in T[h(x)]. Otherwise x is called a ...
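The textbook insertion rule for coalesced hashing, sketched below without a cellar (so this does not reproduce the paper's two-part table T[−C..n−1]), shows where the "coalescing" of chains comes from: an overflowing key is placed in the highest free cell and linked onto the chain it collided with, so chains with different home slots can merge.

```python
# Toy coalesced hashing insert (no cellar; illustrative only).
# table[i] holds a key or None; links[i] is the next slot in the
# chain through slot i, or -1 at the end of a chain.

def insert(table, links, h, key):
    i = h(key)
    if table[i] is None:
        table[i] = key
        return
    while links[i] != -1:      # walk to the end of the chain
        i = links[i]
    j = len(table) - 1         # take the highest-numbered free cell
    while table[j] is not None:
        j -= 1
    table[j] = key
    links[i] = j               # chains sharing cells now coalesce

n = 7
table = [None] * n
links = [-1] * n
h = lambda k: k % n
for k in (3, 10, 17, 5):
    insert(table, links, h, k)
# 3, 10, 17 all hash to slot 3; key 5 then finds its own home slot
# occupied by 17, so its chain coalesces with theirs.
assert table[3] == 3 and 10 in table and 17 in table and 5 in table
```

The statistical dependencies created by this merging are exactly what makes the limited-independence analysis in the paper delicate.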
Lecturer: Mihai Pătrașcu
, 2005
Abstract
In this lecture, we discuss hashing as a solution to the dictionary/membership problem. Various results on hashing are presented, with emphasis on static perfect hashing and Cuckoo hashing. 2 Dictionary/Membership Problem In the dictionary/membership problem, we want to keep a set S of items, possibly with some extra information associated with each one of them. (From now on, we denote the number of elements in S by n.) For the membership problem, the goal is to create a data structure that allows us to ask whether a given item x is in S or not. For a dictionary, the data structure should also return the information associated with x. For example, S can be a set of Swahili words such that each of the words is associated with a piece of text which describes its meaning. (Duh!) The problems have two versions: static and dynamic. In the static version, S is predetermined and never changes. On the other hand, the dynamic version allows items to be inserted to and removed from S. 3 Hashing with Chaining Let U denote the universe of items, and let m be a positive integer. A hash function is a function from U to Z_m.
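A minimal hashing-with-chaining dictionary matching these definitions might look like this; Python's built-in hash stands in for the hash function h : U → Z_m, and the class and method names are illustrative:

```python
# Hashing with chaining: each of the m table cells holds a list
# (chain) of (key, value) pairs whose keys hash to that cell.

class ChainedDict:
    def __init__(self, m=8):
        self.m = m
        self.table = [[] for _ in range(m)]

    def _chain(self, key):
        return self.table[hash(key) % self.m]

    def insert(self, key, value):
        chain = self._chain(key)
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)  # overwrite an existing key
                return
        chain.append((key, value))

    def member(self, key):  # the membership query: is key in S?
        return any(k == key for k, _ in self._chain(key))

    def lookup(self, key):  # the dictionary query: associated info
        for k, v in self._chain(key):
            if k == key:
                return v
        raise KeyError(key)

d = ChainedDict()
d.insert("maji", "water")
d.insert("simba", "lion")
assert d.member("maji") and not d.member("twiga")
assert d.lookup("simba") == "lion"
```

With a random hash function, each chain has expected length n/m, so queries take expected O(1 + n/m) time; the lecture's perfect hashing and Cuckoo hashing results improve this to worst-case O(1) lookups.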
6.851: Advanced Data Structures Spring 2010
, 2010
Abstract
In the last lecture we introduced ray shooting, where we determine which is the first object in a set intersected by a given ray. We overviewed how to solve this problem if our objects are simple polygons. This lecture explores ray shooting more generally, beginning with data structures designed to perform halfspace and simplex range queries, such as partition trees, and continuing with an explanation of how to use these data structures to perform ray shooting. 2 Partition Trees Problem. Given a point set S = {p_1, p_2, ..., p_n}, we would like to perform two sorts of queries: 1. Halfspace Range Queries: find properties relating to the subset of S on one side of a line h_q (e.g., how many points are above h_q?). 2. Simplex Range Queries: find properties relating to the subset of S inside a simplex t_q (e.g., how many points lie inside t_q?). In two dimensions, a simplex is a triangle, and we will use two-dimensional examples for the remainder of these notes. Idea. Partition S into r disjoint subsets S_1, S_2, ..., S_r. Each subset S_i is associated with a triangle t_i that contains the points in that subset (the triangles need not be disjoint). We call this partition ...
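Both query types can be answered by a brute-force linear scan on a tiny point set, which makes the definitions concrete; partition trees exist precisely to beat this O(n) scan. The functions and the sample points below are illustrative choices:

```python
# Brute-force range counting in 2D (illustration only).

def above_line(points, a, b):
    """Halfspace range count: how many points lie above y = a*x + b?"""
    return sum(1 for (x, y) in points if y > a * x + b)

def in_triangle(points, tri):
    """Simplex range count: how many points lie inside triangle tri?"""
    (x1, y1), (x2, y2), (x3, y3) = tri

    def side(px, py, ax, ay, bx, by):
        # sign of the cross product: which side of edge a->b is p on?
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    count = 0
    for (px, py) in points:
        d1 = side(px, py, x1, y1, x2, y2)
        d2 = side(px, py, x2, y2, x3, y3)
        d3 = side(px, py, x3, y3, x1, y1)
        # inside iff the point is on the same side of all three edges
        if (d1 >= 0 and d2 >= 0 and d3 >= 0) or \
           (d1 <= 0 and d2 <= 0 and d3 <= 0):
            count += 1
    return count

S = [(0, 0), (1, 2), (2, 1), (3, 3)]
assert above_line(S, 1, 0) == 1                       # only (1, 2) is above y = x
assert in_triangle(S, ((-1, -1), (5, -1), (2, 6))) == 4
```

A partition tree answers the same counting queries in sublinear time by recursing only into the subsets S_i whose triangles t_i the query region can intersect.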