Results 1 - 6 of 6
Dynamic Perfect Hashing: Upper and Lower Bounds
, 1990
"... The dynamic dictionary problem is considered: provide an algorithm for storing a dynamic set, allowing the operations insert, delete, and lookup. A dynamic perfect hashing strategy is given: a randomized algorithm for the dynamic dictionary problem that takes O(1) worstcase time for lookups and ..."
Abstract

Cited by 142 (14 self)
 Add to MetaCart
The dynamic dictionary problem is considered: provide an algorithm for storing a dynamic set, allowing the operations insert, delete, and lookup. A dynamic perfect hashing strategy is given: a randomized algorithm for the dynamic dictionary problem that takes O(1) worst-case time for lookups and O(1) amortized expected time for insertions and deletions; it uses space proportional to the size of the set stored. Furthermore, lower bounds for the time complexity of a class of deterministic algorithms for the dictionary problem are proved. This class encompasses realistic hashing-based schemes that use linear space. Such algorithms have amortized worst-case time complexity Ω(log n) for a sequence of n insertions and lookups.
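The dynamic dictionary interface named in this abstract (insert, delete, lookup) can be sketched as a thin wrapper over Python's built-in hash table; note this gives O(1) *expected* lookups, not the paper's O(1) worst-case guarantee, and the class name is illustrative only:

```python
class DynamicDictionary:
    """Minimal sketch of the insert/delete/lookup interface,
    backed by Python's built-in hash table (O(1) expected time,
    not the worst-case bound of dynamic perfect hashing)."""

    def __init__(self):
        self._table = {}

    def insert(self, key, value=True):
        self._table[key] = value

    def delete(self, key):
        # Deleting an absent key is a no-op.
        self._table.pop(key, None)

    def lookup(self, key):
        # Returns the stored value, or None if the key is absent.
        return self._table.get(key)
```

Space is proportional to the number of stored keys, matching the linear-space regime the paper's lower bound addresses.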
Backyard Cuckoo Hashing: Constant Worst-Case Operations with a Succinct Representation
, 2010
"... The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee constanttime operations in the worst case with high probability, and in terms of space consumption ..."
Abstract

Cited by 13 (5 self)
 Add to MetaCart
The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee constant-time operations in the worst case with high probability, and in terms of space consumption there are known constructions that use essentially optimal space. In this paper we settle two fundamental open problems:
• We construct the first dynamic dictionary that enjoys the best of both worlds: we present a two-level variant of cuckoo hashing that stores n elements using (1+ϵ)n memory words, and guarantees constant-time operations in the worst case with high probability. Specifically, for any ϵ = Ω((log log n / log n)^{1/2}) and for any sequence of polynomially many operations, with high probability over the randomness of the initialization phase, all operations are performed in constant time which is independent of ϵ. The construction is based on augmenting cuckoo hashing with a “backyard” that handles a large fraction of the elements, together with a de-amortized perfect hashing scheme for eliminating the dependency on ϵ.
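The building block this paper augments is plain two-table cuckoo hashing, where every key lives in one of exactly two positions, so a lookup probes at most two cells. A minimal sketch (not the paper's backyard variant; the class name, table size, and eviction limit are illustrative):

```python
import random


class CuckooHashTable:
    """Plain two-table cuckoo hashing: each key is stored at one of
    two hash positions, so lookups inspect at most two cells."""

    def __init__(self, size=16):
        self.size = size
        self.t1 = [None] * size
        self.t2 = [None] * size
        # Fresh random seeds stand in for choosing new hash functions.
        self.seed1 = random.randrange(1 << 30)
        self.seed2 = random.randrange(1 << 30)

    def _h1(self, key):
        return hash((self.seed1, key)) % self.size

    def _h2(self, key):
        return hash((self.seed2, key)) % self.size

    def lookup(self, key):
        # Worst-case constant time: exactly two probes.
        return self.t1[self._h1(key)] == key or self.t2[self._h2(key)] == key

    def insert(self, key, max_kicks=32):
        if self.lookup(key):
            return
        for _ in range(max_kicks):
            # Place the key in table 1, evicting any occupant...
            i = self._h1(key)
            key, self.t1[i] = self.t1[i], key
            if key is None:
                return
            # ...and push the evicted key into table 2, and so on.
            j = self._h2(key)
            key, self.t2[j] = self.t2[j], key
            if key is None:
                return
        # Eviction chain too long: rebuild with new hash functions.
        self._rehash(key)

    def _rehash(self, pending):
        keys = [k for k in self.t1 + self.t2 if k is not None] + [pending]
        self.__init__(self.size * 2)
        for k in keys:
            self.insert(k)
```

The rehash-on-failure step is exactly the expensive event the paper de-amortizes to obtain worst-case constant-time updates.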
The limits of buffering: A tight lower bound for dynamic membership in the external memory model
 In Proc. ACM Symposium on Theory of Computing
, 2010
"... We study the dynamic membership (or dynamic dictionary) problem, which is one of the most fundamental problems in data structures. We study the problem in the external memory model with cell size b bits and cache size m bits. We prove that if the amortized cost of updates is at most 0.999 (or any ot ..."
Abstract

Cited by 6 (2 self)
 Add to MetaCart
(Show Context)
We study the dynamic membership (or dynamic dictionary) problem, which is one of the most fundamental problems in data structures. We study the problem in the external memory model with cell size b bits and cache size m bits. We prove that if the amortized cost of updates is at most 0.999 (or any other constant < 1), then the query cost must be Ω(log_{b log n}(n/m)), where n is the number of elements in the dictionary. In contrast, when the update time is allowed to be 1 + o(1), a bit vector or a hash table gives query time O(1). Thus, this is a threshold phenomenon for data structures. This lower bound answers a folklore conjecture of the external memory community. Since almost any data structure task can solve membership, our lower bound implies a dichotomy between two alternatives: (i) make the amortized update time at least 1 (so the data structure does not buffer, and we lose one of the main potential advantages of the cache), or (ii) make the query time at least roughly logarithmic in n. Our result holds even when the updates and queries are chosen uniformly at random and there are no deletions; it holds for randomized data structures, holds when the universe size is O(n), and does not make any restrictive assumptions such as indivisibility. All of the lower bounds we prove hold regardless of the space consumption of the data structure, while the upper bounds need only linear space. The lower bound has some striking implications for external memory data structures: it shows that the query complexities of many problems such as 1D range counting, predecessor, rank-select, and many others, are all the same.
On the Cell Probe Complexity of Dynamic Membership
"... We study the dynamic membership problem, one of the most fundamental data structure problems, in the cell probe model with an arbitrary cell size. We consider a cell probe model equipped with a cache that consists of at least a constant number of cells; reading or writing the cache is free of charge ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
(Show Context)
We study the dynamic membership problem, one of the most fundamental data structure problems, in the cell probe model with an arbitrary cell size. We consider a cell probe model equipped with a cache that consists of at least a constant number of cells; reading or writing the cache is free of charge. For nearly all common data structures, it is known that with sufficiently large cells together with the cache, the amortized update cost can be lowered significantly, to o(1). In this paper, we show that this is not the case for the dynamic membership problem. Specifically, for any deterministic membership data structure under a random input sequence, if the expected average query cost is no more than 1 + δ for some small constant δ, we prove that the expected amortized update cost must be at least Ω(1); that is, it does not benefit from large block writes (and a cache). The space the structure uses is irrelevant to this lower bound. We also extend this lower bound to randomized membership structures, using a variant of Yao’s minimax principle. Finally, we show that the structure cannot do better even if it is allowed to answer a query mistakenly with a small constant probability.
Adjacency queries in dynamic sparse graphs
 Inf. Process. Lett
, 2007
"... We deal with the problem of maintaining a dynamic graph so that queries of the form “is there an edge between u and v? ” are processed fast. We consider graphs of bounded arboricity, i.e., graphs with no dense subgraphs, like for example planar graphs. Brodal and Fagerberg [WADS’99] described a very ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
(Show Context)
We deal with the problem of maintaining a dynamic graph so that queries of the form “is there an edge between u and v?” are processed fast. We consider graphs of bounded arboricity, i.e., graphs with no dense subgraphs, such as planar graphs. Brodal and Fagerberg [WADS’99] described a very simple linear-size data structure which processes queries in constant worst-case time and performs insertions and deletions in O(1) and O(log n) amortized time, respectively. We show a complementary result: their data structure can be used to get O(log n) worst-case time for queries, O(1) amortized time for insertions, and O(1) worst-case time for deletions. Moreover, our analysis shows that by combining the data structure of Brodal and Fagerberg with efficient dictionaries one gets an O(log log log n) worst-case time bound for queries and deletions and O(log log log n) amortized time for insertions, with the size of the data structure still linear. This last result holds even for graphs of arboricity bounded by O(log^k n), for some constant k.
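The core idea behind the Brodal–Fagerberg structure is to store each edge at only one of its endpoints so that every vertex keeps a short out-list, and to answer an adjacency query by scanning just the two out-lists of its endpoints. A minimal sketch (the class name is illustrative, and the smaller-list heuristic below merely stands in for the paper's careful edge-reorientation scheme):

```python
class BoundedOutdegreeGraph:
    """Sketch of one-sided edge storage for adjacency queries:
    each edge {u, v} is recorded at exactly one endpoint, so a
    query only checks the two out-lists of u and v."""

    def __init__(self):
        self.out = {}  # vertex -> set of neighbors whose edge it stores

    def insert_edge(self, u, v):
        self.out.setdefault(u, set())
        self.out.setdefault(v, set())
        # Store the edge at the endpoint with the smaller out-list;
        # for bounded-arboricity graphs a reorientation scheme keeps
        # all out-lists short.
        if len(self.out[u]) <= len(self.out[v]):
            self.out[u].add(v)
        else:
            self.out[v].add(u)

    def delete_edge(self, u, v):
        # The edge is stored at only one endpoint; discard from both.
        self.out.get(u, set()).discard(v)
        self.out.get(v, set()).discard(u)

    def adjacent(self, u, v):
        return v in self.out.get(u, set()) or u in self.out.get(v, set())
```

With out-lists of bounded size, `adjacent` inspects only O(1) stored edges per query.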
Fast Algorithms for the TRON Game on Trees
, 1990
"... TRON is the following game which can be played on any graph: Two players choose lternately a node of the graph subject to the requirement that each player must choose a node which is adjacent to his previously chosen node and such that every node is chosen only once. In this paper O(n) and O(nv/ ..."
Abstract
 Add to MetaCart
TRON is the following game, which can be played on any graph: two players alternately choose a node of the graph, subject to the requirement that each player must choose a node adjacent to his previously chosen node and that every node is chosen only once. In this paper, O(n) and O(n√d) algorithms are given for deciding whether there is a winning strategy for the first player when TRON is played on a given tree, for the variants with and without specified starting nodes, respectively. The problem is shown to be both NP-hard and coNP-hard for connected undirected graphs in general.