Results 1–9 of 9
Backyard Cuckoo Hashing: Constant Worst-Case Operations with a Succinct Representation
, 2010
Abstract

Cited by 12 (5 self)
The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee constant-time operations in the worst case with high probability, and in terms of space consumption there are known constructions that use essentially optimal space. In this paper we settle two fundamental open problems: • We construct the first dynamic dictionary that enjoys the best of both worlds: we present a two-level variant of cuckoo hashing that stores n elements using (1+ϵ)n memory words, and guarantees constant-time operations in the worst case with high probability. Specifically, for any ϵ = Ω((log log n / log n)^{1/2}) and for any sequence of polynomially many operations, with high probability over the randomness of the initialization phase, all operations are performed in constant time which is independent of ϵ. The construction is based on augmenting cuckoo hashing with a “backyard” that handles a large fraction of the elements, together with a de-amortized perfect hashing scheme for eliminating the dependency on ϵ.
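The basic cuckoo-hashing scheme that the construction above builds on can be sketched as follows. This is a minimal illustration of plain two-table cuckoo hashing only, not the paper's two-level backyard construction; class and parameter names are invented for the example.

```python
import random

class CuckooHashTable:
    """Sketch of standard two-table cuckoo hashing.

    Illustrative only: the paper augments this basic scheme with a
    "backyard" and a de-amortized perfect hashing structure.
    """

    def __init__(self, capacity=16, max_kicks=32):
        self.capacity = capacity
        self.max_kicks = max_kicks          # evictions allowed before a rehash
        self.tables = [[None] * capacity, [None] * capacity]
        self.seeds = [random.random(), random.random()]

    def _slot(self, which, key):
        # Seeded stand-in for the two independent hash functions.
        return hash((self.seeds[which], key)) % self.capacity

    def lookup(self, key):
        # Worst-case constant time: exactly two probes, one per table.
        return any(self.tables[t][self._slot(t, key)] == key for t in (0, 1))

    def insert(self, key):
        if self.lookup(key):
            return
        for _ in range(self.max_kicks):
            for t in (0, 1):
                i = self._slot(t, key)
                if self.tables[t][i] is None:
                    self.tables[t][i] = key
                    return
            # Both candidate slots full: evict the occupant of table 0
            # and continue inserting the evicted key instead.
            i = self._slot(0, key)
            key, self.tables[0][i] = self.tables[0][i], key
        self._rehash(key)

    def _rehash(self, pending):
        # Too many evictions: rebuild with fresh seeds and more room.
        keys = [k for tab in self.tables for k in tab if k is not None]
        self.__init__(self.capacity * 2, self.max_kicks)
        for k in keys + [pending]:
            self.insert(k)
```

Lookups probe two fixed slots regardless of table contents, which is what gives cuckoo hashing its worst-case constant lookup time; the insertion eviction chain is the part the paper de-amortizes.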
Hereditary history preserving bisimilarity is undecidable
 STACS 2000, 17th Annual Symposium on Theoretical Aspects of Computer Science, Proceedings, volume 1770 of Lecture Notes in Computer Science
, 2000
Abstract

Cited by 9 (1 self)
History preserving bisimilarity (hp-bisimilarity) and hereditary history preserving bisimilarity (hhp-bisimilarity) are behavioural equivalences taking into account causal relationships between events of concurrent systems. Their prominent feature is being preserved under action refinement, an operation important for the top-down design of concurrent systems. We show that, unlike hp-bisimilarity, checking hhp-bisimilarity for finite labelled asynchronous transition systems is not decidable, by a reduction from the halting problem of 2-counter machines. To make the proof more transparent we introduce an intermediate problem of checking domino bisimilarity for origin constrained tiling systems, whose undecidability is interesting in its own right. We also argue that the undecidability of hhp-bisimilarity holds for finite labelled 1-safe Petri nets. 1 Introduction. The notion of behavioural equivalence that has attracted most attention in concurrency theory is bisimilarity, originally introduced by Park [20] and Milner [15]; concurrent programs are considered to have the same meaning if they are bisimilar. The prominent role of bisimilarity is due to many pleasant properties it enjoys; we mention a few of them here. A process of checking whether two transition systems are bisimilar can be seen as a two-player game which is in fact an Ehrenfeucht-Fraïssé type of game
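Ordinary (strong) bisimilarity on a finite labelled transition system, the decidable baseline that the paper contrasts with, can be checked by a naive greatest-fixpoint computation. This is a sketch with invented names; hp- and hhp-bisimilarity additionally track causal histories of runs, which is exactly what makes the hereditary variant undecidable.

```python
def bisimilar(states, trans, s0, t0):
    """Naive greatest-fixpoint check of strong bisimilarity.

    states: finite set of states.
    trans:  set of (source, label, target) triples.
    Returns True iff s0 and t0 are bisimilar.
    """
    # Start from the full relation and repeatedly discard pairs
    # that fail the mutual-simulation condition.
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            # Every move of p must be matched by an equally-labelled
            # move of q into a related state, and vice versa.
            fwd = all(
                any((p2, q2) in rel
                    for (q1, b, q2) in trans if q1 == q and b == a)
                for (p1, a, p2) in trans if p1 == p)
            bwd = all(
                any((p2, q2) in rel
                    for (p1, a, p2) in trans if p1 == p and a == b)
                for (q1, b, q2) in trans if q1 == q)
            if not (fwd and bwd):
                rel.discard((p, q))
                changed = True
    return (s0, t0) in rel
```

The loop is the fixpoint view of the two-player game mentioned in the abstract: a pair survives exactly when the defender can always answer the attacker's move.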
Expander based dictionary data structures
, 2005
Abstract

Cited by 3 (3 self)
We consider dictionary data structures based on expander graphs. We show that any one-probe scheme with the properties of the previous data structure from [OP02] is indeed space optimal. We then construct four different dictionary data structures for various models of parallel external memory. All of them allow lookups using a single parallel probe. In the following, n denotes the number of keys in the dictionary, and u the universe of possible keys. ∆opt denotes the space in bits required to store the n keys and their satellite data without any type of compression, and d = O(log(u/n)). • A static dictionary data structure with error correcting codes using O(∆opt) bits of space, and one requiring O(nd log d + ∆opt) bits of space without using error correcting codes. • A dynamic dictionary data structure for the parallel disk head model using O(nd log n + ∆opt) bits of space, where updates take O(1) I/O’s amortized. • A dynamic dictionary data structure for the parallel disk model, with
Maintaining External Memory Efficient Hash Tables
Abstract
Abstract. In typical applications of hashing algorithms the amount of data to be stored is often too large to fit into internal memory. In this case it is desirable to find the data with as few as possible non-consecutive, or at least non-oblivious, probes into external memory. Extending a static scheme of Pagh [11] we obtain new randomized algorithms for maintaining hash tables, where a hash function can be evaluated in constant time and by probing only one external memory cell or O(1) consecutive external memory cells. We describe a dynamic version of Pagh’s hashing scheme achieving 100% table utilization but requiring (2 + ɛ) · n log n space for the hash function encoding as well as (3 + ɛ) · n log n space for the auxiliary data structure. Update operations are possible in expected constant amortized time. Then we show how to reduce the space for the hash function encoding and the auxiliary data structure to O(n log log n). We achieve 100% utilization in the static version (and thus a minimal perfect hash function) and 1 − ɛ utilization in the dynamic case.
On the Cell Probe Complexity of Membership and Perfect Hashing ∗
Abstract
We study two fundamental static data structure problems, membership and perfect hashing, in Yao’s cell probe model. The first space and bit probe optimal worst-case upper bound is given for the membership problem. We also give a new efficient membership scheme where the query algorithm makes just one adaptive choice, and probes a total of three words. A lower bound shows that two word probes generally do not suffice. For minimal perfect hashing we show a tight bit probe lower bound, and give a simple scheme achieving this performance, making just one adaptive choice. Linear range perfect hashing is shown to be implementable with the same number of bit probes, of which just one is adaptive. In contrast, we establish that for sufficiently sparse sets, non-adaptive perfect hashing needs exponentially more bit probes. This is the first such separation of adaptivity and non-adaptivity.
This document is in subdirectory RS/99/19/
Hereditary History Preserving Bisimilarity Is Undecidable
, 1999
Abstract
Reproduction of all or part of this work is permitted for educational or research use on condition that this copyright notice is included in any copy. See back inner page for a list of recent BRICS Report Series publications. Copies may be obtained by contacting: BRICS
Guided Tour of Some Results on Hashing and Dictionaries
, 2001
Abstract
This document presents some results on dictionaries in the form of step-by-step problems. Familiarity with the material on universal hash functions in the first four sections of [3] is assumed. Also, some basic probability theory is needed (see e.g. [4]). Emphasis is on simple ways of doing things, and some results go further back than the cited references. The last few parts of several problems are difficult (marked with * or **), but these can easily be skipped. Answers can be found in the cited references (though sometimes in a more general form than asked for here). 1 Perfect hashing. A perfect hash function for a set of keys S is a function that maps no two elements of S to the same value (is 1-1 on S). Such a function with range of size O(|S|) can be used to solve the static dictionary problem in linear space, if the description of the hash function itself uses linear space. This problem looks at a simple design of such a function proposed in [5]. Let f : U → {1, ..., |S|} and g : U → {1, ..., 10|S|} be chosen independently and uniformly at random from "nearly universal" families of functions. Our candidate for a perfect hash function is h(x) = (f(x) + a_{g(x)}) mod |S|, where a_i ∈ {1, ..., |S|} must be suitably chosen for i = 1, ..., 10|S|
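The candidate function h(x) = (f(x) + a_{g(x)}) mod |S| can be sketched in code. This is an illustration under simplifying assumptions: f and g are drawn as seeded random functions standing in for the "nearly universal" families, and the table a is retried at random rather than chosen suitably as the exercise intends; all names are invented for the example.

```python
import random

def build_perfect_hash(S, tries=1000):
    """Sketch of the candidate h(x) = (f(x) + a[g(x)]) mod n, n = |S|.

    Assumption: random retrial of the a_i (values taken mod n, which is
    equivalent to drawing from {1, ..., n}) instead of the greedy choice
    the source problem asks for.
    """
    S = list(S)
    n = len(S)
    for _ in range(tries):
        fseed, gseed = random.random(), random.random()
        f = lambda x: hash((fseed, x)) % n          # stand-in for f : U -> {1..n}
        g = lambda x: hash((gseed, x)) % (10 * n)   # stand-in for g : U -> {1..10n}
        a = [random.randrange(n) for _ in range(10 * n)]
        h = lambda x: (f(x) + a[g(x)]) % n
        if len({h(x) for x in S}) == n:             # 1-1 on S, so perfect
            return h
    raise RuntimeError("no perfect hash found; increase tries")
```

Since the range has size exactly |S|, a successful h is a minimal perfect hash for S; for small sets the random retrial succeeds quickly, which is enough to make the construction concrete.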