Results 1–7 of 7
One-Probe Search, 2002
Abstract

Cited by 1 (0 self)
We consider dictionaries that perform lookups by probing a single word of memory, knowing only the size of the data structure. We describe a randomized dictionary where a lookup returns the correct answer with probability 1 − ɛ, and otherwise returns “don’t know”. The lookup procedure uses an expander graph to select the memory location to probe. Recent explicit expander constructions are shown to yield space usage far smaller than what would be required using a deterministic lookup procedure. Our data structure supports efficient deterministic updates, exhibiting new probabilistic guarantees on dictionary running time.
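The scheme described above can be illustrated with a toy sketch. This is not the paper's construction: the explicit expander is replaced by pseudo-random candidate cells (a hypothetical simplification), and the deterministic-update machinery is omitted. It only shows the query behavior — a lookup probes a single cell and answers either correctly or "don't know".

```python
import random

class OneProbeDict:
    """Toy stand-in for an expander-based one-probe dictionary.

    Each key gets d pseudo-random candidate cells (standing in for the
    expander's neighborhood). A cell keeps a (key, value) pair only if
    exactly one stored key maps to it; a lookup probes one random
    candidate cell and answers "don't know" if that cell is ambiguous.
    """

    def __init__(self, items, d=8, table_factor=4, seed=0):
        self.d = d
        self.m = table_factor * d * max(1, len(items))
        rng = random.Random(seed)
        self.salts = [rng.randrange(1 << 30) for _ in range(d)]
        cells = [[] for _ in range(self.m)]
        for k, v in items.items():
            for c in self._cells(k):
                cells[c].append((k, v))
        # keep only unambiguous cells; contested cells become "don't know"
        self.table = [cell[0] if len(cell) == 1 else None for cell in cells]

    def _cells(self, key):
        return [hash((salt, key)) % self.m for salt in self.salts]

    def lookup(self, key):
        c = random.choice(self._cells(key))   # the single memory probe
        entry = self.table[c]
        if entry is not None and entry[0] == key:
            return entry[1]
        return "don't know"
```

With d candidate cells per key and a table a constant factor larger than d·n, most candidate cells are unambiguous, so a random probe succeeds with high probability — the role the expander's strong dispersion plays in the actual data structure.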
Searching the integers
Abstract
1 Problem Definition. Consider an ordered universe U, and a set T ⊂ U with |T| = n. The goal is to preprocess T such that the following query can be answered efficiently: given x ∈ U, report the predecessor of x, i.e. max{y ∈ T | y < x}. One can also consider the dynamic problem, where elements are inserted into and deleted from T. Let tq be the query time, and tu the update time. This is a fundamental search problem, with an impressive number of applications. Later, this entry discusses IP lookup (forwarding packets on the Internet), orthogonal range queries and persistent data structures as examples. The problem has been considered in many computational models; in fact, most of the models below were initially defined to study the predecessor problem.

Comparison model: The problem can be solved through binary search in Θ(lg n) comparisons. There is a lot of work on adaptive bounds, which may be sublogarithmic; such bounds may depend on the finger distance, the working set, entropy, etc.

Binary search trees: Predecessor search is one of the fundamental motivations for binary search trees. In this restrictive model, one can hope for an instance-optimal (competitive) algorithm.
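The comparison-model solution mentioned above is a one-liner with Python's standard `bisect` module; a minimal sketch:

```python
from bisect import bisect_left

def predecessor(sorted_t, x):
    """Return max{y in T | y < x}, or None if no element of T is below x.

    Binary search over the sorted set: Theta(lg n) comparisons,
    matching the comparison-model bound.
    """
    i = bisect_left(sorted_t, x)  # first index with sorted_t[i] >= x
    return sorted_t[i - 1] if i > 0 else None

T = [2, 3, 5, 7, 11, 13]
predecessor(T, 10)   # 7
predecessor(T, 2)    # None (no y < 2 in T)
predecessor(T, 100)  # 13
```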
6.897: Advanced Data Structures, Spring 2005
Abstract
In the last lecture we used round elimination to prove lower bounds for the static predecessor problem in the cell-probe model. We showed a lower bound of Ω(min{lg_a w, lg_b n}) on the number of probes required to solve the problem, where a = O(lg space(n)) is the number of bits needed to index the data structure, and b = w is the number of bits returned by a single cell probe. For a polynomial-size data structure, this implies that when lg n · lg lg n = lg² w, some problem instances require Ω(lg w / lg lg w) = Ω(√(lg n / lg lg n)) probes. The bound lg_w n is matched by fusion trees, but van Emde Boas achieves lg w per query, which does not match lg_a w. In this lecture we show upper and lower bounds of Θ(min{lg_w n, …
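The O(lg w) query behavior of the van Emde Boas flavor can be illustrated by binary searching over prefix *lengths* — the x-fast trie idea, a relative of van Emde Boas, not the lecture's exact data structure. A hedged sketch: store every bit-prefix of every key together with the min/max key beneath it, plus predecessor links between stored keys.

```python
def xfast_build(keys, w):
    """Store all w-bit prefixes of the keys, each with (min, max) beneath it,
    plus a predecessor link per stored key (x-fast-trie style sketch)."""
    keys = sorted(keys)
    prefixes = {}
    for k in keys:
        bits = format(k, f'0{w}b')
        for l in range(w + 1):
            p = bits[:l]
            lo, hi = prefixes.get(p, (k, k))
            prefixes[p] = (min(lo, k), max(hi, k))
    prev = {k: (keys[i - 1] if i > 0 else None) for i, k in enumerate(keys)}
    return prefixes, prev

def xfast_pred(prefixes, prev, x, w):
    """Predecessor max{y < x} using O(lg w) dictionary probes."""
    bits = format(x, f'0{w}b')
    lo, hi = 0, w
    while lo < hi:                  # binary search on prefix length
        mid = (lo + hi + 1) // 2
        if bits[:mid] in prefixes:
            lo = mid
        else:
            hi = mid - 1
    p = bits[:lo]
    if lo == w:                     # x itself is stored
        return prev[x]
    if bits[lo] == '1':             # x branches right of everything under p
        return prefixes[p + '0'][1]         # max of the 0-subtree
    else:                           # x branches left: step back from the
        return prev[prefixes[p + '1'][0]]   # 1-subtree's minimum
```

The query makes O(lg w) hash-table probes regardless of n, mirroring the lg w-per-query behavior attributed to van Emde Boas above (the space here is Θ(nw), not linear — the trade-off the lecture's bounds are about).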
Uniform Deterministic Dictionaries
"... Abstract. We present a new analysis of the wellknown family of multiplicative hash functions, and improved deterministic algorithms for selecting “good ” hash functions. The main motivation is realization of deterministic dictionaries with fast lookups and reasonably fast updates. The model of comp ..."
Abstract
Abstract. We present a new analysis of the well-known family of multiplicative hash functions, and improved deterministic algorithms for selecting “good” hash functions. The main motivation is the realization of deterministic dictionaries with fast lookups and reasonably fast updates. The model of computation is the word RAM, and it is assumed that the machine word size matches the size of keys in bits. Many of the modern solutions to the dictionary problem are weakly non-uniform, i.e. they require a number of constants to be computed at “compile time” for the stated time bounds to hold. The currently fastest deterministic dictionary uses constants not known to be computable in polynomial time. In contrast, our dictionaries do not require any special constants or instructions, and running times are independent of the word (and key) length. Our family of dynamic dictionaries achieves a performance of the following type: lookups in time O(t) and updates in amortized time O(n^{1/t}), for an appropriate parameter function t. Update procedures require division, whereas searching uses multiplication only.
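For reference, the multiplicative family the abstract analyzes has a very short definition; a minimal sketch (the multiplier below is a common heuristic choice, not one selected by the paper's deterministic algorithm):

```python
def mult_hash(a, x, w=64, l=10):
    """Multiplicative hashing: h_a(x) = (a * x mod 2^w) >> (w - l).

    a is an odd w-bit multiplier; the result is an l-bit bucket index.
    Evaluation needs a multiplication and a shift only, matching the
    abstract's claim that searching uses multiplication only.
    """
    assert a % 2 == 1, "multiplier must be odd"
    return ((a * x) % (1 << w)) >> (w - l)

# Fibonacci-style multiplier (heuristic, hypothetical choice for this sketch)
A = 0x9E3779B97F4A7C15
mult_hash(A, 12345)  # a bucket index in [0, 1024)
```

The paper's contribution is on the selection side: deterministically finding a "good" multiplier a for a given key set, rather than picking one at random.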
This document in subdirectory RS/02/9/ One-Probe Search ⋆
Abstract
Reproduction of all or part of this work is permitted for educational or research use on condition that this copyright notice is included in any copy. See back inner page for a list of recent BRICS Report Series publications. Copies may be obtained by contacting: BRICS
Constructing Efficient Dictionaries in Close to Sorting Time
Abstract
The dictionary problem is among the oldest problems in computer science. Yet our understanding of the complexity of the dictionary problem in realistic models of computation has been far from complete. Designing highly efficient dictionaries without resorting to the use of randomness appeared to be a particularly challenging task. We present solutions to the static dictionary problem that significantly improve the previously known upper bounds and bring them close to obvious lower bounds. Our dictionaries have a constant lookup cost and use linear space, which was known to be possible, but the worst-case cost of constructing the structures is proportional to only log log n times the cost of sorting the input. Our claimed performance bounds are obtained in the word RAM model and in the external memory model; only the sorting procedures involved in the algorithms need to be changed between the models.
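The target the abstract describes — constant lookup cost in linear space for a static key set — is the bound classically achieved by FKS two-level perfect hashing. The sketch below is a *randomized* FKS-style stand-in for illustration only; the paper's whole point is reaching such bounds deterministically with fast construction, which this sketch does not do.

```python
import random

def build_fks(keys, seed=0):
    """Randomized FKS-style two-level static dictionary (illustrative only).

    Level 1 hashes n keys into n buckets, retrying until the squared bucket
    sizes sum to O(n); level 2 gives each bucket a collision-free table of
    quadratic size in the bucket. Total space stays linear.
    """
    rng = random.Random(seed)
    n = max(1, len(keys))
    p = 2**61 - 1                    # prime assumed larger than every key
    while True:
        a = rng.randrange(1, p)
        buckets = [[] for _ in range(n)]
        for k in keys:
            buckets[(a * k) % p % n].append(k)
        if sum(len(b) ** 2 for b in buckets) <= 4 * n:  # linear total space
            break
    tables = []
    for b in buckets:
        m = len(b) ** 2 or 1
        while True:                  # retry until this bucket is perfect
            a2 = rng.randrange(1, p)
            t = [None] * m
            ok = True
            for k in b:
                i = (a2 * k) % p % m
                if t[i] is not None:
                    ok = False
                    break
                t[i] = k
            if ok:
                tables.append((a2, t))
                break
    return (a, p, n, tables)

def contains(D, k):
    a, p, n, tables = D
    a2, t = tables[(a * k) % p % n]          # probe 1: find the bucket
    return t[(a2 * k) % p % len(t)] == k     # probe 2: O(1) worst case
```

Each retry loop succeeds with constant probability, so the randomized construction runs in expected linear time after sorting-free hashing — whereas the paper gets worst-case construction within a log log n factor of sorting, without randomness.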