Results 1–10 of 10
Nearest Common Ancestors: A survey and a new distributed algorithm, 2002
Cited by 90 (12 self)
Abstract:
Several papers describe linear time algorithms to preprocess a tree, such that one can answer subsequent nearest common ancestor queries in constant time. Here, we survey these algorithms and related results. A common idea used by all the algorithms for the problem is that a solution for complete balanced binary trees is straightforward. Furthermore, for complete balanced binary trees we can easily solve the problem in a distributed way by labeling the nodes of the tree such that from the labels of two nodes alone one can compute the label of their nearest common ancestor. Whether it is possible to distribute the data structure into short labels associated with the nodes is important for several applications such as routing. Therefore, related labeling problems have received a lot of attention recently.
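The labeling idea for complete balanced binary trees can be made concrete: number the nodes in heap order (root = 1, children of node i are 2i and 2i+1), so the binary representation of a label spells out the root-to-node path, and the nearest common ancestor's label is the longest common binary prefix of the two labels. A minimal sketch of this idea (the function name is illustrative, not taken from the survey):

```python
def nca_label(u: int, v: int) -> int:
    """Nearest-common-ancestor label of two nodes in a complete binary
    tree labeled in heap order (root = 1, children of i are 2i, 2i+1).

    Each label's binary representation is the root-to-node path, so the
    NCA label is the longest common binary prefix of the two labels."""
    while u != v:
        if u > v:
            u >>= 1  # shift the larger (at least as deep) label toward the root
        else:
            v >>= 1
    return u
```

For example, nodes 4 (binary 100) and 5 (binary 101) share the prefix 10, so their NCA is node 2; this is exactly the "compute the ancestor's label from the two labels alone" property the abstract describes.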
Optimal Bounds for the Predecessor Problem and Related Problems. Journal of Computer and System Sciences, 2001
Cited by 72 (0 self)
Abstract:
We obtain matching upper and lower bounds for the amount of time to find the predecessor of a given element among the elements of a fixed compactly stored set. Our algorithms are for the unit-cost word RAM with multiplication and are extended to give dynamic algorithms. The lower bounds are proved for a large class of problems, including both static and dynamic predecessor problems, in a much stronger communication game model, but they apply to the cell probe and RAM models.
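As a point of reference, the textbook static solution stores the set in a sorted array and answers predecessor queries by binary search in O(log n) time; the paper's word-RAM algorithms and lower bounds characterize how much better one can do with word-level operations. A hedged baseline sketch (names are illustrative, not from the paper):

```python
from bisect import bisect_left

def predecessor(sorted_set: list, x: int):
    """Largest element of sorted_set that is <= x, or None if none exists.
    O(log n) comparisons via binary search on a sorted array."""
    i = bisect_left(sorted_set, x)  # first index with sorted_set[i] >= x
    if i < len(sorted_set) and sorted_set[i] == x:
        return x                    # x itself is stored
    return sorted_set[i - 1] if i > 0 else None
```

This is the baseline the paper's bounds are measured against: on the word RAM the optimal query time is asymptotically below O(log n) for many parameter ranges, and the matching lower bounds show where further improvement is impossible.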
Roundtrip Spanners and Roundtrip Routing in Directed Graphs
Cited by 38 (0 self)
Abstract:
We introduce the notion of roundtrip spanners of weighted directed graphs and describe efficient algorithms for their construction. For every integer k ≥ 1 and any ε > 0, we show that any directed graph on n vertices with edge weights in the range [1, W] has a (2k + ε)-roundtrip spanner with O(…) edges. We then extend these constructions and obtain compact roundtrip routing schemes. For every integer k ≥ 1 and every ε > 0, we describe a roundtrip routing scheme that has stretch 4k + ε, and uses at each vertex a routing table of size Õ(… log(nW)). We also show that any weighted directed graph with arbitrary positive edge weights has a 3-roundtrip spanner with O(n^…) edges. This result is optimal. Finally, we present a stretch-3 roundtrip routing scheme that uses local routing tables of size Õ(n^…). This routing scheme is essentially optimal. The roundtrip spanner constructions and the roundtrip routing schemes for directed graphs that we describe are only slightly worse than the best available spanners and routing schemes for undirected graphs. Our roundtrip routing schemes substantially improve previous results of Cowen and Wagner. Our results are obtained by combining ideas of Cohen, Cowen and Wagner, Thorup and Zwick, with some new ideas.
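The roundtrip metric underlying these spanners is d(u ⇄ v) = d(u → v) + d(v → u), which is symmetric even though the underlying graph is directed; a roundtrip spanner must approximate this quantity within the stated stretch. A small sketch computing exact roundtrip distances with Dijkstra's algorithm (this illustrates the metric only, not the spanner construction):

```python
import heapq

def dijkstra(graph: dict, src):
    """Single-source shortest paths; graph maps u -> [(v, weight), ...]."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def roundtrip_distance(graph: dict, u, v) -> float:
    # d(u <-> v) = d(u -> v) + d(v -> u): symmetric on directed graphs.
    return dijkstra(graph, u).get(v, float('inf')) + \
           dijkstra(graph, v).get(u, float('inf'))
```

On the directed triangle a→b (weight 1), b→c (2), c→a (3), the one-way distances d(a→b) = 1 and d(b→a) = 5 differ, but the roundtrip distance is 6 in both query orders.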
Hash and displace: Efficient evaluation of minimal perfect hash functions. In Workshop on Algorithms and Data Structures, 1999
Cited by 10 (2 self)
Abstract:
A new way of constructing (minimal) perfect hash functions is described. The technique considerably reduces the overhead associated with resolving buckets in two-level hashing schemes. Evaluating a hash function requires just one multiplication and a few additions apart from primitive bit operations. The number of accesses to memory is two, one of which is to a fixed location. This improves the probe performance of previous minimal perfect hashing schemes, and is shown to be optimal. The hash function description (“program”) for a set of size n occupies O(n) words, and can be constructed in expected O(n) time.
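The two-level idea can be sketched as follows: a first-level hash partitions the keys into buckets, and each bucket then stores a small amount of per-bucket information that resolves its keys into distinct slots. The paper resolves buckets with displacement values; the sketch below illustrates the same bucket-then-resolve structure using a simpler per-bucket seed search instead, so it is a variant of the idea, not the paper's construction (all names and the mixing hash are illustrative):

```python
def _h(x: int, seed: int, m: int) -> int:
    # 64-bit mixing hash on integers; a stand-in for the universal
    # hash families used in the paper.
    z = (x * 0x9E3779B97F4A7C15 + seed * 0xBF58476D1CE4E5B9) & 0xFFFFFFFFFFFFFFFF
    z ^= z >> 31
    return z % m

def build_mphf(keys):
    """Bucket keys by a first-level hash, then search each bucket for a
    seed whose second-level hash sends the bucket to distinct free slots."""
    n = len(keys)
    buckets = {}
    for x in keys:
        buckets.setdefault(_h(x, 0, n), []).append(x)
    seed_of = {}
    taken = [False] * n
    # Place the largest buckets first: they are the hardest to fit.
    for b, items in sorted(buckets.items(), key=lambda kv: -len(kv[1])):
        seed = 1
        while True:
            slots = {_h(x, seed, n) for x in items}
            if len(slots) == len(items) and not any(taken[s] for s in slots):
                break
            seed += 1  # retry; terminates quickly for non-adversarial inputs
        for s in slots:
            taken[s] = True
        seed_of[b] = seed
    return seed_of, n

def mphf_lookup(x, seed_of, n):
    # Two hash evaluations: one to find the bucket, one (with that
    # bucket's stored seed) to find the final slot in [0, n).
    return _h(x, seed_of[_h(x, 0, n)], n)
```

By construction the result is a bijection from the key set onto {0, …, n−1}, i.e. a minimal perfect hash function; the per-bucket data plays the role of the paper's displacement values.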
Identifying Nearest Common Ancestors in a Distributed Environment, 2001
Cited by 8 (2 self)
Abstract:
We give a simple algorithm that labels the nodes of a rooted tree such that from the labels of two nodes alone one can compute in constant time the label of their nearest common ancestor. The labels assigned by our algorithm are of size O(log n) bits where n is the number of nodes in the tree. The algorithm runs in O(n) time.
Dispersing Hash Functions. In Proceedings of the 4th International Workshop on Randomization and Approximation Techniques in Computer Science (RANDOM ’00), volume 8 of Proceedings in Informatics, 2000
Cited by 5 (4 self)
Abstract:
A new hashing primitive is introduced: dispersing hash functions. A family of hash functions F is dispersing if, for any set S of a certain size and random h ∈ F, the expected value of |S| − |h[S]| is not much larger than it would be if h had been chosen at random from the set of all functions. We give upper and lower bounds, tight up to a logarithmic factor, on the size of dispersing families. Such families previously studied, for example universal families, are significantly larger than the smallest dispersing families, making them less suitable for derandomization. We present several applications of dispersing families to derandomization (fast element distinctness, set inclusion, and static dictionary initialization). Also, a tight relationship between dispersing families and extractors, which may be of independent interest, is exhibited. We also investigate the related issue of program size for hash functions which are nearly perfect. In particular, we exhibit a dramatic increase in program size for hash functions more dispersing than a random function.
A New Tradeoff for Deterministic Dictionaries, 2000
Cited by 1 (0 self)
Abstract:
We consider dictionaries over the universe U = {0, 1}^w on a unit-cost RAM with word size w and a standard instruction set. We present a linear-space deterministic dictionary with membership queries in time (log log n)^O(1) and updates in time (log n)^O(1), where n is the size of the set stored. This is the first such data structure to simultaneously achieve query time (log n)^o(1) and update time O(2^((log n)^c)) for a constant c < 1.

1 Introduction

Among the most fundamental data structures is the dictionary. A dictionary stores a subset S of a universe U, offering membership queries of the form “x ∈ S?”. The result of a membership query is either ‘no’ or a piece of satellite data associated with x. Updates of the set are supported via insertion and deletion of single elements. Several performance measures are of interest for dictionaries: the amount of space used, the time needed to answer queries, and the time needed to perform updates. The most efficient dictionar…