Results 11–20 of 23
Hash and displace: Efficient evaluation of minimal perfect hash functions
In Workshop on Algorithms and Data Structures, 1999
Abstract

Cited by 9 (1 self)
A new way of constructing (minimal) perfect hash functions is described. The technique considerably reduces the overhead associated with resolving buckets in two-level hashing schemes. Evaluating a hash function requires just one multiplication and a few additions apart from primitive bit operations. The number of accesses to memory is two, one of which is to a fixed location. This improves the probe performance of previous minimal perfect hashing schemes, and is shown to be optimal. The hash function description (“program”) for a set of size n occupies O(n) words, and can be constructed in expected O(n) time.
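As a rough illustration of the hash-and-displace idea described in the abstract (not the paper's expected-linear-time construction), the following Python toy splits keys into buckets with one hash function and brute-forces a displacement per bucket so that the combined function is a minimal perfect hash. The hash constants, function names, and the quadratic search are illustrative assumptions.

```python
# Toy sketch of "hash and displace": keys are split into buckets by g;
# each bucket is assigned a displacement d[i] so that
#     h(x) = (f(x) + d[g(x)]) mod n
# maps the n keys bijectively onto 0..n-1. Evaluation needs two memory
# accesses (d and the table itself), matching the abstract's claim.
# The brute-force displacement search below is NOT the paper's O(n)
# construction; it is only meant to show the mechanism.

def build_mphf(keys, a=0x9E3779B1, b=0x85EBCA77):
    n = len(keys)
    r = max(1, n // 2)                  # number of buckets
    f = lambda x: (a * x >> 7) % n      # slot hash (illustrative)
    g = lambda x: (b * x >> 5) % r      # bucket hash (illustrative)
    buckets = [[] for _ in range(r)]
    for x in keys:
        buckets[g(x)].append(x)
    d = [0] * r
    used = set()
    # place large buckets first, brute-forcing a collision-free displacement
    for i in sorted(range(r), key=lambda i: -len(buckets[i])):
        for disp in range(n * n):
            slots = {(f(x) + disp) % n for x in buckets[i]}
            if len(slots) == len(buckets[i]) and not slots & used:
                d[i] = disp
                used |= slots
                break
        else:
            raise ValueError("retry with different hash constants")
    return lambda x: (f(x) + d[g(x)]) % n

keys = [3, 17, 42, 101, 256, 999]
h = build_mphf(keys)
print(sorted(h(x) for x in keys))   # a permutation of 0..5
```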
Faster Deterministic Dictionaries
In 11th Annual ACM Symposium on Discrete Algorithms (SODA), 1999
Abstract

Cited by 9 (5 self)
We consider static dictionaries over the universe U = {0, 1}^w on a unit-cost RAM with word size w. Construction of a static dictionary with linear space consumption and constant lookup time can be done in linear expected time by a randomized algorithm. In contrast, the best previous deterministic algorithm for constructing such a dictionary with n elements runs in time O(n^(1+ε)) for ε > 0. This paper narrows the gap between deterministic and randomized algorithms exponentially, from a factor of n^ε to an O(log n) factor. The algorithm is weakly nonuniform, i.e. it requires certain precomputed constants dependent on w. A byproduct of the result is a lookup time vs. insertion time tradeoff for dynamic dictionaries, which is optimal for a certain class of deterministic hashing schemes.
Sparse graph codes for compression, sensing, and secrecy
2010
Abstract

Cited by 2 (0 self)
Doctor of Philosophy in Electrical Engineering and Computer Science. Sparse graph codes were first introduced by Gallager over 40 years ago. Over the last two decades, such codes have been the subject of intense research, and capacity-approaching sparse graph codes with low-complexity encoding and decoding algorithms have been designed for many channels. Motivated by the success of sparse graph codes for channel coding, we explore the use of sparse graph codes for four other problems related to compression, sensing, and security. First, we construct locally encodable and decodable source codes for a simple class of sources. Local encodability refers to the property that when the original source data changes slightly, the compression produced by the source code can be updated easily. Local decodability refers to the property that a single source symbol can be recovered without having to decode the entire source block.
Optimal space-time dictionaries over an unbounded universe with flat implicit trees
2003
Abstract

Cited by 2 (1 self)
In the classical dictionary problem, a set of n distinct keys over an unbounded and ordered universe is maintained under insertions and deletions of individual keys while supporting search operations. An implicit dictionary has the additional constraint of occupying only the space required to store the n keys, that is, exactly n contiguous words of space in total. All that is known is the starting position of the memory segment hosting the keys, as the rest of the information is implicitly encoded by a suitable permutation of the keys. This paper describes the flat implicit tree, which is the first implicit dictionary requiring O(log n) time per search and update operation.
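For contrast with the flat implicit tree, the textbook baseline implicit dictionary is a plain sorted array: it likewise occupies exactly n words with all structure encoded by key order, supporting O(log n) search but paying O(n) per update. A minimal Python sketch of that baseline (the class and method names are ours):

```python
import bisect

# Baseline implicit dictionary: the n keys occupy exactly n contiguous
# slots, in sorted order. Search is O(log n) binary search; insert and
# delete pay O(n) for shifting -- the update cost the flat implicit
# tree avoids while keeping the implicit-space constraint.

class SortedArrayDict:
    def __init__(self):
        self.keys = []                        # exactly n words, nothing else

    def search(self, x):
        i = bisect.bisect_left(self.keys, x)
        return i < len(self.keys) and self.keys[i] == x

    def insert(self, x):
        if not self.search(x):
            bisect.insort(self.keys, x)       # O(n) shift

    def delete(self, x):
        i = bisect.bisect_left(self.keys, x)
        if i < len(self.keys) and self.keys[i] == x:
            self.keys.pop(i)                  # O(n) shift

d = SortedArrayDict()
for k in (42, 7, 99, 7):
    d.insert(k)
print(d.keys, d.search(42), d.search(8))      # [7, 42, 99] True False
```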
Perfect Hash Families: Constructions and Applications. Master of Mathematics thesis
2003
A New Tradeoff for Deterministic Dictionaries
2000
Abstract

Cited by 1 (0 self)
We consider dictionaries over the universe U = {0, 1}^w on a unit-cost RAM with word size w and a standard instruction set. We present a linear-space deterministic dictionary with membership queries in time (log log n)^O(1) and updates in time (log n)^O(1), where n is the size of the set stored. This is the first such data structure to simultaneously achieve query time (log n)^o(1) and update time O(2^((log n)^c)) for a constant c < 1. 1 Introduction. Among the most fundamental data structures is the dictionary. A dictionary stores a subset S of a universe U, offering membership queries of the form “x ∈ S?”. The result of a membership query is either ‘no’ or a piece of satellite data associated with x. Updates of the set are supported via insertion and deletion of single elements. Several performance measures are of interest for dictionaries: the amount of space used, the time needed to answer queries, and the time needed to perform updates. The most efficient dictionar...
Linear-time algorithms to color topological graphs
2005
Abstract
We describe a linear-time algorithm for 4-coloring planar graphs. We indeed give an O(V + E + |χ| + 1)-time algorithm to C-color V-vertex E-edge graphs embeddable on a 2-manifold M of Euler characteristic χ, where C(M) is given by Heawood’s (minimax optimal) formula. Also we show how, in O(V + E) time, to find the exact chromatic number of a maximal planar graph (one with E = 3V − 6) and a coloring achieving it. Finally, there is a linear-time algorithm to 5-color a graph embedded on any fixed surface M except that an M-dependent constant number of vertices are left uncolored. All the algorithms are simple and practical and run on a deterministic pointer machine, except for planar graph 4-coloring, which involves enormous constant factors and requires an integer RAM with a random number generator. All of the algorithms mentioned so far are in the ultra-parallelizable deterministic computational complexity class “NC”. We also have more practical planar 4-coloring algorithms that can run on pointer machines in O(V log V) randomized time and O(V) space, and a very simple deterministic O(V)-time coloring algorithm for planar graphs which conjecturally uses 4 colors.
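None of the paper's algorithms are reproduced here, but the classic linear-time greedy 6-coloring of planar graphs (peel off a vertex of degree at most 5, then color in reverse order) shows the flavor of degree-based coloring on surfaces. The sketch below assumes an adjacency-dict input; it is not the paper's 4- or 5-coloring algorithm.

```python
# Classic linear-time *6*-coloring of a planar graph, exploiting that
# every planar graph has a vertex of degree <= 5: repeatedly remove
# such a vertex, then color greedily in reverse removal order, where
# each vertex sees at most 5 already-colored neighbors.

def six_color_planar(adj):
    adj = {v: set(ns) for v, ns in adj.items()}
    deg = {v: len(ns) for v, ns in adj.items()}
    small = [v for v in adj if deg[v] <= 5]   # low-degree candidates
    removed, order = set(), []
    while small:
        v = small.pop()
        if v in removed:
            continue
        removed.add(v)
        order.append(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] <= 5:
                    small.append(u)
    color = {}
    for v in reversed(order):
        taken = {color[u] for u in adj[v] if u in color}
        color[v] = min(c for c in range(6) if c not in taken)
    return color

# K4 is planar and needs exactly 4 colors
k4 = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
col = six_color_planar(k4)
print(len(set(col.values())))   # 4
```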
unknown title
Abstract
(Under the direction of Dr. Injong Rhee.) Recent advances in technology have led to the widespread deployment of computational resources and network-enabled end-devices. This poses new challenges to network engineers: how to locate a particular service or device out of hundreds of thousands of accessible services and devices. One of the major issues involved is the efficient storage, retrieval and dissemination of information about available services. Well-known relational database techniques are not very efficient in these situations because our primary concern is the determination of availability of a service, not the retrieval of data. Also, database techniques involve additional overhead for indexing and query processing. We propose a novel scheme for efficient determination of the availability of services called SRDP (Summary Representation for service Discovery Protocols). SRDP makes use of a substring search algorithm based on hashing techniques. For this purpose, service descriptions are treated as strings and queries are treated as substrings. Information about each service and its attributes is stored as a 128-bit signature in a hash table. To exploit all bits of the signature, a signature creation scheme using the characteristics of the distribution of characters in the English language is employed. For the hash table, a Fibonacci-hash-based scheme and a CRC-hash-based scheme using primitive polynomials are tested for their effectiveness as hash functions. Results are presented from tests performed using actual URL data obtained from the Internet. Finally we compare the performance and memory requirements of our scheme with a Bloom-filter-based approach. Results show that SRDP executes twice as fast, consumes 80% less memory and still provides false-drop probabilities comparable to a Bloom-filter-based approach.
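Fibonacci hashing, one of the two table schemes the abstract mentions, multiplies the key by ⌊2^32/φ⌋ (φ the golden ratio) and keeps the top b bits as the table index. A minimal sketch; the 32-bit word size and the small-integer demo keys are our assumptions, not details from the thesis:

```python
# Fibonacci hashing: multiply by floor(2^32 / phi) and take the top
# b bits, indexing a table of 2**b slots. The multiplier scrambles
# consecutive keys roughly phi-apart around the table.

GOLDEN = 2654435769            # floor(2**32 / ((1 + 5**0.5) / 2))

def fib_hash(key: int, b: int) -> int:
    """Index into a table of 2**b slots."""
    return ((key * GOLDEN) & 0xFFFFFFFF) >> (32 - b)

# 128-bit service signatures (as in the abstract) would be folded to a
# machine word first; here we just hash small consecutive keys into
# 16 slots and observe that they spread out rather than cluster.
idx = [fib_hash(k, 4) for k in range(8)]
print(idx)
```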
unknown title
Abstract
It is well known that if n balls are inserted into n bins, with high probability, the bin with maximum load contains (1 + o(1)) log n / log log n balls. Azar, Broder, Karlin, and Upfal [1] showed that instead of choosing one bin, if d ≥ 2 bins are chosen at random and the ball is inserted into the least loaded of the d bins, the maximum load drops drastically to log log n / log d + O(1). In this paper, we study the two-choice balls-and-bins process when balls are not allowed to choose any two random bins, but only bins that are connected by an edge in an underlying graph. We show that for n balls and n bins, if the graph is almost regular with degree n^ɛ, where ɛ is not too small, the previous bounds on the maximum load continue to hold. Precisely, the maximum load is log log n + O(1/ɛ) + O(1). So even if the graph has degree
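The unrestricted d-choice process that this paper generalizes is easy to simulate. The sketch below (our function and parameter names, fixed seed) contrasts the one-choice and two-choice maximum loads; it does not model the paper's graph-restricted variant.

```python
import random

# Throw n balls into n bins: each ball picks d random bins and goes
# into the least loaded one. d=1 is the classic single-choice process;
# d=2 exhibits the "power of two choices" drop in maximum load.

def max_load(n, d, seed=1):
    rng = random.Random(seed)
    load = [0] * n
    for _ in range(n):
        i = min((rng.randrange(n) for _ in range(d)),
                key=lambda b: load[b])
        load[i] += 1
    return max(load)

n = 100_000
print(max_load(n, 1), max_load(n, 2))  # two-choice max load is far smaller
```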
Multiprocess Time Queue
2001
Abstract
We show how to implement a bounded time queue for two different processes. The time queue is a variant of a priority queue with elements from a discrete universe; the bounded time queue has elements from a discrete bounded universe. One process has time constraints and may only spend constant worst-case time on each operation, while the other process may spend more time. The time-constrained process only has to be able to perform some of the time queue operations, while the other process has to be able to perform all operations. We show how to deamortize the deleteMin cost and how to provide mutual exclusion for the parts of the data structure that both processes maintain.
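A discrete bounded universe makes an array-of-buckets priority queue natural. The single-process sketch below (our names; it omits the paper's deamortization of deleteMin and the two-process mutual exclusion) shows the basic structure such a time queue builds on:

```python
# Minimal single-process sketch of a bounded time queue: priorities
# come from the bounded universe 0..U-1, so an array of U buckets
# suffices. insert is O(1); deleteMin scans forward from a cursor,
# which amortizes over a pass through the universe -- the paper's
# contribution is deamortizing this cost and sharing the structure
# safely between two processes, neither of which is shown here.

class TimeQueue:
    def __init__(self, universe):
        self.buckets = [[] for _ in range(universe)]
        self.cursor = 0          # smallest possibly nonempty priority

    def insert(self, prio, item):
        self.buckets[prio].append(item)
        self.cursor = min(self.cursor, prio)

    def delete_min(self):
        while self.cursor < len(self.buckets):
            if self.buckets[self.cursor]:
                return self.cursor, self.buckets[self.cursor].pop()
            self.cursor += 1
        return None              # queue is empty

q = TimeQueue(16)
q.insert(5, "a"); q.insert(2, "b"); q.insert(5, "c")
print(q.delete_min(), q.delete_min(), q.delete_min())
# (2, 'b') (5, 'c') (5, 'a')
```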