Results 1–10 of 14
Low Redundancy in Static Dictionaries with O(1) Worst Case Lookup Time
 IN PROCEEDINGS OF THE 26TH INTERNATIONAL COLLOQUIUM ON AUTOMATA, LANGUAGES AND PROGRAMMING (ICALP '99)
, 1999
Abstract

Cited by 21 (5 self)
A static dictionary is a data structure for storing subsets of a finite universe U, so that membership queries can be answered efficiently. We study this problem in a unit cost RAM model with word size Θ(log |U|), and show that for n-element subsets, constant worst case query time can be obtained using B + O(log log |U|) + o(n) bits of storage, where B = ⌈log₂ (|U| choose n)⌉ is the minimum number of bits needed to represent all such subsets. For |U| = n · log^O(1) n the dictionary supports constant time rank queries.
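As a sanity check on the stated bound, the information-theoretic minimum B can be computed directly. A quick sketch in Python; the universe size and n below are arbitrary illustration values, not taken from the paper:

```python
import math

# B = ceil(log2(binomial(|U|, n))): the minimum number of bits needed to
# distinguish all n-element subsets of a universe of size |U|.
U_size = 1 << 20   # |U| = 2^20 (illustration value)
n = 1000           # subset size (illustration value)

combinations = math.comb(U_size, n)
B = (combinations - 1).bit_length()   # equals ceil(log2(combinations))

# Naive representation: n elements, each stored as a log2(|U|) = 20-bit word.
naive = n * 20
print(B, naive)
```

Here B comes out well under the naive 20,000 bits, illustrating why succinct dictionaries target B rather than n·log |U|.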
Low Redundancy in Dictionaries with O(1) Worst Case Lookup Time
 IN PROC. 26TH INTERNATIONAL COLLOQUIUM ON AUTOMATA, LANGUAGES AND PROGRAMMING (ICALP)
, 1998
Abstract

Cited by 18 (0 self)
A static dictionary is a data structure for storing subsets of a finite universe U, so that membership queries can be answered efficiently. We study this problem in a unit cost RAM model with word size Θ(log |U|), and show that for n-element subsets, constant worst case query time can be obtained using B + O(log log |U|) + o(n) bits of storage, where B = ⌈log₂ (|U| choose n)⌉ is the minimum number of bits needed to represent all such subsets. The solution for dense subsets uses B + O(|U| · log log |U| / log |U|) bits of storage, and supports constant time rank queries. In a dynamic setting, allowing insertions and deletions, our techniques give an O(B) bit space usage.
Using Compression to Improve Chip Multiprocessor Performance
, 2006
Abstract

Cited by 10 (2 self)
Chip multiprocessors (CMPs) combine multiple processors on a single die, typically with private level-one caches and a shared level-two cache. However, the increasing number of processor cores on a single chip increases the demand on two critical resources: the shared L2 cache capacity and the off-chip pin bandwidth. Demand on these critical resources is further exacerbated by latency-hiding techniques such as hardware prefetching. In this dissertation, we explore using compression to effectively increase cache and pin bandwidth resources and ultimately CMP performance. We identify two distinct and complementary designs where compression can help improve CMP performance: cache compression and link compression. Cache compression stores compressed lines in the cache, potentially increasing the effective cache size, reducing off-chip misses, and improving performance. On the downside, decompression overhead can slow down cache hit latencies, possibly degrading performance. Link (i.e., off-chip interconnect) compression compresses communication messages before sending them to or receiving them from off-chip system components, thereby increasing the effective off-chip pin bandwidth, reducing contention, and improving performance for bandwidth-limited configurations. While compression can have a positive impact on CMP performance, practical implementations of compression …
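To make the cache-compression idea concrete, here is a deliberately simplified sketch, not the dissertation's actual scheme: each 4-byte word of a cache line is tagged as all-zero or stored literally, so zero-heavy lines occupy less space at the cost of a decompression pass on every hit.

```python
def compress_line(line: bytes) -> bytes:
    """Toy cache-line compression (illustrative only): each 4-byte word
    becomes tag 0x00 if the word is all zeros, else tag 0x01 followed by
    the 4 literal bytes. Zero-heavy lines shrink substantially."""
    assert len(line) % 4 == 0
    out = bytearray()
    for i in range(0, len(line), 4):
        word = line[i:i + 4]
        if word == b'\x00' * 4:
            out.append(0)            # zero word: 1-byte tag only
        else:
            out.append(1)            # literal word: tag + 4 bytes
            out += word
    return bytes(out)

def decompress_line(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0:
            out += b'\x00' * 4
            i += 1
        else:
            out += data[i + 1:i + 5]
            i += 5
    return bytes(out)
```

A half-zero 64-byte line compresses to 48 bytes under this scheme; real designs use richer pattern sets, but the capacity-vs-latency tradeoff the abstract describes is the same.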
Faster Deterministic Dictionaries
 In 11th Annual ACM Symposium on Discrete Algorithms (SODA)
, 1999
Abstract

Cited by 9 (5 self)
We consider static dictionaries over the universe U = {0, 1}^w on a unit-cost RAM with word size w. Construction of a static dictionary with linear space consumption and constant lookup time can be done in linear expected time by a randomized algorithm. In contrast, the best previous deterministic algorithm for constructing such a dictionary with n elements runs in time O(n^(1+ε)) for ε > 0. This paper narrows the gap between deterministic and randomized algorithms exponentially, from a factor of n^ε to an O(log n) factor. The algorithm is weakly nonuniform, i.e., it requires certain precomputed constants dependent on w. A byproduct of the result is a lookup time vs. insertion time tradeoff for dynamic dictionaries, which is optimal for a certain class of deterministic hashing schemes.
Hash and displace: Efficient evaluation of minimal perfect hash functions
 In Workshop on Algorithms and Data Structures
, 1999
Abstract

Cited by 9 (1 self)
A new way of constructing (minimal) perfect hash functions is described. The technique considerably reduces the overhead associated with resolving buckets in two-level hashing schemes. Evaluating a hash function requires just one multiplication and a few additions apart from primitive bit operations. The number of accesses to memory is two, one of which is to a fixed location. This improves the probe performance of previous minimal perfect hashing schemes, and is shown to be optimal. The hash function description (“program”) for a set of size n occupies O(n) words, and can be constructed in expected O(n) time.
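A minimal sketch of the hash-and-displace idea (illustrative only: it uses Python's built-in `hash` and exhaustive displacement search rather than the paper's hash families and guarantees): keys are split into buckets by one hash, then each bucket, largest first, is assigned a displacement that shifts its keys into free slots of a size-n table.

```python
def _h(seed, key, m):
    # Stand-in hash family; the paper uses specific universal families.
    return hash((seed, key)) % m

def build_mphf(keys, attempts=200):
    """Build a minimal perfect hash function: returns (disp, seed, r, n)."""
    n = len(keys)
    r = n  # one bucket per key on average (a simplifying assumption)
    for seed in range(attempts):
        buckets = [[] for _ in range(r)]
        for k in keys:
            buckets[_h(('bucket', seed), k, r)].append(k)
        taken = [False] * n
        disp = [0] * r
        ok = True
        # Place the largest buckets first, while the table is still sparse.
        for b in sorted(range(r), key=lambda i: -len(buckets[i])):
            if not buckets[b]:
                continue
            for d in range(n):  # try every displacement for this bucket
                slots = [(_h(('pos', seed), k, n) + d) % n
                         for k in buckets[b]]
                if len(set(slots)) == len(slots) and \
                        not any(taken[s] for s in slots):
                    for s in slots:
                        taken[s] = True
                    disp[b] = d
                    break
            else:
                ok = False  # bucket could not be placed; retry with new seed
                break
        if ok:
            return disp, seed, r, n
    raise RuntimeError("construction failed; increase attempts")

def lookup(key, disp, seed, r, n):
    b = _h(('bucket', seed), key, r)
    return (_h(('pos', seed), key, n) + disp[b]) % n
```

With n keys hashed into a size-n table every key gets a distinct slot, so the function is minimal and perfect; a lookup costs one bucket hash, one displacement fetch, and one position hash, mirroring the two-memory-access structure the abstract describes.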
Experiments with Automata Compression
, 2000
Abstract

Cited by 5 (3 self)
Several compression methods for finite-state automata are presented and evaluated. Most of the compression methods used here have already been described in the literature; however, their impact on the size of automata has not. We fill that gap, presenting results of experiments carried out on automata representing German and Dutch morphological dictionaries.
Finite Automata for Compact Representation of Language Models in NLP
Abstract

Cited by 2 (0 self)
A technique for compact representation of language models in Natural Language Processing is presented. After a brief review of the motivations for a more compact representation of such language models, it is shown how finite-state automata can be used to compactly represent them. The technique can be seen as an application and extension of perfect hashing by means of finite-state automata. Preliminary practical experiments indicate that the technique yields considerable space savings of up to 90% in practice.
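The perfect-hashing-by-automata idea can be sketched with a trie in which every node stores the number of words in its subtree; a word's hash value is then its lexicographic rank among the stored words. This is a minimal illustration of the standard word-numbering technique, not the paper's exact construction:

```python
class Node:
    def __init__(self):
        self.children = {}   # char -> Node
        self.final = False   # True if a stored word ends here
        self.count = 0       # number of words in this subtree

def build_trie(words):
    root = Node()
    for w in words:
        node = root
        for ch in w:
            node = node.children.setdefault(ch, Node())
        node.final = True
    def fill(node):  # annotate each node with its subtree word count
        node.count = (1 if node.final else 0) + \
            sum(fill(c) for c in node.children.values())
        return node.count
    fill(root)
    return root

def word_to_index(root, w):
    """Perfect hash: map w to its lexicographic rank among the stored
    words, or None if w is not in the dictionary."""
    idx = 0
    node = root
    for ch in w:
        if node.final:
            idx += 1  # a stored word is a proper prefix of w
        for c in sorted(node.children):  # skip smaller siblings
            if c == ch:
                break
            idx += node.children[c].count
        else:
            return None  # no edge labelled ch: w is not stored
        node = node.children[ch]
    return idx if node.final else None
```

Each of n words thus maps to a unique number in 0..n−1, which can index a dense table of per-word data; a minimal acyclic automaton (as in the paper) would share suffixes to shrink the structure further.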