Results 1–10 of 13
On Data Structures and Asymmetric Communication Complexity
Journal of Computer and System Sciences, 1994
Cited by 84 (9 self)
Abstract: In this paper we consider two-party communication complexity when the input sizes of the two players differ significantly, the "asymmetric" case. Most previous work on communication complexity considers only the total number of bits sent, but we study tradeoffs between the number of bits the first player sends and the number of bits the second sends. These ...
Optimal Bounds for the Predecessor Problem
In Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing
Cited by 62 (0 self)
Abstract: We obtain matching upper and lower bounds for the amount of time required to find the predecessor of a given element among the elements of a fixed, efficiently stored set. Our algorithms are for the unit-cost word-level RAM with multiplication and extend to give optimal dynamic algorithms. The lower bounds are proved in a much stronger communication game model, but they apply to the cell probe and RAM models and to both static and dynamic predecessor problems.
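To fix terminology, the (static) predecessor problem asked about above can be illustrated with a minimal Python sketch using binary search over a sorted array; this is only a statement of the query, not the word-RAM data structures the paper analyzes:

```python
import bisect

def predecessor(sorted_keys, x):
    """Return the largest element <= x in sorted_keys, or None if none exists.

    Binary search answers the query in O(log n) comparisons; the cited work
    studies how much faster word-RAM data structures can answer it.
    """
    i = bisect.bisect_right(sorted_keys, x)
    return sorted_keys[i - 1] if i > 0 else None

keys = [2, 5, 11, 17, 23]
print(predecessor(keys, 16))  # -> 11
print(predecessor(keys, 1))   # -> None
```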
Lower bounds for Union-Split-Find related problems on random access machines
, 1994
"... We prove \Omega\Gamma p log log n) lower bounds on the random access machine complexity of several dynamic, partially dynamic and static data structure problems, including the unionsplitfind problem, dynamic prefix problems and onedimensional range query problems. The proof techniques include a ..."
Abstract

Cited by 49 (3 self)
 Add to MetaCart
We prove \Omega\Gamma p log log n) lower bounds on the random access machine complexity of several dynamic, partially dynamic and static data structure problems, including the unionsplitfind problem, dynamic prefix problems and onedimensional range query problems. The proof techniques include a general technique using perfect hashing for reducing static data structure problems (with a restriction of the size of the structure) into partially dynamic data structure problems (with no such restriction), thus providing a way to transfer lower bounds. We use a generalization of a method due to Ajtai for proving the lower bounds on the static problems, but describe the proof in terms of communication complexity, revealing a striking similarity to the proof used by Karchmer and Wigderson for proving lower bounds on the monotone circuit depth of connectivity. 1 Introduction and summary of results In this paper we give lower bounds for the complexity of implementing several dynamic and sta...
Cell probe complexity - a survey
 In 19th Conference on the Foundations of Software Technology and Theoretical Computer Science (FSTTCS), 1999. Advances in Data Structures Workshop
"... The cell probe model is a general, combinatorial model of data structures. We give a survey of known results about the cell probe complexity of static and dynamic data structure problems, with an emphasis on techniques for proving lower bounds. 1 ..."
Abstract

Cited by 28 (0 self)
 Add to MetaCart
The cell probe model is a general, combinatorial model of data structures. We give a survey of known results about the cell probe complexity of static and dynamic data structure problems, with an emphasis on techniques for proving lower bounds. 1
Space-Time Tradeoffs for Emptiness Queries
, 1997
"... We develop the first nontrivial lower bounds on the complexity of online hyperplane and halfspace emptiness queries. Our lower bounds apply to a general class... ..."
Abstract

Cited by 14 (1 self)
 Add to MetaCart
We develop the first nontrivial lower bounds on the complexity of online hyperplane and halfspace emptiness queries. Our lower bounds apply to a general class...
On Searching Sorted Lists: A Near-Optimal Lower Bound
, 1997
"... We obtain improved lower bounds for a class of static and dynamic data structure problems that includes several problems of searching sorted lists as special cases. These lower bounds nearly match the upper bounds given by recent striking improvements in searching algorithms given by Fredman and Wil ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
We obtain improved lower bounds for a class of static and dynamic data structure problems that includes several problems of searching sorted lists as special cases. These lower bounds nearly match the upper bounds given by recent striking improvements in searching algorithms given by Fredman and Willard's fusion trees [9] and Andersson's search data structure [5]. Thus they show sharp limitations on the running time improvements obtainable using the unitcost wordlevel RAM operations that those algorithms employ. 1 Introduction Traditional analysis of problems such as sorting and searching is often schizophrenic in dealing with the operations one is permitted to perform on the input data. In one view, the elements being sorted are seen as abstract objects which may only be compared. In the other view, one is able to perform certain wordlevel operations, such as indirect addressing using the elements themselves, in algorithms like bucket and radix sorting. Traditionally, the second v...
Faster Compact Top-k Document Retrieval
"... An optimal index solving topk document retrieval [Navarro and Nekrich, SODA’12] takes O(m + k) time for a pattern of length m, but its space is at least 80n bytes for a collection of n symbols. We reduce it to 1.5n– 3n bytes, with O(m+(k+log log n) log log n) time, on typical texts. The index is u ..."
Abstract

Cited by 4 (3 self)
 Add to MetaCart
An optimal index solving topk document retrieval [Navarro and Nekrich, SODA’12] takes O(m + k) time for a pattern of length m, but its space is at least 80n bytes for a collection of n symbols. We reduce it to 1.5n– 3n bytes, with O(m+(k+log log n) log log n) time, on typical texts. The index is up to 25 times faster than the best previous compressed solutions, and requires at most 5 % more space in practice (and in some cases as little as one half). Apart from replacing classical by compressed data structures, our main idea is to replace suffix tree sampling by frequency thresholding to achieve compression.
Improved Data Structures for Predecessor Queries in Integer Sets
, 1996
"... We consider the problem of maintaining a dynamic ordered set of n integers in the range 0 : : 2^w  1, under the operations of insertion, deletion and predecessor queries, on a unitcost RAM with a word length of w bits. We show that all the operations above can be performed in O(min{log w, 1 log n/ ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
We consider the problem of maintaining a dynamic ordered set of n integers in the range 0 : : 2^w  1, under the operations of insertion, deletion and predecessor queries, on a unitcost RAM with a word length of w bits. We show that all the operations above can be performed in O(min{log w, 1 log n/log w}) expected time, assuming the updates are oblivious, i.e., independent of the random choices made by the data structure. This improves upon the (deterministic) running time of O(min{log w, sqrt log n}) obtained by Fredman and Willard. We also give a very simple deterministic data structure which matches the bound of Fredman and Willard. Finally, from the randomized data structure we are able to derive improved deterministic data structures for the static version of this problem.
Compressed Dynamic Tries with Applications to LZ-Compression in Sublinear Time and Space
"... Abstract. The dynamic trie is a fundamental data structure which finds applications in many areas. This paper proposes a compressed version of the dynamic trie data structure. Our datastructure is not only space efficient, it also allows pattern searching in o(P) time and leaf insertion/deletion ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Abstract. The dynamic trie is a fundamental data structure which finds applications in many areas. This paper proposes a compressed version of the dynamic trie data structure. Our datastructure is not only space efficient, it also allows pattern searching in o(P) time and leaf insertion/deletion in o(log n) time, where P  is the length of the pattern and n is the size of the trie. To demonstrate the usefulness of the new data structure, we apply it to the LZcompression problem. For a string S of length s over an alphabet A of size σ, the previously best known algorithms for computing the ZivLempel encoding (lz78) ofS either run in: (1) O(s) timeandO(slog s) bits working space; or (2) O(sσ) time and O(sHk +slog σ/logσ s) bits working space, where Hk is the korder entropy of the text. No previous algorithm runs in sublinear time. Our new data structure implies a LZcompression algorithm which runs in sublinear time and uses optimal working space. More precisely, the LZcompression algorithm uses O(s(log σ +loglogσs)/logσ s)bitsworking space and runs in O(s(log log s) 2 /(logσ s log log log s)) worstcase time, log log log s o(log s which is sublinear when σ =2 (log log s) 2). 1