Results 1-10 of 42
On RAM priority queues
, 1996
Abstract

Cited by 70 (9 self)
Priority queues are some of the most fundamental data structures. They are used directly for, say, task scheduling in operating systems. Moreover, they are essential to greedy algorithms. We study the complexity of priority queue operations on a RAM with arbitrary word size. We present exponential improvements over previous bounds, and we show tight relations to sorting. Our first result is a RAM priority queue supporting insert and extract-min operations in worst case time O(log log n), where n is the current number of keys in the queue. This is an exponential improvement over the O(√(log n)) bound of Fredman and Willard from STOC'90. Our algorithm is simple, and it only uses AC^0 operations, meaning that there is no hidden time dependency on the word size. Plugging this priority queue into Dijkstra's algorithm gives an O(m log log m) algorithm for the single source shortest path problem on a graph with m edges, as compared with the previous O(m √(log m)) bound based on Fredman...
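As a quick illustration of how insert and extract-min drive Dijkstra's algorithm, here is a minimal sketch using Python's binary-heap `heapq` (so O(m log m) overall, not the O(m log log m) achieved by the abstract's structure); the graph encoding and names are illustrative, not from the paper.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths; adj maps u -> list of (v, weight)."""
    dist = {source: 0}
    pq = [(0, source)]                         # the priority queue drives everything
    while pq:
        d, u = heapq.heappop(pq)               # extract-min
        if d > dist.get(u, float("inf")):
            continue                           # stale entry, already improved
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))    # insert (lazy decrease-key)
    return dist
```

Swapping in a faster extract-min, as the abstract proposes, changes only the `heappop`/`heappush` cost, not the algorithm's structure.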
Optimal Bounds for the Predecessor Problem
 In Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing
Abstract

Cited by 63 (0 self)
We obtain matching upper and lower bounds for the amount of time to find the predecessor of a given element among the elements of a fixed efficiently stored set. Our algorithms are for the unit-cost word-level RAM with multiplication and extend to give optimal dynamic algorithms. The lower bounds are proved in a much stronger communication game model, but they apply to the cell probe and RAM models and to both static and dynamic predecessor problems.
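For contrast with the word-RAM bounds above, the textbook comparison-based baseline answers a predecessor query in O(log n) time by binary search over a sorted array; a minimal sketch (function names are illustrative):

```python
from bisect import bisect_right

def predecessor(sorted_ys, x):
    """Largest y in the stored set with y <= x, or None if no such y exists."""
    i = bisect_right(sorted_ys, x)    # number of elements <= x
    return sorted_ys[i - 1] if i else None
```

The point of the word-RAM results is that, with multiplication available, this O(log n) comparison bound can be beaten.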
Lower bounds for high dimensional nearest neighbor search and related problems
, 1999
Abstract

Cited by 47 (2 self)
In spite of extensive and continuing research, for various geometric search problems (such as nearest neighbor search), the best algorithms known have performance that degrades exponentially in the dimension. This phenomenon is sometimes called the curse of dimensionality. Recent results [38, 37, 40] show that in some sense it is possible to avoid the curse of dimensionality for the approximate nearest neighbor search problem. But must the exact nearest neighbor search problem suffer this curse? We provide some evidence in support of the curse. Specifically we investigate the exact nearest neighbor search problem and the related problem of exact partial match within the asymmetric communication model first used by Miltersen [43] to study data structure problems. We derive nontrivial asymptotic lower bounds for the exact problem that stand in contrast to known algorithms for approximate nearest neighbor search.
Time-space tradeoffs for predecessor search
 In Proc. 38th ACM Sympos. Theory Comput
, 2006
Abstract

Cited by 36 (4 self)
We develop a new technique for proving cell-probe lower bounds for static data structures. Previous lower bounds used a reduction to communication games, which was known not to be tight by counting arguments. We give the first lower bound for an explicit problem which breaks this communication complexity barrier. In addition, our bounds give the first separation between polynomial and near-linear space. Such a separation is inherently impossible by communication complexity. Using our lower bound technique and new upper bound constructions, we obtain tight bounds for searching predecessors among a static set of integers. Given a set Y of n integers of ℓ bits each, the goal is to efficiently find predecessor(x) = max {y ∈ Y : y ≤ x}. For this purpose, we represent Y on a RAM with word length w using S words of space. Defining a = lg(S/n) + lg w, we show that the optimal search time is, up to constant factors, min { log_w n, lg((ℓ - lg n)/a), ... }
Cell probe complexity: a survey
 In 19th Conference on the Foundations of Software Technology and Theoretical Computer Science (FSTTCS), 1999. Advances in Data Structures Workshop
Abstract

Cited by 29 (0 self)
The cell probe model is a general, combinatorial model of data structures. We give a survey of known results about the cell probe complexity of static and dynamic data structure problems, with an emphasis on techniques for proving lower bounds.
An optimal randomised cell probe lower bound for approximate nearest neighbour searching
 In Proceedings of the Symposium on Foundations of Computer Science
Abstract

Cited by 22 (2 self)
We consider the approximate nearest neighbour search problem on the Hamming cube {0, 1}^d. We show that a randomised cell probe algorithm that uses polynomial storage and word size d^O(1) requires a worst case query time of Ω(log log d / log log log d). The approximation factor may be as loose as 2^(log^(1-η) d) for any fixed η > 0. This generalises an earlier result [6] on the deterministic complexity of the same problem and, more importantly, fills a major gap in the study of this problem since all earlier lower bounds either did not allow randomisation [6, 19] or did not allow approximation [5, 2, 16]. We also give a cell probe algorithm which proves that our lower bound is optimal. Our proof uses a lower bound on the round complexity of the related communication problem. We show, additionally, that considerations of bit complexity alone cannot prove any nontrivial cell probe lower bound for the problem. This shows that the Richness Technique [20] used in a lot of recent research around this problem would not have helped here.
A Lower Bound on the Complexity of Approximate Nearest-Neighbor Searching on the Hamming Cube
 In Proc. 31st Annual ACM Symposium on Theory of Computing (STOC'99)
, 1999
Abstract

Cited by 18 (3 self)
We consider the nearest-neighbor problem over the d-cube: given a collection of points in {0, 1}^d, find the one nearest to a query point (in the L_1 sense). We establish a lower bound of Ω(log log d / log log log d) on the worst-case query time. This result holds in the cell probe model with (any amount of) polynomial storage and word size d^O(1). The same lower bound holds for the approximate version of the problem, where the answer may be any point further than the nearest neighbor by a factor as large as 2^((log d)^(1-ε)), for any fixed ε > 0.

1 Introduction

For a variety of practical reasons ranging from molecular biology to web searching, nearest-neighbor searching has been a focus of attention lately [2]-[9], [11]-[21], [26]. In the applications considered, the dimension of the ambient space is usually high, and predictably, classical lines of attack based on space partitioning fail. To overcome the well-known "curse of dimensionality," it is typical to relax the s...
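For concreteness, here is the brute-force exact nearest-neighbor search on the Hamming cube that these lower bounds are measured against; encoding each point as a d-bit integer is an assumption of this sketch, not something the paper prescribes.

```python
def hamming(u, v):
    """L_1 distance on the Hamming cube, with points encoded as d-bit integers."""
    return bin(u ^ v).count("1")

def nearest(points, q):
    """Brute-force exact nearest neighbor: O(n * d) work per query."""
    return min(points, key=lambda p: hamming(p, q))
```

The cell probe lower bounds say that even with polynomial preprocessing space, query time cannot be made independent of d in the exact (or even mildly approximate) setting.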
On the Cell Probe Complexity of Polynomial Evaluation
, 1995
Abstract

Cited by 18 (4 self)
We consider the cell probe complexity of the polynomial evaluation problem with preprocessing of coefficients, for polynomials of degree at most n over a finite field K. We show that the trivial cell probe algorithm for the problem is optimal if K is sufficiently large compared to n. As an application, we give a new proof of the fact that P ≠ incrTIME(o(log n / log log n)).

1 Introduction

Let K be a field. We consider the polynomial evaluation problem with preprocessing of coefficients. This problem is as follows: Given a polynomial f(X) ∈ K[X], preprocess it, so that later, for any field element a ∈ K, f(a) can be computed efficiently. It is a classical problem in the theory of algebraic complexity and has been intensively investigated in the model of arithmetic straight line programs. In this model, a solution for the polynomials of degree at most n is given by two objects:

• A map φ from the set of polynomials of degree at most n into K^s, where s is any integer, called t...
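The "trivial cell probe algorithm" referred to above stores the coefficient table verbatim and evaluates with Horner's rule, probing every cell; a minimal sketch over the prime field Z_p (the representation and names here are illustrative):

```python
def preprocess(coeffs):
    # Trivial preprocessing: store the n+1 coefficients as-is, one per cell.
    return list(coeffs)

def evaluate(table, a, p):
    """Horner's rule over Z_p: table[i] is the coefficient of X^i."""
    acc = 0
    for c in reversed(table):
        acc = (acc * a + c) % p
    return acc
```

The paper's result is that when |K| is large enough relative to n, no preprocessing scheme can beat this probe count.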
Dynamic Algorithms for the Dyck Languages
 In Proc. 4th Workshop on Algorithms and Data Structures (WADS)
, 1995
Abstract

Cited by 12 (9 self)
We study dynamic membership problems for the Dyck languages, the class of strings of properly balanced parentheses. We also study the Dynamic Word problem for the free group. We present deterministic algorithms and data structures which maintain a string under replacements of symbols, insertions, and deletions of symbols, and language membership queries. Updates and queries are handled in polylogarithmic time. We also give both Las Vegas- and Monte Carlo-type randomised algorithms to achieve better running times, and present lower bounds on the complexity for variants of the problems.
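The static membership test underlying these dynamic problems is the classic linear-time stack scan; the paper's contribution is supporting updates in polylogarithmic time, which needs balanced-tree machinery not shown here. A minimal sketch for a two-bracket alphabet:

```python
def is_dyck(s):
    """Membership in the Dyck language over the bracket pairs () and []."""
    pairs = {"(": ")", "[": "]"}
    closing = set(pairs.values())
    stack = []
    for ch in s:
        if ch in pairs:
            stack.append(pairs[ch])   # remember which closer we now expect
        elif ch in closing:
            if not stack or stack.pop() != ch:
                return False          # mismatched or unexpected closer
        else:
            return False              # symbol outside the alphabet
    return not stack                  # every opener must be closed
```

Rerunning this scan after each edit costs O(n) per update; the dynamic data structures in the paper answer the same query exponentially faster per operation.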