Results 1–10 of 21
Dynamic Ordered Sets with Exponential Search Trees
 Combination of results presented in FOCS 1996, STOC 2000, and SODA 2001
, 2001
Abstract

Cited by 43 (2 self)
We introduce exponential search trees as a novel technique for converting static polynomial-space search structures for ordered sets into fully-dynamic linear-space data structures. This leads to an optimal bound of O(√(log n / log log n)) for searching and updating a dynamic set of n integer keys in linear space. Here searching an integer y means finding the maximum key in the set which is smaller than or equal to y. This problem is equivalent to the standard textbook problem of maintaining an ordered set (see, e.g., Cormen, Leiserson, Rivest, and Stein: Introduction to Algorithms, 2nd ed., MIT Press, 2001). The best previous deterministic linear-space bound was O(log n / log log n), due to Fredman and Willard from STOC 1990. No better deterministic search bound was known using polynomial space.
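The "searching" operation the abstract defines (the maximum key ≤ y, i.e. predecessor search) can be illustrated with a plain sorted array and binary search. This baseline sketch, not the paper's data structure, answers each query in O(log n); the paper's exponential search trees improve on this for integer keys in linear space:

```python
import bisect

def predecessor(sorted_keys, y):
    """Return the largest key <= y, or None if every key exceeds y.

    Baseline sketch only: O(log n) per query via binary search on a
    sorted list, standing in for the paper's optimal
    O(sqrt(log n / log log n)) bound for dynamic integer sets.
    """
    i = bisect.bisect_right(sorted_keys, y)
    return sorted_keys[i - 1] if i > 0 else None

keys = [3, 7, 19, 42, 58]
assert predecessor(keys, 42) == 42
assert predecessor(keys, 41) == 19
assert predecessor(keys, 2) is None
```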
Deterministic Dictionaries
, 2001
Abstract

Cited by 42 (4 self)
It is shown that a static dictionary that offers constant-time access to n elements with w-bit keys and occupies O(n) words of memory can be constructed deterministically in O(n log n) time on a unit-cost RAM with word length w and a standard instruction set including multiplication. Whereas a randomized construction working in linear expected time was known, the running time of the best previous deterministic algorithm was Ω(n²). Using a standard dynamization technique, the first deterministic dynamic dictionary with constant lookup time and sublinear update time is derived. The new algorithms are weakly nonuniform; i.e., they require access to a fixed number of precomputed constants dependent on w. The main technical tools employed are unit-cost error-correcting codes, word parallelism, and derandomization using conditional expectations.
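The "standard dynamization technique" the abstract invokes is, in its simplest form, the logarithmic method: keep O(log n) static structures of doubling sizes and rebuild a prefix of them on each insert. A hypothetical sketch, using Python frozensets as stand-ins for the paper's deterministically constructed static dictionaries:

```python
class LogMethodDictionary:
    """Logarithmic-method sketch (not the paper's construction):
    levels[i] is either None or a static structure holding 2^i keys.
    An insert merges full levels like binary addition with carries,
    giving amortized polylogarithmic rebuild cost and a lookup that
    probes O(log n) static structures."""

    def __init__(self):
        self.levels = []

    def insert(self, key):
        carry = {key}
        for i in range(len(self.levels)):
            if self.levels[i] is None:
                self.levels[i] = frozenset(carry)  # slot free: deposit carry
                return
            carry |= self.levels[i]                # rebuild: merge level into carry
            self.levels[i] = None
        self.levels.append(frozenset(carry))       # new highest level

    def lookup(self, key):
        return any(level is not None and key in level
                   for level in self.levels)
```

For example, after five inserts the keys sit in structures of sizes 1 and 4, mirroring the binary representation of 5.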
Integer Priority Queues with Decrease Key in . . .
 STOC'03
, 2003
Abstract

Cited by 32 (2 self)
We consider Fibonacci-heap-style integer priority queues supporting insert and decrease-key operations in constant time. We present a deterministic linear-space solution that, with n integer keys, supports delete in O(log log n) time. If the integers are in the range [0, N), we can also support delete in O(log log N) time. Even for the special case of monotone priority queues, where the minimum has to be non-decreasing, the best previous bounds on delete were O((log n)^(1/(3−ε))) and O((log N)^(1/(4−ε))). These previous bounds used both randomization and amortization. Our new bounds are deterministic, worst-case, with no restriction to monotonicity, and exponentially faster. As a classical application, for a directed graph with n nodes and m edges with nonnegative integer weights, we get single-source shortest paths in O(m + n log log n) time, or O(m + n log log C) if C is the maximal edge weight. The latter solves an open problem of Ahuja, Mehlhorn, Orlin, and Tarjan.
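As a baseline for the shortest-paths application, textbook Dijkstra with a binary heap runs in O((n + m) log n); the abstract's priority queue lowers this to O(m + n log log n). A minimal sketch of that baseline (not the paper's structure), using lazy deletion in place of decrease-key:

```python
import heapq

def dijkstra(adj, source):
    """Textbook Dijkstra with a binary heap, O((n + m) log n).
    `adj` maps node -> list of (neighbor, nonnegative integer weight).
    Stale heap entries are skipped instead of a true decrease-key."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # outdated entry; a real decrease-key would avoid this
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)]}
assert dijkstra(graph, 'a') == {'a': 0, 'b': 1, 'c': 3}
```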
Subquadratic algorithms for 3SUM
 In Proc. 9th Worksh. Algorithms & Data Structures, LNCS 3608
, 2005
Abstract

Cited by 19 (1 self)
We obtain subquadratic algorithms for 3SUM on integers and rationals in several models. On a standard word RAM with w-bit words, we obtain a running time of O(n² / max{w/lg² w, lg² n/(lg lg n)²}). In the circuit RAM with one nonstandard AC0 operation, we obtain O(n² / (w²/lg² w)). In external memory, we achieve O(n²/(MB)), even under the standard assumption of data indivisibility. Cache-obliviously, we obtain a running time of O(n²/(MB/lg² M)). In all cases, our speedup is almost quadratic in the parallelism the model can afford, which may be the best possible. Our algorithms are Las Vegas randomized; time bounds hold in expectation, and in most cases, with high probability.
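For contrast, the classic comparison-based 3SUM baseline that these results improve on takes Θ(n²) time after sorting; a minimal sketch:

```python
def has_3sum(nums):
    """Decide whether three elements (distinct positions) sum to zero.
    Classic Theta(n^2) baseline: sort, then for each smallest element
    run a two-pointer scan over the remaining suffix."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1   # total too small: advance the low pointer
            else:
                hi -= 1   # total too large: retreat the high pointer
    return False

assert has_3sum([-5, 1, 4, 10])   # -5 + 1 + 4 == 0
assert not has_3sum([1, 2, 3])
```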
Randomized algorithms for geometric optimization problems
 Handbook of Randomized Computation
, 2001
Abstract

Cited by 11 (0 self)
This chapter reviews randomized algorithms developed in the last few years to solve a wide range of geometric optimization problems. We first review a number of general techniques, including randomized binary search, randomized linear-programming algorithms, and random sampling. Next, we describe several applications of these and other techniques, including facility location, proximity problems, statistical estimators, nearest-neighbor searching, and Euclidean TSP.
Lower Bound Techniques for Data Structures
, 2008
Abstract

Cited by 8 (0 self)
We describe new techniques for proving lower bounds on data-structure problems, with the following broad consequences:
• the first Ω(lg n) lower bound for any dynamic problem, improving on a bound that had been standing since 1989;
• for static data structures, the first separation between linear and polynomial space. Specifically, for some problems that have constant query time when polynomial space is allowed, we can show Ω(lg n / lg lg n) bounds when the space is O(n · polylog n).
Using these techniques, we analyze a variety of central data-structure problems, and obtain improved lower bounds for the following:
• the partial-sums problem (a fundamental application of augmented binary search trees);
• the predecessor problem (which is equivalent to IP lookup in Internet routers);
• dynamic trees and dynamic connectivity;
• orthogonal range stabbing;
• orthogonal range counting, and orthogonal range reporting;
• the partial match problem (searching with wildcards);
• (1 + ε)-approximate near neighbor on the hypercube;
• approximate nearest neighbor in the ℓ∞ metric.
Our new techniques lead to surprisingly non-technical proofs. For several problems, we obtain simpler proofs for bounds that were already known.
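For concreteness, the partial-sums problem listed above is classically solved in O(lg n) time per operation with a Fenwick (binary indexed) tree, the upper bound matched by the Ω(lg n) dynamic lower bound; a standard sketch, not from the paper:

```python
class FenwickTree:
    """Dynamic partial sums over a fixed-length array, 1-based indices.
    Both point update and prefix sum take O(lg n), matching the
    Omega(lg n) lower bound for this problem."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def update(self, i, delta):
        """Add delta to element i."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)      # step to the next covering node

    def prefix_sum(self, i):
        """Return the sum of elements 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)      # strip the lowest set bit
        return s
```

Each index implicitly covers a range whose length is its lowest set bit, which is why both loops touch at most lg n nodes.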
Generic Discrimination: Sorting and Partitioning Unshared Data in Linear Time
, 2008
Abstract

Cited by 4 (3 self)
We introduce the notion of discrimination as a generalization of both sorting and partitioning and show that worst-case linear-time discrimination functions (discriminators) can be defined generically, by (co)induction on an expressive language of order denotations. The generic definition yields discriminators that generalize both distributive sorting and multiset discrimination. The generic discriminator can be coded compactly using list comprehensions, with order denotations specified using Generalized Algebraic Data Types (GADTs). A GADT-free combinator formulation of discriminators is also given. We give some examples of the uses of discriminators, including a new most-significant-digit lexicographic sorting algorithm. Discriminators generalize binary comparison functions: they operate on n arguments at a time, but do not expose more information than the underlying equivalence, respectively ordering, relation on the arguments. We argue that primitive types with equality (such as references in ML) and ordered types (such as the machine integer type) should expose their equality, respectively standard ordering, relation as discriminators: having only a binary equality test on a type requires Θ(n²) time to find all the occurrences of an element in a list of length n, for each element in the list, even if the equality test takes only constant time. A discriminator accomplishes this in linear time. Likewise, having only a (constant-time) comparison function requires Θ(n log n) time to sort a list of n elements. A discriminator can do this in linear time.
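A rough flavor of an equivalence discriminator, assuming hashable keys (the paper itself avoids this assumption and works generically over order denotations): group values by key in one linear pass, exposing only the induced equivalence classes rather than the keys themselves:

```python
from collections import defaultdict

def discriminate(pairs):
    """Toy equivalence discriminator: given (key, value) pairs, return
    one group of values per distinct key, in a single O(n) bucketing
    pass. Contrast with Theta(n^2) work when only a binary equality
    test on keys is available. Hashing here is an illustrative
    shortcut, not the paper's technique."""
    buckets = defaultdict(list)
    for key, value in pairs:
        buckets[key].append(value)
    return list(buckets.values())

groups = discriminate([(1, 'a'), (2, 'b'), (1, 'c')])
assert sorted(map(tuple, groups)) == [('a', 'c'), ('b',)]
```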
Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality
, 1999
"... (preliminary version) ..."
Tight(er) Worst-case Bounds on Dynamic Searching and Priority Queues
"... PREPRINT. Proc. STOC 2000 ..."