Results 1 – 8 of 8
Dynamic Optimality–Almost
Proc. 45th Annu. IEEE Sympos. Foundations Comput. Sci., 2004
Abstract

Cited by 13 (1 self)
We present an O(lg lg n)-competitive online binary search tree, improving upon the best previous (trivial) competitive ratio of O(lg n). This is the first major progress on Sleator and Tarjan's dynamic optimality conjecture of 1985 that O(1)-competitive binary search trees exist.
6.897: Advanced Data Structures (Spring 2005), Lecture 3, February 8, 2005
Abstract

Cited by 4 (0 self)
Recall from last lecture that we are looking at the document-retrieval problem. The problem can be stated as follows: given a set of texts T1, T2, ..., Tk and a pattern P, determine the distinct texts in which the pattern occurs. In particular, we are allowed to preprocess the texts in order to be able to answer the query faster. Our preprocessing choice was the use of a single suffix tree, in which all the suffixes of all the texts appear, each suffix ending with a distinct symbol that determines the text in which the suffix appears. To answer a query we reduced the problem to range-min queries, which in turn were reduced to the least common ancestor (LCA) problem on the Cartesian tree of an array of numbers. The Cartesian tree is constructed recursively by setting its root to be the minimum element of the array and recursively constructing its two subtrees from the left and right partitions of the array. The range-min query for an interval [i, j] is then equivalent to finding the LCA of the two nodes of the Cartesian tree that correspond to i and j. In this lecture we continue to see how we can solve the LCA problem on any static tree. This will involve a reduction of the LCA problem back to the range-min query problem (!) and then a ...
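The reduction described above can be sketched in a few lines. This is my own illustrative code, not the lecture's: `cartesian_tree` uses the standard stack-based O(n) construction, and `lca` is a deliberately naive ancestor walk just to show that the range-min of [i, j] is the value at the LCA of nodes i and j.

```python
def cartesian_tree(a):
    """Return parent links of the Cartesian tree of array a.

    The root holds the minimum; left/right subtrees come from the
    partitions around it.  A stack-based scan builds it in O(n).
    """
    parent = [None] * len(a)
    stack = []                       # indices with increasing values
    for i in range(len(a)):
        last = None
        while stack and a[stack[-1]] > a[i]:
            last = stack.pop()
        if last is not None:
            parent[last] = i         # popped subtree becomes left child of i
        if stack:
            parent[i] = stack[-1]    # i is right child of the stack top
        stack.append(i)
    return parent

def lca(parent, u, v):
    """Naive LCA by walking ancestor paths (for illustration only)."""
    ancestors = set()
    while u is not None:
        ancestors.add(u)
        u = parent[u]
    while v not in ancestors:
        v = parent[v]
    return v

a = [3, 1, 4, 1, 5, 9, 2, 6]
p = cartesian_tree(a)
# range-min of [4, 7] equals the value at the LCA of nodes 4 and 7
assert a[lca(p, 4, 7)] == min(a[4:8])
```

Of course, this naive LCA takes O(n) per query; the point of the lecture is how to answer such queries in O(1) after linear preprocessing.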
Queaps, 2002
Abstract

Cited by 1 (0 self)
We present a new priority queue data structure, the queap, that executes insertion in O(1) amortized time and extract-min in O(log(k + 2)) amortized time if there are k items that have been in the heap longer than the item to be extracted. Thus if the operations on the queap are first-in first-out, as on a queue, each operation will execute in constant time. This idea of trying to make operations on the least recently accessed items fast, which we call the queueish property, is a natural complement to the working-set property of certain data structures, such as splay trees and pairing heaps, where operations on the most recently accessed data execute quickly. However, we show that the queueish property is in some sense more difficult than the working-set property by demonstrating that it is impossible to create a queueish binary search tree, but that many search data structures can be made almost queueish with an O(log log n) amortized extra cost per operation.
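To make the parameter k in the bound concrete, here is a toy sketch of my own. It is emphatically not the queap itself (a real queap needs a more intricate structure to achieve O(log(k + 2)) extract-min); it only *measures* k, the number of items that have sat in the heap longer than the extracted one, so the bound's FIFO special case is visible.

```python
import heapq
import itertools

class ToyQueap:
    """Toy priority queue exposing the 'k' parameter of the queap bound.

    NOT the actual queap data structure; this brute-force version just
    reports, on each extract-min, how many surviving items are older
    than the one extracted.
    """
    def __init__(self):
        self._heap = []                      # entries are (key, insertion stamp)
        self._stamp = itertools.count()

    def insert(self, key):
        heapq.heappush(self._heap, (key, next(self._stamp)))

    def extract_min(self):
        key, stamp = heapq.heappop(self._heap)
        # k = items still present that were inserted before this one
        k = sum(1 for _, s in self._heap if s < stamp)
        return key, k

q = ToyQueap()
for x in [5, 3, 8, 1]:
    q.insert(x)
key, k = q.extract_min()   # the minimum, 1, is also the newest item, so k = 3
```

When accesses are queue-like (each extracted item is the oldest), k is always 0 and O(log(k + 2)) collapses to O(1), which is exactly the queueish behavior described above.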
Overview, 2010
Abstract
In the last lecture we discussed Binary Search Trees (BSTs) and introduced them as a model of computation. A quick recap: a search is conducted with a pointer starting at the root, which is free to move about the tree and perform rotations; however, the pointer must at some point in the operation visit the item being searched. The cost of the search is simply the total number of distinct nodes in the tree that have been visited by the pointer during the operation. We measure the total cost of executing a sequence of searches S = ⟨s1, s2, s3, ...⟩, where each search si is chosen from among the fixed set of n keys in the BST. We have witnessed that there are access sequences which require o(log(n)) time per operation. There are also some deterministic sequences of n queries (for example, the bit-reversal permutation) which require a total running time of Ω(n log(n)) for any BST algorithm. This disparity, however, does not rule out the possibility of having an instance-optimal BST. By this we mean: let OPT(S) denote the minimal cost for executing the access sequence S in the BST model, i.e., the cost of the best BST algorithm which has access to the sequence a priori. It is believed that splay trees are the "best BST". However, they are not known to have an o(log(n)) competitive ratio. Also, notice that we are only concerned with the cost of the specified operations on the BST and we are not ...
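The bit-reversal permutation mentioned above is easy to generate; here is a small sketch of my own for concreteness. Key i is mapped to the integer whose binary representation is i's bits reversed, and accessing the n = 2^lg n keys in this order is the classic sequence forcing any BST algorithm to spend Ω(n log n) total time.

```python
def bit_reversal_permutation(log_n):
    """Return the bit-reversal permutation of {0, ..., 2**log_n - 1}."""
    n = 1 << log_n

    def reverse_bits(x):
        # Reverse the low log_n bits of x.
        r = 0
        for _ in range(log_n):
            r = (r << 1) | (x & 1)
            x >>= 1
        return r

    return [reverse_bits(i) for i in range(n)]

bit_reversal_permutation(3)  # [0, 4, 2, 6, 1, 5, 3, 7]
```

Note how consecutive accesses in the output jump between distant keys: that lack of locality is what defeats every BST algorithm on this sequence.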
Upper Bounds for Maximally Greedy Binary Search Trees
Abstract
At SODA 2009, Demaine et al. presented a novel connection between binary search trees (BSTs) and subsets of points in the plane. This connection was independently discovered by Derryberry et al. As part of their results, Demaine et al. considered GreedyFuture, an offline BST algorithm that greedily rearranges the search path to minimize the cost of future searches. They showed that GreedyFuture is actually an online algorithm in their geometric view, and that there is a way to turn GreedyFuture into an online BST algorithm with only a constant-factor increase in total search cost. Demaine et al. conjectured this algorithm was dynamically optimal, but no upper bounds were given in their paper. We prove the first nontrivial upper bounds for the cost of search operations using GreedyFuture, including an access lemma similar to that found in Sleator and Tarjan's classic paper on splay trees.
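The geometric view referred to above maps an access sequence to points (key, time), and a point set corresponds to a valid BST execution exactly when it is "arborally satisfied": every pair of points not sharing an x- or y-coordinate must have a third point in the closed rectangle they span. Assuming that definition from Demaine et al., here is a brute-force check of my own, for illustration only (O(n³), nothing like an efficient algorithm):

```python
from itertools import combinations

def is_arborally_satisfied(points):
    """Brute-force check of the arborally-satisfied condition:
    for every pair of points with distinct x and distinct y, some
    other point lies in (or on) the rectangle they span."""
    pts = set(points)
    for (x1, y1), (x2, y2) in combinations(pts, 2):
        if x1 == x2 or y1 == y2:
            continue                      # aligned pairs impose no constraint
        lox, hix = min(x1, x2), max(x1, x2)
        loy, hiy = min(y1, y2), max(y1, y2)
        if not any((x, y) != (x1, y1) and (x, y) != (x2, y2)
                   and lox <= x <= hix and loy <= y <= hiy
                   for (x, y) in pts):
            return False
    return True

# Two opposite rectangle corners alone are unsatisfied; adding a
# point on the rectangle's boundary satisfies the pair.
is_arborally_satisfied({(1, 1), (2, 2)})          # False
is_arborally_satisfied({(1, 1), (2, 2), (1, 2)})  # True
```

In this view, GreedyFuture corresponds to sweeping upward in time and greedily adding, at each row, the minimal set of points needed to keep the set satisfied.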
6.897: Advanced Data Structures, Spring 2003
Abstract
In the last lecture we considered the successor problem for a bounded universe of size u. We began looking at the van Emde Boas [3] data structure, which implements Insert, Delete, Successor, and Predecessor in O(lg lg u) time per operation. In this lecture we finish up van Emde Boas, and improve the space complexity from our original O(u) to O(n). We also look at perfect hashing (first static, then dynamic), using it to improve the space complexity of van Emde Boas and to implement a simpler data structure with the same running time, y-fast trees.

2 van Emde Boas

2.1 Pseudocode for vEB operations

We start with pseudocode for Insert, Delete, and Successor. (Predecessor is symmetric to Successor.)

Insert(x, S):
    if x < min[S]: swap x and min[S]
    if min[sub[S][high(x)]] ≠ nil:
        Insert(low(x), sub[S][high(x)])
    else:                                   // was empty
        min[sub[S][high(x)]] ← low(x)
        Insert(high(x), summary[S])
    if x > max[S]: max[S] ← x

Delete(x, S):
    if min[S] = nil or x < min[S]: return
    if min[S] = x:
        i ← min[summary[S]]
        x ← i·√u + min[sub[S][i]]
        min[S] ← x
    Delete(low(x), sub[S][high(x)])
    if min[sub[S][high(x)]] = nil:          // now empty
        Delete(high(x), summary[S])         // in this case, the first recursive call was cheap

Successor(x, S):
    if x < min[S]: return min[S]
    if low(x) < max[sub[S][high(x)]]:
        return high(x)·√u + Successor(low(x), sub[S][high(x)])
    else:
        i ← Successor(high(x), summary[S])
        return i·√u + min[sub[S][i]]

2.2 Tree view of van Emde Boas

The van Emde Boas data structure can be viewed as a tree of trees. The upper and lower "halves" of the tree are of height (1/2) lg u; that is, we cut the tree in half by level. The upper tree has √u nodes, as does each of the subtrees hanging off its leaves. (These subtrees correspond to the sub[S] data structures in Section 2.1.)
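The high(x), low(x), and i·√u + ... steps in the pseudocode above are just bit arithmetic when u = 2^w. Here is a minimal sketch of my own (the function names follow the pseudocode; the lecture does not prescribe this exact code):

```python
# Split a w-bit key x into its high half (which substructure) and
# low half (position inside that substructure), for u = 2**w.

def high(x, w):
    """Upper w/2 bits of x: index of the substructure sub[S][high(x)]."""
    return x >> (w // 2)

def low(x, w):
    """Lower w/2 bits of x: position of x within that substructure."""
    return x & ((1 << (w // 2)) - 1)

def index(h, l, w):
    """Recombine halves: this is the i*sqrt(u) + ... step, since
    sqrt(u) = 2**(w/2), so i*sqrt(u) is a left shift by w/2 bits."""
    return (h << (w // 2)) | l

x, w = 0b1011_0110, 8            # u = 2**8, sqrt(u) = 2**4 = 16
assert high(x, w) == 0b1011
assert low(x, w) == 0b0110
assert index(high(x, w), low(x, w), w) == x
```

Because these splits are O(1) word operations and each recursive call halves the number of bits, the recurrence T(w) = T(w/2) + O(1) gives the O(lg lg u) bound quoted above.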