Results 1–10 of 18
Self-adjusting binary search trees
, 1985
Abstract
Cited by 438 (19 self)
The splay tree, a self-adjusting form of binary search tree, is developed and analyzed. The binary search tree is a data structure for representing tables and lists so that accessing, inserting, and deleting items is easy. On an n-node splay tree, all the standard search tree operations have an amortized time bound of O(log n) per operation, where by “amortized time” is meant the time per operation averaged over a worst-case sequence of operations. Thus splay trees are as efficient as balanced trees when total running time is the measure of interest. In addition, for sufficiently long access sequences, splay trees are as efficient, to within a constant factor, as static optimum search trees. The efficiency of splay trees comes not from an explicit structural constraint, as with balanced trees, but from applying a simple restructuring heuristic, called splaying, whenever the tree is accessed. Extensions of splaying give simplified forms of two other data structures: lexicographic or multidimensional search trees and link/cut trees.
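The splaying heuristic can be sketched with ordinary rotations. Below is a minimal illustrative Python sketch (not the paper's own presentation): a recursive splay applying the zig, zig-zig, and zig-zag cases, plus an insert that splits at the splayed root. The names `Node`, `splay`, and `insert` are my own.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(x):
    y = x.left
    x.left, y.right = y.right, x
    return y

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    # Move the node holding `key` (or the last node on its search
    # path) to the root, restructuring two levels per recursive step.
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:            # zig-zig
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:          # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return root if root.left is None else rotate_right(root)
    else:
        if root.right is None:
            return root
        if key > root.right.key:           # zig-zig (mirror)
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:         # zig-zag (mirror)
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return root if root.right is None else rotate_left(root)

def insert(root, key):
    # Splay first, then split the tree at the new root.
    root = splay(root, key)
    if root is None:
        return Node(key)
    if key == root.key:
        return root
    if key < root.key:
        new = Node(key, left=root.left, right=root)
        root.left = None
    else:
        new = Node(key, left=root, right=root.right)
        root.right = None
    return new
```

Every access restructures the tree, which is exactly how the amortized O(log n) bound arises without any balance fields in the nodes.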
Making data structures persistent
, 1989
Abstract
Cited by 287 (6 self)
This paper is a study of persistence in data structures. Ordinary data structures are ephemeral in the sense that a change to the structure destroys the old version, leaving only the new version available for use. In contrast, a persistent structure allows access to any version, old or new, at any time. We develop simple, systematic, and efficient techniques for making linked data structures persistent. We use our techniques to devise persistent forms of binary search trees with logarithmic access, insertion, and deletion times and O(1) space bounds for insertion and deletion.
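Path copying, the simplest of the techniques the paper systematizes, can be sketched in a few lines: an update copies only the nodes on the search path and shares everything else with the old version. Note that this gives O(log n) space per update, not the paper's O(1) bound, which requires the more elaborate node-copying method; the identifiers below are illustrative.

```python
from typing import NamedTuple, Optional

class T(NamedTuple):
    key: int
    left: "Optional[T]" = None
    right: "Optional[T]" = None

def insert(t, key):
    # Path copying: rebuild only the search path; subtrees off the
    # path are shared, so the old version stays fully readable.
    if t is None:
        return T(key)
    if key < t.key:
        return t._replace(left=insert(t.left, key))
    if key > t.key:
        return t._replace(right=insert(t.right, key))
    return t

v0 = None
v1 = insert(v0, 2)
v2 = insert(v1, 1)
v3 = insert(v2, 3)   # v0..v3 all remain accessible
```

Because nodes are immutable, `v3` physically shares its left subtree with `v2`; persistence falls out of never mutating in place.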
A Locally Adaptive Data Compression Scheme
, 1986
Abstract
Cited by 173 (2 self)
A data compression scheme that exploits locality of reference, such as occurs when words are used frequently over short intervals and then fall into long periods of disuse, is described. The scheme is based on a simple heuristic for self-organizing sequential search and on variable-length encodings of integers. We prove that it never performs much worse than Huffman coding and can perform substantially better; experiments on real files show that its performance is usually quite close to that of Huffman coding. Our scheme has many implementation advantages: it is simple, allows fast encoding and decoding, and requires only one pass over the data to be compressed (static Huffman coding takes two passes).
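The self-organizing heuristic behind the scheme is move-to-front: recently seen symbols sit near the front of the table, so they encode to small integers, which is what makes variable-length integer codes pay off under locality of reference. A byte-level sketch (the paper's scheme works on words and couples the positions with variable-length codes; both simplifications are mine):

```python
def mtf_encode(data: bytes) -> list:
    # Emit each byte's current position in the table, then move it
    # to the front; frequently used bytes yield small indices.
    table = list(range(256))
    out = []
    for b in data:
        i = table.index(b)
        out.append(i)
        table.pop(i)
        table.insert(0, b)
    return out

def mtf_decode(codes) -> bytes:
    # Mirror the encoder's table updates to invert the transform.
    table = list(range(256))
    out = bytearray()
    for i in codes:
        b = table.pop(i)
        out.append(b)
        table.insert(0, b)
    return bytes(out)
```

Decoding needs no side information because both sides perform identical table updates, which is also why a single pass suffices.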
AN O(n log log n)-TIME ALGORITHM FOR TRIANGULATING A SIMPLE POLYGON
, 1988
Abstract
Cited by 39 (3 self)
Given a simple n-vertex polygon, the triangulation problem is to partition the interior of the polygon into n−2 triangles by adding n−3 nonintersecting diagonals. We propose an O(n log log n)-time algorithm for this problem, improving on the previously best bound of O(n log n) and showing that triangulation is not as hard as sorting. Improved algorithms for several other computational geometry problems, including testing whether a polygon is simple, follow from our result.
Fully Persistent Lists with Catenation
, 1994
Abstract
Cited by 23 (5 self)
This paper considers the problem of representing stacks with catenation so that any stack, old or new, is available for access or update operations. This problem arises in the implementation of list-based and functional programming languages. A solution is proposed requiring constant time and space for each stack operation except catenation, which requires O(log log k) time and space. Here k is the number of stack operations done before the ...
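As a baseline for the problem statement, persistent stacks are immediate as immutable cons cells, but naive catenation then costs time and space linear in the first stack, which is precisely the cost the paper's O(log log k) structure avoids. A hypothetical sketch (all names mine):

```python
# Persistent stacks as immutable cons cells; None is the empty stack.
def push(s, x):
    return (x, s)

def top(s):
    return s[0]

def pop(s):
    return s[1]

def to_list(s):
    out = []
    while s is not None:
        out.append(s[0])
        s = s[1]
    return out

def catenate(a, b):
    # Naive catenation: copy all of a's cells onto b, O(|a|) time
    # and space per call -- the bound the paper improves to O(log log k).
    if a is None:
        return b
    return (a[0], catenate(a[1], b))
```

Because cells are never mutated, every old stack version survives each operation; only catenation's cost is at issue.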
Purely Functional Representations of Catenable Sorted Lists.
 In Proceedings of the 28th Annual ACM Symposium on Theory of Computing
, 1996
Abstract
Cited by 20 (5 self)
The power of purely functional programming in the construction of data structures has received much attention, not only because functional languages have many desirable properties, but because structures built purely functionally are automatically fully persistent: any and all versions of a structure can coexist indefinitely. Recent results illustrate the surprising power of pure functionality. One such result was the development of a representation of double-ended queues with catenation that supports all operations, including catenation, in worst-case constant time [19].
Purely Functional Random-Access Lists
 In Functional Programming Languages and Computer Architecture
, 1995
Abstract
Cited by 18 (2 self)
We present a new data structure, called a random-access list, that supports array lookup and update operations in O(log n) time, while simultaneously providing O(1) time list operations (cons, head, tail). A closer analysis of the array operations improves the bound to O(min{i, log n}) in the worst case and O(log i) in the expected case, where i is the index of the desired element. Empirical evidence suggests that this data structure should be quite efficient in practice.

1 Introduction

Lists are the primary data structure in every functional programmer's toolbox. They are simple, convenient, and usually quite efficient. The main drawback of lists is that accessing the ith element requires O(i) time. In such situations, functional programmers often find themselves longing for the efficient random access of arrays. Unfortunately, arrays can be quite awkward to implement in a functional setting, where previous versions of the array must be available even after an update. Since arra...
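One concrete realization of such a structure, following the skew-binary idea commonly associated with random-access lists (a sketch under that assumption, not necessarily the paper's exact scheme), keeps a list of complete binary trees tagged with their sizes. Cons, head, and tail touch only the front of the tree list; lookup skips whole trees, then descends one. Update is omitted; it mirrors lookup with path copying.

```python
# A skew-binary random-access list: a Python list of (size, tree)
# pairs, each tree a complete binary tree as nested tuples
# (value, left, right); leaves are (value, None, None).
empty = []

def cons(x, ts):
    # If the first two trees have equal size, fuse them under a new
    # root; otherwise prepend a singleton leaf. O(1) either way.
    if len(ts) >= 2 and ts[0][0] == ts[1][0]:
        s, t1 = ts[0]
        _, t2 = ts[1]
        return [(1 + 2 * s, (x, t1, t2))] + ts[2:]
    return [(1, (x, None, None))] + ts

def head(ts):
    return ts[0][1][0]

def tail(ts):
    # Drop a leaf, or split the first tree into its two halves. O(1).
    s, (v, l, r) = ts[0]
    if s == 1:
        return ts[1:]
    return [(s // 2, l), (s // 2, r)] + ts[1:]

def lookup(ts, i):
    # Skip whole trees by size, then descend one tree.
    for s, t in ts:
        if i < s:
            return _tree_lookup(s, t, i)
        i -= s
    raise IndexError(i)

def _tree_lookup(s, t, i):
    v, l, r = t
    if i == 0:
        return v
    half = s // 2
    if i <= half:
        return _tree_lookup(half, l, i - 1)
    return _tree_lookup(half, r, i - 1 - half)
```

There are O(log n) trees and each has depth O(log n), which yields the stated O(log n) lookup while the plain list operations stay constant time.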
Finger Search Trees with Constant Insertion Time
 In Proc. 9th Annual ACM-SIAM Symposium on Discrete Algorithms
, 1997
Abstract
Cited by 14 (3 self)
We consider the problem of implementing finger search trees on the pointer machine, i.e., how to maintain a sorted list such that searching for an element x, starting the search at any arbitrary element f in the list, only requires logarithmic time in the distance between x and f in the list. We present the first pointer-based implementation of finger search trees allowing new elements to be inserted at any arbitrary position in the list in worst case constant time. Previously, the best known insertion time on the pointer machine was O(log n), where n is the total length of the list. On a unit-cost RAM, a constant insertion time has been achieved by Dietz and Raman by using standard techniques of packing small problem sizes into a constant number of machine words. Deletion of a list element is supported in O(log n) time, which matches the previous best bounds. Our data structure requires linear space.

1 Introduction

A finger search tree is a data structure which stores a sorte...
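The finger-search cost model, O(log d) in the distance d between the finger and the target, can be illustrated in a much simpler setting than the pointer-machine trees the paper studies: a flat sorted array. Gallop outward from the finger with doubling steps, then binary-search the bracketed range. Assumes distinct keys; `finger_search` is an illustrative name, not from the paper.

```python
from bisect import bisect_left

def finger_search(a, f, x):
    # Search sorted array `a` for `x` starting at finger index `f`.
    # Exponential (galloping) steps bracket the target in O(log d)
    # probes, then a binary search of the bracket finishes the job.
    n = len(a)
    if x >= a[f]:
        lo, step = f, 1
        while f + step < n and a[f + step] < x:
            lo, step = f + step, step * 2
        hi = min(f + step, n - 1) + 1
        return bisect_left(a, x, lo, hi)
    else:
        hi, step = f, 1
        while f - step >= 0 and a[f - step] >= x:
            hi, step = f - step, step * 2
        lo = max(f - step, 0)
        return bisect_left(a, x, lo, hi)
```

A finger search tree provides the same O(log d) guarantee under insertions and deletions, which a static array cannot; the paper's contribution is making insertion worst-case constant on the pointer machine.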
Optimal Finger Search Trees in the Pointer Machine
, 2002
Abstract
Cited by 12 (3 self)
We develop a new finger search tree with worst case constant update time in the Pointer Machine (PM) model of computation. This was a major problem in the field of Data Structures and was tantalizingly open for over twenty years, while many attempts by researchers were made to solve it. The result comes as a consequence of the innovative mechanism that guides the rebalancing operations, combined with incremental multiple splitting and fusion techniques over nodes.
Space-efficient finger search on degree-balanced search trees
 In SODA
, 2003
Abstract
Cited by 10 (1 self)
We show how to support the finger search operation on degree-balanced search trees in a space-efficient manner that retains a worst-case time bound of O(log d), where d is the difference in rank between successive search targets. While most existing tree-based designs allocate linear extra storage in the nodes (e.g., for side links and parent pointers), our design maintains a compact auxiliary data structure called the “hand” during the lifetime of the tree and imposes no other storage requirement within the tree. The hand requires O(log n) space for an n-node tree and has a relatively simple structure. It can be updated synchronously during insertions and deletions with time proportional to the number of structural changes in the tree. The auxiliary nature of the hand also makes it possible to introduce finger searches into any existing implementation without modifying the underlying data representation (e.g., any implementation of Red-Black trees can be used). Together these factors make finger searches more appealing in practice. Our design also yields a simple yet optimal in-order walk algorithm with worst-case O(1) work per increment (again without any extra storage requirement in the nodes), and we believe our algorithm can be used in database applications when the overall performance is very sensitive to retrieval latency.
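For contrast with the paper's O(1) worst-case-per-increment walk, here is the standard stack-based in-order walk, whose explicit stack of ancestors plays the role of a rudimentary "hand": O(log n) auxiliary space, no parent pointers in the nodes, but only O(1) amortized (not worst-case) work per yielded key. This is the textbook algorithm, not the paper's.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(root):
    # Walk the tree in key order. The stack holds the ancestors of
    # the current position -- a crude "hand" into the tree. Each node
    # is pushed and popped exactly once, so the total work is O(n),
    # i.e. O(1) amortized per key, but a single step can take O(log n).
    stack, node = [], root
    while stack or node:
        while node:
            stack.append(node)
            node = node.left
        node = stack.pop()
        yield node.key
        node = node.right
```

The paper's hand structure removes the occasional long step, giving O(1) work per increment in the worst case while still storing nothing extra in the nodes.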