Results 11 - 19 of 19
Confluently Persistent Deques via Data-Structural Bootstrapping
 J. of Algorithms
, 1993
Abstract

Cited by 15 (4 self)
We introduce data-structural bootstrapping, a technique to design data structures recursively, and use it to design confluently persistent deques. Our data structure requires O(log^3 k) worst-case time and space per deletion, where k is the total number of deque operations, and constant worst-case time and space for other operations. Further, the data structure allows a purely functional implementation, with no side effects. This improves a previous result of Driscoll, Sleator, and Tarjan.
[1] An extended abstract of this paper was presented at the 4th ACM-SIAM Symposium on Discrete Algorithms, 1993. [2] Supported by a Fannie and John Hertz Foundation fellowship, National Science Foundation Grant No. CCR-8920505, and the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) under NSF-STC88-09648. [3] Also affiliated with NEC Research Institute, 4 Independence Way, Princeton, NJ 08540. Research at Princeton University partially supported by the National Science Foundatio...
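The abstract's key property, a purely functional deque with no side effects, can be made concrete with a minimal sketch. The two-stack deque below is a classic textbook structure, not the paper's bootstrapped one; it is shown only to illustrate that when every operation returns a new version built from immutable parts, all old versions remain valid (persistence comes for free).

```python
# A purely functional deque sketched as two immutable stacks (front, back).
# Not the paper's data structure; illustrative only. Every operation
# returns a NEW deque, so earlier versions are never destroyed.

EMPTY = ((), ())  # (front stack, back stack), both immutable tuples

def push_front(d, x):
    front, back = d
    return ((x,) + front, back)

def push_back(d, x):
    front, back = d
    return (front, (x,) + back)

def pop_front(d):
    # assumes d is non-empty
    front, back = d
    if front:
        return front[0], (front[1:], back)
    rev = tuple(reversed(back))   # rebalance: move back stack to the front
    return rev[0], (rev[1:], ())

def to_list(d):
    front, back = d
    return list(front) + list(reversed(back))
```

After `v2 = push_back(v1, 2)`, the old version `v1` is still a valid deque and can itself be extended, which is exactly the behavior persistence demands.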
Persistent data structures
 In Handbook on Data Structures and Applications, CRC Press 2001, Dinesh Mehta and Sartaj Sahni (editors). Boroujerdi, A., and Moret, B.M.E., "Persistency in Computational Geometry," Proc. 7th Canadian Conf. Comp. Geometry, Quebec
, 1995
Making Data Structures Confluently Persistent
, 2001
Abstract

Cited by 12 (0 self)
We address a long-standing open problem of [10, 9], and present a general transformation that makes any pointer-based data structure confluently persistent. Such transformations for fully persistent data structures are given in [10], greatly improving the performance compared to the naive scheme of simply copying the inputs. Unlike fully persistent data structures, where both the naive scheme and the fully persistent scheme of [10] are feasible, we show that the naive scheme for confluently persistent data structures is itself infeasible (it requires exponential space and time). Thus, prior to this paper there was no feasible method for implementing confluently persistent data structures at all. Our methods give an exponential reduction in space and time compared to the naive method, placing confluently persistent data structures in the realm of possibility.
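The infeasibility of the naive scheme is easy to see in miniature. In confluent persistence a meld may combine a version with itself, so copying both inputs doubles the structure on every operation; the toy run below shows 2^k elements after only k melds. This is only an illustration of the blowup the paper's transformation avoids.

```python
# Why naive copying fails for confluent persistence: melding a version
# with itself and copying both inputs doubles the size every time, so
# k operations cost 2**k space. Illustrative sketch only.

def naive_meld(a, b):
    # the naive scheme: build the melded version by copying both inputs
    return list(a) + list(b)

v = [1]
for _ in range(20):
    v = naive_meld(v, v)   # each version is melded with itself

print(len(v))  # 1048576 == 2**20 elements after just 20 operations
```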
Persistence, Offline Algorithms, and Space Compaction
, 1991
Abstract
We consider dynamic data structures in which updates rebuild a static solution. Space bounds for persistent versions of these structures can often be reduced by using an offline persistent data structure in place of the static solution. We apply this technique to decomposable search problems, to dynamic linear programming, and to maintaining the minimum spanning tree in a dynamic graph. Our algorithms admit tradeoffs of update time vs. query time, and of time vs. space.
Traceable Data Structures
, 2006
Abstract
We consider the problem of tracking the history of a shared data structure so that a user can efficiently view any previous version of the structure (persistence), and efficiently recover information about all previous operations performed on the data structure, including both reads and writes (traceability). We present a mechanism that works for any bounded-degree linked structure. The mechanism supports any sequence of m operations in O(m) time assuming a RAM, and in O(m α(m, m)) time assuming a pointer machine. We show that the bound is tight for a pointer machine. Applications of traceable data structures are copious. For example, the technique could be used for protecting privacy, for auditing, for error notification, and even to automatically dynamize static algorithms. In the case of privacy, the approach could be used for storing data structures with sensitive information. If some information is leaked or improperly used, then it is possible to go back and see who read that data, or even to trace through how the data was read. This gives some protection, or at least a deterrent, against improper access to the data. With traceable structures it is also possible to track when an error was introduced into a data structure, or to identify the readers of any erroneous data so they can be notified. In algorithm dynamization, any change to the input of the algorithm changes the output only of the functions that read the change. A traceable data structure allows all these functions to be found efficiently so they can be re-executed on the new input. Classification: Data structures
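The traceability idea, recording reads as well as writes so one can later ask "who saw this value", can be sketched with a toy single-cell structure. This is only an illustration of the interface; it does not achieve the paper's O(m α(m, m)) pointer-machine bounds, and the class and method names are invented for the example.

```python
# Toy traceable cell: every read and write is logged as (actor, op, version),
# so past versions can be viewed (persistence) and the readers of any
# version can be recovered (traceability). Illustrative sketch only.

class TraceableCell:
    def __init__(self):
        self.history = []   # value at version i is history[i]
        self.log = []       # (actor, 'read' or 'write', version)

    def write(self, actor, value):
        self.history.append(value)
        self.log.append((actor, 'write', len(self.history) - 1))

    def read(self, actor, version=None):
        if version is None:
            version = len(self.history) - 1   # default: newest version
        self.log.append((actor, 'read', version))
        return self.history[version]

    def readers_of(self, version):
        # trace: every actor who read the given version
        return [a for a, op, v in self.log if op == 'read' and v == version]
```

In the privacy scenario from the abstract, `readers_of` is the query that identifies who accessed leaked or erroneous data.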
University of Ulm,
Abstract
This paper presents a dual approach to detect intersections of hyperplanes and convex polyhedra in arbitrary dimensions. In d dimensions, the time complexities of the dual algorithms are O(2^d log n) for the hyperplane-polyhedron intersection problem, and O((2d)^(d-1) log^(d-1) n) for the polyhedron-polyhedron intersection problem. These results are the first of their kind for d > 3. In two dimensions, these time bounds are achieved with linear space and preprocessing. In three dimensions, the hyperplane-polyhedron intersection problem is also solved with linear space and preprocessing; quadratic space and preprocessing, however, is required for the polyhedron-polyhedron intersection problem. For general d, the dual algorithms require O(n^(2d)) space and preprocessing. All of these results readily extend to unbounded polyhedra. CR categories: E.1, F.2.2.
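To make the detection problem concrete, here is its simplest 2D instance solved by brute force: a line (the 2D hyperplane) a*x + b*y = c intersects a convex polygon iff the polygon's vertices do not all lie strictly on one side. This O(n) check is only a baseline for intuition; the paper's dual algorithms answer such queries in O(2^d log n) time after preprocessing.

```python
# Brute-force 2D hyperplane-polyhedron detection: a line a*x + b*y = c
# meets a convex polygon iff the signed distances of its vertices take
# both signs (or a zero). Illustrative baseline, not the paper's method.

def line_hits_convex_polygon(a, b, c, pts):
    sides = [a * x + b * y - c for x, y in pts]
    return min(sides) <= 0 <= max(sides)
```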
Persistent Linked Structures at Constant WorstCase Cost
Abstract
We present a method for making linked structures with nodes of in-degree not exceeding 1 partially persistent at a worst-case time cost of O(1) per access step and a worst-case time and space cost of O(1) per update step. The last two improve the best previous result, which gave O(1) amortized bounds on time and space. Our results extend to full persistence.

1 Introduction

Making a change to an ordinary data structure destroys the old version, leaving only the new one. Such a structure is said to be ephemeral. With a persistent data structure, on the other hand, old versions are not destroyed, making it possible to access or modify old versions as well as the newest one. A structure is said to be partially persistent if every version can be accessed but only the newest version can be modified, and fully persistent if every version can be both accessed and modified. Researchers have devised partially or fully persistent forms for a number of data structures, including stacks [10],...
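Partial persistence, every version readable but only the newest writable, is easiest to see in the classic fat-field baseline: each field keeps a version-stamped list of its writes, and a read at version t binary-searches for the last write at or before t. This simple scheme costs O(log m) per access rather than the O(1) worst case this paper achieves; it is shown only as the point of comparison.

```python
# Fat-field baseline for partial persistence: a field stores all its
# (version, value) writes; reading version t finds the last write <= t.
# O(log m) access; shown as the simple baseline, not the paper's method.

import bisect

class FatField:
    def __init__(self):
        self.versions = []  # strictly increasing version stamps
        self.values = []

    def write(self, version, value):
        # partial persistence: writes only ever extend the newest version
        self.versions.append(version)
        self.values.append(value)

    def read(self, version):
        i = bisect.bisect_right(self.versions, version) - 1
        return self.values[i] if i >= 0 else None
```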
Thesis Summary: The Diameter of Permutation Groups; Fully Persistent Search Trees
, 1986
Abstract
This thesis comprises two disjoint topics: the diameter of permutation groups and fully persistent search trees. The diameter of a permutation group is the length of the longest product of generators required to reach a group element. For example, the diameter of a permutation-group puzzle like Rubik's Cube is the largest number of moves necessary to solve the puzzle. There are well-known polynomial-time algorithms to determine whether it is possible to reach a particular permutation with a given set of generators, but these algorithms can give a product exponentially longer than is required. We show that if the generators are constrained to be cycles with degree bounded by a constant, then the diameter of the group is O(n^2). Moreover, an O(n^2)-length product expressing a given permutation can be found in polynomial time. A persistent search tree differs from an ordinary search tree in that after an insertion or deletion, the old version of the tree can still be searched. This thesis describes lazy evaluation techniques for search trees that allow them to be made fully persistent. A fully persistent search tree supports insertions, deletions, and queries in any version, past or present. The time per query or update is O(log m), where m is the total number of updates, and the space needed is O(1) per update. These bounds are the best possible.
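The behavior a fully persistent search tree must support can be sketched with the classic path-copying scheme: an insert copies only the O(log n) nodes on its search path and shares everything else with the previous version. Note the hedge: path copying costs O(log n) space per update, so it does not achieve the O(1)-space bound of the thesis's lazy techniques; it only demonstrates the version semantics.

```python
# Path-copying persistent BST: insert returns a NEW root, copying only
# the nodes on the search path; all other subtrees are shared between
# versions. Classic baseline, not the thesis's O(1)-space method.

class Node:
    __slots__ = ('key', 'left', 'right')
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    return Node(root.key, root.left, insert(root.right, key))

def contains(root, key):
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False
```

Because `insert` never mutates an existing node, every earlier root remains a fully searchable version, and untouched subtrees are shared by identity between versions.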