XML goes native: Runtime representations for Xtatic
 In 14th International Conference on Compiler Construction
, 2004
"... Abstract. Xtatic is a lightweight extension of C ♯ offering native support for statically typed XML processing. XML trees are builtin values in Xtatic, and static analysis of the trees manipulated by programs is part of the ordinary job of the typechecker. “Tree grep ” pattern matching is used to i ..."
Cited by 16 (7 self)
Abstract. Xtatic is a lightweight extension of C# offering native support for statically typed XML processing. XML trees are built-in values in Xtatic, and static analysis of the trees manipulated by programs is part of the ordinary job of the typechecker. “Tree grep” pattern matching is used to investigate and transform XML trees. Xtatic’s surface syntax and type system are tightly integrated with those of C#. Beneath the hood, however, an implementation of Xtatic must address a number of issues common to any language supporting a declarative style of XML processing (e.g., XQuery, XSLT, XDuce, CDuce, Xact, Xen, etc.). In particular, it must provide representations for XML tags, trees, and textual data that use memory efficiently, support efficient pattern matching, allow maximal sharing of common substructures, and permit separate compilation. We analyze these representation choices in detail and describe the solutions used by the Xtatic compiler.
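One of the representation requirements the abstract lists, maximal sharing of common substructures, is commonly met by hash-consing: structurally identical subtrees are built once and reused. A minimal Python sketch of that general technique (illustrative only; all names are hypothetical and this is not Xtatic's actual runtime representation):

```python
# Hash-consing sketch: structurally identical subtrees become one shared
# object, so equality checks are pointer comparisons and memory is reused.
# Illustrative names only; not Xtatic's actual runtime.

_pool = {}  # interning table: structural key -> the unique shared node

def node(tag, children=()):
    """Return the unique node for (tag, children), creating it if new."""
    # Children are themselves interned, so their identities form the key.
    key = (tag, tuple(id(c) for c in children))
    if key not in _pool:
        _pool[key] = (tag, tuple(children))
    return _pool[key]

# Two structurally identical subtrees are the very same object:
a = node("item", (node("text"),))
b = node("item", (node("text"),))
assert a is b
```

Interning like this also keeps separate compilation workable: any module rebuilding the same subtree lands on the shared representative.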
Persistent data structures
 In Handbook on Data Structures and Applications, CRC Press, 2001, Dinesh Mehta and Sartaj Sahni (editors). Boroujerdi, A., and Moret, B.M.E., "Persistency in Computational Geometry," Proc. 7th Canadian Conf. Comp. Geometry, Quebec
, 1995
"... ..."
Making Data Structures Confluently Persistent
, 2001
"... We address a longstanding open problem of [10, 9], and present a general transformation that transforms any pointer based data structure to be confluently persistent. Such transformations for fully persistent data structures are given in [10], greatly improving the performance compared to the naive ..."
Cited by 10 (0 self)
We address a longstanding open problem of [10, 9], and present a general transformation that transforms any pointer-based data structure to be confluently persistent. Such transformations for fully persistent data structures are given in [10], greatly improving the performance compared to the naive scheme of simply copying the inputs. Unlike fully persistent data structures, where both the naive scheme and the fully persistent scheme of [10] are feasible, we show that the naive scheme for confluently persistent data structures is itself infeasible (requires exponential space and time). Thus, prior to this paper there was no feasible method for implementing confluently persistent data structures at all. Our methods give an exponential reduction in space and time compared to the naive method, placing confluently persistent data structures in the realm of possibility.
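The infeasibility of the naive scheme is easy to see concretely: a confluent update may take the same version as both arguments, so copying the inputs doubles the structure at every meld. A small Python sketch of this counting argument (an illustration only, not the paper's transformation):

```python
# Naive confluent persistence copies both input versions on every meld.
# Melding a version with itself n times yields a structure of size 2**n,
# which is why the abstract calls the naive scheme infeasible.

def meld_naive(v1, v2):
    return list(v1) + list(v2)  # full copy of both input versions

v = [0]
sizes = []
for _ in range(10):
    v = meld_naive(v, v)  # confluent update: both arguments are versions
    sizes.append(len(v))

assert sizes == [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
```

Under a linear version history (fully persistent setting) each update touches one version, so this doubling cannot arise; the version DAG is what makes the naive approach explode.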
Extended Static Checking of Call-by-Value Functional Programs
, 2007
"... We present a Hoare logic for a callbyvalue programming language equipped with recursive, higherorder functions, algebraic data types, and a polymorphic type system in the style of Hindley and Milner. It is the theoretical basis for a tool that extracts proof obligations out of programs annotated ..."
Cited by 2 (0 self)
We present a Hoare logic for a call-by-value programming language equipped with recursive, higher-order functions, algebraic data types, and a polymorphic type system in the style of Hindley and Milner. It is the theoretical basis for a tool that extracts proof obligations out of programs annotated with logical assertions. These proof obligations, expressed in a typed, higher-order logic, are discharged using off-the-shelf automated or interactive theorem provers. Although the technical apparatus that we exploit is by now standard, its application to call-by-value functional programming languages appears to be new, and (we claim) deserves attention. As a sample application, we check the partial correctness of a balanced binary search tree implementation.
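The annotate-and-check workflow the abstract describes can be approximated dynamically: runtime assertions stand in for the proof obligations that the tool extracts and discharges statically, and the subject language here is Python rather than the ML-like language of the paper. A hypothetical sketch using the paper's sample application, binary search tree insertion:

```python
# Assertions below play the role of extracted proof obligations; the tool
# described in the abstract would discharge these statically with a prover.

def insert(t, x):
    """Insert x into a BST; t is None or a (left, key, right) tuple."""
    if t is None:
        return (None, x, None)
    left, key, right = t
    if x < key:
        return (insert(left, x), key, right)
    if x > key:
        return (left, key, insert(right, x))
    return t  # x already present

def inorder(t):
    return [] if t is None else inorder(t[0]) + [t[1]] + inorder(t[2])

t = None
for x in [5, 2, 8, 1]:
    t = insert(t, x)
    keys = inorder(t)
    assert x in keys              # postcondition: x is now in the tree
    assert keys == sorted(keys)   # invariant: in-order traversal is sorted
```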
Traceable Data Structures
, 2006
"... We consider the problem of tracking the history of a shared data structure so that a user can efficiently view any previous version of the structure (persistence), and efficiently recover information about all previous operations performed on the data structure, including both reads and writes (trac ..."
We consider the problem of tracking the history of a shared data structure so that a user can efficiently view any previous version of the structure (persistence), and efficiently recover information about all previous operations performed on the data structure, including both reads and writes (traceability). We present a mechanism that works for any bounded-degree linked structure. The mechanism supports any sequence of m operations in O(m) time assuming a RAM, and in O(mα(m, m)) assuming a pointer machine. We show that the bound is tight for a pointer machine. Applications of traceable data structures are copious. For example, one could implement the technique for protecting privacy, for auditing, error notification and even to automatically dynamize static algorithms. In the case of privacy, the approach could be used for storing data structures with sensitive information. If some information is leaked or improperly used, then it is possible to go back and see who read that data or even to trace through how the data was read. This gives some protection, or at least a deterrent, against improper access to the data. With traceable structures it is also possible to track when an error was introduced into a data structure, or to identify the readers of any erroneous data so they can be notified. In algorithm dynamization, any change to the input of the algorithm changes the output only of the functions that read the change. A traceable data structure allows for all these functions to be found efficiently so they can be re-executed on the new input. Classification: Data structures
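The core idea, recording reads as well as writes so that any access can later be traced, can be sketched for a single shared cell. (The paper's mechanism handles arbitrary bounded-degree linked structures within the stated time bounds; this illustrative class makes no such guarantees.)

```python
# Traceability sketch: a shared cell that logs every read and write, so an
# auditor can later ask who accessed which value. Illustrative only.

class TraceableCell:
    def __init__(self, value):
        self._value = value
        self.log = []  # (operation, user, value), in chronological order

    def read(self, user):
        self.log.append(("read", user, self._value))
        return self._value

    def write(self, user, value):
        self.log.append(("write", user, value))
        self._value = value

cell = TraceableCell("secret")
cell.read("alice")
cell.write("bob", "updated")
cell.read("carol")

# Audit query from the privacy application: who read the original value?
readers = [u for op, u, v in cell.log if op == "read" and v == "secret"]
assert readers == ["alice"]
```

The log also supports the other applications the abstract mentions: replaying it identifies when an erroneous value was written and which readers saw it afterwards.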
Purely Functional Worst Case Constant Time Catenable Sorted Lists
"... Abstract. We present a purely functional implementation of search trees that requires O(log n) time for search and update operations and supports the join of two trees in worst case constant time. Hence, we solve an open problem posed by Kaplan and Tarjan as to whether it is possible to envisage a d ..."
Abstract. We present a purely functional implementation of search trees that requires O(log n) time for search and update operations and supports the join of two trees in worst case constant time. Hence, we solve an open problem posed by Kaplan and Tarjan as to whether it is possible to envisage a data structure supporting simultaneously the join operation in O(1) time and the search and update operations in O(log n) time.
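The interface in question can be sketched as follows. Note that this naive join runs in O(n + m); the point of the paper is a representation that achieves worst-case O(1) join while keeping O(log n) search and update.

```python
# Interface sketch for catenable sorted lists. The naive join below copies
# both lists; the paper's structure joins in worst-case O(1) time.

import bisect

def search(xs, x):
    """O(log n) membership test on a sorted list."""
    i = bisect.bisect_left(xs, x)
    return i < len(xs) and xs[i] == x

def join(xs, ys):
    """Join two sorted lists where every key of xs precedes every key of ys."""
    assert not xs or not ys or xs[-1] <= ys[0]
    return xs + ys  # naive O(n + m) copy, NOT the paper's O(1) bound

zs = join([1, 3, 5], [6, 8])
assert zs == [1, 3, 5, 6, 8]
assert search(zs, 5) and not search(zs, 4)
```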
Making Data Structures Confluently Persistent ( Extended Abstract)
"... Reality is merely an illusion, albeit a very persistent one. Albert Einstein (18751955) We address a longstanding open problem of [8, 7], and present a general transformation that takes any data structure and transforms it to a confluently persistent data structure. We model this general problem ..."
Reality is merely an illusion, albeit a very persistent one. (Albert Einstein, 1879–1955)
We address a longstanding open problem of [8, 7], and present a general transformation that takes any data structure and transforms it to a confluently persistent data structure. We model this general problem using the concepts of a version DAG (Directed Acyclic Graph) and an instantiation of a version DAG. We introduce the concept of the effective depth of a vertex in the version DAG and use it to derive information-theoretic lower bounds on the space expansion of any such transformation for this DAG. We then give a confluently persistent data structure, such that for any version DAG, the time slowdown and space expansion match the information-theoretic lower bounds to within a factor of O(log²(|Y|)).
A Complete Bibliography of Publications in Journal of Computational Chemistry: 1990–1999
"... Version 1.00 Title word crossreference ..."
Lightweight Semiformal Time Complexity Analysis for Purely Functional Data Structures
"... and others have demonstrated how purely functional data structures that are efficient even in the presence of persistence can be constructed. To achieve good time bounds essential use is often made of laziness. The associated complexity analysis is frequently subtle, requiring careful attention to d ..."
Okasaki and others have demonstrated how purely functional data structures that are efficient even in the presence of persistence can be constructed. To achieve good time bounds, essential use is often made of laziness. The associated complexity analysis is frequently subtle, requiring careful attention to detail, and hence formalising it is valuable. This paper describes a simple library which can be used to make the analysis of a class of purely functional data structures and algorithms almost fully formal. The basic idea is to use the type system to annotate every function with the time required to compute its result. An annotated monad is used to combine time complexity annotations. The library has been used to analyse some existing data structures, for instance the deque operations of Hinze and Paterson’s finger trees.
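The annotated-monad idea can be mimicked at runtime: pair each value with a step count, make `return` cost zero, and let `bind` add the costs of the two stages. The paper performs this accounting statically in the type system; the Python names below are illustrative.

```python
# A value paired with its cost; bind sequences two computations and adds
# their costs. The paper tracks these annotations at the type level.

def ret(x):
    return (x, 0)                 # returning a value costs nothing

def tick(pair, n=1):
    x, c = pair
    return (x, c + n)             # account for n computation steps

def bind(pair, f):
    x, c = pair
    y, c2 = f(x)
    return (y, c + c2)            # costs compose additively

def annotated_length(xs):
    """Length with one tick per element, so its annotation is linear."""
    if not xs:
        return ret(0)
    return tick(bind(annotated_length(xs[1:]), lambda n: ret(n + 1)))

value, cost = annotated_length([10, 20, 30])
assert (value, cost) == (3, 3)    # 3 elements, 3 accounted steps
```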
Reflection without Remorse: Revealing a hidden sequence to speed up monadic reflection
"... A series of list appends or monadic binds for many monads performs algorithmically worse when leftassociated. Continuationpassing style (CPS) is wellknown to cure this severe dependence of performance on the association pattern. The advantage of CPS dwindles or disappears if we have to examine o ..."
A series of list appends or monadic binds for many monads performs algorithmically worse when left-associated. Continuation-passing style (CPS) is well-known to cure this severe dependence of performance on the association pattern. The advantage of CPS dwindles or disappears if we have to examine or modify the intermediate result of a series of appends or binds, before continuing the series. Such examination is frequently needed, for example, to control search in nondeterminism monads. We present an alternative approach that is just as general as CPS but more robust: it makes series of binds and other such operations efficient regardless of the association pattern – and also provides efficient access to intermediate results. The key is to represent such a conceptual sequence as an efficient sequence data structure. Efficient sequence data structures from the literature are homogeneous and cannot be applied as they are in a type-safe way to series of monadic binds. We generalize them to type-aligned sequences and show how to construct their (assuredly order-preserving) implementations. We demonstrate that our solution solves previously undocumented, severe performance problems in iteratees, LogicT transformers, free monads and extensible effects.
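The paper's key move, representing the conceptual sequence of binds as an efficient sequence data structure, can be illustrated homogeneously in Python: keep the appended chunks in a sequence (O(1) per append, whatever the association) and materialize once. Type-aligned sequences extend this idea to heterogeneous monadic chains.

```python
# Left-associated list appends copy the growing prefix on every step,
# Theta(n^2) total. Keeping the chunks in a sequence makes each append O(1)
# and materialization a single linear pass; the intermediate result stays
# inspectable, which plain CPS makes awkward. Illustrative sketch only.

class ChunkSeq:
    def __init__(self):
        self.chunks = []

    def append_chunk(self, chunk):
        self.chunks.append(chunk)  # O(1), independent of association

    def to_list(self):
        # Single linear materialization; callable at any point to examine
        # the intermediate result before continuing the series.
        return [x for chunk in self.chunks for x in chunk]

s = ChunkSeq()
for chunk in [[0], [1, 2], [3]]:
    s.append_chunk(chunk)          # a left-associated series, still cheap
assert s.to_list() == [0, 1, 2, 3]
```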