Results 1–10 of 29
State-dependent representation independence
In Proceedings of the 36th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), 2009
Cited by 64 (19 self)
Mitchell’s notion of representation independence is a particularly useful application of Reynolds’ relational parametricity — two different implementations of an abstract data type can be shown contextually equivalent so long as there exists a relation between their type representations that is preserved by their operations. There have been a number of methods proposed for proving representation independence in various pure extensions of System F (where data abstraction is achieved through existential typing), as well as in Algol- or Java-like languages (where data abstraction is achieved through the use of local mutable state). However, none of these approaches addresses the interaction of existential type abstraction and local state. In particular, none allows one to prove representation independence results for generative ADTs — i.e., ADTs that both maintain some local state and define abstract types whose internal …
An experimental analysis of self-adjusting computation
In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2006
Cited by 37 (20 self)
Self-adjusting computation uses a combination of dynamic dependence graphs and memoization to efficiently update the output of a program as the input changes incrementally or dynamically over time. Related work showed various theoretical results, indicating that the approach can be effective for a reasonably broad range of applications. In this article, we describe algorithms and implementation techniques to realize self-adjusting computation and present an experimental evaluation of the proposed approach on a variety of applications, ranging from simple list primitives to more sophisticated computational-geometry algorithms. The results of the experiments show that the approach is effective in practice, often offering orders-of-magnitude speedups over recomputing the output from scratch. We believe this is the first experimental evidence that incremental computation of any type is effective in practice for a reasonably broad set of applications.
Provenance as dependency analysis
Proceedings of the 11th International Symposium on Database Programming Languages (DBPL 2007), number 4797 in LNCS, 2007
Cited by 36 (16 self)
Provenance is information recording the source, derivation, or history of some information. Provenance tracking has been studied in a variety of settings; however, although many design points have been explored, the mathematical or semantic foundations of data provenance have received comparatively little attention. In this paper, we argue that dependency analysis techniques familiar from program analysis and program slicing provide a formal foundation for forms of provenance that are intended to show how (part of) the output of a query depends on (parts of) its input. We introduce a semantic characterization of such dependency provenance, show that this form of provenance is not computable, and provide dynamic and static approximation techniques.
CEAL: a C-based language for self-adjusting computation
In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2009
Cited by 18 (9 self)
Self-adjusting computation offers a language-centric approach to writing programs that can automatically respond to modifications to their data (e.g., inputs). Except for several domain-specific implementations, however, all previous implementations of self-adjusting computation assume mostly functional, higher-order languages such as Standard ML. Prior to this work, it was not known if self-adjusting computation could be made to work with low-level, imperative languages such as C without placing undue burden on the programmer. We describe the design and implementation of CEAL: a C-based language for self-adjusting computation. The language is fully general and extends C with a small number of primitives to enable writing self-adjusting programs in a style similar to conventional C programs. We present efficient compilation techniques for translating CEAL programs into C that can be compiled with existing C compilers using primitives supplied by a run-time library for self-adjusting computation. We implement the proposed compiler and evaluate its effectiveness. Our experiments show that CEAL is effective in practice: compiled self-adjusting programs respond to small modifications to their data orders of magnitude faster than recomputing from scratch, while slowing down a from-scratch run by only a moderate constant factor. Compared to previous work, we …
Logical Step-Indexed Logical Relations
Cited by 16 (7 self)
We show how to reason about “step-indexed” logical relations in an abstract way, avoiding the tedious, error-prone, and proof-obscuring step-index arithmetic that seems superficially to be an essential element of the method. Specifically, we define a logic LSLR, which is inspired by Plotkin and Abadi’s logic for parametricity, but also supports recursively defined relations by means of the modal “later” operator from Appel et al.’s “very modal model” paper. We encode in LSLR a logical relation for reasoning (in)equationally about programs in call-by-value System F extended with recursive types. Using this logical relation, we derive a useful set of rules with which we can prove contextual (in)equivalences without mentioning step indices.
A consistent semantics of self-adjusting computation
, 2006
Cited by 9 (8 self)
This paper presents a semantics of self-adjusting computation and proves that the semantics is correct and consistent. The semantics integrates change propagation with the classic idea of memoization to enable reuse of computations under mutation to memory. During evaluation, reuse of a computation via memoization triggers a change propagation that adjusts the reused computation to reflect the mutated memory. Since the semantics combines memoization and change propagation, it involves both nondeterminism and mutation. Our consistency theorem states that the nondeterminism is not harmful: any two evaluations of the same program starting at the same state yield the same result. Our correctness theorem states that mutation is not harmful: self-adjusting programs are consistent with purely functional programming. We formalized the semantics and its metatheory in the LF logical framework and machine-checked the proofs in Twelf.
Semantic foundations for typed assembly languages
ACM Transactions on Programming Languages and Systems (TOPLAS), 2008
Cited by 8 (2 self)
Typed Assembly Languages (TALs) are used to validate the safety of machine-language programs. The Foundational Proof-Carrying Code project seeks to verify the soundness of TALs using the smallest possible set of axioms—the axioms of a suitably expressive logic plus a specification of machine semantics. This paper proposes general semantic foundations that permit modular proofs of the soundness of TALs. These semantic foundations include Typed Machine Language (TML), a type theory for specifying properties of low-level data with powerful and orthogonal type constructors, and Lc, a compositional logic for specifying properties of machine instructions with simplified reasoning about unstructured control flow. Both of these components, whose semantics we specify using higher-order logic, are useful for proving the soundness of TALs. We demonstrate this by using TML and Lc to verify the soundness of a low-level, typed assembly language, LTAL, which is the target of our core-ML-to-Sparc compiler. To prove the soundness of the TML type system we have successfully applied a new approach, that of step-indexed logical relations. This approach provides the first semantic model for a type system with updatable references to values of impredicative quantified types. Both impredicative polymorphism and mutable references are essential when representing function closures in compilers with typed closure conversion, or when compiling objects to simpler typed primitives.
Caching and Incrementalisation in the Java Query Language
Cited by 7 (3 self)
Many contemporary object-oriented programming languages support first-class queries or comprehensions. These language extensions make it easier for programmers to write queries, but are generally implemented no more efficiently than the code using collections, iterators, and loops that they replace. Crucially, whenever a query is re-executed, it is typically recomputed from scratch. We describe a general approach to optimising queries over mutable objects: query results are cached, and those caches are incrementally maintained whenever the collections and objects underlying those queries are updated. We hope that the performance benefits of our optimisations may encourage more general adoption of first-class queries by object-oriented programmers.
Adaptive Inference on General Graphical Models
Cited by 6 (0 self)
Many algorithms and applications involve repeatedly solving variations of the same inference problem; for example, we may want to introduce new evidence to the model or perform updates to conditional dependencies. The goal of adaptive inference is to take advantage of what is preserved in the model and perform inference more rapidly than from scratch. In this paper, we describe techniques for adaptive inference on general graphs that support marginal computation and updates to the conditional probabilities and dependencies in logarithmic time. We give experimental results for an implementation of our algorithm, and demonstrate its potential performance benefit in the study of protein structure.
Robust kinetic convex hulls in 3D
 In Proceedings of the 16th Annual European Symposium on Algorithms
Cited by 5 (3 self)
Kinetic data structures provide a framework for computing combinatorial properties of continuously moving objects. Although kinetic data structures for many problems have been proposed, some difficulties remain in devising and implementing them, especially robustly. One set of difficulties stems from the update mechanisms required for processing certificate failures — devising efficient update mechanisms can be difficult, especially for sophisticated problems such as those in 3D. Another set of difficulties arises from the framework’s strong assumption that the update mechanism is invoked with a single event. This assumption requires ordering the events precisely, which is generally expensive, and it makes it difficult to deal with simultaneous events that arise due to degeneracies or due to intrinsic properties of the kinetized algorithms. In this paper, we apply advances in self-adjusting computation to provide a robust motion-simulation technique that combines kinetic event-based scheduling and the classic idea of fixed-time sampling. The idea is to divide time into a lattice of fixed-size intervals and process events at the resolution of an interval. We apply the approach to the problem of kinetic maintenance of convex hulls in 3D, a problem that has been open since the 1990s. We evaluate the effectiveness of the proposal experimentally. Using the approach, we are able to run simulations consisting of tens of thousands of points robustly and efficiently.