Results 1–10 of 12
An experimental analysis of self-adjusting computation
In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2006
Abstract

Cited by 50 (24 self)
Self-adjusting computation uses a combination of dynamic dependence graphs and memoization to efficiently update the output of a program as the input changes incrementally or dynamically over time. Related work showed various theoretical results, indicating that the approach can be effective for a reasonably broad range of applications. In this article, we describe algorithms and implementation techniques to realize self-adjusting computation and present an experimental evaluation of the proposed approach on a variety of applications, ranging from simple list primitives to more sophisticated computational geometry algorithms. The results of the experiments show that the approach is effective in practice, often offering orders of magnitude speedup over recomputing the output from scratch. We believe this is the first experimental evidence that incremental computation of any type is effective in practice for a reasonably broad set of applications.
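The core mechanism the abstract describes can be sketched in a few lines. This is a hypothetical, minimal illustration, not the paper's ML implementation: a modifiable cell records which computations read it, and writing a changed value re-runs exactly those dependents instead of recomputing everything from scratch.

```python
class Mod:
    """A toy 'modifiable' cell that tracks its readers."""

    def __init__(self, value):
        self.value = value
        self.readers = []              # computations that read this cell

    def read(self, computation):
        if computation not in self.readers:
            self.readers.append(computation)
        return self.value

    def write(self, value):
        if value != self.value:        # propagate only on a real change
            self.value = value
            for computation in list(self.readers):
                computation()          # re-run just the dependents

# Example: an output cell that tracks the sum of two input cells.
a, b, out = Mod(1), Mod(2), Mod(None)

def compute_sum():
    out.value = a.read(compute_sum) + b.read(compute_sum)

compute_sum()
assert out.value == 3
a.write(10)                            # change propagation updates `out`
assert out.value == 12
```

A real implementation also tracks the dynamic dependence graph so that re-runs happen in the right order and stale work is discarded; this sketch only shows the read/write/propagate cycle.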
Imperative self-adjusting computation
In POPL ’08: Proceedings of the 35th annual ACM SIGPLAN-SIGACT symposium on Principles of programming languages, 2008
Abstract

Cited by 33 (15 self)
Recent work on self-adjusting computation showed how to systematically write programs that respond efficiently to incremental changes in their inputs. The idea is to represent changeable data using modifiable references, i.e., a special data structure that keeps track of dependencies between read and write operations, and to let computations construct traces that later, after changes have occurred, can drive a change propagation algorithm. The approach has been shown to be effective for a variety of algorithmic problems, including some for which ad hoc solutions had previously remained elusive. All previous work on self-adjusting computation, however, relied on a purely functional programming model. In this paper, we show that it is possible to remove this limitation and support modifiable references that can be written multiple times. We formalize this using a language AIL for which we define evaluation and change-propagation semantics. AIL closely resembles a traditional higher-order imperative programming language. For AIL we state and prove consistency, i.e., the property that although the semantics is inherently nondeterministic, different evaluation paths will still give observationally equivalent results. In the imperative setting, where pointer graphs in the store can form cycles, our previous proof techniques do not apply. Instead, we make use of a novel form of step-indexed logical relation that handles modifiable references. We show that AIL can be realized efficiently by describing implementation strategies whose overhead is provably constant-time per primitive. When the number of reads and writes per modifiable is bounded by a constant, we can show that change propagation becomes as efficient as it was in the pure case. The general case incurs a slowdown that is logarithmic in the maximum number of such operations. We use DFS and related algorithms on graphs as our running examples and prove that they respond to insertions and deletions of edges efficiently.
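In the spirit of the paper's DFS running example, the following toy sketch (an illustration of the payoff being claimed, not the paper's AIL machinery or its change-propagation algorithm) maintains reachability from a source and, on an edge insertion, traverses only the newly reachable region rather than redoing the whole search.

```python
from collections import defaultdict

class IncrementalReachability:
    """Toy incremental reachability under edge insertions."""

    def __init__(self, source):
        self.adj = defaultdict(list)   # adjacency lists
        self.reachable = {source}      # vertices reachable from source

    def insert_edge(self, u, v):
        self.adj[u].append(v)
        # Work is needed only if the edge newly connects a reachable
        # vertex to an unreachable one, and then only a local DFS from v.
        if u in self.reachable and v not in self.reachable:
            stack = [v]
            while stack:
                w = stack.pop()
                if w not in self.reachable:
                    self.reachable.add(w)
                    stack.extend(self.adj[w])

g = IncrementalReachability(source=0)
g.insert_edge(1, 2)        # not yet reachable: no traversal at all
g.insert_edge(0, 1)        # connects 0 -> 1, which pulls in 2 as well
assert g.reachable == {0, 1, 2}
```

Handling deletions efficiently, as the paper proves for its framework, is the genuinely hard part and is not shown here.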
A consistent semantics of self-adjusting computation
2006
Abstract

Cited by 12 (9 self)
Abstract. This paper presents a semantics of self-adjusting computation and proves that the semantics is correct and consistent. The semantics integrates change propagation with the classic idea of memoization to enable reuse of computations under mutation to memory. During evaluation, reuse of a computation via memoization triggers a change propagation that adjusts the reused computation to reflect the mutated memory. Since the semantics combines memoization and change propagation, it involves both nondeterminism and mutation. Our consistency theorem states that the nondeterminism is not harmful: any two evaluations of the same program starting at the same state yield the same result. Our correctness theorem states that mutation is not harmful: self-adjusting programs are consistent with purely functional programming. We formalized the semantics and its metatheory in the LF logical framework and machine-checked the proofs in Twelf.
A Proposal for Parallel Self-Adjusting Computation
2002
Abstract

Cited by 10 (5 self)
We present an overview of our ongoing work on parallelizing self-adjusting-computation techniques. In self-adjusting computation, programs can respond to changes to their data (e.g., inputs, outcomes of comparisons) automatically by running a change-propagation algorithm. This ability is important in applications where inputs change slowly over time. All previously proposed self-adjusting-computation techniques assume a sequential execution model. We describe techniques for writing parallel self-adjusting programs and a change-propagation algorithm that can update computations in parallel. We describe a prototype implementation and present preliminary experimental results.
Robust kinetic convex hulls in 3D
 In Proceedings of the 16th Annual European Symposium on Algorithms
Abstract

Cited by 9 (4 self)
Abstract. Kinetic data structures provide a framework for computing combinatorial properties of continuously moving objects. Although kinetic data structures for many problems have been proposed, some difficulties remain in devising and implementing them, especially robustly. One set of difficulties stems from the update mechanisms required for processing certificate failures: devising efficient update mechanisms can be difficult, especially for sophisticated problems such as those in 3D. Another set of difficulties arises from the framework's strong assumption that the update mechanism is invoked with a single event. This assumption requires ordering the events precisely, which is generally expensive. It also makes it difficult to deal with simultaneous events that arise due to degeneracies or due to intrinsic properties of the kinetized algorithms. In this paper, we apply advances in self-adjusting computation to provide a robust motion simulation technique that combines kinetic event-based scheduling with the classic idea of fixed-time sampling. The idea is to divide time into a lattice of fixed-size intervals and process events at the resolution of an interval. We apply the approach to the problem of kinetic maintenance of convex hulls in 3D, a problem that has been open since the 1990s. We evaluate the effectiveness of the proposal experimentally. Using the approach, we are able to run simulations consisting of tens of thousands of points robustly and efficiently.
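The fixed-resolution scheduling idea lends itself to a small sketch. Below is a hypothetical, simplified illustration (the names and structure are my own, not the paper's implementation): events are snapped to a lattice of fixed-size time intervals and each interval's events are handled as one batch, so exact event ordering, and therefore tie-breaking among simultaneous events, is never needed.

```python
from collections import defaultdict

def simulate(events, dt, end_time, process_batch):
    """events: list of (time, payload); dt: lattice interval size."""
    buckets = defaultdict(list)
    for t, payload in events:
        buckets[int(t // dt)].append(payload)   # snap to interval index
    for k in range(int(end_time // dt) + 1):
        if buckets[k]:
            process_batch(k * dt, buckets[k])   # one update per interval

# Two near-simultaneous events land in the same batch; no ordering needed.
log = []
simulate([(0.12, "a"), (0.14, "b"), (0.95, "c")], dt=0.25, end_time=1.0,
         process_batch=lambda t, batch: log.append((t, sorted(batch))))
assert log == [(0.0, ["a", "b"]), (0.75, ["c"])]
```

In the paper's setting, `process_batch` would invoke change propagation on all certificates failing within the interval, which is where self-adjusting computation does the actual work.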
Dynamic mesh refinement with quad trees and off-centers
2008
Abstract

Cited by 3 (2 self)
Many algorithms exist for producing quality meshes when the input point cloud is known a priori. However, modern finite element simulations and graphics applications need to change the input set dynamically during the simulation. In this paper, we show a dynamic algorithm for building and maintaining a quadtree under insertions into and deletions from an input point set in any fixed dimension. This algorithm runs in O(lg L/s) time per update, where L/s is the spread of the input. The result of the dynamic quadtree can be combined with a post-processing step to generate and maintain a simplicial mesh under dynamic changes in the same asymptotic runtime. The mesh output by the dynamic algorithm is of good quality (it has no small dihedral angles) and is optimal in size. This gives the first time-optimal dynamic algorithm that outputs good-quality meshes in any dimension. As a second result, we dynamize the quadtree post-processing technique of Har-Peled and Üngör for generating meshes in two dimensions. When composed with the dynamic quadtree algorithm, the resulting algorithm yields quality meshes that are the smallest known in practice, while providing the same asymptotic optimality guarantees.
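To make the O(lg L/s) bound concrete, here is a minimal 2D point-quadtree sketch (insertion only, distinct points assumed; an illustrative toy, not the paper's dynamic algorithm): a cell splits only while two points share it, so an insertion descends at most roughly lg(L/s) levels, where L is the bounding-box side and s is the smallest point separation.

```python
class QuadTree:
    """Toy 2D point quadtree: each leaf holds at most one point."""

    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # lower-left corner, side
        self.point = None
        self.children = None                     # 4 sub-cells once split

    def insert(self, px, py):
        if self.children is None:
            if self.point is None:
                self.point = (px, py)            # empty leaf: store here
                return
            old, self.point = self.point, None   # occupied: split the cell
            half = self.size / 2
            self.children = [QuadTree(self.x + dx * half,
                                      self.y + dy * half, half)
                             for dy in (0, 1) for dx in (0, 1)]
            self._child(*old).insert(*old)       # push old point down
        self._child(px, py).insert(px, py)       # route the new point

    def _child(self, px, py):
        half = self.size / 2
        return self.children[(px >= self.x + half)
                             + 2 * (py >= self.y + half)]

t = QuadTree(0.0, 0.0, 1.0)
for p in [(0.1, 0.1), (0.9, 0.9), (0.2, 0.1)]:
    t.insert(*p)
assert t.children is not None        # root split after the second insert
```

The dynamic algorithm in the paper additionally supports deletions and maintains the mesh post-processing incrementally; this sketch only illustrates why the tree depth, and hence the per-update cost, is governed by the spread L/s.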
Self-Adjusting Computation with Delta ML
Abstract

Cited by 3 (0 self)
Abstract. In self-adjusting computation, programs respond automatically and efficiently to modifications to their data by tracking the dynamic data dependences of the computation and incrementally updating the output as needed. In this tutorial, we describe the self-adjusting-computation model and present the language ∆ML (Delta ML) for writing self-adjusting programs.
Programmable Self-Adjusting Computation
2010
Abstract

Cited by 2 (0 self)
Keywords: self-adjusting computation, adaptivity, memoization, change propagation, continuation-passing style, typed compilation, cost semantics, trace distance, traceable
Self-adjusting computation is a paradigm for programming incremental computations that efficiently respond to input changes by updating the output in time proportional to the changes in the structure of the computation. This dissertation defends the thesis that high-level programming abstractions improve the experience of reading, writing, and reasoning about self-adjusting programs, as well as their efficiency. We show that high-level language constructs are suitable for writing readable self-adjusting programs and can be compiled into low-level primitives. In particular, language constructs such as ML-style modifiable references and memoizing functions provide orthogonal mechanisms for identifying stale computation
Dynamic Mesh Refinement
2007
Abstract

Cited by 1 (1 self)
Mesh refinement is the problem of producing a triangulation (typically Delaunay) of an input set of points augmented by Steiner points, such that every triangle or tetrahedron has good quality (no small angles). The requirement arises from the applications: in scientific computing and in graphics, meshes are often used to discretely represent the value of a function over space. In addition to the quality requirement, the user often has input segments or polygons (generally, a piecewise linear complex) they would like to see retained in the mesh; the mesh must respect these constraints. Finally, the mesh should be size-conforming: the size of mesh elements should be related to a particular sizing function based on the distance between input features. The static meshing problem is increasingly well-understood: one can download software with provable guarantees that on reasonable input, the meshes will have good quality, will respect the input, and will be size-conforming; more recently, these algorithms have started to come with optimal runtimes of O(n lg(L/s) + m), where L/s is the spread of the input. As a first result, I
Functional Programming for Dynamic and Large Data with Self-Adjusting Computation
Abstract

Cited by 1 (0 self)
Combining type theory, language design, and empirical work, we present techniques for computing with large and dynamically changing datasets. Based on lambda calculus, our techniques are suitable for expressing a diverse set of algorithms on large datasets and, via self-adjusting computation, enable computations to respond automatically to changes in their data. Compared to prior work, this work overcomes the main challenge of reducing the space usage of self-adjusting computation without disproportionately decreasing performance. To this end, we present a type system for precise dependency tracking that minimizes the time and space for storing dependency metadata. The type system eliminates an important assumption of prior work that can lead to the recording of spurious dependencies. We give a new type-directed translation algorithm that generates correct self-adjusting programs without relying on this assumption. We then show a probabilistic chunking technique to further decrease space usage by controlling the fundamental space-time tradeoff in self-adjusting computation. We implement and evaluate these techniques, showing very promising results on challenging benchmarks and large graphs.