## Imperative self-adjusting computation (2008)

### Download Links

- [people.cs.uchicago.edu]
- [www.cs.uchicago.edu]
- [ttic.uchicago.edu]
- DBLP

### Other Repositories/Bibliography

Venue: POPL ’08: Proceedings of the 35th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages

Citations: 27 (16 self)

### BibTeX

@INPROCEEDINGS{Acar08imperativeself-adjusting,
  author    = {Umut A. Acar and Amal Ahmed and Matthias Blume},
  title     = {Imperative self-adjusting computation},
  booktitle = {POPL '08: Proceedings of the 35th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages},
  year      = {2008},
  pages     = {309--322}
}

### Abstract

Recent work on self-adjusting computation showed how to systematically write programs that respond efficiently to incremental changes in their inputs. The idea is to represent changeable data using modifiable references, i.e., special data structures that keep track of dependencies between read and write operations, and to let computations construct traces that later, after changes have occurred, can drive a change propagation algorithm. The approach has been shown to be effective for a variety of algorithmic problems, including some for which ad-hoc solutions had previously remained elusive. All previous work on self-adjusting computation, however, relied on a purely functional programming model. In this paper, we show that it is possible to remove this limitation and support modifiable references that can be written multiple times. We formalize this using a language, AIL, for which we define evaluation and change-propagation semantics. AIL closely resembles a traditional higher-order imperative programming language. For AIL we state and prove consistency, i.e., the property that although the semantics is inherently non-deterministic, different evaluation paths will still give observationally equivalent results. In the imperative setting, where pointer graphs in the store can form cycles, our previous proof techniques do not apply. Instead, we make use of a novel form of step-indexed logical relation that handles modifiable references. We show that AIL can be realized efficiently by describing implementation strategies whose overhead is provably constant-time per primitive. When the number of reads and writes per modifiable is bounded by a constant, we can show that change propagation becomes as efficient as it was in the pure case. The general case incurs a slowdown that is logarithmic in the maximum number of such operations. We use DFS and related algorithms on graphs as our running examples and prove that they respond to insertions and deletions of edges efficiently.
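To make the idea of multiply written modifiables concrete, here is a toy sketch in Python. It illustrates only the general mechanism, not the paper's AIL implementation: a read records a dependency, and a write re-runs the reads that depend on the written modifiable, which is the essence of change propagation. All names here are invented for illustration.

```python
# Toy modifiable references that may be written multiple times.
# Not the paper's implementation; just the shape of the mechanism.

class Modref:
    def __init__(self, value=None):
        self.value = value
        self.readers = []          # closures re-run when the value changes

    def read(self, use):
        self.readers.append(use)   # record the dependency
        use(self.value)            # run the reader once now

    def write(self, value):
        self.value = value
        for use in list(self.readers):
            use(value)             # change propagation: re-run dependents

a, b, total = Modref(1), Modref(2), Modref(0)

def recompute(_):
    total.write(a.value + b.value)

a.read(recompute)                  # total becomes 3
b.read(recompute)
print(total.value)                 # 3
a.write(10)                        # propagation updates total
print(total.value)                 # 12
```

A real implementation additionally traces reads in time order and memoizes, so that only the affected part of the computation re-runs; this sketch simply re-runs every dependent reader.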

### Citations

8530 | Introduction to Algorithms - Cormen, Leiserson, et al. - 1990 |

Citation Context: ...t of a graph using a visitor that concatenates its argument lists (if any), and then returns the result with the node being visited inserted at the head. This visitor follows the standard description [14]. As an example, consider the graphs in Figure 4. Each node is annotated with two time stamps: the first and the last time they were visited. For node A these are 1 and 16, respectively. The topologic...
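The time-stamped DFS described in this context can be sketched as follows (the graph and node names below are illustrative, not those of Figure 4 in the paper): each node records the first and last time it is visited, and listing nodes by decreasing finish time yields a topological order of a DAG.

```python
# Standard DFS with discovery/finish time stamps; reverse post-order
# gives a topological sort of an acyclic graph.

def dfs_times(graph):
    clock = 0
    first, last, order = {}, {}, []

    def visit(u):
        nonlocal clock
        clock += 1
        first[u] = clock           # discovery time
        for v in graph[u]:
            if v not in first:
                visit(v)
        clock += 1
        last[u] = clock            # finish time
        order.append(u)            # post-order

    for u in graph:
        if u not in first:
            visit(u)
    return first, last, list(reversed(order))

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
first, last, topo = dfs_times(g)
print(topo)    # ['A', 'C', 'B', 'D'] -- one valid topological order
```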

2611 | Dynamic Programming - Bellman - 1957 |

257 | The revised report on the syntactic theories of sequential control and state (Theoretical Computer Science 103) - Felleisen, Hieb - 1992 |

Citation Context: ...n-adjusting version of DFS. 3. The Language Since our consistency proof does not depend on type safety, we leave our language untyped. For simplicity, we assume all expressions to be in A-normal form [17]. Unlike in our previous work where it was necessary to enforce a write-once policy for modifiable references, we do not distinguish between stable and changeable computations. This simplifies the syn...
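A-normal form, mentioned in the context above, simply names every intermediate result so that function arguments are variables or constants. A minimal illustration (the functions here are made up; Python just demonstrates that the direct and ANF versions compute the same value):

```python
# Direct style:   f(g(x) + 1)
# A-normal form:  t1 = g(x); t2 = t1 + 1; f(t2)
# Every intermediate value gets a name, so evaluation order is explicit.

def g(x): return x * 2
def f(y): return y + 10

x = 3
direct = f(g(x) + 1)

t1 = g(x)          # name the inner call
t2 = t1 + 1        # name the arithmetic
anf = f(t2)

print(direct, anf)   # 17 17
```

Making intermediate results explicit is what lets a self-adjusting runtime attach dependency tracking to each step.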

246 | Making data structures persistent - Driscoll, Sarnak, et al. - 1989 |

Citation Context: ...utable modifiables we make the computation dynamically persistent by keeping track of different versions of each modifiable. This technique is inspired by previous work on persistent data structures [16]. 5.1 Data Structures Our algorithms require several data structures including an order-maintenance data structure and searchable ordered sets described below, as well as standard priority queues. Orde...
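The versioning idea in this context can be sketched as follows. This is an assumed design for illustration, not the paper's data structure: each modifiable keeps its writes sorted by time stamp, and a read at time t sees the latest write at or before t.

```python
import bisect

# Illustrative versioned modifiable, in the spirit of persistence [16]:
# every write is kept as a (time, value) version.

class VersionedModref:
    def __init__(self):
        self.times = []      # sorted write times
        self.values = []     # values, parallel to self.times

    def write(self, t, value):
        i = bisect.bisect_left(self.times, t)
        self.times.insert(i, t)
        self.values.insert(i, value)

    def read(self, t):
        """Return the value of the latest write at or before time t."""
        i = bisect.bisect_right(self.times, t)
        if i == 0:
            raise KeyError("no write at or before time %r" % t)
        return self.values[i - 1]

m = VersionedModref()
m.write(1, "a")
m.write(5, "b")
print(m.read(3))   # 'a' -- the version written at time 1
print(m.read(7))   # 'b'
```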

202 | A Basis for a Mathematical Theory of Computation - McCarthy - 1963 |

155 | Two algorithms for maintaining order in a list - Dietz, Sleator |

Citation Context: ...me stamps while supporting all of the following operations in constant time: create a new time stamp, insert a newly created time stamp after another, delete a time stamp, and compare two time stamps [15]. Searchable Time-Ordered Sets. We assume a time-ordered sets data structure that supports the following operations. • new: return a new empty set. • build X: allocate and return a new data structure ...
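A naive stand-in for the searchable time-ordered sets in this context is sketched below. The real order-maintenance structure [15] supports the listed operations in constant time; this sorted-list version has O(log n) search (plus shifting on insert/delete), which is enough to illustrate the interface. The method names are invented for illustration.

```python
import bisect

# Naive searchable time-ordered set over a sorted list.
# new  -> TimeOrderedSet()        build X -> TimeOrderedSet(X)

class TimeOrderedSet:
    def __init__(self, items=()):
        self.items = sorted(items)

    def insert(self, t):
        bisect.insort(self.items, t)

    def delete(self, t):
        i = bisect.bisect_left(self.items, t)
        if i < len(self.items) and self.items[i] == t:
            self.items.pop(i)

    def next_at_or_after(self, t):
        """Earliest time stamp >= t, or None if there is none."""
        i = bisect.bisect_left(self.items, t)
        return self.items[i] if i < len(self.items) else None

s = TimeOrderedSet([3, 8, 1])
s.insert(5)
print(s.next_at_or_after(4))   # 5
s.delete(5)
print(s.next_at_or_after(4))   # 8
```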

135 | An indexed model of recursive types for foundational proof-carrying code - Appel, McAllester - 2001 |

Citation Context: ...that is, relations based on the operational semantics) where relations are indexed not by types, but by a natural number that, intuitively, records the number of steps available for future evaluation [10, 8]. This stratification is essential for modeling the recursive functions (available via encoding fix) and cyclic stores present in the language. Another difficulty is realizing change propagation in an...
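The stratification mentioned here has a standard shape in the style of step-indexed models [10, 8]; the notation below is ours, not the paper's exact definition. Relatedness at index k quantifies only over strictly smaller indices, which breaks the circularity introduced by recursive types and cyclic stores:

```latex
% Illustrative shape of a step-indexed value relation: unfolding a
% recursive type "costs" a step, so index k is defined from indices j < k.
\mathcal{V}_k[\![\mu\alpha.\,\tau]\!] \;=\;
  \{\, (v_1, v_2) \;\mid\; \forall j < k.\;
        (v_1, v_2) \in \mathcal{V}_j[\![\tau[\mu\alpha.\,\tau/\alpha]]\!] \,\}
```

Because the index strictly decreases at each unfolding, the definition is well-founded even though the type (or the store) refers to itself.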

118 | Observable properties of higher order functions that dynamically create local names, or What’s new - Pitts, Stark - 1993 |

Citation Context: ...ms in the presence of mutable state has long been recognized as a difficult problem (even for Algol-like languages [19, 26, 22]), which gets significantly harder in the presence of dynamic allocation [23, 27, 11], and harder still in the presence of cyclic stores. We know of only two recent results for proving equivalence of programs with mutable references and cyclic stores, a proof method based on bisimulat...

102 | Parametricity and local variables - O’Hearn, Tennent - 1995 |

Citation Context: ...bles leads to the possibility of cyclic stores. Reasoning about equivalence of programs in the presence of mutable state has long been recognized as a difficult problem (even for Algol-like languages [19, 26, 22]), which gets significantly harder in the presence of dynamic allocation [23, 27, 11], and harder still in the presence of cyclic stores. We know of only two recent results for proving equivalence of ...

90 | “Memo” functions and machine learning - Michie - 1968 |

72 | Step-indexed syntactic logical relations for recursive and quantified types - Ahmed - 2006 |

Citation Context: ...that is, relations based on the operational semantics) where relations are indexed not by types, but by a natural number that, intuitively, records the number of steps available for future evaluation [10, 8]. This stratification is essential for modeling the recursive functions (available via encoding fix) and cyclic stores present in the language. Another difficulty is realizing change propagation in an...

70 | Analysis and caching of dependencies - Abadi, Lampson, et al. - 1996 |

69 | Incremental computation via function caching - Pugh, Teitelbaum - 1989 |

64 | Adaptive functional programming - Acar, Blelloch, et al. |

57 | Relational reasoning in a nominal semantics for storage - Benton, Leperchey - 2005 |

Citation Context: ...ms in the presence of mutable state has long been recognized as a difficult problem (even for Algol-like languages [19, 26, 22]), which gets significantly harder in the presence of dynamic allocation [23, 27, 11], and harder still in the presence of cyclic stores. We know of only two recent results for proving equivalence of programs with mutable references and cyclic stores, a proof method based on bisimulat...

55 | Names and Higher-Order Functions - Stark - 1994 |

46 | Small bisimulations for reasoning about higher-order imperative programs - Koutavas, Wand - 2006 |

Citation Context: ...d harder still in the presence of cyclic stores. We know of only two recent results for proving equivalence of programs with mutable references and cyclic stores, a proof method based on bisimulation [18] and another on (denotational) logical relations [12] (neither of which is immediately applicable to proving the consistency of AIL). In this paper, we prove equivalence of imperative self-adjusting p...

46 | Static caching for incremental computation - Liu, Stoller, et al. - 1998 |

45 | Caching Function Calls Using Precise Dependencies - Heydon, Levin, et al. - 2000 |

44 | Selective memoization - Acar, Blelloch, et al. - 2003 |

43 | A categorized bibliography on incremental computation - Ramalingam, Reps - 1993 |

41 | Dynamizing static algorithms with applications to dynamic trees and history independence - Acar, Blelloch, et al. - 2004 |

41 | Incremental Evaluation for Attribute Grammars with Application to Syntax-directed Editors - Demers, Reps, et al. - 1981 |

34 | An experimental analysis of self-adjusting computation - Acar, Blelloch, et al. |

Citation Context: ...nce graphs and a particular form of memoization can be combined to achieve efficient update times for a reasonably broad range of applications including those that involve continuously moving objects [3, 4]. These results demonstrate that the approach can yield orders of magnitude of speedup over recomputing from scratch when the computation data changes slowly over time. Other implementations of self-a...

32 | Reasoning about local variables with operationally-based logical relations (LICS) - Pitts - 1996 |

Citation Context: ...bles leads to the possibility of cyclic stores. Reasoning about equivalence of programs in the presence of mutable state has long been recognized as a difficult problem (even for Algol-like languages [19, 26, 22]), which gets significantly harder in the presence of dynamic allocation [23, 27, 11], and harder still in the presence of cyclic stores. We know of only two recent results for proving equivalence of ...

26 | Optimal-time incremental semantic analysis for syntax-directed editors - Reps |

24 | Relational reasoning for recursive types and references - Bohr, Birkedal - 2006 |

Citation Context: ...now of only two recent results for proving equivalence of programs with mutable references and cyclic stores, a proof method based on bisimulation [18] and another on (denotational) logical relations [12] (neither of which is immediately applicable to proving the consistency of AIL). In this paper, we prove equivalence of imperative self-adjusting programs using syntactic logical relations (that is, r...

23 | Incremental reduction in the lambda calculus - Field, Teitelbaum - 1990 |

20 | Monads for incremental computing - Carlsson |

Citation Context: ...yield orders of magnitude of speedup over recomputing from scratch when the computation data changes slowly over time. Other implementations of self-adjusting computation have been given by Carlsson [13] in the Haskell language and more recently by Shankar and Bodik [25] in Java. The purpose of Shankar and Bodik’s work was incremental invariant checking. It proved to be effective in delivering orders...

18 | A step-indexed model of substructural state - Ahmed, Fluet, et al. - 2005 |

Citation Context: ...alid if it does not allocate locations reachable from the initial expression e. Our technique for identifying the locations reachable from an expression is based on the technique used by Ahmed et al. [9] in their work on substructural state. Let FL(e) be the free locations of e, i.e., those locations that are subexpressions of e. The locations FL(e) are said to be directly accessible from e. The stor...
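The reachability notion used in this context can be sketched as a closure computation. The representation below (a store mapping each location to the set of locations its contents mention) is assumed for illustration: start from the directly accessible locations FL(e) and follow the store transitively.

```python
# Locations reachable from an expression: the closure of its free
# locations FL(e) under following the store.

def reachable(free_locs, store):
    """store maps a location to the set of locations its contents mention."""
    seen, stack = set(), list(free_locs)
    while stack:
        l = stack.pop()
        if l not in seen:
            seen.add(l)
            stack.extend(store.get(l, ()))
    return seen

store = {"l1": {"l2"}, "l2": {"l3"}, "l4": set()}
print(sorted(reachable({"l1"}, store)))   # ['l1', 'l2', 'l3']
```

Note that the traversal terminates even on cyclic stores, since each location is visited at most once.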

17 | Pure versus impure Lisp - Pippenger - 1997 |

Citation Context: ...ions cannot be written more than once. 1 Although purely functional programming is fully general (i.e., Turing complete), it is asymptotically slower than the Random-Access Model (RAM) of computation [21]. For some applications, for example those that utilize irregular data structures such as graphs, imperative programming is more natural. The requirement of purity is inherent to the algorithmic appro...

16 | A dynamic topological sort algorithm for directed acyclic graphs - Pearce, Kelly |

Citation Context: ...rogram exhibits algorithmic and asymptotic complexity behavior similar to previously proposed algorithms for topological sorting of incrementally changing graphs, for example that by Pearce and Kelly [20]. The main difference compared to their work is, of course, that we obtained our code by simply annotating a standard, non-adjusting version of DFS. 3. The Language Since our consistency proof does no...

15 | Ditto: Automatic incrementalization of data structure invariant checks (in Java) - Shankar, Bodík - 2007 |

Citation Context: ...when the computation data changes slowly over time. Other implementations of self-adjusting computation have been given by Carlsson [13] in the Haskell language and more recently by Shankar and Bodik [25] in Java. The purpose of Shankar and Bodik’s work was incremental invariant checking. It proved to be effective in delivering orders of magnitude speedups compared to non-incremental approaches [25]. ...

12 | Kinetic algorithms via self-adjusting computation - Acar, Blelloch, et al. - 2006 |

Citation Context: ...nce graphs and a particular form of memoization can be combined to achieve efficient update times for a reasonably broad range of applications including those that involve continuously moving objects [3, 4]. These results demonstrate that the approach can yield orders of magnitude of speedup over recomputing from scratch when the computation data changes slowly over time. Other implementations of self-a...

10 | Incremental compilation via partial evaluation - Sundaresh, Hudak - 1991 |

9 | A consistent semantics of self-adjusting computation - Acar, Blume, et al. - 2007 |

Citation Context: ...n and non-deterministic allocation—is harmless, by showing that any two evaluations of the same program in the same store yield observationally (or contextually) equivalent results. In our prior work [6], we proved a similar consistency property for self-adjusting computation with only write-once modifiables. But, our earlier proof method fundamentally relied on the absence of cycles in the store. Thu...

8 | New steps towards full abstraction for local variables - Sieber - 1993 |

Citation Context: ...bles leads to the possibility of cyclic stores. Reasoning about equivalence of programs in the presence of mutable state has long been recognized as a difficult problem (even for Algol-like languages [19, 26, 22]), which gets significantly harder in the presence of dynamic allocation [23, 27, 11], and harder still in the presence of cyclic stores. We know of only two recent results for proving equivalence of ...

3 | Adaptive Bayesian Inference - Acar, Ihler, Mettu, Sümer - 2007 |

2 | Optimal-time dynamic mesh refinement with quad trees and off centers (submitted) - Acar, Hudson - 2007 |

Citation Context: ...g, and implementing incremental programs afforded by the self-adjusting computation framework, it was possible to give solutions to problems that previously had resisted ad-hoc algorithmic approaches [5, 24, 1, 7]. All previous work on self-adjusting computation, however, has one crucial limitation: it applies only to programs that are “purely functional” because locations cannot be written more than once. 1 A...

2 | Kinetic 3D convex hulls via self-adjusting computation (an illustration) - Acar, Blelloch, et al. - 2007 |

Citation Context: ...g, and implementing incremental programs afforded by the self-adjusting computation framework, it was possible to give solutions to problems that previously had resisted ad-hoc algorithmic approaches [5, 24, 1, 7]. All previous work on self-adjusting computation, however, has one crucial limitation: it applies only to programs that are “purely functional” because locations cannot be written more than once. 1 A...

2 | Bayesian inference for dynamically changing graphs - Acar, Ihler, et al. - 2007 |

Citation Context: ...g, and implementing incremental programs afforded by the self-adjusting computation framework, it was possible to give solutions to problems that previously had resisted ad-hoc algorithmic approaches [5, 24, 1, 7]. All previous work on self-adjusting computation, however, has one crucial limitation: it applies only to programs that are “purely functional” because locations cannot be written more than once. 1 A...

1 | A library for self-adjusting computation - Acar, Blelloch, Tangwongsan - 2005 |

1 | A novel SoC design methodology for combined adaptive software description and reconfigurable hardware - Santambrogio, Rana, et al. - 2007 |

1 | Imperative self-adjusting computation. http://tti.uchicago.edu/~umut/imperative/tr.pdf - Acar, Ahmed, et al. |

Citation Context: ...T and σ, T ↝j2 σ′, T′, then σ, e ⇓≤j1+j2 v, σ′, T′. 4.7 Consistency For lack of space we have omitted proof details here. More detailed proofs can be found in our extended technical report [2]. Theorem 4.4. If Γ = FV(e), then Γ ⊢ e ≈ e. Proof sketch: By induction on the structure of e. As explained above, in the memo case we use Lemma 4.3 before we can appeal to the induction hypothesis. Ot...
