## Imperative self-adjusting computation (2008)

Venue: POPL

Citations: 29 (17 self-citations)

### BibTeX

@INPROCEEDINGS{Acar08imperativeself-adjusting,
  author    = {Umut A. Acar and Amal Ahmed and Matthias Blume},
  title     = {Imperative self-adjusting computation},
  booktitle = {POPL},
  year      = {2008}
}

### Abstract

Self-adjusting computation enables writing programs that can automatically and efficiently respond to changes to their data (e.g., inputs). The idea behind the approach is to store all data that can change over time in modifiable references and to let computations construct traces that can drive change propagation. After changes have occurred, change propagation updates the result of the computation by re-evaluating only those expressions that depend on the changed data. Previous approaches to self-adjusting computation require that modifiable references be written at most once during execution; this makes the model applicable only in a purely functional setting. In this paper, we present techniques for imperative self-adjusting computation where modifiable references can be written multiple times. We define a language SAIL (Self-Adjusting Imperative Language) and prove consistency, i.e., that change propagation and from-scratch execution are observationally equivalent. Since SAIL programs are imperative, they can create cyclic data structures. To prove equivalence in the presence of cycles in the store, we formulate and use an untyped, step-indexed logical relation, where step indices are used to ensure well-foundedness. We show that SAIL admits an asymptotically efficient implementation by presenting algorithms and data structures for its implementation. When the number of operations (reads and writes) per modifiable is bounded by a constant, we show that change propagation becomes as efficient as in the non-imperative case. The general case incurs a slowdown that is logarithmic in the maximum number of such operations. We describe a prototype implementation of SAIL as a Standard ML library.
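
The mechanism the abstract describes (modifiable references whose reads are recorded as dependencies, so that a write re-evaluates only the computations that read the changed data) can be illustrated with a toy sketch. This is not the paper's Standard ML library or its trace-based algorithm; the Python class and method names below are invented for illustration, and real change propagation also handles trace ordering, memoization, and removal of stale trace segments, none of which is modeled here.

```python
# Toy sketch of self-adjusting computation: a modifiable reference
# records which computations read it, and change propagation re-runs
# only those readers after a write. (Illustrative only; the paper's
# implementation uses traces, time stamps, and memoization.)

class Mod:
    """A modifiable reference that traces its readers."""
    def __init__(self, value):
        self.value = value
        self.readers = []          # closures to re-run on change

    def read(self, reader):
        """Run `reader` on the current value and record the
        dependency so change propagation can replay it."""
        self.readers.append(reader)
        reader(self.value)

    def write(self, value):
        """Imperative update: unlike write-once modifiables,
        a Mod may be written many times."""
        if value != self.value:
            self.value = value
            for reader in list(self.readers):
                reader(self.value)   # change propagation

# A computation whose result adjusts when its input changes.
source = Mod(10)
result = Mod(None)
source.read(lambda x: result.write(x * x))

print(result.value)   # 100
source.write(5)       # change propagation re-runs the reader
print(result.value)   # 25
```

Writing a value equal to the current one triggers no re-evaluation, which is the intuition behind updates being cheaper than from-scratch runs when little data changes.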

### Citations

9061 | Introduction to Algorithms - Cormen, Leiserson, et al. - 2001
Citation Context: ...t of a graph using a visitor that concatenates its argument lists (if any), and then returns the result with the node being visited inserted at the head. This visitor follows the standard description [14]. As an example, consider the graphs in Figure 4. Each node is annotated with two time stamps: the first and the last time they were visited. For node A these are 1 and 16, respectively. The topologic... |
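
The context above refers to the standard depth-first search in which each node is stamped with the first and last time it is visited, and a topological order of a DAG is obtained by listing nodes in reverse order of their last-visit stamps. A minimal sketch of that textbook algorithm (the graph encoding and function name are assumptions for the example, in Python rather than the paper's SML):

```python
# Standard DFS with discovery/finish time stamps: each node gets a
# first-visit and last-visit stamp, and reversing the order in which
# nodes finish yields a topological sort of an acyclic graph.

def dfs_timestamps(graph):
    """graph: dict mapping node -> list of successor nodes."""
    clock = 0
    first, last = {}, {}
    finished = []

    def visit(u):
        nonlocal clock
        clock += 1
        first[u] = clock           # first time u is visited
        for v in graph[u]:
            if v not in first:
                visit(v)
        clock += 1
        last[u] = clock            # last time u is visited
        finished.append(u)

    for u in graph:
        if u not in first:
            visit(u)
    return first, last, finished[::-1]   # reverse finish order

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
first, last, order = dfs_timestamps(g)
print(first["A"], last["A"])   # 1 8
```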

2901 | Dynamic programming - Bellman - 1957 |

269 | The revised report on the syntactic theories of sequential control and state - Felleisen, Hieb - 1989
Citation Context: ...n-adjusting version of DFS. 3. The Language Since our consistency proof does not depend on type safety, we leave our language untyped. For simplicity, we assume all expressions to be in A-normal form [17]. Unlike in our previous work where it was necessary to enforce a write-once policy for modifiable references, we do not distinguish between stable and changeable computations. This simplifies the syn... |

256 | Making data structures persistent - Driscoll, Sarnak, et al. - 1989
Citation Context: ...utable modifiables we make the computation dynamically persistent by keeping track of different versions of each modifiable. This technique is inspired by previous work on persistent data structures [16]. 5.1 Data Structures Our algorithms require several data structures including an order-maintenance data structure and searchable ordered sets described below, as well as standard priority queues. Orde... |

212 | A basis for a mathematical theory of computation - McCarthy - 1963 |

156 | Two algorithms for maintaining order in a list - Dietz, Sleator - 1987
Citation Context: ...me stamps while supporting all of the following operations in constant time: create a new time stamp, insert a newly created time stamp after another, delete a time stamp, and compare two time stamps [15]. Searchable Time-Ordered Sets. We assume a time-ordered sets data structure that supports the following operations. • new: return a new empty set. • build X: allocate and return a new data structure ... |
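
The four order-maintenance operations named in the context (create, insert-after, delete, compare) can be sketched as follows. Dietz and Sleator achieve all four in constant time; this deliberately naive toy keeps the stamps in a Python list, so insertion and comparison cost O(n). The interface names are invented for the example.

```python
# Naive sketch of an order-maintenance interface: time stamps form
# a total order, and we can create, insert-after, delete, and
# compare. (The real structure does each in O(1); this toy is O(n).)

class OrderMaintenance:
    def __init__(self):
        self._order = []           # time stamps, kept in order

    def create(self):
        """New time stamp at the end of the order."""
        t = object()               # identity serves as the stamp
        self._order.append(t)
        return t

    def insert_after(self, existing):
        """New time stamp immediately after `existing`."""
        t = object()
        self._order.insert(self._order.index(existing) + 1, t)
        return t

    def delete(self, t):
        self._order.remove(t)

    def precedes(self, a, b):
        """Compare two time stamps in the total order."""
        return self._order.index(a) < self._order.index(b)

om = OrderMaintenance()
t1 = om.create()
t3 = om.create()
t2 = om.insert_after(t1)       # order is now t1, t2, t3
print(om.precedes(t1, t2), om.precedes(t3, t2))   # True False
```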

137 | An indexed model of recursive types for foundational proof-carrying code - Appel, McAllester - 2001
Citation Context: ...that is, relations based on the operational semantics) where relations are indexed not by types, but by a natural number that, intuitively, records the number of steps available for future evaluation [10, 8]. This stratification is essential for modeling the recursive functions (available via encoding fix) and cyclic stores present in the language. Another difficulty is realizing change propagation in an... |

121 | On the observable properties of higher order functions that dynamically create local names (preliminary report) - Pitts, Stark - 1993
Citation Context: ...ms in the presence of mutable state has long been recognized as a difficult problem (even for Algol-like languages [19, 26, 22]), which gets significantly harder in the presence of dynamic allocation [23, 27, 11], and harder still in the presence of cyclic stores. We know of only two recent results for proving equivalence of programs with mutable references and cyclic stores, a proof method based on bisimulat... |

105 | Parametricity and local variables - O'Hearn, Tennent - 1995
Citation Context: ...bles leads to the possibility of cyclic stores. Reasoning about equivalence of programs in the presence of mutable state has long been recognized as a difficult problem (even for Algol-like languages [19, 26, 22]), which gets significantly harder in the presence of dynamic allocation [23, 27, 11], and harder still in the presence of cyclic stores. We know of only two recent results for proving equivalence of ... |

93 | Memo functions and machine learning - Michie - 1968 |

72 | Incremental computation via function caching - Pugh, Teitelbaum - 1989 |

71 | Analysis and Caching of Dependencies - Abadi, Lampson, et al. - 1996 |

71 | Step-indexed syntactic logical relations for recursive and quantified types - Ahmed - 2006
Citation Context: ...that is, relations based on the operational semantics) where relations are indexed not by types, but by a natural number that, intuitively, records the number of steps available for future evaluation [10, 8]. This stratification is essential for modeling the recursive functions (available via encoding fix) and cyclic stores present in the language. Another difficulty is realizing change propagation in an... |

67 | Adaptive functional programming - Acar, Blelloch, et al. |

59 | Relational reasoning in a nominal semantics for storage - Benton, Leperchey - 2005
Citation Context: ...ms in the presence of mutable state has long been recognized as a difficult problem (even for Algol-like languages [19, 26, 22]), which gets significantly harder in the presence of dynamic allocation [23, 27, 11], and harder still in the presence of cyclic stores. We know of only two recent results for proving equivalence of programs with mutable references and cyclic stores, a proof method based on bisimulat... |

56 | Names and Higher-Order Functions - Stark - 1994 |

50 | Static caching for incremental computation - Liu, Stoller, et al. - 1998 |

49 | Small bisimulations for reasoning about higher-order imperative programs - Koutavas, Wand - 2006
Citation Context: ...d harder still in the presence of cyclic stores. We know of only two recent results for proving equivalence of programs with mutable references and cyclic stores, a proof method based on bisimulation [18] and another on (denotational) logical relations [12] (neither of which is immediately applicable to proving the consistency of AIL). In this paper, we prove equivalence of imperative self-adjusting p... |

48 | Caching function calls using precise dependencies - Heydon, Levin, et al. - 2000 |

47 | A categorized bibliography on incremental computation - Ramalingam, Reps - 1993 |

45 | Incremental evaluation of attribute grammars with application to syntax-directed editors - Demers, Reps, et al. - 1981 |

44 | Selective memoization - Acar, Blelloch, et al. - 2003 |

40 | Dynamizing static algorithms, with applications to dynamic trees and history independence - Acar, Blelloch, et al. - 2004 |

37 | An experimental analysis of self-adjusting computation - Acar, Blelloch, et al.
Citation Context: ...nce graphs and a particular form of memoization can be combined to achieve efficient update times for a reasonably broad range of applications including those that involve continuously moving objects [3, 4]. These results demonstrate that the approach can yield orders of magnitude of speedup over recomputing from scratch when the computation data changes slowly over time. Other implementations of self-a... |

33 | Reasoning about local variables with operationally-based logical relations - Pitts - 1997
Citation Context: ...bles leads to the possibility of cyclic stores. Reasoning about equivalence of programs in the presence of mutable state has long been recognized as a difficult problem (even for Algol-like languages [19, 26, 22]), which gets significantly harder in the presence of dynamic allocation [23, 27, 11], and harder still in the presence of cyclic stores. We know of only two recent results for proving equivalence of ... |

28 | Optimal-time incremental semantic analysis for syntaxdirected editors - Reps - 1982 |

26 | Relational reasoning for recursive types and references - Bohr, Birkedal - 2006
Citation Context: ...now of only two recent results for proving equivalence of programs with mutable references and cyclic stores, a proof method based on bisimulation [18] and another on (denotational) logical relations [12] (neither of which is immediately applicable to proving the consistency of AIL). In this paper, we prove equivalence of imperative self-adjusting programs using syntactic logical relations (that is, r... |

25 | Incremental reduction in the lambda calculus - Field, Teitelbaum - 1990 |

23 | Monads for incremental computing - Carlsson - 2002
Citation Context: ... yield orders of magnitude of speedup over recomputing from scratch when the computation data changes slowly over time. Other implementations of self-adjusting computation have been given by Carlsson [13] in the Haskell language and more recently by Shankar and Bodik [25] in Java. The purpose of Shankar and Bodik’s work was incremental invariant checking. It proved to be effective in delivering orders... |

19 | A step-indexed model of substructural state - Ahmed, Fluet, et al. - 2005
Citation Context: ...alid if it does not allocate locations reachable from the initial expression e. Our technique for identifying the locations reachable from an expression is based on the technique used by Ahmed et al. [9] in their work on substructural state. Let FL(e) be the free locations of e, i.e., those locations that are subexpressions of e. The locations FL(e) are said to be directly accessible from e. The stor... |

17 | Pure versus Impure Lisp - Pippenger - 1997
Citation Context: ...ions cannot be written more than once. Although purely functional programming is fully general (i.e., Turing complete), it is asymptotically slower than the Random-Access Model (RAM) of computation [21]. For some applications, for example those that utilize irregular data structures such as graphs, imperative programming is more natural. The requirement of purity is inherent to the algorithmic appro... |

16 | A dynamic topological sort algorithm for directed acyclic graphs - Pearce, Kelly - 2006
Citation Context: ...rogram exhibits algorithmic and asymptotic complexity behavior similar to previously proposed algorithms for topological sorting of incrementally changing graphs, for example that by Pearce and Kelly [20]. The main difference compared to their work is, of course, that we obtained our code by simply annotating a standard, non-adjusting version of DFS. 3. The Language Since our consistency proof does no... |

16 | DITTO: automatic incrementalization of data structure invariant checks (in Java) - Shankar, Bodík - 2007
Citation Context: ...when the computation data changes slowly over time. Other implementations of self-adjusting computation have been given by Carlsson [13] in the Haskell language and more recently by Shankar and Bodik [25] in Java. The purpose of Shankar and Bodik’s work was incremental invariant checking. It proved to be effective in delivering orders of magnitude speedups compared to non-incremental approaches [25]. ... |

12 | Kinetic Algorithms via Self-Adjusting Computation - Acar, Blelloch, et al. - 2006
Citation Context: ...nce graphs and a particular form of memoization can be combined to achieve efficient update times for a reasonably broad range of applications including those that involve continuously moving objects [3, 4]. These results demonstrate that the approach can yield orders of magnitude of speedup over recomputing from scratch when the computation data changes slowly over time. Other implementations of self-a... |

11 | Incremental compilation via partial evaluation - Sundaresh, Hudak - 1991 |

9 | A Cost Semantics for Self-Adjusting Computation - Ley-Wild, Acar, et al. - 2008
Citation Context: ...n and non-deterministic allocation—is harmless, by showing that any two evaluations of the same program in the same store yield observationally (or contextually) equivalent results. In our prior work [6], we proved a similar consistency property for self-adjusting computation with only write-once modifiables. But our earlier proof method fundamentally relied on the absence of cycles in the store. Thu... |

8 | New steps towards full abstraction for local variables - Sieber - 1993
Citation Context: ...bles leads to the possibility of cyclic stores. Reasoning about equivalence of programs in the presence of mutable state has long been recognized as a difficult problem (even for Algol-like languages [19, 26, 22]), which gets significantly harder in the presence of dynamic allocation [23, 27, 11], and harder still in the presence of cyclic stores. We know of only two recent results for proving equivalence of ... |

4 | A novel SoC design methodology combining adaptive software and reconfigurable hardware - Santambrogio, Memik, et al.
Citation Context: ...g, and implementing incremental programs afforded by the self-adjusting computation framework, it was possible to give solutions to problems that previously had resisted ad-hoc algorithmic approaches [5, 24, 1, 7]. All previous work on self-adjusting computation, however, has one crucial limitation: it applies only to programs that are “purely functional” because locations cannot be written more than once. A... |

3 | Adaptive Bayesian Inference - Acar, Ihler, Mettu, Sümer - 2007 |

2 | A library for self-adjusting computation - Acar, Blelloch, Tangwongsan, et al. |

2 | Optimal-time dynamic mesh refinement with quad trees and off-centers (submitted) - Acar, Hudson - 2007
Citation Context: ...g, and implementing incremental programs afforded by the self-adjusting computation framework, it was possible to give solutions to problems that previously had resisted ad-hoc algorithmic approaches [5, 24, 1, 7]. All previous work on self-adjusting computation, however, has one crucial limitation: it applies only to programs that are “purely functional” because locations cannot be written more than once. A... |

2 | Kinetic 3D convex hulls via self-adjusting computation (an illustration) - Acar, Blelloch, et al. - 2007
Citation Context: ...g, and implementing incremental programs afforded by the self-adjusting computation framework, it was possible to give solutions to problems that previously had resisted ad-hoc algorithmic approaches [5, 24, 1, 7]. All previous work on self-adjusting computation, however, has one crucial limitation: it applies only to programs that are “purely functional” because locations cannot be written more than once. A... |

2 | Bayesian inference for dynamically changing graphs - Acar, Ihler, et al. - 2007 |

1 | A novel SoC design methodology for combined adaptive software description and reconfigurable hardware - Santambrogio, Rana, Memik, Acar, Sciuto - 2007 |

1 | Imperative self-adjusting computation. http://tti.uchicago.edu/~umut/imperative/tr.pdf - Acar, Ahmed, et al.
Citation Context: ... 4.7 Consistency For lack of space we have omitted proof details here. More detailed proofs can be found in our extended technical report [2]. Theorem 4.4. If Γ = FV(e), then Γ ⊢ e ≈ e. Proof sketch: By induction on the structure of e. As explained above, in the memo case we use Lemma 4.3 before we can appeal to the induction hypothesis. Ot... |