Results 1–10 of 24
Imperative self-adjusting computation
In POPL '08: Proceedings of the 35th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, 2008
Cited by 27 (16 self)
Abstract
Recent work on self-adjusting computation showed how to systematically write programs that respond efficiently to incremental changes in their inputs. The idea is to represent changeable data using modifiable references, i.e., a special data structure that keeps track of dependencies between read and write operations, and to let computations construct traces that later, after changes have occurred, can drive a change-propagation algorithm. The approach has been shown to be effective for a variety of algorithmic problems, including some for which ad hoc solutions had previously remained elusive. All previous work on self-adjusting computation, however, relied on a purely functional programming model. In this paper, we show that it is possible to remove this limitation and support modifiable references that can be written multiple times. We formalize this using a language AIL for which we define evaluation and change-propagation semantics. AIL closely resembles a traditional higher-order imperative programming language. For AIL we state and prove consistency, i.e., the property that although the semantics is inherently nondeterministic, different evaluation paths will still give observationally equivalent results. In the imperative setting, where pointer graphs in the store can form cycles, our previous proof techniques do not apply. Instead, we make use of a novel form of step-indexed logical relation that handles modifiable references. We show that AIL can be realized efficiently by describing implementation strategies whose overhead is provably constant-time per primitive. When the number of reads and writes per modifiable is bounded by a constant, we can show that change propagation becomes as efficient as it was in the pure case. The general case incurs a slowdown that is logarithmic in the maximum number of such operations. We use DFS and related algorithms on graphs as our running examples and prove that they respond to insertions and deletions of edges efficiently.
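To make the modifiable-reference idea concrete, here is a minimal Python sketch (names like `Mod`, `read`, and `write` are illustrative, not the AIL constructs from the paper): each reference records the computations that read it, and a write triggers change propagation by re-running those readers.

```python
class Mod:
    """A modifiable reference: records its readers and re-runs them on write."""
    def __init__(self, value):
        self.value = value
        self.readers = []                  # dependent computations (the trace)

    def read(self, reader):
        self.readers.append(reader)        # record the read dependency
        reader(self.value)

    def write(self, value):
        self.value = value
        for reader in self.readers:        # change propagation: re-run readers
            reader(value)

results = []
m = Mod(10)
m.read(lambda v: results.append(v * 2))    # initial run appends 20
m.write(21)                                # propagation re-runs the reader: 42
```

A real implementation would deduplicate and order re-executions; this sketch only shows the dependency-recording and propagation cycle.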
Incoop: MapReduce for incremental computations
In Proceedings of the 2nd ACM Symposium on Cloud Computing
Cited by 18 (4 self)
Abstract
Many online data sets evolve over time as new entries are slowly added and existing entries are deleted or modified. Taking advantage of this, systems for incremental bulk data processing, such as Google's Percolator, can achieve efficient updates. To achieve this efficiency, however, these systems lose compatibility with the simple programming models offered by non-incremental systems, e.g., MapReduce, and, more importantly, require the programmer to implement application-specific dynamic algorithms, ultimately increasing algorithm and code complexity. In this paper, we describe the architecture, implementation, and evaluation of Incoop, a generic MapReduce framework for incremental computations. Incoop detects changes to the input and automatically updates the output by employing an efficient, fine-grained result-reuse mechanism. To achieve efficiency without sacrificing transparency, we adopt recent advances in the area of programming languages to identify the shortcomings of task-level memoization approaches, and to address these shortcomings by using several novel techniques: a storage system, a contraction phase for Reduce tasks, and an affinity-based scheduling algorithm. We have implemented Incoop by extending the Hadoop framework, and evaluated it by considering several applications and case studies. Our results show significant performance improvements without changing a single line of application code.
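The baseline that Incoop improves on, task-level memoization, can be sketched as follows (a toy illustration, not Incoop's actual mechanism; `run_map` and `word_count_map` are hypothetical names): map-task results are cached under a hash of their input chunk, so re-running on a modified input only executes tasks whose chunks actually changed.

```python
import hashlib

cache = {}                                 # task-result cache keyed by input hash

def run_map(chunk, map_fn):
    """Run a map task, reusing the cached result when the chunk is unchanged."""
    key = hashlib.sha256(chunk.encode()).hexdigest()
    if key not in cache:                   # cache miss: execute the task
        cache[key] = map_fn(chunk)
    return cache[key]

def word_count_map(chunk):
    return [(w, 1) for w in chunk.split()]

chunks_v1 = ["a b a", "c d"]
out1 = [run_map(c, word_count_map) for c in chunks_v1]

chunks_v2 = ["a b a", "c d e"]             # only the second chunk changed
out2 = [run_map(c, word_count_map) for c in chunks_v2]
# the unchanged chunk "a b a" is served from the cache
```

The paper's point is that this chunk-granularity reuse is too coarse on its own, motivating the finer-grained storage, contraction, and scheduling techniques listed above.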
DITTO: Automatic Incrementalization of Data Structure . . .
In PLDI, 2007
Cited by 15 (0 self)
Abstract
We present DITTO, an automatic incrementalizer for dynamic, side-effect-free data structure invariant checks. Incrementalization speeds up the execution of a check by reusing its previous executions, checking the invariant anew only on the changed parts of the data structure. DITTO exploits properties specific to the domain of invariant checks to automate and simplify the process without restricting what mutations the program can perform. Our incrementalizer works for modern imperative languages such as Java and C#. It can incrementalize, for example, verification of red-black tree properties and the consistency of the hash code in a hash table bucket. Our source-to-source implementation for Java is automatic, portable, and efficient. DITTO provides speedups on data structures with as few as 100 elements; on larger data structures, its speedups are characteristic of non-automatic incrementalizers: roughly 5-fold at 5,000 elements, and growing linearly with data structure size.
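The flavor of an incrementalized invariant check can be sketched in a few lines (a hand-written toy in the spirit of DITTO, which actually performs this transformation automatically; `check` and `invalidate` are hypothetical names): each subtree memoizes its check result, and a mutation invalidates only the caches along the changed path, so a re-check revisits only that path.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self._memo = None              # cached (is_bst, subtree_min, subtree_max)

def check(node):
    """Memoized BST-order invariant check; recomputes only uncached subtrees."""
    if node is None:
        return (True, None, None)
    if node._memo is None:
        lok, lmin, lmax = check(node.left)
        rok, rmin, rmax = check(node.right)
        ok = (lok and rok
              and (lmax is None or lmax < node.key)
              and (rmin is None or node.key < rmin))
        node._memo = (ok,
                      node.key if lmin is None else lmin,
                      node.key if rmax is None else rmax)
    return node._memo

def invalidate(path):
    for node in path:                  # drop caches along the mutated path only
        node._memo = None

root = Node(5, Node(3), Node(8))
ok0, _, _ = check(root)                # full check over the whole tree
root.right.key = 4                     # mutation breaks the invariant
invalidate([root, root.right])         # only the changed path is invalidated
ok1, _, _ = check(root)                # incremental re-check reuses the left subtree
```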
Kinetic algorithms via self-adjusting computation
In Proceedings of the 14th Annual European Symposium on Algorithms (ESA 2006), 2006
Cited by 12 (9 self)
Abstract
Define a static algorithm as an algorithm that computes some combinatorial property of its input consisting of static, i.e., non-moving, objects. In this paper, we describe a technique for syntactically transforming static algorithms into kinetic algorithms, which compute properties of moving objects. The technique offers capabilities for composing kinetic algorithms, for integrating dynamic and kinetic changes, and for ensuring robustness even with fixed-precision floating-point arithmetic. To evaluate the effectiveness of the approach, we implement a library for performing the transformation, transform a number of algorithms, and give an experimental evaluation. The results show that the technique performs well in practice.
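The kinetic setting can be illustrated with a toy certificate computation (a generic kinetic-data-structures example, not the paper's transformation itself; `failure_time` is a hypothetical name): each comparison made by a static algorithm becomes a certificate with a failure time, and the structure is repaired only when a certificate fails.

```python
def failure_time(p, q):
    """Time at which p (currently left of q) catches up with q.
    Points move linearly: x(t) = x0 + v * t."""
    (x0p, vp), (x0q, vq) = p, q
    if vp <= vq:
        return float("inf")            # the order never changes
    return (x0q - x0p) / (vp - vq)

p, q = (0.0, 2.0), (1.0, 0.0)          # p starts left of q but moves faster
t = failure_time(p, q)                 # the certificate "p < q" fails at t = 0.5
```

A kinetic simulation keeps all such failure times in an event queue and, at each failure, swaps the affected elements and recomputes only the neighboring certificates.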
A consistent semantics of self-adjusting computation
2006
Cited by 9 (8 self)
Abstract
This paper presents a semantics of self-adjusting computation and proves that the semantics is correct and consistent. The semantics integrates change propagation with the classic idea of memoization to enable reuse of computations under mutation to memory. During evaluation, reuse of a computation via memoization triggers a change propagation that adjusts the reused computation to reflect the mutated memory. Since the semantics combines memoization and change propagation, it involves both nondeterminism and mutation. Our consistency theorem states that the nondeterminism is not harmful: any two evaluations of the same program starting at the same state yield the same result. Our correctness theorem states that mutation is not harmful: self-adjusting programs are consistent with purely functional programming. We formalized the semantics and its metatheory in the LF logical framework and machine-checked the proofs in Twelf.
Adaptive Bayesian inference
In Proc. NIPS, 2008
Cited by 9 (6 self)
Abstract
Motivated by stochastic systems in which observed evidence and conditional dependencies between states of the network change over time, and certain quantities of interest (marginal distributions, likelihood estimates, etc.) must be updated, we study the problem of adaptive inference in tree-structured Bayesian networks. We describe an algorithm for adaptive inference that handles a broad range of changes to the network and is able to maintain marginal distributions, MAP estimates, and data likelihoods, all in expected logarithmic time. We give an implementation of our algorithm and provide experiments showing that it can yield up to two orders of magnitude speedups over the sum-product algorithm on answering queries and responding to dynamic changes.
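The adaptivity idea can be sketched on a toy tree (a deliberately simplified model with a trivial uniform coupling between nodes, so a child's message just sums out; all names are illustrative and this is not the paper's algorithm): upward sum-product messages are memoized per node, and changing one node's evidence invalidates only the messages on its path to the root, so an update touches O(depth) nodes rather than the whole network.

```python
class TNode:
    def __init__(self, evidence=(1.0, 1.0), children=()):
        self.evidence = list(evidence)     # local potential over a binary variable
        self.children = list(children)
        self.msg = None                    # memoized upward message

def up_message(node):
    """Upward sum-product message, recomputed only where the cache is empty."""
    if node.msg is None:
        m = list(node.evidence)
        for c in node.children:
            cm = up_message(c)
            s = cm[0] + cm[1]              # uniform coupling: the child sums out
            m = [m[0] * s, m[1] * s]
        node.msg = m
    return node.msg

def set_evidence(root, path, evidence):
    """Change one node's evidence, invalidating messages on its path only."""
    node = root
    node.msg = None
    for i in path:
        node = node.children[i]
        node.msg = None
    node.evidence = list(evidence)

leaf0, leaf1 = TNode((0.5, 0.5)), TNode((1.0, 0.0))
root = TNode(children=[leaf0, leaf1])
before = up_message(root)              # full upward pass
set_evidence(root, [1], (0.0, 0.0))    # leaf0's cached message is untouched
after = up_message(root)               # only the changed path is recomputed
```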
A Proposal for Parallel Self-Adjusting Computation
2002
Cited by 7 (3 self)
Abstract
We present an overview of our ongoing work on parallelizing self-adjusting-computation techniques. In self-adjusting computation, programs can respond to changes to their data (e.g., inputs, outcomes of comparisons) automatically by running a change-propagation algorithm. This ability is important in applications where inputs change slowly over time. All previously proposed self-adjusting computation techniques assume a sequential execution model. We describe techniques for writing parallel self-adjusting programs and a change-propagation algorithm that can update computations in parallel. We describe a prototype implementation and present preliminary experimental results.
Traceable Data Types for Self-Adjusting Computation
Cited by 6 (1 self)
Abstract
Self-adjusting computation provides an evaluation model where computations can respond automatically to modifications to their data by using a mechanism for propagating modifications through the computation. Current approaches to self-adjusting computation guarantee correctness by recording dependencies in a trace at the granularity of individual memory operations. Tracing at the granularity of memory operations, however, has some limitations: it can be asymptotically inefficient (e.g., compared to optimal solutions) because it cannot take advantage of problem-specific structure, it requires keeping a large computation trace (often proportional to the runtime of the program on the current input), and it introduces moderately large constant factors in practice. In this paper, we extend dependence tracing to work at the granularity of the query and update operations of arbitrary (abstract) data types.
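The granularity shift described here can be sketched with a toy traceable priority queue (illustrative names; not the paper's interface): instead of tracing every memory read and write, the trace records whole abstract operations and their answers, and after an input change only the operations whose recorded answer no longer holds need to be re-executed.

```python
import heapq

class TraceablePQ:
    """Priority queue that traces whole operations rather than memory cells."""
    def __init__(self):
        self.heap = []
        self.trace = []                    # (operation, argument, answer)

    def insert(self, x):
        heapq.heappush(self.heap, x)
        self.trace.append(("insert", x, None))

    def find_min(self):
        ans = self.heap[0]
        self.trace.append(("find_min", None, ans))
        return ans

def stale_queries(pq):
    """Traced queries whose recorded answer no longer holds; change
    propagation would re-execute exactly these."""
    return [op for op in pq.trace
            if op[0] == "find_min" and pq.heap and pq.heap[0] != op[2]]

pq = TraceablePQ()
pq.insert(3)
pq.insert(5)
first = pq.find_min()                  # records the answer 3 in the trace
pq.insert(1)                           # input change produces a new minimum
stale = stale_queries(pq)              # exactly one traced answer is now stale
```

The benefit is that a long run of inserts produces no re-execution work at all unless a query's answer actually changes, whereas memory-level tracing would record every heap cell the inserts touched.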
Dynamic well-spaced point sets
In SCG '10: Proceedings of the 26th Annual Symposium on Computational Geometry, 2010
Cited by 6 (4 self)
Abstract
In a well-spaced point set, the Voronoi cells all have bounded aspect ratio, i.e., the distance from the Voronoi site to the farthest point in the Voronoi cell, divided by the distance to the nearest neighbor in the set, is bounded by a small constant. Well-spaced point sets satisfy some important geometric properties and yield quality Voronoi or simplicial meshes that can be important in scientific computations. In this paper, we consider the dynamic well-spaced point-sets problem, which requires computing the well-spaced superset of a dynamically changing input set, e.g., as points are inserted or deleted. We present a dynamic algorithm that allows inserting/deleting points into/from the input in worst-case O(log ∆) time, where ∆ is the geometric spread, a natural measure that is bounded by O(log n) when input points are represented by log-size words. We show that the runtime of the dynamic update algorithm is optimal in the worst case by showing that there exist inputs and modifications that require Ω(log ∆) Steiner points to be inserted into the output. Our algorithm generates size-optimal outputs: the resulting output sets are never more than a constant factor larger than the minimum size necessary. A preliminary implementation indicates that the algorithm is indeed fast in practice. To the best of our knowledge, this is the first time- and size-optimal dynamic algorithm for well-spaced point sets.
Optimal-time dynamic mesh refinement: preliminary results
2006
Cited by 6 (4 self)
Abstract
We present early results on a dynamic mesh refinement algorithm. Using a variant of the Sparse Voronoi Refinement algorithm and applying the technique of Self-Adjusting Computation, we expect it to run in O(polylog n) time per update on point sets in arbitrary dimension. This is based on some theoretical results, along with experimental results from an implementation.