Adaptive Functional Programming
In Proceedings of the 29th Annual ACM Symposium on Principles of Programming Languages (POPL), 2002
"... An adaptive computation maintains the relationship between its input and output as the input changes. Although various techniques for adaptive computing have been proposed, they remain limited in their scope of applicability. We propose a general mechanism for adaptive computing that enables one to ..."
Abstract

Cited by 78 (27 self)
An adaptive computation maintains the relationship between its input and output as the input changes. Although various techniques for adaptive computing have been proposed, they remain limited in their scope of applicability. We propose a general mechanism for adaptive computing that enables one to make any purely functional program adaptive. We show ...
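To make the flavor of adaptivity concrete, here is a minimal Python sketch of the idea, not the paper's ML mechanism: a modifiable cell (the Mod class and its read/write operations are invented names for this illustration) remembers which computations read it, and writing a new value re-runs exactly those readers so the output stays consistent with the changed input.

    class Mod:
        def __init__(self, value):
            self.value = value
            self.readers = []            # computations to re-run when this cell changes

        def read(self, use):
            self.readers.append(use)     # record the dependency
            use(self.value)              # and run the computation once now

        def write(self, value):
            self.value = value
            for use in list(self.readers):
                use(value)               # propagate the change to all readers

    # Keep `total` equal to the sum of `xs`, as `xs` changes.
    xs = Mod([1, 2, 3])
    total = Mod(0)
    xs.read(lambda v: total.write(sum(v)))

    print(total.value)     # 6
    xs.write([1, 2, 3, 4])
    print(total.value)     # 10 -- updated without restating the computation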
Static caching for incremental computation
ACM Transactions on Programming Languages and Systems, 1998
"... A systematic approach is given for deriving incremental programs that exploit caching. The cacheandprune method presented in the article consists of three stages: (I) the original program is extended to cache the results of all its intermediate subcomputations as well as the nal result, (II) the e ..."
Abstract

Cited by 52 (21 self)
A systematic approach is given for deriving incremental programs that exploit caching. The cache-and-prune method presented in the article consists of three stages: (I) the original program is extended to cache the results of all its intermediate subcomputations as well as the final result, (II) the extended program is incrementalized so that computation on a new input can use all intermediate results on an old input, and (III) unused results cached by the extended program and maintained by the incremental program are pruned away, leaving a pruned extended program that caches only useful intermediate results and a pruned incremental program that uses and maintains only the useful results. All three stages utilize static analyses and semantics-preserving transformations. Stages I and III are simple, clean, and fully automatable. The overall method has a kind of optimality with respect to the techniques used in Stage II. The method can be applied straightforwardly to provide a systematic approach to program improvement via caching.
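As a rough illustration of the three stages on a toy function (a hand-written sketch under simplifying assumptions, not output of the paper's transformations; f_ext and f_inc are invented names), consider f(xs) = sum of the squares of a list, with the input change being the appending of one element:

    def f(xs):
        # Original program: sum of squares.
        return sum(x * x for x in xs)

    def f_ext(xs):
        # Stage I (extend): also return the intermediate result worth caching;
        # in this tiny example the useful cache is just the final sum itself.
        total = sum(x * x for x in xs)
        return total, total              # (result, cache)

    def f_inc(y, cache):
        # Stage II (incrementalize): the result for the changed input xs + [y],
        # computed from the cache instead of from scratch.
        total = cache + y * y
        return total, total

    # Stage III (prune) would drop cached values the incremental program never
    # uses; here everything cached is used.

    result, cache = f_ext([1, 2, 3])
    print(result)                        # 14
    result, cache = f_inc(4, cache)      # input changed to [1, 2, 3, 4]
    print(result, f([1, 2, 3, 4]))       # 30 30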
Finite differencing of logical formulas for static analysis
In Proceedings of the 12th European Symposium on Programming (ESOP), 2003
"... This paper concerns mechanisms for maintaining the value of an instrumentationpredicate (a.k.a. derived predicate or view), defined via a logical formula over core predicates, in response to changes in the values of the core predicates. It presents an algorithm fortransforming the instrumentation p ..."
Abstract

Cited by 35 (18 self)
This paper concerns mechanisms for maintaining the value of an instrumentation predicate (a.k.a. derived predicate or view), defined via a logical formula over core predicates, in response to changes in the values of the core predicates. It presents an algorithm for transforming the instrumentation predicate's defining formula into a predicate-maintenance formula that captures what the instrumentation predicate's new value should be. This technique applies to program-analysis problems in which the semantics of statements is expressed using logical formulas that describe changes to core-predicate values, and provides a way to reflect those changes in the values of the instrumentation predicates.
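A hedged Python sketch of the maintenance idea, restricted to insertions into the core predicates (the paper derives such maintenance formulas automatically and handles richer updates; the predicates edge and marked and the derived reach1 are invented for illustration):

    edge = set()       # core predicate: pairs (x, y)
    marked = set()     # core predicate: elements y
    reach1 = set()     # derived predicate: x such that exists y. edge(x, y) and marked(y)

    def add_edge(x, y):
        edge.add((x, y))
        if y in marked:                  # delta: the new tuple can only add this x
            reach1.add(x)

    def add_marked(y):
        marked.add(y)
        for (x, y2) in edge:             # delta: only edges ending at y matter
            if y2 == y:
                reach1.add(x)

    add_edge("a", "b")
    add_edge("c", "b")
    print(sorted(reach1))                # []
    add_marked("b")
    print(sorted(reach1))                # ['a', 'c']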
Dynamic programming via static incrementalization
In Proceedings of the 8th European Symposium on Programming, 1999
"... Dynamic programming is an important algorithm design technique. It is used for solving problems whose solutions involve recursively solving subproblems that share subsubproblems. While a straightforward recursive program solves common subsubproblems repeatedly and often takes exponential time, a dyn ..."
Abstract

Cited by 30 (14 self)
Dynamic programming is an important algorithm design technique. It is used for solving problems whose solutions involve recursively solving subproblems that share sub-subproblems. While a straightforward recursive program solves common sub-subproblems repeatedly and often takes exponential time, a dynamic programming algorithm solves every sub-subproblem just once, saves the result, reuses it when the sub-subproblem is encountered again, and takes polynomial time. This paper describes a systematic method for transforming programs written as straightforward recursions into programs that use dynamic programming. The method extends the original program to cache all possibly computed values, incrementalizes the extended program with respect to an input increment to use and maintain all cached results, prunes out cached results that are not used in the incremental computation, and uses the resulting incremental program to form an optimized new program. Incrementalization statically exploits semantics of both control structures and data structures and maintains as invariants equalities characterizing cached results. The principle underlying incrementalization is general for achieving drastic program speedups. Compared with previous methods that perform memoization or tabulation, the method based on incrementalization is more powerful and systematic. It has been implemented and applied to numerous problems and succeeded on all of them.
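For intuition, the following hand-written sketch contrasts a straightforward recursion with the kind of program that incrementalization aims to produce, using the textbook Fibonacci example (illustrative only, not derived by the paper's method):

    def fib_naive(n):
        # Straightforward recursion: shared sub-subproblems are recomputed,
        # so the running time is exponential.
        if n <= 1:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    def fib_inc(n):
        # Incrementalized form: cache exactly the values needed after the
        # input increment n -> n + 1, i.e. the pair (fib(k - 1), fib(k)),
        # and update that cache in constant time per step.
        if n == 0:
            return 0
        prev, cur = 0, 1                 # fib(0), fib(1)
        for _ in range(n - 1):
            prev, cur = cur, prev + cur
        return cur

    print(fib_naive(10), fib_inc(10))    # 55 55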
Loop optimization for aggregate array computations
"... An aggregate array computation is a loop that computes accumulated quantities over array elements. Such computations are common in programs that use arrays, and the array elements involved in such computations often overlap, especially across iterations of loops, resulting in signi cant redundancy ..."
Abstract

Cited by 15 (7 self)
An aggregate array computation is a loop that computes accumulated quantities over array elements. Such computations are common in programs that use arrays, and the array elements involved in such computations often overlap, especially across iterations of loops, resulting in significant redundancy in the overall computation. This paper presents a method and algorithms that eliminate such overlapping aggregate array redundancies and shows both analytical and experimental performance improvements. The method is based on incrementalization, i.e., updating the values of aggregate array computations from iteration to iteration rather than computing them from scratch in each iteration. This involves maintaining additional information not maintained in the original program. We reduce various analysis problems to solving inequality constraints on loop variables and array subscripts, and we apply results from work on array data dependence analysis. Incrementalizing aggregate array computations produces drastic program speedup compared to previous optimizations. Previous methods for loop optimizations of arrays do not perform incrementalization, and previous techniques for loop incrementalization do not handle arrays.
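A small Python sketch of the idea on sliding-window sums, a typical overlapping aggregate (a hand-written illustration; the paper's method derives such updates automatically and handles far more general subscript patterns):

    def window_sums_naive(a, k):
        # O(n * k): every iteration recomputes its aggregate from scratch.
        return [sum(a[i:i + k]) for i in range(len(a) - k + 1)]

    def window_sums_inc(a, k):
        # O(n): maintain the previous window's sum and update it per iteration.
        s = sum(a[:k])
        out = [s]
        for i in range(1, len(a) - k + 1):
            s += a[i + k - 1] - a[i - 1]   # add the entering element, drop the leaving one
            out.append(s)
        return out

    a = [3, 1, 4, 1, 5, 9, 2, 6]
    print(window_sums_naive(a, 3))         # [8, 6, 10, 15, 16, 17]
    print(window_sums_inc(a, 3))           # same result, without the redundancy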
Eliminating dead code on recursive data
Science of Computer Programming, 1999
"... Abstract. This paper describes a general and powerful method for dead code analysis and elimination in the presence of recursive data constructions. We represent partially dead recursive data using liveness patterns based on general regular tree grammars extended with the notion of live and dead, an ..."
Abstract

Cited by 14 (4 self)
This paper describes a general and powerful method for dead code analysis and elimination in the presence of recursive data constructions. We represent partially dead recursive data using liveness patterns based on general regular tree grammars extended with the notion of live and dead, and we formulate the analysis as computing liveness patterns at all program points based on program semantics. This analysis yields a most precise liveness pattern for the data at each program point, which is significantly more precise than results from previous methods. The analysis algorithm takes cubic time in terms of the size of the program in the worst case but is very efficient in practice, as shown by our prototype implementation. The analysis results are used to identify and eliminate dead code. The general framework for representing and analyzing properties of recursive data structures using general regular tree grammars applies to other analyses as well.
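As a loose illustration of liveness patterns (a runtime toy, not the paper's static analysis; the markers "L" and "D" and the prune helper are invented for this sketch), the following marks which components of a cons-structured value are live and nulls out the dead ones:

    def prune(value, pattern):
        # Replace dead components of a cons-structured value with None.
        if pattern == "D":                   # dead: the consumer never looks here
            return None
        if pattern == "L":                   # live: keep the whole component
            return value
        head_pat, tail_pat = pattern         # a pair pattern describes a cons cell
        head, tail = value
        return (prune(head, head_pat), prune(tail, tail_pat))

    # A list of (name, score) pairs built as nested cons cells.
    records = (("ann", 3), (("bob", 5), None))

    # A consumer that only reads the names induces a pattern in which every
    # score position is dead:
    names_only = (("L", "D"), (("L", "D"), "L"))
    print(prune(records, names_only))
    # (('ann', None), (('bob', None), None))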
Staged simulation: A general technique for improving simulation scale and performance
ACM Transactions on Modeling and Computer Simulation (TOMACS), 2004
"... This article describes staged simulation, a technique for improving the run time performance and scale of discrete event simulators. Typical network simulations are limited in speed and scale due to redundant computations encountered both within a single simulation run and between successive runs. S ..."
Abstract

Cited by 11 (1 self)
This article describes staged simulation, a technique for improving the run time performance and scale of discrete event simulators. Typical network simulations are limited in speed and scale due to redundant computations encountered both within a single simulation run and between successive runs. Staged simulation proposes to restructure discrete event simulators to operate in stages that precompute, cache, and reuse partial results to drastically reduce redundant computation within and across simulations. We present a general and flexible framework for staging, and identify the advantages and tradeoffs of its application to wireless network simulations, a particularly challenging simulation domain. Experience with applying staged simulation to the ns2 simulator shows that staging can improve execution time by an order of magnitude or more and enable the simulation of wireless networks with tens of thousands of nodes.
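A hedged Python sketch of the staging idea, not the ns2 implementation (the path_loss function and the distance bucketing are illustrative assumptions): the expensive part of an event computation is factored into a stage over slowly changing inputs whose results are cached and reused, leaving only cheap per-event work.

    import functools
    import math

    @functools.lru_cache(maxsize=None)
    def path_loss(distance_bucket):
        # Stage 1: expensive computation over a coarsened, slowly varying input;
        # results are cached and shared by all events that hit the same bucket.
        return 40.0 + 20.0 * math.log10(max(distance_bucket, 1))

    def receive_power(tx_power_dbm, distance_m):
        # Stage 2: cheap per-event work layered on the cached partial result.
        return tx_power_dbm - path_loss(round(distance_m))

    for event_distance in [10.0, 10.2, 250.0, 10.1]:
        print(receive_power(20.0, event_distance))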
Principled Strength Reduction
Algorithmic Languages and Calculi, 1996
"... This paper presents a principled approach for optimizing iterative (or recursive) programs. The approach formulates a loop body as a function f and a change operation \Phi, incrementalizes f with respect to \Phi, and adopts an incrementalized loop body to form a new loop that is more efficient. Thre ..."
Abstract

Cited by 11 (10 self)
This paper presents a principled approach for optimizing iterative (or recursive) programs. The approach formulates a loop body as a function f and a change operation ⊕, incrementalizes f with respect to ⊕, and adopts an incrementalized loop body to form a new loop that is more efficient. Three general optimizations are performed as part of the adoption; they systematically handle initializations, termination conditions, and final return values on exits of loops. These optimizations are either omitted, or done in implicit, limited, or ad hoc ways in previous methods. The new approach generalizes classical loop optimization techniques, notably strength reduction, in optimizing compilers, and it unifies and systematizes various optimization strategies in transformational programming. Such principled strength reduction performs drastic program efficiency improvement via incrementalization and appreciably reduces code size via associated optimizations. We give examples where this app...
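A minimal sketch of the classical special case, strength-reducing a multiplication in a loop by maintaining its value incrementally (hand-written for illustration; the paper's approach derives such loops systematically and also handles initialization, the termination test, and the value returned on exit):

    def sum_of_multiples(n, c):
        # Original loop: a multiplication in every iteration.
        total = 0
        for i in range(n):
            total += i * c
        return total

    def sum_of_multiples_reduced(n, c):
        # Strength-reduced loop: maintain the invariant ic == i * c and update
        # it by an addition; initialization and the returned value are kept
        # consistent with the original loop.
        total = 0
        ic = 0
        for _ in range(n):
            total += ic
            ic += c
        return total

    print(sum_of_multiples(10, 7), sum_of_multiples_reduced(10, 7))   # 315 315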
Program Optimization Using Indexed and Recursive Data Structures
2002
"... This paper describes a systematic method for optimizing recursive functions using both indexed and recursive data structures. The method is based on two critical ideas: first, determining a minimal input increment operation so as to compute a function on repeatedly incremented input; second, determi ..."
Abstract

Cited by 7 (6 self)
This paper describes a systematic method for optimizing recursive functions using both indexed and recursive data structures. The method is based on two critical ideas: first, determining a minimal input increment operation so as to compute a function on repeatedly incremented input; second, determining appropriate additional values to maintain in appropriate data structures, based on what values are needed in computation on an incremented input and how these values can be established and accessed. Once these two are determined, the method extends the original program to return the additional values, derives an incremental version of the extended program, and forms an optimized program that repeatedly calls the incremental program. The method can derive all dynamic programming algorithms found in standard algorithm textbooks. There are many previous methods for deriving efficient algorithms, but none is as simple, general, and systematic as ours.
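As a flavor of the end product, here is a hand-written sketch, not derived by the method, of a textbook example in its scope: binomial coefficients computed by repeatedly applying the input increment n -> n+1 while keeping the additional values, the previous row of Pascal's triangle, in an indexed data structure:

    def binomial(n, k):
        # Maintain the whole previous row of Pascal's triangle (an indexed
        # structure) and update it once per input increment n -> n + 1.
        row = [1]                            # row for n = 0
        for _ in range(n):
            row = [1] + [row[j] + row[j + 1] for j in range(len(row) - 1)] + [1]
        return row[k]

    print(binomial(5, 2))                    # 10
    print(binomial(10, 3))                   # 120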