Results 1–10 of 14
Program specialization via program slicing
 Proceedings of the Dagstuhl Seminar on Partial Evaluation, volume 1110 of Lecture Notes in Computer Science
, 1996
Abstract
Cited by 55 (4 self)
This paper concerns the use of program slicing to perform a certain kind of program-specialization operation. The specialization operation that slicing performs is different from the specialization operations performed by algorithms for partial evaluation, supercompilation, bifurcation, and deforestation. In particular, we present an example in which the specialized program that we create via slicing could not be created as the result of applying partial evaluation, supercompilation, bifurcation, or deforestation to the original unspecialized program. Specialization via slicing also possesses an interesting property that partial evaluation, supercompilation, and bifurcation do not possess: the latter operations are somewhat limited in the sense that they support tailoring of existing software only according to the ways in which parameters of functions and procedures are used in a program. Because parameters to functions and procedures represent the range of usage patterns that the designer of a piece of software has anticipated, partial evaluation, supercompilation, and bifurcation support specialization only in ways that have already been “foreseen” by the software’s author. In contrast, the specialization operation that slicing supports permits programs to be specialized in ways ...
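A small illustrative sketch (an assumed example, not one from the paper) of specialization via slicing: the original program computes both a sum and a product, and slicing backward from the final value of `total` removes the product computation. No parameter selects between the two outputs, so this specialization is not expressible by fixing parameter values as partial evaluation would.

```python
def original(xs):
    total = 0
    product = 1
    for x in xs:
        total += x
        product *= x
    return total, product

def sliced_for_total(xs):
    # backward slice with respect to `total` at the return point:
    # all statements that `total` does not depend on are removed
    total = 0
    for x in xs:
        total += x
    return total

assert sliced_for_total([1, 2, 3]) == original([1, 2, 3])[0]
```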
Principled Strength Reduction
 Algorithmic Languages and Calculi
, 1996
Abstract
Cited by 10 (9 self)
This paper presents a principled approach for optimizing iterative (or recursive) programs. The approach formulates a loop body as a function f and a change operation \Phi, incrementalizes f with respect to \Phi, and adopts an incrementalized loop body to form a new loop that is more efficient. Three general optimizations are performed as part of the adoption; they systematically handle initializations, termination conditions, and final return values on exits of loops. These optimizations are either omitted, or done in implicit, limited, or ad hoc ways in previous methods. The new approach generalizes classical loop optimization techniques, notably strength reduction, in optimizing compilers, and it unifies and systematizes various optimization strategies in transformational programming. Such principled strength reduction performs drastic program efficiency improvement via incrementalization and appreciably reduces code size via associated optimizations. We give examples where this app...
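The classic strength-reduction instance can sketch the idea (an assumed example for illustration, not taken from the paper): the loop body computes f(i) = i * c, and with respect to the change i → i + 1 we have f(i + 1) = f(i) + c, so the multiplication is replaced by an addition, with the initialization and exit value handled explicitly.

```python
def naive(n, c):
    out = []
    for i in range(n):
        out.append(i * c)   # expensive operation recomputed each iteration
    return out

def strength_reduced(n, c):
    out = []
    acc = 0                 # initialization: f(0) = 0
    for _ in range(n):
        out.append(acc)
        acc += c            # incrementalized body: f(i + 1) = f(i) + c
    return out

assert naive(10, 7) == strength_reduced(10, 7)
```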
CACHET: An interactive, incremental-attribution-based program transformation system for deriving incremental programs
 In Proceedings of the 10th Knowledge-Based Software Engineering Conference
, 1995
Abstract
Cited by 9 (8 self)
This paper describes the design and implementation of an interactive, incremental-attribution-based program transformation system, CACHET, that derives incremental programs from non-incremental programs written in a functional language. CACHET is designed as a programming environment and implemented using a language-based editor generator, the Synthesizer Generator, with extensions that support complex transformations. Transformations directly manipulate the program tree and take into consideration information obtained from program analyses. Program analyses are performed via attribute evaluation, which is done incrementally as transformations change the program tree. The overall approach also explores a general framework for describing dynamic program semantics using annotations, which allows interleaving transformations with external input, such as user input. Designing CACHET as a programming environment also facilitates the integration of program derivation and validation with inte...
Strengthening invariants for efficient computation
 in Conference Record of the 23rd Annual ACM Symposium on Principles of Programming Languages
, 2001
Abstract
Cited by 6 (4 self)
This paper presents program analyses and transformations for strengthening invariants for the purpose of efficient computation. Finding the stronger invariants corresponds to discovering a general class of auxiliary information for any incremental computation problem. Combining the techniques with previous techniques for caching intermediate results, we obtain a systematic approach that transforms non-incremental programs into efficient incremental programs that use and maintain useful auxiliary information as well as useful intermediate results. The use of auxiliary information allows us to achieve a greater degree of incrementality than otherwise possible. Applications of the approach include strength reduction in optimizing compilers and finite differencing in transformational programming.
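A minimal sketch of auxiliary information (an assumed example, not the paper's): to update the average of a sequence under an append, the old average alone is insufficient; the element count is auxiliary information that must be maintained alongside the cached result.

```python
class IncrementalAverage:
    def __init__(self, xs):
        self.count = len(xs)          # auxiliary information about the input
        self.avg = sum(xs) / len(xs)  # cached previous result

    def append(self, x):
        # O(1) update using the cached result plus the auxiliary count,
        # instead of rescanning the whole sequence
        self.avg = (self.avg * self.count + x) / (self.count + 1)
        self.count += 1
        return self.avg

inc = IncrementalAverage([1, 2, 3])
assert inc.append(4) == 2.5
```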
Solving Regular Tree Grammar Based Constraints
 In Proceedings of the 8th International Static Analysis Symposium
, 2000
Abstract
Cited by 5 (4 self)
This paper describes the precise specification, design, analysis, implementation, and measurements of an efficient algorithm for solving regular tree grammar based constraints. The particular constraints are for dead-code elimination on recursive data, but the method used for the algorithm design and complexity analysis is general and applies to other program analysis problems as well. The method is centered around Paige's finite differencing, i.e., computing expensive set expressions incrementally, and allows the algorithm to be derived and analyzed formally and implemented easily. We study higher-level transformations that make the derived algorithm concise and allow its complexity to be analyzed accurately. Although a rough analysis shows that the worst-case time complexity is cubic in program size, an accurate analysis shows that it is linear in the number of live program points and in other parameters, including mainly the arity of data constructors and the number of selector applications into whose arguments the value constructed at a program point might flow. These parameters explain the performance of the analysis in practice. Our implementation also runs two to ten times as fast as a previous implementation of an informally designed algorithm.
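A toy sketch of the finite-differencing idea behind such constraint solvers (an assumed illustration, not the paper's algorithm): rather than recomputing an expensive fixed-point set expression from scratch after each change, a worklist propagates only the genuinely new elements.

```python
def reachable(graph, roots):
    live = set(roots)
    work = list(roots)          # worklist of elements not yet propagated
    while work:
        n = work.pop()
        for m in graph.get(n, ()):
            if m not in live:   # finite differencing: only new elements
                live.add(m)     # trigger further work
                work.append(m)
    return live

g = {"a": ["b"], "b": ["c"], "d": ["e"]}
assert reachable(g, {"a"}) == {"a", "b", "c"}
```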
Synchronisation Analysis to Stop Tupling
 Lecture Notes in Computer Science
, 1998
Abstract
Cited by 4 (2 self)
Tupling transformation strategy can be used to merge loops together by combining recursive calls and also to eliminate redundant calls for a class of programs. In the latter case, this transformation can produce superlinear speedup. Existing works in deriving a safe and automatic tupling only apply to a very limited class of programs. In this paper, we present a novel parameter analysis, called synchronisation analysis, to solve the termination problem for tupling. With it, we can perform tupling on functions with multiple recursion and accumulative arguments without the risk of non-termination. This significantly widens the scope for tupling, and potentially enhances its usefulness. The analysis is shown to be of polynomial complexity; this makes tupling suitable as a compiler optimisation. 1 Introduction Source-to-source transformation can achieve global optimisation through specialisation for recursive functions. Two well-known techniques are partial evaluation [9] a...
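The standard Fibonacci instance illustrates the tupling transformation (an assumed textbook example; the paper's contribution is the analysis deciding when such a transformation is safe and terminating): the two overlapping recursive calls are combined into a single function returning the pair (fib(n), fib(n-1)), eliminating the redundant calls and turning exponential time into linear time.

```python
def fib_naive(n):
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)   # overlapping redundant calls

def fib_pair(n):
    # tupled version: returns (fib(n), fib(n - 1)) with one recursive call
    if n == 1:
        return (1, 0)
    f1, f2 = fib_pair(n - 1)
    return (f1 + f2, f1)

assert fib_naive(10) == fib_pair(10)[0] == 55
```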
Partial Memoization of Concurrency and Communication
Abstract
Cited by 3 (2 self)
Memoization is a well-known optimization technique used to eliminate redundant calls for pure functions. If a call to a function f with argument v yields result r, a subsequent call to f with v can be immediately reduced to r without the need to re-evaluate f’s body. Understanding memoization in the presence of concurrency and communication is significantly more challenging. For example, if f communicates with other threads, it is not sufficient to simply record its input/output behavior; we must also track inter-thread dependencies induced by these communication actions. Subsequent calls to f can be elided only if we can identify an interleaving of actions from these call sites that lead to states in which these dependencies are satisfied. Similar issues arise if f spawns additional threads. In this paper, we consider the memoization problem for a higher-order concurrent language whose threads may communicate through synchronous message-based communication. To avoid the need to perform unbounded state space search that may be necessary to determine if all communication dependencies manifest in an earlier call can be satisfied in a later one, we introduce a weaker notion of memoization called partial memoization that gives implementations the freedom to avoid performing some part, if not all, of a previously memoized call. To validate the effectiveness of our ideas, we consider the benefits of memoization for reducing the overhead of recomputation for streaming, server-based, and transactional applications executed on a multicore machine. We show that on a variety of workloads, memoization can lead to substantial performance improvements without incurring high memory costs.
Incremental computation for transformational software development
Abstract
Cited by 2 (2 self)
Given a program f and an input change ⊕, we wish to obtain an incremental program that computes f(x ⊕ y) efficiently by making use of the value of f(x), the intermediate results computed in computing f(x), and auxiliary information about x that can be inexpensively maintained. Obtaining such incremental programs is an essential part of the transformational-programming approach to software development and enhancement. This paper presents a systematic approach that discovers a general class of useful auxiliary information, combines it with useful intermediate results, and obtains an efficient incremental program that uses and maintains these intermediate results and auxiliary information. We give a number of examples from list processing, VLSI circuit design, image processing, etc.
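A tiny sketch of the incremental-program idea (an assumed example): f computes a sum, the input change appends an element y, and the derived incremental program computes f(x ⊕ y) from the cached value of f(x) in O(1) instead of rescanning x.

```python
def f(xs):
    return sum(xs)            # the original, non-incremental program

def f_incremental(cached, y):
    # computes f(x ++ [y]) from the cached value of f(x)
    return cached + y

xs = [1, 2, 3]
r = f(xs)                     # f(x), computed once and cached
assert f_incremental(r, 4) == f(xs + [4])
```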
Coalescing Executions for Fast Uncertainty Analysis
Abstract
Cited by 1 (1 self)
Uncertain data processing is critical in a wide range of applications such as scientific computation handling data with inevitable errors and financial decision making relying on human-provided parameters. While increasingly studied in the area of databases, uncertain data processing is often carried out by software, and thus software-based solutions are attractive. In particular, Monte Carlo (MC) methods execute software with many samples from the uncertain inputs and observe the statistical behavior of the output. In this paper, we propose a technique to improve the cost-effectiveness of MC methods. Assuming only part of the input is uncertain, the certain part of the input always leads to the same execution across multiple sample runs. We remove such redundancy by coalescing multiple sample runs in a single run. In the coalesced run, the program operates on a vector of values if uncertainty is present and a single value otherwise. We handle cases where control flow and pointers are uncertain. Our results show that we can speed up the execution time of 30 sample runs by an average factor of 2.3 without precision loss, or by up to 3.4 with negligible precision loss.
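A toy sketch of run coalescing (an assumed illustration, not the paper's system): instead of three Monte Carlo runs that each redo the certain computation, one coalesced run performs the certain work once and carries a vector of values only where the input is uncertain.

```python
def program(certain, uncertain):
    base = certain * 10            # certain part: identical in every sample run
    return base + uncertain        # uncertain part: differs per sample

samples = [0.9, 1.0, 1.1]          # samples drawn from the uncertain input

# naive MC: the certain work is repeated once per sample run
naive = [program(5, s) for s in samples]

# coalesced run: certain work done once; only the uncertain part is vectorized
base = 5 * 10
coalesced = [base + s for s in samples]

assert naive == coalesced
```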
Computational Divided Differencing and Divided-Difference Arithmetics
, 2000
Abstract
Cited by 1 (0 self)
Tools for computational differentiation transform a program that computes a numerical function F(x) into a related program that computes F′(x) (the derivative of F). This paper describes how techniques similar to those used in computational-differentiation tools can be used to implement other program transformations, in particular, a variety of transformations for computational divided differencing. The specific technical contributions of the paper are as follows: It presents a program transformation that, given a numerical function F(x) defined by a program, creates a program that computes F[x0, x1], the first divided difference of F(x), where F[x0, x1] is defined as (F(x0) − F(x1)) / (x0 − x1) if x0 ≠ x1, and as (d/dz)F(z) evaluated at z = x0 if x0 = x1. It shows how computational first divided differencing generalizes computational differentiation. It presents a second program transformation that permits the creation of higher-order divided differences of a numerical function de ...
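The defining formula can be transcribed directly (illustrative code only, not the paper's program transformation): the first divided difference F[x0, x1] is (F(x0) − F(x1)) / (x0 − x1) when x0 ≠ x1, and F′(x0) in the limit case x0 = x1.

```python
def first_divided_difference(F, dF, x0, x1):
    if x0 != x1:
        return (F(x0) - F(x1)) / (x0 - x1)
    return dF(x0)                 # limit case: the derivative at x0

F = lambda x: x * x               # F(x)  = x^2
dF = lambda x: 2 * x              # F'(x) = 2x

assert first_divided_difference(F, dF, 1.0, 3.0) == 4.0   # (1 - 9) / (1 - 3)
assert first_divided_difference(F, dF, 2.0, 2.0) == 4.0   # F'(2)
```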