Results 1–10 of 12
Discovering auxiliary information for incremental computation
 In Symp. on Princ. of Prog. Lang
, 1996
Abstract

Cited by 22 (12 self)
This paper presents program analyses and transformations that discover a general class of auxiliary information for any incremental computation problem. Combining these techniques with previous techniques for caching intermediate results, we obtain a systematic approach that transforms nonincremental programs into efficient incremental programs that use and maintain useful auxiliary information as well as useful intermediate results. The use of auxiliary information allows us to achieve a greater degree of incrementality than otherwise possible. Applications of the approach include strength reduction in optimizing compilers and finite differencing in transformational programming.
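The abstract's central idea, that the old result of f alone may not suffice for an incremental update, and that cheap auxiliary information closes the gap, can be illustrated with a small sketch. The functions and names below are hypothetical, chosen for illustration, not taken from the paper: f tests whether a list is sorted, the change operation appends one element, and the auxiliary information is the list's last element (which the original program never returns but is trivial to maintain).

```python
def is_sorted(xs):
    # Non-incremental version: O(n) per call.
    return all(a <= b for a, b in zip(xs, xs[1:]))

def is_sorted_inc(prev_result, last_elem, y):
    """Incremental version for the append operation xs + [y].

    prev_result is is_sorted(xs); last_elem is the final element of xs
    (None for an empty list) -- the auxiliary information. Returns the
    new result and the updated auxiliary information, in O(1).
    """
    new_result = prev_result and (last_elem is None or last_elem <= y)
    return new_result, y

# Usage: maintain (result, aux) across appends instead of rescanning.
xs = []
result, aux = True, None
for y in [1, 3, 2, 5]:
    result, aux = is_sorted_inc(result, aux, y)
    xs.append(y)
assert result == is_sorted(xs)
```

Note that the previous boolean by itself is not enough to handle an append; carrying the last element alongside it is exactly the kind of "useful auxiliary information" the abstract describes.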
Loop optimization for aggregate array computations
Abstract

Cited by 15 (7 self)
An aggregate array computation is a loop that computes accumulated quantities over array elements. Such computations are common in programs that use arrays, and the array elements involved in such computations often overlap, especially across iterations of loops, resulting in significant redundancy in the overall computation. This paper presents a method and algorithms that eliminate such overlapping aggregate array redundancies and shows both analytical and experimental performance improvements. The method is based on incrementalization, i.e., updating the values of aggregate array computations from iteration to iteration rather than computing them from scratch in each iteration. This involves maintaining additional information not maintained in the original program. We reduce various analysis problems to solving inequality constraints on loop variables and array subscripts, and we apply results from work on array data dependence analysis. Incrementalizing aggregate array computations produces drastic program speedup compared to previous optimizations. Previous methods for loop optimizations of arrays do not perform incrementalization, and previous techniques for loop incrementalization do not handle arrays.
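A minimal sketch of the overlapping-windows case the abstract describes (the function names are hypothetical, not from the paper): consecutive iterations sum mostly the same array elements, so the sum can be updated from the previous iteration instead of recomputed.

```python
def window_sums_naive(a, k):
    # For each i, recompute the sum of a[i:i+k] from scratch: O(n*k).
    # Adjacent windows overlap in k-1 elements -- the redundancy.
    return [sum(a[i:i + k]) for i in range(len(a) - k + 1)]

def window_sums_inc(a, k):
    # Incrementalized: update the previous window's sum by subtracting
    # the element leaving the window and adding the one entering: O(n).
    s = sum(a[:k])            # initialize with the first window
    out = [s]
    for i in range(1, len(a) - k + 1):
        s += a[i + k - 1] - a[i - 1]
        out.append(s)
    return out

a = [3, 1, 4, 1, 5, 9, 2, 6]
assert window_sums_naive(a, 3) == window_sums_inc(a, 3)
```

The maintained running sum `s` is the "additional information not maintained in the original program"; the papers' analyses decide mechanically when such updates are valid, which this sketch simply assumes.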
Incremental Computation: A Semantics-Based Systematic Transformational Approach
, 1996
Abstract

Cited by 11 (3 self)
…ion of a function f adds an extra cache parameter to f. Simplification simplifies the definition of f given the added cache parameter. However, as to how the cache parameter should be used in the simplification to provide incrementality, KIDS provides only the observation that distributive laws can often be applied. The Munich CIP project [BMPP89,Par90] has a strategy for finite differencing that captures similar ideas. It first "defines by a suitable embedding a function f′", and then "derives a recursive version of f′ using generalized unfold/fold strategy", but it provides no special techniques for discovering incrementality. We believe that both works provide only general strategies with no precise procedure to follow and therefore are less automatable than ours. Chapter 4 Caching intermediate results. The value of f′(x ⊕ y) may often be computed faster by using not only the return value of f′(x), as discussed in Chapter 3, but also the values of some subcomputation...
Principled Strength Reduction
 Algorithmic Languages and Calculi
, 1996
Abstract

Cited by 11 (10 self)
This paper presents a principled approach for optimizing iterative (or recursive) programs. The approach formulates a loop body as a function f and a change operation ⊕, incrementalizes f with respect to ⊕, and adopts an incrementalized loop body to form a new loop that is more efficient. Three general optimizations are performed as part of the adoption; they systematically handle initializations, termination conditions, and final return values on exits of loops. These optimizations are either omitted, or done in implicit, limited, or ad hoc ways in previous methods. The new approach generalizes classical loop optimization techniques, notably strength reduction, in optimizing compilers, and it unifies and systematizes various optimization strategies in transformational programming. Such principled strength reduction performs drastic program efficiency improvement via incrementalization and appreciably reduces code size via associated optimizations. We give examples where this app...
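Classical strength reduction can be read as an instance of this scheme, sketched below with hypothetical names: the loop body computes f(i) = i * c, the change operation is i → i + 1, and since f(i + 1) = f(i) + c, a multiplication per iteration becomes an addition. The explicit initialization of the running value corresponds to the "initializations" optimization the abstract mentions.

```python
def scaled_naive(n, c):
    # Original loop: a multiplication in every iteration.
    out = []
    for i in range(n):
        out.append(i * c)
    return out

def scaled_reduced(n, c):
    # Strength-reduced loop: maintain the invariant t == i * c
    # and update t incrementally, replacing multiply with add.
    out = []
    t = 0                    # initialization: f(0) = 0 * c
    for i in range(n):
        out.append(t)
        t += c               # f(i + 1) = f(i) + c
    return out

assert scaled_naive(5, 7) == scaled_reduced(5, 7)
```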
Experiences with Software Specification and Verification Using LP, the Larch Proof Assistant
 LP, the Larch Proof Assistant. TR 93, Digital Systems Research
, 1992
Abstract

Cited by 8 (1 self)
We sketch a method for deduction-oriented software and system development. The method incorporates formal machine-supported specification and verification as activities in software and systems development. We describe experiences in applying this method. These experiences have been gained by using LP, the Larch proof assistant, as a tool for a number of small and medium size case studies for the formal development of software and systems. LP is used for the verification of the development steps. These case studies include quicksort, the majority vote problem, code generation by a compiler and its correctness, and an interactive queue and its refinement into a network. The developments range over levels of requirement specifications, designs and abstract implementations. The main issues are questions of a development method and how to make good use of a formal tool like LP in a goal-directed way within the development. We further discuss the value of advanced specific...
Optimizing aggregate array computations in loops
 ACM Transactions on Programming Languages and Systems
, 2005
Abstract

Cited by 6 (1 self)
An aggregate array computation is a loop that computes accumulated quantities over array elements. Such computations are common in programs that use arrays, and the array elements involved in such computations often overlap, especially across iterations of loops, resulting in significant redundancy in the overall computation. This paper presents a method and algorithms that eliminate such overlapping aggregate array redundancies and shows analytical and experimental performance improvements. The method is based on incrementalization, i.e., updating the values of aggregate array computations from iteration to iteration rather than computing them from scratch in each iteration. This involves maintaining additional values not maintained in the original program. We reduce various analysis problems to solving inequality constraints on loop variables and array subscripts, and we apply results from work on array data dependence analysis. For aggregate array computations that have significant redundancy, incrementalization produces drastic speedup compared to previous optimizations; when there is little redundancy, the benefit might be offset by cache effects and other factors. Previous methods for loop optimizations of arrays do not perform incrementalization, and previous techniques for loop incrementalization do not handle arrays.
Strengthening invariants for efficient computation
 in Conference Record of the 23rd Annual ACM Symposium on Principles of Programming Languages
, 2001
Abstract

Cited by 6 (4 self)
This paper presents program analyses and transformations for strengthening invariants for the purpose of efficient computation. Finding the stronger invariants corresponds to discovering a general class of auxiliary information for any incremental computation problem. Combining the techniques with previous techniques for caching intermediate results, we obtain a systematic approach that transforms nonincremental programs into efficient incremental programs that use and maintain useful auxiliary information as well as useful intermediate results. The use of auxiliary information allows us to achieve a greater degree of incrementality than otherwise possible. Applications of the approach include strength reduction in optimizing compilers and finite differencing in transformational programming.
Efficient Computation via Incremental Computation
, 1999
Abstract

Cited by 1 (0 self)
Incremental computation takes advantage of repeated computations on inputs that differ slightly from one another, computing each output efficiently by exploiting the previous output. This paper gives an overview of a general and systematic approach to incrementalization: given a program f and an operation ⊕, the approach yields an incremental program that computes f(x ⊕ y) efficiently by using the result of f(x), the intermediate results of f(x), and auxiliary information of f(x) that can be inexpensively maintained. Since every nontrivial computation proceeds by iteration or recursion, the approach can be used for achieving efficient computation by computing each iteration incrementally using an appropriate incremental program. This approach has applications in interactive systems, optimizing compilers, transformational programming, and many other areas, where problems were previously solved mostly in ad hoc and often error-prone ways. The design and implementation ...
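The f(x ⊕ y) pattern can be sketched with a small, hypothetical instance (not an example from the paper): f is the mean of a list, ⊕ appends one element, and the element count is the inexpensively maintained auxiliary information that makes the O(1) update possible.

```python
def mean(xs):
    # Non-incremental: rescans the whole list, O(n).
    return sum(xs) / len(xs)

def mean_inc(prev_mean, count, y):
    """Compute mean(xs + [y]) from mean(xs) and len(xs) in O(1).

    prev_mean is f(x); count is the auxiliary information len(x);
    the pair (new mean, new count) is returned for the next step.
    """
    new_count = count + 1
    return prev_mean + (y - prev_mean) / new_count, new_count

# Usage: each iteration of a loop updates the result incrementally.
m, n = 0.0, 0
for y in [2, 4, 6, 8]:
    m, n = mean_inc(m, n, y)
assert m == mean([2, 4, 6, 8])
```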
Transformational Programming and the . . .
Abstract
The transformational programming method of algorithm derivation starts with a formal specification of the result to be achieved (which provides no indication of how the result is to be achieved), plus some informal ideas as to what techniques will be used in the implementation. The formal specification is then transformed into an implementation, by means of correctness-preserving refinement and transformation steps. The informal ideas are used to guide the selection of transformations to apply: since they only guide the selection of valid transformations, the ideas do not themselves have to be formalised. At any stage in the process, subspecifications can be extracted and transformed separately. The main difference between this approach and the invariant based programming approach (and similar stepwise refinement methods) is that loops can be introduced and manipulated while maintaining program correctness and with no need to derive loop invariants. Another difference is that at every stage in the process we are working with a correct program: there is never any need for a separate “verification” step. These factors help to ensure that the method is capable of scaling up to the development of large and complex software systems.
Experiences with Software Specification and Verification Using LP, the Larch Proof Assistant
, 1992
Abstract
DEC’s business and technology objectives require a strong research program. The Systems Research Center (SRC) and three other research laboratories are committed to filling that need. SRC began recruiting its first research scientists in 1984—their charter, to advance the state of knowledge in all aspects of computer systems research. Our current work includes exploring high-performance personal computing, distributed computing, programming environments, system modelling techniques, specification technology, and tightly-coupled multiprocessors. Our approach to both hardware and software research is to create and use real systems so that we can investigate their properties fully. Complex systems cannot be evaluated solely in the abstract. Based on this belief, our strategy is to demonstrate the technical and practical feasibility of our ideas by building prototypes and using them as daily tools. The experience we gain is useful in the short term in enabling us to refine our designs, and invaluable in the long term in helping us to advance the state of knowledge about those systems. Most of the major advances in information systems have come through this strategy, including timesharing, the ArpaNet, and distributed personal computing. SRC also performs work of a more mathematical flavor which complements our systems research. Some of this work is in established fields of theoretical computer science, such as the analysis of algorithms, computational geometry, and logics of programming. The rest of this work explores new ground motivated by problems that arise in our systems research. DEC has a strong commitment to communicating the results and experience gained through pursuing these activities. The Company values the improved understanding that comes with exposing and testing our ideas within the research community. SRC will therefore report results in conferences, in professional journals, and in our research report series.
We will seek users for our prototype systems among those with whom we have common research interests, and we will encourage collaboration with university researchers.