Results 11–20 of 74
Discovering auxiliary information for incremental computation
 In Symp. on Princ. of Prog. Lang
, 1996
"... This paper presents program analyses and transformations that discover a general class of auxiliary information for any incremental computation problem. Combining these techniques with previous techniques for caching intermediate results, we obtain a systematic approach that transforms nonincrementa ..."
Abstract

Cited by 22 (12 self)
This paper presents program analyses and transformations that discover a general class of auxiliary information for any incremental computation problem. Combining these techniques with previous techniques for caching intermediate results, we obtain a systematic approach that transforms nonincremental programs into efficient incremental programs that use and maintain useful auxiliary information as well as useful intermediate results. The use of auxiliary information allows us to achieve a greater degree of incrementality than otherwise possible. Applications of the approach include strength reduction in optimizing compilers and finite differencing in transformational programming.
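The auxiliary-information idea in the abstract above can be made concrete with a small hypothetical Python sketch (an illustration of the concept, not the paper's own algorithm): maintaining a running average under list extension cannot be done from the previous result alone; the element count is auxiliary information that must be kept alongside it.

```python
def average(xs):
    """Non-incremental: recomputes from scratch, O(n) per call."""
    return sum(xs) / len(xs)

def average_inc(prev_avg, count, y):
    """Incremental version under the input change xs -> xs + [y].
    The previous average alone is not enough to update in O(1);
    the element count is the auxiliary information maintained
    alongside the result."""
    new_count = count + 1
    return (prev_avg * count + y) / new_count, new_count
```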
Advanced Logic Program Specialisation
 In this volume
"... Declarative programming languages, are highlevel programming languages in which one only has to state what is to be computed and not necessarily how it is to be computed. Logic programming and functional programming are two prominent members of this class of programming languages. While functional ..."
Abstract

Cited by 20 (11 self)
Declarative programming languages are high-level programming languages in which one only has to state what is to be computed and not necessarily how it is to be computed. Logic programming and functional programming are two prominent members of this class of programming languages. While functional programming is based on the λ-calculus, logic ...
Improving Control in Functional Logic Program Specialization
, 1998
"... We have recently defined a framework for Narrowingdriven Partial Evaluation (NPE) of functional logic programs. This method is as powerful as partial deduction of logic programs and positive supercompilation of functional programs. Although it is possible to treat complex terms containing primitive ..."
Abstract

Cited by 19 (11 self)
We have recently defined a framework for Narrowing-driven Partial Evaluation (NPE) of functional logic programs. This method is as powerful as partial deduction of logic programs and positive supercompilation of functional programs. Although it is possible to treat complex terms containing primitive functions (e.g. conjunctions or equations) in the NPE framework, its basic control mechanisms do not allow for effective polygenetic specialization of these complex expressions. We introduce a sophisticated unfolding rule endowed with a dynamic narrowing strategy which permits flexible scheduling of the elements (in conjunctions) which are reduced during specialization. We also present a novel abstraction operator which carefully considers primitive functions and is the key to achieving accurate polygenetic specialization. The abstraction operator extends some recent partitioning techniques defined in the framework of conjunctive partial deduction. We provide experimental results obtained from an implementation using the INDY system which demonstrate that the control refinements produce better specializations.
A Transformation System for Lazy Functional Logic Programs
, 1999
"... Needed narrowing is a complete operational principle for modern declarative languages which integrate the best features of (lazy) functional and logic programming. We define a transformation methodology for functional logic programs based on needed narrowing. We provide (strong) correctness results ..."
Abstract

Cited by 19 (12 self)
Needed narrowing is a complete operational principle for modern declarative languages which integrate the best features of (lazy) functional and logic programming. We define a transformation methodology for functional logic programs based on needed narrowing. We provide (strong) correctness results for the transformation system w.r.t. the set of computed values and answer substitutions, and show that the prominent properties of needed narrowing (namely, the optimality w.r.t. the length of derivations and the number of computed solutions) carry over to the transformation process and the transformed programs. We illustrate the power of the system by applying two well-known transformation strategies (composition and tupling) in our setting. We also provide an implementation of the transformation system which, by means of some experimental results, highlights the benefits of our approach.
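The composition (fusion) strategy mentioned in the abstract above can be sketched outside the needed-narrowing setting with a hypothetical Python example: composing two traversals into one eliminates the intermediate data structure.

```python
def double_all(xs):
    return [2 * x for x in xs]

def square_all(xs):
    return [x * x for x in xs]

def pipeline(xs):
    """Composed form: allocates an intermediate list."""
    return square_all(double_all(xs))

def pipeline_fused(xs):
    """After the composition strategy: a single traversal,
    no intermediate list."""
    return [(2 * x) ** 2 for x in xs]
```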
Parallelization via Context Preservation
 In IEEE Intl Conference on Computer Languages
, 1998
"... Abstract program schemes, such as scan or homomorphism, can capture a wide range of data parallel programs. While versatile, these schemes are of limited practical use on their own. A key problem is that the more natural sequential specifications may not have associative combine operators required b ..."
Abstract

Cited by 18 (16 self)
Abstract program schemes, such as scan or homomorphism, can capture a wide range of data parallel programs. While versatile, these schemes are of limited practical use on their own. A key problem is that the more natural sequential specifications may not have the associative combine operators required by these schemes. As a result, they often fail to be immediately identified. To resolve this problem, we propose a method to systematically derive parallel programs from sequential definitions. This method is special in that it can automatically invent auxiliary functions needed by associative combine operators. Apart from a formalisation, we also provide new theorems, based on the notion of context preservation, to guarantee parallelization for a precise class of sequential programs.
1 Introduction
It is well-recognised that a key problem of parallel computing remains the development of efficient and correct parallel software. This task is further complicated by the variety of parallel arc...
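The need for invented auxiliary functions described above can be illustrated with the classic maximum-prefix-sum problem (a standard textbook example, hypothetical here, not necessarily the paper's): the sequential specification has no associative combine on its own, but pairing the result with the segment sum as an auxiliary value makes the combine operator associative.

```python
def mps_seq(xs):
    """Sequential spec: maximum prefix sum (the empty prefix counts as 0)."""
    best = s = 0
    for x in xs:
        s += x
        best = max(best, s)
    return best

def summarize(xs):
    """Each segment is summarized as (mps, sum); the segment sum is the
    invented auxiliary function that enables an associative combine."""
    return mps_seq(xs), sum(xs)

def combine(a, b):
    """Associative combine: a prefix of xs ++ ys is either a prefix of xs,
    or all of xs followed by a prefix of ys."""
    (m1, s1), (m2, s2) = a, b
    return max(m1, s1 + m2), s1 + s2
```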
Incremental Computation: A Semantics-Based Systematic Transformational Approach
, 1996
"... ion of a function f adds an extra cache parameter to f . Simplification simplifies the definition of f given the added cache parameter. However, as to how the cache parameter should be used in the simplification to provide incrementality, KIDS provides only the observation that distributive laws can ..."
Abstract

Cited by 11 (3 self)
…ion of a function f adds an extra cache parameter to f. Simplification simplifies the definition of f given the added cache parameter. However, as to how the cache parameter should be used in the simplification to provide incrementality, KIDS provides only the observation that distributive laws can often be applied. The Munich CIP project [BMPP89,Par90] has a strategy for finite differencing that captures similar ideas. It first "defines by a suitable embedding a function f′", and then "derives a recursive version of f′ using generalized unfold/fold strategy", but it provides no special techniques for discovering incrementality. We believe that both works provide only general strategies with no precise procedure to follow and therefore are less automatable than ours.
Chapter 4 Caching intermediate results
The value of f′(x ⊕ y) may often be computed faster by using not only the return value of f′(x), as discussed in Chapter 3, but also the values of some subcomputation...
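The cache-parameter idea described above can be sketched with a hypothetical Python example (an illustration of the concept, not the KIDS or CIP derivation): the previous result is passed in and reused instead of recomputing from scratch.

```python
import bisect

def f(xs):
    """The 'expensive' computation: sort from scratch, O(n log n)."""
    return sorted(xs)

def f_cached(prev_result, y):
    """f(xs + [y]) computed from the cached result f(xs) by a single
    ordered insertion, O(n), instead of a full re-sort."""
    out = list(prev_result)
    bisect.insort(out, y)
    return out
```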
A Transformation Method for Dynamic-Sized Tabulation
, 1995
"... Tupling is a transformation tactic to obtain new functions, without redundant calls and/or multiple traversals of common inputs. It achieves this feat by allowing each set (tuple) of function calls to be computed recursively from its previous set. In previous works by Chin and Khoo [8, 9], a safe (t ..."
Abstract

Cited by 9 (3 self)
Tupling is a transformation tactic to obtain new functions, without redundant calls and/or multiple traversals of common inputs. It achieves this feat by allowing each set (tuple) of function calls to be computed recursively from its previous set. In previous works by Chin and Khoo [8, 9], a safe (terminating) fold/unfold transformation algorithm was developed for some classes of functions which are guaranteed to be successfully tupled. However, these classes of functions currently use static-sized tables for eliminating the redundant calls. As shown by Richard Bird in [3], there are also other classes of programs whose redundant calls could only be eliminated by using dynamic-sized tabulation. This paper proposes a new solution to dynamic-sized tabulation by an extension to the tupling tactic. Our extension uses lambda abstractions which can be viewed as either dynamic-sized tables or applications of the higher-order generalisation technique to facilitate tupling. Significant speedups could be obtained after the transformed programs were vectorised, as confirmed by experiment.
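The tupling tactic described above is classically illustrated with the Fibonacci function (a standard example, sketched here in Python rather than in the paper's functional setting): each tuple of calls is computed from the previous tuple, eliminating the redundant calls.

```python
def fib(n):
    """Straightforward definition: redundant calls, exponential time."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def fib_tupled(n):
    """Tupled version: each tuple (fib(k), fib(k-1)) is computed from
    the previous tuple, so every value is computed exactly once."""
    if n < 2:
        return n
    a, b = 1, 0  # the tuple (fib(1), fib(0))
    for _ in range(n - 1):
        a, b = a + b, a
    return a
```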
Optimizing Ackermann's Function by Incrementalization
, 2001
"... This paper describes a formal derivation of an optimized Ackermann's function following a general and systematic method based on incrementalization. The method identifies an appropriate input increment operation and computes the function by repeatedly performing an incremental computation at th ..."
Abstract

Cited by 7 (3 self)
This paper describes a formal derivation of an optimized Ackermann's function following a general and systematic method based on incrementalization. The method identifies an appropriate input increment operation and computes the function by repeatedly performing an incremental computation at the step of the increment. This eliminates repeated subcomputations in executions that follow the straightforward recursive definition of Ackermann's function, yielding an optimized program that is drastically faster and takes extremely little space. This case study uniquely shows the power and limitation of the incrementalization method, as well as both the iterative and recursive nature of computation underlying the optimized Ackermann's function.
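For reference, the straightforward recursive definition of Ackermann's function that the abstract above starts from, together with a crude memoized variant (the memoization is only an illustration of eliminating repeated subcomputations; the paper's incrementalization derives a far more space-efficient program):

```python
from functools import lru_cache

def ack(m, n):
    """Ackermann's function, straightforward recursive definition."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

@lru_cache(maxsize=None)
def ack_cached(m, n):
    """Crude elimination of repeated subcomputations by memoization;
    unlike the paper's derived program, this still uses a large cache."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack_cached(m - 1, 1)
    return ack_cached(m - 1, ack_cached(m, n - 1))
```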
Possibilities and limitations of call-by-need space improvement
 In Proceedings of the sixth ACM SIGPLAN International Conference on Functional Programming
, 2001
"... ..."
(Show Context)
Strengthening invariants for efficient computation
 in Conference Record of the 23rd Annual ACM Symposium on Principles of Programming Languages
, 2001
"... This paper presents program analyses and transformations for strengthening invariants for the purpose of efficient computation. Finding the stronger invariants corresponds to discovering a general class of auxiliary information for any incremental computation problem. Combining the techniques with p ..."
Abstract

Cited by 6 (4 self)
This paper presents program analyses and transformations for strengthening invariants for the purpose of efficient computation. Finding the stronger invariants corresponds to discovering a general class of auxiliary information for any incremental computation problem. Combining the techniques with previous techniques for caching intermediate results, we obtain a systematic approach that transforms nonincremental programs into efficient incremental programs that use and maintain useful auxiliary information as well as useful intermediate results. The use of auxiliary information allows us to achieve a greater degree of incrementality than otherwise possible. Applications of the approach include strength reduction in optimizing compilers and finite differencing in transformational programming.
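The strength-reduction application mentioned above can be illustrated with the classic compiler example (a hypothetical Python rendering, not the paper's transformation): a loop-carried multiplication is replaced by an addition that maintains a strengthened loop invariant.

```python
def scaled(n, c):
    """Before: a multiplication on every iteration."""
    return [i * c for i in range(n)]

def scaled_sr(n, c):
    """After strength reduction: the strengthened loop invariant
    acc == i * c is maintained incrementally by an addition."""
    out, acc = [], 0
    for _ in range(n):
        out.append(acc)
        acc += c
    return out
```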