Results 21-30 of 36
Fast Strictness Analysis Via Symbolic Fixpoint Iteration
, 1991
Abstract

Cited by 2 (1 self)
Strictness analysis (at least for flat domains) is well understood. For a few years the main concern was efficiency, since the standard analysis was shown to be exponential in the worst case [9]. Thus a lot of research went into finding efficient average-case algorithms. In Yale Haskell we have implemented a strictness analyzer that computes fixpoints via symbolic manipulation of boolean functions. This extremely simple approach is also extremely fast: the strictness analysis phase of our compiler typically takes about 1% of the overall compilation time. 1 Introduction The goal of strictness analysis is to determine, for every function in a program, the parameters in which it is strict. Strictness information is crucial to the implementation of a non-strict language such as Haskell, since conventional machines are best suited to strict, or eager, evaluation. Knowing that a function is strict in a given argument allows one to evaluate that argument eagerly and thus avoid creating dela...
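As a hedged sketch of the general approach this abstract describes (not the Yale Haskell implementation; the names `fAbs` and `fixStrict` are ours), strictness of a recursive function such as `f x y = if x == 0 then y else f (x - 1) y` can be found by Kleene iteration over a two-point boolean abstraction, where `False` abstracts bottom:

```haskell
-- Sketch only: fixpoint iteration over a boolean abstraction of
--   f x y = if x == 0 then y else f (x - 1) y
-- In the abstract domain, False means "definitely undefined" (bottom).

-- Abstract body of f, parameterised by the current approximation of the
-- recursive call: the test demands x; each branch demands y or recurses.
fAbs :: (Bool -> Bool -> Bool) -> Bool -> Bool -> Bool
fAbs rec x y = x && (y || rec x y)

-- Kleene iteration from the bottom function until the abstraction stabilises.
fixStrict :: ((Bool -> Bool -> Bool) -> Bool -> Bool -> Bool)
          -> (Bool -> Bool -> Bool)
fixStrict gen = go (\_ _ -> False)
  where
    go f | agree f f' = f'
         | otherwise  = go f'
      where
        f' = gen f
        agree g h = and [g a b == h a b | a <- [False, True], b <- [False, True]]

-- f is strict in an argument exactly when feeding bottom (False) there
-- yields bottom (False) as the result.
strictInFirst, strictInSecond :: Bool
strictInFirst  = not (fixStrict fAbs False True)
strictInSecond = not (fixStrict fAbs True False)
```

Here the fixpoint is `\x y -> x && y`, so `f` is reported strict in both arguments, matching the concrete function.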
Program Transformation by Solving Recurrences
Abstract

Cited by 2 (0 self)
Recursive programs may require large numbers of procedure calls and stack operations, and many such recursive programs exhibit exponential time complexity, due to the time spent recalculating already-computed subproblems. As a result, methods which transform a given recursive program into an iterative one have been intensively studied. We propose here a new framework for transforming programs by removing recursion. The framework includes a unified method of deriving low time-complexity programs by solving recurrences extracted from the program sources. Our prototype system, ������, is an initial implementation of the framework, automatically finding simpler “closed form” versions of a class of recursive programs. Though in general the solution of recurrences is easier if the functions have only a single recursion parameter, we show a practical technique for solving those with multiple recursion parameters.
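To illustrate the kind of transformation meant (a toy example of ours, not taken from the paper): the recursive sum below satisfies the recurrence s(0) = 0, s(n) = n + s(n-1), and solving that recurrence yields the closed form n(n+1)/2, which removes the recursion entirely:

```haskell
-- Toy illustration: solve the recurrence extracted from a recursive
-- definition and replace the recursion by its closed form.

-- Source program:  s(0) = 0,  s(n) = n + s(n-1)   -- O(n) recursive calls
sRec :: Integer -> Integer
sRec 0 = 0
sRec n = n + sRec (n - 1)

-- Solving the recurrence gives s(n) = n(n+1)/2    -- O(1), no recursion
sClosed :: Integer -> Integer
sClosed n = n * (n + 1) `div` 2
```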
Redundant Call Elimination via Tupling
 FUNDAMENTA INFORMATICAE
, 2005
Abstract

Cited by 2 (0 self)
Redundant call elimination has been an important program optimisation process, as it can produce superlinear speedup in optimised programs. In this paper, we investigate the use of the tupling transformation to achieve this optimisation over a first-order functional language. The standard tupling technique, as described in [6], works excellently in a restricted variant of the language, namely functions with a single recursion argument. We provide a semantic understanding of call redundancy, upon which we construct an analysis for handling the tupling of functions with multiple recursion arguments. The analysis provides a means to ensure termination of the tupling transformation. As the analysis is of polynomial complexity, it makes tupling suitable as a step in compiler optimisation.
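The classic single-recursion-argument instance of tupling (our illustration, not the paper's analysis) is Fibonacci: the naive definition recomputes overlapping subproblems exponentially often, and tupling the two overlapping calls into one pair-valued function eliminates the redundancy:

```haskell
-- Naive definition: the calls fib (n-1) and fib (n-2) overlap,
-- giving exponentially many redundant calls.
fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

-- Tupled version: compute the pair (fib n, fib (n+1)) in a single
-- linear recursion, so each subproblem is solved exactly once.
fibPair :: Integer -> (Integer, Integer)
fibPair 0 = (0, 1)
fibPair n = let (a, b) = fibPair (n - 1) in (b, a + b)

fibT :: Integer -> Integer
fibT = fst . fibPair
```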
A New Means of Ensuring Termination of Deforestation With an Application to Logic Programming
 In Workshop of the Global Compilation Workshop in conjunction with the International Logic Programming Symposium
, 1993
Abstract

Cited by 1 (0 self)
Wadler's deforestation algorithm eliminates intermediate data structures from functional programs, but is only guaranteed to terminate for a certain class of programs. Chin has shown how one can apply deforestation to all first-order programs. We develop a new technique for ensuring termination of deforestation for all first-order programs which strictly extends Chin's technique, in a sense we make precise. We also show how suitable modifications may render our technique applicable to ensuring termination of transformers of logic programs, such as partial evaluation as studied by Gallagher and others and the elimination procedure studied by Proietti and Pettorossi. 1 Introduction Modern functional programming languages such as Miranda [Tur90] lend themselves to a certain elegant style of programming which exploits higher-order functions, lazy evaluation and intermediate data structures; Hughes [Hug90] gives illuminating examples. While this programming style makes it easy to read and w...
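A minimal example of the effect deforestation aims for (ours, not from the paper): a producer and a consumer are fused so that no intermediate list is ever built:

```haskell
-- Before: builds two intermediate lists, [1 .. n] and the squared list,
-- which are allocated, traversed, and discarded.
sumSquares :: Int -> Int
sumSquares n = sum (map (\x -> x * x) [1 .. n])

-- After deforestation: a single recursive loop, no intermediate lists.
sumSquares' :: Int -> Int
sumSquares' n = go 1 0
  where
    go i acc
      | i > n     = acc
      | otherwise = go (i + 1) (acc + i * i)
```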
Call-by-need supercompilation (Computer Laboratory)
, 2013
Abstract
This thesis shows how supercompilation, a powerful technique for transformation and analysis of functional programs, can be effectively applied to a call-by-need language. Our setting will be core calculi suitable for use as intermediate languages when compiling higher-order, lazy functional programming languages such as Haskell. We describe a new formulation of supercompilation which is more closely connected to operational semantics than the standard presentation. As a result of this connection, we are able to exploit a standard Sestoft-style operational semantics to build a supercompiler which, for the first time, is able to supercompile a call-by-need language with unrestricted recursive let bindings. We give complete descriptions of all of the (surprisingly tricky) components of the resulting supercompiler, showing in detail how standard formulations of supercompilation have to be adapted for the call-by-need setting. We show how the standard technique of generalisation can be extended to the call-by-need setting. We also describe a novel generalisation scheme which is simpler to implement than standard generalisation techniques, and describe a completely new form of generalisation which can be used when supercompiling a typed language to ameliorate ...
Deriving Efficient Divide & Conquer Algorithms from Sequential Specification
Abstract
We propose an inductive method to synthesize parallel divide-and-conquer programs from sequential recursive functions. Traditionally, such parallelization methods are based on schematic rules which attempt to match each given sequential program to a prescribed set of program schemes that have parallel counterparts. Instead of relying on specialized program schemes, we propose a new approach to parallelization based on elementary transformation rules. Our approach requires an induction to recover parallelism from sequential programs. To achieve this, we apply a second-order generalisation step to selected instances of sequential equations, before an inductive derivation procedure. The new approach is systematic enough to be semi-automated, and is shown to be widely applicable using a range of examples.
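A small example of the target shape of such a derivation (our own sketch, not the paper's method): a sequential left-to-right sum can be turned, using the associativity of (+), into a divide-and-conquer form whose two recursive calls are independent and hence parallelisable:

```haskell
-- Sequential specification: linear recursion over the list.
sumSeq :: [Int] -> Int
sumSeq []       = 0
sumSeq (x : xs) = x + sumSeq xs

-- Divide-and-conquer counterpart: the two recursive calls are
-- independent, so they could run in parallel. Correctness relies on
-- the associativity of (+), the kind of fact the generalisation
-- step must discover.
sumDC :: [Int] -> Int
sumDC []  = 0
sumDC [x] = x
sumDC xs  = sumDC l + sumDC r
  where (l, r) = splitAt (length xs `div` 2) xs
```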
Transformations for Non-Strict Functional Languages
, 2000
Abstract
In functional languages, intermediate data structures are often used as glue to connect separate parts of a program. These intermediate data structures are useful because they allow modularity, but they are also a cause of inefficiency: each element needs to be allocated, examined, and deallocated. Warm fusion is a program transformation technique which aims to eliminate intermediate data structures. Functions in a program are first transformed into the so-called build-cata form, then fused via a one-step rewrite rule, the cata-build rule. In the process of the transformation to build-cata form we attempt to replace explicit recursion with a fixed pattern of recursion (catamorphism). We analyse in detail the problem of removing (possibly mutually recursive sets of) polynomial datatypes. We have implemented the warm fusion method in the Glasgow Haskell Compiler, which has allowed practical feedback. One important conclusion is that catamorphisms and fusion in general deserve a more prominent role in the compilation process. We give a detailed ...
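For lists, the cata-build rule specialises to GHC's well-known foldr/build rule, foldr k z (build g) = g k z. A minimal sketch of the idea (our own, using the standard foldr/build names rather than the paper's formulation):

```haskell
{-# LANGUAGE RankNTypes #-}

-- build abstracts a list over its constructors; foldr is the list
-- catamorphism. The cata-build rule is the one-step rewrite
--   foldr k z (build g)  ==>  g k z
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- A producer in build form: the list [1 .. n].
upto :: Int -> [Int]
upto n = build (\cons nil ->
  let go i | i > n     = nil
           | otherwise = cons i (go (i + 1))
  in go 1)

-- Consumer applied to the producer: allocates the intermediate list.
sumUpto :: Int -> Int
sumUpto n = foldr (+) 0 (upto n)

-- After the cata-build rewrite, the list constructors are replaced by
-- (+) and 0 directly, so no list is ever built.
sumUptoFused :: Int -> Int
sumUptoFused n =
  let go i | i > n     = 0
           | otherwise = i + go (i + 1)
  in go 1
```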
Isomorphisms between Two Groupoids: An Experiment in Program Synthesis and Transformation
, 1994
Abstract
The research work which comes under the heading `program transformation' started in the seventies. The term covers both the conversion of specifications into runnable programs (though the term `program synthesis' is sometimes preferred here) and the conversion of existing programs into equivalent and more efficient ones. Our main concern in this paper is transforming programs, in particular finding efficient programs. The problem of finding the isomorphisms between two finite groupoids is considered. Initially, using the unfold/fold method, a program for this problem is derived from its definition; this then leads to an efficient program by using promotion (fusion), the unfold/fold method, and necessary proved lemmas. We show that during the process of both synthesis and transformation there are stages at which we need lemmas to proceed. Generalizations of these lemmas are then proved. 1 Introduction The research work which comes under the heading `prog...
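For concreteness (our naive sketch, roughly the shape of the "definition" program such a derivation would start from): a finite groupoid is a set with a binary operation, encoded here as a multiplication table, and an isomorphism is a bijection h with h(x * y) = h(x) ∘ h(y); the obvious program simply enumerates all bijections:

```haskell
import Data.List (permutations)

-- A finite groupoid on elements 0 .. n-1 given by its multiplication
-- table: table !! x !! y is x * y. (Illustrative encoding, ours.)
type Table = [[Int]]

-- Naive "specification" program: enumerate every bijection h and keep
-- those satisfying h (x * y) == h x `op2` h y for all x, y. A derivation
-- like the paper's would start from a definition of this shape and
-- transform it into an efficient program.
isomorphisms :: Table -> Table -> [[Int]]
isomorphisms t1 t2 =
  [ h
  | h <- permutations [0 .. n - 1]
  , and [ h !! (t1 !! x !! y) == t2 !! (h !! x) !! (h !! y)
        | x <- [0 .. n - 1], y <- [0 .. n - 1] ]
  ]
  where n = length t1
```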
Compilation of a Specialized Functional Language for Massively Parallel Computers
 Under consideration for publication in J. Functional Programming
Abstract
We propose a parallel specialized language that ensures portable and cost-predictable implementations on parallel computers. The language is basically a first-order, recursionless, strict functional language equipped with a collection of higher-order functions, or skeletons. These skeletons apply to (nested) vectors and can be grouped into four classes: computation, reorganization, communication, and mask skeletons. The compilation process is described as a series of transformations and analyses leading to SPMD-like functional programs which can be directly translated into real parallel code. The language restrictions enforce a programming discipline whose benefit is to allow a static, symbolic, and accurate cost analysis. The parallel cost takes into account both load balancing and communications, and can be statically evaluated even when the actual size of vectors or the number of processors is unknown. It is used to automatically select the best data distribution among a set of standard distributions. Interestingly, this work can be seen as a cross-fertilization between techniques developed within the Fortran parallelization, skeleton, and functional programming communities.
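As a rough illustration of the skeleton style (our sketch; lists stand in for the paper's vectors, and all names are ours, not the paper's skeleton set): programs are written only by composing higher-order skeletons, never by explicit user-level recursion, which is what makes a static, symbolic cost analysis feasible:

```haskell
-- Illustrative skeletons over (nested) vectors, modelled as lists.

-- Computation skeleton: apply a function pointwise.
compS :: (a -> b) -> [a] -> [b]
compS = map

-- Reorganization skeleton: transpose a nested vector.
reorgS :: [[a]] -> [[a]]
reorgS xss
  | null xss || any null xss = []
  | otherwise                = map head xss : reorgS (map tail xss)

-- Communication-style reduction skeleton: combine with an associative op.
redS :: (a -> a -> a) -> a -> [a] -> a
redS = foldr

-- A "program" is a composition of skeletons, with no explicit recursion:
rowSums :: [[Int]] -> [Int]
rowSums = compS (redS (+) 0)
```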
Recursive Program Optimization Through Inductive Synthesis Proof Transformation
, 1999
Abstract
The research described in this paper involved developing transformation techniques which increase the efficiency of the original program, the source, by transforming its synthesis proof into one, the target, which yields a computationally more efficient algorithm. We describe a working proof transformation system which, by exploiting the duality between mathematical induction and recursion, employs the novel strategy of optimizing recursive programs by transforming inductive proofs. We compare and contrast this approach with more traditional approaches to program transformation, and highlight the benefits of proof transformation with regard to search, correctness, automatability and generality.