Results 1–10 of 22
Secrets of the Glasgow Haskell Compiler inliner
 Journal of Functional Programming
, 1999
Abstract

Cited by 47 (6 self)
Higher-order languages, such as Haskell, encourage the programmer to build abstractions by composing functions. A good compiler must inline many of these calls to recover an efficiently executable program. In principle, inlining is dead simple: just replace the call of a function by an instance of its body. But any compiler-writer will tell you that inlining is a black art, full of delicate compromises that work together to give good performance without unnecessary code bloat. The purpose of this paper is, therefore, to articulate the key lessons we learned from a full-scale "production" inliner, the one used in the Glasgow Haskell compiler. We focus mainly on the algorithmic aspects, but we also provide some indicative measurements to substantiate the importance of various aspects of the inliner. 1 Introduction One of the trickiest aspects of a compiler for a functional language is the handling of inlining. In a functional-language compiler, inlining subsumes several other optimisations ...
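The core step the abstract names ("just replace the call of a function by an instance of its body") can be sketched on a toy expression language. Everything below (the AST types, the `inline` function, the `defs` table) is an illustrative assumption, not GHC's actual inliner:

```python
from dataclasses import dataclass

# A tiny expression AST: variables, additions, lets, and calls to named functions.
@dataclass
class Var:
    name: str

@dataclass
class Call:
    fn: str
    args: list

@dataclass
class Add:
    left: object
    right: object

@dataclass
class Let:
    name: str
    value: object
    body: object

def inline(expr, defs):
    """Replace calls to known functions by let-bound copies of their bodies.

    `defs` maps a function name to (parameter names, body). Arguments are
    bound with `Let` rather than substituted textually, so work is never
    duplicated; balancing such concerns is exactly the kind of compromise
    the paper discusses.
    """
    if isinstance(expr, Call) and expr.fn in defs:
        params, body = defs[expr.fn]
        result = inline(body, defs)
        for p, a in reversed(list(zip(params, expr.args))):
            result = Let(p, inline(a, defs), result)
        return result
    if isinstance(expr, Add):
        return Add(inline(expr.left, defs), inline(expr.right, defs))
    if isinstance(expr, Let):
        return Let(expr.name, inline(expr.value, defs), inline(expr.body, defs))
    return expr
```

For example, inlining `double x` where `double y = y + y` yields `let y = x in y + y`, leaving later simplification passes to decide whether the let can be eliminated.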
Compilation by Transformation in Non-Strict Functional Languages
, 1995
Abstract

Cited by 32 (1 self)
In this thesis we present and analyse a set of automatic source-to-source program transformations that are suitable for incorporation in optimising compilers for lazy functional languages. These transformations improve the quality of code in many different respects, such as execution time and memory usage. The transformations presented are divided into two sets: global transformations, which are performed once (or sometimes twice) during the compilation process; and a set of local transformations, which are performed before and after each of the global transformations, so that they can simplify the code before applying the global transformations and also take advantage of them afterwards. Many of the local transformations are simple, well known, and do not have major effects on their own. They become important as they interact with each other and with global transformations, sometimes in non-obvious ways. We present how and why they improve the code, and perform extensive experiments with ...
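The interaction of simple local transformations can be illustrated with two rewrites, constant folding and the inlining of constant let-bindings, iterated to a fixpoint: each rewrite exposes opportunities for the other. This is a hypothetical sketch on a tuple-encoded expression language, not the thesis's transformation set:

```python
def simplify(expr):
    """Apply local rewrites repeatedly until nothing changes.

    Expressions are nested tuples: ('add', a, b), ('let', name, value, body),
    ('var', name), or a plain int literal.
    """
    changed = True
    while changed:
        expr, changed = rewrite(expr)
    return expr

def rewrite(e):
    if isinstance(e, int):
        return e, False
    if e[0] == 'add':
        a, ca = rewrite(e[1])
        b, cb = rewrite(e[2])
        if isinstance(a, int) and isinstance(b, int):
            return a + b, True                 # constant folding
        return ('add', a, b), ca or cb
    if e[0] == 'let':
        _, name, val, body = e
        val, cv = rewrite(val)
        body, cb = rewrite(body)
        if isinstance(val, int):
            return subst(body, name, val), True  # inline a constant binding
        return ('let', name, val, body), cv or cb
    return e, False                            # ('var', name)

def subst(e, name, val):
    """Substitute `val` for free occurrences of variable `name` in `e`."""
    if isinstance(e, int):
        return e
    if e[0] == 'var':
        return val if e[1] == name else e
    if e[0] == 'add':
        return ('add', subst(e[1], name, val), subst(e[2], name, val))
    if e[0] == 'let':
        inner = e[3] if e[1] == name else subst(e[3], name, val)  # respect shadowing
        return ('let', e[1], subst(e[2], name, val), inner)
    return e
```

Here `let x = 1 + 2 in x + 4` first folds to `let x = 3 in x + 4`, which the second rewrite turns into `3 + 4`, which folding reduces to `7`; neither rewrite alone finishes the job.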
Towards generic refactoring
, 2008
Abstract

Cited by 29 (10 self)
We study program refactoring while considering the language or even the programming paradigm as a parameter. We use typed functional programs, namely Haskell programs, as the specification medium for a corresponding refactoring framework. In order to detach ourselves from language syntax, our specifications adhere to the following style. (I) As for primitive algorithms for program analysis and transformation, we employ generic function combinators supporting generic traversal and polymorphic functions refined by ad-hoc cases. (II) As for the language abstractions involved in refactorings, we design a dedicated multi-parameter class. This class can be instantiated for abstractions as present in various languages, e.g., Java, Prolog or Haskell.
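The combinator style in point (I), a generic traversal refined by an ad-hoc case for the one node type a refactoring cares about, can be sketched as follows. The `everywhere` walker and the dict-based AST encoding are illustrative assumptions, not the paper's Haskell combinators:

```python
def everywhere(transform, node):
    """Apply `transform` bottom-up to every sub-node of a nested structure.

    This is the generic part: it knows nothing about the object language,
    only how to walk dicts and lists.
    """
    if isinstance(node, dict):
        node = {k: everywhere(transform, v) for k, v in node.items()}
    elif isinstance(node, list):
        node = [everywhere(transform, v) for v in node]
    return transform(node)

def rename_variable(old, new):
    """Ad-hoc case: only nodes tagged as variables are touched.

    Every other node falls through the identity default, so the same
    traversal works unchanged for any language encoded this way.
    """
    def transform(node):
        if isinstance(node, dict) and node.get('tag') == 'var' and node.get('name') == old:
            return {**node, 'name': new}
        return node
    return transform
```

A refactoring is then just `everywhere(rename_variable('x', 'y'), ast)`: the language-specific knowledge lives entirely in the one ad-hoc case.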
Type-Based Useless Variable Elimination
, 1999
Abstract

Cited by 20 (4 self)
We show a type-based method for useless variable elimination, i.e., a transformation that eliminates variables whose values contribute nothing to the final outcome of a computation, and prove its correctness. The algorithm is a surprisingly simple extension of the usual type reconstruction algorithm. Our method seems more attractive than Wand and Siveroni's 0CFA-based method in many respects. First, it is efficient: it runs in time almost linear in the size of an input expression for a simply-typed calculus, while the 0CFA-based method may require cubic time. Second, our transformation can be shown to be optimal among those that preserve well-typedness, both for the simply-typed language and for an ML-style polymorphically-typed language. On the other hand, the 0CFA-based method is not optimal for the polymorphically-typed language. A summary has been submitted for publication. An up-to-date version of this report will be available through ...
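The effect of useless variable elimination can be illustrated in its simplest syntactic form: a let-bound variable that cannot reach the final result is dropped. This sketch is an assumption for illustration only; the paper's actual algorithm is type-based and runs inside type reconstruction, which this does not model:

```python
def free_vars(e):
    """Free variables of a tuple-encoded expression.

    Expressions: int literal, ('var', n), ('add', a, b), ('let', n, v, b).
    A let binding contributes its value's variables only if the bound
    variable is actually used in the body.
    """
    if isinstance(e, int):
        return set()
    if e[0] == 'var':
        return {e[1]}
    if e[0] == 'add':
        return free_vars(e[1]) | free_vars(e[2])
    if e[0] == 'let':
        body_fv = free_vars(e[3])
        result = body_fv - {e[1]}
        if e[1] in body_fv:
            result |= free_vars(e[2])
        return result
    return set()

def drop_useless(e):
    """Remove let bindings whose variable contributes nothing to the result."""
    if isinstance(e, int) or e[0] == 'var':
        return e
    if e[0] == 'add':
        return ('add', drop_useless(e[1]), drop_useless(e[2]))
    if e[0] == 'let':
        body = drop_useless(e[3])
        if e[1] not in free_vars(body):   # value cannot reach the outcome
            return body
        return ('let', e[1], drop_useless(e[2]), body)
    return e
```

So `let u = 1 + 2 in x` becomes just `x`. Note this syntactic version assumes the dropped binding has no effects, which is automatic in a pure calculus.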
The abstraction and instantiation of string-matching programs
, 2001
Abstract

Cited by 14 (7 self)
We consider a naive, quadratic string matcher testing whether a pattern occurs in a text; we equip it with a cache mediating its access to the text; and we abstract the traversal policy of the pattern, the cache, and the text. We then specialize this abstracted program with respect to a pattern, using the off-the-shelf partial evaluator Similix. Instantiating the abstracted program with a left-to-right traversal policy yields the linear-time behavior of Knuth, Morris and Pratt's string matcher. Instantiating it with a right-to-left policy yields the linear-time ...
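The starting point the abstract describes, a naive quadratic matcher whose every text access goes through a cache, can be sketched as below. This is only the unspecialized program under illustrative assumptions; the paper's results come from abstracting the traversal policy and specializing with Similix, neither of which is shown here:

```python
def occurs(pattern, text):
    """Return True iff `pattern` occurs in `text`.

    Naive O(n*m) matcher: try every start position. A cache mediates all
    access to the text, so a traversal policy could later decide which
    positions are (re)read; here accesses simply pass through it.
    """
    cache = {}

    def read(i):
        if i not in cache:
            cache[i] = text[i]
        return cache[i]

    for start in range(len(text) - len(pattern) + 1):
        if all(read(start + j) == pattern[j] for j in range(len(pattern))):
            return True
    return False
```

Specializing this program with respect to a fixed pattern is what lets the partial evaluator precompute the pattern-dependent comparisons, which is how the KMP-like linear-time behavior arises in the paper.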
Inherited limits
 In Partial Evaluation: Practice and Theory
, 1999
Abstract

Cited by 11 (0 self)
We study the evolution of partial evaluators over the past fifteen years from a particular perspective: the attempt to prevent structural bounds in the original programs from imposing limits on the structure of residual programs. It will often be the case that a language allows unbounded numbers or sizes of particular features, but each program (being finite) will have only a finite number or size of these features. If residual programs cannot overcome the bounds given in the original program, that can be seen as a weakness in the partial evaluator, as it potentially limits the effectiveness of residual programs. We show how historical developments in partial evaluators have removed inherited limits, and suggest how this principle can be used as a guideline for further development.
Lambda-Lifting in Quadratic Time
, 2004
Abstract

Cited by 8 (4 self)
Lambda-lifting is a program transformation that is used in compilers, partial evaluators, and program transformers. In this article, we show how to reduce its complexity from cubic time to quadratic time, and we present a flow-sensitive lambda-lifter that also works in quadratic time. Lambda-lifting transforms ...
Supercompiler HOSC: proof of correctness
, 2010
Abstract

Cited by 6 (6 self)
The paper presents the proof of correctness of an experimental supercompiler, HOSC, dealing with higher-order functions.
Specification and Correctness of Lambda Lifting
Abstract

Cited by 4 (0 self)
We present a formal and general specification of lambda lifting and prove its correctness with respect to a call-by-name operational semantics. We use this specification to prove the correctness of a lambda lifting algorithm similar to the one proposed by Johnsson. Lambda lifting is a program transformation that eliminates free variables from functions by introducing additional formal parameters to function definitions and additional actual parameters to function calls. This operation supports the transformation of a lexically-structured functional program into a set of recursive equations. Existing results provide specific algorithms and only limited correctness results. Our work provides a more general specification of lambda lifting (and related operations) that supports flexible translation strategies, which may result in new implementation techniques. Our work also supports a simple framework in which the interaction of lambda lifting and other optimizations can be studied and from which new algorithms might be obtained.
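The operation the abstract describes (free variables become extra formal parameters, and call sites pass them explicitly) can be shown as a before/after pair. The function names and the scaling example are hypothetical, chosen only to make the lifted and unlifted versions directly comparable:

```python
# Before lifting: `inner` captures the free variable `n` from its
# lexically enclosing scope.
def scale_all_before(n, xs):
    def inner(x):
        return n * x                  # `n` is free in `inner`
    return [inner(x) for x in xs]

# After lifting: the free variable has become a formal parameter, so the
# function is closed and can float to the top level as a standalone
# recursive equation.
def inner_lifted(n, x):
    return n * x

def scale_all_after(n, xs):
    # Every call site now supplies `n` as an additional actual parameter.
    return [inner_lifted(n, x) for x in xs]
```

Both versions compute the same results; the point of the transformation is that `inner_lifted` no longer depends on any enclosing scope, which is what turns a lexically nested program into a flat set of equations.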