Results 1 - 6 of 6
Benchmarking implementations of lazy functional languages II: Two years later
 In 6th Functional Programming Languages and Computer Architecture
, 1993
Abstract

Cited by 34 (5 self)
Six implementations of different lazy functional languages are compared using a common benchmark of a dozen medium-sized programs. The experiments that were carried out two years ago have been repeated to chart progress in the development of these compilers. The results have been extended to include all three major Haskell compilers. Over the last two years, the Glasgow Haskell compiler has been improved considerably. The other compilers have also been improved, but to a lesser extent. The Yale Haskell compiler is slower than the Glasgow and Chalmers Haskell compilers. The compilation speed of the Clean compiler is still unrivalled. Another extension is a comparison of results on different architectures, so as to look at architectural influences on the benchmarking procedure. A high-end architecture should be avoided for benchmarking activities, as its behaviour is uneven; it is better to use a mid-range machine if possible.

1 Introduction. In the previous benchmark paper [10], which wi...
Constraints to Stop Higher-Order Deforestation
 In 24th ACM Symposium on Principles of Programming Languages
, 1997
Abstract

Cited by 12 (1 self)
Wadler's deforestation algorithm eliminates intermediate data structures from functional programs. To be suitable for inclusion in a compiler, it must terminate on all programs. Several techniques to ensure termination of deforestation on all first-order programs are known, but a technique for higher-order programs was only recently introduced by Hamilton, and elaborated and implemented in the Glasgow Haskell compiler by Marlow. We introduce a new technique for ensuring termination of deforestation on all higher-order programs that allows useful transformation steps prohibited in Hamilton's and Marlow's techniques.

1 Introduction. Lazy, higher-order, functional programming languages lend themselves to a certain style of programming which uses intermediate data structures [28]. Example 1: Consider the following program.

letrec a = λx. λy. case x of
                     []      → y
                     (h : t) → h : a t y
in λu. λv. λw. a (a u v) w

The term λu. λv. λw. a (a u v) w appends the three lists u, v, and w. Appending u and v ...
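The append program above can be sketched in Haskell; the deforested version below is an illustrative hand rewrite (not the output of the paper's algorithm) that traverses the lists once, without materialising the intermediate result of the inner append:

```haskell
-- Appending three lists via composition builds an intermediate list:
-- app (app u v) w first materialises (app u v) in full.
app :: [a] -> [a] -> [a]
app []     ys = ys
app (x:xs) ys = x : app xs ys

-- Deforested version: produces the result directly,
-- with no intermediate list for the inner append.
app3 :: [a] -> [a] -> [a] -> [a]
app3 []     v w = app v w
app3 (x:xs) v w = x : app3 xs v w

main :: IO ()
main = print (app3 [1,2] [3] [4,5] == app (app [1,2] [3]) [4,5])
```

Both definitions compute the same list; the deforested one simply avoids allocating and re-traversing the intermediate cons cells.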
Integer Constraints to Stop Deforestation
, 1996
Abstract

Cited by 10 (2 self)
Deforestation is a transformation of functional programs to remove intermediate data structures. It is based on outermost unfolding of function calls, where folding occurs when unfolding takes place within the same nested function call. Since unrestricted unfolding may encounter arbitrarily many terms, a termination analysis has to determine those subterms where unfolding is possibly dangerous. We show that such an analysis can be obtained from a control flow analysis by an extension with integer constraints, essentially at no loss in efficiency.

1 Introduction. The key idea of flow analysis for functional languages is to define an abstract meaning in terms of program points, i.e., subexpressions of the program possibly evaluated during program execution [Pa95]. Such analyses have been invented for tasks like type recovery [Sh91], binding time analysis [Co93], or safety analysis [PS95]. Conceptually, these are closely related to A. Deutsch's store-based alias analysis [D...
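A classic instance of the intermediate structures that deforestation removes is a composition of maps; the fused definition below is a small illustrative sketch, not the paper's constraint-based analysis:

```haskell
-- map f (map g xs) traverses xs twice and builds an intermediate list.
-- Deforestation fuses the two traversals into a single one.
mapMap :: (b -> c) -> (a -> b) -> [a] -> [c]
mapMap _ _ []     = []
mapMap f g (x:xs) = f (g x) : mapMap f g xs

main :: IO ()
main = print (mapMap (+1) (*2) [1,2,3] == map (+1) (map (*2) [1,2,3]))
```

The termination analyses discussed in the paper decide at which call sites unfolding steps like this one are safe to perform.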
Guaranteed Optimization: Proving Nullspace Properties of Compilers
 In Proceedings of the 2002 Static Analysis Symposium (SAS'02)
, 2002
Abstract

Cited by 7 (1 self)
Writing performance-critical programs can be frustrating because optimizing compilers for imperative languages tend to be unpredictable. For a subset of optimizations -- those that simplify rather than reorder code -- it would be useful to prove that a compiler reliably performs optimizations. We show that adopting a ``superanalysis'' approach to optimization enables such a proof. By analogy with linear algebra, we define the nullspace of an optimizer as those programs it reduces to the empty program. To span the nullspace, we define rewrite rules that de-optimize programs by introducing abstraction. For a model compiler, we prove that any sequence of de-optimizing rewrite rule applications is undone by the optimizer. Thus, we are able to give programmers a clear mental model of what simplifications the compiler is guaranteed to perform, and make progress on the problem of the ``abstraction penalty'' in imperative languages.
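The de-optimize/optimize round trip can be sketched on a toy expression language (entirely hypothetical; the paper's model compiler and rewrite rules are far richer):

```haskell
-- A toy expression language: variables, lambdas, applications, literals.
data Expr = Var String | Lam String Expr | App Expr Expr | Lit Int
  deriving (Eq, Show)

-- De-optimizing rewrite: introduce abstraction by wrapping an
-- expression in an identity redex. Repeated applications of such
-- rules generate ("span") programs in the optimizer's nullspace.
deopt :: Expr -> Expr
deopt e = App (Lam "x" (Var "x")) e

-- A simplifying (not reordering) optimizer: remove identity redexes
-- everywhere in the term.
optimize :: Expr -> Expr
optimize (App (Lam v (Var v')) e) | v == v' = optimize e
optimize (App f e) = App (optimize f) (optimize e)
optimize (Lam v b) = Lam v (optimize b)
optimize e         = e

main :: IO ()
main = print (optimize (deopt (deopt (Lit 42))) == Lit 42)
```

The guarantee proved in the paper has this shape: any sequence of `deopt`-style rewrites is undone by `optimize`, so the programmer can rely on the abstraction being removed.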
An Algorithm for Composing Pointer Tree Automata
, 1996
Abstract
Pointer Tree Automata (PTAs) are a new formalism designed to ease data conversion. Each PTA takes an input tree and produces an output tree. The desired format of the data is obtained by a sequence of applications of PTAs. The multiple passes through the data limit the performance of this conversion. In this paper, we describe an algorithm for eliminating this cost by transforming the composition of the PTAs into a single equivalent PTA.
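The cost being eliminated can be illustrated with ordinary tree transformations (the names and functions here are hypothetical, not the PTA formalism itself): composing two single-pass transformations into one pass removes the intermediate tree, analogously to composing two PTAs into one.

```haskell
data Tree = Leaf Int | Node Tree Tree
  deriving (Eq, Show)

-- Two separate passes: incr (mirror t) materialises the mirrored tree
-- before incrementing its leaves.
incr, mirror :: Tree -> Tree
incr (Leaf n)     = Leaf (n + 1)
incr (Node l r)   = Node (incr l) (incr r)
mirror (Leaf n)   = Leaf n
mirror (Node l r) = Node (mirror r) (mirror l)

-- Fused composition: one traversal, no intermediate tree.
incrMirror :: Tree -> Tree
incrMirror (Leaf n)   = Leaf (n + 1)
incrMirror (Node l r) = Node (incrMirror r) (incrMirror l)

main :: IO ()
main = print (incrMirror t == incr (mirror t))
  where t = Node (Leaf 1) (Node (Leaf 2) (Leaf 3))
```

The paper's composition algorithm performs the analogous fusion mechanically, for PTAs rather than hand-written functions.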
Automating Proofs of Guaranteed Optimization
Abstract
Guaranteed optimization is a technique for building compilers that have proven guarantees of which optimizations they perform. Such compilers optimize predictably and thoroughly, finding optimal forms of programs with respect to an approximate program equivalence. Guaranteed optimization is a “design-by-proof” technique: in attempting to verify that a compiler has a certain property, one uncovers failures in its design, and when the proof finally succeeds the compiler has the desired property. The proof technique is somewhat cumbersome, so maintaining the proof as the compiler evolves can be tedious. We describe a specialized theorem prover for guaranteed optimization that has been successfully used to verify a nontrivial compiler with 8 simultaneous program analyses.

Key words: Program analysis, optimizing compilers, compiler verification, guaranteed optimization.