Results 1–7 of 7
Intensional Polymorphism in Type-Erasure Semantics, 2002
Abstract

Cited by 142 (39 self)
Intensional polymorphism, the ability to dispatch to different routines based on types at run time, enables a variety of advanced implementation techniques for polymorphic languages, including tag-free garbage collection, unboxed function arguments, polymorphic marshalling, and flattened data structures. To date, languages that support intensional polymorphism have required a type-passing (as opposed to type-erasure) interpretation where types are constructed and passed to polymorphic functions at run time. Unfortunately, type-passing suffers from a number of drawbacks: it requires duplication of run-time constructs at the term and type levels, it prevents abstraction, and it severely complicates polymorphic closure conversion.
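The run-time type dispatch the abstract describes can be sketched in modern Haskell with a GADT of type representations. This is an illustrative reconstruction, not the paper's calculus; the names `Rep` and `marshal` are ours:

```haskell
{-# LANGUAGE GADTs #-}

-- A run-time representation of (a fragment of) the type language.
data Rep a where
  RInt  :: Rep Int
  RBool :: Rep Bool
  RPair :: Rep a -> Rep b -> Rep (a, b)

-- A polymorphic function that inspects the type representation at run
-- time and dispatches to a type-specific routine: a toy "polymorphic
-- marshalling" function that renders any representable value.
marshal :: Rep a -> a -> String
marshal RInt        n      = show n
marshal RBool       b      = show b
marshal (RPair r s) (x, y) =
  "(" ++ marshal r x ++ "," ++ marshal s y ++ ")"
```

In a type-passing implementation the `Rep` argument is constructed and passed at run time; a type-erasure semantics, as the paper proposes, recovers the same dispatch without keeping types alive at run time.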
Implementing Typeful Program Transformations
Abstract

Cited by 11 (1 self)
The notion of program transformation is ubiquitous in programming language studies on interpreters, compilers, partial evaluators, etc. In order to implement a program transformation, we need to choose a representation in the meta language, that is, the programming language in which we construct programs, for representing object programs, that is, the programs in the object language on which the program transformation is to be performed. In practice, most representations chosen for typed...
On the Unusual Effectiveness of Logic in Computer Science
 Bulletin of Symbolic Logic
Abstract

Cited by 7 (0 self)
Effectiveness of Mathematics in the Natural Sciences [Wig60]. This paper can be construed as an examination and affirmation of Galileo’s tenet that “The book of nature is written in the language of mathematics”. To this effect, Wigner presented a large number of examples that demonstrate the effectiveness of
On Π-conversion in the λ-cube and the combination with abbreviations, 1997
"... Typed calculus uses two abstraction symbols ( and \Pi) which are usually treated in different ways: x: :x has as type the abstraction \Pi x: :, yet \Pi x: : has type 2 rather than an abstraction; moreover, ( x:A :B)C is allowed and fireduction evaluates it, but (\Pi x:A :B)C is rarely allowed. Fu ..."
Abstract

Cited by 4 (2 self)
Typed λ-calculus uses two abstraction symbols (λ and Π) which are usually treated in different ways: λx:*.x has as type the abstraction Πx:*.*, yet Πx:*.* has type □ rather than an abstraction; moreover, (λx:A.B)C is allowed and β-reduction evaluates it, but (Πx:A.B)C is rarely allowed. Furthermore, there is a general consensus that λ and Π are different abstraction operators. While we agree with this general consensus, we find it nonetheless important to allow Π to act as an abstraction operator. Moreover, experience with AUTOMATH and the recent revivals of Π-reduction as in [KN 95b, PM 97] illustrate the elegance of giving Π-redexes a status similar to β-redexes. However, Π-reduction in the λ-cube faces serious problems as shown in [KN 95b, PM 97]: it is not safe as regards subject reduction, it does not satisfy type correctness, it loses the property that the type of an expression is well-formed and it fails to make any expression that contains a Π-redex well-for...
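In standard Pure Type System notation, the Π-reduction the abstract discusses is the same contraction as β-reduction, applied to a Π-redex instead of a λ-redex (a sketch of the usual rules, not the paper's exact formulation):

```latex
(\lambda x{:}A.\,B)\,C \;\longrightarrow_{\beta}\; B[x := C]
\qquad\qquad
(\Pi x{:}A.\,B)\,C \;\longrightarrow_{\Pi}\; B[x := C]
```

The paper's point is that while the two rules look symmetric, adding the Π-rule to the λ-cube breaks subject reduction and type correctness unless the system is adapted.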
Common subexpressions are uncommon in lazy functional languages
 Implementation of Functional Languages, 9th International Workshop, IFL’97
"... Abstract. Common subexpression elimination is a wellknown compiler optimisation that saves time by avoiding the repetition of the same computation. In lazy functional languages, referential transparency renders the identification of common subexpressions very simple. More common subexpressions can ..."
Abstract

Cited by 2 (0 self)
Abstract. Common subexpression elimination is a well-known compiler optimisation that saves time by avoiding the repetition of the same computation. In lazy functional languages, referential transparency renders the identification of common subexpressions very simple. More common subexpressions can be recognised because they can be of arbitrary type, whereas standard common subexpression elimination only shares primitive values. However, because lazy functional languages decouple program structure from data space allocation and control flow, analysing the transformation's effects and deciding under which conditions the elimination of a common subexpression is beneficial proves to be quite difficult. We developed and implemented the transformation for the language Haskell by extending the Glasgow Haskell Compiler. On real-world programs the transformation showed nearly no effect. The reason is that common subexpressions whose elimination could speed up programs are uncommon in lazy functional languages.
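At the source level, the transformation the abstract studies amounts to introducing a `let` binding so the shared computation runs once. A minimal sketch, with `expensive` standing in for an arbitrary pure computation (the names are illustrative, not from the paper):

```haskell
-- Stand-in for an arbitrary pure computation.
expensive :: [Int] -> Int
expensive = sum . map (* 2)

-- Before: the same subexpression appears (and is naively computed) twice.
before :: [Int] -> Int
before xs = expensive xs + expensive xs

-- After common subexpression elimination: one shared binding.
-- Referential transparency guarantees the rewrite preserves the result;
-- laziness means the shared thunk is evaluated at most once.
after :: [Int] -> Int
after xs = let shared = expensive xs in shared + shared
```

The paper's finding is that in practice such sharable subexpressions, where the rewrite actually pays off, rarely occur in real lazy functional programs.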
Exploring Henk
Abstract
... λx:E.E (abstraction) | Πx:E.E (quantification) — syntax of Pure Type System expressions. A nice example of this effect is the extension of Haskell with generics. For implementing a generic programming extension for Haskell in the style of Ralf Hinze [2], we need abstraction at the type level. Consider for example the function size(t) :: t a -> Int which counts the number of values of type a in a given structure of type t a. Note that the type parameter of size ranges over types of kind * -> *. To be able to define size generically over all type constructors of this kind we need type abstraction. Using type abstractions size(t) can be defined as:

size(t) :: t a -> Int
size(\a.1) x = 0
size(\a.Char) x = 0
size(\a. f a + g a) (Left x) = size(f) x
size(\a. f a + g a) (Right x) = size(g) x
size(\a. f a * g a) (x,y) = size(f) x + size(g) y

3 Why Henk? Using Henk as an intermediate language has a lot of advantages compared to contemporary intermediate languages: Henk is based on a solid mathematical th...
Consultant, 2004
"... Abstract. For functional programs, unboxing aggregate data structures such as tuples removes memory indirections and frees dead components of the decoupled structures. To explore the consequences of such optimizations in a wholeprogram compiler, this paper presents a tuple flattening transformation ..."
Abstract
Abstract. For functional programs, unboxing aggregate data structures such as tuples removes memory indirections and frees dead components of the decoupled structures. To explore the consequences of such optimizations in a whole-program compiler, this paper presents a tuple flattening transformation and a framework that allows the formal study and comparison of different flattening schemes. We present our transformation over functional SSA, a simply-typed, monomorphic language and show that the transformation is type-safe. The flattening algorithm defined by our transformation has been incorporated into MLton, a whole-program, optimizing compiler for SML. Experimental results indicate that aggressive tuple flattening can lead to substantial improvements in run-time performance, a reduction in code size, and a decrease in total allocation without a significant increase in compilation time.
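The effect of tuple flattening can be sketched at the source level: a function taking a boxed pair is rewritten to take the components directly, removing the allocation and the memory indirection. This is an illustrative Haskell sketch, not MLton's actual SSA-level transformation; the names are ours:

```haskell
-- Before flattening: the argument is an allocated pair, so every call
-- builds a tuple and every use follows a pointer into it.
distBoxed :: (Double, Double) -> Double
distBoxed (x, y) = sqrt (x * x + y * y)

-- After flattening: the components are passed as separate arguments
-- (in a compiler, call sites are rewritten to match), so no tuple is
-- allocated and dead components can be dropped entirely.
distFlat :: Double -> Double -> Double
distFlat x y = sqrt (x * x + y * y)
```

The paper's framework studies when such rewrites are safe and profitable across a whole program, where every call site of the flattened function is known.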