Results 1 - 10 of 119
MetaML and Multi-Stage Programming with Explicit Annotations
- Theoretical Computer Science
, 1999
"... . We introduce MetaML, a practically-motivated, staticallytyped multi-stage programming language. MetaML is a "real" language. We have built an implementation and used it to solve multi-stage problems. MetaML allows the programmer to construct, combine, and execute code fragments in a ..."
Abstract
-
Cited by 301 (32 self)
- Add to MetaCart
We introduce MetaML, a practically-motivated, statically-typed multi-stage programming language. MetaML is a "real" language. We have built an implementation and used it to solve multi-stage problems. MetaML allows the programmer to construct, combine, and execute code fragments in a type-safe manner. Code fragments can contain free variables, but they obey the static-scoping principle. MetaML performs typechecking for all stages once and for all before the execution of the first stage. Certain anomalies with our first MetaML implementation led us to formalize an illustrative subset of the MetaML implementation. We present a big-step semantics and a type system for this subset, and prove the type system's soundness with respect to that semantics. From a software engineering point of view, this means that generators written in the MetaML subset never generate unsafe programs. A type system and semantics for full MetaML are still ongoing work. We argue that multi-...
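The staging idea can be sketched in plain OCaml by letting the first stage return an ordinary closure in place of a MetaML code fragment; the definitions below are illustrative and do not use MetaML's bracket, escape, and run annotations.

(* A minimal sketch of multi-stage specialisation in plain OCaml: the
   "code" produced by the first stage is modelled as an ordinary closure
   rather than a MetaML code fragment.  Names are illustrative. *)

(* Stage one: given the exponent n, generate a function specialised to n.
   All recursion over n happens here. *)
let rec spec_power (n : int) : int -> int =
  if n = 0 then fun _ -> 1
  else
    let rest = spec_power (n - 1) in
    fun x -> x * rest x

(* Stage two: the generated function runs with no trace of the exponent loop. *)
let cube : int -> int = spec_power 3

let () = Printf.printf "cube 5 = %d\n" (cube 5)   (* prints 125 *)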
Separate Compilation for Standard ML
, 1994
"... Languages that support abstraction and modular structure, such as Standard ML, Modula, Ada, and (more or less) C++, may have deeply nested dependency hierarchies among source files. In ML the problem is particularly severe because ML's powerful parameterized module (functor) facility entails de ..."
Abstract
-
Cited by 142 (21 self)
- Add to MetaCart
Languages that support abstraction and modular structure, such as Standard ML, Modula, Ada, and (more or less) C++, may have deeply nested dependency hierarchies among source files. In ML the problem is particularly severe because ML's powerful parameterized module (functor) facility entails dependencies among implementation modules, not just among interfaces.
A Modular Module System
- Journal of Functional Programming
, 2000
"... A simple implementation of an SML-like module system is presented as a module parameterized by a base language and its type-checker. This implementation is useful both as a detailed tutorial on the Harper-Lillibridge-Leroy module system and its implementation, and as a constructive demonstration of ..."
Abstract
-
Cited by 85 (0 self)
- Add to MetaCart
(Show Context)
A simple implementation of an SML-like module system is presented as a module parameterized by a base language and its type-checker. This implementation is useful both as a detailed tutorial on the Harper-Lillibridge-Leroy module system and its implementation, and as a constructive demonstration of the applicability of that module system to a wide range of programming languages.
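The paper's organising idea, a module-system engine that is itself a functor over the base language and its type-checker, can be sketched in OCaml as follows; the signature and function names are illustrative, not Leroy's actual interfaces.

module type CORE_LANG = sig
  type term                          (* core-language expressions *)
  type ty                            (* core-language types *)
  type env
  val empty_env : env
  val typecheck : env -> term -> ty  (* may fail on ill-typed terms *)
end

(* The module layer knows nothing about the core language beyond CORE_LANG. *)
module ModSystem (Core : CORE_LANG) = struct
  type component = { name : string; body : Core.term }
  type sig_items = (string * Core.ty) list

  (* Elaborate a structure: type-check each component with the core
     checker and collect the resulting signature. *)
  let elaborate (env : Core.env) (str : component list) : sig_items =
    List.map (fun c -> (c.name, Core.typecheck env c.body)) str
end

(* The same engine instantiated for a toy core language. *)
module ToyCore = struct
  type term = Int of int | Bool of bool
  type ty = TInt | TBool
  type env = unit
  let empty_env = ()
  let typecheck () = function Int _ -> TInt | Bool _ -> TBool
end

module ToyModules = ModSystem (ToyCore)

let () =
  match ToyModules.elaborate ToyCore.empty_env
          [ { ToyModules.name = "x"; ToyModules.body = ToyCore.Int 1 } ] with
  | [ ("x", ToyCore.TInt) ] -> print_endline "val x : int"
  | _ -> print_endline "unexpected"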
Deriving a Lazy Abstract Machine
- Journal of Functional Programming
, 1997
"... Machine Peter Sestoft Department of Mathematics and Physics Royal Veterinary and Agricultural University Thorvaldsensvej 40, DK-1871 Frederiksberg C, Denmark E-mail: sestoft@dina.kvl.dk Version 6 of March 13, 1996 Abstract We derive a simple abstract machine for lazy evaluation of the lambda cal ..."
Abstract
-
Cited by 81 (2 self)
- Add to MetaCart
We derive a simple abstract machine for lazy evaluation of the lambda calculus, starting from Launchbury's natural semantics. Lazy evaluation here means non-strict evaluation with sharing of argument evaluation, that is, call-by-need. The machine we derive is a lazy version of Krivine's abstract machine, which was originally designed for call-by-name evaluation. We extend it with datatype constructors and base values, so the final machine implements all dynamic aspects of a lazy functional language. The development of an efficient abstract machine for lazy evaluation usually starts from either a graph reduction machine or an environment machine. Graph reduction machines perform substitution by rewriting the term graph, that is, the program itself...
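The key ingredient of call-by-need, sharing of argument evaluation, can be sketched as a small OCaml evaluator in which each argument is stored behind a mutable cell that is overwritten once forced. This is only a toy illustration, not Sestoft's derived machine.

type term =
  | Var of string
  | Lam of string * term
  | App of term * term
  | Lit of int

type value =
  | VInt of int
  | VClos of string * term * env
and env = (string * thunk) list
and thunk = cell ref
and cell = Delayed of term * env | Forced of value

let rec eval (env : env) : term -> value = function
  | Lit n -> VInt n
  | Lam (x, b) -> VClos (x, b, env)
  | Var x -> force (List.assoc x env)
  | App (f, a) ->
      (match eval env f with
       | VClos (x, body, cenv) ->
           (* call-by-need: the argument is delayed, not evaluated here *)
           eval ((x, ref (Delayed (a, env))) :: cenv) body
       | VInt _ -> failwith "applied a non-function")

and force (t : thunk) : value =
  match !t with
  | Forced v -> v
  | Delayed (e, env) ->
      let v = eval env e in
      t := Forced v;                 (* update: each thunk is evaluated at most once *)
      v

(* (\x. x) 42 evaluates to 42; the argument thunk is forced exactly once. *)
let () =
  match eval [] (App (Lam ("x", Var "x"), Lit 42)) with
  | VInt n -> Printf.printf "%d\n" n
  | _ -> ()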
Warm Fusion: Deriving Build-Catas from Recursive Definitions
- In Conf. on Func. Prog. Languages and Computer Architecture
, 1995
"... Program fusion is the process whereby separate pieces of code are fused into a single piece, typically transforming a multi-pass algorithm into a single pass. Recent work has made it clear that the process is especially successful if the loops or recursions are expressed using catamorphisms (e.g.fol ..."
Abstract
-
Cited by 76 (3 self)
- Add to MetaCart
(Show Context)
Program fusion is the process whereby separate pieces of code are fused into a single piece, typically transforming a multi-pass algorithm into a single pass. Recent work has made it clear that the process is especially successful if the loops or recursions are expressed using catamorphisms (e.g. foldr) and constructor abstraction (e.g. build). In this paper we show how to transform recursive programs into this form automatically, thus enabling the fusion transformation to be applied more easily than before. There are significant advantages to multi-pass algorithms, in which intermediate data structures are created and traversed. In particular, each of the passes may be relatively simple, so they are both easier to write and potentially more reusable. By separating many distinct phases it becomes possible to focus on a single task, rather than attempting to do many things at the same time. The classic toy example of this is to compute the sum of the squares of the numbe...
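The foldr/build vocabulary the abstract refers to can be transcribed into a short OCaml sketch, with the classic sum-of-squares pipeline fused by the rule foldr f z (build g) = g f z; the definitions below are illustrative, not the paper's transformation itself.

(* foldr is the list catamorphism; build abstracts a list over its
   constructors by handing the producer the real cons and nil. *)
let foldr f z xs = List.fold_right f xs z
let build g = g (fun x xs -> x :: xs) []

(* The squares of 1..n, written in build (constructor-abstracted) form. *)
let squares_g n cons nil =
  let rec go i = if i > n then nil else cons (i * i) (go (i + 1)) in
  go 1

(* Unfused pipeline: an intermediate list of squares is built, then summed. *)
let sum_of_squares_slow n = foldr ( + ) 0 (build (squares_g n))

(* The foldr/build rule  foldr f z (build g) = g f z  removes the list. *)
let sum_of_squares_fast n = squares_g n ( + ) 0

let () =
  assert (sum_of_squares_slow 4 = 30);
  assert (sum_of_squares_fast 4 = 30)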
Flexible representation analysis
- In ACM SIGPLAN International Conference on Functional Programming
, 1997
"... Statically typed languages with Hindley-Milner polymorphism have long been compiled using inefficient and fully boxed data representations. Recently, several new compilation methods have been proposed to support more efficient and unboxed multi-word representations. Unfortunately, none of these tech ..."
Abstract
-
Cited by 66 (13 self)
- Add to MetaCart
Statically typed languages with Hindley-Milner polymorphism have long been compiled using inefficient and fully boxed data representations. Recently, several new compilation methods have been proposed to support more efficient and unboxed multi-word representations. Unfortunately, none of these techniques is fully satisfactory. For example, Leroy's coercion-based approach does not handle recursive data types and mutable types well. The type-passing approach (proposed by Harper and Morrisett) handles all data objects, but it involves extensive runtime type analysis and code manipulations. This paper presents a new flexible representation analysis technique that combines the best of both approaches. Our new scheme supports unboxed representations for recursive and mutable types, yet it requires only a little runtime type analysis. In fact, we show that there is a continuum of possibilities between the coercion-based approach and the type-passing approach. By varying the amount of boxing an...
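The coercion-based side of this design space can be pictured with a schematic OCaml sketch: values are wrapped into a uniform boxed form when they flow into polymorphic code and unwrapped when they return to monomorphic code. This models the idea only; it is not what a compiler actually emits.

(* A model of a uniform (boxed) representation for values that cross into
   polymorphic code.  Real compilers work at the machine level; this only
   shows where the coercions go. *)
type boxed =
  | BInt of int
  | BFloat of float
  | BPair of boxed * boxed

(* Coercions inserted at the boundary between representations. *)
let wrap_float (f : float) : boxed = BFloat f
let unwrap_float = function
  | BFloat f -> f
  | _ -> failwith "coercion mismatch"

(* Polymorphic code sees only the uniform boxed representation ... *)
let first_of_pair = function BPair (a, _) -> a | b -> b

(* ... while monomorphic code keeps its floats flat and unboxed. *)
let average (x : float) (y : float) = (x +. y) /. 2.0

let () =
  let p = BPair (wrap_float 1.5, wrap_float 2.5) in
  Printf.printf "%f\n" (average (unwrap_float (first_of_pair p)) 2.5)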
Implementing Typed Intermediate Languages
, 1998
"... Recent advances in compiler technology have demonstrated the benefits of using strongly typed intermediate languages to compile richly typed source languages (e.g., ML). A typepreserving compiler can use types to guide advanced optimizations and to help generate provably secure mobile code. Types, u ..."
Abstract
-
Cited by 58 (14 self)
- Add to MetaCart
(Show Context)
Recent advances in compiler technology have demonstrated the benefits of using strongly typed intermediate languages to compile richly typed source languages (e.g., ML). A type-preserving compiler can use types to guide advanced optimizations and to help generate provably secure mobile code. Types, unfortunately, are very hard to represent and manipulate efficiently; a naive implementation can easily add exponential overhead to the compilation and execution of a program. This paper describes our experience with implementing the FLINT typed intermediate language in the SML/NJ production compiler. We observe that a type-preserving compiler will not scale to handle large types unless all of its type-preserving stages preserve the asymptotic time and space usage in representing and manipulating types. We present a series of novel techniques for achieving this property and give empirical evidence of their effectiveness.
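One standard technique for keeping large types cheap to represent and compare is hash-consing, sketched below in OCaml. It illustrates the general concern of the abstract and is not claimed to be FLINT's exact scheme.

(* Hash-consing: structurally equal types share one node, so equality on
   types becomes a constant-time comparison of node identifiers. *)
type ty =
  | TInt
  | TArrow of tynode * tynode
  | TTuple of tynode list
and tynode = { id : int; body : ty }   (* id is unique per distinct type *)

let table : (ty, tynode) Hashtbl.t = Hashtbl.create 97
let counter = ref 0

(* Smart constructor: reuse an existing node when the same type reappears. *)
let mk (t : ty) : tynode =
  match Hashtbl.find_opt table t with
  | Some node -> node
  | None ->
      incr counter;
      let node = { id = !counter; body = t } in
      Hashtbl.add table t node;
      node

(* Equality on hash-consed types is constant time. *)
let equal_ty (a : tynode) (b : tynode) = a.id = b.id

let () =
  let int_to_int = mk (TArrow (mk TInt, mk TInt)) in
  let again = mk (TArrow (mk TInt, mk TInt)) in
  assert (equal_ty int_to_int again)   (* the same shared node *)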
Once Upon a Polymorphic Type
, 1998
"... We present a sound type-based `usage analysis' for a realistic lazy functional language. Accurate information on the usage of program subexpressions in a lazy functional language permits a compiler to perform a number of useful optimisations. However, existing analyses are either ad-hoc and app ..."
Abstract
-
Cited by 42 (6 self)
- Add to MetaCart
We present a sound type-based 'usage analysis' for a realistic lazy functional language. Accurate information on the usage of program subexpressions in a lazy functional language permits a compiler to perform a number of useful optimisations. However, existing analyses are either ad-hoc and approximate, or defined over restricted languages. Our work extends the Once Upon A Type system of Turner, Mossin, and Wadler (FPCA'95). Firstly, we add type polymorphism, an essential feature of typed functional programming languages. Secondly, we include general Haskell-style user-defined algebraic data types. Thirdly, we explain and solve the 'poisoning problem', which causes the earlier analysis to yield poor results. Interesting design choices turn up in each of these areas. Our analysis is sound with respect to a Launchbury-style operational semantics, and it is straightforward to implement. Good results have been obtained from a prototype implementation, and we are currently integrating the system into the Glasgow Haskell Compiler.
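What a "used at most once" annotation buys at run time can be sketched with a toy thunk in OCaml: a thunk the analysis marks as single-use skips the memoising update step of call-by-need. The annotation type and the interface below are illustrative only.

type usage = Once | Many                 (* outcome of the usage analysis *)

type 'a thunk = { usage : usage; mutable state : 'a state }
and 'a state = Todo of (unit -> 'a) | Done of 'a

let delay u f = { usage = u; state = Todo f }

let force t =
  match t.state with
  | Done v -> v
  | Todo f ->
      let v = f () in
      (* Only thunks that may be demanded again pay for the write-back. *)
      (match t.usage with Many -> t.state <- Done v | Once -> ());
      v

let () =
  let t = delay Once (fun () -> print_endline "computed"; 40 + 2) in
  Printf.printf "%d\n" (force t)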
Dictionary-free Overloading by Partial Evaluation
"... One of the most novel features in the functional programming language Haskell is the system of type classes used to support a combination of overloading and polymorphism. Current implementations of type class overloading are based on the use of dictionary values, passed as extra parameters to overlo ..."
Abstract
-
Cited by 40 (1 self)
- Add to MetaCart
(Show Context)
One of the most novel features in the functional programming language Haskell is the system of type classes used to support a combination of overloading and polymorphism. Current implementations of type class overloading are based on the use of dictionary values, passed as extra parameters to overloaded functions. Unfortunately, this can have a significant effect on run-time performance, for example, by reducing the effectiveness of important program analyses and optimizations. This paper describes how a simple partial evaluator can be used to avoid the need for dictionary values at run-time by generating specialized versions of overloaded functions. This eliminates the run-time costs of overloading. Furthermore, and somewhat surprisingly given the presence of multiple versions of some functions, for all of the examples that we have tried so far, specialization actually leads to a reduction in the size of compiled programs.
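The contrast between dictionary passing and specialisation can be sketched in OCaml with records standing in for Haskell dictionaries; the names and the "Num-like" class below are illustrative.

(* A "Num-like" dictionary: the overloaded operations packaged as a record. *)
type 'a num_dict = { add : 'a -> 'a -> 'a; zero : 'a }

(* Dictionary-passing style: the overloaded function takes the dictionary
   as an extra run-time argument. *)
let sum_dict (d : 'a num_dict) (xs : 'a list) : 'a =
  List.fold_left d.add d.zero xs

let int_num = { add = ( + ); zero = 0 }
let float_num = { add = ( +. ); zero = 0.0 }

(* What partial evaluation produces: specialised versions with the
   dictionary arguments folded away, so calls pay no indirection. *)
let sum_int (xs : int list) = List.fold_left ( + ) 0 xs
let sum_float (xs : float list) = List.fold_left ( +. ) 0.0 xs

let () =
  assert (sum_dict int_num [ 1; 2; 3 ] = sum_int [ 1; 2; 3 ]);
  assert (sum_dict float_num [ 1.0; 2.0 ] = sum_float [ 1.0; 2.0 ])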
The effectiveness of type-based unboxing
- Boston College, Computer Science Department
, 1997
"... Abstract We compare the efficiency of type-based unboxing strategies with that of simpler, untyped unboxing optimizations, building on our practical experience with the Gallium and Objective Caml compilers. We find the untyped optimizations to perform as well on the best case and significantly bette ..."
Abstract
-
Cited by 36 (1 self)
- Add to MetaCart
(Show Context)
We compare the efficiency of type-based unboxing strategies with that of simpler, untyped unboxing optimizations, building on our practical experience with the Gallium and Objective Caml compilers. We find the untyped optimizations to perform as well in the best case and significantly better in the worst case. In Pascal or C, the actual types of all data are always known at compile time, allowing compilers to base data representation decisions on this typing information, thus supporting efficient memory layout of data structures as well as efficient calling conventions for functions. This is no longer true for languages featuring polymorphism and type abstraction, such as ML: there, the static, compile-time type information does not always determine the actual, run-time type of a datum (e.g. when the static type contains type variables or abstract type identifiers).
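The point about polymorphism and representations can be illustrated with a tiny OCaml example (purely illustrative, not the paper's benchmark): one compiled body of a polymorphic function must accept arguments of different shapes, which pushes the compiler toward a uniform calling convention, whereas a fully monomorphic function can keep its float argument flat.

(* One compiled body of pair_with_self serves both calls below, so the
   argument has to arrive in a single uniform representation (typically a
   pointer-sized, possibly boxed, word). *)
let pair_with_self (x : 'a) : 'a * 'a = (x, x)

let ints : int * int = pair_with_self 1
let floats : float * float = pair_with_self 3.14

(* A function whose type is fully known at compile time lets the compiler
   pass and return the float unboxed, e.g. in a floating-point register. *)
let scale (x : float) : float = x *. 2.0

let () = Printf.printf "%f %d\n" (scale (fst floats)) (fst ints)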