Results 1–10 of 20
Explicit Polymorphism and CPS Conversion
 In Twentieth ACM Symposium on Principles of Programming Languages
, 1992
Abstract

Cited by 69 (9 self)
We study the typing properties of CPS conversion for an extension of Fω with control operators. Two classes of evaluation strategies are considered, each with call-by-name and call-by-value variants. Under the "standard" strategies, constructor abstractions are values, and constructor applications can lead to nontrivial control effects. In contrast, the "ML-like" strategies evaluate beneath constructor abstractions, reflecting the usual interpretation of programs in languages based on implicit polymorphism. Three continuation-passing-style sublanguages are considered, one on which the standard strategies coincide, one on which the ML-like strategies coincide, and one on which all the strategies coincide. Compositional, type-preserving CPS transformation algorithms are given for the standard strategies, resulting in terms on which all evaluation strategies coincide. This has as a corollary the soundness and termination of well-typed programs under the standard evaluation strategies. A similar result is obtained for the ML-like call-by-name strategy. In contrast, such results are obtained for the call-by-value ML-like strategy only for a restricted sublanguage in which constructor abstractions are limited to values.
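As a loose illustration of the conversion the abstract discusses (a minimal value-level sketch, not the paper's typed transformation; all names are invented here): under call-by-value CPS, every function takes an extra continuation argument and "returns" by invoking it.

```typescript
// Direct style: (1 + 2) * 3
const direct = (): number => (1 + 2) * 3;

// CPS: each primitive passes its result to a continuation k
// instead of returning it.
const addK = (a: number, b: number, k: (r: number) => number): number =>
  k(a + b);
const mulK = (a: number, b: number, k: (r: number) => number): number =>
  k(a * b);

// The whole computation threads a single continuation through:
// first add 1 and 2, then feed the sum s to the multiplication.
const cpsExample = (k: (r: number) => number): number =>
  addK(1, 2, (s) => mulK(s, 3, k));
```

Passing the identity continuation recovers the direct-style result; the paper's concern is how this transformation interacts with constructor (type-level) abstraction and control operators, which this sketch omits.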
Translucent Sums: A Foundation for Higher-Order Module Systems
, 1997
Abstract

Cited by 58 (0 self)
The ease of understanding, maintaining, and developing a large program depends crucially on how it is divided up into modules. The possible ways a program can be divided are constrained by the available modular programming facilities ("module system") of the programming language being used. Experience with the Standard ML module system has shown the usefulness of functions mapping modules to modules and modules with module subcomponents. For example, functions over modules permit abstract data types (ADTs) to be parameterized by other ADTs, and submodules permit modules to be organized hierarchically. Module systems with such facilities are called higher-order, by analogy with higher-order functions. Previous higher-order module systems can be classified as either opaque or transparent. Opaque systems totally obscure information about the identity of type components of modules, often resulting in overly abstract types. This loss of type identities precludes most interesting uses of hi...
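The "function mapping modules to modules" the abstract describes can be approximated outside ML, as in this hedged sketch (names invented; TypeScript has no real module functors, so a factory function plays the role of one, producing a set ADT from an ordering "module"):

```typescript
// One "module": an ordering on some element type.
interface Ordered<T> { compare(a: T, b: T): number }

// The representation type is only nominally hidden via a brand;
// a real module system would make it fully abstract.
type SetRep<T> = { readonly __brand: "set"; readonly items: readonly T[] };

// Another "module": a set ADT over that element type.
interface SetModule<T> {
  empty: SetRep<T>;
  add(x: T, s: SetRep<T>): SetRep<T>;
  has(x: T, s: SetRep<T>): boolean;
}

// The "functor": maps an ordering module to a set module.
const makeSet = <T>(ord: Ordered<T>): SetModule<T> => {
  const has = (x: T, s: SetRep<T>): boolean =>
    s.items.some((y) => ord.compare(x, y) === 0);
  return {
    empty: { __brand: "set", items: [] },
    add: (x, s) =>
      has(x, s) ? s : { __brand: "set", items: [...s.items, x] },
    has,
  };
};

const intSet = makeSet<number>({ compare: (a, b) => a - b });
```

The opaque/transparent distinction the abstract raises is about how much of `SetRep`'s identity clients are allowed to see; translucent sums let the module signature choose per component.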
Typability and Type Checking in System F Are Equivalent and Undecidable
 Annals of Pure and Applied Logic
, 1998
Abstract

Cited by 58 (4 self)
Girard and Reynolds independently invented System F (a.k.a. the second-order polymorphically typed lambda calculus) to handle problems in logic and computer programming language design, respectively. Viewing F in the Curry style, which associates types with untyped lambda terms, raises the questions of typability and type checking. Typability asks for a term whether there exists some type it can be given. Type checking asks, for a particular term and type, whether the term can be given that type. The decidability of these problems has been settled for restrictions and extensions of F and related systems, and complexity lower bounds have been determined for typability in F, but this report is the first to resolve whether these problems are decidable for System F. This report proves that type checking in F is undecidable, by a reduction from semi-unification, and that typability in F is undecidable, by a reduction from type checking. Because there is an easy reduction from typability to typ...
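One practical consequence of this undecidability, sketched here in TypeScript (names invented): languages with first-class polymorphism require the programmer to declare the polymorphic type rather than infer it.

```typescript
// A rank-2 type: a record whose field is itself polymorphic.
// This polymorphism must be written down explicitly; it would not
// be discovered by inference alone.
type PolyId = { f: <T>(x: T) => T };

// applyTwice uses its argument at two different types (number and
// string), which is legal only because f's type is declared.
const applyTwice = (id: PolyId): [number, string] => [id.f(1), id.f("one")];
```

Curry-style System F asks whether such type assignments could always be reconstructed from the bare term; this paper answers no.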
Tagless Staged Interpreters for Typed Languages
 In the International Conference on Functional Programming (ICFP ’02)
, 2002
Abstract

Cited by 53 (11 self)
Multi-stage programming languages provide a convenient notation for explicitly staging programs. Staging a definitional interpreter for a domain-specific language is one way of deriving an implementation that is both readable and efficient. In an untyped setting, staging an interpreter "removes a complete layer of interpretive overhead", just like partial evaluation. In a typed setting, however, Hindley-Milner type systems do not allow us to exploit typing information in the language being interpreted. In practice, this can mean a slowdown by a factor of three or more.
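The "tags" at issue can be seen in a conventional interpreter, sketched here (a hedged illustration, not the paper's staged code): every value of the interpreted language is wrapped in a tagged union, and every use checks the tag at run time, even when the interpreted program is well typed.

```typescript
// Object language: integers, booleans, addition, conditionals.
type Exp =
  | { tag: "int"; n: number }
  | { tag: "bool"; b: boolean }
  | { tag: "add"; l: Exp; r: Exp }
  | { tag: "if"; c: Exp; t: Exp; e: Exp };

// The universal, tagged value type: the interpretive overhead lives here.
type Value = { tag: "int"; n: number } | { tag: "bool"; b: boolean };

const evaluate = (e: Exp): Value => {
  switch (e.tag) {
    case "int": return { tag: "int", n: e.n };
    case "bool": return { tag: "bool", b: e.b };
    case "add": {
      const l = evaluate(e.l), r = evaluate(e.r);
      // Tag check on every single addition, even in well-typed programs.
      if (l.tag !== "int" || r.tag !== "int") throw new Error("type error");
      return { tag: "int", n: l.n + r.n };
    }
    case "if": {
      const c = evaluate(e.c);
      if (c.tag !== "bool") throw new Error("type error");
      return c.b ? evaluate(e.t) : evaluate(e.e);
    }
  }
};
```

A tagless staged interpreter, as the title promises, eliminates both the `Value` wrappers and the checks by exploiting the object language's own typing.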
The complexity of type inference for higher-order typed lambda calculi
 In Proc. 18th ACM Symposium on the Principles of Programming Languages
, 1991
Abstract

Cited by 28 (11 self)
We analyse the computational complexity of type inference for untyped λ-terms in the second-order polymorphic typed λ-calculus (F2) invented by Girard and Reynolds, as well as higher-order extensions F3, F4, ..., Fω proposed by Girard. We prove that recognising the F2-typable terms requires exponential time, and for Fω the problem is nonelementary. We show as well a sequence of lower bounds on recognising the Fk-typable terms, where the bound for Fk+1 is exponentially larger than that for Fk. The lower bounds are based on generic simulation of Turing Machines, where computation is simulated at the expression and type level simultaneously. Non-accepting computations are mapped to non-normalising reduction sequences, and hence non-typable terms. The accepting computations are mapped to typable terms, where higher-order types encode reduction sequences, and first-order types encode the entire computation as a circuit, based on a unification simulation of Boolean logic. A primary technical tool in this reduction is the composition of polymorphic functions having different domains and ranges. These results are the first nontrivial lower bounds on type inference for the Girard/Reynolds
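A small taste of why type inference can be expensive, sketched here (this is the familiar ML-flavoured version of the phenomenon, not the paper's System F construction): pairing a value with itself doubles the size of the inferred type, so n nested applications produce a type of size exponential in n.

```typescript
// Each application of dup doubles the inferred type.
const dup = <T>(x: T): [T, T] => [x, x];

// Inferred type after three applications (8 occurrences of number):
// [[[number, number], [number, number]],
//  [[number, number], [number, number]]]
const t3 = dup(dup(dup(0)));
```

The lower bounds in the paper are far stronger, but the same pressure (types growing much faster than terms) is what any inference algorithm must contend with.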
On the undecidability of partial polymorphic type reconstruction
 FUNDAMENTA INFORMATICAE
, 1992
Abstract

Cited by 27 (0 self)
We prove that partial type reconstruction for the pure polymorphic λ-calculus is undecidable by a reduction from the second-order unification problem, extending a previous result by H.-J. Boehm. We show further that partial type reconstruction remains undecidable even in a very small predicative fragment of the polymorphic λ-calculus, which implies undecidability of partial type reconstruction for λML as introduced by Harper, Mitchell, and Moggi.
Type inference and semi-unification
 In Proceedings of the ACM Conference on LISP and Functional Programming (LFP), Snowbird
, 1988
Abstract

Cited by 25 (6 self)
In the last ten years, declaration-free programming languages with a polymorphic typing discipline (ML, B) have been developed to approximate the flexibility and conciseness of dynamically typed languages (LISP, SETL) while retaining the safety and execution efficiency of conventional statically typed languages (Algol 68, Pascal). These polymorphic languages can be type checked at compile time, yet allow functions whose arguments range over a variety of types. We investigate several polymorphic type systems, the most powerful of which, termed the Milner-Mycroft Calculus, extends the so-called let-polymorphism found in, e.g., ML with a polymorphic typing rule for recursive definitions. We show that semi-unification, the problem of solving inequalities over first-order terms, characterizes type checking in the Milner-Mycroft Calculus up to polynomial time, even in the restricted case where nested definitions are disallowed. This permits us to extend some infeasibility results for related combinatorial problems to type inference and to correct several claims and statements in the literature. We prove the existence of unique most general solutions of term inequalities, called most general semi-unifiers, and present an algorithm for computing them that terminates for all known inputs due to a novel “extended occurs check”. We conjecture this algorithm to be
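The polymorphic typing rule for recursive definitions that distinguishes the Milner-Mycroft Calculus can be illustrated with polymorphic recursion over a nested ("non-regular") data type, sketched here (names invented; TypeScript accepts the explicitly declared signature, consistent with the undecidability of inferring it):

```typescript
// A non-regular type: the tail holds pairs, the tail's tail holds
// pairs of pairs, and so on.
type Nested<T> = null | { head: T; tail: Nested<[T, T]> };

// The recursive call is at type Nested<[T, T]>, not Nested<T>.
// Plain let-polymorphism cannot type this; the Milner-Mycroft rule
// (or an explicit signature, as here) can.
const len = <T>(n: Nested<T>): number =>
  n === null ? 0 : 1 + len(n.tail);

const example: Nested<number> = { head: 1, tail: { head: [2, 3], tail: null } };
```

The paper's point is that checking such definitions reduces to (and from) semi-unification, so full inference for this rule cannot be both complete and terminating.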
Metaprogramming through typeful code representation
 In Proceedings of the Eighth ACM SIGPLAN International Conference on Functional Programming
, 2003
Abstract

Cited by 22 (4 self)
By allowing the programmer to write code that can generate code at runtime, metaprogramming offers a powerful approach to program construction. For instance, metaprogramming can often be employed to enhance program efficiency and facilitate the construction of generic programs. However, metaprogramming, especially in an untyped setting, is notoriously error-prone. In this paper, we aim at making metaprogramming less error-prone by providing a type system to facilitate the construction of correct metaprograms. We first introduce some code constructors for constructing typeful code representation in which program variables are replaced with de Bruijn indices, and then formally demonstrate how such typeful code representation can be used to support metaprogramming. The main contribution of the paper lies in the recognition and formalization of a novel approach to typed metaprogramming that is practical, general and flexible.
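A typeful code representation with de Bruijn indices can be sketched as follows (a hedged approximation in a different setting: the paper defines its own code constructors, whereas here code of type A in environment E is represented by a host function from E to A, so ill-scoped or ill-typed code simply cannot be built):

```typescript
// Code of type A, open in environment E.
type Code<E, A> = (env: E) => A;

// de Bruijn index 0 and successor: projections out of the environment.
const z = <E, A>(): Code<[A, E], A> => (env) => env[0];
const s = <E, A, B>(v: Code<E, A>): Code<[B, E], A> => (env) => v(env[1]);

// Typed code constructors: abstraction, application, literals.
const lam = <E, A, B>(body: Code<[A, E], B>): Code<E, (x: A) => B> =>
  (env) => (x) => body([x, env]);
const app = <E, A, B>(f: Code<E, (x: A) => B>, x: Code<E, A>): Code<E, B> =>
  (env) => f(env)(x(env));
const lit = <E>(n: number): Code<E, number> => () => n;

// (fun x -> x) 41, with the bound variable written as de Bruijn index z.
const identity = lam<null, number, number>(z<null, number>());
const fortyOne = app(identity, lit<null>(41));
```

Because variables are typed projections rather than names, the representation rules out dangling indices by construction, which is the safety property the paper's code constructors are designed to guarantee.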
MLISP: A Representation-Independent Dialect of LISP with Reduction Semantics
 ACM Transactions on Programming Languages and Systems
, 1992
Abstract

Cited by 22 (2 self)
In this paper we introduce MLISP, a simple new dialect of LISP which is designed with an eye toward reconciling LISP's metalinguistic power with the structural style of operational semantics advocated by Plotkin [Plo75]. We begin by reviewing the original definition of LISP [McC61] in an attempt to clarify the source of its metalinguistic power. We find that it arises from a problematic clause in this definition. We then define the abstract syntax and operational semantics of MLISP, essentially a hybrid of M-expression LISP and Scheme. Next, we tie the operational semantics to the corresponding equational logic. As usual, provable equality in the logic implies operational equality. Having established this framework we then extend MLISP with the metalinguistic eval and reify operators (the latter is a non-strict operator which converts its argument to its metalanguage representation.) These operators encapsulate the metalinguistic representation conversions that occur globall...
Tag Elimination and JonesOptimality
Abstract

Cited by 21 (3 self)
Tag elimination is a program transformation for removing unnecessary tagging and untagging operations from automatically...