Results 11 – 20 of 25
An Implementation of Standard ML Modules
In ACM Conf. on Lisp and Functional Programming, 1988
Abstract

Cited by 32 (3 self)
Standard ML includes a set of module constructs that support programming in the large. These constructs extend ML's basic polymorphic type system by introducing the dependent types of Martin-Löf's Intuitionistic Type Theory. This paper discusses the problems involved in implementing Standard ML's modules and describes a practical, efficient solution to these problems. The representations and algorithms of this implementation were inspired by a detailed formal semantics of Standard ML developed by Milner, Tofte, and Harper. The implementation is part of a new Standard ML compiler that is written in Standard ML using the module system. (David MacQueen, AT&T Bell Laboratories, Murray Hill, NJ 07974.) 1. Introduction: An important part of the revision of ML that led to the Standard ML language was the inclusion of module facilities for the support of "programming in the large." The design of these facilities went through several versions [...
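The module constructs the abstract refers to can be sketched briefly. The example below is written in OCaml, a close relative of Standard ML, rather than SML itself; the `QUEUE`/`Queue`/`Pair` names are illustrative, not from the paper.

```ocaml
(* A signature: the "type" of a module. *)
module type QUEUE = sig
  type 'a t
  val empty : 'a t
  val push : 'a -> 'a t -> 'a t
  val pop : 'a t -> ('a * 'a t) option
end

(* A structure: an implementation. Sealing with ": QUEUE" makes the
   representation type abstract outside the module. *)
module Queue : QUEUE = struct
  type 'a t = 'a list
  let empty = []
  let push x q = q @ [x]
  let pop = function [] -> None | x :: q -> Some (x, q)
end

(* A functor: a module parameterized over another module. *)
module Pair (Q : QUEUE) = struct
  let push2 x y q = Q.push y (Q.push x q)
end

module P = Pair (Queue)

let () =
  match Queue.pop (P.push2 1 2 Queue.empty) with
  | Some (x, _) -> assert (x = 1)   (* FIFO: 1 was pushed first *)
  | None -> assert false
```

Signatures play the role of the "dependent types" mentioned above: the type of a functor's result depends on the module it is applied to.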
Type Checking with Universes, 1991
Abstract

Cited by 32 (6 self)
Various formulations of constructive type theories have been proposed to serve as the basis for machine-assisted proof and as a theoretical basis for studying programming languages. Many of these calculi include a cumulative hierarchy of "universes," each a type of types closed under a collection of type-forming operations. Universes are of interest for a variety of reasons, some philosophical (predicative vs. impredicative type theories), some theoretical (limitations on the closure properties of type theories), and some practical (to achieve some of the advantages of a type of all types without sacrificing consistency). The Generalized Calculus of Constructions (CC^ω) is a formal theory of types that includes such a hierarchy of universes. Although essential to the formalization of constructive mathematics, universes are tedious to use in practice, for one is required to make specific choices of universe levels and to ensure that all choices are consistent. In this pa...
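The cumulative hierarchy described above can be seen concretely in a modern proof assistant. A minimal sketch in Lean 4 notation (not the exact calculus studied in the paper):

```lean
-- Each universe is itself a type in the next universe (cumulativity):
#check (Nat : Type)        -- Type is an abbreviation for Type 0
#check (Type : Type 1)     -- Type 0 lives in Type 1
#check (Type 1 : Type 2)   -- and so on, avoiding a type of all types

-- Universe polymorphism lets one definition live at every level,
-- relieving the user of the explicit level choices the abstract mentions.
universe u
def idAt (α : Type u) (a : α) : α := a
```

The last definition illustrates the practical point: without universe polymorphism (or the elaboration the paper develops), a separate `idAt` would be needed at each level.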
Type inference and semi-unification
In Proceedings of the ACM Conference on LISP and Functional Programming (LFP) (Snowbird), 1988
Abstract

Cited by 30 (7 self)
In the last ten years, declaration-free programming languages with a polymorphic typing discipline (ML, B) have been developed to approximate the flexibility and conciseness of dynamically typed languages (LISP, SETL) while retaining the safety and execution efficiency of conventional statically typed languages (Algol 68, Pascal). These polymorphic languages can be type checked at compile time, yet allow functions whose arguments range over a variety of types. We investigate several polymorphic type systems, the most powerful of which, termed the Milner-Mycroft Calculus, extends the so-called let-polymorphism found in, e.g., ML with a polymorphic typing rule for recursive definitions. We show that semi-unification, the problem of solving inequalities over first-order terms, characterizes type checking in the Milner-Mycroft Calculus up to polynomial time, even in the restricted case where nested definitions are disallowed. This permits us to extend some infeasibility results for related combinatorial problems to type inference and to correct several claims and statements in the literature. We prove the existence of unique most general solutions of term inequalities, called most general semi-unifiers, and present an algorithm for computing them that terminates for all known inputs due to a novel "extended occurs check". We conjecture this algorithm to be...
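The gap between let-polymorphism and the Milner-Mycroft rule can be observed in OCaml, which admits the Milner-Mycroft typing rule for recursion only when the polymorphic type is declared explicitly, thereby sidestepping the inference problem the paper reduces to semi-unification. The `nested`/`len` names are illustrative.

```ocaml
(* A non-uniform ("nested") datatype: the recursive occurrence of the
   type is used at a different instance than the definition. *)
type 'a nested = Nil | Cons of 'a * ('a * 'a) nested

(* Under plain let-polymorphism, [len] would be monomorphic inside its
   own body, so the recursive call at type [('a * 'a) nested] would be
   rejected. The explicit annotation ['a. ...] requests polymorphic
   recursion (the Milner-Mycroft rule). *)
let rec len : 'a. 'a nested -> int = function
  | Nil -> 0
  | Cons (_, rest) -> 1 + len rest

let () = assert (len (Cons (1, Cons ((2, 3), Nil))) = 2)
```

Requiring the annotation is exactly the design response to the hardness results the abstract alludes to: checking a given polymorphic type is easy, inferring one is not.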
An applicative module calculus
In Theory and Practice of Software Development '97, Lecture Notes in Computer Science, 1997
Abstract

Cited by 15 (1 self)
Abstract. SML-like module systems are small typed languages in their own right. As such, one would expect a proof of their soundness to follow from a proof of subject reduction. Unfortunately, the subject-reduction property and the preservation of type abstraction seem to be incompatible. As a consequence, in the relevant module systems, the theoretical study of reductions is meaningless, and, for instance, the question of normalization of module expressions cannot even be considered. In this paper, we analyze this problem as a misunderstanding of the notion of module definition. We build a variant of the SML module system — inspired by recent work by Leroy, Harper, and Lillibridge — which enjoys the subject-reduction property. Type abstraction — achieved through an explicit declaration of the signature of a module at its definition — is preserved. This was the initial motivation. Besides, our system enjoys other type-theoretic properties: the calculus is strongly normalizing, there are no syntactic restrictions on module paths, it enjoys a purely applicative semantics, every module has a principal type, and type inference is decidable. Neither Leroy's system nor Harper and Lillibridge's system has all of these.
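The "purely applicative semantics" mentioned above can be observed in OCaml, whose functors are applicative: two applications of the same functor to the same module path yield equal abstract types. The `MakeSet` example is illustrative only, not the paper's calculus.

```ocaml
module type ORD = sig type t val compare : t -> t -> int end

module MakeSet (O : ORD) : sig
  type t
  val empty : t
  val add : O.t -> t -> t
  val mem : O.t -> t -> bool
end = struct
  type t = O.t list
  let empty = []
  let mem x s = List.exists (fun y -> O.compare x y = 0) s
  let add x s = if mem x s then s else x :: s
end

module IntOrd = struct type t = int let compare = compare end
module S1 = MakeSet (IntOrd)
module S2 = MakeSet (IntOrd)

(* Applicative semantics: S1.t and S2.t are the SAME abstract type, so a
   set built with S1's operations can be consumed with S2's. Under a
   generative semantics the assertion below would not even type-check. *)
let () = assert (S2.mem 3 (S1.add 3 S1.empty))
```

This type-sharing across repeated functor applications is what the subject-reduction result has to preserve alongside abstraction.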
Extending Record typing to type parametric modules with sharing, 1993
Abstract

Cited by 14 (1 self)
We extend the term-unification techniques used to type extensible records in order to solve the two main typing problems for modules in Standard ML: matching and sharing. We obtain a type system for modules based only on well-known unification problems, modulo some equational theories we define. Our formalization is simple and has the elegance of polymorphic type disciplines based on unification. It can be seen as a synthesis of previous work on module and record typing.
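Row-style typing of extensible records, the starting point of the formalization above, is visible in OCaml's object types: the inferred type carries a row variable `..` standing for "any further fields", and unification instantiates it at each use. The `get_name` example is illustrative.

```ocaml
(* Inferred type: < name : 'a; .. > -> 'a
   The ".." is a row variable: "a record with a name field and anything
   else". Unification specializes it independently at each call site. *)
let get_name o = o#name

let () =
  let alice = object method name = "alice" method age = 30 end in
  let bob = object method name = "bob" end in
  (* The same function applies to objects with different extra fields. *)
  assert (get_name alice = "alice");
  assert (get_name bob = "bob")
```

Module matching is the analogous problem one level up: a structure with extra components must match a signature mentioning only some of them.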
A calculus of higher-order parameterization for algebraic specifications
In Bulletin of the Interest Group in Pure and Applied Logics (IGPL), 1995
Abstract

Cited by 2 (1 self)
A specification language is presented which provides three specification-building operators: amalgamated union, renaming, and restriction. The language is enhanced with parameterization over higher-order variables based on the simply typed lambda calculus. Context dependencies that ensure the well-definedness of a parameterized specification are defined over a calculus of requirements and can be syntactically derived. A contextual proof system for parameterized specifications is also presented, which is correct and relatively complete.
Key Words in Context, an example, 1990
Abstract

Cited by 1 (0 self)
This paper presents a non-trivial example developed according to their method. We offer comments on how this process might be automated and what demands it places on the logic. In reviewing the text of this paper we have noticed, to our consternation, that the volume of words and symbols generated is on the verge of overwhelming the reader. The reader of a specification must be able to concentrate his/her attention on specific details of concern. Having the visual field cluttered with text that dilutes the density of information content is distracting. There are several ways to mitigate this problem. (1) Some parts of component specifications are repeated many times in the linear text of the paper because they occur in several contexts. If one were reading the specification from the screen of a workstation, particularly one utilizing a hypertext-like environment, the duplications would be less distracting. (2) The text of specifications suffers from the wordiness of a programming notation. It would be easier to digest if it appeared in more familiar mathematical notation, with abbreviated names and special symbols for operators such as universal quantification. Setting the text of axioms in an italic font in 'math mode' format would help to distinguish them from the declarations that appear in a signature. (3) At some points in the example, coercion functions are used to resolve otherwise overloaded operator names. Overloaded operator names are commonly used in mathematics, and a convention that permitted such overloading would be a boon to the readability of specifications. In evaluating the specification language and methods we have used in this paper, the reader should also keep in mind that were a similar problem done in practice, the software designer would expect...
True Higher-Order Modules, Separate Compilation, and Signature Calculi, 2009
Abstract
In the past three decades, the ML module system has been the focal point of tremendous interest in the research community. The combination of parameterized modules and fine-grained data abstraction control has proven to be quite powerful in practice. Mainstream languages have slowly adopted features inspired by the ML module system. However, programmers have run into various limitations and complexities in implementations of the ML module system. In the presence of common extensions such as true higher-order modules, true separate compilation becomes a problem. This conflict reflects a fundamental tension in module system design. Module systems should both propagate as much type information across module boundaries as is unconstrained by the programmer and be able to separately typecheck modules.
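"True higher-order modules" means functors that take functors as arguments, as in the OCaml sketch below (the `Twice`/`Inc` names are illustrative, not from the thesis). The separate-compilation tension arises because type-checking a use of `Twice` against only `F`'s signature hides type information that full propagation would reveal.

```ocaml
module type T = sig val x : int end

(* A higher-order functor: its first parameter F is itself a functor. *)
module Twice (F : functor (X : T) -> T) (A : T) = F (F (A))

module Inc (A : T) = struct let x = A.x + 1 end

(* Applying the higher-order functor. *)
module Two = Twice (Inc) (struct let x = 0 end)

let () = assert (Two.x = 2)
```

Checking `Twice` separately, one knows only that `F` returns *some* module of type `T`; a whole-program checker could additionally see that `F` is `Inc` and propagate more precise types. That is the tension the abstract names.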
Types in Programming Languages
Abstract
Studies of types have influenced, in a significant way, the design and definition of programming languages. This survey presents an introductory overview of concepts related to types and type systems for modern programming languages. We begin by identifying why types are useful, and go on to discuss the formalization of the syntax of programming languages, pointing out which properties type systems should satisfy, in particular with respect to denotational and operational semantic definitions. We provide an overview of simple type systems, polymorphic type systems, type inference, constrained polymorphism, subtyping, and abstract types.
Compiling Curried Functional Languages ..., 2004
Abstract
Recent trends in programming language implementation are moving more and more towards "managed" runtime environments. These offer many benefits, including static and dynamic type checking, security, profiling, bounds checking and garbage collection. The Common Language Infrastructure (CLI) is Microsoft's attempt to define a managed runtime environment. However, since it was designed with more mainstream languages in mind, including C# and C++, the CLI proves restrictive when compiling functional languages. More specifically, for compilers such as GHC, which compiles Haskell, the CLI provides little support for lazy evaluation, currying (partial applications) and static type checking. The CLI does not provide any way of representing a computation in both an evaluated and a non-evaluated form. It does not allow functions to directly manipulate the runtime stack, and it restricts static typing in various ways, including subsumption over function types. In this thesis, we describe a new compilation method that removes the need for runtime argument checks. Runtime argument checking is required to...
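Currying and the run-time argument checks it induces can be sketched as follows. The closure representation below is a toy model of the general idea, not the thesis's actual scheme; names are illustrative.

```ocaml
(* Currying: applying a 3-argument function to 2 arguments yields a
   closure awaiting the remaining argument. *)
let add3 a b c = a + b + c
let partial = add3 1 2
let () = assert (partial 4 = 7)

(* A toy uniform value representation with an explicit arity field,
   illustrating the run-time check ("is this call saturated?") that a
   compiler for a curried language must otherwise perform. *)
type value = Int of int | Closure of int * (value list -> value)

let apply f arg =
  match f with
  | Closure (1, body) -> body [arg]      (* saturated: run the body *)
  | Closure (n, body) when n > 1 ->      (* under-applied: wrap again *)
      Closure (n - 1, fun args -> body (arg :: args))
  | _ -> failwith "not applicable"

let add2 =
  Closure (2, function
    | [Int a; Int b] -> Int (a + b)
    | _ -> failwith "bad arguments")

let () =
  match apply (apply add2 (Int 3)) (Int 4) with
  | Int n -> assert (n = 7)
  | _ -> assert false
```

Every `apply` here inspects the arity tag at run time; a compilation method that resolves arities statically, as the abstract describes, removes that per-call check.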