Results 1–10 of 10
A direct algorithm for type inference in the rank 2 fragment of the second-order λ-calculus
, 1993
Abstract

Cited by 78 (14 self)
We study the problem of type inference for a family of polymorphic type disciplines containing the power of Core-ML. This family comprises all levels of the stratification of the second-order lambda calculus by "rank" of types. We show that typability is an undecidable problem at every rank k >= 3 of this stratification. While it was already known that typability is decidable at rank 2, no direct and easy-to-implement algorithm was available. To design such an algorithm, we develop a new notion of reduction and show how to use it to reduce the problem of typability at rank 2 to the problem of acyclic semi-unification. A by-product of our analysis is a simple solution procedure for acyclic semi-unification.
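The abstract's reduction target, acyclic semi-unification, generalises ordinary first-order unification. As a point of reference only (the paper gives no code, and the representation here is our own), this is a minimal Robinson-style unification sketch in Python, with the occurs check omitted for brevity:

```python
# Terms: variables are strings; constructors are tuples whose first
# element is the constructor name, e.g. ('arrow', t1, t2), ('int',).
# Semi-unification generalises this by asking for substitutions that
# make one side an *instance* of the other, not strictly equal.

def walk(t, subst):
    # Follow variable bindings until a non-variable or an unbound variable.
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    # Return a substitution making t1 and t2 equal, or None on failure.
    # NOTE: no occurs check, for brevity; a real implementation needs one.
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):            # t1 is an unbound variable
        subst[t1] = t2
        return subst
    if isinstance(t2, str):            # t2 is an unbound variable
        subst[t2] = t1
        return subst
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):
            if unify(a, b, subst) is None:
                return None
        return subst
    return None                        # constructor clash
```

For example, unifying the types `a -> int` and `bool -> b` binds `a` to `bool` and `b` to `int`. The paper's contribution is a simple solution procedure for the harder, acyclic semi-unification problem, which this sketch does not attempt.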
Lambda Calculus: A Case for Inductive Definitions
, 2000
Abstract

Cited by 3 (0 self)
These lecture notes intend to introduce the subject of lambda calculus and types. A special focus is on the use of inductive definitions. The ultimate goal of the course is an advanced treatment of inductive types. Contents: 1 Overview; 2 Introduction to Inductive Definitions; 3 Lambda Calculus (3.1 Motivation; 3.2 Pure Untyped Lambda Calculus); 4 Confluence; 5 Weak and Strong Normalization; 6 Simple and Intersection Types (6.1 Simply-Typed Lambda Calculus; 6.2 Lambda Calculus with Intersection Types; 6.3 Strong Normalization of Typable Terms; 6.4 Typability of Strongly Normalizing Terms); 7 Parametric Polymorphism (7.1 Strong Normalization of Typable Terms; 7.1.1 Saturated Sets; ...)
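In the spirit of the notes' emphasis on inductive definitions, lambda terms can be given as an inductive datatype with beta reduction defined by structural recursion. The following Python sketch is our own illustration, not code from the notes; its substitution is naive, assuming bound names are distinct from the free names of the substituted term:

```python
from dataclasses import dataclass
from typing import Union

# The inductive definition of lambda terms: a term is a variable,
# an abstraction, or an application.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: "Term"

@dataclass(frozen=True)
class App:
    fn: "Term"
    arg: "Term"

Term = Union[Var, Lam, App]

def subst(t, x, s):
    # Naive substitution t[x := s]; assumes bound names never clash
    # with free names of s (no capture-avoiding renaming here).
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, Lam):
        return t if t.param == x else Lam(t.param, subst(t.body, x, s))
    return App(subst(t.fn, x, s), subst(t.arg, x, s))

def step(t):
    # One normal-order (leftmost-outermost) beta step, or None if normal.
    if isinstance(t, App):
        if isinstance(t.fn, Lam):
            return subst(t.fn.body, t.fn.param, t.arg)   # beta reduction
        fn = step(t.fn)
        if fn is not None:
            return App(fn, t.arg)
        arg = step(t.arg)
        return None if arg is None else App(t.fn, arg)
    if isinstance(t, Lam):
        body = step(t.body)
        return None if body is None else Lam(t.param, body)
    return None

def normalize(t, fuel=1000):
    # Iterate steps; weakly normalising terms reach their normal form.
    while fuel > 0:
        nxt = step(t)
        if nxt is None:
            return t
        t, fuel = nxt, fuel - 1
    raise RuntimeError("no normal form found within fuel limit")
```

With this, `normalize(App(App(Lam("x", Lam("y", Var("x"))), Var("a")), Var("b")))` reduces the K combinator applied to two arguments down to `Var("a")`.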
Pure type systems with corecursion on streams: From finite to infinitary normalisation
In ICFP
, 2012
Abstract

Cited by 3 (2 self)
In this paper, we use types to ensure that programs involving streams are well-behaved. We extend pure type systems with a type constructor for streams, a modal operator "next" and a fixed point operator for expressing corecursion. This extension is called Pure Type Systems with Corecursion (CoPTS). The typed lambda calculus for reactive programs defined by Krishnaswami and Benton can be obtained as a CoPTS. CoPTSs allow us to study a wide range of typed lambda calculi extended with corecursion using only one framework. In particular, we study this extension for the calculus of constructions, which is the underlying formal language of Coq. We use the machinery of infinitary rewriting and formalize the idea of well-behaved programs using the concept of infinitary normalization. We study the properties of infinitary weak and strong normalization for CoPTSs. The set of finite and infinite terms is defined as a metric completion. We shed new light on the meaning of the modal operator by connecting the modality with the depth used to define the metric. This connection is the key to the proofs of infinitary weak and strong normalization.
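A loose executable analogue of guarded corecursion on streams, in Python rather than in a CoPTS: generator laziness plays roughly the role the "next" modality plays in the type system, ensuring each element is produced before the definition recurses, so any finite prefix of the infinite stream is computable. The names below are our own illustration:

```python
from itertools import islice

def fib():
    # A productive corecursive stream definition: each loop iteration
    # emits one element before continuing, so the stream is well-behaved
    # in the sense of always being able to deliver its next element.
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

def take(n, stream):
    # Observe a finite prefix of an infinite stream.
    return list(islice(stream, n))
```

An unguarded definition (one that recurses before yielding) would loop forever on its first element; the CoPTS type discipline rules such definitions out statically, whereas here the failure would only show up at run time.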
LIGHT LOGICS AND THE CALL-BY-VALUE LAMBDA CALCULUS
, 809
Abstract

Cited by 2 (0 self)
Abstract. The so-called light logics [13, 1, 2] have been introduced as logical systems enjoying quite remarkable normalization properties. Designing a type assignment system for the pure lambda calculus from these logics, however, is problematic, as discussed in [6]. In this paper we show that shifting from the usual call-by-name to the call-by-value lambda calculus allows regaining strong connections with the underlying logic. This is done in the context of Elementary Affine Logic (EAL), designing a type system in natural deduction style that assigns EAL formulae to lambda terms.
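The call-by-name / call-by-value distinction the paper builds on concerns when arguments are evaluated. A small Python illustration (ours, not the paper's) simulates call-by-name in a call-by-value language by passing thunks, making the difference in evaluation work observable:

```python
# A counter lets us observe whether the argument was ever evaluated.
calls = {"n": 0}

def expensive():
    calls["n"] += 1
    return 42

def const_cbv(x):
    # Call-by-value: x was already evaluated at the call site,
    # even though this function never uses it.
    return 1

def const_cbn(x_thunk):
    # Call-by-name simulation: the argument arrives as a thunk
    # (a zero-argument function) and is never forced here.
    return 1
```

`const_cbv(expensive())` performs the expensive computation; `const_cbn(lambda: expensive())` does not. Under call-by-value every argument is evaluated exactly once before the call, which is the discipline the paper exploits to recover a clean correspondence with the logic.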
Theoretical Foundations for Practical ‘Totally Functional Programming’
, 2007
Abstract
Interpretation is an implicit part of today’s programming; it has great power but is overused and has significant costs. For example, interpreters are typically hard to understand and hard to reason about. The methodology of “Totally Functional Programming” (TFP) is a reasoned attempt to redress the problem of interpretation. It combines an awareness of the undesirability of interpretation with observations that definitions and a certain style of programming appear to offer alternatives to it. Application of TFP is expected to lead to a number of significant outcomes, theoretical as well as practical. Primary among these are novel programming languages to lessen or eliminate the use of interpretation in programming, leading to better-quality software. However, TFP contains a number of lacunae in its current formulation, which hinder development of these outcomes. Among others, formal semantics and type systems for TFP languages are yet to be discovered, the means to reduce interpretation in programs are yet to be determined, and a detailed explication is needed of interpretation, definition, and the differences between the two. Most important of all, however, is the need to develop a complete understanding of the nature of interpretation. In this work, suitable type systems for TFP languages are identified, and guidance is given regarding the construction of appropriate formal semantics. Techniques based around the ‘fold’ operator are identified and developed for modifying programs so as to reduce the amount of interpretation they contain. Interpretation as a means of language extension is also investigated.
Finally, the nature of interpretation is considered. Numerous hypotheses relating to it are considered in detail. Combining the results of those analyses with discoveries from elsewhere in this work leads to the proposal that interpretation is not, in fact, symbol-based computation, but something more fundamental: computation that varies with its input. We discuss in detail various implications of this characterisation, including its practical application. An often more useful property, ‘inherent interpretiveness’, is also motivated and discussed in depth. Overall, our inquiries act to give conceptual and theoretical foundations for practical TFP.
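The ‘fold’-based technique mentioned above can be illustrated by the standard move of representing data by its own fold (a Church-style encoding), so that no symbolic syntax tree is left to interpret at run time: "evaluation" becomes plain function application. This Python sketch is our own illustration of that general idea, not code from the thesis:

```python
# An arithmetic expression is represented as the function that folds
# itself: it takes one handler per constructor and applies them
# structurally. There is no AST and hence no interpreter loop.

def lit(n):
    return lambda on_lit, on_add: on_lit(n)

def add(l, r):
    return lambda on_lit, on_add: on_add(
        l(on_lit, on_add), r(on_lit, on_add))

def value(e):
    # "Evaluating" is just supplying the arithmetic handlers.
    return e(lambda n: n, lambda a, b: a + b)

def show(e):
    # A second consumer of the same encoding: pretty-printing.
    return e(str, lambda a, b: f"({a}+{b})")
```

Here `value(add(lit(1), add(lit(2), lit(3))))` computes 6 directly by function application; contrast this with the interpretive style, where an evaluator would pattern-match over a symbolic tree at run time.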
IMPROVING EFFICIENCY AND SAFETY OF PROGRAM GENERATION BY
Abstract
Program Generation (PG) is about writing programs that write programs. A program generator composes various pieces of code to construct a new program. When employed at runtime, PG can produce an efficient version of a program by specializing it according to inputs that become available at runtime. PG has been used in a wide range of applications to improve program efficiency and modularity as well as programmer productivity. There are two major problems associated with PG: (1) Program generation has its own cost, which may cause a performance loss even though PG is intended for performance gain. This is especially important for runtime program generation. (2) Compilability guarantees about the generated program are poor; the generator may produce a type-incorrect program. In this dissertation we focus on these two problems. We provide three techniques that address the first problem. First, we show that just-in-time generation can successfully reduce the cost of generation by avoiding unnecessary program generation. We do this by means of an experiment in the context of marshalling in Java, where we generate specialized object marshallers based on object types. Just-in-time generation improved the speedup from 1.22 to 3.16. Second, we apply source-level transformations to optimize
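The just-in-time generation idea described above, generating a specialized marshaller only for object shapes actually encountered at run time and caching it, can be sketched as follows. This Python toy (exec-based, with invented names) merely stands in for the dissertation's Java marshaller generator:

```python
# Cache of generated marshallers, keyed by object "shape" (field names).
_marshallers = {}

def marshaller_for(fields):
    # fields: a tuple of attribute names. The specialized function is
    # generated at most once per shape, so the cost of generation is
    # paid only when a new shape is actually seen at run time.
    fn = _marshallers.get(fields)
    if fn is None:
        body = " + ',' + ".join(f"repr(obj.{f})" for f in fields)
        src = f"def marshal(obj):\n    return {body}\n"
        ns = {}
        exec(src, ns)                  # the program-generation step
        fn = _marshallers[fields] = ns["marshal"]
    return fn
```

The generated `marshal` contains straight-line attribute accesses with no per-call reflection, which is the source of the speedup; the cache is what makes the generation "just in time" rather than up-front for every possible type.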
Type Inference with Runtime Logs (Work in Progress)
Abstract
Abstract. Gradual type systems offer the possibility of migrating programs in dynamically-typed languages to more statically-typed ones. There is little evidence yet that large, real-world dynamically-typed programs can be migrated with a large degree of automation. Unfortunately, since these systems typically lack principal types, fully automatic type inference is beyond reach. To combat this challenge, we propose using logs from runtime executions to assist inference. As a first step, in this paper we study how to use runtime logs to improve the efficiency of a type inference algorithm for a small language with first-order functions, records, parametric polymorphism, subtyping, and bounded quantification. Handling more expressive features in order to scale up to gradual type systems for dynamic languages is left to future work.
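The premise can be caricatured in a few lines: record the types observed during execution so that an inference algorithm has concrete evidence to work from. The following Python decorator is our own toy, not the paper's system:

```python
from collections import defaultdict
import functools

# The runtime log: function name -> set of (argument types, result type).
observed = defaultdict(set)

def logged(fn):
    # Decorator that records argument and result types on every call.
    @functools.wraps(fn)
    def wrapper(*args):
        result = fn(*args)
        observed[fn.__name__].add(
            (tuple(type(a).__name__ for a in args), type(result).__name__))
        return result
    return wrapper

@logged
def first(pair):
    return pair[0]
```

After running `first((1, "a"))` and `first(("b", 2))`, the log for `first` shows tuple arguments with both `int` and `str` results: evidence an inference engine could use to propose a polymorphic result type rather than searching blindly, which is the efficiency gain the paper studies.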
LIGHT LOGICS AND THE CALL-BY-VALUE LAMBDA CALCULUS
, 2007
Vol. 4 (4:5) 2008, pp. 1–28, www.lmcs-online.org
Compiling Curried Functional Languages . . .
, 2004
Abstract
Recent trends in programming language implementation are moving more and more towards “managed” runtime environments. These offer many benefits, including static and dynamic type checking, security, profiling, bounds checking and garbage collection. The Common Language Infrastructure (CLI) is Microsoft’s attempt to define a managed runtime environment. However, since it was designed with more mainstream languages in mind, including C# and C++, the CLI proves restrictive when compiling functional languages. More specifically, for compilers such as GHC, which compiles Haskell, the CLI provides little support for lazy evaluation, currying (partial applications) and static type checking. The CLI does not provide any way of representing a computation in both an evaluated and a non-evaluated form. It does not allow functions to directly manipulate the runtime stack, and it restricts static typing in various ways, including subsumption over function types. In this thesis, we describe a new compilation method that removes the need for runtime argument checks. Runtime argument checking is required to
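The run-time argument checking referred to above arises because a curried function may receive fewer or more arguments than its arity, so a generic apply must compare counts on every call. A hypothetical Python sketch (our own names, standing in for the eval/apply machinery of a real functional-language runtime):

```python
def apply_checked(fn, arity, *args):
    # fn expects exactly `arity` arguments; args may hold fewer or more.
    # These dynamic count checks on every application are precisely the
    # overhead the thesis's compilation method aims to remove.
    if len(args) < arity:
        # Partial application: return a closure awaiting the rest.
        return lambda *more: apply_checked(fn, arity, *args, *more)
    result = fn(*args[:arity])
    extra = args[arity:]
    # Over-application: the result must itself be a function;
    # feed it the surplus arguments.
    return result(*extra) if extra else result
```

For example, with `add3 = lambda a, b, c: a + b + c`, the call `apply_checked(add3, 3, 1)` returns a closure that later accepts `(2, 3)`; a compiler that can prove argument counts statically could emit a direct three-argument call instead of this check-and-dispatch.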