Results 1–10 of 14
Compiling polymorphism using intensional type analysis
 In Symposium on Principles of Programming Languages
, 1995
"... The views and conclusions contained in this document are those of the authors and should not be interpreted as ..."
Abstract

Cited by 260 (18 self)
 Add to MetaCart
The views and conclusions contained in this document are those of the authors and should not be interpreted as
Typed closure conversion
 In Proceedings of the 23rd Symposium on Principles of Programming Languages (POPL)
, 1996
"... The views and conclusions contained in this document are those of the authors and should not be interpreted as representing o cial policies, either expressed or implied, of the Advanced Research Projects Agency or the U.S. Government. Any opinions, ndings, and conclusions or recommendations expresse ..."
Abstract

Cited by 154 (22 self)
We study the typing properties of closure conversion for simply-typed and polymorphic λ-calculi. Unlike most accounts of closure conversion, which treat only the untyped λ-calculus, we translate well-typed source programs to well-typed target programs. This allows later compiler phases to take advantage of types for representation analysis and tag-free garbage collection, and it facilitates correctness proofs. Our account of closure conversion for the simply-typed language takes advantage of a simple model of objects by mapping closures to existentials. Closure conversion for the polymorphic language requires additional type machinery, namely translucency in the style of Harper and Lillibridge's module calculus, to express the type of a closure.
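The existential-package view of closures described in this abstract can be illustrated with an untyped sketch: a function with free variables becomes a pair of closed code plus an explicit environment (in the paper's typed account, the environment's type is hidden behind an existential). All names below are illustrative, not from the paper.

```python
# Direct style: the inner lambda captures the free variable `n`.
def make_adder_source(n):
    return lambda x: x + n

# Closure-converted style: the code is closed; its environment is explicit.
def adder_code(env, x):
    return x + env["n"]

def make_adder_converted(n):
    # A closure is a (code, environment) pair.
    return (adder_code, {"n": n})

def apply_closure(clos, arg):
    code, env = clos
    return code(env, arg)

clos = make_adder_converted(3)
print(apply_closure(clos, 4))  # → 7
```

The typed version would additionally pack the pair so that callers see only an abstract environment type, which is exactly where the existential (and, for polymorphism, translucency) comes in.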
Proof Normalization Modulo
, 1998
"... We consider a class of logical formalisms, in which firstorder logic is extended by identifying propositions modulo a given congruence. We particularly focus on the case where this congruence is induced by a confluent and terminating rewrite system over the propositions. This extension enhances the ..."
Abstract

Cited by 46 (17 self)
We consider a class of logical formalisms in which first-order logic is extended by identifying propositions modulo a given congruence. We particularly focus on the case where this congruence is induced by a confluent and terminating rewrite system over the propositions. This extension enhances the power of first-order logic, and various formalisms, including higher-order logic, can be described in this framework. We conjecture that proof normalization and logical consistency always hold over this class of formalisms, provided some minimal conditions on the rewrite system are fulfilled. We prove this conjecture for some subcases, including higher-order logic. Finally, we extend these results to classical sequent calculus.
New Notions of Reduction and Non-Semantic Proofs of β-Strong Normalization in Typed λ-Calculi
, 1994
"... Two new notions of reduction for terms of the λcalculus are introduced and the question of whether a λterm is βstrongly normalizing is reduced to the question of whether a λterm is merely normalizing under one of the new notions of reduction. This leads to a new way to provestrong normalization ..."
Abstract

Cited by 18 (2 self)
Two new notions of reduction for terms of the λ-calculus are introduced, and the question of whether a λ-term is β-strongly normalizing is reduced to the question of whether a λ-term is merely normalizing under one of the new notions of reduction. This leads to a new way to prove strong normalization for typed λ-calculi. Instead of the usual semantic proof style based on Girard's "candidats de réductibilité", termination can be proved using a decreasing metric over a well-founded ordering, in a style more common in the field of term rewriting. This new proof method is applied to the simply-typed λ-calculus and the system of intersection types.
Compilation based on a calculus for explicit type passing
 In Proceedings of Fuji International Workshop on Functional and Logic Programming
, 1996
"... We propose several calculi for explicit type passing that enable us to formalize compilation of polymorphic programming languages like MLasphasesoftypepreserving translations. In our calculi various manipulations for type parameters can be expressed without typing problemsthis is impossible in the ..."
Abstract

Cited by 8 (1 self)
We propose several calculi for explicit type passing that enable us to formalize compilation of polymorphic programming languages like ML as phases of type-preserving translations. In our calculi, various manipulations of type parameters can be expressed without typing problems; this is impossible in the polymorphic λ-calculi. Furthermore, we develop a translation from an explicitly typed source calculus similar to Core-XML to one of the proposed calculi that completely eliminates run-time construction of type parameters. We propose an intermediate language based on this calculus, and discuss an implementation of a compiler for Core Standard ML.
A realizability interpretation of Martin-Löf's type theory
"... In this paper we present a simple argument for normalization of the fragment of MartinLöf's type theory that contains the natural numbers, dependent function types and the first universe. We do this by building a realizability model of this theory which directly reflects that terms and types are ge ..."
Abstract

Cited by 8 (1 self)
In this paper we present a simple argument for normalization of the fragment of Martin-Löf's type theory that contains the natural numbers, dependent function types, and the first universe. We do this by building a realizability model of this theory which directly reflects that terms and types are generated simultaneously.
Typing untyped λ-terms, or Reducibility strikes again!
, 1995
"... It was observed by Curry that when (untyped) λterms can be assigned types, for example, simple types, these terms have nice properties (for example, they are strongly normalizing). Coppo, Dezani, and Veneri, introduced type systems using conjunctive types, and showed that several important classes ..."
Abstract

Cited by 8 (0 self)
It was observed by Curry that when (untyped) λ-terms can be assigned types, for example, simple types, these terms have nice properties (for example, they are strongly normalizing). Coppo, Dezani, and Venneri introduced type systems using conjunctive types, and showed that several important classes of (untyped) terms can be characterized according to the shape of the types that can be assigned to these terms. For example, the strongly normalizable terms, the normalizable terms, and the terms having head-normal forms can be characterized in the systems D and DΩ. The proofs use variants of the method of reducibility. In this paper, we present a uniform approach for proving several meta-theorems relating properties of terms and their typability in the systems D and DΩ. Our proofs use a new and more modular version of the reducibility method. As an application of our meta-theorems, we show how the characterizations obtained by Coppo, Dezani, Venneri, and Pottinger can be easily rederived. We also characterize the terms that have weak head-normal forms, which appears to be new. We conclude by stating a number of challenging open problems regarding possible generalizations of the realizability method.
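Strong normalization, the property these type systems characterize, can be checked by brute force for very small terms: enumerate every one-step β-reduct and require that all reduction paths terminate. A minimal sketch, assuming a tuple encoding of λ-terms with globally distinct variable names (no capture-avoiding renaming) and using a fuel bound as a crude stand-in for divergence:

```python
def subst(t, x, s):
    """Substitute term s for variable x in t (assumes no capture)."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        y, body = t[1], t[2]
        return t if y == x else ('lam', y, subst(body, x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def reducts(t):
    """All terms reachable from t in one beta step (any redex position)."""
    tag = t[0]
    out = []
    if tag == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                          # redex at the root
            out.append(subst(f[2], f[1], a))
        out += [('app', f2, a) for f2 in reducts(f)]
        out += [('app', f, a2) for a2 in reducts(a)]
    elif tag == 'lam':
        out += [('lam', t[1], b2) for b2 in reducts(t[2])]
    return out

def strongly_normalizing(t, fuel=100):
    """True iff every reduction path from t terminates (within `fuel` steps)."""
    if fuel == 0:
        return False                               # assume divergence
    return all(strongly_normalizing(r, fuel - 1) for r in reducts(t))

I = ('lam', 'x', ('var', 'x'))
term = ('app', I, ('app', I, ('var', 'z')))        # (λx.x)((λx.x) z)
omega = ('lam', 'w', ('app', ('var', 'w'), ('var', 'w')))
Omega = ('app', omega, omega)                      # (λw.ww)(λw.ww)

print(strongly_normalizing(term))    # → True
print(strongly_normalizing(Omega))   # → False
```

The intersection-type result replaces this exhaustive (and only semi-decidable) search with a typability criterion: a term is strongly normalizing exactly when it is typable in the relevant system.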
Proof Normalization for a First-Order Formulation of Higher-Order Logic
, 1998
"... We define a notion of cut and a proof reduction process for a class of theories, including all equational theories and a firstorder formulation of higherorder logic. Proofs normalize for all equational theories. We show that the proof of the normalization theorem for the usual formulation of highe ..."
Abstract

Cited by 4 (1 self)
We define a notion of cut and a proof reduction process for a class of theories, including all equational theories and a first-order formulation of higher-order logic. Proofs normalize for all equational theories. We show that the proof of the normalization theorem for the usual formulation of higher-order logic can be adapted to prove normalization for its first-order formulation. The "hard part" of the proof, which cannot be carried out in higher-order logic itself (the normalization of the system Fω), is left unchanged. Thus, from the point of view of proof normalization, defining higher-order logic as a different logic or as a first-order theory does not matter. This result also explains a relation between the normalization of propositions and the normalization of proofs in equational theories and in higher-order logic: normalizing propositions does not eliminate cuts, but it transforms them.
The Scott model of Linear Logic is the extensional collapse of its relational model
, 2011
"... We show that the extensional collapse of the relational model of linear logic is the model of primealgebraic complete lattices, a natural extension to linear logic of the well known Scott semantics of the lambdacalculus. ..."
Abstract

Cited by 3 (1 self)
We show that the extensional collapse of the relational model of linear logic is the model of prime-algebraic complete lattices, a natural extension to linear logic of the well-known Scott semantics of the λ-calculus.
Conservation and Uniform Normalization in Lambda Calculi With Erasing Reductions
, 2002
"... For a notion of reduction in a #calculus one can ask whether a term satises conservation and uniform normalization. Conservation means that singlestep reductions of the term preserve innite reduction paths from the term. Uniform normalization means that either the term will have no reduction path ..."
Abstract

Cited by 1 (0 self)
For a notion of reduction in a λ-calculus one can ask whether a term satisfies conservation and uniform normalization. Conservation means that single-step reductions of the term preserve infinite reduction paths from the term. Uniform normalization means that either the term has no reduction path leading to a normal form, or all reduction paths lead to a normal form.
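The failure of uniform normalization for β in the presence of erasing reductions has a classic witness: K Ω, where K = λx.y discards its argument and Ω = (λw.ww)(λw.ww) reduces only to itself. A minimal sketch (tuple-encoded terms, illustrative names, no capture-avoiding renaming):

```python
def subst(t, x, s):
    """Substitute term s for variable x in t (assumes no capture)."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        y, body = t[1], t[2]
        return t if y == x else ('lam', y, subst(body, x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def leftmost_step(t):
    """Contract the leftmost-outermost β-redex, or return None."""
    tag = t[0]
    if tag == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':
            return subst(f[2], f[1], a)
        f2 = leftmost_step(f)
        if f2 is not None:
            return ('app', f2, a)
        a2 = leftmost_step(a)
        return None if a2 is None else ('app', f, a2)
    if tag == 'lam':
        b2 = leftmost_step(t[2])
        return None if b2 is None else ('lam', t[1], b2)
    return None

K = ('lam', 'x', ('var', 'y'))
omega = ('lam', 'w', ('app', ('var', 'w'), ('var', 'w')))
Omega = ('app', omega, omega)
term = ('app', K, Omega)                   # K Ω

# Path 1: contract the erasing K-redex — normal form in one step.
print(leftmost_step(term))                 # → ('var', 'y')

# Path 2: reduce inside the argument instead — Ω steps to itself,
# so this strategy never terminates.
inner = subst(omega[2], omega[1], omega)   # one β-step inside Ω
print(('app', K, inner) == term)           # → True
```

So K Ω has a normalizing path and an infinite path at once: β is not uniformly normalizing, and contracting the K-redex does not preserve the infinite path, violating conservation for this erasing step.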