Results 1–10 of 25
Intersection types for explicit substitutions
, 2003

Cited by 22 (8 self)

We present a new system of intersection types for a composition-free calculus of explicit substitutions with a rule for garbage collection, and show that it characterizes those terms which are strongly normalizing. This system extends previous work on the natural generalization of the classical intersection type system, which characterized head normalization and weak normalization but was not complete for strong normalization. An important role is played by the notion of an available variable in a term, a generalization of the classical notion of a free variable.
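The garbage-collection rule mentioned in this abstract discards a pending substitution whose variable no longer occurs in the body. A minimal sketch of that idea, assuming a de Bruijn-indexed encoding with an explicit closure node (the representation and names are our own illustration, not the paper's calculus):

```python
# Terms: ("var", n) | ("lam", body) | ("app", f, a) | ("sub", body, u)
# ("sub", body, u) is the explicit (pending) substitution body[0 := u].

def shift(t, d, c=0):
    """Add d to every free variable index >= cutoff c."""
    if t[0] == "var":
        return ("var", t[1] + d) if t[1] >= c else t
    if t[0] == "lam":
        return ("lam", shift(t[1], d, c + 1))
    if t[0] == "app":
        return ("app", shift(t[1], d, c), shift(t[2], d, c))
    return ("sub", shift(t[1], d, c + 1), shift(t[2], d, c))

def uses(t, j):
    """Number of occurrences of free variable j in t."""
    if t[0] == "var":
        return 1 if t[1] == j else 0
    if t[0] == "lam":
        return uses(t[1], j + 1)
    if t[0] == "app":
        return uses(t[1], j) + uses(t[2], j)
    return uses(t[1], j + 1) + uses(t[2], j)

def gc(t):
    """Garbage-collection step: drop a substitution whose variable is unused."""
    if t[0] == "sub" and uses(t[1], 0) == 0:
        return shift(t[1], -1)  # safe: variable 0 does not occur in the body
    return t

# Beta on (\x. \y. y) u creates the closure (\y. y)[x := u];
# x is unused in \y. y, so gc discards the argument u.
u = ("var", 7)                      # some arbitrary argument
closure = ("sub", ("lam", ("var", 0)), u)
print(gc(closure))                  # ('lam', ('var', 0))
```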
Compositional Characterizations of λ-terms using Intersection Types (Extended Abstract)
, 2000

Cited by 19 (5 self)

We show how to characterize compositionally a number of evaluation properties of λ-terms using intersection type assignment systems. In particular, we focus on termination properties such as strong normalization, normalization, head normalization, and weak head normalization, and we also consider the persistent versions of these notions. By way of example, we consider a further evaluation property, unrelated to termination, namely reducibility to a closed term. Many of these characterization results are new, to our knowledge, or else they streamline, strengthen, or generalize earlier results in the literature. The completeness parts of the characterizations are proved uniformly for all the properties, using a set-theoretical semantics of intersection types over suitable kinds of stable sets. This technique generalizes Krivine's and Mitchell's methods for strong normalization to other evaluation properties.
Cut-Elimination in the Strict Intersection Type Assignment System is Strongly Normalising
 Notre Dame J. of Formal Logic
, 2004

Cited by 16 (12 self)

This paper defines reduction on derivations (cut-elimination) in the Strict Intersection Type Assignment System of [1] and shows a strong normalisation result for this reduction. Using this result, new proofs are given for the approximation theorem and for the characterisation of normalisability of terms using intersection types.
Beta-Reduction as Unification
, 1996

Cited by 13 (9 self)

In this report, we use a lean version of the usual system of intersection types. Hence, UP is also an appropriate unification problem with which to characterize typability of terms in this system. Quite apart from the new light it sheds on β-reduction, such an analysis turns out to have several other benefits.
Two behavioural lambda models
 Types for Proofs and Programs
, 2003
"... Abstract. We build a lambda model which characterizes completely (persistently) normalizing, (persistently) head normalizing, and (persistently) weak head normalizing terms. This is proved by using the finitary logical description of the model obtained by defining a suitable intersection type assign ..."
Abstract

Cited by 7 (5 self)
 Add to MetaCart
(Show Context)
Abstract. We build a lambda model which characterizes completely (persistently) normalizing, (persistently) head normalizing, and (persistently) weak head normalizing terms. This is proved by using the finitary logical description of the model obtained by defining a suitable intersection type assignment system.
Complexity of strongly normalising λ-terms via non-idempotent intersection types

Cited by 7 (1 self)

We present a typing system for the λ-calculus with non-idempotent intersection types. As in (some) systems with idempotent intersections, a λ-term is typable if and only if it is strongly normalising. Non-idempotency brings further information into typing trees, such as a bound on the length of the longest β-reduction sequence reducing a term to its normal form. We present these results in Klop's extension of the λ-calculus, where the bound read off from the typing tree of a term is refined into an exact measure of the longest reduction sequence. This complexity result is, for longest reduction sequences, the counterpart of de Carvalho's result for linear head-reduction sequences.
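The quantity the non-idempotent system bounds, the length of a term's longest β-reduction sequence, can be computed by brute-force search for small strongly normalising terms. A minimal de Bruijn-indexed sketch (illustrative only; the encoding and names are ours, not the paper's):

```python
# Terms: ("var", n) | ("lam", body) | ("app", f, a), with de Bruijn indices.

def shift(t, d, c=0):
    """Add d to every free variable index >= cutoff c."""
    if t[0] == "var":
        return ("var", t[1] + d) if t[1] >= c else t
    if t[0] == "lam":
        return ("lam", shift(t[1], d, c + 1))
    return ("app", shift(t[1], d, c), shift(t[2], d, c))

def subst(t, j, s):
    """Substitute s for free variable j in t, decrementing higher indices."""
    if t[0] == "var":
        if t[1] == j:
            return shift(s, j)
        return ("var", t[1] - 1) if t[1] > j else t
    if t[0] == "lam":
        return ("lam", subst(t[1], j + 1, s))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def reducts(t):
    """All one-step beta-reducts of t."""
    out = []
    if t[0] == "app":
        f, a = t[1], t[2]
        if f[0] == "lam":
            out.append(subst(f[1], 0, a))          # contract the head redex
        out += [("app", f2, a) for f2 in reducts(f)]
        out += [("app", f, a2) for a2 in reducts(a)]
    elif t[0] == "lam":
        out += [("lam", b) for b in reducts(t[1])]
    return out

def longest(t):
    """Length of the longest beta-reduction sequence from t.
    Terminates only for strongly normalising terms."""
    rs = reducts(t)
    return 0 if not rs else 1 + max(longest(r) for r in rs)

I = ("lam", ("var", 0))                       # identity \x. x
D = ("lam", ("app", ("var", 0), ("var", 0)))  # self-application \x. x x
print(longest(("app", D, I)))                 # 2: (\x. x x) I -> I I -> I
```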
A Lambda Model Characterizing Computational Behaviours of Terms
 Proceedings of the International Workshop on Rewriting in Proof and Computation
, 2001

Cited by 6 (4 self)

We build a lambda model which completely characterizes (persistently) normalizing, (persistently) head normalizing, and (persistently) weak head normalizing terms.
Reducibility: a ubiquitous method in lambda calculus with intersection types
, 2002

Cited by 5 (2 self)

A general reducibility method is developed for proving reduction properties of lambda terms typeable in intersection type systems with and without the universal type Ω. Sufficient conditions for its application are derived. The method leads to uniform proofs of confluence, standardization, and weak head normalization for terms typeable in the system with the type Ω. It extends Tait's reducibility method for the proof of strong normalization of the simply typed lambda calculus, Krivine's extension of that method to strong normalization of the intersection type system without Ω, and the Statman-Mitchell logical relation method for the proof of confluence of βη-reduction on simply typed lambda terms. As a consequence, the confluence and standardization of all (untyped) lambda terms is obtained.
A Linearization of the Lambda-Calculus and Consequences
, 2000

Cited by 5 (0 self)

We embed the standard λ-calculus, denoted Λ, into two larger λ-calculi, denoted Λ∧ and &Λ∧. The standard notion of β-reduction for Λ corresponds to two new notions of reduction, β∧ for Λ∧ and &β∧ for &Λ∧. A distinctive feature of our new calculus Λ∧ (resp. &Λ∧) is that, in every function application, an argument is used at most once (resp. exactly once) in the body of the function. We establish various connections between the three notions of reduction β, β∧, and &β∧. As a consequence, we provide an alternative framework in which to study the relationship between β-weak normalization and β-strong normalization, and give a new proof of the oft-mentioned equivalence between β-strong normalization of standard λ-terms and typability in a system of "intersection types".
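The "at most once" versus "exactly once" restriction on argument use that distinguishes the two calculi is easy to state as a predicate on terms. A minimal sketch with de Bruijn indices (our own illustrative encoding, not the calculi of the paper):

```python
# Terms: ("var", n) | ("lam", body) | ("app", f, a), with de Bruijn indices.

def uses(t, j):
    """Number of occurrences of free variable j in t."""
    if t[0] == "var":
        return 1 if t[1] == j else 0
    if t[0] == "lam":
        return uses(t[1], j + 1)
    return uses(t[1], j) + uses(t[2], j)

def well_used(t, ok):
    """Every lambda binds a variable whose occurrence count satisfies ok."""
    if t[0] == "var":
        return True
    if t[0] == "lam":
        return ok(uses(t[1], 0)) and well_used(t[1], ok)
    return well_used(t[1], ok) and well_used(t[2], ok)

affine = lambda t: well_used(t, lambda n: n <= 1)  # argument used at most once
linear = lambda t: well_used(t, lambda n: n == 1)  # argument used exactly once

K = ("lam", ("lam", ("var", 1)))              # \x. \y. x  (y is unused)
D = ("lam", ("app", ("var", 0), ("var", 0)))  # \x. x x    (x is used twice)
print(affine(K), linear(K), affine(D))        # True False False
```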
Characterising Strong Normalisation for Explicit Substitutions
 In Proceedings of Latin American Theoretical Informatics (LATIN'02), Cancún
, 2002

Cited by 2 (1 self)

We characterise the strongly normalising terms of a composition-free calculus of explicit substitutions (with or without garbage collection) by means of an intersection type assignment system. The main novelty is a cut rule which allows the context of the minor premise to be forgotten when the context of the main premise has no assumption for the cut variable.