Results 1–6 of 6
Two behavioural lambda models
Types for Proofs and Programs, 2003
Abstract

Cited by 5 (4 self)
Abstract. We build a lambda model which characterizes completely (persistently) normalizing, (persistently) head normalizing, and (persistently) weak head normalizing terms. This is proved by using the finitary logical description of the model obtained by defining a suitable intersection type assignment system.
Reducibility: a ubiquitous method in lambda calculus with intersection types
2002
Abstract

Cited by 3 (1 self)
A general reducibility method is developed for proving reduction properties of lambda terms typeable in intersection type systems with and without the universal type Ω. Sufficient conditions for its application are derived. This method leads to uniform proofs of confluence, standardization, and weak head normalization of terms typeable in the system with the type Ω. The method extends Tait's reducibility method for the proof of strong normalization of the simply typed lambda calculus, Krivine's extension of the same method for the strong normalization of the intersection type system without Ω, and Statman and Mitchell's logical relation method for the proof of confluence of βη-reduction on simply typed lambda terms. As a consequence, the confluence and the standardization of all (untyped) lambda terms are obtained.
A Fully Abstract Model for Higher-Order Mobile Ambients
Abstract
Abstract. The aim of this paper is to develop a filter model for a calculus with mobility and higher-order value passing. We define it for an extension of the Ambient Calculus in which processes can be passed as values. This model turns out to be fully abstract with respect to the notion of contextual equivalence where the observables are ambients at top level.
Behavioural Inverse Limit λ-Models
2003
Abstract
We construct two inverse limit λ-models which completely characterise sets of terms with similar computational behaviours: the sets of normalising, head normalising, and weak head normalising λ-terms, those corresponding to the persistent versions of these notions, and the sets of closable, closable normalising, and closable head normalising λ-terms. More precisely, for each of these sets of terms there is a corresponding element in at least one of the two models such that a term belongs to the set if and only if its interpretation (in a suitable environment) is greater than or equal to that element. We use the finitary logical description of the models, obtained by defining suitable intersection type assignment systems, to prove this.
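Several of the abstracts above classify λ-terms by whether they are (weak) head normalising. As a reading aid only, here is a minimal Python sketch of weak head reduction for untyped λ-terms; the term representation, the capture-naive substitution, and the fuel bound are illustrative choices of this listing, not taken from any of the cited papers.

```python
from dataclasses import dataclass

# Untyped lambda-terms: variables, abstractions, applications.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    body: object  # a term

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def subst(t, x, s):
    """Capture-naive substitution t[x := s] (adequate for the closed examples below)."""
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, Lam):
        return t if t.var == x else Lam(t.var, subst(t.body, x, s))
    return App(subst(t.fun, x, s), subst(t.arg, x, s))

def whnf(t, fuel=100):
    """Weak head reduction: contract only head redexes, never under a binder
    or inside arguments; `fuel` bounds the number of beta-steps so terms with
    no weak head normal form (e.g. Omega) still return."""
    args = []                              # pending arguments, innermost on top
    while fuel:
        if isinstance(t, App):             # unwind the application spine
            args.append(t.arg)
            t = t.fun
        elif isinstance(t, Lam) and args:  # head beta-redex: fire it
            t = subst(t.body, t.var, args.pop())
            fuel -= 1
        else:                              # head variable, or lambda with no args
            break
    for a in reversed(args):               # rebuild arguments left on the spine
        t = App(t, a)
    return t

identity = Lam("x", Var("x"))
print(whnf(App(identity, Var("y"))))       # Var(name='y')
```

A term like Ω = (λx.x x)(λx.x x) reduces to itself at every step, so `whnf` exhausts its fuel and returns it unchanged, illustrating a term with no weak head normal form.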