Results 1–10 of 16
Step-Indexed Syntactic Logical Relations for Recursive and Quantified Types
Lecture Notes in Computer Science, 2006
Cited by 73 (11 self)
We present a sound and complete proof technique, based on syntactic logical relations, for showing contextual equivalence of expressions in a λ-calculus with recursive types and impredicative universal and existential types. Our development builds on the step-indexed PER model of recursive types presented by Appel and McAllester. We have discovered that a direct proof of transitivity of that model does not go through, leaving the "PER" status of the model in question. We show how to extend the Appel-McAllester model to obtain a logical relation that we can prove is transitive, as well as sound and complete with respect to contextual equivalence. We then augment this model to support relational reasoning in the presence of quantified types.
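The step-indexed style this abstract builds on can be sketched as follows. This is a simplified unary version for illustration only; the notation is adapted and the paper itself works with a binary (relational) model:

```latex
\begin{aligned}
V_k[\![\tau_1 \to \tau_2]\!] &= \{\, \lambda x.\,e \mid \forall j < k.\
  \forall v \in V_j[\![\tau_1]\!].\ e[v/x] \in E_j[\![\tau_2]\!] \,\}\\
V_k[\![\mu\alpha.\tau]\!] &= \{\, v \mid \forall j < k.\
  v \in V_j[\![\tau[\mu\alpha.\tau/\alpha]]\!] \,\}\\
E_k[\![\tau]\!] &= \{\, e \mid \forall j < k.\
  (e \mapsto^j e' \ \wedge\ e' \text{ irreducible}) \Rightarrow
  e' \in V_{k-j}[\![\tau]\!] \,\}
\end{aligned}
```

The point of the index k is that the clause for μ-types refers to the unfolded type τ[μα.τ/α], which is syntactically larger; the strictly decreasing step index is what makes the definition well-founded anyway.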
Finitary PCF is not decidable
Theoretical Computer Science, 1996
Cited by 25 (0 self)
The question of the decidability of the observational ordering of finitary PCF was raised [5] to give mathematical content to the full abstraction problem for PCF [9, 14]. We show that the ordering is in fact undecidable. This result places limits on how explicit a representation of the fully abstract model can be. It also gives a slight strengthening of the author’s earlier result on typed λ-definability [6].
Semantic foundations for typed assembly languages
ACM Transactions on Programming Languages and Systems (TOPLAS), 2008
Cited by 7 (2 self)
Typed Assembly Languages (TALs) are used to validate the safety of machine-language programs. The Foundational Proof-Carrying Code project seeks to verify the soundness of TALs using the smallest possible set of axioms—the axioms of a suitably expressive logic plus a specification of machine semantics. This paper proposes general semantic foundations that permit modular proofs of the soundness of TALs. These semantic foundations include Typed Machine Language (TML), a type theory for specifying properties of low-level data with powerful and orthogonal type constructors, and Lc, a compositional logic for specifying properties of machine instructions with simplified reasoning about unstructured control flow. Both of these components, whose semantics we specify using higher-order logic, are useful for proving the soundness of TALs. We demonstrate this by using TML and Lc to verify the soundness of a low-level, typed assembly language, LTAL, which is the target of our core-ML-to-Sparc compiler. To prove the soundness of the TML type system we have successfully applied a new approach, that of step-indexed logical relations. This approach provides the first semantic model for a type system with updatable references to values of impredicative quantified types. Both impredicative polymorphism and mutable references are essential when representing function closures in compilers with typed closure conversion, or when compiling objects to simpler typed primitives.
On the Correspondence Between Proofs and λ-Terms
Cahiers du Centre de Logique, 1995
Cited by 6 (3 self)
Abstract. The correspondence between natural deduction proofs and λ-terms is presented and discussed. A variant of the reducibility method is presented, and a general theorem for establishing properties of typed (first-order) λ-terms is proved. As a corollary, we obtain a simple proof of the Church-Rosser property, and of the strong normalization property, for the typed λ-calculus associated with the system of (intuitionistic) first-order natural deduction, including all the connectives.
A proof-theoretic account of logical relations
2006
Cited by 5 (0 self)
Proofs by logical relations (a.k.a. Tait’s method) are useful for demonstrating foundational properties of typed λ-calculi. Unfortunately, because the definition of a logical relation typically relies on set-theoretic machinery, these proofs are notoriously difficult to formalize in machine-checkable detail. Our notion of logical relation is defined in terms of the provability of a class of first-order formulas. We prove canonical forms for the simply-typed λ-calculus using our notion of logical relations, and provide a generic recipe for proving a class of theorems in a similar manner. The proof of canonical forms has been formalized in the Twelf proof assistant.
Two behavioural lambda models
Types for Proofs and Programs, 2003
Cited by 5 (4 self)
Abstract. We build a lambda model which characterizes completely (persistently) normalizing, (persistently) head normalizing, and (persistently) weak head normalizing terms. This is proved by using the finitary logical description of the model obtained by defining a suitable intersection type assignment system.
Reducibility: a ubiquitous method in lambda calculus with intersection types
2002
Cited by 3 (1 self)
A general reducibility method is developed for proving reduction properties of lambda terms typeable in intersection type systems with and without the universal type Ω. Sufficient conditions for its application are derived. This method leads to uniform proofs of confluence, standardization, and weak head normalization of terms typeable in the system with the type Ω. The method extends Tait's reducibility method for the proof of strong normalization of the simply typed lambda calculus, Krivine's extension of the same method for the strong normalization of the intersection type system without Ω, and the Statman-Mitchell logical relation method for the proof of confluence of βη-reduction on the simply typed lambda terms. As a consequence, the confluence and the standardization of all (untyped) lambda terms is obtained.
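Tait's reducibility method, which this abstract generalizes, can be sketched for the simply typed case as follows (the standard textbook formulation, not the paper's intersection-type version):

```latex
\begin{aligned}
\mathrm{Red}_{\iota} &= \{\, e \mid e \text{ is strongly normalizing} \,\}\\
\mathrm{Red}_{\tau \to \sigma} &= \{\, e \mid \forall e' \in \mathrm{Red}_{\tau}.\ e\,e' \in \mathrm{Red}_{\sigma} \,\}
\end{aligned}
```

One then proves that every $\mathrm{Red}_{\tau}$ contains only strongly normalizing terms, and that every well-typed term lies in the reducibility set of its type; strong normalization follows immediately.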
A verified framework for higher-order uncurrying optimizations
Higher-Order and Symbolic Computation
Abstract Interpretation and Attribute Grammars
1992
Cited by 1 (1 self)
The objective of this thesis is to explore the connections between abstract interpretation and attribute grammars as frameworks in program analysis. Abstract interpretation is a semantics-based program analysis method. A large class of data flow analysis problems can be expressed as non-standard semantics where the "meaning" contains information about the runtime behaviour of programs. In an abstract interpretation the analysis is proved correct by relating it to the usual semantics for the language. Attribute grammars provide a method and notation to specify code generation and program analysis directly from the syntax of the programming language. They are especially used for describing compilation of programming languages, and very efficient evaluators have been developed for subclasses of attribute grammars. By relating abstract interpretation and attribute grammars we obtain a closer connection between the specification and implementation of abstract interpretations, which at the same time facilitates the correctness proofs of interpretations. Implementation and specification of abstract interpretations using circular attribute grammars is realised with an evaluator system for a class of domain-theoretic attribute grammars. In this system the circularity of attribute grammars is resolved by fixpoint iteration. The use of finite lattices in abstract interpretations requires automatic generation of specialised fixpoint iterators. This is done using a technique called lazy fixpoint iteration, which is presented in the thesis. Methods from abstract interpretation can also be used in correctness proofs of attribute grammars. This proof technique introduces a new class of attribute grammars based on domain theory. This method is illustrated with examples.
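The core mechanism the abstract describes, resolving circular attribute dependencies by iterating from bottom over a finite lattice until the equations stabilize, can be sketched minimally as follows. The graph and the reachability analysis here are hypothetical examples chosen for illustration, not taken from the thesis:

```python
def fixpoint(f, bottom):
    """Iterate f from `bottom` until a fixed point is reached.

    Terminates whenever f is monotone over a lattice of finite height
    (e.g. a powerset lattice over a finite set)."""
    x = bottom
    while (nxt := f(x)) != x:
        x = nxt
    return x

# Example: reachability as a least fixed point over the (finite)
# powerset lattice of graph nodes.
edges = {"a": {"b"}, "b": {"c"}, "c": set(), "d": set()}

def step(reached):
    # One round of the dataflow equation: keep what we have and add
    # the successors of every node reached so far.
    return frozenset(reached) | frozenset(m for n in reached for m in edges[n])

reachable_from_a = fixpoint(step, frozenset({"a"}))
print(sorted(reachable_from_a))  # prints ['a', 'b', 'c']
```

The same loop is the essence of a circular attribute-grammar evaluator: each attribute starts at the lattice's bottom element and the semantic equations are re-applied until no attribute changes.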
Integrating Functional Programming Into C++: Implementation and Verification
Cited by 1 (0 self)
Abstract. We describe a parser-translator program that translates typed λ-terms to C++ classes so as to integrate functional programming. We prove the correctness of the translation with respect to a denotational semantics using Kripke-style logical relations.