Results 1–10 of 18
Mechanized metatheory for the masses: The POPLmark challenge
In Theorem Proving in Higher Order Logics: 18th International Conference, number 3603 in LNCS, 2005
Cited by 161 (14 self)
Abstract. How close are we to a world where every paper on programming languages is accompanied by an electronic appendix with machine-checked proofs? We propose an initial set of benchmarks for measuring progress in this area. Based on the metatheory of System F<:, a typed lambda-calculus with second-order polymorphism, subtyping, and records, these benchmarks embody many aspects of programming languages that are challenging to formalize: variable binding at both the term and type levels, syntactic forms with variable numbers of components (including binders), and proofs demanding complex induction principles. We hope that these benchmarks will help clarify the current state of the art, provide a basis for comparing competing technologies, and motivate further research.
Proving ML type soundness within Coq
In Proc. TPHOLs ’00, 2000
Cited by 7 (0 self)
Abstract. We verify within the Coq proof assistant that ML typing is sound with respect to the dynamic semantics. We prove this property in the framework of a big-step semantics and also in the framework of a reduction semantics. For that purpose, we use a syntax-directed version of the typing rules: we prove mechanically its equivalence with the initial type system provided by Damas and Milner. This work is complementary to the certification of the ML type inference algorithm done previously by the author and Valérie Ménissier-Morain.
Verified Bytecode Verification and Type-Certifying Compilation
Journal of Logic and Algebraic Programming, 2003
Cited by 6 (1 self)
This article presents a type-certifying compiler for a subset of Java and proves the type correctness of the bytecode it generates in the proof assistant Isabelle. The proof is performed by defining a type compiler that emits a type certificate and by showing a correspondence between bytecode and the certificate which entails well-typing. The basis for this work is an extensive formalization of the Java bytecode type system, which is first presented in an abstract, lattice-theoretic setting and then instantiated to Java types.
General Bindings and Alpha-Equivalence in Nominal Isabelle
Cited by 4 (1 self)
Abstract. Nominal Isabelle is a definitional extension of the Isabelle/HOL theorem prover. It provides a proving infrastructure for reasoning about programming language calculi involving named bound variables (as opposed to de Bruijn indices). In this paper we present an extension of Nominal Isabelle for dealing with general bindings, that is, term-constructors where multiple variables are bound at once. Such general bindings are ubiquitous in programming language research and only very poorly supported with single binders, such as lambda-abstractions. Our extension includes new definitions of α-equivalence and establishes automatically the reasoning infrastructure for α-equated terms. We also prove strong induction principles that have the usual variable convention already built in.
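As a rough illustration of the "general bindings" idea (this is not Nominal Isabelle's actual machinery; the term constructors `var`, `app`, `bind` and the function name are all made up for the sketch), α-equivalence for a binder that binds a whole list of names at once can be checked by simultaneously renaming all bound names to positional markers:

```python
def alpha_eq(a, b, m1=None, m2=None, lvl=0):
    """Check alpha-equivalence of terms where a single binder node
    binds a whole list of names at once.

    Terms: ("var", x) | ("app", f, g) | ("bind", [x1, ..., xn], body)
    m1/m2 map bound names on each side to positional markers."""
    m1 = {} if m1 is None else m1
    m2 = {} if m2 is None else m2
    if a[0] != b[0]:
        return False
    if a[0] == "var":
        # bound names must map to the same marker; free names must match
        return m1.get(a[1], ("free", a[1])) == m2.get(b[1], ("free", b[1]))
    if a[0] == "app":
        return (alpha_eq(a[1], b[1], m1, m2, lvl)
                and alpha_eq(a[2], b[2], m1, m2, lvl))
    # "bind": same number of bound names, then compare bodies under a
    # simultaneous renaming of every bound name to a positional marker
    xs, ys = a[1], b[1]
    if len(xs) != len(ys):
        return False
    m1 = {**m1, **{x: ("bound", lvl, i) for i, x in enumerate(xs)}}
    m2 = {**m2, **{y: ("bound", lvl, i) for i, y in enumerate(ys)}}
    return alpha_eq(a[2], b[2], m1, m2, lvl + 1)
```

For example, `bind [x, y]. x y` is α-equivalent to `bind [a, b]. a b` but not to `bind [a, b]. b a`, since the positional markers disagree.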
A certified implementation of ML with structural polymorphism
In Proceedings of the 8th Asian Conference on Programming Languages and Systems, APLAS ’10, 2010
Cited by 3 (1 self)
Abstract. The type system of Objective Caml has many unique features, which make ensuring the correctness of its implementation difficult. One of these features is structurally polymorphic types, such as polymorphic object and variant types, which have the extra specificity of allowing recursion. We implemented in Coq a certified interpreter for Core ML extended with structural polymorphism and recursion. Along with type soundness of evaluation, soundness and principality of type inference are also proved.
Type Reconstruction Algorithms: A Survey
2007
Cited by 2 (2 self)
Most type reconstruction algorithms can be broadly classified into two distinct categories: unification- and substitution-based, and constraint-based. This report is a survey of some of the popular type reconstruction algorithms in the above two categories, to promote better understanding of these algorithms. We have implemented the above algorithms for a language based on the pure lambda calculus extended with a polymorphic let construct, and tested them on some nontrivial examples.
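A minimal sketch of the constraint-based family, assuming a pure lambda calculus without let-polymorphism (the names `infer`, `gen`, `unify` and the tuple encodings are illustrative, not taken from any surveyed algorithm): equality constraints are first collected from the syntax, then solved by unification with an occurs check.

```python
import itertools

def infer(term):
    """Infer a monotype for a closed lambda term, or raise TypeError.

    Terms: ("var", x) | ("lam", x, body) | ("app", f, a)
    Types: ("tvar", n) | ("arrow", t1, t2)"""
    fresh = itertools.count()
    constraints = []

    def gen(t, env):
        # Phase 1: walk the term, producing a type and equality constraints.
        if t[0] == "var":
            return env[t[1]]
        if t[0] == "lam":
            a = ("tvar", next(fresh))
            return ("arrow", a, gen(t[2], {**env, t[1]: a}))
        f_ty, a_ty = gen(t[1], env), gen(t[2], env)
        r = ("tvar", next(fresh))
        constraints.append((f_ty, ("arrow", a_ty, r)))
        return r

    def walk(t, s):                      # chase substitution bindings
        while t[0] == "tvar" and t[1] in s:
            t = s[t[1]]
        return t

    def occurs(v, t, s):                 # occurs check prevents cyclic types
        t = walk(t, s)
        if t[0] == "tvar":
            return t[1] == v
        return occurs(v, t[1], s) or occurs(v, t[2], s)

    def unify(a, b, s):
        # Phase 2: solve one equality constraint, extending substitution s.
        a, b = walk(a, s), walk(b, s)
        if a == b:
            return s
        if a[0] == "tvar":
            if occurs(a[1], b, s):
                raise TypeError("occurs check failed")
            return {**s, a[1]: b}
        if b[0] == "tvar":
            return unify(b, a, s)
        if a[0] == b[0] == "arrow":
            s = unify(a[1], b[1], s)
            return unify(a[2], b[2], s)
        raise TypeError("type mismatch")

    def resolve(t, s):                   # apply the final substitution
        t = walk(t, s)
        if t[0] == "arrow":
            return ("arrow", resolve(t[1], s), resolve(t[2], s))
        return t

    result = gen(term, {})
    s = {}
    for a, b in constraints:
        s = unify(a, b, s)
    return resolve(result, s)
```

For instance, the identity `λx. x` comes back as `α → α`, while the self-application `λx. x x` fails the occurs check, which is exactly where the two phases (generation, then solving) earn their keep.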
Mechanical Proof of the Optimality of a Partial Evaluator
1999
Cited by 2 (0 self)
We present a proof of the optimality of lambda-mix, Gomard's partial evaluator for an untyped applied lambda calculus. We also report on a mechanically verified version of the proof, which was done using Isabelle/HOL, the typed higher-order logic instance of the generic proof system Isabelle.
Bracket Abstraction Preserves Typability: A Formal Proof of Diller's Algorithm C in PVS
Cited by 1 (1 self)
Abstract. Bracket abstraction is an algorithm that transforms lambda expressions into combinator terms. There are several versions of this algorithm depending on the actual set of combinators that is used. Most of them have been proven correct with respect to the operational semantics. In this paper we focus on typability. We present a fully machine-verified proof of the property that bracket abstraction preserves types; the types assigned to an expression before and after performing bracket abstraction are identical. To our knowledge, this is the first time that (1) such a proof has been given, and (2) the proof has been verified by a theorem prover. The theorem prover used in the development of the proof is PVS.
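To make the transformation concrete, here is a sketch of the classic unoptimized S/K/I bracket abstraction (note: this is not Diller's algorithm C from the paper, which uses a different combinator set; the constructors and function names are illustrative). Binders are eliminated bottom-up by the three textbook rules shown in the comments:

```python
def free(x, t):
    """Does variable x occur free in lambda/combinator term t?"""
    if isinstance(t, str):                    # a combinator: "S", "K", "I"
        return False
    if t[0] == "var":
        return t[1] == x
    if t[0] == "lam":
        return t[1] != x and free(x, t[2])
    return free(x, t[1]) or free(x, t[2])     # application

def to_ski(t):
    """Translate a lambda term into an applicative S/K/I term.

    Lambda terms: ("var", x) | ("lam", x, body) | ("app", f, a)"""
    if isinstance(t, str) or t[0] == "var":
        return t
    if t[0] == "app":
        return ("app", to_ski(t[1]), to_ski(t[2]))
    return abstract(t[1], to_ski(t[2]))       # eliminate the binder

def abstract(x, t):
    if t == ("var", x):
        return "I"                            # [x] x     = I
    if not free(x, t):
        return ("app", "K", t)                # [x] M     = K M   (x not free)
    return ("app", ("app", "S", abstract(x, t[1])),
                   abstract(x, t[2]))         # [x] (M N) = S ([x]M) ([x]N)
```

For example, `λx. x` translates to `I`, and `λx. λy. x` to `S (K K) I`. The type-preservation property proved in the paper says such a translation leaves the assignable types of the term unchanged; the sketch above shows only the translation, not that proof.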
Type inference in context
In Mathematically Structured Functional Programming (MSFP), 2010
Cited by 1 (0 self)
We consider the problems of first-order unification and type inference from a general perspective on problem-solving, namely that of information increase in the problem context. This leads to a powerful technique for implementing type inference algorithms. We describe a unification algorithm and illustrate the technique for the familiar Hindley-Milner type system, but it can be applied to more advanced type systems. The algorithms depend on well-founded contexts: type variable bindings and type-schemes for terms may depend only on earlier bindings. We ensure that unification yields a most general unifier, and that type inference yields principal types, by advancing definitions earlier in the context only when necessary.
Type-Directed Specification Refinement
Cited by 1 (0 self)
Specification languages serve a fundamentally different purpose than general-purpose programming languages, and their type systems reflect these needs. Specification type systems must record and track more information for us to reason about a system adequately, and this added expressiveness may lead to an undecidable typing analysis. System-level design begins with a high-level specification that is continually refined and expanded with implementation details, constraints, and typing information, down to a concrete specification. During this refinement process the system is underspecified, and many static analyses are not applicable until the system is fully specified. However, partial specifications contain valuable information that can inform the refinement process: we can locally inspect parts of the specification from a typing perspective to look for inferrable information or inconsistencies early on, to aid the refinement process. This work defines a typing analysis that gathers constraints and typing information to inform the specification refinement process. It explores localized techniques such as local type inference and tracking of values as a means of influencing the specification refinement process.