Results 1–10 of 16
Contracts Made Manifest
, 2009
Abstract

Cited by 24 (4 self)
Since Findler and Felleisen [2002] introduced higher-order contracts, many variants of their system have been proposed. Broadly, these fall into two groups: some follow Findler and Felleisen in using latent contracts, purely dynamic checks that are transparent to the type system; others use manifest contracts, where refinement types record the most recent check that has been applied. These two approaches are generally assumed to be equivalent: different ways of implementing the same idea, one retaining a simple type system, and the other providing more static information. Our goal is to formalize and clarify this folklore understanding. Our work extends that of Gronski and Flanagan [2007], who defined a latent calculus λC and a manifest calculus λH, gave a translation φ from λC to λH, and proved that if a λC term reduces to a constant, then so does its φ-image. We enrich their account with a translation ψ in the opposite direction and prove an analogous theorem for ψ. More importantly, we generalize the whole framework to dependent contracts, where the predicates in contracts can mention variables from the local context. This extension is both pragmatically crucial, supporting a much more interesting range of contracts, and theoretically challenging. We define dependent versions of λC (following Findler and Felleisen's semantics) and λH, establish type soundness (a challenging result in itself, for λH), and extend φ and ψ accordingly. Interestingly, the intuition that the two systems are equivalent appears to break down here: we show that ψ preserves behavior exactly, but that a natural extension of φ to the dependent case will sometimes yield terms that blame more, because of a subtle difference in the treatment of dependent function contracts when the codomain contract itself abuses the argument.
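As an illustration of the latent-contract idea this abstract describes, here is a minimal Python sketch of Findler and Felleisen-style higher-order contract checking with blame. The names (`flat`, `arrow`, `Blame`) and the two-party setup are illustrative, not the paper's λC; the key point shown is that a function contract swaps blame on the domain check, since the caller supplies the argument.

```python
class Blame(Exception):
    """Raised when a contract check fails, naming the guilty party."""
    def __init__(self, party):
        super().__init__(f"blame {party}")
        self.party = party

def flat(pred):
    """A flat contract: check a predicate, blaming the positive party on failure."""
    def check(value, pos, neg):
        if not pred(value):
            raise Blame(pos)
        return value
    return check

def arrow(dom, cod):
    """A function contract dom -> cod. The domain check swaps blame,
    because the *caller* is responsible for supplying a good argument;
    the codomain check blames the function itself."""
    def check(f, pos, neg):
        def wrapped(x):
            x = dom(x, neg, pos)          # blame swapped for the argument
            return cod(f(x), pos, neg)    # codomain blames the function
        return wrapped
    return check

pos_int = flat(lambda n: isinstance(n, int) and n > 0)

# Wrap an increment function in the contract pos_int -> pos_int.
inc = arrow(pos_int, pos_int)(lambda n: n + 1, "server", "client")
print(inc(3))  # 4: both checks pass
```

Calling `inc(0)` blames `"client"` (bad argument), while wrapping a function that returns a non-positive result would blame `"server"` on the codomain check.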
Metatheory à la carte
 In POPL ’13
, 2013
Abstract

Cited by 13 (3 self)
Formalizing metatheory, or proofs about programming languages, in a proof assistant has many well-known benefits. However, the considerable effort involved in mechanizing proofs has prevented it from becoming standard practice. This cost can be amortized by reusing as much of an existing formalization as possible when building a new language or extending an existing one. Unfortunately, reuse of components is typically ad hoc, with the language designer cutting and pasting existing definitions and proofs, and expending considerable effort to patch up the results. This paper presents a more structured approach to the reuse of formalizations of programming language semantics through the composition of modular definitions and proofs. The key contribution is the development of an approach to induction for extensible Church encodings which uses a novel reinterpretation of the universal property of folds. These encodings provide the foundation for a framework, formalized in Coq, which uses type classes to automate the composition of proofs from modular components. Several interesting language features, including binders and general recursion, illustrate the capabilities of our framework. We reuse these features to build fully mechanized definitions and proofs for a number of languages, including a version of miniML. Bounded induction enables proofs of properties for non-inductive semantic functions, and mediating type classes enable proof adaptation for more feature-rich languages.
Binders Unbound
 In Proc. of the 16th International Conference on Functional Programming (ICFP)
, 2011
Abstract

Cited by 11 (1 self)
Implementors of compilers, program refactorers, theorem provers, proof checkers, and other systems that manipulate syntax know that dealing with name binding is difficult to do well. Operations such as α-equivalence and capture-avoiding substitution seem simple, yet subtle bugs often go undetected. Furthermore, their implementations are tedious, requiring "boilerplate" code that must be updated whenever the object language definition changes. Many researchers have therefore sought to specify binding syntax declaratively, so that tools can correctly handle the details behind the scenes. This idea has been the inspiration for many new systems (such as Beluga, Delphin, FreshML, FreshOCaml, Cαml, FreshLib, and Ott), but there is still room for improvement in expressivity, simplicity, and convenience. In this paper, we present a new domain-specific language, UNBOUND, for specifying binding structure. Our language is particularly expressive: it supports multiple atom types, pattern binders, type annotations, recursive binders, and nested binding (necessary for telescopes, a feature found in dependently-typed languages). However, our specification language is also simple, consisting of just five basic combinators. We provide a formal semantics for this language derived from a locally nameless representation and prove that it satisfies a number of desirable properties. We also present an implementation of our binding specification language as a GHC Haskell library implementing an embedded domain-specific language (EDSL). By using Haskell type constructors to represent binding combinators, we implement the EDSL succinctly using datatype-generic programming. Our implementation supports a number of features necessary for practical programming, including flexibility in the treatment of user-defined types, best-effort name preservation (for error messages), and integration with Haskell's monad transformer library.
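To make concrete the kind of boilerplate such binding libraries eliminate, here is a hand-rolled capture-avoiding substitution for the untyped lambda calculus. This is a Python sketch (UNBOUND itself is a Haskell EDSL that generates this machinery automatically); the tuple encoding of terms and the freshening scheme are illustrative choices.

```python
import itertools

# Terms: ("var", x) | ("lam", x, body) | ("app", f, a)
_fresh = itertools.count()

def free_vars(t):
    """The set of free variable names in term t."""
    tag = t[0]
    if tag == "var":
        return {t[1]}
    if tag == "lam":
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, x, s):
    """t[x := s], renaming bound variables where needed to avoid capture."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    y, body = t[1], t[2]
    if y == x:
        return t                       # x is shadowed under this binder
    if y in free_vars(s):              # substitution would capture y: rename it
        z = f"{y}_{next(_fresh)}"
        body = subst(body, y, ("var", z))
        y = z
    return ("lam", y, subst(body, x, s))

# (\y. x y)[x := y] must rename the binder rather than capture y:
t = ("lam", "y", ("app", ("var", "x"), ("var", "y")))
r = subst(t, "x", ("var", "y"))
```

Even for this tiny language, the renaming step is easy to get subtly wrong, which is exactly the class of bug the abstract mentions.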
Dependent types and program equivalence
 In Proceedings of the 37th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages (POPL). ACM
, 2009
Abstract

Cited by 8 (3 self)
The definition of type equivalence is one of the most important design issues for any typed language. In dependently-typed languages, because terms appear in types, this definition must rely on a definition of term equivalence. In that case, decidability of type checking requires decidability for the term equivalence relation. Almost all dependently-typed languages require this relation to be decidable. Some, such as Coq, Epigram, or Agda, do so by employing analyses to force all programs to terminate. In contrast, others, such as DML, ATS, Ωmega, or Haskell, allow non-terminating computation, but do not allow those terms to appear in types. Instead, they identify a terminating index language and use singleton types to connect indices to computation. In both cases, decidable type checking comes at a cost, in terms of complexity and expressiveness. Yet the benefits to be gained by decidable type checking are modest. Termination analyses allow dependently-typed programs to verify total correctness properties. However, decidable type checking is not a prerequisite for type safety. Furthermore, decidability does not imply tractability. A decidable approximation of program equivalence may not be useful in practice. Therefore, we take a different approach: instead of a fixed notion of term equivalence, we parameterize our type system with an abstract relation that is not necessarily decidable. We then design a novel set of typing rules that require only weak properties of this abstract relation in the proofs of the preservation and progress lemmas. This design provides flexibility: we compare valid instantiations of term equivalence, which range from beta-equivalence, to contextual equivalence, to some exotic equivalences.
GMeta: A Generic Formal Metatheory Framework for First-Order Representations
Abstract

Cited by 2 (1 self)
This paper presents GMeta: a generic framework for first-order representations of variable binding that provides once and for all many of the so-called infrastructure lemmas and definitions required in mechanizations of formal metatheory. The key idea is to employ datatype-generic programming (DGP) and modular programming techniques to deal with the infrastructure overhead. Using a generic universe for representing a large family of object languages, we define datatype-generic libraries of infrastructure for first-order representations such as locally nameless or de Bruijn indices. Modules are used to provide templates: a convenient interface between the datatype-generic libraries and the end users of GMeta. We conducted case studies based on the POPLmark challenge, and showed that dealing with challenging binding constructs, like the ones found in System F<:, is possible with GMeta. All of GMeta's generic infrastructure is implemented in the Coq theorem prover. Furthermore, due to GMeta's modular design, the libraries can be easily used, extended, and customized by users.
A Generic Formal Metatheory Framework for First-Order Representations
Abstract

Cited by 1 (0 self)
This paper presents GMETA: a generic framework for first-order representations of variable binding that provides once and for all many of the so-called infrastructure lemmas and definitions required in mechanizations of formal metatheory. The framework employs datatype-generic programming and modular programming techniques to provide a universe representing a family of datatypes. This universe is generic in two different ways: it is language-generic in the sense that several object languages can be represented within the universe; and it is representation-generic, meaning that it is parameterizable over the particular choice of first-order representations for binders (for example, locally nameless or de Bruijn). Using this universe, several libraries providing generic infrastructure lemmas and definitions are implemented. These libraries are used in case studies based on the POPLmark challenge, showing that dealing with challenging binding constructs, like the ones found in System F<:, is possible with GMETA. All of GMETA's generic infrastructure is implemented in the Coq theorem prover, ensuring the soundness of that infrastructure. Furthermore, due to GMETA's modular design, the libraries can be easily used, extended, and customized by end users.
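The locally nameless style mentioned above can be illustrated with a small sketch: bound variables are de Bruijn indices, free variables are names, and the `open` and `close` operations below are typical of the infrastructure such frameworks provide. This is an illustrative Python encoding, not GMETA's Coq universe.

```python
# Terms: ("bvar", k) | ("fvar", x) | ("lam", body) | ("app", f, a)

def open_(t, x, k=0):
    """Replace bound index k in t with the free variable x
    (used when descending under a binder)."""
    tag = t[0]
    if tag == "bvar":
        return ("fvar", x) if t[1] == k else t
    if tag == "fvar":
        return t
    if tag == "lam":
        return ("lam", open_(t[1], x, k + 1))
    return ("app", open_(t[1], x, k), open_(t[2], x, k))

def close(t, x, k=0):
    """Abstract the free variable x in t back to bound index k
    (used when building a binder)."""
    tag = t[0]
    if tag == "fvar":
        return ("bvar", k) if t[1] == x else t
    if tag == "bvar":
        return t
    if tag == "lam":
        return ("lam", close(t[1], x, k + 1))
    return ("app", close(t[1], x, k), close(t[2], x, k))

# close after open is the identity on a binder body:
body = ("app", ("bvar", 0), ("fvar", "y"))   # body of \. 0 y
assert close(open_(body, "x"), "x") == body
```

Lemmas about how `open`, `close`, substitution, and free-variable sets interact are exactly the "infrastructure lemmas" that must otherwise be re-proved for every object language.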
CS Study Group FY07 Phase 2: Machine-Checked Metatheory for Security-Oriented Languages
Abstract
DTIC® has determined on [date illegible] that this Technical Document has the Distribution Statement checked below. The current distribution for this document can
Experience report: Mechanizing Core Fzip using the locally nameless approach (extended abstract)
Abstract
For several years, much effort has been put into the development of techniques that ease the mechanization of proofs involving binders. We report such a mechanized development of metatheory, the type soundness of Core Fzip [3], by a non-expert user of Coq [2], using the locally nameless representation of binders and cofinite quantification, with the help of the tools LNgen [1] and Ott [4]. 1. Fzip and its formal proof in a nutshell Core Fzip is a variant of System F that allows for more freedom in the structure of programs that make use of existential types, by considering existentials with an open scope. It is equipped with a small-step reduction semantics and a sound type system. The paper proof is neither very informative nor very difficult, and consists of the subject reduction and progress properties. The mechanized proof was carried out in about one month by the author, who is not an expert user of Coq. It makes use of LNgen [1] and the experimental locally nameless backend of Ott [4] to reduce the burden of the locally nameless encoding and its infrastructure lemmas. The only complex automation we used is that provided by the Metatheory library from UPenn, which was of great help. Thus, a clean-up of the development, as well as clever tactics, could certainly reduce the size of the whole proof. Much time is spent in proof search, so that Coq compiles it in about 45 minutes on a recent computer, while type checking takes just a few minutes.
Union, Intersection, and Refinement Types and Reasoning About Type Disjointness for Secure Protocol Implementations
, 2012
Abstract
We present a new type system for verifying the security of cryptographic protocol implementations. The type system combines prior work on refinement types with union, intersection, and polymorphic types, and with the novel ability to reason statically about the disjointness of types. The increased expressivity enables the analysis of important protocol classes that were previously out of scope for the type-based analyses of protocol implementations. In particular, our types can statically characterize: (i) more usages of asymmetric cryptography, such as signatures of private data and encryptions of authenticated data; (ii) authenticity and integrity properties achieved by showing knowledge of secret data; (iii) applications based on zero-knowledge proofs. The type system comes with a mechanized proof of correctness and an efficient typechecker.