Results 1–10 of 12
Contracts Made Manifest
, 2009
Abstract

Cited by 16 (3 self)
Since Findler and Felleisen [2002] introduced higher-order contracts, many variants of their system have been proposed. Broadly, these fall into two groups: some follow Findler and Felleisen in using latent contracts, purely dynamic checks that are transparent to the type system; others use manifest contracts, where refinement types record the most recent check that has been applied. These two approaches are generally assumed to be equivalent—different ways of implementing the same idea, one retaining a simple type system, and the other providing more static information. Our goal is to formalize and clarify this folklore understanding. Our work extends that of Gronski and Flanagan [2007], who defined a latent calculus λC and a manifest calculus λH, gave a translation φ from λC to λH, and proved that if a λC term reduces to a constant, then so does its φ-image. We enrich their account with a translation ψ in the opposite direction and prove an analogous theorem for ψ. More importantly, we generalize the whole framework to dependent contracts, where the predicates in contracts can mention variables from the local context. This extension is both pragmatically crucial, supporting a much more interesting range of contracts, and theoretically challenging. We define dependent versions of λC (following Findler and Felleisen's semantics) and λH, establish type soundness—a challenging result in itself, for λH—and extend φ and ψ accordingly. Interestingly, the intuition that the two systems are equivalent appears to break down here: we show that ψ preserves behavior exactly, but that a natural extension of φ to the dependent case will sometimes yield terms that blame more because of a subtle difference in the treatment of dependent function contracts when the codomain contract itself abuses the argument. Note to reviewers: This is a preliminary version. It is technically complete, but not yet fully polished. Please do not distribute.
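The latent (Findler–Felleisen style) discipline the abstract contrasts with manifest contracts can be sketched as a pair of wrappers: a flat contract checks a predicate, and a function contract checks the argument and result, swapping blame for the domain check. This is only an illustrative sketch with a string blame label, not the paper's λC calculus; the names `flat` and `func` are hypothetical.

```python
# Minimal sketch of latent higher-order contracts (assumed encoding,
# not the paper's λC): a contract is a checker that returns the value
# or raises blame against a labelled party.

class Blame(Exception):
    pass

def flat(pred):
    """Flat (predicate) contract: blame the labelled party on failure."""
    def wrap(label, value):
        if pred(value):
            return value
        raise Blame(label)
    return wrap

def func(dom, cod):
    """Function contract: the domain check blames the negative party
    (the caller), the codomain check blames the positive party."""
    def wrap(label, f):
        def checked(x):
            return cod(label, f(dom("not " + label, x)))
        return checked
    return wrap

# Wrap a function in the contract (> 0) -> even.
inc = func(flat(lambda n: n > 0), flat(lambda n: n % 2 == 0))("server", lambda n: n + 1)
print(inc(3))  # both checks pass: 3 > 0 and 4 is even -> prints 4
```

Note that the checks are entirely dynamic and invisible to any type system, which is exactly what makes the contract "latent" in the paper's terminology.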
Metatheory à la carte
 In POPL ’13
, 2013
Abstract

Cited by 7 (1 self)
Formalizing metatheory, or proofs about programming languages, in a proof assistant has many well-known benefits. However, the considerable effort involved in mechanizing proofs has prevented it from becoming standard practice. This cost can be amortized by reusing as much of an existing formalization as possible when building a new language or extending an existing one. Unfortunately, reuse of components is typically ad hoc, with the language designer cutting and pasting existing definitions and proofs, and expending considerable effort to patch up the results. This paper presents a more structured approach to the reuse of formalizations of programming language semantics through the composition of modular definitions and proofs. The key contribution is the development of an approach to induction for extensible Church encodings which uses a novel reinterpretation of the universal property of folds. These encodings provide the foundation for a framework, formalized in Coq, which uses type classes to automate the composition of proofs from modular components. Several interesting language features, including binders and general recursion, illustrate the capabilities of our framework. We reuse these features to build fully mechanized definitions and proofs for a number of languages, including a version of mini-ML. Bounded induction enables proofs of properties for non-inductive semantic functions, and mediating type classes enable proof adaptation for more feature-rich languages.
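The modular-composition idea behind the abstract can be approximated in a few lines: each language feature contributes its own constructor cases and its own fragment of an algebra, and a generic fold composes them. This is a loose Python analogue of the "data types à la carte" style the paper builds on, not its Coq Church encodings; the tagged-tuple term representation here is an assumption for illustration.

```python
# Rough sketch of composing a semantics from independent feature fragments.
# Terms are tagged tuples; an "algebra" maps each tag to a handler.

def fold(algebra, term):
    """Generic fold: recurse into tuple children, then dispatch on the tag."""
    tag, *children = term
    return algebra[tag](*[fold(algebra, c) if isinstance(c, tuple) else c
                          for c in children])

# Two independent features, each with its own evaluation cases.
lit_alg = {"lit": lambda n: n}
add_alg = {"add": lambda x, y: x + y}

# Features compose by merging their algebras.
eval_alg = {**lit_alg, **add_alg}

expr = ("add", ("lit", 1), ("lit", 2))
print(fold(eval_alg, expr))  # 3
```

The paper's contribution is doing the analogous composition for *proofs* (by induction over such extensible definitions), which dictionaries of handlers alone do not give you.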
Dependent types and program equivalence
 In Proceedings of the 37th ACM SIGACTSIGPLAN Symposium on Principles of Programming Languages (POPL). ACM
, 2009
Abstract

Cited by 4 (1 self)
The definition of type equivalence is one of the most important design issues for any typed language. In dependently-typed languages, because terms appear in types, this definition must rely on a definition of term equivalence. In that case, decidability of type checking requires decidability for the term equivalence relation. Almost all dependently-typed languages require this relation to be decidable. Some, such as Coq, Epigram or Agda, do so by employing analyses to force all programs to terminate. Conversely, others, such as DML, ATS, Ωmega, or Haskell, allow non-terminating computation, but do not allow those terms to appear in types. Instead, they identify a terminating index language and use singleton types to connect indices to computation. In both cases, decidable type checking comes at a cost, in terms of complexity and expressiveness. Meanwhile, the benefits to be gained by decidable type checking are modest. Termination analyses allow dependently-typed programs to verify total correctness properties. However, decidable type checking is not a prerequisite for type safety. Furthermore, decidability does not imply tractability. A decidable approximation of program equivalence may not be useful in practice. Therefore, we take a different approach: instead of a fixed notion for term equivalence, we parameterize our type system with an abstract relation that is not necessarily decidable. We then design a novel set of typing rules that require only weak properties of this abstract relation in the proof of the preservation and progress lemmas. This design provides flexibility: we compare valid instantiations of term equivalence which range from beta-equivalence, to contextual equivalence, to some exotic equivalences.
Binders Unbound
 In Proc. of the 16th International Conference on Functional Programming (ICFP
, 2011
Abstract

Cited by 3 (0 self)
Implementors of compilers, program refactorers, theorem provers, proof checkers, and other systems that manipulate syntax know that dealing with name binding is difficult to do well. Operations such as α-equivalence and capture-avoiding substitution seem simple, yet subtle bugs often go undetected. Furthermore, their implementations are tedious, requiring “boilerplate” code that must be updated whenever the object language definition changes. Many researchers have therefore sought to specify binding syntax declaratively, so that tools can correctly handle the details behind the scenes. This idea has been the inspiration for many new systems (such as Beluga, Delphin, FreshML, Fresh OCaml, Cαml, FreshLib, and Ott) but there is still room for improvement in expressivity, simplicity and convenience. In this paper, we present a new domain-specific language, UNBOUND, for specifying binding structure. Our language is particularly expressive—it supports multiple atom types, pattern binders, type annotations, recursive binders, and nested binding (necessary for telescopes, a feature found in dependently-typed languages). However, our specification language is also simple, consisting of just five basic combinators. We provide a formal semantics for this language derived from a locally nameless representation and prove that it satisfies a number of desirable properties. We also present an implementation of our binding specification language as a GHC Haskell library implementing an embedded domain-specific language (EDSL). By using Haskell type constructors to represent binding combinators, we implement the EDSL succinctly using datatype-generic programming. Our implementation supports a number of features necessary for practical programming, including flexibility in the treatment of user-defined types, best-effort name preservation (for error messages), and integration with Haskell’s monad transformer library.
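To see why operations like α-equivalence are worth deriving automatically, here is the kind of boilerplate one would otherwise hand-roll: comparing named λ-terms by first converting them to de Bruijn indices. This is a hand-written sketch of one such operation, not Unbound's API or its locally nameless representation; the tagged-tuple term format is assumed for illustration.

```python
# α-equivalence via conversion to de Bruijn indices.
# Terms: ("var", name) | ("lam", name, body) | ("app", f, a)

def to_db(term, env=()):
    """Convert a named term to nameless form; env lists bound names,
    innermost first, so an index is the distance to the binder."""
    tag = term[0]
    if tag == "var":
        name = term[1]
        return ("ix", env.index(name)) if name in env else ("free", name)
    if tag == "lam":
        _, name, body = term
        return ("lam", to_db(body, (name,) + env))
    _, f, a = term
    return ("app", to_db(f, env), to_db(a, env))

def aeq(s, t):
    """Two terms are α-equivalent iff their nameless forms coincide."""
    return to_db(s) == to_db(t)

print(aeq(("lam", "x", ("var", "x")), ("lam", "y", ("var", "y"))))  # True
```

Every object language needs a variant of this (plus substitution, free-variable computation, and the lemmas relating them), which is precisely the repetition a binding DSL like UNBOUND eliminates.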
GMeta: A Generic Formal Metatheory Framework for FirstOrder Representations
Abstract

Cited by 1 (1 self)
This paper presents GMeta: a generic framework for first-order representations of variable binding that provides once and for all many of the so-called infrastructure lemmas and definitions required in mechanizations of formal metatheory. The key idea is to employ datatype-generic programming (DGP) and modular programming techniques to deal with the infrastructure overhead. Using a generic universe for representing a large family of object languages, we define datatype-generic libraries of infrastructure for first-order representations such as locally nameless or de Bruijn indices. Modules are used to provide templates: a convenient interface between the datatype-generic libraries and the end users of GMeta. We conducted case studies based on the POPLmark challenge, and showed that dealing with challenging binding constructs, like the ones found in System F<:, is possible with GMeta. All of GMeta’s generic infrastructure is implemented in the Coq theorem prover. Furthermore, due to GMeta’s modular design, the libraries can be easily used, extended, and customized by users.
A Generic Formal Metatheory Framework for FirstOrder Representations
Abstract

Cited by 1 (0 self)
This paper presents GMETA: a generic framework for first-order representations of variable binding that provides once and for all many of the so-called infrastructure lemmas and definitions required in mechanizations of formal metatheory. The framework employs datatype-generic programming and modular programming techniques to provide a universe representing a family of datatypes. This universe is generic in two different ways: it is language-generic in the sense that several object languages can be represented within the universe; and it is representation-generic, meaning that it is parameterizable over the particular choice of first-order representations for binders (for example, locally nameless or de Bruijn). Using this universe, several libraries providing generic infrastructure lemmas and definitions are implemented. These libraries are used in case studies based on the POPLmark challenge, showing that dealing with challenging binding constructs, like the ones found in System F<:, is possible with GMETA. All of GMETA’s generic infrastructure is implemented in the Coq theorem prover, ensuring the soundness of that infrastructure. Furthermore, due to GMETA’s modular design, the libraries can be easily used, extended and customized by end users.
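A representative example of the "infrastructure" both GMeta abstracts mention is de Bruijn index shifting, which every de Bruijn mechanization must define and prove lemmas about. The sketch below is a standard textbook definition written in Python for concreteness, not GMETA's Coq code; GMETA's point is that such definitions and their lemmas are derived once, generically, instead of being restated per object language.

```python
# Standard de Bruijn shifting: add d to every index >= the cutoff c.
# Terms: ("var", k) | ("lam", body) | ("app", f, a)

def shift(d, c, t):
    """Shift free indices of t by d, treating indices below c as bound."""
    tag = t[0]
    if tag == "var":
        k = t[1]
        return ("var", k + d if k >= c else k)
    if tag == "lam":
        # Going under a binder raises the cutoff.
        return ("lam", shift(d, c + 1, t[1]))
    return ("app", shift(d, c, t[1]), shift(d, c, t[2]))

# \x. x y  (y is the free variable with index 1 under the binder)
print(shift(1, 0, ("lam", ("app", ("var", 0), ("var", 1)))))
```

The bound occurrence (index 0) is untouched while the free occurrence is incremented; the accompanying lemmas (e.g. how `shift` commutes with substitution) are exactly what these frameworks generate for free.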
OF THE UNIVERSITY OF MINNESOTA BY
Abstract
Many people have supported me during the development of this thesis and I owe them all a debt of gratitude. Firstly, I would like to thank my advisor Gopalan Nadathur for his patience and guidance which have played a significant part in my development as a researcher. His willingness to share his opinions on everything from academic life to playing squash has helped me to develop a perspective and to have fun while doing this. I look forward to continuing my interactions with him far into the future. I am grateful to Dale Miller for sharing with me an excitement for research and an appreciation of the uncertainty that precedes understanding. I have never met anybody else who so enjoys when things seem amiss, because he knows that a new perspective will eventually emerge and bring clarity. This thesis has been heavily influenced by the time I have spent working with Alwen Tiu, David Baelde, Zach Snow, and Xiaochu Qi. Understanding their work has given me a deeper understanding of my own research and its role in the bigger picture. I am thankful for the time I have had with each and every one of them.
tion]: Logics and Meanings of Programs—Specifying and Verify
Abstract
Programs that manipulate datatypes with binders, such as theorem provers or higher-order compilers, are regularly used for mission-critical purposes, and must be both reliable and performant. Formally proving such programs using as much automation as possible is highly desirable. In this paper, we propose a generic approach to handling datatypes with binders, both in the program and in its specification, in a way that facilitates automated reasoning about such datatypes and also leads to reasonably efficient code. Our method is implemented in the Why3 environment for program verification. We validate it on the examples of a lambda interpreter with several reduction strategies and a simple tableaux-based theorem prover.
Under consideration for publication in J. Functional Programming 1 Acute: Highlevel programming language design for distributed computation
Abstract
† INRIA Rocquencourt
Existing languages provide good support for typeful programming of standalone programs. In a distributed system, however, there may be interaction between multiple instances of many distinct programs, sharing some (but not necessarily all) of their module structure, and with some instances rebuilt with new versions of certain modules as time goes on. In this paper we discuss programming-language support for such systems, focussing on their typing and naming issues. We describe an experimental language, Acute, which extends an ML core to support distributed development, deployment, and execution, allowing type-safe interaction between separately-built programs. The main features are: (1) type-safe marshalling of arbitrary values; (2) type names that are generated (freshly and by hashing) to ensure that type equality tests suffice to protect the invariants of abstract types, across the entire distributed system; (3) expression-level names generated to ensure that name equality tests suffice for type safety of associated values, e.g. values carried on named channels; (4) controlled dynamic rebinding of marshalled values to local resources; and (5) thunkification of threads and mutexes to support computation mobility.
Experience report: Mechanizing Core Fzip using the locally nameless approach (extended abstract)
Abstract
For several years, much effort has been put into the development of techniques that ease the mechanization of proofs involving binders. We report such a mechanized development of metatheory, the type soundness of Core Fzip [3], by a non-expert user of Coq [2], using the locally nameless representation of binders and cofinite quantification, with the help of the tools LNgen [1] and Ott [4]. 1. Fzip and its formal proof in a nutshell Core Fzip is a variant of System F that allows for more freedom in the structure of programs that make use of existential types, by considering existentials with an open scope. It is equipped with a small-step reduction semantics and a sound type system. The paper proof is neither very informative nor very difficult, and consists of the subject reduction and progress properties. The mechanized proof was carried out in about one month by the author, who is not an expert user of Coq. It makes use of LNgen [1] and the experimental locally nameless backend of Ott [4] to reduce the burden of the locally nameless encoding and its infrastructure lemmas. The only complex automation we used is that provided by the Metatheory library from UPenn, which was of great help. Thus, a cleanup of the development, as well as cleverer tactics, could certainly reduce the size of the whole proof. Much time is spent in proof search, so Coq compiles it in about 45 minutes on a recent computer, while type checking takes just a few minutes.
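The locally nameless representation this report relies on keeps bound variables as de Bruijn indices and free variables as names, with an "opening" operation that replaces index 0 by a fresh name when descending under a binder. The sketch below is a generic illustration of that operation in Python, not the report's Coq/LNgen development; the tagged-tuple term format is assumed.

```python
# Locally nameless terms:
# ("bvar", i) bound index | ("fvar", x) free name | ("abs", body) | ("app", f, a)

def open_with(x, t, k=0):
    """Replace the bound index that refers to the outermost binder (k)
    with the free name x; k is raised when passing under a binder."""
    tag = t[0]
    if tag == "bvar":
        return ("fvar", x) if t[1] == k else t
    if tag == "fvar":
        return t
    if tag == "abs":
        return ("abs", open_with(x, t[1], k + 1))
    return ("app", open_with(x, t[1], k), open_with(x, t[2], k))

# Opening the body of \. 0 (\. 1): both occurrences refer to the same binder.
print(open_with("x", ("app", ("bvar", 0), ("abs", ("bvar", 1)))))
```

Cofinite quantification, the other ingredient the report mentions, then states binder rules "for all names x outside some finite set" rather than for one fresh x, which makes the induction principles go through without renaming lemmas.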