Results 1–10 of 12
An expressive, scalable type theory for certified code
 In ACM International Conference on Functional Programming, 2002
Abstract

Cited by 35 (4 self)
Abstract We present the type theory LTT, intended to form a basis for typed target languages, providing an internal notion of logical proposition and proof. The inclusion of explicit proofs allows the type system to guarantee properties that would otherwise be incompatible with decidable type checking. LTT also provides linear facilities for tracking ephemeral properties that hold only for certain program states. Our type theory allows for reuse of typechecking software by casting a variety of type systems within a single language. We provide additional reuse with a framework for modular development of operational semantics. This framework allows independent type systems and their operational semantics to be joined together, automatically inheriting the type safety properties of those individual systems.
Metatheory à la carte
 In POPL ’13, 2013
Abstract

Cited by 5 (1 self)
Formalizing metatheory, or proofs about programming languages, in a proof assistant has many well-known benefits. However, the considerable effort involved in mechanizing proofs has prevented it from becoming standard practice. This cost can be amortized by reusing as much of an existing formalization as possible when building a new language or extending an existing one. Unfortunately, reuse of components is typically ad hoc, with the language designer cutting and pasting existing definitions and proofs, and expending considerable effort to patch up the results. This paper presents a more structured approach to the reuse of formalizations of programming language semantics through the composition of modular definitions and proofs. The key contribution is the development of an approach to induction for extensible Church encodings which uses a novel reinterpretation of the universal property of folds. These encodings provide the foundation for a framework, formalized in Coq, which uses type classes to automate the composition of proofs from modular components. Several interesting language features, including binders and general recursion, illustrate the capabilities of our framework. We reuse these features to build fully mechanized definitions and proofs for a number of languages, including a version of mini-ML. Bounded induction enables proofs of properties for non-inductive semantic functions, and mediating type classes enable proof adaptation for more feature-rich languages.
Proof Weaving
 In Proceedings of the First Informal ACM SIGPLAN Workshop on Mechanizing Metatheory, 2006
Abstract

Cited by 4 (1 self)
Automated proof assistants provide few facilities for incremental development. Generally, if the underlying structures on which a proof is based are modified, the developer must redo much of the proof. Yet incremental development is really the most natural approach for proofs of programming language properties [5, 12]. We propose “proof weaving”, a technique that allows a proof developer to combine small proofs into larger ones by merging proof objects. We automate much of the merging process and thus ease incremental proof development for programming language properties. To make the discussion concrete we take as an example the problem of proving type soundness by proving progress and preservation [17] in Coq [3, 7]. However, we believe that the methods can be generalized to other proof assistants which generate proof objects, and most directly to those proof assistants which exploit the Curry-Howard isomorphism in representing proof terms as λ-terms [16], e.g. Isabelle and Minlog. We rely on the proof developer to initially prove type soundness for “tiny” languages. Each of these languages encapsulates a single well-defined programming feature. For example, a tiny language of booleans can be restricted to the terms True, False, and If and their …
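The “tiny language of booleans” mentioned above is easy to make concrete. The following is a hypothetical sketch (not the paper's Coq code) of such a language with a small-step reducer and a runtime check of the progress property that the paper proves formally:

```python
# Tiny booleans language: terms are True, False, and If.
TRUE, FALSE = ("true",), ("false",)

def if_(c, t, e):
    return ("if", c, t, e)

def is_value(t):
    return t in (TRUE, FALSE)

def step(t):
    """One small-step reduction; None when no rule applies."""
    if is_value(t):
        return None
    _, c, a, b = t
    if c == TRUE:
        return a
    if c == FALSE:
        return b
    c2 = step(c)                      # congruence rule: reduce the guard
    return ("if", c2, a, b) if c2 is not None else None

def progress(t):
    """Progress for this language: every term is a value or can step."""
    return is_value(t) or step(t) is not None

term = if_(if_(TRUE, FALSE, TRUE), TRUE, FALSE)
while not is_value(term):
    assert progress(term)             # reduction never gets stuck
    term = step(term)
print(term)  # → ('false',)
```

Every term of this language is trivially well typed, which is exactly what makes it a good minimal unit for weaving into larger soundness proofs.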
A Rewriting Logic Approach to Type Inference
Abstract

Cited by 3 (1 self)
Abstract. Meseguer and Roşu proposed rewriting logic semantics (RLS) as a programming language definitional framework that unifies operational and algebraic denotational semantics. RLS has already been used to define a series of didactic and real languages, but its benefits in connection with defining and reasoning about type systems have not been fully investigated. This paper shows how the same RLS style employed for giving formal definitions of languages can be used to define type systems. The same term-rewriting mechanism used to execute RLS language definitions can now be used to execute type systems, giving type checkers or type inferencers. The proposed approach is exemplified by defining the Hindley-Milner polymorphic type inferencer W as a rewrite logic theory and using this definition to obtain a type inferencer by executing it in a rewriting logic engine. The inferencer obtained this way compares favorably with other definitions or implementations of W. The performance of the executable definition is within an order of magnitude of that of highly optimized implementations of type inferencers, such as that of OCaml.
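To ground the abstract's claim that type inference can be run as rewriting of constraints, here is a hypothetical, much-simplified sketch in the spirit of algorithm W (no let-polymorphism, and not the paper's rewriting-logic theory — every name is invented):

```python
import itertools

# Fresh type variables, represented as ("var", n).
fresh = map(lambda n: ("var", n), itertools.count()).__next__

def infer(term, env, subst):
    """Return the type of `term`, extending `subst` (the rewrite state)."""
    kind = term[0]
    if kind == "int":                       # ("int", literal)
        return "int"
    if kind == "id":                        # ("id", name)
        return env[term[1]]
    if kind == "lam":                       # ("lam", x, body)
        a = fresh()
        b = infer(term[2], {**env, term[1]: a}, subst)
        return ("fun", a, b)
    if kind == "app":                       # ("app", f, x)
        f = infer(term[1], env, subst)
        x = infer(term[2], env, subst)
        r = fresh()
        unify(f, ("fun", x, r), subst)      # the "rewrite" step
        return r

def walk(t, subst):                         # resolve a var through subst
    while isinstance(t, tuple) and t[0] == "var" and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):                     # rewrite until agreement
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return
    if isinstance(a, tuple) and a[0] == "var":
        subst[a] = b; return
    if isinstance(b, tuple) and b[0] == "var":
        subst[b] = a; return
    if a[0] == b[0] == "fun":
        unify(a[1], b[1], subst); unify(a[2], b[2], subst); return
    raise TypeError(f"cannot unify {a} and {b}")

def resolve(t, subst):                      # apply the substitution fully
    t = walk(t, subst)
    if isinstance(t, tuple) and t[0] == "fun":
        return ("fun", resolve(t[1], subst), resolve(t[2], subst))
    return t

# (λx. x) 0 : the application forces the identity's type to int → int.
s = {}
ty = infer(("app", ("lam", "x", ("id", "x")), ("int", 0)), {}, s)
print(resolve(ty, s))  # → 'int'
```

In the paper the analogous steps are literal rewrite rules executed by a rewriting logic engine, rather than a hand-written recursion.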
Mixing Induction and Coinduction, 2009
Abstract

Cited by 2 (0 self)
Purely inductive definitions give rise to tree-shaped values where all branches have finite depth, and purely coinductive definitions give rise to values where all branches are potentially infinite. If this is too restrictive, then an alternative is to use mixed induction and coinduction. This technique appears to be fairly unknown. The aim of this paper is to make the technique more widely known, and to present several new applications of it, including a parser combinator library which guarantees termination of parsing, and a method for combining coinductively defined inference systems with rules like transitivity. The developments presented in the paper have been formalised and checked in Agda, a dependently typed programming language and proof assistant.
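A rough Python analogue of the mixed inductive/coinductive shape (hypothetical, far weaker than the Agda formalisation, which tracks the distinction in the types): the outer generator layer is "coinductive" and may produce forever, while each element is an "inductive" finite value, so consuming any single element always terminates.

```python
def chunked_naturals(chunk_size):
    """Potentially infinite stream (coinductive layer) of finite lists
    (inductive layer): each step does a bounded amount of work."""
    n = 0
    while True:                                  # may produce forever...
        yield list(range(n, n + chunk_size))     # ...but each chunk is finite
        n += chunk_size

stream = chunked_naturals(3)
print(next(stream))  # → [0, 1, 2]
print(next(stream))  # → [3, 4, 5]
```

In Agda the mixture is enforced statically, which is what makes guarantees like "parsing terminates" possible; Python can only model the operational behaviour.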
Theorem Proving for Product Lines
 In OOPSLA ’11, 2011
Abstract

Cited by 2 (1 self)
Mechanized proof assistants are powerful verification tools, but proof developments can still be difficult and time-consuming. When verifying a family of related programs, the effort can be reduced by proof reuse. In this paper, we show how to engineer proofs for product lines built from feature modules. Each module contains proof fragments which are composed together to build a complete proof of correctness for each product. We consider a product line of programming languages, where each variant includes metatheory proofs verifying the correctness of its syntax and semantic definitions. This approach has been realized in the Coq proof assistant, with the proofs of each feature independently certifiable by Coq. These proofs are composed for each language variant, with Coq mechanically verifying that the composite proofs are correct. As validation, we formalize a core calculus for Java in Coq which can be extended with any combination of casts, interfaces, or generics.
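The feature-module idea can be caricatured in a few lines of Python (a hypothetical sketch: runtime checks stand in for the Coq proof fragments, and all names are invented):

```python
def compose(*features):
    """Build one product-line variant from chosen feature modules,
    re-verifying each module's obligations against the composite."""
    rules, checks = {}, []
    for f in features:
        rules.update(f["rules"])
        checks += f["checks"]
    def ev(term):                       # evaluator tied over all rules
        return rules[term[0]](ev, *term[1:])
    assert all(chk(ev) for chk in checks)
    return ev

# Each feature bundles definitions with locally stated obligations.
base  = {"rules":  {"lit": lambda ev, n: n},
         "checks": [lambda ev: ev(("lit", 7)) == 7]}
arith = {"rules":  {"add": lambda ev, l, r: ev(l) + ev(r)},
         "checks": [lambda ev: ev(("add", ("lit", 1), ("lit", 2))) == 3]}

variant = compose(base, arith)          # one product of the line
print(variant(("add", ("lit", 4), ("lit", 5))))  # → 9
```

The paper's contribution is that the per-feature *proofs* (not just definitions) compose, with Coq checking the composite; the sketch only mirrors the composition structure.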
Inference Rules Plus Proof-Search Strategies Equals Programs, 2009
Abstract
In the programming-language community many authors communicate algorithms through the use of inference rules. To get from rules to working code requires careful thought and effort. If the rules change or the author wants to use a different algorithm, the effort required to fix the code can be disproportionate to the size of the change in the rules. This thesis shows that it is possible to generate working code automatically from inference rules as they appear in publications. The method of this generation is found in the combination of two domain-specific languages: Ruletex and MonStr. Ruletex formally describes inference rules; MonStr connects the rules to an algorithm. Ruletex descriptions are embedded in LaTeX, the language that researchers use to publish their work, so that the author commands complete control of the rules’ appearance. Moreover, the generated code enjoys several nice properties: existing code written in a general-purpose programming language can interoperate with Ruletex code, correctness of rules is decoupled from performance and termination of code, and implementations are conceptually simple, consisting only of λ-calculus with pattern matching. The main technical contribution of this work is the design of MonStr, the execution-strategy language used to form an algorithm out of rules. MonStr specifications provide an important guarantee: a valid strategy cannot affect partial correctness, although it can affect termination, completeness, and efficiency.
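The separation the thesis describes — rules as declarative data, a strategy deciding how to apply them — can be sketched in miniature (a hypothetical Python illustration, unrelated to Ruletex/MonStr themselves):

```python
# Rules derive "reachable" facts over a graph; they are independent of
# any execution order.
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def rule_axiom():
    return {("reachable", "a")}

def rule_step(facts):
    return {("reachable", y)
            for (x, y) in edges if ("reachable", x) in facts}

# One possible strategy: naive forward chaining to a fixed point.
# A different strategy (e.g., goal-directed search) could change
# efficiency or termination but not which facts are derivable.
def forward_chain(facts=frozenset()):
    while True:
        new = rule_axiom() | rule_step(facts)
        if new <= facts:
            return facts
        facts = facts | new

print(sorted(forward_chain()))
# → [('reachable', 'a'), ('reachable', 'b'),
#    ('reachable', 'c'), ('reachable', 'd')]
```

This mirrors the thesis's guarantee in spirit: swapping the strategy cannot make the system derive a wrong fact, only affect whether and how fast it derives them.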
Workshop on Language Descriptions, Tools and Applications (LDTA), 2010
Abstract
(European Joint Conferences on Theory and Practice of Software), organized in cooperation with ACM SIGPLAN. LDTA is an application- and tool-oriented forum on metaprogramming in a broad sense. A metaprogram is a program that takes other programs as input or output. The focus of LDTA is on generated or otherwise efficiently implemented metaprograms, possibly using high-level descriptions of programming languages. Tools and techniques presented at LDTA are usually applicable in the context of “Language Workbenches” or “Metaprogramming Systems”, or simply as parts of advanced programming environments or IDEs. The preliminary proceedings include an extended abstract based on the invited talk by Jean-Louis Giavitto (“A Domain Specific Language for Complex Natural & Artificial Systems Simulations”) and the 11 contributed papers that were selected for presentation and the preliminary proceedings by the programme committee from 30 submissions (a 37% acceptance rate). Every submission was reviewed by at least three members of the programme committee. In addition, the programme committee sought the opinions of additional referees, selected because of their expertise on particular topics. The final selection of papers was made during the first week of February 2010. We would like to thank all of the authors who submitted papers to the workshop, and the members of the programme committee for their excellent work. The programme committee did not meet in person, but carried out extensive discussions during the electronic PC meeting via EasyChair. We would also like to thank the LDTA organizing committee (Giorgios Robert Economopoulos and Jurgen Vinju) for their assistance and sound counsel, Torbjörn Ekman for contributing to the organization, and the ETAPS organization.
Preliminary Proceedings, Structural Operational Semantics, 2008
Abstract
 Add to MetaCart
A stochastic calculus of binding – applications to the modelling of cellular …