Results 1–10 of 20
Mechanized metatheory for the masses: The POPLmark challenge
In Theorem Proving in Higher Order Logics: 18th International Conference, number 3603 in LNCS, 2005
Abstract

Cited by 136 (15 self)
Abstract. How close are we to a world where every paper on programming languages is accompanied by an electronic appendix with machine-checked proofs? We propose an initial set of benchmarks for measuring progress in this area. Based on the metatheory of System F<:, a typed lambda-calculus with second-order polymorphism, subtyping, and records, these benchmarks embody many aspects of programming languages that are challenging to formalize: variable binding at both the term and type levels, syntactic forms with variable numbers of components (including binders), and proofs demanding complex induction principles. We hope that these benchmarks will help clarify the current state of the art, provide a basis for comparing competing technologies, and motivate further research.
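As an illustrative sketch (my own, not from the paper), the variable-binding difficulty the abstract names can be seen in a minimal de Bruijn-index treatment of untyped lambda terms; all names and the tuple encoding here are assumptions chosen for the example:

```python
# Minimal de Bruijn-index terms for the untyped lambda calculus,
# illustrating why capture-avoiding substitution needs careful shifting.
# Encoding: ("var", k) is the variable bound k binders up,
# ("lam", b) an abstraction, ("app", f, a) an application.

def shift(t, d, cutoff=0):
    """Add d to every free variable index in t."""
    kind = t[0]
    if kind == "var":
        k = t[1]
        return ("var", k + d if k >= cutoff else k)
    if kind == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Substitute s for variable j in t."""
    kind = t[0]
    if kind == "var":
        return s if t[1] == j else t
    if kind == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def beta(app):
    """One beta step: (lam b) a  ->  b[0 := a], with index bookkeeping."""
    _, (_, body), arg = app
    return shift(subst(body, 0, shift(arg, 1)), -1)

# (\x. x) y reduces to y (here y is the free variable 0)
identity = ("lam", ("var", 0))
assert beta(("app", identity, ("var", 0))) == ("var", 0)
```

The index bookkeeping in `beta` (shift up, substitute, shift down) is exactly the kind of detail the benchmarks force a mechanized proof to get right.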
On the unity of duality
Special issue on “Classical Logic and Computation”, 2008
Abstract

Cited by 12 (2 self)
Most type systems are agnostic regarding the evaluation strategy for the underlying languages, with ML's value restriction, which is absent in Haskell, as a notable exception. As type systems become more precise, however, detailed properties of the operational semantics may become visible, because properties captured by the types may be sound under one strategy but not the other. For example, intersection types distinguish between call-by-name and call-by-value functions, because the subtyping law (A → B) ∩ (A → C) ≤ A → (B ∩ C) is unsound for the latter in the presence of effects. In this paper we develop a proof-theoretic framework for analyzing the interaction of types with evaluation order, based on the notion of polarity. Polarity was discovered through linear logic, but we propose a fresh origin in Dummett’s program of justifying the logical laws through alternative verificationist or pragmatist “meaning-theories”, which include a bias towards either introduction or elimination rules. We revisit Dummett’s analysis using the tools of Martin-Löf’s judgmental method, and then show how to extend it to a unified polarized logic, with Girard’s “shift” connectives acting as intermediaries. This logic safely combines intuitionistic and dual intuitionistic reasoning principles, while simultaneously admitting a focusing interpretation for the classical sequent calculus. Then, by applying the Curry-Howard isomorphism to polarized logic, we obtain a single programming language in which evaluation order is reflected at the level of types. Different logical notions correspond directly to natural programming constructs, such as pattern-matching, explicit substitutions, values and call-by-value continuations. We give examples demonstrating the expressiveness of the language and type system, and prove a basic but modular type safety result.
We conclude with a brief discussion of extensions to the language with additional effects and types, and sketch the sort of explanation this can provide for operationally-sensitive typing phenomena.
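The strategy-sensitivity the abstract describes can be made concrete even in an untyped setting; a small Python sketch (my own illustration, not the paper's), where an effectful argument distinguishes call-by-value from call-by-name:

```python
# Call-by-value evaluates the argument once, up front; call-by-name
# re-evaluates a thunk at every use. An argument with a side effect
# (here, appending to a log) makes the difference observable.

calls = []

def effectful():
    calls.append("tick")   # the side effect
    return 21

def double_cbv(x):         # call-by-value: x is already a value
    return x + x

def double_cbn(thunk):     # call-by-name: argument passed unevaluated
    return thunk() + thunk()

calls.clear()
cbv_result = double_cbv(effectful())   # effect happens exactly once
assert (cbv_result, len(calls)) == (42, 1)

calls.clear()
cbn_result = double_cbn(effectful)     # effect happens at each use
assert (cbn_result, len(calls)) == (42, 2)
```

Both strategies agree on the value, but not on how often the effect fires, which is why types that track effects can be sound under one strategy and unsound under the other.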
Trusted source translation of a total function language
In 14th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 2008
Abstract

Cited by 9 (8 self)
Abstract. We present a trusted source translator that transforms total functions defined in the specification language of the HOL theorem prover to simple intermediate code. This translator eliminates polymorphism by code specialization, removes higher-order functions through closure conversion, interprets pattern matching as conditional expressions, etc. The target intermediate language can be further translated by proof to a simple imperative language. Each transformation is proven correct automatically. The formalization, implementation, and mechanical verification of all transformations are done in HOL4.
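As a hedged sketch of one of the transformations mentioned, closure conversion, rendered here in Python rather than HOL (the function and field names are assumptions for the example): a higher-order function becomes a first-order code pointer paired with its captured environment.

```python
# Closure conversion, sketched: a nested function capturing `n` becomes
# a (code, environment) pair, and every call goes through `apply_closure`.
# This mirrors the idea only, not the HOL4 implementation.

def make_adder_code(env, x):
    # first-order code: the free variable `n` is fetched from the environment
    return env["n"] + x

def make_adder(n):
    # the closure is plain data: code pointer plus captured environment
    return (make_adder_code, {"n": n})

def apply_closure(clos, arg):
    code, env = clos
    return code(env, arg)

add5 = make_adder(5)
assert apply_closure(add5, 3) == 8
```

After this transformation no function refers to variables outside its own parameters, which is what lets the translator target a simple first-order intermediate language.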
Extending the Loop Language with Higher-Order Procedural Variables
Special issue of ACM TOCL on Implicit Computational Complexity, 2010
Abstract

Cited by 9 (6 self)
We extend Meyer and Ritchie’s Loop language with higher-order procedures and procedural variables, and we show that the resulting programming language (called Loop ω) is a natural imperative counterpart of Gödel’s System T. The argument is twofold: 1. we define a translation of the Loop ω language into System T and prove that this translation actually provides a lock-step simulation; 2. using a converse translation, we show that Loop ω is expressive enough to encode any term of System T. Moreover, we define the “iteration rank” of a Loop ω program, which corresponds to the classical notion of “recursion rank” in System T, and we show that both translations preserve ranks. Two applications of these results in the area of implicit complexity are described.
Unchecked Exceptions can be Strictly More Powerful than Call/CC
Higher-Order and Symbolic Computation, 1996
Abstract

Cited by 6 (0 self)
We demonstrate that in the context of statically typed, purely functional lambda calculi without recursion, unchecked exceptions (e.g., SML exceptions) can be strictly more powerful than call/cc. More precisely, we prove that a natural extension of the simply-typed lambda calculus with unchecked exceptions is strictly more powerful than all known sound extensions of Girard's Fomega (a superset of the simply-typed lambda calculus) with call/cc. This result is established by showing that the first language is Turing complete while the latter languages permit only a subset of the recursive functions to be written. We show that our natural extension of the simply-typed lambda calculus with unchecked exceptions is Turing complete by reducing the untyped lambda calculus to it by means of a novel method for simulating recursive types using unchecked-exception-returning functions. The result concerning extensions of Fomega with call/cc stems from previous work of the author and Robert Harper.
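To convey the flavor of the encoding (a loose Python analogue of my own; the paper's point is that this typechecks in a simply-typed setting, which Python does not model), an exception payload can tie a recursive knot without any directly recursive definition:

```python
# Sketch of simulating a recursive type with exception-returning
# functions: `roll` hides a function inside an exception, and `unroll`
# recovers it by raising and catching. Note that `fact_step` never
# refers to itself; all recursion is routed through the exception.

class Wrap(Exception):
    def __init__(self, f):
        self.f = f

def roll(f):
    def packed():
        raise Wrap(f)      # the function travels as an exception payload
    return packed

def unroll(packed):
    try:
        packed()
    except Wrap as w:
        return w.f

def fact_step(self_packed):
    def fact(n):
        if n == 0:
            return 1
        # self-application, recovered through the exception
        return n * unroll(self_packed)(self_packed)(n - 1)
    return fact

packed = roll(fact_step)
factorial = unroll(packed)(packed)
assert factorial(5) == 120
```

The exception mechanism here plays the role of the recursive type μt.(t → s): it lets a value mention its own type indirectly, which is what makes general recursion, and hence Turing completeness, expressible.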
Validated Compilation through Logic
Abstract

Cited by 3 (2 self)
Abstract. To reason about programs written in a language, one needs to define its formal semantics, derive a reasoning mechanism (e.g., a program logic), and maximize the proof automation. Unfortunately, a compiler may involve multiple languages and phases; it is tedious and error-prone to do this for each language and each phase. We present an approach based on the use of higher-order logic to ease this burden. All the intermediate representations (IRs) are special forms of the logic of a prover, such that IR programs can be reasoned about directly in the logic. We use this technique to construct and validate an optimizing compiler. New techniques are used to compile-with-proof all the programs into the logic; e.g., a logic specification is derived automatically from the monad interpretation of a piece of assembly code.
Functional Genetic Programming with Combinators
Abstract

Cited by 3 (0 self)
Abstract. Prior program representations for genetic programming that incorporated features of modern programming languages solved harder problems than earlier representations, but required more complex genetic operators. We develop the idea of using combinator expressions as a program representation for genetic programming. This representation makes it possible to evolve programs with a variety of programming-language constructs using simple genetic operators. We investigate the effort required to evolve combinator-expression solutions to several problems: linear regression, even parity on N inputs, and implementation of the stack and queue data structures. Genetic programming with combinator expressions compares favorably to prior approaches, namely the works …
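To make the representation concrete (an illustrative Python evaluator of my own, not the authors' system), combinator expressions over S and K need only application and no binders, which is what keeps the genetic operators simple: any subtree can be spliced anywhere without variable-capture problems.

```python
# Tiny normal-order evaluator for S/K combinator expressions,
# represented as nested pairs: leaves are the strings "S" and "K"
# (or free names), and (f, a) is the application of f to a.

def step(t):
    """One leftmost-outermost reduction step, or None if t is normal."""
    if not isinstance(t, tuple):
        return None
    f, a = t
    if isinstance(f, tuple):
        g, b = f
        if g == "K":
            return b                        # K b a      ->  b
        if isinstance(g, tuple) and g[0] == "S":
            return ((g[1], a), (b, a))      # S x b a    ->  (x a) (b a)
    r = step(f)
    if r is not None:
        return (r, a)
    r = step(a)
    if r is not None:
        return (f, r)
    return None

def normalize(t, limit=1000):
    for _ in range(limit):
        r = step(t)
        if r is None:
            return t
        t = r
    raise RuntimeError("no normal form within limit")

# The identity combinator I = S K K:  (S K K) v  reduces to  v
identity = (("S", "K"), "K")
assert normalize((identity, "v")) == "v"
```

Because there is no binding structure, crossover and mutation on these trees are just subtree replacement, in contrast to representations with variables, where such operators must respect scope.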
Typing the Numeric Tower
Abstract

Cited by 3 (1 self)
Abstract. In the past, the creators of numerical programs had to choose between simple expression of mathematical formulas and static type checking. While the Lisp family and its dynamically typed relatives support such straightforward expression via a rich numeric tower, existing statically typed languages force programmers to pollute textbook formulas with explicit coercions or unwieldy notation. In this paper, we demonstrate how the type system of Typed Racket accommodates both a textbook programming style and expressive static checking. The type system provides a hierarchy of numeric types that can be freely mixed, as well as precise specifications of sign, representation, and range information, all while supporting generic operations. In addition, the type system provides information to the compiler so that it can perform standard numeric optimizations.
Designing the Numeric Tower: From the classic two-line factorial program to financial applications to scientific computation to graphics software, programs rely on numbers and numeric computations. Because of this spectrum of numeric applications, programmers wish to use a wide variety …
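Python's standard library offers a dynamically checked analogue of such a tower via the `numbers` abstract base classes and `fractions`; the sketch below is only an analogy to the abstract's point, not the Typed Racket system:

```python
from fractions import Fraction
from numbers import Complex, Rational, Real

# Textbook formulas can mix layers of the tower without coercion noise:
# int < Fraction (exact rationals) < float (inexact reals) < complex.
x = 1 + Fraction(1, 3)          # stays exact: 4/3, no rounding
assert x == Fraction(4, 3)
assert isinstance(x, Rational) and isinstance(x, Real)

y = x + 0.25                    # mixing with a float drops to inexact reals
assert isinstance(y, Real) and not isinstance(y, Rational)

z = y + 1j                      # and further up to the complex layer
assert isinstance(z, Complex)
```

What Typed Racket adds over such dynamic towers, per the abstract, is that these layer distinctions (plus sign and range information) are tracked statically, so the mixing above would be checked and exploited at compile time.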
An Isabelle/HOL Formalization of the Textbook Proof of Huffman’s Algorithm
2009
Abstract

Cited by 1 (1 self)
Huffman’s algorithm is a procedure for constructing a binary tree with minimum weighted path length. This report presents a formal proof of the correctness of Huffman’s algorithm, written using Isabelle/HOL. Our proof closely follows the sketches found in standard algorithms textbooks, uncovering a few snags in the process. Another distinguishing feature of our formalization is the use of custom induction rules to help Isabelle’s automatic tactics, leading to very short …
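The algorithm itself is short; here is a standard Python rendering (not the Isabelle/HOL formalization) using a min-heap, together with the weighted path length that the correctness statement is about:

```python
import heapq

def huffman(weights):
    """Build a Huffman tree from a {symbol: weight} dict; leaves are
    symbols, internal nodes are (left, right) pairs."""
    # the heap holds (weight, tiebreak, tree) triples so that trees
    # themselves are never compared
    heap = [(w, i, sym) for i, (sym, w) in enumerate(weights.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)      # two lightest trees
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    return heap[0][2]

def weighted_path_length(tree, weights, depth=0):
    """Sum of weight * depth over all leaves: the quantity minimized."""
    if not isinstance(tree, tuple):
        return weights[tree] * depth
    left, right = tree
    return (weighted_path_length(left, weights, depth + 1)
            + weighted_path_length(right, weights, depth + 1))

w = {"a": 5, "b": 2, "c": 1, "d": 1}
tree = huffman(w)
assert weighted_path_length(tree, w) == 15   # 5*1 + 2*2 + 1*3 + 1*3
```

The formal proof's burden is exactly the part this code takes on faith: that greedily merging the two lightest trees always yields a tree of minimum weighted path length.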