Results 1–10 of 24
Definitional interpreters for higher-order programming languages
Reprinted from the proceedings of the 25th ACM National Conference, 1972
Cited by 343 (2 self)
Abstract. Higher-order programming languages (i.e., languages in which procedures or labels can occur as values) are usually defined by interpreters that are themselves written in a programming language based on the lambda calculus (i.e., an applicative language such as pure LISP). Examples include McCarthy’s definition of LISP, Landin’s SECD machine, the Vienna definition of PL/I, Reynolds’ definitions of GEDANKEN, and recent unpublished work by L. Morris and C. Wadsworth. Such definitions can be classified according to whether the interpreter contains higher-order functions, and whether the order of application (i.e., call by value versus call by name) in the defined language depends upon the order of application in the defining language. As an example, we consider the definition of a simple applicative programming language by means of an interpreter written in a similar language. Definitions in each of the above classifications are derived from one another by informal but constructive methods. The treatment of imperative features such as jumps and assignment is also discussed.
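The style of definition the abstract describes can be sketched in miniature. The interpreter below is an illustrative sketch (not Reynolds' actual interpreter), written for a hypothetical tuple-encoded AST: because procedures of the defined language are represented by functions of the defining language (here Python), the interpreter is higher-order, and the defined language silently inherits Python's call-by-value order of application.

```python
# A minimal definitional interpreter for a tiny applicative language.
# The AST encoding (nested tuples) is a hypothetical choice for this
# sketch. Closures are represented by defining-language functions, so
# the defined language inherits Python's call-by-value evaluation order.

def interpret(expr, env):
    """Evaluate expr (a tuple-encoded AST) in environment env (a dict)."""
    tag = expr[0]
    if tag == "var":                      # ("var", name)
        return env[expr[1]]
    if tag == "const":                    # ("const", value)
        return expr[1]
    if tag == "lam":                      # ("lam", param, body)
        _, param, body = expr
        # Higher-order: a defined-language procedure is a Python function.
        return lambda arg: interpret(body, {**env, param: arg})
    if tag == "app":                      # ("app", fn, arg)
        _, fn, arg = expr
        return interpret(fn, env)(interpret(arg, env))
    raise ValueError(f"unknown expression tag: {tag}")

# ((lambda x. x) 42)
identity_app = ("app", ("lam", "x", ("var", "x")), ("const", 42))
```

Making the interpreter first-order instead (representing closures as data and threading an explicit control stack, as in the SECD machine) is exactly the kind of transformation between classifications the paper derives.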
The Denotational Semantics of Programming Languages
, 1976
Cited by 195 (0 self)
This paper is a tutorial introduction to the theory of programming language semantics developed by D. Scott and C. Strachey. The application of the theory to formal language specification is demonstrated and other applications are surveyed. The first language considered, LOOP, is very elementary and its definition merely introduces the notation and methodology of the approach. Then the semantic concepts of environments, stores, and continuations are introduced to model classes of programming language features and the underlying mathematical theory of computation due to Scott is motivated and outlined. Finally, the paper presents a formal definition of the language GEDANKEN.
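The core idea of the denotational approach is that the meaning of a command is a mathematical function from stores to stores, and the meaning of a compound command is composed from the meanings of its parts. The sketch below illustrates only that flavor, for a hypothetical command language of this sketch's own devising, not the paper's LOOP language or Scott-Strachey notation.

```python
# Denotational flavor: meaning() maps a command AST to a store
# transformer (dict -> dict). The command language here is hypothetical,
# chosen only to illustrate compositionality.

def meaning(cmd):
    """Map a command AST to a store transformer."""
    tag = cmd[0]
    if tag == "skip":
        return lambda store: store
    if tag == "assign":                   # ("assign", var, f), f: store -> value
        _, var, f = cmd
        return lambda store: {**store, var: f(store)}
    if tag == "seq":                      # ("seq", c1, c2): compose the meanings
        _, c1, c2 = cmd
        m1, m2 = meaning(c1), meaning(c2)
        return lambda store: m2(m1(store))
    raise ValueError(f"unknown command: {tag}")

# x := 1; y := x + 1
prog = ("seq",
        ("assign", "x", lambda s: 1),
        ("assign", "y", lambda s: s["x"] + 1))
```

Environments and continuations, introduced later in the paper, extend this same scheme: a continuation-style meaning function would take "the rest of the computation" as an extra argument rather than returning a store directly.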
Metatheory and Reflection in Theorem Proving: A Survey and Critique
, 1995
Cited by 69 (2 self)
One way to ensure correctness of the inference performed by computer theorem provers is to force all proofs to be done step by step in a simple, more or less traditional, deductive system. Using techniques pioneered in Edinburgh LCF, this can be made palatable. However, some believe such an approach will never be efficient enough for large, complex proofs. One alternative, commonly called reflection, is to analyze proofs using a second layer of logic, a metalogic, and so justify abbreviating or simplifying proofs, making the kinds of shortcuts humans often do or appealing to specialized decision algorithms. In this paper we contrast the fully-expansive LCF approach with the use of reflection. We put forward arguments to suggest that the inadequacy of the LCF approach has not been adequately demonstrated, and neither has the practical utility of reflection (notwithstanding its undoubted intellectual interest). The LCF system with which we are most concerned is the HOL proof ...
Proving Theorems about LISP Functions
, 1975
Cited by 58 (2 self)
Program verification is the idea that properties of programs can be precisely stated and proved in the mathematical sense. In this paper, some simple heuristics combining evaluation and mathematical induction are described, which the authors have implemented in a program that automatically proves a wide variety of theorems about recursive LISP functions. The method the program uses to generate induction formulas is described at length. The theorems proved by the program include that REVERSE is its own inverse and that a particular SORT program is correct. A list of theorems proved by the program is given.
Key words and phrases: LISP, automatic theorem-proving, structural induction, program verification. CR categories: 3.64, 4.22, 5.21.
1 Introduction
We are concerned with proving theorems in a first-order theory of lists, akin to the elementary theory of numbers. We use a subset of LISP as our language because recursive list processing functions are easy to write in LISP and because ...
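The flagship theorem mentioned in the abstract, that REVERSE is its own inverse, concerns a recursive definition like the one sketched below, with cons cells encoded as Python pairs and NIL as None (a hypothetical encoding for this sketch). The paper's program proves the property for all lists by structural induction; the code here can only spot-check finite instances.

```python
# REVERSE in the recursive LISP style: cons cells are (head, tail)
# pairs, NIL is None. This encoding is a hypothetical stand-in for
# the paper's LISP subset.

def append(xs, ys):
    """APPEND on cons-lists."""
    if xs is None:
        return ys
    head, tail = xs
    return (head, append(tail, ys))

def reverse(xs):
    """REVERSE on cons-lists: reverse the tail, append the head."""
    if xs is None:
        return None
    head, tail = xs
    return append(reverse(tail), (head, None))

# The list (1 2 3)
sample = (1, (2, (3, None)))
```

The induction the paper's program generates mirrors the recursion: the NIL case is immediate, and the cons case reduces reverse(reverse((h . t))) to the induction hypothesis via lemmas about APPEND.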
A Hoare Logic for Call-by-Value Functional Programs
Cited by 22 (1 self)
Abstract. We present a Hoare logic for a call-by-value programming language equipped with recursive, higher-order functions, algebraic data types, and a polymorphic type system in the style of Hindley and Milner. It is the theoretical basis for a tool that extracts proof obligations out of programs annotated with logical assertions. These proof obligations, expressed in a typed, higher-order logic, are discharged using off-the-shelf automated or interactive theorem provers. Although the technical apparatus that we exploit is by now standard, its application to call-by-value functional programming languages appears to be new, and (we claim) deserves attention. As a sample application, we check the partial correctness of a balanced binary search tree implementation.
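The workflow the abstract describes, annotating programs with pre- and postconditions from which proof obligations are extracted, can be given a rough runtime flavor. The decorator below is a hypothetical illustration only: the paper's tool discharges obligations statically with theorem provers, whereas this sketch merely checks the hypothetical annotations dynamically.

```python
# Hypothetical runtime flavor of assertion-annotated code. A real
# Hoare-logic tool turns the pre/post annotations into static proof
# obligations; this sketch just checks them on each call.

def requires_ensures(pre, post):
    """Attach a precondition pre(*args) and postcondition post(result, *args)."""
    def wrap(f):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@requires_ensures(pre=lambda xs: all(isinstance(x, int) for x in xs),
                  post=lambda r, xs: r == sorted(xs))
def insertion_sort(xs):
    """Sort by inserting each element into an already-sorted prefix."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out
```

In the static setting, each call site and each recursive branch would instead yield a logical formula to be sent to an automated or interactive prover.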
Strategic Computation and Deduction
, 2009
Cited by 4 (3 self)
I'd like to conclude by emphasizing what a wonderful field this is to work in. Logical reasoning plays such a fundamental role in the spectrum of intellectual activities that advances in automating logic will inevitably have a profound impact in many intellectual disciplines. Of course, these things take time. We tend to be impatient, but we need some historical perspective. The study of logic has a very long history, going back at least as far as Aristotle. During some of this time not very much progress was made. It's gratifying to realize how much has been accomplished in the less than fifty years since serious efforts to mechanize logic began.
Formalizing Domains, Ultrametric Spaces and Semantics of Programming Languages
Under consideration for publication in Math. Struct. in Comp. Science, 2010
Cited by 3 (2 self)
We describe a Coq formalization of constructive ω-cpos, ultrametric spaces and ultrametric-enriched categories, up to and including the inverse-limit construction of solutions to mixed-variance recursive equations in both categories enriched over ω-cppos and categories enriched over ultrametric spaces. We show how these mathematical structures may be used in formalizing semantics for three representative programming languages. Specifically, we give operational and denotational semantics for both a simply-typed CBV language with recursion and an untyped CBV language, establishing soundness and adequacy results in each case, and then use a Kripke logical relation over a recursively-defined metric space of worlds to give an interpretation of types over a step-counting operational semantics for a language with recursive types and general references.
Experience with Randomized Testing in Programming Language Metatheory
, 2009
Cited by 2 (0 self)
We explore the use of QuickCheck-style randomized testing in programming languages metatheory, a methodology proposed to reduce development time by revealing shallow errors early, before a formal proof attempt. This exploration begins with the development of a randomized testing framework for PLT Redex, a domain-specific language for specifying and debugging operational semantics. In keeping with the spirit of Redex, the framework is as lightweight as possible: the user encodes a conjecture as a predicate over the terms of the language, and guided by the structure of the language’s grammar, reduction relation, and metafunctions, Redex attempts to falsify the conjecture automatically. In addition to the details of this framework, we present a tutorial demonstrating its use and two case studies applying it to large language specifications. The first study, a postmortem, applies randomized testing to the formal semantics published with the latest revision of the Scheme language standard. Despite a community review period and a comprehensive, manually-constructed test suite, randomized testing in Redex revealed four bugs in the semantics. The second study presents our experience applying the tool concurrently with the development of a formal model for the MzScheme virtual machine and bytecode verifier. In addition to many errors in our formalization, randomized testing revealed six bugs in the core bytecode verification algorithm in production use. The results of these studies suggest that randomized testing is a cheap and effective technique for finding bugs in large programming language metatheories.
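The abstract's recipe, encode a conjecture as a predicate and let the tool search for a falsifying input, can be shown in miniature. The sketch below is a hypothetical stand-in: Redex generates terms from a language grammar and checks reduction relations, whereas this toy generates random Python lists.

```python
import random

# A QuickCheck-style falsifier in miniature: generate random inputs
# and report the first counterexample, or None if the conjecture
# survives every attempt. Illustrative sketch only, not Redex.

def falsify(conjecture, generator, attempts=1000, seed=0):
    """Search for inputs on which conjecture(*inputs) is False."""
    rng = random.Random(seed)
    for _ in range(attempts):
        case = generator(rng)
        if not conjecture(*case):
            return case            # counterexample found
    return None                    # conjecture survived all attempts

def two_lists(rng):
    """Generate a pair of short random integer lists."""
    fresh = lambda: [rng.randint(0, 9) for _ in range(rng.randint(0, 5))]
    return (fresh(), fresh())

# A wrong conjecture (reversal distributes over append left-to-right)
# and the corrected one (operand order swaps under reversal).
wrong = lambda xs, ys: list(reversed(xs + ys)) == list(reversed(xs)) + list(reversed(ys))
right = lambda xs, ys: list(reversed(xs + ys)) == list(reversed(ys)) + list(reversed(xs))
```

As in the paper's case studies, the value of the technique is asymmetric: a returned counterexample is a definite (shallow) bug, while a survived conjecture is only weak evidence and still calls for proof.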
Implementing Certified Programming Language Tools in Dependent Type Theory
, 2007