Results 1–10 of 34
A Compiled Implementation of Strong Reduction
Cited by 82 (5 self)
Motivated by applications to proof assistants based on dependent types, we develop and prove correct a strong reducer and βη-equivalence checker for the λ-calculus with products, sums, and guarded fixpoints. Our approach is based on compilation to the bytecode of an abstract machine performing weak reductions on non-closed terms, derived with minimal modifications from the ZAM machine used in the Objective Caml bytecode interpreter, and complemented by a recursive "read back" procedure. An implementation in the Coq proof assistant demonstrates important speedups compared with the original interpreter-based implementation of strong reduction in Coq.
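The "weak reduction plus read back" recipe in this abstract can be sketched compactly. The following is a minimal illustrative interpreter, not the ZAM-based compiled implementation the paper describes: terms in de Bruijn notation are weakly evaluated to closures and "accumulators" (free variables applied to arguments, which is what makes reduction of non-closed terms possible), and a recursive read-back turns values back into strongly normalized terms.

```python
# Sketch of strong reduction = weak evaluation + recursive read-back.
# All names here (eval_, readback, Accu, ...) are illustrative.
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Var:
    n: int                       # de Bruijn index

@dataclass(frozen=True)
class Lam:
    body: Any

@dataclass(frozen=True)
class App:
    f: Any
    a: Any

# Weak-machine values: closures, and accumulators representing a free
# variable (given as a de Bruijn LEVEL) applied to a spine of values.
@dataclass(frozen=True)
class Clos:
    body: Any
    env: tuple

@dataclass(frozen=True)
class Accu:
    level: int
    spine: tuple                 # arguments, most recent first

def eval_(t, env):
    """Weak evaluation: never reduces under a binder."""
    if isinstance(t, Var):
        return env[t.n]
    if isinstance(t, Lam):
        return Clos(t.body, env)
    f = eval_(t.f, env)
    a = eval_(t.a, env)
    if isinstance(f, Clos):
        return eval_(f.body, (a,) + f.env)
    return Accu(f.level, (a,) + f.spine)   # stuck: grow the spine

def readback(v, depth):
    """Recursively turn a weak value into a strong normal form."""
    if isinstance(v, Clos):
        # go under the binder by feeding a fresh accumulator
        fresh = Accu(depth, ())
        return Lam(readback(eval_(v.body, (fresh,) + v.env), depth + 1))
    t = Var(depth - v.level - 1)           # level -> index at this depth
    for a in reversed(v.spine):            # rebuild applications, oldest first
        t = App(t, readback(a, depth))
    return t

def normalize(t):
    return readback(eval_(t, ()), 0)
```

Note how `readback` re-invokes `eval_` under binders with a fresh accumulator, which is exactly why a machine that only performs weak reduction suffices for full strong normalization.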
The Theory of LEGO: A Proof Checker for the Extended Calculus of Constructions
, 1994
Cited by 69 (10 self)
LEGO is a computer program for interactive typechecking in the Extended Calculus of Constructions and two of its subsystems. LEGO also supports the extension of these three systems with inductive types. These type systems can be viewed as logics, and as meta languages for expressing logics, and LEGO is intended to be used for interactively constructing proofs in mathematical theories presented in these logics. I have developed LEGO over six years, starting from an implementation of the Calculus of Constructions by Gérard Huet. LEGO has been used for problems at the limits of our abilities to do formal mathematics. In this thesis I explain some aspects of the metatheory of LEGO's type systems, leading to a machine-checked proof that typechecking is decidable for all three type theories supported by LEGO, and to a verified algorithm for deciding their typing judgements, assuming only that they are normalizing. In order to do this, the theory of Pure Type Systems (PTS) is extended and f...
A Fine-Grained Notation for Lambda Terms and Its Use in Intensional Operations
 Journal of Functional and Logic Programming
, 1996
Cited by 25 (9 self)
We discuss issues relevant to the practical use of a previously proposed notation for lambda terms in contexts where the intensions of such terms have to be manipulated. This notation uses the `nameless' scheme of de Bruijn, includes expressions for encoding terms together with substitutions to be performed on them, and contains a mechanism for combining such substitutions so that they can be effected in a common structure traversal. The combination mechanism is a general one and consequently difficult to implement. We propose a simplification to it that retains its functionality in situations that occur commonly in β-reduction. We then describe a system for annotating terms to determine if they can be affected by substitutions generated by external β-contractions. These annotations can lead to a conservation of space and time in implementations of reduction by permitting substitutions to be performed trivially in certain situations. The use of the resulting notation in the reduction...
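The delayed-substitution idea this abstract describes can be illustrated with a toy explicit-substitution calculus (a simplification in the spirit of λσ, not the suspension notation of the paper): β-reduction records an environment instead of traversing the term immediately, pending substitutions are pushed down lazily, and nested pending substitutions are merged so that one structure traversal effects them all.

```python
# Toy explicit-substitution calculus: substitutions are delayed and
# merged, then effected in a single traversal.  Illustrative only.
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Var:
    n: int                       # de Bruijn index

@dataclass(frozen=True)
class Lam:
    body: Any

@dataclass(frozen=True)
class App:
    f: Any
    a: Any

@dataclass(frozen=True)
class Sub:                       # a term with a delayed substitution
    t: Any
    s: Any

# Substitutions: Shift(k) renumbers free variables by +k;
# Cons(hd, tl) sends variable 0 to hd and variable n+1 to tl(n).
@dataclass(frozen=True)
class Shift:
    k: int

@dataclass(frozen=True)
class Cons:
    hd: Any
    tl: Any

def lookup(n, s):
    while isinstance(s, Cons):
        if n == 0:
            return s.hd
        n, s = n - 1, s.tl
    return Var(n + s.k)

def compose(s1, s2):
    """s1 followed by s2:  t[s1][s2] == t[compose(s1, s2)]."""
    if isinstance(s1, Shift):
        if s1.k == 0:
            return s2
        if isinstance(s2, Cons):
            return compose(Shift(s1.k - 1), s2.tl)
        return Shift(s1.k + s2.k)
    return Cons(Sub(s1.hd, s2), compose(s1.tl, s2))

def whnf(t):
    """Force the head constructor, pushing substitutions inward."""
    if isinstance(t, App):
        f = whnf(t.f)
        if isinstance(f, Lam):                 # beta: delay, don't traverse
            return whnf(Sub(f.body, Cons(t.a, Shift(0))))
        return App(f, t.a)
    if isinstance(t, Sub):
        u, s = t.t, t.s
        if isinstance(u, Var):
            return whnf(lookup(u.n, s))
        if isinstance(u, Lam):                 # lift under the binder
            return Lam(Sub(u.body, Cons(Var(0), compose(s, Shift(1)))))
        if isinstance(u, App):
            return whnf(App(Sub(u.f, s), Sub(u.a, s)))
        return whnf(Sub(u.t, compose(u.s, s))) # MERGE pending substitutions
    return t

def nf(t):
    """Full normal form, driven by repeated head-forcing."""
    t = whnf(t)
    if isinstance(t, Lam):
        return Lam(nf(t.body))
    if isinstance(t, App):
        return App(nf(t.f), nf(t.a))
    return t
```

The `compose` call in the `Sub`-of-`Sub` case is the merging mechanism the abstract refers to; the papers listed here argue that restricting its generality simplifies implementation without losing the cases that arise in β-reduction.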
Scoping Constructs In Logic Programming: Implementation Problems And Their Solution
, 1995
Cited by 21 (9 self)
Machine (WAM). The provision of implications in goals results in the possibility of program clauses being added to the program for the purpose of solving specific subgoals. A naive scheme based on asserting and retracting program clauses does not suffice for implementing such additions for two reasons. First, it is necessary to also support the resurrection of an earlier existing program in the face of backtracking. Second, the possibility for implication goals to be surrounded by quantifiers requires a consideration of the parameterization of program clauses by bindings for their free variables. Devices for supporting these additional requirements are described, as is the integration of these devices into the WAM. Further extensions to the machine are outlined for handling higher-order additions to the language. ...
Mixing finite success and finite failure in an automated prover
 In Proceedings of ESHOL'05: Empirically Successful Automated Reasoning in Higher-Order Logics, pages 79–98
, 2005
Cited by 14 (7 self)
The operational semantics and typing judgements of modern programming and specification languages are often defined using relations and proof systems. In simple settings, logic programming languages can be used to provide rather direct and natural interpreters for such operational semantics. More complex features of specifications, such as names and their bindings, proof rules with negative premises, and the exhaustive enumeration of state spaces, all pose significant challenges to conventional logic programming systems. In this paper, we describe a simple architecture for the implementation of deduction systems that allows a specification to interleave finite success and finite failure. The implementation techniques for this prover are largely common ones from higher-order logic programming, i.e., logic variables, (higher-order pattern) unification, backtracking (using stream-based computation), and abstract syntax based on simply typed λ-terms. We present a particular instance of this prover's architecture and its prototype implementation, Level 0/1, based on the dual interpretation of (finite) success and finite failure in proof search. We show how Level 0/1 provides a high-level and declarative implementation of model checking and bisimulation checking for the (finite) π-calculus.
A Verified Typechecker
 Proceedings of the Second International Conference on Typed Lambda Calculi and Applications, volume 902 of Lecture Notes in Computer Science
, 1995
A Notation for Lambda Terms II: Refinements and Applications
, 1994
Cited by 12 (2 self)
Issues that are relevant to the representation of lambda terms in contexts where their intensions have to be manipulated are examined. The basis for such a representation is provided by the suspension notation for lambda terms that is described in a companion paper. This notation obviates α-conversion in the comparison of terms by using the `nameless' scheme of de Bruijn and also permits a delaying of substitutions by including a class of terms that encode other terms together with substitutions to be performed on them. The suspension notation contains a mechanism for `merging' substitutions so that they can be effected in a common structure traversal. The mechanism is cumbersome to implement in its full generality and a simplification to it is considered. In particular, the old merging operations are eliminated in favor of new ones that capture some of their functionality and that permit a simplified syntax for terms. The resulting notation is refined by the addition of annotations...
Tradeoffs in the Intensional Representation of Lambda Terms
 Rewriting Techniques and Applications (RTA 2002), volume 2378 of LNCS
, 2002
Cited by 10 (3 self)
Higher-order representations of objects such as programs, specifications and proofs are important to many meta-programming and symbolic computation tasks. Systems that support such representations often depend on the implementation of an intensional view of the terms of suitable typed lambda calculi. Refined lambda calculus notations have been proposed that can be used in realizing such implementations. There are, however, choices in the actual deployment of such notations whose practical consequences are not well understood. Towards addressing this lacuna, the impact of three specific ideas is examined: the de Bruijn representation of bound variables, the explicit encoding of substitutions in terms, and the annotation of terms to indicate their independence of external abstractions. Qualitative assessments are complemented by experiments over actual computations using the λProlog language.
Practical higher-order pattern unification with on-the-fly raising
 In ICLP 2005: 21st International Logic Programming Conference, volume 3668 of LNCS
, 2005
Cited by 9 (4 self)
Higher-order pattern unification problems arise often in computations carried out within systems such as Twelf, λProlog and Isabelle. An important characteristic of such problems is that they are given by equations appearing under a prefix of alternating universal and existential quantifiers. Existing algorithms for solving these problems assume that such prefixes are simplified to a ∀∃∀ form by an a priori application of a transformation known as raising. There are drawbacks to this approach. Mixed quantifier prefixes typically manifest themselves in the course of computation, thereby requiring a dynamic form of preprocessing that is difficult to support in low-level implementations. Moreover, raising may be redundant in many cases and its effect may have to be undone by a subsequent pruning transformation. We propose a method to overcome these difficulties. In particular, a unification algorithm is described that proceeds by recursively descending through the structures of terms, performing raising and other transformations on-the-fly and only as needed. This algorithm also exploits an explicit substitution notation for lambda terms.
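The "pattern" fragment this abstract builds on has a simple syntactic characterization: a problem is a higher-order pattern problem when every meta (existential) variable occurs applied only to distinct bound (universally quantified) variables. The checker below is an illustrative sketch of that side condition, with hypothetical names, not the unification algorithm of the paper.

```python
# Sketch of the higher-order pattern check: each meta variable may be
# applied only to DISTINCT bound variables.  Names are illustrative.
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Bound:
    n: int                       # λ-bound / universal variable

@dataclass(frozen=True)
class Meta:
    name: str                    # existential (logic) variable

@dataclass(frozen=True)
class Lam:
    body: Any

@dataclass(frozen=True)
class App:
    f: Any
    a: Any

def spine(t):
    """Split nested applications into (head, [arg1, ..., argn])."""
    args = []
    while isinstance(t, App):
        args.append(t.a)
        t = t.f
    return t, list(reversed(args))

def is_pattern(t):
    head, args = spine(t)
    if isinstance(head, Meta):
        # every argument must be a bound variable, all distinct
        return (all(isinstance(a, Bound) for a in args)
                and len({a.n for a in args}) == len(args))
    if isinstance(head, Lam):
        return is_pattern(head.body) and all(is_pattern(a) for a in args)
    return all(is_pattern(a) for a in args)      # head is a bound variable
```

On this fragment, unification is decidable with most general unifiers; equations like `F x y = x` (with `x`, `y` bound) fall inside it, while `F x x = x` and `F (x y) = x` fall outside.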
Checking foundational proof certificates for firstorder logic
Cited by 7 (7 self)
We present the design philosophy of a proof checker based on a notion of foundational proof certificates. This checker provides a semantics of proof evidence using recent advances in the theory of proofs for classical and intuitionistic logic. That semantics is then performed by a (higher-order) logic program: successful performance means that a formal proof of a theorem has been found. We describe how the λProlog programming language provides several features that help guarantee such a soundness claim. Some of these features (such as strong typing, abstract datatypes, and higher-order programming) were features of the ML programming language when it was first proposed as a proof checker for LCF. Other features of λProlog (such as support for bindings, substitution, and backtracking search) turn out to be equally important for describing and checking the proof evidence encoded in proof certificates. Since trusting our proof checker requires trusting a programming language implementation, we discuss various avenues for enhancing one's trust of such a checker.