Results 1–10 of 17
Type theories
 In STACS ’02: Proceedings of the 19th Annual Symposium on Theoretical Aspects of Computer Science
, 2002
Cited by 7 (3 self)
Abstract. Deduction modulo is a way to express a theory using computation rules instead of axioms. We present in this paper an extension of deduction modulo, called polarized deduction modulo, where some rules can only be used at positive occurrences, while others can only be used at negative ones. We show that all theories in propositional calculus can be expressed in this framework and that cuts can always be eliminated with such theories. Mathematical proofs are almost never built in pure logic: besides the deduction rules and the logical axioms that express the meaning of the connectives and quantifiers, they use something else, a theory, that expresses the meaning of the other symbols of the language. Examples of theories are equational theories, arithmetic, type theory, set theory, and so on. The usual definition of a theory as a set of axioms is sufficient when one is interested in the provability relation, but, as is well known, it is not when one is interested in the structure of proofs and in the theorem-proving process.
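The central idea of deduction modulo, turning an axiom such as x + 0 = x into a computation rule, can be illustrated with a small sketch. This is my own illustration of the general technique, not code from the paper; the term encoding and rule set are chosen for brevity:

```python
# Illustrative sketch: in deduction modulo, equational axioms of arithmetic
# become rewrite rules, and equations in the generated congruence are decided
# by comparing normal forms instead of invoking axioms in a proof.

# Ground terms: ("zero",), ("succ", t), ("plus", t, u)
def normalize(t):
    """Rewrite a ground term to normal form using
    x + 0 -> x  and  x + succ(y) -> succ(x + y)."""
    if t[0] == "plus":
        x, y = normalize(t[1]), normalize(t[2])
        if y == ("zero",):
            return x                                        # x + 0 -> x
        if y[0] == "succ":
            return ("succ", normalize(("plus", x, y[1])))   # x + s(y) -> s(x + y)
        return ("plus", x, y)
    if t[0] == "succ":
        return ("succ", normalize(t[1]))
    return t

def equal_modulo(t, u):
    # An equation holds modulo the rules iff both sides share a normal form.
    return normalize(t) == normalize(u)

one = ("succ", ("zero",))
two = ("succ", ("succ", ("zero",)))
print(equal_modulo(("plus", one, one), two))  # True: 1 + 1 = 2 by computation
```

Here the fact that 1 + 1 = 2 needs no axiom in the proof: it is established by computation alone, which is exactly the separation of deduction and computation that deduction modulo formalizes.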
A completeness theorem for strong normalization in minimal deduction modulo
, 2009
Cited by 6 (2 self)
Abstract. Deduction modulo is an extension of first-order predicate logic where axioms are replaced by rewrite rules and where many theories, such as arithmetic, simple type theory and some variants of set theory, can be expressed. An important question in deduction modulo is to find a condition characterizing the theories that have the strong normalization property. Dowek and Werner have given a semantic sufficient condition for a theory to have the strong normalization property: they have proved a “soundness” theorem of the form: if a theory has a model (of a particular form), then it has the strong normalization property. In this paper, we refine their notion of model in a way that allows us to prove not only soundness but also completeness: if a theory has the strong normalization property, then it has a model of this form. The key idea of our model construction is a refinement of Girard's notion of reducibility candidates. By providing a sound and complete semantics for theories having the strong normalization property, this paper contributes to exploring this idea.
Strategic Computation and Deduction
, 2009
Cited by 4 (3 self)
I'd like to conclude by emphasizing what a wonderful field this is to work in. Logical reasoning plays such a fundamental role in the spectrum of intellectual activities that advances in automating logic will inevitably have a profound impact in many intellectual disciplines. Of course, these things take time. We tend to be impatient, but we need some historical perspective. The study of logic has a very long history, going back at least as far as Aristotle. During some of this time not very much progress was made. It's gratifying to realize how much has been accomplished in the less than fifty years since serious efforts to mechanize logic began.
Unbounded proof-length speedup in deduction modulo
 In CSL 2007, volume 4646 of LNCS
, 2007
Cited by 3 (2 self)
In 1973, Parikh proved a speedup theorem conjectured by Gödel 37 years before: there exist arithmetical formulæ that are provable in first-order arithmetic, but whose shortest proof in second-order arithmetic is arbitrarily smaller than any proof in first order. On the other hand, resolution for higher-order logic can be simulated step by step in a first-order narrowing and resolution method based on deduction modulo, whose paradigm is to separate deduction and computation to make proofs clearer and shorter. We prove that (i+1)-th order arithmetic can be linearly simulated in i-th order arithmetic modulo some confluent and terminating rewrite system. We also show that there exists a speedup between i-th order arithmetic modulo this system and i-th order arithmetic without modulo. All this allows us to prove that the speedup conjectured by Gödel does not come from the deductive part of the proofs, but can be expressed as simple computation, therefore justifying the use of deduction modulo as an efficient first-order setting simulating higher order.
Superdeduction at Work
Cited by 1 (1 self)
Abstract. Superdeduction is a systematic way to extend a deduction system such as the sequent calculus with new deduction rules computed from the user's theory. We show how this can be done in a systematic, correct and complete way. We prove in detail the strong normalization of a proof-term language that appropriately models superdeduction. We finally exemplify, on several examples including equality and Noetherian induction, the usefulness of this approach, which is implemented in the lemuridæ system, written in TOM.
Checking foundational proof certificates for first-order logic
Cited by 1 (1 self)
We present the design philosophy of a proof checker based on a notion of foundational proof certificates. This checker provides a semantics of proof evidence using recent advances in the theory of proofs for classical and intuitionistic logic. That semantics is then performed by a (higher-order) logic program: successful performance means that a formal proof of a theorem has been found. We describe how the λProlog programming language provides several features that help guarantee such a soundness claim. Some of these features (such as strong typing, abstract datatypes, and higher-order programming) were features of the ML programming language when it was first proposed as a proof checker for LCF. Other features of λProlog (such as support for bindings, substitution, and backtracking search) turn out to be equally important for describing and checking the proof evidence encoded in proof certificates. Since trusting our proof checker requires trusting a programming language implementation, we discuss various avenues for enhancing one's trust of such a checker.
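The idea of a certificate that guides a small trusted kernel can be sketched in miniature. This is a hypothetical toy, not λProlog and not the authors' checker: a certificate is a term telling the checker which inference rule of the implication fragment of minimal natural deduction to apply, so that checking is a deterministic traversal rather than proof search. All names and the encoding are my own:

```python
# Toy certificate checker (illustrative only). Formulas: atoms are strings;
# ("->", a, b) is implication. Hypotheses are a list; ("hyp", i) refers to
# the i-th one. The certificate supplies every choice the checker would
# otherwise have to search for (e.g. the cut formula in modus ponens).
def check(hyps, cert, goal):
    """Return True iff `cert` certifies  hyps |- goal."""
    tag = cert[0]
    if tag == "hyp":                       # ("hyp", i): use hypothesis i
        return hyps[cert[1]] == goal
    if tag == "imp_i":                     # ("imp_i", c): to prove a -> b,
        if goal[0] != "->":                #   assume a and certify b
            return False
        return check(hyps + [goal[1]], cert[1], goal[2])
    if tag == "imp_e":                     # ("imp_e", cf, ca, a): modus ponens;
        a = cert[3]                        #   the certificate names the premise a
        return check(hyps, cert[1], ("->", a, goal)) and check(hyps, cert[2], a)
    return False

# |- p -> p, certified by introducing the implication and using the hypothesis.
print(check([], ("imp_i", ("hyp", 0)), ("->", "p", "p")))  # True
```

A real foundational-proof-certificate checker plays the same game at the level of focused sequent calculi, with the logic-programming engine reconstructing the details the certificate elides.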
Conversion by Evaluation
Mathieu Boespflug
 In Twelfth International Symposium on Practical Aspects of Declarative Languages (2010)
, 2009
Abstract. We show how testing convertibility of two types in dependently typed systems can advantageously be implemented using untyped normalization by evaluation, thereby reusing existing compilers and runtime environments for stock functional languages, without peeking under the hood, for a fast yet cheap system in terms of implementation effort. Our focus is on the performance of untyped normalization by evaluation. We demonstrate that with the aid of a standard optimization for higher-order programs (namely uncurrying), and the reuse of native datatypes and pattern-matching facilities of the underlying evaluator, we may obtain a normalizer with little to no performance overhead compared to a regular evaluator.
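The core trick, reusing the host language's evaluator, can be seen in a generic untyped normalization-by-evaluation sketch. This is my illustration of the standard technique, not the paper's implementation (which targets compiled functional languages for speed): terms are evaluated to host-language closures, then read back ("reified") into syntactic normal forms.

```python
# Minimal untyped normalization by evaluation (illustrative sketch).
# Syntax: ("var", name) | ("lam", name, body) | ("app", f, a)
import itertools

fresh = itertools.count()  # supply of fresh variable names for readback

def evaluate(term, env):
    """Map syntax to semantic values: closures for lambdas,
    tuples for neutral terms (variables and stuck applications)."""
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        # Reuse the host language's closures as the semantics of functions.
        return lambda v, t=term: evaluate(t[2], {**env, t[1]: v})
    f, a = evaluate(term[1], env), evaluate(term[2], env)
    return f(a) if callable(f) else ("app", f, a)   # apply or stay neutral

def reify(value):
    """Read a semantic value back into a syntactic normal form."""
    if callable(value):
        x = f"x{next(fresh)}"                       # probe with a fresh variable
        return ("lam", x, reify(value(("var", x))))
    if value[0] == "app":
        return ("app", reify(value[1]), reify(value[2]))
    return value                                    # a variable

def normalize(term):
    return reify(evaluate(term, {}))

# λy. (λx. x) y  normalizes to the identity: a lam whose body is its own variable.
t = ("lam", "y", ("app", ("lam", "x", ("var", "x")), ("var", "y")))
print(normalize(t))
```

The paper's point is that once normalization is phrased this way, the expensive part (the `evaluate` phase) can be delegated to an existing compiler and runtime instead of an interpreter written by hand.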