Results 1–10 of 22
Type theories
In STACS ’02: Proceedings of the 19th Annual Symposium on Theoretical Aspects of Computer Science, 2002
Cited by 7 (3 self)
Abstract. Deduction modulo is a way to express a theory using computation rules instead of axioms. We present in this paper an extension of deduction modulo, called Polarized deduction modulo, where some rules can only be used at positive occurrences, while others can only be used at negative ones. We show that all theories in propositional calculus can be expressed in this framework and that cuts can always be eliminated with such theories. Mathematical proofs are almost never built in pure logic: besides the deduction rules and the logical axioms that express the meaning of the connectives and quantifiers, they use something else, a theory, that expresses the meaning of the other symbols of the language. Examples of theories are equational theories, arithmetic, type theory, set theory, ... The usual definition of a theory, as a set of axioms, is sufficient when one is interested in the provability relation, but, as is well known, it is not when one is interested in the structure of proofs and in the theorem proving process.
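The polarity restriction described in this abstract can be illustrated concretely. The sketch below is hypothetical (the formula encoding, rule table, and function names are ours, not the paper's): each rewrite rule carries a polarity, and it may fire only on an atom occurring at that polarity, where negation flips polarity and implication flips it on the left-hand side.

```python
# Hypothetical sketch of polarized rewriting on propositional formulas.
# Formulas are nested tuples: ('atom', name), ('not', f),
# ('imp', f, g), ('and', f, g), ('or', f, g).
# A rule table maps (atom_name, polarity) to a replacement formula.

def rewrite(formula, rules, polarity='+'):
    """Apply polarized rules top-down; '-' marks negative occurrences."""
    kind = formula[0]
    if kind == 'atom':
        name = formula[1]
        if (name, polarity) in rules:
            # the rule fires only at the matching polarity
            return rewrite(rules[(name, polarity)], rules, polarity)
        return formula
    if kind == 'not':
        flip = '-' if polarity == '+' else '+'
        return ('not', rewrite(formula[1], rules, flip))
    if kind == 'imp':
        # the left of an implication is a negative position
        flip = '-' if polarity == '+' else '+'
        return ('imp', rewrite(formula[1], rules, flip),
                       rewrite(formula[2], rules, polarity))
    # 'and' / 'or' preserve polarity in both arguments
    return (kind, rewrite(formula[1], rules, polarity),
                  rewrite(formula[2], rules, polarity))

# P rewrites to Q ∧ R, but only at negative occurrences:
rules = {('P', '-'): ('and', ('atom', 'Q'), ('atom', 'R'))}
goal = ('imp', ('atom', 'P'), ('atom', 'P'))  # P ⇒ P
print(rewrite(goal, rules))
# only the left-hand (negative) P is rewritten; the right-hand P is untouched
```

With an unpolarized rule both occurrences of P would be rewritten; the polarity tag is what lets the two sides of P ⇒ P be treated differently.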
A completeness theorem for strong normalization in minimal deduction modulo, 2009
Cited by 6 (2 self)
Abstract. Deduction modulo is an extension of first-order predicate logic where axioms are replaced by rewrite rules and where many theories, such as arithmetic, simple type theory and some variants of set theory, can be expressed. An important question in deduction modulo is to find a condition on the theories that have the strong normalization property. Dowek and Werner have given a semantic sufficient condition for a theory to have the strong normalization property: they have proved a "soundness" theorem of the form: if a theory has a model (of a particular form), then it has the strong normalization property. In this paper, we refine their notion of model in a way that allows proving not only soundness but also completeness: if a theory has the strong normalization property, then it has a model of this form. The key idea of our model construction is a refinement of Girard's notion of reducibility candidates. By providing a sound and complete semantics for theories having the strong normalization property, this paper contributes to exploring the idea.
Strategic Computation and Deduction, 2009
Cited by 4 (3 self)
I'd like to conclude by emphasizing what a wonderful field this is to work in. Logical reasoning plays such a fundamental role in the spectrum of intellectual activities that advances in automating logic will inevitably have a profound impact in many intellectual disciplines. Of course, these things take time. We tend to be impatient, but we need some historical perspective. The study of logic has a very long history, going back at least as far as Aristotle. During some of this time not very much progress was made. It's gratifying to realize how much has been accomplished in the less than fifty years since serious efforts to mechanize logic began.
CoqInE: Translating the Calculus of Inductive Constructions into the λΠ-calculus Modulo
In Second International Workshop on Proof Exchange for Theorem Proving, 2012
Cited by 4 (1 self)
We show how to translate the Calculus of Inductive Constructions (CIC) as implemented by Coq into the λΠ-calculus modulo, a proposed common back-end proof format for heterogeneous proof assistants.
Unbounded proof-length speedup in deduction modulo
In CSL 2007, volume 4646 of LNCS, 2007
Cited by 3 (2 self)
In 1973, Parikh proved a speedup theorem conjectured by Gödel 37 years before: there exist arithmetical formulæ that are provable in first-order arithmetic, but whose shortest proof in second-order arithmetic is arbitrarily smaller than any proof in first order. On the other hand, resolution for higher-order logic can be simulated step by step in a first-order narrowing and resolution method based on deduction modulo, whose paradigm is to separate deduction and computation to make proofs clearer and shorter. We prove that (i+1)-th order arithmetic can be linearly simulated in i-th order arithmetic modulo some confluent and terminating rewrite system. We also show that there exists a speedup between i-th order arithmetic modulo this system and i-th order arithmetic without modulo. All this allows us to prove that the speedup conjectured by Gödel does not come from the deductive part of the proofs, but can be expressed as simple computation, therefore justifying the use of deduction modulo as an efficient first-order setting simulating higher-order logic.
Checking foundational proof certificates for first-order logic
Cited by 2 (2 self)
We present the design philosophy of a proof checker based on a notion of foundational proof certificates. This checker provides a semantics of proof evidence using recent advances in the theory of proofs for classical and intuitionistic logic. That semantics is then performed by a (higher-order) logic program: successful performance means that a formal proof of a theorem has been found. We describe how the λProlog programming language provides several features that help guarantee such a soundness claim. Some of these features (such as strong typing, abstract datatypes, and higher-order programming) were features of the ML programming language when it was first proposed as a proof checker for LCF. Other features of λProlog (such as support for bindings, substitution, and backtracking search) turn out to be equally important for describing and checking the proof evidence encoded in proof certificates. Since trusting our proof checker requires trusting a programming language implementation, we discuss various avenues for enhancing one's trust of such a checker.
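The slogan "successful performance means that a formal proof of a theorem has been found" can be illustrated in miniature. The sketch below is not the paper's λProlog checker; it is a hypothetical toy (our own certificate format and `check` function) for minimal implicational logic, where a certificate is a small term and checking it against a formula succeeds exactly when it encodes a proof.

```python
# Hypothetical miniature of certificate checking for minimal
# implicational logic. Formulas: ('atom', name) or ('imp', f, g).
# Certificates: ('hyp', i)           -- use the i-th hypothesis
#               ('lam', body)        -- implication introduction
#               ('app', f, x, A)     -- modus ponens, A is the cut formula

def check(cert, formula, hyps=()):
    """Return True iff `cert` is a proof of `formula` under `hyps`."""
    tag = cert[0]
    if tag == 'hyp':
        return hyps[cert[1]] == formula
    if tag == 'lam':
        if formula[0] != 'imp':
            return False
        a, b = formula[1], formula[2]
        return check(cert[1], b, hyps + (a,))
    if tag == 'app':
        a = cert[3]
        # f must prove a -> formula, and x must prove a
        return (check(cert[1], ('imp', a, formula), hyps)
                and check(cert[2], a, hyps))
    return False

# λf. λx. f x x  proves  (A -> A -> B) -> A -> B
A, B = ('atom', 'A'), ('atom', 'B')
goal = ('imp', ('imp', A, ('imp', A, B)), ('imp', A, B))
cert = ('lam', ('lam',
        ('app', ('app', ('hyp', 0), ('hyp', 1), A), ('hyp', 1), A)))
print(check(cert, goal))  # True
```

A real foundational checker delegates far more (polarities, backtracking search, binding) to the logic-programming engine, which is precisely the paper's point about λProlog; this toy only shows the shape of the trust argument: the checker is small enough to inspect.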
A theory independent Curry-De Bruijn-Howard correspondence
In International Colloquium on Automata, Languages and Programming, 2012 (invited talk)
Cited by 1 (0 self)
Brouwer, Heyting, and Kolmogorov have proposed to define constructive proofs as algorithms: for instance, a proof of A ⇒ B is an algorithm taking proofs of A as input and returning proofs of B as output. Curry, De Bruijn, and Howard have developed this idea further. First, they have proposed to express these algorithms in the lambda-calculus, writing for instance λf^(A⇒A⇒B) λx^A (f x x) for the proof of the proposition (A ⇒ A ⇒ B) ⇒ A ⇒ B taking a proof f of A ⇒ A ⇒ B and a proof x of A as input and returning the proof of B obtained by applying f to x twice. Then, they have remarked that, as proofs of A ⇒ B map proofs of A to proofs of B, their type proof(A ⇒ B) is proof(A) → proof(B). Thus the function proof, mapping propositions to the type of their proofs, is a morphism transforming the operation ⇒ into the operation →. In the same way, this morphism transforms cut-reduction in proofs into beta-reduction in lambda-terms. This expression of proofs as lambda-terms has been extensively used in proof processing systems: Automath, Nuprl, Coq, Elf, Agda, etc.
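The abstract's example term λf^(A⇒A⇒B) λx^A (f x x) can be transcribed directly into any typed functional language. Below is a hypothetical Python rendering (the names `proof` and `double` are ours) using the `typing` module to show the ⇒-to-→ morphism: the proposition becomes the function's type.

```python
# The Curry-De Bruijn-Howard reading of  λf. λx. (f x x):
# a proof of (A => A => B) => A => B is a function that takes a
# proof f of A => A => B and a proof x of A, and applies f to x twice.
from typing import Callable, TypeVar

A = TypeVar('A')
B = TypeVar('B')

def proof(f: Callable[[A], Callable[[A], B]]) -> Callable[[A], B]:
    # the body is exactly the lambda-term  λx. (f x x)
    return lambda x: f(x)(x)

# Usage: instantiating A = B = int and f as curried addition,
# the "proof" computes x + x.
double = proof(lambda x: lambda y: x + y)
print(double(21))  # 42
```

Beta-reducing `proof(f)(x)` to `f(x)(x)` is the computational counterpart of cut-elimination mentioned in the abstract.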
Superdeduction at Work
Cited by 1 (1 self)
Abstract. Superdeduction is a systematic way to extend a deduction system, such as the sequent calculus, with new deduction rules computed from the user theory. We show how this can be done in a systematic, correct and complete way. We prove in detail the strong normalization of a proof term language that appropriately models superdeduction. We finally exemplify the usefulness of this approach with several examples, including equality and Noetherian induction; the approach is implemented in the lemuridæ system, written in TOM.
An Open Logical Framework
The LFP framework is an extension of the Harper-Honsell-Plotkin Edinburgh Logical Framework LF with external predicates, hence the name Open Logical Framework. This is accomplished by defining lock type constructors, which are a sort of ⋄-modality constructors releasing their argument under the condition that a possibly external predicate is satisfied on an appropriately typed judgement. Lock types are defined using the standard pattern of constructive type theory, i.e. via introduction, elimination, and equality rules. Using LFP, one can factor out the complexity of encoding specific features of logical systems which would otherwise be awkwardly encoded in LF, e.g. side-conditions in the application of rules in modal logics, and substructural rules, as in non-commutative linear logic. The idea of LFP is that these conditions need only be specified, while their verification can be delegated to an external proof engine, in the style of the Poincaré Principle or deduction modulo. Indeed, such paradigms can be adequately formalized in LFP. We investigate and characterize the meta-theoretical properties of the calculus underpinning LFP: strong normalization, confluence, and subject reduction. The latter property holds under the assumption that the predicates are well-behaved, i.e. closed under weakening, permutation, substitution, and reduction in the arguments.