Results 1–10 of 24
A New Deconstructive Logic: Linear Logic
, 1995
Abstract

Cited by 102 (11 self)
The main concern of this paper is the design of a noetherian and confluent normalization for LK 2 (that is, classical second-order predicate logic presented as a sequent calculus). The method we present is powerful, since it allows us to recover as fragments formalisms as seemingly different as Girard's LC and Parigot's λμ, FD ([9, 11, 27, 31]), delineates other viable systems as well, and gives means to extend the Krivine/Leivant paradigm of `programming with proofs' ([22, 23]) to classical logic; it is painless, since we reduce strong normalization and confluence to the same properties for linear logic (for non-additive proof nets, to be precise) using appropriate embeddings (so-called decorations); it is unifying: it organizes known solutions in a simple pattern that makes apparent the how and why of their making. A comparison of our method to that of embedding LK into LJ (intuitionistic sequent calculus) brings to the fore the latter's defects for these `deconstructi...
From Proof-Nets to Interaction Nets
 Advances in Linear Logic
, 1994
Abstract

Cited by 55 (1 self)
Introduction If we consider the interpretation of proofs as programs, say in intuitionistic logic, the question of equality between proofs becomes crucial: the syntax introduces meaningless distinctions whereas the (denotational) semantics makes excessive identifications. This question does not have a simple answer in general, but it leads to the notion of proof-net, which is one of the main novelties of linear logic. This has already been explained in [Gir87] and [GLT89]. The notion of interaction net introduced in [Laf90] comes from an attempt to implement the reduction of these proof-nets. It happens to be a simple model of parallel computation, and so it can be presented independently of linear logic, as in [Laf94]. However, we think that it is also useful to relate the exact origin of interaction nets, especially for readers with some knowledge of linear logic. We take this opportunity to give a survey of the theory of proof-nets, including a new proof of the sequentializ...
A Brief Guide to Linear Logic
, 1993
Abstract

Cited by 53 (8 self)
An overview of linear logic is given, including an extensive bibliography and a simple example of the close relationship between linear logic and computation.
The Barendregt Cube with Definitions and Generalised Reduction
, 1997
Abstract

Cited by 37 (17 self)
In this paper, we propose to extend the Barendregt Cube by generalising reduction and by adding definition mechanisms. We show that this extension satisfies all the original properties of the Cube, including Church-Rosser, Subject Reduction and Strong Normalisation. Keywords: Generalised Reduction, Definitions, Barendregt Cube, Church-Rosser, Subject Reduction, Strong Normalisation. Contents: 1 Introduction; 1.1 Why generalised reduction; 1.2 Why definition mechanisms; 1.3 The item notation for definitions and generalised reduction; 2 The item notation; 3 The ordinary typing relation and its properties; 3.1 The typing relation; 3.2 Properties of the ordinary typing relation; 4 Generalising reduction in the Cube; 4.1 The generalised...
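As a pointer to what "generalised reduction" means here (this is a common formulation from the literature, not a quotation from the paper): ordinary β-reduction can only contract a redex whose λ is directly applied to its argument, whereas generalised reduction lets a β-step fire through an intervening redex, e.g.

```latex
((\lambda x.\, \lambda y.\, M)\, A)\, B \;\longrightarrow_{g}\; (\lambda x.\, M[y := B])\, A
```

Here the inner abstraction λy consumes the argument B even though the redex (λx. …)A sits between them.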
First Order Linear Logic without Modalities Is NEXPTIME-Hard
 Theoretical Computer Science
, 1994
Abstract

Cited by 15 (11 self)
The decision problem is studied for the non-modal, or multiplicative-additive, fragment of first-order linear logic. This fragment is shown to be NEXPTIME-hard. The hardness proof combines Shapiro's logic programming simulation of nondeterministic Turing machines with the standard proof of the PSPACE-hardness of quantified Boolean formula validity, utilizing some of the surprisingly powerful and expressive machinery of linear logic. 1 Introduction Linear logic, introduced by Girard, is a resource-sensitive refinement of classical logic [10, 29]. Linear logic gains its expressive power by restricting the "structural" proof rules of contraction (copying) and weakening (erasing). The contraction rule makes it possible to reuse any stated assumption as often as desired. The weakening rule makes it possible to use dummy assumptions, i.e., it allows a deduction to be carried out without using all of the hypotheses. Because contraction and weakening together make it possible to use an assu...
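For reference, the two structural rules mentioned in this abstract, in standard textbook sequent-calculus notation (not quoted from the paper) — linear logic drops them in general and reinstates them only for !-prefixed formulas:

```latex
\frac{\Gamma,\, A,\, A \vdash \Delta}{\Gamma,\, A \vdash \Delta}\;(\text{contraction})
\qquad
\frac{\Gamma \vdash \Delta}{\Gamma,\, A \vdash \Delta}\;(\text{weakening})
```

Reading bottom-up, contraction copies an assumption and weakening discards one; restricting both is what makes linear logic resource-sensitive.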
On the semantic readings of proof-nets
 Proceedings of Formal Grammar
, 1996
"... In memory of ..."
On the Fine Structure of the Exponential Rule
 Advances in Linear Logic
, 1993
Abstract

Cited by 11 (4 self)
We present natural deduction systems for fragments of intuitionistic linear logic obtained by dropping weakening and contraction also on !-prefixed formulas. The systems are based on a two-dimensional generalization of the notion of sequent, which accounts for a clean formulation of the introduction/elimination rules of the modality. Moreover, the different subsystems are obtained in a modular way, by simple conditions on the elimination rule for !. For the proposed systems we introduce a notion of reduction and we prove a normalization theorem. 1. Introduction Proof theory of modalities is a delicate subject. The shape of the rules governing the different modalities in the overpopulated world of modal logics is often an example of what a good rule should not be. In the context of sequent calculus, if we want cut elimination, we are often forced to accept rules which are neither left nor right rules, and which completely destroy the deep symmetries the calculus is based upon. In the c...
Linear Logic, Comonads and Optimal Reductions
 Fundamenta Informaticae
, 1993
Abstract

Cited by 7 (3 self)
The paper discusses, in a categorical perspective, some recent works on optimal graph reduction techniques for the λ-calculus. In particular, we relate the two "brackets" in [GAL92a] to the two operations associated with the comonad "!" of Linear Logic. The rewriting rules can then be understood as a "local implementation" of naturality laws, that is, as the broadcasting of some information from the output to the inputs of a term, following its connected structure. 1 Introduction More than fifteen years ago, Lévy [Le78] proposed a theoretical notion of optimality for λ-calculus normalization. Roughly speaking, a reduction technique is optimal if it is able to exploit all the sharing expressed in the initial term, avoiding useless duplications. For a long time, no implementation was able to achieve Lévy's performance (see [Fie90] for a quick survey). People had already begun to doubt the existence of optimal evaluators, when Lamping and Kathail independently found a solution [Lam90,Ka90]...
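For orientation, the comonad structure of ! alluded to above consists, in standard categorical notation (not quoted from the paper), of two natural transformations, often read proof-theoretically as dereliction and digging:

```latex
\varepsilon_A : \;!A \to A
\qquad
\delta_A : \;!A \to \;!!A
```

subject to the comonad laws $\varepsilon_{!A} \circ \delta_A = \mathrm{id}_{!A}$, $!\varepsilon_A \circ \delta_A = \mathrm{id}_{!A}$, and $\delta_{!A} \circ \delta_A = \;!\delta_A \circ \delta_A$.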
On the invariance of the unitary cost model for head reduction (long version). Available at http://arxiv.org/abs/1202.1641
Abstract

Cited by 6 (3 self)
The λ-calculus is a widely accepted computational model of higher-order functional programs, yet there is no direct and universally accepted cost model for it. As a consequence, the computational difficulty of reducing λ-terms to their normal form is typically studied by reasoning on concrete implementation algorithms. In this paper, we show that when head reduction is the underlying dynamics, the unitary cost model is indeed invariant. This improves on known results, which only deal with weak (call-by-value or call-by-name) reduction. Invariance is proved by way of a linear calculus of explicit substitutions, which allows one to decompose any head reduction step in the λ-calculus into more elementary substitution steps, thus making the combinatorics of head reduction easier to reason about. The technique is also a promising tool to attack what we see as the main open problem, namely understanding for which normalizing strategies the unitary cost model is invariant, if any.
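To make "head reduction" and its unit-cost step count concrete, here is a minimal sketch of head reduction for untyped λ-terms in de Bruijn notation, where each head step costs 1. The term representation and all function names are our own illustrative choices; this is plain meta-level substitution, not the paper's linear calculus of explicit substitutions.

```python
# λ-terms as tuples: ('var', n), ('lam', body), ('app', fun, arg),
# with de Bruijn indices for variables.

def shift(t, d, c=0):
    """Shift the free variables of t up by d (indices < c are bound)."""
    tag = t[0]
    if tag == 'var':
        return ('var', t[1] + d) if t[1] >= c else t
    if tag == 'lam':
        return ('lam', shift(t[1], d, c + 1))
    return ('app', shift(t[1], d, c), shift(t[2], d, c))

def subst(t, s, j=0):
    """Substitute s for variable j in t, decrementing the freed indices."""
    tag = t[0]
    if tag == 'var':
        n = t[1]
        if n == j:
            return shift(s, j)          # adjust s for the binders crossed
        return ('var', n - 1) if n > j else t
    if tag == 'lam':
        return ('lam', subst(t[1], s, j + 1))
    return ('app', subst(t[1], s, j), subst(t[2], s, j))

def head_step(t):
    """Perform one head reduction step, or return None in head normal form."""
    if t[0] == 'lam':                    # go under the leading abstractions
        r = head_step(t[1])
        return ('lam', r) if r is not None else None
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                # head redex found: β-contract it
            return subst(f[1], a)
        r = head_step(f)                 # otherwise look down the left spine
        return ('app', r, a) if r is not None else None
    return None                          # a variable heads the spine

def head_normalize(t, limit=10_000):
    """Iterate head steps, counting each as one unit of cost."""
    steps = 0
    while steps < limit:
        r = head_step(t)
        if r is None:
            return t, steps
        t, steps = r, steps + 1
    raise RuntimeError("no head normal form within step limit")

# Example: K I I head-reduces to I in exactly two head steps.
I = ('lam', ('var', 0))                  # λx. x
K = ('lam', ('lam', ('var', 1)))         # λx. λy. x
result, steps = head_normalize(('app', ('app', K, I), I))
# → result == I, steps == 2
```

Under the unitary cost model discussed in the abstract, `steps` is taken as the time cost of the computation; the paper's contribution is showing this count is invariant, i.e. polynomially related to the cost on reasonable machine models.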