Results 1–10 of 54
Light Affine Logic
 ACM TRANSACTIONS ON COMPUTATIONAL LOGIC
, 1998
"... Much effort has been recently devoted to the study of polytime formal (and especially logical) systems [GSS92, LM93, Le94, Gi96]. The purpose of such systems is manyfold. On the theoretical side, they provide a better understanding of what is the logical essence of polytime reduction (and other comp ..."
Abstract

Cited by 56 (3 self)
Much effort has been recently devoted to the study of polytime formal (and especially logical) systems [GSS92, LM93, Le94, Gi96]. The purpose of such systems is manifold. On the theoretical side, they provide a better understanding of the logical essence of polytime reduction (and other complexity classes). On the practical side, via the well-known Curry-Howard correspondence, they yield sophisticated typing systems, where types provide (statically) an accurate upper bound on the complexity of the computation. Even more, the type annotations give essential information on the "efficient way" to reduce the term. The most promising of these logical systems is Girard's Light Linear Logic [Gi96] (see the same paper for a comparison with other approaches). In this paper, we introduce a slight variation of LLL, by adding full weakening (for this reason, we call it Light Affine Logic). This modification does not alter the good complexity properties of LLL: cut-elimination is still pol...
Scrap your Nameplate (Functional Pearl)
"... Recent research has shown how boilerplate code, or repetitive code for traversing datatypes, can be eliminated using generic programming techniques already available within some implementations of Haskell. One particularly intractable kind of boilerplate is nameplate, or code having to do with names ..."
Abstract

Cited by 17 (5 self)
Recent research has shown how boilerplate code, or repetitive code for traversing datatypes, can be eliminated using generic programming techniques already available within some implementations of Haskell. One particularly intractable kind of boilerplate is nameplate, or code having to do with names, name-binding, and fresh name generation. One reason for the difficulty is that operations on data structures involving names, as usually implemented, are not regular instances of standard map, fold, or zip operations. However, in nominal abstract syntax, an alternative treatment of names and binding based on swapping, operations such as α-equivalence, capture-avoiding substitution, and free variable set functions are much better-behaved.
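The swapping idea this abstract refers to can be illustrated in a few lines. The sketch below uses a hypothetical tuple-based term representation (not the paper's Haskell code): unlike substitution, swapping two names is a plain structural map, even under binders, and α-equivalence can be defined on top of it by swapping each binder with a common fresh name.

```python
# Minimal sketch of swapping-based names (nominal abstract syntax idea).
# Term representation (illustrative): ("var", x), ("lam", x, body), ("app", f, a).

def swap(a, b, t):
    """Swap names a and b everywhere in term t -- a regular structural map."""
    kind = t[0]
    if kind == "var":
        x = t[1]
        return ("var", b if x == a else a if x == b else x)
    if kind == "lam":
        x, body = t[1], t[2]
        return ("lam", b if x == a else a if x == b else x, swap(a, b, body))
    if kind == "app":
        return ("app", swap(a, b, t[1]), swap(a, b, t[2]))
    raise ValueError(kind)

def alpha_eq(t, s, fresh="z#"):
    """Alpha-equivalence via swapping; assumes `fresh` occurs in neither term."""
    if t[0] != s[0]:
        return False
    if t[0] == "var":
        return t[1] == s[1]
    if t[0] == "app":
        return alpha_eq(t[1], s[1], fresh) and alpha_eq(t[2], s[2], fresh)
    # lam: compare bodies after swapping each binder with the same fresh name
    return alpha_eq(swap(fresh, t[1], t[2]), swap(fresh, s[1], s[2]), fresh)

# λx. x  is alpha-equivalent to  λy. y
print(alpha_eq(("lam", "x", ("var", "x")), ("lam", "y", ("var", "y"))))  # True
```

Because swap never has to rename to avoid capture, it is exactly the kind of uniform traversal that generic programming machinery can derive automatically, which is the paper's point.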
Parallel Beta Reduction is Not Elementary Recursive
, 1998
"... We analyze the inherent complexity of implementing L'evy's notion of optimal evaluation for the calculus, where similar redexes are contracted in one step via socalled parallel fireduction. Optimal evaluation was finally realized by Lamping, who introduced a beautiful graph reduction technology ..."
Abstract

Cited by 13 (6 self)
We analyze the inherent complexity of implementing Lévy's notion of optimal evaluation for the λ-calculus, where similar redexes are contracted in one step via so-called parallel β-reduction. Optimal evaluation was finally realized by Lamping, who introduced a beautiful graph reduction technology for sharing evaluation contexts dual to the sharing of values. His pioneering insights have been modified and improved in subsequent implementations of optimal reduction. We prove that the cost of parallel β-reduction is not bounded by any Kalmár-elementary recursive function. Not merely do we establish that the parallel β-step cannot be a unit-cost operation, we demonstrate that the time complexity of implementing a sequence of n parallel β-steps is not bounded as O(2^n), O(2^(2^n)), O(2^(2^(2^n))), or, in general, O(K_ℓ(n)), where K_ℓ(n) is a fixed stack of ℓ 2s with an n on top. A key insight, essential to the establishment of this non-elementary lower bound, is that any simply...
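The family of bounds ruled out above can be made concrete. The sketch below (function name is illustrative) computes K_ℓ(n), a tower of ℓ 2s topped by n; the lower bound says that no fixed ℓ makes K_ℓ(n) an upper bound on the cost of n parallel β-steps.

```python
# Sketch of the Kalmar-elementary tower K_l(n): a stack of l twos with n on top.

def K(l, n):
    """Iterated exponential: K(0, n) = n, K(l+1, n) = 2 ** K(l, n)."""
    result = n
    for _ in range(l):
        result = 2 ** result
    return result

print(K(0, 5))  # 5
print(K(1, 5))  # 2**5 = 32
print(K(2, 5))  # 2**32 = 4294967296
print(K(3, 2))  # 2**(2**(2**2)) = 65536
```

Even K(3, n) already dwarfs any polynomial; the theorem says the true cost grows faster than K_ℓ(n) for every fixed ℓ.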
Deciding Monadic Theories of Hyperalgebraic Trees
"... We show that the monadic secondorder theory of any infinite tree generated by a higherorder grammar of level 2 subject to a certain syntactic restriction is decidable. By this we extend the result of Courcelle [7] that the MSO theory of a tree generated by a grammar of level 1 (algebraic) is decid ..."
Abstract

Cited by 12 (4 self)
We show that the monadic second-order theory of any infinite tree generated by a higher-order grammar of level 2, subject to a certain syntactic restriction, is decidable. By this we extend the result of Courcelle [7] that the MSO theory of a tree generated by a grammar of level 1 (algebraic) is decidable. To this end, we develop a technique of representing infinite trees by infinite lambda terms, in such a way that the MSO theory of a tree can be interpreted in the MSO theory of a lambda term.
From Hilbert Spaces to Dilbert Spaces: Context Semantics Made Simple
 IN 22ND CONFERENCE ON FOUNDATIONS OF SOFTWARE TECHNOLOGY AND THEORETICAL COMPUTER SCIENCE
, 2002
"... We give a firstprinciples description of the context semantics of Gonthier, Abadi, and Levy, a computerscience analogue of Girard's geometry of interaction. We explain how this denotational semantics models λcalculus, and more generally multiplicativeexponential linear logic (MELL), by expla ..."
Abstract

Cited by 10 (3 self)
We give a first-principles description of the context semantics of Gonthier, Abadi, and Lévy, a computer-science analogue of Girard's geometry of interaction. We explain how this denotational semantics models the λ-calculus, and more generally multiplicative-exponential linear logic (MELL), by explaining the call-by-name (CBN) coding of the λ-calculus, and proving the correctness of readback, where the normal form of a λ-term is recovered from its semantics. This analysis yields the correctness of Lamping's optimal reduction algorithm. We relate the context semantics to linear logic types and to ideas from game semantics, used to prove full abstraction theorems for PCF and other λ-calculus variants.
Types, potency, and idempotency: why nonlinearity and amnesia make a type system work
 In ICFP ’04: Proceedings of the ninth ACM SIGPLAN international conference on Functional programming, 138–149, ACM
, 2004
"... Useful type inference must be faster than normalization. Otherwise, you could check safety conditions by running the program. We analyze the relationship between bounds on normalization and type inference. We show how the success of type inference is fundamentally related to the amnesia of the type ..."
Abstract

Cited by 8 (1 self)
Useful type inference must be faster than normalization. Otherwise, you could check safety conditions by running the program. We analyze the relationship between bounds on normalization and type inference. We show how the success of type inference is fundamentally related to the amnesia of the type system: the nonlinearity by which all instances of a variable are constrained to have the same type. Recent work on intersection types has advocated their usefulness for static analysis and modular compilation. We analyze System I (and some instances of its descendant, System E), an intersection type system with a type inference algorithm. Because System I lacks idempotency, each occurrence of a variable requires a distinct type. Consequently, type inference is equivalent to normalization in every single case, and time bounds on type inference and normalization are identical. Similar relationships hold for other intersection type systems without idempotency. The analysis is founded on an investigation of the relationship between linear logic and intersection types. We show a lockstep correspondence between normalization and type inference. The latter shows the promise of intersection types to facilitate static analyses of varied granularity, but also belies an immense challenge: to add amnesia to such analysis without losing all of its benefits.
Proof nets, Garbage, and Computations
, 1997
"... We study the problem of local and asynchronous computation in the context of multiplicative exponential linear logic (MELL) proof nets. The main novelty isin a complete set of rewriting rules for cutelimination in presence of weakening (which requires garbage collection). The proposed reduction s ..."
Abstract

Cited by 7 (6 self)
We study the problem of local and asynchronous computation in the context of multiplicative exponential linear logic (MELL) proof nets. The main novelty is in a complete set of rewriting rules for cut-elimination in the presence of weakening (which requires garbage collection). The proposed reduction system is strongly normalizing and confluent.
(Optimal) duplication is not elementary recursive
 Information and Computation
, 2000
"... In the last ten years there has been a steady interest in optimal reduction of terms (or, more generally, of functional programs). The very story started, in fact, more than twenty ..."
Abstract

Cited by 7 (2 self)
In the last ten years there has been a steady interest in optimal reduction of λ-terms (or, more generally, of functional programs). The very story started, in fact, more than twenty
Optimizing optimal reduction. A type inference algorithm for elementary affine logic
 ACM Transactions on Computational Logic
"... We propose a type inference algorithm for lambda terms in Elementary Affine Logic (EAL). The algorithm decorates the syntax tree of a simple typed lambda term and collects a set of linear constraints. The result is a parametric elementary type that can be instantiated with any solution of the set of ..."
Abstract

Cited by 6 (0 self)
We propose a type inference algorithm for lambda terms in Elementary Affine Logic (EAL). The algorithm decorates the syntax tree of a simply typed lambda term and collects a set of linear constraints. The result is a parametric elementary type that can be instantiated with any solution of the set of collected constraints. We point out that the typeability of lambda terms in EAL has a practical counterpart, since any EAL-typeable lambda term can be reduced with Lamping's abstract algorithm, obtaining a substantial performance improvement. We show how to apply the same techniques to obtain decorations of intuitionistic proofs into Linear Logic proofs.
The Evaluation of First-Order Substitution is Monadic Second-Order Compatible
"... We denote firstorder substitutions of finite and infinite terms by function symbols indexed by the sequences of firstorder variables to which substitutions are made. We consider the evaluation mapping from infinite terms to infinite terms that evaluates these substitution operations. This mapping ..."
Abstract

Cited by 5 (1 self)
We denote first-order substitutions of finite and infinite terms by function symbols indexed by the sequences of first-order variables to which substitutions are made. We consider the evaluation mapping from infinite terms to infinite terms that evaluates these substitution operations. This mapping may perform infinitely many nested substitutions, so that a term which has the structure of an infinite string can be transformed into one isomorphic to an infinite binary tree. We prove that this mapping is monadic second-order compatible, which means that, for all finite sets of function symbols and variables, a monadic second-order formula expressing a property of the output term produced by the evaluation mapping can be translated into a monadic second-order formula expressing this property over the input term. This implies that deciding the monadic second-order theory of the output term reduces to deciding that of the input term. As an application, we obtain another proof that the monadic second-order properties of the algebraic trees, which represent the behaviours of recursive applicative program schemes, are decidable. This proof extends to hyperalgebraic trees. These infinite trees correspond to certain recursive program schemes with functional parameters of arbitrarily high type.
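The indexed substitution operators described in this abstract can be illustrated on finite terms. The sketch below (node layout and names are hypothetical; the paper works with infinite terms) represents a node sub_{x1,...,xk}(t, t1, ..., tk) explicitly in the term and evaluates all such nodes bottom-up.

```python
# Finite-term sketch of evaluating indexed substitution operators.
# Node forms (illustrative): ("var", x), ("sub", [x1, ..., xk], body, t1, ..., tk),
# and ordinary function symbols ("f", child1, ..., childn).

def evaluate(t):
    """Evaluate every substitution operator in a finite term, bottom-up."""
    op = t[0]
    if op == "var":
        return t
    if op == "sub":
        _, xs, body, *args = t
        env = dict(zip(xs, [evaluate(a) for a in args]))
        return _apply(evaluate(body), env)
    # ordinary function symbol: evaluate the children
    return (op, *[evaluate(c) for c in t[1:]])

def _apply(t, env):
    """Replace each variable of env in a substitution-free term."""
    if t[0] == "var":
        return env.get(t[1], t)
    return (t[0], *[_apply(c, env) for c in t[1:]])

# sub_x( f(x, x), g(a) )  evaluates to  f(g(a), g(a))
term = ("sub", ["x"], ("f", ("var", "x"), ("var", "x")), ("g", ("a",)))
print(evaluate(term))  # ('f', ('g', ('a',)), ('g', ('a',)))
```

On infinite terms the analogous evaluation may never terminate, which is why the paper's MSO-compatibility result (translating formulas about the output back to formulas about the input) is the right tool there.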