Results 1 – 6 of 6
Parallel Beta Reduction is Not Elementary Recursive
, 1998
"... We analyze the inherent complexity of implementing L'evy's notion of optimal evaluation for the calculus, where similar redexes are contracted in one step via socalled parallel fireduction. Optimal evaluation was finally realized by Lamping, who introduced a beautiful graph reduction technology ..."
Abstract

Cited by 13 (6 self)
We analyze the inherent complexity of implementing Lévy's notion of optimal evaluation for the λ-calculus, where similar redexes are contracted in one step via so-called parallel β-reduction. Optimal evaluation was finally realized by Lamping, who introduced a beautiful graph reduction technology for sharing evaluation contexts dual to the sharing of values. His pioneering insights have been modified and improved in subsequent implementations of optimal reduction. We prove that the cost of parallel β-reduction is not bounded by any Kalmár elementary recursive function. Not merely do we establish that the parallel β-step cannot be a unit-cost operation, we demonstrate that the time complexity of implementing a sequence of n parallel β-steps is not bounded as O(2^n), O(2^(2^n)), O(2^(2^(2^n))), or in general, O(K_ℓ(n)), where K_ℓ(n) is a fixed stack of ℓ 2s with an n on top. A key insight, essential to the establishment of this nonelementary lower bound, is that any simply...
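To make the bound concrete, the tower function K_ℓ(n) from the abstract (a stack of ℓ 2s with an n on top) can be sketched in a few lines; this is a minimal illustration, and the helper name `K` is an assumption, not notation from the paper:

```python
# K(l, n): a stack of l 2s with n on top; K(0, n) = n.
# The name `K` mirrors the abstract's K_l(n) and is otherwise illustrative.
def K(l, n):
    r = n
    for _ in range(l):
        r = 2 ** r
    return r

print(K(1, 4))            # 2^4 = 16
print(K(2, 4))            # 2^(2^4) = 65536
print(len(str(K(3, 4))))  # decimal digits of 2^65536 -> 19729
```

The point of the lower bound is that no fixed ℓ suffices: the cost of implementing n parallel β-steps eventually exceeds K_ℓ(n) for every fixed height ℓ.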
Proof nets, Garbage, and Computations
, 1997
"... We study the problem of local and asynchronous computation in the context of multiplicative exponential linear logic (MELL) proof nets. The main novelty isin a complete set of rewriting rules for cutelimination in presence of weakening (which requires garbage collection). The proposed reduction s ..."
Abstract

Cited by 7 (6 self)
We study the problem of local and asynchronous computation in the context of multiplicative exponential linear logic (MELL) proof nets. The main novelty is in a complete set of rewriting rules for cut-elimination in the presence of weakening (which requires garbage collection). The proposed reduction system is strongly normalizing and confluent.
Parsing MELL proof nets
 In TLCA
, 1997
"... We propose a new formulation for full (weakening and constants included) multiplicative and exponential (MELL) proof nets, allowing a complete set of rewriting rules to parse them. The recognizing grammar de ned by such a rewriting system (con uent and strong normalizing on the new proof nets) gives ..."
Abstract

Cited by 6 (2 self)
We propose a new formulation for full (weakening and constants included) multiplicative and exponential (MELL) proof nets, allowing a complete set of rewriting rules to parse them. The recognizing grammar defined by such a rewriting system (confluent and strongly normalizing on the new proof nets) gives a correctness criterion that we show equivalent to the Danos-Regnier one.
L'Implémentation Optimale des Langages Fonctionnels (The Optimal Implementation of Functional Languages)
, 1998
"... Machine. 4 ' & $ % Implementing fireduction: Graph Machines  take the syntax tree of expressions. Example  (fun x: x + 4)5 is represented by: app fun x 5 + 4 x  compute with the graphical fireduction: fun x x x e' e' e' e e app 5 ' & $ % Example: The computation of (fun x: x + 4)5 app ..."
Abstract
 Add to MetaCart
Implementing β-reduction with graph machines: take the syntax tree of expressions. Example: (fun x: x + 4) 5 is represented as an application node app whose left child is fun x: x + 4 and whose right child is 5. Computation proceeds by graphical β-reduction: the application rewrites to the body x + 4 with x pointing to the argument 5, so (fun x: x + 4) 5 reduces to 5 + 4 and then to 9. Machines: Hughes' supercombinators. The main problem: recomputing already computed expressions.
• Computing needed redexes may duplicate work. Example: (fun x: x + x)(5 + 4) → (5 + 4) + (5 + 4).
• Computing arguments may cause useless work. Example: (fun x: 5)(4 + 5) → (fun x: 5) 9.
Theorem (Barendregt-Klop). There is no effective strategy that computes in a minimal number of steps. In the typed λ-calculus (strong normalization): Theorem (Schwichtenberg). There are typed λ-terms that normalize in O(2^(2^(···^2))) steps, a tower of n 2s, where n is the size of the term. Sharing Nodes (Wadsworth): Overc...
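The duplication problem for (fun x: x + x)(5 + 4) can be illustrated with a minimal Python sketch; the names `arg` and `shared`, and the use of memoisation as a stand-in for a shared graph node, are illustrative assumptions, not the graph machinery described above:

```python
# Contrast call-by-name duplication with call-by-need sharing
# for the term (fun x: x + x)(5 + 4).
from functools import lru_cache

evaluations = 0

def arg():
    """The argument redex 5 + 4; counts how often it is actually reduced."""
    global evaluations
    evaluations += 1
    return 5 + 4

# Call-by-name: substituting the unevaluated argument duplicates the redex,
# so it is reduced twice.
result = arg() + arg()          # (5 + 4) + (5 + 4)
assert (result, evaluations) == (18, 2)

# Call-by-need: a shared node is evaluated at most once and the value reused.
evaluations = 0
shared = lru_cache(maxsize=None)(arg)
result = shared() + shared()
assert (result, evaluations) == (18, 1)
```

In a graph machine the memoised call corresponds to both occurrences of x pointing at one shared node for 5 + 4, which is overwritten with 9 the first time it is reduced.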
Theory, Languages
"... In this paper we investigate laziness and optimal evaluation strategies for functional programming languages. We consider the weak λcalculus as a basis of functional programming languages, and we adapt to this setting the concepts of optimal reductions that were defined for the full λcalculus. We ..."
Abstract
In this paper we investigate laziness and optimal evaluation strategies for functional programming languages. We consider the weak λ-calculus as a basis of functional programming languages, and we adapt to this setting the concepts of optimal reduction that were defined for the full λ-calculus. We prove that the usual implementation of call-by-need using sharing is optimal; that is, normalizing any λ-term with call-by-need requires exactly the same number of reduction steps as the shortest reduction sequence in the weak λ-calculus without sharing. Furthermore, we prove that optimal reduction sequences without sharing are not computable. Hence sharing is the only computable means to reach weak optimality.
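The sharing mechanism behind call-by-need can be sketched as an updateable suspension that is forced at most once; this is a minimal model of the standard thunk technique, and the `Thunk` class is a hypothetical helper, not the paper's formal weak λ-calculus:

```python
class Thunk:
    """A shared, updateable suspension modelling call-by-need sharing."""
    def __init__(self, compute):
        self.compute = compute   # the suspended argument computation
        self.forced = False
        self.value = None
        self.steps = 0           # how many times the redex was contracted

    def force(self):
        if not self.forced:      # evaluate at most once, then update in place
            self.value = self.compute()
            self.steps += 1
            self.forced = True
        return self.value

# The argument (5 + 4) of (fun x: x + x)(5 + 4), shared between both uses of x:
t = Thunk(lambda: 5 + 4)
assert t.force() + t.force() == 18
assert t.steps == 1   # the argument redex is contracted exactly once
```

Forcing the thunk once and reusing the cached value is exactly the step-count behaviour the optimality result describes: the shared reduction never repeats work that an unshared sequence could avoid.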
Jumping around the box: Graphical and operational studies on λcalculus and Linear Logic
"... 1.1 λtrees, λjdags and sharing..................... 10 1.2 The structural λcalculus...................... 13 1.2.1 Using λj to revisit λcalculus................ 14 ..."
Abstract
1.1 λ-trees, λj-dags and sharing ..................... 10
1.2 The structural λ-calculus ...................... 13
1.2.1 Using λj to revisit the λ-calculus ................ 14