Results 1 – 9 of 9
Optimality and Inefficiency: What Isn't a Cost Model of the Lambda Calculus?
 In Proceedings of the 1996 ACM SIGPLAN International Conference on Functional Programming
, 1996
Abstract

Cited by 19 (2 self)
We investigate the computational efficiency of the sharing graphs of Lamping [Lam90], Gonthier, Abadi, and Lévy [GAL92], and Asperti [Asp94], designed to effect so-called optimal evaluation, with the goal of reconciling optimality, efficiency, and the clarification of reasonable cost models for the λ-calculus. Do these graphs suggest reasonable cost models for the λ-calculus? If they are optimal, are they efficient? We present a brief survey of these optimal evaluators, identifying their common characteristics, as well as their shared failures. We give a lower bound on the efficiency of sharing graphs by identifying a class of terms that are normalizable in Θ(n) time, and require Θ(n) "fan interactions," but require Ω(2^n) bookkeeping steps. For [GAL92], we analyze this anomaly in terms of the dynamic maintenance of de Bruijn indices for intermediate terms. We give another lower bound showing that sharing graphs can do Ω(2^n) work (via fan interactio...
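The "dynamic maintenance of de Bruijn indices" mentioned in the abstract can be made concrete with a minimal sketch. The `Var`/`Lam`/`App` representation and the `shift` helper below are our illustrative assumptions, not definitions from the paper: when a term is copied under a binder, every free index in it must be renumbered, and this is exactly the kind of bookkeeping work the lower bound counts.

```python
from dataclasses import dataclass

# Hypothetical de Bruijn representation: Var(k) refers to the k-th
# enclosing lambda binder (0 = innermost).
@dataclass
class Var:
    k: int

@dataclass
class Lam:
    body: object

@dataclass
class App:
    fn: object
    arg: object

def shift(t, d, cutoff=0):
    """Add d to every free index (those >= cutoff) in term t.

    Indices below the cutoff are bound by binders inside t and must
    stay fixed; only the free ones are renumbered.
    """
    if isinstance(t, Var):
        return Var(t.k + d) if t.k >= cutoff else t
    if isinstance(t, Lam):
        # Entering a binder: one more index becomes bound.
        return Lam(shift(t.body, d, cutoff + 1))
    return App(shift(t.fn, d, cutoff), shift(t.arg, d, cutoff))
```

For example, in λ. (0 1) the index 0 is bound and 1 is free, so `shift(Lam(App(Var(0), Var(1))), 1)` leaves the bound variable alone and yields `Lam(App(Var(0), Var(2)))`.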
Parallel Beta Reduction is Not Elementary Recursive
, 1998
Abstract

Cited by 14 (6 self)
We analyze the inherent complexity of implementing Lévy's notion of optimal evaluation for the λ-calculus, where similar redexes are contracted in one step via so-called parallel β-reduction. Optimal evaluation was finally realized by Lamping, who introduced a beautiful graph reduction technology for sharing evaluation contexts dual to the sharing of values. His pioneering insights have been modified and improved in subsequent implementations of optimal reduction. We prove that the cost of parallel β-reduction is not bounded by any Kalmár-elementary recursive function. Not merely do we establish that the parallel β-step cannot be a unit-cost operation, we demonstrate that the time complexity of implementing a sequence of n parallel β-steps is not bounded as O(2^n), O(2^{2^n}), O(2^{2^{2^n}}), or in general, O(K_ℓ(n)), where K_ℓ(n) is a fixed stack of ℓ 2s with an n on top. A key insight, essential to the establishment of this nonelementary lower bound, is that any simply...
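The bound O(K_ℓ(n)) in the abstract refers to the family of iterated exponentials. A small sketch (the function name `K` is ours) makes the growth rate concrete:

```python
def K(level, n):
    """A fixed stack of `level` 2s with n on top:
    K(0, n) = n, and K(level, n) = 2 ** K(level - 1, n)."""
    return n if level == 0 else 2 ** K(level - 1, n)

# K(1, 3) = 2**3 = 8, K(2, 3) = 2**8 = 256, and K(3, 3) = 2**256 is
# already astronomically large. The nonelementary lower bound says that
# NO fixed level of this hierarchy bounds the cost of n parallel β-steps.
```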
Coherence for sharing proofnets
 Proceedings of the 7th International Conference on Rewriting Techniques and Applications (RTA '96), LNCS 1103
, 1996
Abstract

Cited by 11 (7 self)
Sharing graphs are an implementation of linear logic proof-nets in such a way that their reduction never duplicates a redex. In their usual formulations, proof-nets present a problem of coherence: if the proof-net N reduces by standard cut-elimination to N′, then, by reducing the sharing graph of N, we do not obtain the sharing graph of N′. We solve this problem by changing the way the information is coded into sharing graphs and introducing a new reduction rule (absorption). The rewriting system is confluent and terminating. The proof of this fact exploits an algebraic semantics for sharing graphs.
(Optimal) duplication is not elementary recursive
 In Proceedings of the 27th ACM SIGPLAN–SIGACT Symposium on Principles of Programming Languages (POPL '00)
, 2000
Abstract

Cited by 8 (2 self)
In 1998 Asperti and Mairson proved that the cost of reducing a lambda-term using an optimal lambda-reducer (à la Lévy) cannot be bounded by any elementary function in the number of shared-beta steps. We prove in this paper that an analogous result holds for Lamping's abstract algorithm. That is, there is no elementary function in the number of shared-beta steps bounding the number of duplication steps of the optimal reducer. This theorem vindicates the oracle of Lamping's algorithm as the culprit for the negative result of Asperti and Mairson. The result is obtained using Elementary Affine Logic as a technical tool. Key words: complexity, elementary affine logic, graph rewriting, optimal reduction
On Global Dynamics of Optimal Graph Reduction
 1997 ACM International Conference on Functional Programming
, 1997
Abstract

Cited by 7 (1 self)
Optimal graph reduction technology for the λ-calculus, as developed by Lamping, with modifications by Asperti, Gonthier, Abadi, and Lévy, has a well-understood local dynamics based on a standard menagerie of reduction rules, as well as a global context semantics based on Girard's geometry of interaction. However, the global dynamics of graph reduction has not been subject to careful investigation. In particular, graphs lose their structural resemblance to terms after only a few graph reduction steps, and little is known about graph reduction strategies that maintain efficiency or structure. While the context semantics provides global information about the computation, its use as part of a reduction strategy seems computationally infeasible. We propose a tractable graph reduction strategy that preserves computationally relevant global structure, and allows us to efficiently bound the computational resources needed to implement optimal reduction. A simple canonical representation for gr...
A general theory of sharing graphs
 Theoret. Comput. Sci.
, 1999
Abstract

Cited by 4 (3 self)
Sharing graphs are the structures introduced by Lamping to implement optimal reductions of the λ-calculus. Gonthier's reformulation of Lamping's technique inside Geometry of Interaction, and Asperti and Laneve's work on Interaction Systems, have shown that sharing graphs can be used to implement a wide class of calculi. Here, we give a general characterization of sharing graphs independent of the calculus to be implemented. Such a characterization rests on an algebraic semantics of sharing graphs exploiting the methods of Geometry of Interaction. By this semantics we can define an unfolding partial order between proper sharing graphs, whose minimal elements are unshared graphs. The least-shared instance of a sharing graph is the unique unshared graph that the unfolding partial order associates to it. The algebraic semantics allows us to prove that we can associate a semantical readback to each unshared graph and that such a readback can be computed.
The Weak Lambda Calculus as a Reasonable Machine
, 2006
Abstract

Cited by 4 (2 self)
We define a new cost model for the call-by-value λ-calculus satisfying the invariance thesis. That is, under the proposed cost model, Turing machines and the call-by-value λ-calculus can simulate each other within a polynomial time overhead. The model only relies on combinatorial properties of usual β-reduction, without any reference to a specific machine or evaluator. In particular, the cost of a single β-reduction is proportional to the difference between the size of the redex and the size of the reduct. In this way, the total cost of normalizing a λ-term takes into account the size of all intermediate results (as well as the number of steps to normal form).
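The size-difference idea can be sketched in a few lines. The toy tuple representation, the naive capture-unsafe substitution, and the exact charging rule below are our illustrative assumptions, not the paper's definitions; they only show what "size of the redex" versus "size of the reduct" measures.

```python
# Toy λ-terms: ('var', x), ('lam', x, body), ('app', f, a).
def size(t):
    """Number of constructors in a term."""
    if t[0] == 'var':
        return 1
    if t[0] == 'lam':
        return 1 + size(t[2])
    return 1 + size(t[1]) + size(t[2])

def subst(t, x, v):
    """Naive substitution of v for x in t (assumes no variable capture)."""
    if t[0] == 'var':
        return v if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def beta_step_cost(redex):
    """Charge one β-step by the size change it causes (sketched rule)."""
    _, (_, x, body), arg = redex  # redex = ('app', ('lam', x, body), arg)
    reduct = subst(body, x, arg)
    return abs(size(redex) - size(reduct)), reduct

# (\x. x x) (\y. y): the redex has size 7, the reduct (\y. y)(\y. y)
# has size 5, so this step is charged 2 under the sketched rule.
```

Charging by size change means a β-step that duplicates a large argument is expensive, which is exactly how the model accounts for the size of intermediate terms.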
An Invariant Cost Model for the Lambda Calculus
, 2005
Abstract
We define a new cost model for the call-by-value λ-calculus satisfying the invariance thesis. That is, under the proposed cost model, Turing machines and the call-by-value λ-calculus can simulate each other within a polynomial time overhead. The model only relies on combinatorial properties of usual β-reduction, without any reference to a specific machine or evaluator. In particular, the cost of a single β-reduction is proportional to the difference between the size of the redex and the size of the reduct. In this way, the total cost of normalizing a λ-term takes into account the size of all intermediate results (as well as the number of steps to normal form).