Results 1–10 of 10
Thunks and the λ-calculus
 Journal of Functional Programming, RS976, Olivier Danvy and Ulrik
, 1997
Abstract

Cited by 22 (9 self)
Plotkin, in his seminal article Call-by-name, call-by-value and the λ-calculus, formalized evaluation strategies and simulations using operational semantics and continuations. In particular, he showed how call-by-name evaluation could be simulated under call-by-value evaluation and vice versa. Since Algol 60, however, call-by-name is both implemented and simulated with thunks rather than with continuations. We recast
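The thunk-based simulation the abstract refers to is easy to make concrete. Below is a minimal Python sketch (our illustration, not the paper's formal thunk translation): under call-by-value, arguments are evaluated before the call, so call-by-name is simulated by wrapping each argument in a parameterless function (a thunk) and forcing it only where the value is actually needed.

```python
def divergent():
    # stands in for an argument whose evaluation fails (or loops)
    raise RuntimeError("argument was evaluated")

def const_cbv(x, y):
    # call-by-value: both x and y are already evaluated on entry
    return x

def const_cbn(x, y):
    # thunk-simulated call-by-name: x and y are thunks;
    # only x is forced, so y is never evaluated
    return x()

# Under call-by-value, passing a failing argument fails even though
# the function never uses it:
try:
    const_cbv(42, divergent())
    eager_ok = True
except RuntimeError:
    eager_ok = False

# Under the thunk simulation, the unused argument is never forced:
lazy_result = const_cbn(lambda: 42, lambda: divergent())

assert eager_ok is False
assert lazy_result == 42
```

The continuation-based simulation that Plotkin studied achieves the same effect by different means; thunks are simply the form the simulation has taken in implementations since Algol 60.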
Functional Back-Ends within the Lambda-Sigma Calculus
, 1996
Abstract

Cited by 20 (0 self)
We define a weak λ-calculus, λσw, as a subsystem of the full λ-calculus with explicit substitutions λσ⇑. We claim that λσw could be the archetypal output language of functional compilers, just as the λ-calculus is their universal input language. Furthermore, λσ⇑ could be the adequate theory to establish the correctness of simplified functional compilers. Here, we illustrate these claims by proving the correctness of four simplified compilers and runtime systems modeled as abstract machines. The four machines we prove are the Krivine machine, the SECD, the FAM and the CAM. Thereby, we give the first formal proofs of Cardelli's FAM and of its compiler.
A Taxonomy of Functional Language Implementations Part II: Call-by-Name, Call-by-Need and Graph Reduction
, 1996
Abstract

Cited by 11 (5 self)
In Part I [5], we proposed an approach to formally describe and compare functional language implementations. We focused on call-by-value and described well-known compilers for strict languages. Here, we complete our exploration of the design space of implementations by studying call-by-name, call-by-need and graph reduction. We express the whole compilation process as a succession of program transformations in a common framework. At each step, different transformations model fundamental choices or optimizations. We describe and compare the diverse alternatives for the compilation of the call-by-name strategy in both environment and graph-based models. The different options for the compilation of β-reduction described in [5] can be applied here as well. Instead, we describe other possibilities specific to graph reduction. Call-by-need is nothing but call-by-name with redex sharing and update. We present how sharing can be expressed in our framework and we describe different...
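The remark that call-by-need is just call-by-name with redex sharing and update has a direct operational reading. In the minimal Python sketch below (our illustration, outside the paper's framework), a call-by-need thunk runs its suspension at most once and then overwrites itself with the result, so every later use shares the already-computed value.

```python
class Thunk:
    """Call-by-need: a call-by-name thunk plus sharing and update.
    The suspended computation runs at most once; its result updates
    the thunk, so all subsequent forces reuse the shared value."""
    def __init__(self, compute):
        self.compute = compute
        self.forced = False
        self.value = None

    def force(self):
        if not self.forced:
            self.value = self.compute()
            self.forced = True
            self.compute = None  # drop the closure, as an updated graph node would
        return self.value

counter = {'calls': 0}
def expensive():
    counter['calls'] += 1
    return 21 * 2

t = Thunk(expensive)
assert t.force() == 42 and t.force() == 42
assert counter['calls'] == 1  # evaluated once, shared thereafter
```

In a graph-reduction setting the same idea appears as overwriting the root of a reduced redex with its result, so that all pointers to it see the updated node.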
Linear Logic, Comonads and Optimal Reductions
 Fundamenta Informaticae
, 1993
Abstract

Cited by 7 (3 self)
The paper discusses, in a categorical perspective, some recent works on optimal graph reduction techniques for the λ-calculus. In particular, we relate the two "brackets" in [GAL92a] to the two operations associated with the comonad "!" of Linear Logic. The rewriting rules can then be understood as a "local implementation" of naturality laws, that is, as the broadcasting of some information from the output to the inputs of a term, following its connected structure. 1 Introduction. More than fifteen years ago, Lévy [Le78] proposed a theoretical notion of optimality for λ-calculus normalization. Roughly speaking, a reduction technique is optimal if it is able to profit from all the sharing expressed in the initial term, avoiding useless duplications. For a long time, no implementation was able to achieve Lévy's performance (see [Fie90] for a quick survey). People had already started to doubt the existence of optimal evaluators when Lamping and Kathail independently found a solution [Lam90, Ka90]...
A Systematic Study of Functional Language Implementations
 ACM Transactions on Programming Languages and Systems
, 1998
Abstract

Cited by 7 (3 self)
We introduce a unified framework to describe, relate, compare and classify functional language implementations. The compilation process is expressed as a succession of program transformations in the common framework. At each step, different transformations model fundamental choices. A benefit of this approach is to structure and decompose the implementation process. The correctness proofs can be tackled independently for each step and amount to proving program transformations in the functional world. This approach also paves the way to formal comparisons by making it possible to estimate the complexity of individual transformations or compositions of them. Our study aims at covering the whole known design space of sequential functional language implementations. In particular, we consider call-by-value, call-by-name and call-by-need reduction strategies as well as environment and graph-based implementations. We describe for each compilation step the diverse alternatives as program tr...
The Next 700 Krivine Machines
 N/P
, 2005
Abstract

Cited by 2 (0 self)
The Krivine machine is a simple and natural implementation of the normal weak-head reduction strategy for pure λ-terms. While its original description has remained unpublished, this machine has served as a basis for many variants, extensions and theoretical studies. In this paper, we present the Krivine machine and some well-known variants in a common framework. Our framework consists of a hierarchy of intermediate languages that are subsets of the λ-calculus. The whole implementation process (compiler + abstract machine) is described via a sequence of transformations all of which express an implementation choice. We characterize the essence of the Krivine machine and locate it in the design space of functional language implementations. We show that, even within the particular class of Krivine machines, hundreds of variants can be designed.
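For readers unfamiliar with the machine, its three transitions fit in a few lines. The following Python sketch (our encoding, using 0-based de Bruijn indices; not the paper's presentation) reduces a closed λ-term to weak head normal form: applications push argument closures onto the stack, abstractions pop one closure into the environment, and variables enter the closure they are bound to.

```python
# Pure de Bruijn lambda-terms: ('var', n), ('lam', t), ('app', t, u); n is 0-based.
def krivine(term):
    """Weak-head reduction of a closed lambda-term with the Krivine machine.
    State: a closure (term, environment) plus a stack of argument closures;
    an environment maps de Bruijn indices to closures."""
    env, stack = [], []
    while True:
        tag = term[0]
        if tag == 'app':                  # push the argument, unevaluated
            stack.append((term[2], env))
            term = term[1]
        elif tag == 'lam':
            if not stack:                 # no argument left: weak head normal form
                return (term, env)
            env = [stack.pop()] + env     # bind the top closure to index 0
            term = term[1]
        else:                             # 'var': enter the bound closure
            term, env = env[term[1]]

# (\x.\y. x) applied to the identity and one more argument stops on the identity
K = ('lam', ('lam', ('var', 1)))
I = ('lam', ('var', 0))
J = ('lam', ('lam', ('var', 0)))
whnf, _ = krivine(('app', ('app', K, I), J))
assert whnf == I
```

Note that the machine never evaluates an argument before pushing it: the stack holds unevaluated closures, which is exactly why it implements call-by-name rather than call-by-value.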
Programmation Fonctionnelle Et Parallélisme: Une Approche Pragmatique
, 1994
Abstract
We introduce DPML, an intermediate-level portable language for massively parallel programming designed as an extension of Mini-ML. Its parallel execution mode generalises data-parallelism and features explicit localisations and communications. Unlike imperative parallel languages with explicit communications, DPML is deterministic. A DPML program is seen as a static vector of ML programs communicating through remote evaluation and a global protocol. The language's implementation reuses that of Caml and does not require a distributed GC.
From λσ to λυ: a Journey through Calculi of Explicit Substitutions
 Proceedings of the 21st Annual ACM Symposium on Principles Of Programming Languages
, 1994
Abstract
This paper gives a systematic description of several calculi of explicit substitutions. These systems are orthogonal and have easy proofs of termination of their substitution calculus. The last system, called λυ, entails a very simple environment machine for strong normalization of λ-terms.
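To give a feel for such calculi, here is a small Python sketch of λυ-style rewrite rules (our encoding and constructor names, using 1-based de Bruijn indices): β-reduction creates an explicit substitution node instead of substituting at once, and separate first-class rules push substitutions through applications, abstractions and variables.

```python
# Terms: ('var', n) with 1-based de Bruijn index n, ('lam', t), ('app', t, u),
# and an explicit substitution ('sub', t, s).
# Substitutions: ('slash', u) = u/  (replace index 1 by u)
#                ('lift', s)  = lift(s)  (protect the freshly bound variable)
#                ('shift',)   = shift  (increment free indices)

def step(t):
    """One leftmost-outermost rewrite step; returns None on a normal form."""
    tag = t[0]
    if tag == 'app' and t[1][0] == 'lam':            # Beta: (lam t) u -> t[u/]
        return ('sub', t[1][1], ('slash', t[2]))
    if tag == 'sub':
        body, s = t[1], t[2]
        if body[0] == 'app':                          # (t u)[s] -> t[s] u[s]
            return ('app', ('sub', body[1], s), ('sub', body[2], s))
        if body[0] == 'lam':                          # (lam t)[s] -> lam t[lift s]
            return ('lam', ('sub', body[1], ('lift', s)))
        if body[0] == 'var':
            n = body[1]
            if s[0] == 'slash':                       # 1[u/] -> u, (n+1)[u/] -> n
                return s[1] if n == 1 else ('var', n - 1)
            if s[0] == 'lift':                        # 1 stays; (n+1) -> n[s][shift]
                if n == 1:
                    return ('var', 1)
                return ('sub', ('sub', ('var', n - 1), s[1]), ('shift',))
            if s[0] == 'shift':                       # n[shift] -> n+1
                return ('var', n + 1)
    # no rule at the root: recurse into subterms, leftmost first
    for i in range(1, len(t)):
        if isinstance(t[i], tuple):
            r = step(t[i])
            if r is not None:
                return t[:i] + (r,) + t[i+1:]
    return None

def normalize(t):
    while True:
        r = step(t)
        if r is None:
            return t
        t = r

# (\x.\y. x) I J normalizes to I, performing all substitutions explicitly
K = ('lam', ('lam', ('var', 2)))
I = ('lam', ('var', 1))
J = ('lam', ('lam', ('var', 2)))
assert normalize(('app', ('app', K, I), J)) == I
```

Because substitution is decomposed into these small steps, the rewrite system itself can serve directly as an environment machine, which is the point the abstract makes.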
Décrire Et Comparer Les Implantations De Langages Fonctionnels
, 1996
Abstract
...abstraction with shared environments (As). As requires seven new combinators to express the saving and restoring of environments (dupl_e, swap_se), the construction and opening of closures (mkclos, appclos), access to values (fst, snd), and finally the addition of a closure to the environment (bind). They are defined in L_e by:
dupl_e = λ_e e. push_e e o push_e e
swap_se = λ_s x. λ_e e. push_s x o push_e e
mkclos = λ_s x. λ_e e. push_s (x,e)
appclos = λ_s (x,e). push_e e o x
fst = λ_e (e,x). push_e e
snd = λ_e (e,x). push_s x
bind = λ_e e. λ_s x. push_e (e,x)
Rémi Douence & Pascal Fradet. The correctness of As is expressed by Property 4 (R is a function changing the representation of closures from (c, e) to push_e e o c). Property 4: ∀E, R[[push_e () o As[[E]] ()]] = E. The transformation As can be optimized by adding the rule As[[λ_s x. E]] r = pop_se o As[[E]] r if x is not free in E, with pop_se = λ_e e. λ_s x. push_e e. The variables are...
The Next 700 Krivine Machines
Abstract
Abstract: The Krivine machine is a simple and natural implementation of the normal weak-head reduction strategy for pure λ-terms. While its original description has remained unpublished, this machine has served as a basis for many variants, extensions and theoretical studies. In this paper, we present the Krivine machine and some well-known variants in a common framework. Our framework consists of a hierarchy of intermediate languages that are subsets of the λ-calculus. The whole implementation process (compiler + abstract machine) is described via a sequence of transformations all of which express an implementation choice. We characterize the essence of the Krivine machine and locate it in the design space of functional language implementations. We show that, even within the particular class of Krivine machines, hundreds of variants can be designed. Keywords: Krivine machine, abstract machines, program transformation, compilation, functional language implementations.