Results 11–20 of 52
Three implementation models for Scheme
1987
Abstract
Cited by 12 (1 self)
This dissertation presents three implementation models for the Scheme programming language. The first is a heap-based model used in some form in most Scheme implementations to date; the second is a new stack-based model that is considerably more efficient than the heap-based model at executing most programs; and the third is a new string-based model intended for use in a multiple-processor implementation of Scheme. The heap-based model allocates several important data structures in a heap, including actual parameter lists, binding environments, and call frames. The stack-based model allocates these same structures on a stack whenever possible. This results in less heap allocation, fewer memory references, shorter instruction sequences, less garbage collection, and more efficient use of memory. The string-based model allocates versions of these structures right in the program text, which is represented as a string of symbols. In the string-based model, Scheme programs are translated into an FFP language designed specifically to support Scheme. Programs in this language are directly executed by the ...
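As an illustration (a minimal sketch, not the dissertation's actual machine), the heap-based model can be pictured as binding environments allocated as heap records linked to their parents, which is what lets a closure outlive the call that created it; when no closure escapes, the same frame could instead live on a stack:

```python
# Sketch only: heap-allocated binding environments with parent links,
# as in the heap-based model's environment structure (names invented here).

class Frame:
    """Heap-allocated binding environment: names -> values, plus parent link."""
    def __init__(self, bindings, parent=None):
        self.bindings = bindings
        self.parent = parent

    def lookup(self, name):
        env = self
        while env is not None:
            if name in env.bindings:
                return env.bindings[name]
            env = env.parent
        raise NameError(name)

# A closure must keep its defining frame alive -- this is what forces heap
# allocation in general. When no closure escapes the call, the frame could
# be pushed and popped on a stack instead (the stack-based model's insight).
def make_adder(n):
    frame = Frame({"n": n})                 # heap-allocated environment
    return lambda x: x + frame.lookup("n")  # closure retains the frame

add5 = make_adder(5)
print(add5(2))  # 7
```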
Three Steps for the CPS Transformation
1991
Abstract
Cited by 12 (4 self)
Transforming a λ-term into continuation-passing style (CPS) might seem mystical at first, but in fact it can be characterized by three separate aspects: (1) the values of all intermediate applications are given a name; (2) the evaluation of these applications is sequentialized based on a traversal of their syntax tree, and this traversal mimics the reduction strategy; (3) the resulting term is equipped with a continuation, a λ-abstraction whose application to intermediate values yields the final result of the whole evaluation. The first point is fulfilled using the uniform naming mechanism of λ-abstraction (Church encoding), which explains why continuations are represented as functions. The second point justifies why CPS terms are evaluation-order independent: their evaluation order is determined by the syntax-tree traversal of the CPS transformation. The third point captures the essence of the CPS transformation. We have staged Fischer and Plotkin's original CPS transformer accordin...
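The three aspects can be seen on one hand-written example (a sketch, not the staged transformer itself): the direct-style expression (a * b) + (c * d) and its CPS counterpart, in which every intermediate application is named, evaluation order is fixed by the nesting, and `k` is the continuation:

```python
# Direct style for comparison.
def direct(a, b, c, d):
    return a * b + c * d

# CPS-style primitives: each takes its result's consumer `k` explicitly.
def mul_k(x, y, k):
    return k(x * y)

def add_k(x, y, k):
    return k(x + y)

def cps(a, b, c, d, k):
    return mul_k(a, b, lambda v1:      # aspect 1: name the intermediate value
           mul_k(c, d, lambda v2:      # aspect 2: order fixed by the nesting
           add_k(v1, v2, k)))          # aspect 3: continuation gets the result

print(cps(2, 3, 4, 5, lambda v: v))  # 26, same as direct(2, 3, 4, 5)
```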
A Taxonomy of Functional Language Implementations Part II: Call-by-Name, Call-by-Need and Graph Reduction
1996
Abstract
Cited by 9 (3 self)
In Part I [5], we proposed an approach to formally describe and compare functional language implementations. We focused on call-by-value and described well-known compilers for strict languages. Here, we complete our exploration of the design space of implementations by studying call-by-name, call-by-need and graph reduction. We express the whole compilation process as a succession of program transformations in a common framework. At each step, different transformations model fundamental choices or optimizations. We describe and compare the diverse alternatives for the compilation of the call-by-name strategy in both environment- and graph-based models. The different options for the compilation of β-reduction described in [5] can be applied here as well. Instead, we describe other possibilities specific to graph reduction. Call-by-need is nothing but call-by-name with redex sharing and update. We present how sharing can be expressed in our framework and we describe different...
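The slogan "call-by-need is nothing but call-by-name with redex sharing and update" can be sketched directly (illustrative names, not the paper's formalism): a thunk that, on first force, overwrites itself with its value, so the redex is reduced at most once:

```python
# Sketch: call-by-need as call-by-name plus sharing and update.
class Thunk:
    def __init__(self, compute):
        self.compute = compute
        self.forced = False
        self.value = None

    def force(self):
        if not self.forced:               # call-by-name would recompute here
            self.value = self.compute()   # the redex is reduced once...
            self.forced = True            # ...and the result shared (update)
        return self.value

calls = []
t = Thunk(lambda: calls.append("eval") or 42)
print(t.force(), t.force(), len(calls))  # 42 42 1 -- evaluated only once
```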
Metacomputation-based compiler architecture
In 5th International Conference on the Mathematics of Program Construction
2000
Abstract
Cited by 7 (4 self)
This paper presents a modular and extensible style of language specification based on metacomputations. This style uses two monads to factor the static and dynamic parts of the specification, thereby staging the specification and achieving strong binding-time separation. Because metacomputations are defined in terms of monads, they can be constructed modularly and extensibly using monad transformers. A number of language constructs are specified: expressions, control flow, imperative features, and block structure. Metacomputation-style specification lends itself to semantics-directed compilation, which we demonstrate by creating a modular compiler for a block-structured, imperative while language.
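The binding-time separation at the heart of this style can be sketched without the monadic machinery (a simplification, not the paper's two-monad factoring): the static part walks the syntax once and returns a closure, and only the closure runs at dynamic time, so no syntax dispatch remains at run time:

```python
# Sketch: static/dynamic staging. Expression syntax is invented here as
# tuples like ("add", e1, e2); the paper's constructs are richer.

def compile_expr(expr):
    """Static part: dispatch on syntax now, return dynamic code."""
    tag = expr[0]
    if tag == "lit":
        n = expr[1]
        return lambda env: n
    if tag == "var":
        name = expr[1]
        return lambda env: env[name]
    if tag == "add":
        f = compile_expr(expr[1])           # recursion at compile time
        g = compile_expr(expr[2])
        return lambda env: f(env) + g(env)  # no dispatch left at run time
    raise ValueError(tag)

code = compile_expr(("add", ("var", "x"), ("lit", 1)))
print(code({"x": 41}))  # 42
```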
Semantics-Directed Code Generation
1985
Abstract
Cited by 7 (2 self)
The intermediate representations (IR) used by most compilers have an operational semantics. The nodes in the graph (or tree, or quad-code sequence) have an interpretation as the operation codes of some abstract machine. A denotational semantics, in which each node in the IR graph has a static meaning, can lead to a clean interface between the front and back ends of the compiler. Furthermore, it is possible to concisely specify a code generator to translate the denotational representation into machine code. Combined with recent work allowing the denotational specification of front ends to translate the input language into the IR, a complete compiler with a well-defined semantics may be generated. Using this technique, compilers have been written for (most of) Pascal and C which, although they compile slowly, produce fairly good machine code.
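The idea of giving IR nodes a static meaning can be sketched as follows (illustrative only; the paper's IR and code generator are not reproduced here): each statement node denotes a function from stores to stores, and a sequence node denotes ordinary function composition:

```python
# Sketch: denotational meaning of IR statement nodes as store transformers.

def assign(name, f):
    """Denotation of `name := f(store)`: returns an updated store."""
    return lambda store: {**store, name: f(store)}

def seq(d1, d2):
    """Denotation of `s1; s2` is composition of the two meanings."""
    return lambda store: d2(d1(store))

prog = seq(assign("x", lambda s: 10),
           assign("y", lambda s: s["x"] + 5))
print(prog({}))  # {'x': 10, 'y': 15}
```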
A Systematic Study of Functional Language Implementations
 ACM Transactions on Programming Languages and Systems
1998
Abstract
Cited by 6 (2 self)
We introduce a unified framework to describe, relate, compare and classify functional language implementations. The compilation process is expressed as a succession of program transformations in the common framework. At each step, different transformations model fundamental choices. A benefit of this approach is to structure and decompose the implementation process. The correctness proofs can be tackled independently for each step and amount to proving program transformations in the functional world. This approach also paves the way to formal comparisons by making it possible to estimate the complexity of individual transformations or compositions of them. Our study aims at covering the whole known design space of sequential functional language implementations. In particular, we consider call-by-value, call-by-name and call-by-need reduction strategies as well as environment- and graph-based implementations. We describe for each compilation step the diverse alternatives as program tr...
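"Compilation as a succession of program transformations" can be pictured in miniature (the passes here, constant folding and a trivial strength reduction, are illustrative and not the paper's): each pass maps terms to terms, and the compiler is just their composition:

```python
# Sketch: a compiler as a composition of term-to-term passes.
# Terms are tuples like ("add", e1, e2); only "add" and "mul" exist here.

def fold_constants(e):
    if isinstance(e, tuple):
        op, a, b = e[0], fold_constants(e[1]), fold_constants(e[2])
        if isinstance(a, int) and isinstance(b, int):
            return a + b if op == "add" else a * b
        return (op, a, b)
    return e

def reduce_strength(e):
    if isinstance(e, tuple):
        op, a, b = e[0], reduce_strength(e[1]), reduce_strength(e[2])
        if op == "mul" and b == 2:
            return ("add", a, a)   # x * 2  ->  x + x
        return (op, a, b)
    return e

def compile_term(e, passes=(fold_constants, reduce_strength)):
    for p in passes:
        e = p(e)
    return e

print(compile_term(("mul", "x", ("add", 1, 1))))  # ('add', 'x', 'x')
```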
Compiler Correctness for Concurrent Languages
 in proc. Coordination'96
1994
Abstract
Cited by 4 (0 self)
This paper extends previous work in compiler derivation and verification to languages with true-concurrency semantics. We extend the calculus to model process-centered concurrent computation, and give the semantics of a small language in terms of this calculus. We then define a target abstract machine whose states have denotations in the same calculus. We prove the correctness of a compiler for our language: the denotation of the compiled code is shown to be strongly bisimilar to the denotation of the source program, and the abstract machine running the compiled code is shown to be branching-bisimilar to the source program's denotation.
1 Introduction
Our original goal was to verify a compiler for Linda [8], using that language as a representative of modern concurrent language design. Upon searching the literature, we found a vast amount of work on models of concurrency, but little that was obviously applicable to compiler derivation and verification. Accordingly we decided to tac...
Compilation as Metacomputation: Binding Time Separation in Modular Compilers (Extended Abstract)
 In 5th Mathematics of Program Construction Conference, MPC2000, Ponte de
1998
Abstract
Cited by 4 (0 self)
This paper presents a modular and extensible style of language specification based on metacomputations. This style uses two monads to factor the static and dynamic parts of the specification, thereby staging the specification and achieving strong binding-time separation. Because metacomputations are defined in terms of monads, they can be constructed modularly and extensibly using monad transformers. A number of language constructs are specified: expressions, control flow, imperative features, block structure, and higher-order functions and recursive bindings. Metacomputation-style specification lends itself to semantics-directed compilation, which we demonstrate by creating a modular compiler for a higher-order, imperative, Algol-like language.
Towards Machine-checked Compiler Correctness for Higher-order Pure Functional Languages
 CSL '94, European Association for Computer Science Logic, Springer LNCS
1994
Abstract
Cited by 4 (1 self)
In this paper we show that the critical part of a correctness proof for implementations of higher-order functional languages is amenable to machine-assisted proof. An extended version of the lambda-calculus is considered, and the congruence between its direct and continuation semantics is proved. The proof has been constructed with the help of a generic theorem prover, Isabelle. The major part of the problem lies in establishing the existence of predicates which describe the congruence. This has been solved using Milne's inclusive predicate strategy [5]. The most important intermediate results and the main theorem as derived by Isabelle are quoted in the paper.
Keywords: Compiler Correctness, Theorem Prover, Congruence Proof, Denotational Semantics, Lambda Calculus
1 Introduction
Much of the work done previously in compiler correctness concerns restricted subsets of imperative languages. Some studies involve machine-checked correctness, e.g. Cohn [1], [2]. A lot of research h...
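The shape of the congruence being proved can be sketched on a toy language (the paper proves it for an extended lambda-calculus, in Isabelle; this is only the statement's form): a direct-style evaluator and a continuation-style evaluator should agree when the latter is started with the identity continuation:

```python
# Sketch: direct vs. continuation semantics agreeing on a tiny language
# of integer literals and ("add", e1, e2) nodes (syntax invented here).

def eval_direct(e):
    if isinstance(e, int):
        return e
    op, a, b = e
    return eval_direct(a) + eval_direct(b)

def eval_cps(e, k):
    if isinstance(e, int):
        return k(e)
    op, a, b = e
    return eval_cps(a, lambda va:
           eval_cps(b, lambda vb: k(va + vb)))

term = ("add", 1, ("add", 20, 21))
# The congruence, instantiated: CPS with identity = direct.
assert eval_direct(term) == eval_cps(term, lambda v: v)
print(eval_cps(term, lambda v: v))  # 42
```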