Results 1–10 of 11
The HDG-Machine: A Highly Distributed Graph-Reducer for a Transputer Network
The Computer Journal, 1991
Abstract

Cited by 28 (0 self)
Distributed implementations of programming languages with implicit parallelism hold out the prospect that the parallel programs are immediately scalable. This paper presents some of the results of our part of Esprit 415, in which we considered the implementation of lazy functional programming languages on distributed architectures. A compiler and abstract machine were designed to achieve this goal. The abstract parallel machine was formally specified, using Miranda. Each instruction of the abstract machine was then implemented as a macro in the Transputer Assembler. Although macro expansion of the code results in non-optimal code generation, use of the Miranda specification makes it possible to validate the compiler before the Transputer code is generated. The hardware currently available consists of five T800-25s, each board having 16M bytes of memory. Benchmark timings using this hardware are given. In spite of the straightforward code generation, the resulting system compar...
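As a flavour of how an abstract machine can be specified functionally and validated before native code generation, here is a minimal sketch in Haskell (standing in for the paper's Miranda); the tiny stack machine and its instruction set are invented for illustration and are not the HDG-machine's.

```haskell
-- Illustrative only: a toy stack machine specified as a pure function,
-- in the spirit of specifying each abstract-machine instruction formally
-- before implementing it as an assembler macro. Not the HDG-machine itself.
data Instr = Push Int | Add
  deriving Show

type Stack = [Int]

-- The meaning of one instruction is a stack transformer.
step :: Instr -> Stack -> Stack
step (Push n) s           = n : s
step Add      (a : b : s) = (a + b) : s
step Add      _           = error "Add: stack underflow"

-- Running a program is just folding the instruction semantics over the code.
run :: [Instr] -> Stack -> Stack
run code s0 = foldl (flip step) s0 code

main :: IO ()
main = print (run [Push 1, Push 2, Add] [])
```

Validating a compiler against such a specification then amounts to checking that the generated (macro-expanded) code agrees with `run` on test programs.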
Using Projection Analysis in Compiling Lazy Functional Programs
In Proceedings of the 1990 ACM Conference on Lisp and Functional Programming, 1990
Abstract

Cited by 15 (6 self)
Projection analysis is a technique for finding out information about lazy functional programs. We show how the information obtained from this analysis can be used to speed up sequential implementations, and introduce parallelism into parallel implementations. The underlying evaluation model is evaluation transformers, where the amount of evaluation that is allowed of an argument in a function application depends on the amount of evaluation allowed of the application. We prove that the transformed programs preserve the semantics of the original programs. Compilation rules, which encode the information from the analysis, are given for sequential and parallel machines.

1 Introduction

A number of analyses have been developed which find out information about programs. The methods that have been developed fall broadly into two classes, forwards analyses such as those based on the ideas of abstract interpretation (e.g. [9, 18, 19, 7, 17, 12, 4, 20]), and backward analyses such as those based...
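A hedged sketch of the evaluation-transformer idea in Haskell (the `Demand` levels and the transformer for `length` are our own illustrative names, not the paper's definitions): how much an argument may safely be evaluated depends on how much evaluation the application itself is allowed.

```haskell
-- Illustrative demand levels for a list argument (our own naming).
data Demand = NoEval | WHNF | Spine | Full
  deriving (Eq, Show)

-- Force a list of Ints to the given degree before returning it.
force :: Demand -> [Int] -> [Int]
force NoEval xs = xs
force WHNF   xs = xs `seq` xs            -- to weak head normal form
force Spine  xs = length xs `seq` xs     -- the spine, but not the elements
force Full   xs = sum xs `seq` xs        -- every element (for Int lists)

-- An evaluation transformer for 'length': whenever any evaluation of the
-- application is allowed, the argument's spine may safely be evaluated,
-- but its elements may not.
lengthArgDemand :: Demand -> Demand
lengthArgDemand NoEval = NoEval
lengthArgDemand _      = Spine

main :: IO ()
main = print (length (force (lengthArgDemand WHNF) [1, 2, 3]))
```

The point of the transformer is that `force Spine` is safe here even if some list elements would diverge, since `length` never inspects them.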
A fully abstract semantics for concurrent graph reduction (Extended Abstract)
1993
Abstract

Cited by 13 (1 self)
This paper presents a formal model of the concurrent graph reduction implementation of non-strict functional programming. This model differs from other models in that: it represents concurrent rather than sequential graph reduction; it represents low-level considerations such as garbage collection; and it uses techniques from concurrency theory to simplify the presentation. There are three presentations of this model: an operational semantics based on graph reduction; a denotational semantics in the domain D ≅ (D → D)⊥; and a program logic and proof system based on Coppo types. We can then use Abramsky and Ong's techniques from the lazy λ-calculus to show that the denotational semantics is fully abstract for the operational semantics. This proof requires some results about the operational semantics: since the operational semantics includes garbage collection, reduction is not confluent. We find a confluent reduction strategy which has the same convergence properties as gr...
More Advice on Proving a Compiler Correct: Improve a Correct Compiler
1994
Abstract

Cited by 11 (1 self)
This paper is a condensed version of the author's PhD thesis [19]. Besides the compiler for the imperative language described in this paper, the thesis derives implementations of a simple functional and a simple logic programming language.
A Chemical Abstract Machine for Graph Reduction
1992
Abstract

Cited by 6 (1 self)
Graph reduction is an implementation technique for the lazy λ-calculus. It has been used to implement many non-strict functional languages, such as Lazy ML, Gofer and Miranda. Parallel graph reduction allows for concurrent evaluation. In this paper, we present parallel graph reduction as a Chemical Abstract Machine, and show that the resulting testing semantics is adequate with respect to testing equivalence for the lazy λ-calculus. We also present a π-calculus implementation of the graph reduction machine, and show that the resulting testing semantics is also adequate.
A Systematic Study of Functional Language Implementations
ACM Transactions on Programming Languages and Systems, 1998
Abstract

Cited by 6 (2 self)
We introduce a unified framework to describe, relate, compare and classify functional language implementations. The compilation process is expressed as a succession of program transformations in the common framework. At each step, different transformations model fundamental choices. A benefit of this approach is to structure and decompose the implementation process. The correctness proofs can be tackled independently for each step and amount to proving program transformations in the functional world. This approach also paves the way to formal comparisons by making it possible to estimate the complexity of individual transformations or compositions of them. Our study aims at covering the whole known design space of sequential functional language implementations. In particular, we consider call-by-value, call-by-name and call-by-need reduction strategies as well as environment- and graph-based implementations. We describe for each compilation step the diverse alternatives as program tr...
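The difference between call-by-name and call-by-need can be sketched with an explicit thunk that memoises its first result (a standard IORef encoding for illustration, not the paper's framework):

```haskell
import Data.IORef

-- A thunk is either an unevaluated computation or a memoised value.
newtype Thunk a = Thunk (IORef (Either (IO a) a))

mkThunk :: IO a -> IO (Thunk a)
mkThunk m = Thunk <$> newIORef (Left m)

-- Call-by-need: run the suspension at most once, then share the result.
-- (Call-by-name would simply re-run the suspension at every use.)
force :: Thunk a -> IO a
force (Thunk r) = do
  st <- readIORef r
  case st of
    Right v -> return v
    Left m  -> do
      v <- m
      writeIORef r (Right v)
      return v

main :: IO ()
main = do
  count <- newIORef (0 :: Int)
  t <- mkThunk (modifyIORef count (+ 1) >> return 42)
  _ <- force t
  _ <- force t                  -- reuses the memoised 42
  readIORef count >>= print     -- the suspension ran exactly once
```

Graph-based implementations realise exactly this sharing by overwriting a graph node with its value after the first reduction.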
Proving the Correctness of Compiler Optimisations Based on Strictness Analysis
In Proceedings of the 5th Int. Symp. on Programming Language Implementation and Logic Programming, LNCS 714, 1993
Abstract

Cited by 4 (2 self)
We show that compiler optimisations based on strictness analysis can be expressed formally in the functional framework using continuations. This formal presentation has two benefits: it allows us to give a rigorous correctness proof of the optimised compiler; and it exposes the various optimisations made possible by a strictness analysis.

1 Introduction

Realistic compilers for imperative or functional languages include a number of optimisations based on non-trivial global analyses. Proving the correctness of such optimising compilers can be done in three steps:
1. proving the correctness of the original (unoptimised) compiler;
2. proving the correctness of the analysis; and
3. proving the correctness of the modifications of the simple-minded compiler to exploit the results of the analysis.
A substantial amount of work has been devoted to steps (1) and (2) but there have been surprisingly few attempts at tackling step (3). In this paper we show how to carry out this third step in the...
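The kind of optimisation at stake can be illustrated with a generic Haskell example (not the paper's compiler): when strictness analysis shows a function is strict in an argument, the argument may be evaluated before the call without changing the meaning of the program.

```haskell
-- A function that strictness analysis would report as strict in x:
-- f ⊥ = ⊥, since (+) needs its arguments.
f :: Int -> Int
f x = x + 1

-- Unoptimised call: the argument is passed as a suspension (call-by-need).
applyLazy :: (Int -> Int) -> Int -> Int
applyLazy g x = g x

-- Optimised call: evaluate the argument first (call-by-value).
-- This is only sound because the analysis says g is strict.
applyStrict :: (Int -> Int) -> Int -> Int
applyStrict g x = x `seq` g x

main :: IO ()
main = print (applyLazy f (2 * 3), applyStrict f (2 * 3))
```

Proving step (3) amounts to showing that `applyStrict g` and `applyLazy g` denote the same function whenever `g` is strict.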
Towards Machine-checked Compiler Correctness for Higher-order Pure Functional Languages
CSL '94, European Association for Computer Science Logic, Springer LNCS, 1994
Abstract

Cited by 4 (1 self)
In this paper we show that the critical part of a correctness proof for implementations of higher-order functional languages is amenable to machine-assisted proof. An extended version of the lambda-calculus is considered, and the congruence between its direct and continuation semantics is proved. The proof has been constructed with the help of a generic theorem prover, Isabelle. The major part of the problem lies in establishing the existence of predicates which describe the congruence. This has been solved using Milne's inclusive predicate strategy [5]. The most important intermediate results and the main theorem as derived by Isabelle are quoted in the paper. Keywords: Compiler Correctness, Theorem Prover, Congruence Proof, Denotational Semantics, Lambda Calculus

1 Introduction

Much of the work done previously in compiler correctness concerns restricted subsets of imperative languages. Some studies involve machine-checked correctness, e.g. Cohn [1], [2]. A lot of research h...
Combinator Shared Reduction and Infinite Objects in Type Theory
1996
Abstract

Cited by 2 (0 self)
We will present a syntactical proof of correctness and completeness of shared reduction. This work is an application of type theory extended with infinite objects and coinduction.
CPS-Translation and the Correctness of Optimising Compilers
1992
Abstract

Cited by 1 (0 self)
We show that compiler optimisations based on strictness analysis can be expressed formally in the functional framework using continuations. This formal presentation has two benefits: it allows us to give a rigorous correctness proof of the optimised compiler; and it exposes the various optimisations made possible by a strictness analysis. These benefits are especially significant in the presence of partially evaluated data structures.

1 Introduction

Realistic compilers for imperative or functional languages include a number of optimisations based on non-trivial global analyses. Proving the correctness of such optimising compilers should involve three steps:
1. proving the correctness of the original (unoptimised) compiler;
2. proving the correctness of the analysis; and
3. proving the correctness of the modifications of the simple-minded compiler to exploit the results of the analysis.
A substantial amount of work has been devoted to steps (1) and (2) but there has been surprisingly ...
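The continuation-based presentation can be sketched with a first-order Haskell example (illustrative, not the paper's translation): the CPS version of a function makes evaluation order explicit, which is what lets such optimisations be stated as transformations on continuations.

```haskell
-- Direct-style factorial.
fact :: Int -> Int
fact 0 = 1
fact n = n * fact (n - 1)

-- Its continuation-passing counterpart: instead of returning a result,
-- factK passes it to an explicit continuation k, fixing evaluation order.
factK :: Int -> (Int -> r) -> r
factK 0 k = k 1
factK n k = factK (n - 1) (\r -> k (n * r))

main :: IO ()
main = print (fact 5, factK 5 id)   -- both compute 120
```

Once a program is in this form, "evaluate the argument earlier" becomes a local rewrite of how continuations are composed, which is what makes the correctness proof tractable.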