Results 1–10 of 18
Algorithm + Strategy = Parallelism
 Journal of Functional Programming
, 1998
"... The process of writing large parallel programs is complicated by the need to specify both the parallel behaviour of the program and the algorithm that is to be used to compute its result. This paper introduces evaluation strategies, lazy higherorder functions that control the parallel evaluation of ..."
Abstract

Cited by 55 (19 self)
 Add to MetaCart
The process of writing large parallel programs is complicated by the need to specify both the parallel behaviour of the program and the algorithm that is to be used to compute its result. This paper introduces evaluation strategies, lazy higher-order functions that control the parallel evaluation of non-strict functional languages. Using evaluation strategies, it is possible to achieve a clean separation between algorithmic and behavioural code. The result is enhanced clarity and shorter parallel programs. Evaluation strategies are a very general concept: this paper shows how they can be used to model a wide range of commonly used programming paradigms, including divide-and-conquer, pipeline parallelism, producer/consumer parallelism, and data-oriented parallelism. Because they are based on unrestricted higher-order functions, they can also capture irregular parallel structures. Evaluation strategies are not just of theoretical interest: they have evolved out of our experience in parallelising several large-scale applications, where they have proved invaluable in helping to manage the complexities of parallel behaviour. These applications are described in detail here. The largest application we have studied to date, Lolita, is a 60,000-line natural language parser. Initial results show that for these applications we can achieve acceptable parallel performance, while incurring minimal overhead for using evaluation strategies.
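The algorithm/behaviour separation this abstract describes can be sketched in a few lines of Haskell. This is a minimal sequential stand-in, assuming only the Prelude: the paper's real combinators (later GHC's Control.Parallel.Strategies) spark work in parallel with `par`, whereas here plain `seq` is used, so the "parallel" strategy degenerates to a sequential one while the algorithm/strategy split stays visible. The names mirror the conventional ones but are defined locally.

```haskell
-- A strategy says how far (and, in the real library, in parallel or
-- not) to evaluate a value; it computes no new result of its own.
type Strategy a = a -> ()

-- Do no evaluation at all.
r0 :: Strategy a
r0 _ = ()

-- Reduce a value to weak head normal form.
rwhnf :: Strategy a
rwhnf x = x `seq` ()

-- Apply a strategy to every element of a list (walking the spine).
seqList :: Strategy a -> Strategy [a]
seqList _ []       = ()
seqList s (x : xs) = s x `seq` seqList s xs

-- Attach a strategy to a value: `x` is the algorithm, `s` the behaviour.
using :: a -> Strategy a -> a
using x s = s x `seq` x

-- The algorithmic code (map) is untouched; evaluation behaviour is
-- specified separately, after `using`.
squares :: [Int] -> [Int]
squares xs = map (^ 2) xs `using` seqList rwhnf
```

Swapping `seqList rwhnf` for a parallel list strategy changes only the behavioural half of `squares`, which is exactly the separation the paper argues for.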
Strictness Analysis in Logical Form
, 1991
"... This paper presents a framework for comparing two strictness analysis techniques: Abstract interpretation and nonstandard type inference. The comparison is based on the representation of a lattice by its ideals. A formal system for deducing inclusions between ideals of a lattice is presented and p ..."
Abstract

Cited by 44 (2 self)
 Add to MetaCart
This paper presents a framework for comparing two strictness analysis techniques: abstract interpretation and non-standard type inference. The comparison is based on the representation of a lattice by its ideals. A formal system for deducing inclusions between ideals of a lattice is presented and proved sound and complete. Viewing the ideals as strictness properties, we use the formal system to define a program logic for deducing strictness properties of expressions in a typed lambda calculus. This strictness logic is shown to be sound and complete with respect to the abstract interpretation, which establishes the main result that strictness analysis by type inference and by abstract interpretation are equally powerful techniques.

1 Introduction

Abstract interpretation is a well-established technique for static analysis of programs. Its virtue is its strong connection with denotational semantics, which provides a means of proving the analysis correct. Its vice is that the process of...
The HDG-Machine: A Highly Distributed Graph-Reducer for a Transputer Network
 The Computer Journal
, 1991
"... Distributed implementations of programming languages with implicit parallelism hold out the prospect that the parallel programs are immediately scalable. This paper presents some of the results of our part of Esprit 415, in which we considered the implementation of lazy functional programming langua ..."
Abstract

Cited by 28 (0 self)
 Add to MetaCart
Distributed implementations of programming languages with implicit parallelism hold out the prospect that the parallel programs are immediately scalable. This paper presents some of the results of our part of Esprit 415, in which we considered the implementation of lazy functional programming languages on distributed architectures. A compiler and abstract machine were designed to achieve this goal. The abstract parallel machine was formally specified using Miranda. Each instruction of the abstract machine was then implemented as a macro in the Transputer assembler. Although macro expansion of the code results in non-optimal code generation, use of the Miranda specification makes it possible to validate the compiler before the Transputer code is generated. The hardware currently available consists of five T800-25s, each board having 16 Mbytes of memory. Benchmark timings using this hardware are given. In spite of the straightforward code generation, the resulting system compar...
Abstract Interpretation of Functional Languages: From Theory to Practice
, 1991
"... Abstract interpretation is the name applied to a number of techniques for reasoning about programs by evaluating them over nonstandard domains whose elements denote properties over the standard domains. This thesis is concerned with higherorder functional languages and abstract interpretations with ..."
Abstract

Cited by 25 (0 self)
 Add to MetaCart
Abstract interpretation is the name applied to a number of techniques for reasoning about programs by evaluating them over non-standard domains whose elements denote properties over the standard domains. This thesis is concerned with higher-order functional languages and abstract interpretations with a formal semantic basis. It is known how abstract interpretation for the simply typed lambda calculus can be formalised by using binary logical relations. This has the advantage of making correctness and other semantic concerns straightforward to reason about. Its main disadvantage is that it enforces the identification of properties as sets. This thesis shows how the known formalism can be generalised by the use of ternary logical relations, and in particular how this allows abstract values to deno...
Automatic Generation and Management of Interprocedural Program Analyses
 In Twentieth Annual SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '93
, 1993
"... We have designedand implemented an interprocedural program analyzer generator, called system Z. Our goal is to automate the generation and management of semanticsbased interprocedural program analysis for a wide range of target languages. System Z is based on the abstract interpretation framework. ..."
Abstract

Cited by 19 (1 self)
 Add to MetaCart
We have designed and implemented an interprocedural program analyzer generator, called system Z. Our goal is to automate the generation and management of semantics-based interprocedural program analysis for a wide range of target languages. System Z is based on the abstract interpretation framework. The input to system Z is a high-level specification of an abstract interpreter. The output is C code for the specified interprocedural program analyzer. The system provides a high-level command set (called projection expressions) in which the user can tune the analysis in accuracy and cost. The user writes projection expressions for selected domains; system Z takes care of the rest, so that the generated analyzer conducts an analysis over the projected domains, which will vary in cost and accuracy according to the projections. We demonstrate the system's capabilities by experiments with a set of generated analyzers which can analyze C, FORTRAN, and SCHEME programs.

1 Introductio...
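At its core, an analyzer generator in the abstract interpretation framework produces a fixpoint solver parameterised by a user-supplied abstract domain and transfer function. The Haskell sketch below shows that essence; all names are illustrative and none of it reflects system Z's actual specification language. The second function plays the role of a projection expression: composing it into the iteration coarsens the domain, trading accuracy for cost.

```haskell
-- Iterate an abstract transfer function from a starting element until
-- the result stabilises. Termination requires a domain of finite
-- height (here guaranteed by capping the values).
lfp :: Eq a => (a -> a) -> a -> a
lfp f x = let x' = f x in if x' == x then x else lfp f x'

-- Example transfer function over a tiny numeric domain, capped at 10.
transfer :: Int -> Int
transfer n = min 10 (n + 1)

-- A "projection" in the spirit of projection expressions: it maps the
-- domain onto a smaller one, making the analysis cheaper but coarser.
coarsen :: Int -> Int
coarsen = min 5

preciseResult, coarseResult :: Int
preciseResult = lfp transfer 0               -- stabilises at 10
coarseResult  = lfp (coarsen . transfer) 0   -- stabilises at 5
```

The generated analyzer varies in cost and accuracy exactly as the choice of projection varies the lattice the solver iterates over.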
A Relational Approach to Strictness Analysis for Higher-Order Polymorphic Functions
 In Proc. ACM Symposium on Principles of Programming Languages
, 1991
"... This paper defines the categorical notions of relators and transformations and shows that these concepts enable us to give a semantics for polymorphic, higher order functional programs. We demonstrate the pertinence of this semantics to the analysis of polymorphic programs by proving that strictness ..."
Abstract

Cited by 17 (1 self)
 Add to MetaCart
This paper defines the categorical notions of relators and transformations and shows that these concepts enable us to give a semantics for polymorphic, higher-order functional programs. We demonstrate the pertinence of this semantics to the analysis of polymorphic programs by proving that strictness analysis is a polymorphic invariant.

1 Introduction

Recently, there has been some effort to construe the semantics of polymorphic functional programming languages using the categorical notion of a natural transformation. The idea can be sketched as follows: we have a "universe of computational discourse" given by some category C (in practice, a suitable category of domains). Types are objects of C. Type constructions (e.g. product, function space) are functors (of appropriate arity) over C. Monomorphic functional programs are morphisms of C; polymorphic programs are natural transformations. E.g. append : ∀t. t* × t* → t* is a natural transformation append : (−)* × (−)* → (−)* between the functors (−)* × (−)* and (−)*, w...
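The naturality of append can be checked concretely in Haskell: the naturality square for the list functor is the familiar free theorem that mapping commutes with appending. The function below is only an executable statement of that law; the paper's contribution is the relational machinery that extends this view to strictness properties.

```haskell
-- Naturality of (++) as a natural transformation between the functors
-- (-)* x (-)* and (-)*: for any f, map f . uncurry (++) equals
-- uncurry (++) . (map f x map f), stated pointwise.
naturalitySquare :: Eq b => (a -> b) -> [a] -> [a] -> Bool
naturalitySquare f xs ys =
  map f (xs ++ ys) == map f xs ++ map f ys
```

That strictness analysis is a polymorphic invariant means, informally, that analysing one instance of such a polymorphic function determines the analysis of all its instances.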
Using Projection Analysis in Compiling Lazy Functional Programs
 In Proceedings of the 1990 ACM Conference on Lisp and Functional Programming
, 1990
"... Projection analysis is a technique for finding out information about lazy functional programs. We show how the information obtained from this analysis can be used to speed up sequential implementations, and introduce parallelism into parallel implementations. The underlying evaluation model is evalu ..."
Abstract

Cited by 15 (6 self)
 Add to MetaCart
Projection analysis is a technique for finding out information about lazy functional programs. We show how the information obtained from this analysis can be used to speed up sequential implementations, and to introduce parallelism into parallel implementations. The underlying evaluation model is evaluation transformers, where the amount of evaluation that is allowed of an argument in a function application depends on the amount of evaluation allowed of the application. We prove that the transformed programs preserve the semantics of the original programs. Compilation rules, which encode the information from the analysis, are given for sequential and parallel machines.

1 Introduction

A number of analyses have been developed which find out information about programs. The methods that have been developed fall broadly into two classes: forwards analyses, such as those based on the ideas of abstract interpretation (e.g. [9, 18, 19, 7, 17, 12, 4, 20]), and backwards analyses, such as those based...
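The graded "amounts of evaluation" that evaluation transformers manipulate can be made concrete for lists. The sketch below, assuming only the Prelude, follows the usual convention of ranking evaluators from weak head normal form up to full evaluation of spine and elements; the naming (e1..e3) mirrors the E0..E3 convention in this literature but is defined locally.

```haskell
-- e1: evaluate the list to weak head normal form only.
e1 :: [a] -> ()
e1 xs = xs `seq` ()

-- e2: evaluate the whole spine, but none of the elements.
e2 :: [a] -> ()
e2 []       = ()
e2 (_ : xs) = e2 xs

-- e3: evaluate the spine and every element (to WHNF).
e3 :: [a] -> ()
e3 []       = ()
e3 (x : xs) = x `seq` e3 xs

-- An evaluation transformer for `length`: whatever evaluation a
-- context demands of `length xs`, it is safe to evaluate the spine of
-- xs eagerly (e2), but never the elements, because length ignores them.
lengthArgEvaluator :: [a] -> ()
lengthArgEvaluator = e2
```

The compilation rules the paper gives attach such evaluators to argument positions, so that evaluation can start earlier, and go further, than lazy evaluation would allow, without changing the program's answers.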
The Evaluation Transformer Model of Reduction and Its Correctness
 In TAPSOFT '91
, 1991
"... Lazy evaluation of functional programs incurs time and memory overheads, and restricts parallelism compared with programs that are evaluated strictly. A number of analysis techniques, such as abstract interpretation and projection analysis, have been developed to find out information that can allevi ..."
Abstract

Cited by 11 (4 self)
 Add to MetaCart
Lazy evaluation of functional programs incurs time and memory overheads, and restricts parallelism compared with programs that are evaluated strictly. A number of analysis techniques, such as abstract interpretation and projection analysis, have been developed to find out information that can alleviate these overheads. This paper formalises an evaluation model, the evaluation transformer model of reduction, which can use information from these analysis techniques, and proves that the resulting reduction strategies produce the same answers as those obtained using lazy evaluation.
Polymorphic Strictness Analysis Using Frontiers
 Proceedings of the 1993 ACM Symposium on Partial Evaluation and Semantics-Based Program Manipulation (PEPM '93), ACM
, 1992
"... This paper shows how to implement sensible polymorphic strictness analysis using the Frontiers algorithm. A central notion is to only ever analyse each function once, at its simplest polymorphic instance. Subsequent nonbase uses of functions are dealt with by generalising their simplest instance an ..."
Abstract

Cited by 8 (0 self)
 Add to MetaCart
This paper shows how to implement sensible polymorphic strictness analysis using the Frontiers algorithm. A central notion is to only ever analyse each function once, at its simplest polymorphic instance. Subsequent non-base uses of functions are dealt with by generalising their simplest-instance analyses. This generalisation is done using an algorithm developed by Baraki, based on embedding-closure pairs. Compared with an alternative approach of expanding the program out into a collection of monomorphic instances, this technique is hundreds of times faster for realistic programs. There are some approximations involved, but these do not seem to have a detrimental effect on the overall result. The overall effect of this technology is to considerably expand the range of programs for which the Frontiers algorithm gives useful results reasonably quickly.

1 Introduction

The Frontiers algorithm was introduced in [CP85] as an allegedly efficient way of doing forwards strictness analysis, al...
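What the Frontiers algorithm computes can be stated in a few lines: a monotone abstract function over {0,1}^n is fully determined by its frontier, the set of minimal argument tuples on which it yields 1. The brute-force Haskell version below is only a specification of that object, not the algorithm itself (whose point is to find the frontier without enumerating the whole space); names and encodings are illustrative.

```haskell
-- All points of the abstract domain {0,1}^n, as lists of 0/1.
points :: Int -> [[Int]]
points 0 = [[]]
points n = [b : p | b <- [0, 1], p <- points (n - 1)]

-- Pointwise ordering on tuples.
leq :: [Int] -> [Int] -> Bool
leq xs ys = and (zipWith (<=) xs ys)

-- The min-1-frontier of a monotone function: minimal points mapped to 1.
minFrontier :: ([Int] -> Int) -> Int -> [[Int]]
minFrontier f n =
  [ p | p <- ones, not (any (\q -> q /= p && leq q p) ones) ]
  where ones = [ p | p <- points n, f p == 1 ]

-- Abstract version of addition (needs both arguments): its frontier
-- is the single point [1,1].
aAdd :: [Int] -> Int
aAdd [x, y] = min x y
aAdd _      = 0
```

Baraki's generalisation step lets the frontier computed at a function's simplest polymorphic instance be lifted to other instances, which is what avoids re-running this (expensive) search per monomorphic copy.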
Implementing the Evaluation Transformer Model of Reduction on Parallel Machines
, 1991
"... The evaluation transformer model of reduction generalises lazy evaluation in two ways: it can start the evaluation of expressions before their first use, and it can evaluate expressions further than weak head normal form. Moreover, the amount of evaluation required of an argument to a function may d ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
The evaluation transformer model of reduction generalises lazy evaluation in two ways: it can start the evaluation of expressions before their first use, and it can evaluate expressions further than weak head normal form. Moreover, the amount of evaluation required of an argument to a function may depend on the amount of evaluation required of the function application. It is a suitable candidate model for implementing lazy functional languages on parallel machines. In this paper we explore the implementation of lazy functional languages on parallel machines, both shared- and distributed-memory architectures, using the evaluation transformer model of reduction. We will see that the same code can be produced for both styles of architecture, and the definition of the instruction set is virtually the same for each style. The essential difference is that a distributed-memory architecture has one extra node type for non-local pointers, and instructions which involve the value of such nodes need their definitions extended to cover this new type of node. To make our presentation accessible, we base our description on a variant of the well-known G-machine, a machine for executing lazy functional programs.