Results 1 - 10 of 49
Models and Languages for Parallel Computation
ACM Computing Surveys, 1998
"... We survey parallel programming models and languages using 6 criteria [:] should be easy to program, have a software development methodology, be architectureindependent, be easy to understand, guranatee performance, and provide info about the cost of programs. ... We consider programming models in ..."
Abstract

Cited by 135 (4 self)
We survey parallel programming models and languages using six criteria: a model should be easy to program, have a software development methodology, be architecture-independent, be easy to understand, guarantee performance, and provide information about the cost of programs. ... We consider programming models in six categories, depending on the level of abstraction they provide.
A Provable Time and Space Efficient Implementation of NESL
In International Conference on Functional Programming, 1996
"... In this paper we prove time and space bounds for the implementation of the programming language NESL on various parallel machine models. NESL is a sugared typed Jcalculus with a set of array primitives and an explicit parallel map over arrays. Our results extend previous work on provable implementa ..."
Abstract

Cited by 71 (7 self)
In this paper we prove time and space bounds for the implementation of the programming language NESL on various parallel machine models. NESL is a sugared typed λ-calculus with a set of array primitives and an explicit parallel map over arrays. Our results extend previous work on provable implementation bounds for functional languages by considering space and by including arrays. For modeling the cost of NESL we augment a standard call-by-value operational semantics to return two cost measures: a DAG representing the sequential dependence in the computation, and a measure of the space taken by a sequential implementation. We show that a NESL program with w work (nodes in the DAG), d depth (levels in the DAG), and s sequential space can be implemented on a p-processor butterfly network, hypercube, or CRCW PRAM using O(w/p + d log p) time and O(s + dp log p) reachable space. For programs with sufficient parallelism these bounds are optimal in that they give linear speedup and use space within a constant factor of the sequential space.
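The two cost measures in this abstract can be made concrete with a small sketch: work counts the nodes of the dependence DAG and depth counts its levels, and together they give the stated O(w/p + d log p) time bound. The expression encoding ('const', 'par', 'seq') below is invented for illustration, not NESL's actual semantics.

```python
import math

def cost(expr):
    """Return (work, depth) for a tiny nested-parallel expression tree."""
    tag = expr[0]
    if tag == 'const':   # a single primitive operation
        return 1, 1
    if tag == 'par':     # parallel composition: work adds, depth maxes
        costs = [cost(e) for e in expr[1]]
        return 1 + sum(w for w, _ in costs), 1 + max(d for _, d in costs)
    if tag == 'seq':     # sequential composition: work and depth both add
        costs = [cost(e) for e in expr[1]]
        return 1 + sum(w for w, _ in costs), 1 + sum(d for _, d in costs)
    raise ValueError(tag)

def time_bound(w, d, p):
    """The O(w/p + d log p) bound from the abstract, with constant factor 1."""
    return w / p + d * math.log2(p)

# Two parallel blocks of 8 primitives run one after the other.
e = ('seq', [('par', [('const',)] * 8), ('par', [('const',)] * 8)])
w, d = cost(e)
print(w, d, time_bound(w, d, 4))
```

With enough parallelism (w/p dominating d log p) the bound gives the linear speedup the abstract calls optimal.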
Static dependent costs for estimating execution time
In Proc. of the 1994 ACM Conference on LISP and Functional Programming, 1994
"... We present the first system for estimating and using datadependent expression execution times in a language with firstclass procedures and imperative constructs. Thepresence of firstclass procedures and imperative constructs makes cost estimation a global problem that can benefit from type informa ..."
Abstract

Cited by 46 (0 self)
We present the first system for estimating and using data-dependent expression execution times in a language with first-class procedures and imperative constructs. The presence of first-class procedures and imperative constructs makes cost estimation a global problem that can benefit from type information. We estimate expression costs with the aid of an algebraic type reconstruction system that assigns every procedure a type that includes a static dependent cost. A static dependent cost describes the execution time of a procedure in terms of its inputs. In particular, a procedure’s static dependent cost can depend on the size of input data structures and the cost of input first-class procedures. Our cost system produces symbolic cost expressions that contain free variables describing the size and cost of the procedure’s inputs. At run time, a cost estimate is dynamically computed from the statically determined cost expression and runtime cost and size information. We present experimental results that validate our cost system on three compilers and architectures. We experimentally demonstrate the utility of cost estimates in making dynamic parallelization decisions. In our experience, dynamic parallelization meets or exceeds the parallel performance of any fixed number of processors.
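The core idea above can be sketched in a few lines: a cost expression with free variables for input size and input-procedure cost is fixed statically, then evaluated at run time to drive a parallelization decision. All names and constants here (the overhead of 5, the threshold of 1000) are invented for illustration.

```python
def static_cost_map_f():
    """Symbolic cost of mapping a procedure over a list: c_f * n + overhead.
    Free variables: n (input size), c_f (cost of the input procedure)."""
    return lambda n, c_f: c_f * n + 5   # 5 = assumed fixed call overhead

PARALLEL_THRESHOLD = 1000  # hypothetical cut-off

def should_parallelize(cost_expr, n, c_f):
    """Runtime decision: combine the static expression with dynamic sizes."""
    return cost_expr(n, c_f) > PARALLEL_THRESHOLD

cost_expr = static_cost_map_f()
print(should_parallelize(cost_expr, 10, 3))      # small input: stay sequential
print(should_parallelize(cost_expr, 10_000, 3))  # large input: parallelize
```

The point of the split is that the expensive analysis happens once at compile time, leaving only a cheap arithmetic check per call at run time.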
The Bird-Meertens Formalism as a Parallel Model
Software for Parallel Computation, volume 106 of NATO ASI Series F, 1993
"... The expense of developing and maintaining software is the major obstacle to the routine use of parallel computation. Architecture independent programming offers a way of avoiding the problem, but the requirements for a model of parallel computation that will permit it are demanding. The BirdMeertens ..."
Abstract

Cited by 41 (0 self)
The expense of developing and maintaining software is the major obstacle to the routine use of parallel computation. Architecture-independent programming offers a way of avoiding the problem, but the requirements for a model of parallel computation that will permit it are demanding. The Bird-Meertens formalism is an approach to developing and executing data-parallel programs; it encourages software development by equational transformation; it can be implemented efficiently across a wide range of architecture families; and it can be equipped with a realistic cost calculus, so that trade-offs in software design can be explored before implementation. It makes an ideal model of parallel computation.
Keywords: general-purpose parallel computing, models of parallel computation, architecture-independent programming, categorical data types, program transformation, code generation.
1 Properties of Models of Parallel Computation. Parallel computation is still the domain of researchers and those ...
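"Development by equational transformation" can be illustrated with the best-known Bird-Meertens law, map fusion: map f . map g = map (f . g), which replaces two traversals by one while provably preserving the result. The sketch below demonstrates the law, not the formalism's notation; the functions are invented for illustration.

```python
def compose(f, g):
    """Function composition: (f . g)(x) = f(g(x))."""
    return lambda x: f(g(x))

xs = [1, 2, 3, 4]
f = lambda x: x + 1
g = lambda x: x * 2

lhs = list(map(f, map(g, xs)))      # two traversals of the data
rhs = list(map(compose(f, g), xs))  # one traversal, after fusion
print(lhs, rhs)
```

In a data-parallel setting the same law also eliminates one round of inter-processor communication, which is why such equivalences double as performance-improving rewrites.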
Systematic Efficient Parallelization of Scan and Other List Homomorphisms
In Annual European Conference on Parallel Processing, LNCS 1124, 1996
"... Homomorphisms are functions which can be parallelized by the divideandconquer paradigm. A class of distributable homomorphisms (DH) is introduced and an efficient parallel implementation schema for all functions of the class is derived by transformations in the BirdMeertens formalism. The schema ..."
Abstract

Cited by 27 (7 self)
Homomorphisms are functions which can be parallelized by the divide-and-conquer paradigm. A class of distributable homomorphisms (DH) is introduced, and an efficient parallel implementation schema for all functions of the class is derived by transformations in the Bird-Meertens formalism. The schema can be mapped directly onto the hypercube with an unlimited or an arbitrary fixed number of processors, providing provable correctness and predictable performance. The popular scan function (parallel prefix) illustrates the presentation: the systematically derived implementation for scan coincides with the practically used "folklore" algorithm for distributed-memory machines.
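The "folklore" hypercube scan mentioned above can be simulated sequentially: in round k, each processor i exchanges a running segment sum with its partner i XOR 2^k, so an inclusive prefix emerges after log p rounds. This is an illustrative simulation of the communication pattern, not the paper's derived schema.

```python
def hypercube_scan(xs):
    """Inclusive prefix sums via the hypercube exchange pattern.
    Assumes len(xs) is a power of two (one value per 'processor')."""
    p = len(xs)
    prefix = list(xs)  # per-processor prefix value so far
    total = list(xs)   # per-processor sum of its current block
    k = 1
    while k < p:       # one round per hypercube dimension
        new_prefix, new_total = list(prefix), list(total)
        for i in range(p):
            partner = i ^ k                       # neighbor along dimension k
            new_total[i] = total[i] + total[partner]
            if partner < i:                       # partner holds the lower half
                new_prefix[i] = prefix[i] + total[partner]
        prefix, total = new_prefix, new_total
        k <<= 1
    return prefix

print(hypercube_scan([1, 2, 3, 4]))
```

Each of the log p rounds exchanges one value per processor, matching the predictable-performance claim: O(log p) communication steps regardless of the data.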
Parallel Programming, List Homomorphisms and the Maximum Segment Sum Problem
Proceedings of ParCo 93, Elsevier Series in Advances in Parallel Computing, 1993
"... We review the use of the BirdMeertens Formalism as a vehicle for the construction of programs with massive implicit parallelism. We show that a simple result from the theory, concerning the expression of list homomorphisms, can help us in our search for parallel algorithms and demonstrate its appli ..."
Abstract

Cited by 23 (1 self)
We review the use of the Bird-Meertens Formalism as a vehicle for the construction of programs with massive implicit parallelism. We show that a simple result from the theory, concerning the expression of list homomorphisms, can help us in our search for parallel algorithms, and demonstrate its application to some simple problems including the maximum segment sum problem. Our main purpose is to show that an understanding of the homomorphism lemma can be helpful in producing programs for problems which are "not quite" list homomorphisms themselves. A more general goal is to illustrate the benefits which can arise from taking a little theory with a pinch of pragmatic salt.
1 Introduction. The use of bulk operations on aggregate data sets as a means of generating programs with a high degree of implicit parallelism has a long history (e.g. see [4] for a recent presentation). Although traditionally associated with an imperative programming style and SIMD machines, the approach lends itself e...
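Maximum segment sum is the classic "not quite" homomorphism: it becomes one once each segment is summarized by a 4-tuple (best segment sum, best prefix sum, best suffix sum, total), combined by an associative operator. The encoding below is the standard textbook one, not code from the paper; with an associative combine, the reduce can be evaluated as a balanced parallel tree.

```python
from functools import reduce

def single(x):
    """Summary of a one-element list; the empty segment (sum 0) is allowed."""
    return (max(x, 0), max(x, 0), max(x, 0), x)

def combine(a, b):
    """Associative combine of two adjacent segment summaries."""
    mss_a, pre_a, suf_a, tot_a = a
    mss_b, pre_b, suf_b, tot_b = b
    return (max(mss_a, mss_b, suf_a + pre_b),  # best segment may cross the join
            max(pre_a, tot_a + pre_b),         # best prefix of the concatenation
            max(suf_b, suf_a + tot_b),         # best suffix of the concatenation
            tot_a + tot_b)                     # total sum

def mss(xs):
    return reduce(combine, map(single, xs))[0]

print(mss([3, -4, 2, -1, 6, -3]))  # best segment is [2, -1, 6]
```

The reduction over `combine` is exactly a list homomorphism, so the homomorphism lemma gives an O(n/p + log p) parallel evaluation for free.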
Towards Parallel Programming by Transformation: The FAN Skeleton Framework
2001
"... A Functional Abstract Notation (FAN) is proposed for the specification and design of parallel algorithms by means of skeletons  highlevel patterns with parallel semantics. The main weakness of the current programming systems based on skeletons is that the user is still responsible for finding the ..."
Abstract

Cited by 20 (10 self)
A Functional Abstract Notation (FAN) is proposed for the specification and design of parallel algorithms by means of skeletons: high-level patterns with parallel semantics. The main weakness of the current programming systems based on skeletons is that the user is still responsible for finding the most appropriate skeleton composition for a given application and a given parallel architecture. We describe a transformational framework for the development of skeletal programs which is aimed at filling this gap. The framework makes use of transformation rules which are semantic equivalences among skeleton compositions. For a given problem, an initial, possibly inefficient skeleton specification is refined by applying a sequence of transformations. Transformations are guided by a set of performance prediction models which forecast the behavior of each skeleton and the performance benefits of different rules. The design process is supported by a graphical tool which locates applicable transformations and provides performance estimates, thereby helping the programmer navigate the program refinement space. We give an overview of the FAN framework and exemplify its use with performance-directed program derivations for simple case studies. Our experience can be viewed as a first feasibility study of methods and tools for transformational, performance-directed parallel programming using skeletons.
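The transformational loop described above can be miniaturized: two semantically equivalent skeleton compositions, a toy per-skeleton performance model, and a selection step that keeps the cheaper one. Every name and cost constant here is invented for illustration, not part of FAN.

```python
def predicted_cost(composition, n, p):
    """Toy performance model: each 'map' skeleton costs n/p compute, and each
    pair of consecutive skeletons costs n for data redistribution."""
    maps = composition.count('map')
    comms = maps - 1
    return maps * (n / p) + comms * n

spec = ['map', 'map']  # initial specification: two map skeletons in sequence
fused = ['map']        # equivalent composition after a map-fusion rule

n, p = 1_000, 8
best = min((spec, fused), key=lambda c: predicted_cost(c, n, p))
print(best, predicted_cost(best, n, p))
```

The model predicts that fusion wins here because it removes a full redistribution; a real framework would evaluate many such rules against calibrated machine parameters.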
A Cost Analysis for a Higher-order Parallel Programming Model
1996
"... Programming parallel computers remains a difficult task. An ideal programming environment should enable the user to concentrate on the problem solving activity at a convenient level of abstraction, while managing the intricate lowlevel details without sacrificing performance. This thesis investiga ..."
Abstract

Cited by 17 (1 self)
Programming parallel computers remains a difficult task. An ideal programming environment should enable the user to concentrate on the problem-solving activity at a convenient level of abstraction, while managing the intricate low-level details without sacrificing performance. This thesis investigates a model of parallel programming based on the Bird-Meertens Formalism (BMF). This is a set of higher-order functions, many of which are implicitly parallel. Programs are expressed in terms of functions borrowed from BMF. A parallel implementation is defined for each of these functions for a particular topology, and the associated execution costs are derived. The topologies which have been considered include the hypercube, 2D torus, tree and the linear array. An analyser estimates the costs associated with different implementations of a given program and selects a cost-effective one for a given topology. All the analysis is performed at compile time, which has the advantage of reducing run...
A Provably Time-Efficient Parallel Implementation of Full Speculation
In Proceedings of the 23rd ACM Symposium on Principles of Programming Languages, 1996
"... Speculative evaluation, including leniency and futures, is often used to produce high degrees of parallelism. Existing speculative implementations, however, may serialize computation because of their implementation of queues of suspended threads. We give a provably efficient parallel implementation ..."
Abstract

Cited by 17 (5 self)
Speculative evaluation, including leniency and futures, is often used to produce high degrees of parallelism. Existing speculative implementations, however, may serialize computation because of their implementation of queues of suspended threads. We give a provably efficient parallel implementation of a speculative functional language on various machine models. The implementation includes proper parallelization of the necessary queuing operations on suspended threads. Our target machine models are a butterfly network, hypercube, and PRAM. To prove the efficiency of our implementation, we provide a cost model using a profiling semantics and relate the cost model to implementations on the parallel machine models.
1 Introduction. Futures, lenient languages, and several implementations of graph reduction for lazy languages all use speculative evaluation (call-by-speculation [15]) to expose parallelism. The basic idea of speculative evaluation, in this context, is that the evaluation of a...
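The futures style of speculation the abstract refers to can be illustrated with Python's `concurrent.futures`, standing in for the paper's functional language (the work functions are invented). Each `submit` starts evaluating an expression before its value is demanded; a consumer that touches an unready future is suspended, and it is these queues of suspended consumers that the paper parallelizes.

```python
from concurrent.futures import ThreadPoolExecutor

def expensive(x):
    """A stand-in for a costly subexpression."""
    return sum(i * i for i in range(x))

with ThreadPoolExecutor() as pool:
    # Start both arguments speculatively, before either value is needed.
    fut_a = pool.submit(expensive, 10)
    fut_b = pool.submit(expensive, 20)
    # .result() blocks only if the value is not yet available; any
    # blocked consumers are queued on the future until it resolves.
    print(fut_a.result() + fut_b.result())
```

The hazard the abstract points at is that a naive implementation of those per-future queues can itself become a sequential bottleneck, which is why the queuing operations must be parallelized too.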