## A Parallel Programming Methodology Based on Paradigms (1995)

Venue: Transputer and Occam Developments

Citations: 9 (3 self)

### BibTeX

    @INPROCEEDINGS{Rabhi95aparallel,
      author    = {Fethi A. Rabhi},
      title     = {A Parallel Programming Methodology Based on Paradigms},
      booktitle = {Transputer and Occam Developments},
      year      = {1995},
      pages     = {239--252},
      publisher = {Press}
    }

### Abstract

Today's efforts are mainly concentrated on providing "standard" parallel languages to ensure the portability of programs across various architectures. It is now believed that the next level of abstraction to be addressed is the application level. This paper argues that there is an intermediate level that consists of common parallel programming paradigms. It describes some of these paradigms and explains the basic principles behind a "paradigm-oriented" programming approach. Finally, it points to future directions which can make it feasible to build parallel CASE tools that achieve automatic parallel code generation. 1 Introduction This paper is concerned with the process of developing portable applications that are suitable for general-purpose parallel computers. Until very recently, the most efficient way to develop efficient code has been to program directly at the machine-code level. Efforts have been made in order to provide a higher abstraction level without a significant loss i...

### Citations

546 | On Visual Formalisms
- Harel
- 1988
Citation Context: ...ions in a functional language [23] or relations in a logic language, although these models cannot cope with non-deterministic input without extensions. Process description languages (e.g. state charts [18, 19]), object-oriented languages and imperative languages with input/output operations (e.g. occam) are also suitable. It is inherently more difficult to parameterise a dynamic process network. An example ...

537 | The Implementation of Functional Programming Languages
- Jones
- 1987
Citation Context: ...tually need to migrate from heavily loaded processors to free processors. These problems have been extensively studied in parallel implementations of Lisp [17], logic [8, 37] and functional languages [30]. For conservative parallelism and when the tree decomposition is not data dependent (e.g. Quicksort has a data dependent decomposition), it is possible to achieve some kind of static partitioning [31...

402 | STATEMATE: A Working Environment for the Development of Complex Reactive Systems
- Harel, Lachover, et al.
- 1990
Citation Context: ...ions in a functional language [23] or relations in a logic language, although these models cannot cope with non-deterministic input without extensions. Process description languages (e.g. state charts [18, 19]), object-oriented languages and imperative languages with input/output operations (e.g. occam) are also suitable. It is inherently more difficult to parameterise a dynamic process network. An example ...

232 | Data parallel algorithms
- Hillis, Steele
- 1986
Citation Context: ...ransformation Algorithms 6.1 Description The last paradigm considered, called iterative transformation, is the only one suitable for massively parallel computing. It includes data parallel algorithms [21], iterative combination algorithms [9], iterative relaxation algorithms [12] or Compute-Aggregate-Broadcast algorithms [27]. Hey [20] refers to some of them as geometric parallelism. Applications of t...

134 | A Users' Guide to PVM Parallel Virtual Machine
- Beguelin, Dongarra, et al.
- 1991
Citation Context: ...a suitable parallel language, from which a compiler generates code for a particular machine. Languages support a variety of parallel constructs ranging from message-passing protocols (e.g. occam, PVM [3]) to data parallel operations (e.g. C* [15], HPF [25]). There are also some languages (e.g. LISP, Prolog) in which parallelism can be implicitly derived from their ordinary syntax. It is now believed ...

115 | High Performance Fortran
- Loveman
- 1993
Citation Context: ...generates code for a particular machine. Languages support a variety of parallel constructs ranging from message-passing protocols (e.g. occam, PVM [3]) to data parallel operations (e.g. C* [15], HPF [25]). There are also some languages (e.g. LISP, Prolog) in which parallelism can be implicitly derived from their ordinary syntax. It is now believed that the next level of abstraction that will be addre...

111 | PARLOG: Parallel Programming in Logic
- Clark, Gregory
- 1984
Citation Context: ...d-Conquer is a category of recursively partitioned algorithms where all the sub-problems need to be solved to compute the solution, thus exploiting a conservative form of parallelism (AND-parallelism [8]). Divide-and-Conquer is important to many applications such as sorting, the Fast Fourier Transform and computing convex hulls [24]. Most existing classifications include Divide-and-Conquer as a parad...

103 | Dib - a distributed implementation of backtracking
- Finkel, Manber
- 1987
Citation Context: ...ed so all the other attempts have to be interrupted. These algorithms are called Generate-and-Solve in [12] or Branch and Bound. Examples include the N-queens problem and combinatorial search problems [13]. Unlike in Divide-and-Conquer algorithms, subproblems are solved without knowing if their results will be useful or not, thus exploiting speculative parallelism (OR-parallelism [37]). These algorithm...

98 | Designing efficient algorithms for parallel computers
- Quinn
- 1986
Citation Context: ...ks • Distributed Independent • Iterative Transformation. There are other classifications for parallel algorithms, some of which are more detailed [24, 5], others are closer to existing architectures [28]. The classification presented here is independent from any machine model, although one particular machine will be more suited to a class of parallel algorithms than another. Depending on which paradi...

63 | Functional Programming for Loosely-Coupled Multiprocessors
- Kelly
- 1989
Citation Context: ... defined through the values of the attributes that correspond to the geometric shape (e.g. a k-ary n-cube is defined by the attributes k and n). Computations can be functions in a functional language [23] or relations in a logic language, although these models cannot cope with non-deterministic input without extensions. Process description languages (e.g. state charts [18, 19]), object-oriented language...

60 | Implementation of Multilisp: Lisp on a Multiprocessor
- Halstead
- 1984
Citation Context: ...laced on local queues so these processes eventually need to migrate from heavily loaded processors to free processors. These problems have been extensively studied in parallel implementations of Lisp [17], logic [8, 37] and functional languages [30]. For conservative parallelism and when the tree decomposition is not data dependent (e.g. Quicksort has a data dependent decomposition), it is possible to...

58 | Algebraic identities for program calculation
- Bird
- 1989
Citation Context: ...broadcast and reduction operators, or in a more implicit way by allowing expressions mixing scalar and array variables or by using a set of high level operators such as in the Bird-Meertens formalism [2]. Dynamic iterative transformation algorithms require, in addition to the above mentioned parameters, an indication of how objects can be combined to form other objects. This can be achieved by explic...

55 | Algorithmic skeletons: a structured approach to the management of parallel computation
- Cole
- 1988
Citation Context: ...n between problems and languages called paradigms, illustrated in figure 2. This has been inspired by our recent work on the parallel implementation of functional languages and by others such as Cole [9] and Darlington [10]. In the figure, paradigms are referred to by their common names such as "pipelines" and "data parallel" but they are to be defined later in this paper in a more general (though st...

40 | Crystal: Theory and Pragmatics of Generating Efficient Parallel Code
- Chen, Choo, et al.
- 1991
Citation Context: ...ationship between the old coordinate system and the new one, or by specifying how objects are combined depending on some of their internal properties. Examples of functional notations include Crystal [6] and GAMMA [1]. Logic languages are also suitable for providing a basis to such a notation. 6.3 Implementations In static algorithms, the calculation that occurs on an object's local data constitutes ...

32 | ELLPACK: A numerical simulation programming environment for parallel MIMD machines
- Houstis, Rice, et al.
- 1990
Citation Context: ...GES MACHINES Figure 1: Stages in developing parallel programs a wide range of "known-to-be-highly-parallel" operations such as a matrix multiplication, searching a record in a database etc. //ELLPACK [22] is an example of such a library. In contrast, this paper will argue that jumping to that level is moving a bit too fast! We believe that there is an intermediate algorithmic level of abstraction betw...

28 | Experiments in MIMD Parallelism
- Hey
- 1989
Citation Context: ...massively parallel computing. It includes data parallel algorithms [21], iterative combination algorithms [9], iterative relaxation algorithms [12] or Compute-Aggregate-Broadcast algorithms [27]. Hey [20] refers to some of them as geometric parallelism. Applications of this paradigm are typically found in the areas of numerical analysis, image processing, molecular dynamics and discrete events simulat...

27 | Computational Models for Parallel Computers
- Kung
- 1988
Citation Context: ...igms: • Recursively Partitioned • Process Networks • Distributed Independent • Iterative Transformation. There are other classifications for parallel algorithms, some of which are more detailed [24, 5], others are closer to existing architectures [28]. The classification presented here is independent from any machine model, although one particular machine will be more suited to a class of parallel a...

23 | Exploiting parallelism in functional languages: A "paradigm-oriented" approach
- Rabhi
- 1995
Citation Context: ...LANGUAGES MACHINES etc. Figure 2: Stages in developing parallel programs with paradigms. The rest of the paper explains the basic features of such a methodology. For each of the paradigms described in [33], we describe a variety of options in specifying the parameters of the problem and the range of architectures for which it is suitable. 2 A Paradigm-Oriented Programming Environment A sequential parad...

18 | Programming paradigms for nonshared memory parallel computers
- Nelson, Snyder
- 1987
Citation Context: ...table for massively parallel computing. It includes data parallel algorithms [21], iterative combination algorithms [9], iterative relaxation algorithms [12] or Compute-Aggregate-Broadcast algorithms [27]. Hey [20] refers to some of them as geometric parallelism. Applications of this paradigm are typically found in the areas of numerical analysis, image processing, molecular dynamics and discrete even...

11 | A high-level language for the description of parallel algorithms
- Paalvast, H
- 1989
Citation Context: ... that they only allow a limited set of indices (mainly n-dimensional arrays), whereas there are other notations that could allow neighbouring operations in an arbitrary coordinate system to take place [29, 32]. Global operations on the set of data could be carried out through explicit broadcast and reduction operators, or in a more implicit way by allowing expressions mixing scalar and array variables or b...

9 | Parsec: A software development environment for performance oriented parallel programming
- Feldcamp, Wagner
- 1993
Citation Context: ...umber of slaves n which determines how the division of the work is achieved. Specifying these data structures and functions is quite simple and can be achieved using any formalism or language. Parsec [11] is an example of an environment for developing distributed independent applications. 5.3 Implementations Each slave processor reads the input (representing the problem description) from the master pr...

8 | BACS: Basel Algorithm Classification Scheme
- Burkhart
- 1993
Citation Context: ...igms: • Recursively Partitioned • Process Networks • Distributed Independent • Iterative Transformation. There are other classifications for parallel algorithms, some of which are more detailed [24, 5], others are closer to existing architectures [28]. The classification presented here is independent from any machine model, although one particular machine will be more suited to a class of parallel a...

8 | Structured parallel functional programming
- Darlington, Field, et al.
- 1991
Citation Context: ...and languages called paradigms, illustrated in figure 2. This has been inspired by our recent work on the parallel implementation of functional languages and by others such as Cole [9] and Darlington [10]. In the figure, paradigms are referred to by their common names such as "pipelines" and "data parallel" but they are to be defined later in this paper in a more general (though still very intuitive) ...

7 | Introduction to Gamma
- Banâtre, Métayer
- 1991
Citation Context: ...een the old coordinate system and the new one, or by specifying how objects are combined depending on some of their internal properties. Examples of functional notations include Crystal [6] and GAMMA [1]. Logic languages are also suitable for providing a basis to such a notation. 6.3 Implementations In static algorithms, the calculation that occurs on an object's local data constitutes a process whic...

5 | Divide-and-conquer and parallel graph reduction. Parallel Comput.
- Rabhi, Manson
- 1991
Citation Context: ...30]. For conservative parallelism and when the tree decomposition is not data dependent (e.g. Quicksort has a data dependent decomposition), it is possible to achieve some kind of static partitioning [31]. Another approach is to convert the algorithm into a different paradigm such as Distributed Independent (see Section 5) or Process Network (see Section 4). For example, [34] describes how to turn a d...

4 | Design principles of a distributed memory architecture for parallel graph reduction
- Bevan
- 1989
Citation Context: ...ication (e.g. NEWS communication on the CM-2 [35]). The implementation model described in [32] achieves neighbourhood communication in an asynchronous manner by relying on a graph reduction mechanism [4] for guaranteeing that every value requested will eventually be received. Synchronisation between the iteration steps could be achieved through message-passing or special hardware, such as the control...

2 | An Illustration of the Parallel Communicating
- Manson, Sahib
- 1993
Citation Context: ...ess network are: • A description of the network topology • A description of the computation inside each node in the network. Arbitrary network topologies are best described visually (e.g. a graph [26], a dataflow diagram [16], Petri Nets), whereas regular topologies would be defined through the values of the attributes that correspond to the geometric shape (e.g. a k-ary n-cube is defined by the a...

1 | A Generic Divide-And-Conquer Kernel for the Meiko Computing Surface
- Clare
- 1990
Citation Context: ...these parameters, due to the flexibility in defining functions that manipulate complex data structures [9, 33], although there have been attempts in defining such functions using imperative languages [7]. Functional languages cannot easily represent speculative computations due to their deterministic nature and the absence of side effects (particularly needed in Branch and Bound algorithms). Although...

1 | Large grain parallelism - Three case studies. In The Characteristics of Parallel Algorithms, Jamieson L.H. et al.
- Finkel
- 1987
Citation Context: ...ts can be made, a partial solution is discarded. Sometimes, the first solution obtained is accepted so all the other attempts have to be interrupted. These algorithms are called Generate-and-Solve in [12] or Branch and Bound. Examples include the N-queens problem and combinatorial search problems [13]. Unlike in Divide-and-Conquer algorithms, subproblems are solved without knowing if their results will...

1 | Executable Specifications and
- Gaskell, Phillips
- 1994
Citation Context: ...scription of the network topology • A description of the computation inside each node in the network. Arbitrary network topologies are best described visually (e.g. a graph [26], a dataflow diagram [16], Petri Nets), whereas regular topologies would be defined through the values of the attributes that correspond to the geometric shape (e.g. a k-ary n-cube is defined by the attributes k and n). Compu...

1 | A Paradigm-Oriented Parallel Programming Environment for SIT Algorithms
- Rabhi, Schwarz, et al.
- 1994
Citation Context: ... that they only allow a limited set of indices (mainly n-dimensional arrays), whereas there are other notations that could allow neighbouring operations in an arbitrary coordinate system to take place [29, 32]. Global operations on the set of data could be carried out through explicit broadcast and reduction operators, or in a more implicit way by allowing expressions mixing scalar and array variables or b...

1 | A Synthesis of a Dynamic Message-Passing Algorithm for Quicksort
- Sharp, Harrison, et al.
- 1991

1 | The PEPSys model: combining backtracking
- Westphal, Robert, Syre, et al.
- 1987
Citation Context: ...al search problems [13]. Unlike in Divide-and-Conquer algorithms, subproblems are solved without knowing if their results will be useful or not, thus exploiting speculative parallelism (OR-parallelism [37]). These algorithms also often rely on global data because some search trees might be pruned if a better solution has been found in another search tree. 3.2 Parameters To specify a recursively partiti...