### Rapport no RR-2010-01: Systematic Development of Functional Bulk Synchronous Parallel Programs

, 2010

Abstract

With the current generalization of parallel architectures arises the concern of applying formal methods to parallelism, which allow specifications of parallel programs to be stated precisely and the correctness of an implementation to be verified. However, the complexity of parallel programs, compared to sequential ones, makes them more error-prone and difficult to verify. This calls for a strongly structured form of parallelism, which should not only ease programming by providing abstractions that conceal much of the complexity of parallel computation, but also provide a systematic way of developing practical programs from specifications. Bulk Synchronous Parallelism (BSP) is a model of computation which offers a high degree of abstraction, like PRAM models, yet a realistic cost model based on structured parallelism. We propose a framework for refining a sequential specification toward a functional BSP program, the whole process being done with the help of a proof assistant. The main technical contribution of this paper is the definition of BH, a new homomorphic skeleton, which captures the essence of BSP computation at the algorithmic level and serves as a bridge from high-level specifications to low-level BSP parallel programs.
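The abstract's central notion, a homomorphic skeleton whose block-wise evaluation matches BSP's superstep structure, can be sketched informally. The Python below is a hypothetical illustration, not the paper's BH definition: it evaluates a list homomorphism locally per "processor" block, then combines the partial results, mirroring a local-computation superstep followed by a combine after the barrier.

```python
from functools import reduce

def homomorphism(f, combine, identity, xs):
    """Evaluate the list homomorphism h with h [a] = f(a) and
    h (x ++ y) = combine(h(x), h(y))."""
    return reduce(combine, map(f, xs), identity)

def bsp_homomorphism(f, combine, identity, xs, p):
    """BSP-style evaluation: split into p blocks, compute each block
    locally (one superstep), then combine the partial results after
    the barrier (a second superstep)."""
    n = len(xs)
    blocks = [xs[i * n // p:(i + 1) * n // p] for i in range(p)]
    partials = [homomorphism(f, combine, identity, b) for b in blocks]  # local phase
    return reduce(combine, partials, identity)  # combine after the barrier

# Example: sum is a homomorphism, so block-wise evaluation is exact
data = list(range(10))
total = bsp_homomorphism(lambda a: a, lambda x, y: x + y, 0, data, 3)
```

Associativity of `combine` is what makes the block partition irrelevant to the result, which is exactly the property a homomorphic skeleton demands.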

### Mathematical Engineering

- in Proc. Annual European Conference on Parallel Processing (Euro-Par 2003), LNCS 2790 (Springer-Verlag)
, 2003

Abstract

Trees are useful data structures, but designing efficient parallel programs over trees is known to be more difficult than over lists. Although several important tree skeletons have been proposed to simplify parallel programming on trees, few studies have been reported on how to use them systematically in solving practical problems; it is neither clear how to make a good combination of skeletons to solve a given problem, nor obvious how to find the suitable operators used in a single skeleton. In this paper, we report our first attempt to resolve these problems, proposing two important transformations: the tree diffusion transformation and the tree context preservation transformation. The tree diffusion transformation allows one to use familiar recursive definitions to develop parallel programs, while the tree context preservation transformation shows how to derive the associative operators that are required when using tree skeletons. We illustrate our approach by deriving an efficient parallel program for solving a nontrivial problem called the party planning problem, the tree version of the famous maximum-weight-sum problem.
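The party planning problem mentioned above has a standard sequential recursive definition, the kind of "familiar recursive definition" the tree diffusion transformation starts from. A minimal sketch follows; the `(weight, children)` tuple encoding is an assumption of this illustration, not the paper's notation.

```python
def party(tree):
    """Party planning (maximum-weight independent set on a tree):
    no invited guest may attend together with their direct superior.
    Returns (best sum including the root, best sum excluding it)."""
    weight, children = tree
    with_root, without_root = weight, 0
    for child in children:
        cw, cwo = party(child)
        with_root += cwo               # root invited: children excluded
        without_root += max(cw, cwo)   # root excluded: free choice below
    return with_root, without_root

# A small hierarchy: root of weight 5 with two subtrees
t = (5, [(3, []), (4, [(6, [])])])
best = max(party(t))  # invite the root (5) and the grandchild (6)
```

The two transformations in the paper are precisely about turning such a recursion into skeleton calls with associative operators, so the same computation runs in parallel over a distributed tree.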

### tokyo.ac.jp

Abstract

Generate-Test-Aggregate (GTA for short) is a novel programming model for MapReduce, dramatically simplifying the development of efficient parallel algorithms. Under the GTA model, a parallel computation is encoded in a simple pattern: generate all candidates, test them to filter out invalid ones, and aggregate the valid ones to make the result. Once users specify their parallel computations in the GTA style, they get efficient MapReduce programs for free, owing to an automatic optimization given by the GTA theory. In this paper, we report our implementation of a GTA library to support programming in the GTA model. The library provides a compact programming interface that hides the complexity of GTA's internal transformation, so that many problems can be encoded in the GTA style easily and straightforwardly. The GTA transformation and optimization mechanism implemented inside the library is a black box to end users, while users can extend the library by modifying existing (or implementing new) generators, testers, or aggregators through the library's standard programming interfaces. The library supports both sequential and parallel execution on a single computer as well as on-cluster execution with MapReduce computing engines. We evaluate the library by reporting the results of experiments on large data sets, showing its efficiency, scalability, and usefulness.
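The GTA pattern itself is easy to state in miniature. Below is a hypothetical sketch (not the library's API) of the specification side for a 0-1 knapsack: generate all candidate selections, test the weight limit, aggregate the maximum value. This naive version is exponential; the point of the GTA theory is that such specifications are optimized into efficient MapReduce programs automatically.

```python
from itertools import combinations

def gta_knapsack(items, capacity):
    """Naive Generate-Test-Aggregate specification of 0-1 knapsack.
    items is a list of (weight, value) pairs."""
    # Generate: every subset of the items
    candidates = (subset for r in range(len(items) + 1)
                  for subset in combinations(items, r))
    # Test: total weight must fit within the capacity
    valid = (s for s in candidates if sum(w for w, _ in s) <= capacity)
    # Aggregate: take the maximum total value among valid candidates
    return max(sum(v for _, v in s) for s in valid)

items = [(2, 3), (3, 4), (4, 5)]  # (weight, value) pairs
answer = gta_knapsack(items, 5)   # best is (2, 3) + (3, 4): value 7
```

The readable-but-exponential specification is the user-facing side; a GTA-style library fuses the three stages so the enumeration never materializes.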

### Filter-embedding Semiring Fusion for Programming with MapReduce

- Under consideration for publication in Formal Aspects of Computing

Abstract

We show that MapReduce, the de facto standard for large-scale data-intensive parallel programming, can be equipped with a programming theory in calculational form. By integrating the generate-and-test programming paradigm with semirings for aggregation of results, we propose a novel parallel programming framework for MapReduce. The framework consists of two important calculation theorems: the shortcut fusion theorem of semiring homomorphisms bridges the gap between specifications and efficient implementations, and the filter-embedding theorem helps to develop parallel programs in a systematic and incremental way.
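To give a feel for what the fusion theorem buys, here is a hand-fused counterpart of a naive generate-and-test knapsack specification: a single pass in the max-plus semiring that keeps one aggregate per test state (weight used) instead of enumerating candidates. This is an illustrative sketch, not the paper's mechanical derivation.

```python
def knapsack_fused(items, capacity):
    """Fused form of the generate/test/aggregate knapsack spec:
    one pass over the input, tracking best[w] = maximum value
    achievable with total weight exactly w (max-plus semiring),
    rather than enumerating all 2^n candidate selections."""
    NEG = float('-inf')
    best = [NEG] * (capacity + 1)
    best[0] = 0  # the empty selection uses weight 0, value 0
    for w, v in items:
        # descend so each item is used at most once
        for used in range(capacity, w - 1, -1):
            if best[used - w] != NEG:
                best[used] = max(best[used], best[used - w] + v)  # max-plus step
    return max(best)

answer = knapsack_fused([(2, 3), (3, 4), (4, 5)], 5)
```

The loop body is the semiring homomorphism the theorem produces: `max` plays addition, `+` plays multiplication, and the weight index is the embedded filter state.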

### Systematic Development of Correct Bulk Synchronous Parallel Programs

Abstract

With the current generalisation of parallel architectures arises the concern of applying formal methods to parallelism. The complexity of parallel programs, compared to sequential ones, makes them more error-prone and difficult to verify. Bulk Synchronous Parallelism (BSP) is a model of computation which offers a high degree of abstraction, like PRAM models, yet a realistic cost model based on structured parallelism. We propose a framework for refining a sequential specification toward a functional BSP program, the whole process being done with the help of the Coq proof assistant. To do so we define BH, a new homomorphic skeleton, which captures the essence of BSP computation at the algorithmic level and also serves as a bridge from high-level specifications to low-level BSP parallel programs.
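As a concrete picture of the BSP execution style the abstract refers to, here is a toy Python simulation of a two-superstep prefix-sum; the function name and block encoding are inventions of this sketch, not part of BH.

```python
def bsp_prefix_sums(blocks):
    """Toy simulation of a BSP prefix-sum where each 'processor'
    holds one block. Superstep 1: local block sums, then a simulated
    total exchange at the barrier. Superstep 2: each processor offsets
    its local prefix sums by the sums of all blocks to its left."""
    # Superstep 1: purely local computation
    local_sums = [sum(b) for b in blocks]
    # (communication + barrier: every processor now knows local_sums)
    results = []
    for pid, block in enumerate(blocks):
        offset = sum(local_sums[:pid])  # contribution of processors < pid
        acc, out = offset, []
        for x in block:                 # Superstep 2: local computation
            acc += x
            out.append(acc)
        results.append(out)
    return results

# three "processors" holding uneven blocks
scan = bsp_prefix_sums([[1, 2], [3, 4], [5]])
```

The rigid superstep/barrier shape is what gives BSP its predictable cost model, and what a homomorphic skeleton like BH compiles down to.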

### Generate, Test, and Aggregate: A Calculation-based Framework for Systematic Parallel Programming with MapReduce

Abstract

MapReduce, being inspired by the map and reduce primitives available in many functional languages, is the de facto standard for large-scale data-intensive parallel programming. Although it has succeeded in popularizing the use of the two primitives for hiding the details of parallel computation, little effort has been made to emphasize the programming methodology behind them, which has been intensively studied in the functional programming and program calculation fields. We show that MapReduce can be equipped with a programming theory in calculational form. By integrating the generate-and-test programming paradigm with semirings for aggregation of results, we propose a novel parallel programming framework for MapReduce. The framework consists of two important calculation theorems: the shortcut fusion theorem of semiring homomorphisms bridges the gap between specifications and efficient implementations, and the filter-embedding theorem helps to develop parallel programs in a systematic and incremental way. We give nontrivial examples that demonstrate how to apply our framework.
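The role of the semiring in this framework can be shown with a tiny example: a single generator (all subsequences of a list) aggregated in two different semirings, without ever enumerating the 2^n candidates. The function and its encoding are illustrative assumptions, not the paper's formulation.

```python
def subseq_aggregate(xs, oplus, otimes, one, embed):
    """Aggregate over all 2^len(xs) subsequences in one linear pass:
    each element either joins (embed(x)) or is skipped (one, the
    multiplicative identity), and the semiring product of
    (one ⊕ embed(x)) factors fuses the enumeration away."""
    acc = one
    for x in xs:
        acc = otimes(acc, oplus(one, embed(x)))
    return acc

xs = [3, -1, 4]
# Counting semiring (+, *): number of subsequences, i.e. 2**3
count = subseq_aggregate(xs, lambda a, b: a + b, lambda a, b: a * b, 1, lambda x: 1)
# Max-plus semiring (max, +): maximum subsequence sum (sum of positives)
best = subseq_aggregate(xs, max, lambda a, b: a + b, 0, lambda x: x)
```

Swapping the aggregator (the semiring) changes the question asked of the same generator, which is exactly the modularity the GTA framework exploits.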

### Experimentation, Theory

Abstract

Tree contraction algorithms, first proposed by Miller and Reif, are important parallel algorithms for implementing efficient parallel programs that manipulate trees. Despite their efficiency, tree contraction algorithms have not been widely used, owing to the difficulty of deriving the tree contracting operations. In particular, the derivation of the contracting operations is especially difficult when multiple values are referenced and updated in each step of the contraction. Such computations often appear in dynamic programming problems on trees. In this paper, we propose an algebraic approach to deriving tree contraction programs from recursive tree programs, by focusing on the properties of commutative semirings. We formalize a new condition for implementing tree reductions with tree contraction algorithms, and give a systematic derivation of the tree contracting operations. Based on this, we implemented a code generator for tree reductions, with an optimization mechanism that removes unnecessary computations from the derived parallel programs. As far as we are aware, this is the first step towards an automatic parallelization system for the development of efficient tree programs.
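The algebraic core the abstract alludes to can be shown in miniature: over a semiring, functions of the form x ↦ p ⊗ x ⊕ q are closed under composition, so chains of pending computations can be collapsed in any association order, which is exactly what contraction steps need. A sketch over the ordinary (+, ×) semiring; the pair representation is an assumption of this illustration.

```python
from functools import reduce

def compose(f, g):
    """Compose two 'semiring-linear' functions, each represented as a
    pair (p, q) denoting x -> p * x + q.  f∘g stays in the class:
    f(g(x)) = p1*(p2*x + q2) + q1 = (p1*p2)*x + (p1*q2 + q1)."""
    p1, q1 = f
    p2, q2 = g
    return (p1 * p2, p1 * q2 + q1)

def apply_fn(f, x):
    """Evaluate the represented function at x."""
    p, q = f
    return p * x + q

fns = [(2, 1), (3, 5), (1, 4)]
# Collapsing left-to-right yields the same function as nested
# application, so independent contraction steps can run in parallel.
collapsed = reduce(compose, fns)
```

Because composition is associative, a path of such pending functions can be halved in each round, giving the O(log n) rounds of Miller-Reif-style contraction.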

### Parallel skeletons for manipulating general trees

, 2006

Abstract

Trees are important datatypes that are often used to represent structured data such as XML. Though trees are widely used in sequential programming, it is hard to write efficient parallel programs that manipulate them, because of their irregular and ill-balanced structures. In this paper, we propose a solution based on the skeletal approach. We formalize a set of skeletons (abstracted computational patterns) for rose trees (general trees of arbitrary shape) based on the theory of Constructive Algorithmics. Our skeletons for rose trees are extensions of those proposed for lists and binary trees. We show that the skeletons can be implemented efficiently in parallel by combining the parallel binary-tree skeletons for which efficient parallel implementations are already known. As far as we are aware, we are the first to have formalized and implemented a set of simple but expressive parallel skeletons for rose trees.
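The flavor of the rose-tree skeletons can be conveyed with a sequential Python model; the class and function names below are assumptions of this sketch, while the paper's skeletons are parallel and come with algebraic side conditions.

```python
class Rose:
    """A rose tree: one value and arbitrarily many subtrees."""
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def tree_map(f, t):
    """map skeleton: apply f at every node (each node is independent,
    so this pattern parallelizes trivially)."""
    return Rose(f(t.value), [tree_map(f, c) for c in t.children])

def tree_reduce(combine, t):
    """reduce skeleton: combine a node's value with the list of its
    children's results; a parallel implementation additionally needs
    combine to satisfy associativity-style conditions."""
    return combine(t.value, [tree_reduce(combine, c) for c in t.children])

t = Rose(1, [Rose(2), Rose(3, [Rose(4)])])
total = tree_reduce(lambda v, rs: v + sum(rs), tree_map(lambda x: x * 10, t))
```

The paper's contribution is implementing such rose-tree skeletons efficiently by encoding them into the known parallel binary-tree skeletons.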

### Systematic Derivation of Tree Contraction Algorithms

- in Parallel Processing Letters (World Scientific Publishing Company)

, 2004

Abstract

While tree contraction algorithms play an important role in efficient parallel tree computation, it is difficult to develop such algorithms because of the strict conditions imposed on contracting operators. In this paper, we propose a systematic method for deriving efficient tree contraction algorithms from recursive functions on trees. We identify a general recursive form that can be parallelized into efficient tree contraction algorithms, and present a derivation strategy for transforming general recursive functions into the parallelizable form. We illustrate our approach by deriving a novel parallel algorithm for the maximum connected-set sum problem on arbitrary trees, the tree version of the well-known maximum segment sum problem.
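The list version of the problem, maximum segment sum, has a classic homomorphic derivation that underlies work like this: each element maps to a quadruple (best segment, best prefix, best suffix, total), and quadruples combine associatively, so any block partition can be reduced in parallel. A sketch, with the convention that empty segments are allowed (so the result is at least 0):

```python
from functools import reduce

def mss(xs):
    """Maximum segment sum as a list homomorphism."""
    def single(a):
        m = max(a, 0)
        return (m, m, m, a)  # (segment, prefix, suffix, total) of [a]
    def combine(x, y):
        m1, p1, s1, t1 = x
        m2, p2, s2, t2 = y
        return (max(m1, m2, s1 + p2),  # best segment: left, right, or spanning
                max(p1, t1 + p2),      # best prefix of the concatenation
                max(s2, s1 + t2),      # best suffix of the concatenation
                t1 + t2)               # total sum
    return reduce(combine, map(single, xs))[0]

answer = mss([3, -4, 2, -1, 6, -3])  # the segment [2, -1, 6] sums to 7
```

Because `combine` is associative, the quadruples can be computed block-wise and merged in any order; the paper's contribution is a tree analogue of this derivation for connected sets.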