Results 1 – 7 of 7
Gradual Refinement: Blending Pattern Matching with Data Abstraction
"... Abstract. Pattern matching is advantageous for understanding and reasoning about function definitions, but it tends to tightly couple the interface and implementation of a datatype. Significant effort has been invested in tackling this loss of modularity; however, decoupling patterns from concrete r ..."
Abstract

Cited by 4 (2 self)
Pattern matching is advantageous for understanding and reasoning about function definitions, but it tends to tightly couple the interface and implementation of a datatype. Significant effort has been invested in tackling this loss of modularity; however, decoupling patterns from concrete representations while maintaining soundness of reasoning has been a challenge. Inspired by the development of invertible programming, we propose an approach to abstract datatypes based on a right-invertible language rinv, in which every function has a right (or pre-) inverse. We show how this new design permits a smooth incremental transition from programs with algebraic datatypes and pattern matching to ones with proper encapsulation (implemented as abstract datatypes), while maintaining simple and sound reasoning.
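To make the idea concrete, here is a minimal Haskell sketch of decoupling pattern matching from a concrete representation. It is not the rinv language itself: it merely exposes a queue abstract datatype through a list view toList whose right inverse fromList satisfies toList (fromList xs) == xs, so clients can keep reasoning over the list shape while the representation stays hidden. The names Queue, fromList, toList, push and pop are illustrative only.

```haskell
-- Illustrative sketch only: a queue ADT exposed through a list view
-- with a right inverse, not the rinv language from the paper.
module QueueView (Queue, fromList, toList, push, pop) where

-- Hidden concrete representation: front list and reversed back list.
data Queue a = Queue [a] [a]

-- fromList is a right inverse of toList: toList (fromList xs) == xs.
fromList :: [a] -> Queue a
fromList xs = Queue xs []

toList :: Queue a -> [a]
toList (Queue front back) = front ++ reverse back

push :: a -> Queue a -> Queue a
push x (Queue front back) = Queue front (x : back)

-- Clients reason through the list view: pop behaves exactly like
-- pattern matching on (x : xs) over toList q.
pop :: Queue a -> Maybe (a, Queue a)
pop q = case toList q of
  []     -> Nothing
  x : xs -> Just (x, fromList xs)
```

Because fromList is a right inverse of toList, equational reasoning over the list view remains sound even after the concrete representation changes.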
Automatic Parallelization of Canonical Loops
"... Abstract. This paper presents a compilation technique that performs automatic parallelization of canonical loops. Canonical loops are a pattern observed in many well known algorithms, such as frequent itemsets, Kmeans and K nearest neighbors. Automatic parallelization allows application developers ..."
Abstract
This paper presents a compilation technique that performs automatic parallelization of canonical loops. Canonical loops are a pattern observed in many well-known algorithms, such as frequent itemsets, K-means, and K-nearest neighbors. Automatic parallelization allows application developers to focus on the algorithmic details of the problem they are solving, leaving to the compiler the task of generating correct and efficient parallel code. Our method splits tasks and data among stream processing elements and uses a novel technique based on labeled streams to minimize the communication between filters. Experiments performed on a cluster of 36 computers indicate that, for the three algorithms mentioned above, our method produces code that scales linearly with the number of available processors. These experiments also show that the automatically generated code is competitive when compared to hand-tuned programs.
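As a rough illustration of the loop shape the abstract refers to (not the paper's compiler or its labeled-stream runtime), the Haskell sketch below expresses a generic iterate-map-reduce loop: each iteration maps a kernel over the data set, reduces the partial results into a new state, and repeats until a convergence test holds. The names canonicalLoop, kernel, reduce and converged are assumptions for the sketch; the per-item map is the part an automatic parallelizer could split among processing elements.

```haskell
-- Illustrative sketch of a generic iterate-map-reduce loop; names and
-- types are assumptions, not the paper's compiler interface.
canonicalLoop
  :: (s -> a -> p)      -- per-item kernel (the parallelizable map)
  -> ([p] -> s -> s)    -- reduce partial results into a new state
  -> (s -> s -> Bool)   -- convergence test on old and new state
  -> [a]                -- data set
  -> s                  -- initial state
  -> s
canonicalLoop kernel reduce converged items = go
  where
    go st =
      let st' = reduce (map (kernel st) items) st
      in  if converged st st' then st' else go st'

-- A K-means iteration has this shape: assign each point to its nearest
-- centroid (kernel), recompute centroids (reduce), and stop when the
-- centroids no longer move (converged).
```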
Rapport n° RR2013012: Programming with BSP Homomorphisms
, 2013
"... Algorithmic skeletons in conjunction with list homomorphisms play an important role in formal development of parallel algorithms. We have designed a notion of homomorphism dedicated to bulk synchronous parallelism. In this paper we derive two application using this theory: sparse matrix vector multi ..."
Abstract
Algorithmic skeletons in conjunction with list homomorphisms play an important role in the formal development of parallel algorithms. We have designed a notion of homomorphism dedicated to bulk synchronous parallelism. In this paper we derive two applications using this theory: sparse matrix-vector multiplication and the all nearest smaller values problem. We implement support for BSP homomorphisms in the Orléans Skeleton Library and experiment with it on these two applications.
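For readers unfamiliar with the underlying notion, the Haskell sketch below shows an ordinary list homomorphism, which is what the BSP homomorphism of the paper specializes; it is not the paper's definition nor the Orléans Skeleton Library API. A function h is a list homomorphism when h (xs ++ ys) == h xs `odot` h ys for some associative odot, which is exactly what lets the input be split into blocks and reduced in parallel. The names hom and parSum are illustrative.

```haskell
-- Illustrative sketch of an ordinary list homomorphism; not the BSP
-- homomorphism of the paper nor the Orléans Skeleton Library API.
-- h is a homomorphism when h (xs ++ ys) == h xs `odot` h ys for an
-- associative odot whose unit is the result on the empty list.
hom :: (b -> b -> b)   -- associative combine (odot)
    -> (a -> b)        -- per-element function
    -> b               -- result on the empty list (unit of odot)
    -> [a] -> b
hom odot f e = foldr (\x acc -> f x `odot` acc) e

-- Blocks of the input can be reduced independently and then combined,
-- which is what makes homomorphisms a natural fit for parallel skeletons.
parSum :: [[Int]] -> Int
parSum blocks = foldr (+) 0 (map (hom (+) id 0) blocks)
```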
Towards Systematic Parallel Programming over MapReduce
"... Abstract. MapReduce is a useful and popular programming model for dataintensive distributed parallel computing. But it is still a challenge to develop parallel programs with MapReduce systematically, since it is usually not easy to derive a proper divideandconquer algorithm that matches MapReduce ..."
Abstract
MapReduce is a useful and popular programming model for data-intensive distributed parallel computing. But it is still a challenge to develop parallel programs with MapReduce systematically, since it is usually not easy to derive a proper divide-and-conquer algorithm that matches MapReduce. In this paper, we propose a homomorphism-based framework named Screwdriver for systematic parallel programming with MapReduce, making use of the program calculation theory of list homomorphisms. Screwdriver is implemented as a Java library on top of Hadoop. For any problem that can be resolved by two sequential functions satisfying the requirements of the third homomorphism theorem, Screwdriver can automatically derive a parallel algorithm as a list homomorphism and transform the initial sequential programs into an efficient MapReduce program. Users need neither to care about parallelism nor to have deep knowledge of MapReduce. In addition to the simplicity of the programming model of our framework, such a calculational approach enables us to resolve many problems that would be non-trivial to resolve directly with MapReduce.
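The sketch below illustrates, in Haskell rather than Screwdriver's Java API and with hypothetical names, the kind of input the third homomorphism theorem asks for: a leftwards definition (a foldl) and a rightwards definition (a foldr) of the same function. The theorem then guarantees the function is a list homomorphism, which a MapReduce program can compute by mapping over chunks and reducing the partial results.

```haskell
-- Illustrative sketch (hypothetical names) of the input shape required
-- by the third homomorphism theorem; not Screwdriver's actual API.
sumL :: [Int] -> Int
sumL = foldl (+) 0   -- leftwards (foldl) definition

sumR :: [Int] -> Int
sumR = foldr (+) 0   -- rightwards (foldr) definition of the same function

-- Derived parallel form: the MAP phase applies the sequential function
-- to each chunk, the REDUCE phase combines the partial results with the
-- associative operator.
sumMapReduce :: [[Int]] -> Int
sumMapReduce chunks = foldr (+) 0 (map sumL chunks)
```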
Rapport n° RR201001: Systematic Development of Functional Bulk Synchronous Parallel Programs
, 2010
"... With the current generalization of parallel architectures arises the concern of applying formal methods to parallelism, which allows specifications of parallel programs to be precisely stated and the correctness of an implementation to be verified. However, the complexity of parallel, compared to se ..."
Abstract
With the current generalization of parallel architectures arises the concern of applying formal methods to parallelism, which allows specifications of parallel programs to be precisely stated and the correctness of an implementation to be verified. However, the complexity of parallel programs, compared to sequential ones, makes them more error-prone and difficult to verify. This calls for a strongly structured form of parallelism, which should not only ease programming by providing abstractions that conceal much of the complexity of parallel computation, but also provide a systematic way of developing practical programs from specifications. Bulk Synchronous Parallelism (BSP) is a model of computation which offers a high degree of abstraction, like PRAM models, yet a realistic cost model based on structured parallelism. We propose a framework for refining a sequential specification towards a functional BSP program, the whole process being done with the help of a proof assistant. The main technical contributions of this paper are as follows: we define BH, a new homomorphic skeleton, which captures the essence of BSP computation at an algorithmic level, and
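As background for the BSP model mentioned above (not the BH skeleton or its proof-assistant development), the Haskell sketch below simulates a single BSP superstep over a list of per-processor states: each processor performs a local computation and emits destination-labeled messages, and the barrier is implicit in the fact that delivery only happens once all local computations have finished. The names superstep, localStep and receive are illustrative.

```haskell
-- Illustrative sketch of one BSP superstep simulated over a list of
-- per-processor states; hypothetical names, not the BH skeleton itself.
type Pid = Int

superstep
  :: (Pid -> s -> (s, [(Pid, m)]))  -- local computation + outgoing messages
  -> (s -> [m] -> s)                -- absorb messages received at the barrier
  -> [s] -> [s]                     -- per-processor states before and after
superstep localStep receive states = zipWith receive states' inboxes
  where
    results  = zipWith localStep [0 ..] states   -- asynchronous local phase
    states'  = map fst results
    outgoing = concatMap snd results             -- all (destination, message) pairs
    inboxes  = [ [ msg | (dest, msg) <- outgoing, dest == p ]
               | p <- [0 .. length states - 1] ] -- delivery after the barrier
```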
Functional Bulk Synchronous Parallel Programs
, 2010
"... With the current generalization of parallel architectures arises the concern of applying formal methods to parallelism, which allows specifications of parallel programs to be precisely stated and the correctness of an implementation to be verified. However, the complexity of parallel, compared to se ..."
Abstract
With the current generalization of parallel architectures arises the concern of applying formal methods to parallelism, which allows specifications of parallel programs to be precisely stated and the correctness of an implementation to be verified. However, the complexity of parallel programs, compared to sequential ones, makes them more error-prone and difficult to verify. This calls for a strongly structured form of parallelism, which should not only ease programming by providing abstractions that conceal much of the complexity of parallel computation, but also provide a systematic way of developing practical programs from specifications. Bulk Synchronous Parallelism (BSP) is a model of computation which offers a high degree of abstraction, like PRAM models, yet a realistic cost model based on structured parallelism. We propose a framework for refining a sequential specification towards a functional BSP program, the whole process being done with the help of a proof assistant. The main technical contributions of this paper are as follows: we define BH, a new homomorphic skeleton, which captures the essence of BSP computation at an algorithmic level, and