Results 1–7 of 7
Compiling For Massively Parallel Architectures: A Perspective
, 1994
Abstract

Cited by 10 (1 self)
The problem of automatically generating programs for massively parallel computers is a very complicated one, mainly because there are many architectures, each of which seems to pose its own particular compilation problem. The purpose of this paper is to propose a framework in which to discuss the compilation process, and to show that the features which affect it are few and generate a small number of combinations. The paper is oriented toward fine-grained parallelization of static control programs, with emphasis on dataflow analysis, scheduling and placement. When going from there to more general programs and to coarser parallelism, one encounters new problems, some of which are discussed in the conclusion. Keywords: Massively Parallel Compilers, Automatic Parallelization. @ARTICLE{Feau:95, AUTHOR = {Paul Feautrier}, TITLE = {Compiling for Massively Parallel Architectures: a Perspective}, JOURNAL = {Microprogramming and Microprocessors}, YEAR = {1995}, NOTE = {to appear}}
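As a toy illustration of the dataflow-analysis step this abstract alludes to (a hypothetical sketch, not Feautrier's actual framework): in a static control loop with affine array subscripts, a uniform dependence distance between a write and a read determines whether iterations may run in parallel.

```python
# Hypothetical sketch (not the paper's method): in a static control loop
# "for i: a[f(i)] = ... a[g(i)] ...", a dependence links iterations i and j
# when f(i) == g(j).  For affine subscripts with equal leading coefficients,
# the distance is a constant; distance 0 means iterations are independent.

def dependence_distance(write_coeffs, read_coeffs):
    """Distance d such that the value written at iteration i is read at
    iteration i + d.  Subscripts are affine pairs (a, b) meaning a*i + b."""
    (aw, bw), (ar, br) = write_coeffs, read_coeffs
    if aw != ar:
        return None  # non-uniform: needs a full dataflow analysis
    return bw - br   # a[i + bw] is read where i + d + br == i + bw

# a[i] = a[i-1] + x  ->  distance 1: a sequential chain
print(dependence_distance((1, 0), (1, -1)))
# a[i] = a[i] + x    ->  distance 0: iterations are independent
print(dependence_distance((1, 0), (1, 0)))
```

A real parallelizer must of course handle multi-dimensional loop nests and non-uniform dependences, which is where the exact dataflow analysis discussed in the paper comes in.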
BSλ_p: Functional BSP Programs on Enumerated Vectors
, 2000
Abstract

Cited by 9 (8 self)
The BSλ_p calculus is a calculus of functional BSP programs on enumerated parallel vectors. This confluent calculus is defined and a parallel cost model is associated with a weak call-by-value strategy.
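The parallel cost model mentioned here builds on the standard BSP model, in which a superstep costs the maximum local work plus g times the maximum h-relation plus the synchronisation latency l. The sketch below illustrates that standard formula only, with hypothetical machine parameters; it is not the BSλ_p cost semantics itself.

```python
# Illustrative sketch of the standard BSP cost model: one superstep costs
# max_i(w_i) + g * max_i(h_i) + l, where w_i is the local work and h_i the
# words communicated by processor i, g the per-word communication cost, and
# l the barrier-synchronisation latency.  All figures below are hypothetical.

def superstep_cost(work, comm, g, l):
    """Cost of one BSP superstep from per-processor work and h-relations."""
    return max(work) + g * max(comm) + l

def program_cost(supersteps, g, l):
    """Total cost of a BSP program: the sum of its superstep costs."""
    return sum(superstep_cost(w, h, g, l) for w, h in supersteps)

# Two supersteps on 3 processors (hypothetical work/communication figures).
steps = [([100, 120, 90], [10, 5, 8]),   # compute + exchange
         ([80, 80, 80], [0, 0, 0])]      # purely local step
print(program_cost(steps, g=4, l=50))
```

Attaching such costs to a weak call-by-value strategy, as the paper does, makes the price of a functional BSP program predictable from its source.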
Concrete Data Structures and Functional Parallel Programming
, 1997
Abstract

Cited by 4 (3 self)
We present a framework for designing parallel programming languages whose semantics is functional and where communications are explicit. To this end, we specialize Brookes and Geva's generalized concrete data structures with a notion of explicit data layout and obtain a CCC of distributed structures called arrays. We find that arrays' symmetric replicated structures, suggested by the data-parallel SPMD paradigm, are incompatible with sum types. We then outline a functional language with explicitly-distributed (monomorphic) concrete types, including higher-order, sum and recursive ones. In this language, programs can be as large as the network and can observe communication events in other programs. Such flexibility is missing from current data-parallel languages and amounts to a fusion with their so-called annotations, directives or meta-languages.
Contribution to Semantics of a DataParallel Logic Programming Language
 Post International Logic Programming Symposium Workshop on Parallel Logic Programming Systems
, 1995
Abstract

Cited by 4 (4 self)
We propose an alternative approach to the usual introduction of parallelism in logic programming. Instead of detecting the intrinsic parallelism by an automatic and complex dataflow analysis, or upgrading standard logic languages with explicit concurrent control structures leading to task-oriented languages, we tightly integrate the concepts of the data-parallel programming model and of logic programming in a kernel language, called DPLog. It offers a simple centralized and synchronous vision to the programmer. We give this language a declarative semantics and a distributed asynchronous operational semantics. The equivalence theorem for these semantics establishes the soundness of the implementation. The expressiveness of the language is illustrated with examples. Keywords: Logic programming, Data-parallel languages, Design of programming languages, Semantics, MIMD architectures.
Array Structures and DataParallel Algorithms
 In
, 1996
Abstract

Cited by 3 (0 self)
We apply Brookes and Geva's theory of generalised concrete data structures and computational comonads to the semantics of higher-order data-parallel functional languages. This yields a mathematical framework for describing the interaction between higher-order functions, explicitly distributed data and asynchronous algorithms. Concrete data structures (or CDS) allow the construction of several Cartesian closed categories, standard models for typed functional languages. Brookes and Geva have studied generalised CDSs and so-called parallel algorithms as meanings for lambda-calculus terms. An input-output function may correspond to many algorithms. Their construction is adapted to data-parallel functional languages through concrete array structures with explicit data layout. We construct a subcategory of array gCDS preserved by exponentiation through isomorphisms relating higher-order objects to their local parts. This formalism brings notions of data locality, synchronisation and denotat...
Automatic Distribution of Data and Computations
 In Technical Report 2000/3
, 2000
Abstract

Cited by 1 (0 self)
The most critical factor in the performance of a distributed memory computer is the access frequency to remote data. This frequency may be reduced by a clever distribution of data and computations among processors and their memories. In the context of data-parallel languages (as, for instance, HPF), finding the proper distribution is the responsibility of the programmer. This paper explores another possibility, namely having the compiler determine the distribution using only information available in the source program. The paper shows that, with the help of elementary linear algebra techniques, one may find satisfactory placements provided the source program is limited to DO loops and arrays with affine subscripts.
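As a toy illustration of the kind of affine-subscript reasoning this abstract describes (the setup and names below are hypothetical, not the paper's algorithm): when a loop running on the owner of A[i] reads B[i + k] under a block distribution, a misaligned placement of B causes remote reads at block boundaries, while shifting B's placement by the subscript offset makes every read local.

```python
# Hypothetical sketch (not the paper's placement algorithm): with a block
# distribution of arrays over processors, count how many iterations of
# "A[i] = B[i + k]" read B from a remote processor, before and after
# aligning B's distribution with the subscript offset k.

def owner(index, block):
    """Processor owning element `index` under a block distribution."""
    return index // block

def remote_accesses(n, k, block, b_offset=0):
    """Remote reads of B[i + k] when iteration i runs on owner(A[i]);
    b_offset shifts B's distribution relative to A's."""
    return sum(1 for i in range(n)
               if owner(i, block) != owner(i + k + b_offset, block))

# 100 iterations, blocks of 25, subscript offset k = 1 (hypothetical figures).
print(remote_accesses(100, 1, 25))               # misaligned: boundary reads are remote
print(remote_accesses(100, 1, 25, b_offset=-1))  # B shifted by -k: all reads local
```

Choosing such offsets for arbitrary affine subscripts is exactly a small linear algebra problem, which is why the restriction to DO loops with affine subscripts makes the placement computable.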
Contribution to the Design and the Semantics of a DataParallel Logic Programming Language
, 1995
Abstract

Cited by 1 (1 self)
We propose an alternative approach to the usual introduction of parallelism in logic programming. Instead of detecting the intrinsic parallelism by an automatic and complex dataflow analysis, or upgrading standard logic languages with explicit concurrent control structures leading to task-oriented languages, we tightly integrate the concepts of the data-parallel programming model and of logic programming in a kernel language, called DPLog. It offers a simple centralized and synchronous vision to the programmer. We give this language a declarative semantics and a distributed asynchronous operational semantics. The equivalence theorem for these semantics establishes the soundness of the implementation. The expressiveness of the language is illustrated with examples. This document is an extended version of [18] which incorporates a missing proof.