Results 1 - 9 of 9
Regions: An Abstraction for Expressing Array Computation
 In ACM SIGAPL/SIGPLAN International Conference on Array Programming Languages
, 1998
Cited by 25 (14 self)

Abstract:
Most array languages, such as Fortran 90, Matlab, and APL, provide support for referencing arrays by extending the traditional array subscripting construct found in scalar languages. We present an alternative approach that exploits the concept of regions: a representation of index sets that can be named, manipulated with high-level operators, and syntactically separated from array references. This paper develops the concept of region-based programming and describes its benefits in the context of an idealized array language called RL. We show that regions simplify programming, reduce the likelihood of errors, and enable code reuse. Furthermore, we describe how regions accentuate the locality of array expressions and how ...
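The abstract's central idea, index sets that are named and kept syntactically separate from the arrays they reference, can be sketched in Python. The `Region` class and `apply_over` function below are hypothetical illustrations of the concept, not RL's actual constructs:

```python
class Region:
    """A named index set, kept separate from the arrays it indexes."""
    def __init__(self, indices):
        self.indices = tuple(indices)

def apply_over(region, dest, fn):
    """Evaluate fn at each index in the region, writing into dest."""
    for i in region.indices:
        dest[i] = fn(i)

# Hypothetical usage: a 3-point average written once over a named
# interior region, with no per-reference subscript bookkeeping.
a = [float(i * i) for i in range(8)]
b = [0.0] * 8
interior = Region(range(1, 7))
apply_over(interior, b, lambda i: (a[i - 1] + a[i] + a[i + 1]) / 3.0)
```

Because `interior` is a value, the same region can be reused across many array statements, which is the source of the reuse and error-reduction benefits the abstract claims.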
The Design and Implementation of a Region-Based Parallel Language
 University of Washington
, 2001
Cited by 20 (9 self)

Abstract:
This dissertation describes the design and implementation of ZPL.
An Implementation of the LPAR Parallel Programming Model for Scientific Computations
, 1993
Cited by 16 (7 self)

Abstract:
LPAR is a portable coarse-grain parallel programming model for nonuniform structured scientific applications running on MIMD message-passing architectures. Nonuniform applications, which include N-body methods and adaptive multilevel mesh methods, rely on complex dynamic data structures and are particularly difficult to implement on parallel computers. This paper introduces the LPAR programming abstractions and discusses some important implementation issues. We also present performance results on the Intel iPSC/860 and nCUBE/2 for a vortex dynamics application developed using LPAR.

1 Introduction

Recent developments in numerical methods for solving partial differential equations have emphasized elaborate, dynamic, nonuniform data structures. These methods attempt to place computational effort and accuracy in regions of high error or rapidly changing solutions. They are particularly attractive for solving local, nonuniform, time-dependent problems. Typical applications include fast...
A Programming Model for Block-Structured Scientific Calculations on SMP Clusters
 Ph.D. Dissertation, UCSD
, 1998
Runtime Data Distribution for Block-Structured Applications on Distributed Memory Computers
 In Proceedings of the 7th SIAM Conference on Parallel Processing for Scientific Computing
, 1995
Cited by 8 (1 self)

Abstract:
In many scientific applications running on parallel computers, efficient data decompositions exhibit a block structure which can be regular or irregular. We present a runtime strategy for both regular and irregular block-structured applications which provides a three-stage mapping including alignment, first-class decomposition objects, and block data movement. We have implemented this strategy using a small set of primitive operations, and present these along with performance results. In contrast to data-parallel Fortran dialects, our system executes completely at runtime and supports coarse-grained user-defined data decompositions.

1 Introduction

It is well-known that on distributed memory parallel computers, the distribution of data across processors can have a substantial impact on a program's performance. Under the message-passing programming model, the programmer must manage all aspects of data distribution by hand. Programming languages and runtime libraries that manage the l...
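The "first-class decomposition objects" mentioned above suggest a runtime value that owns the block-to-processor mapping and can be queried or passed around. A toy Python sketch under that assumption (all names are invented and bear no relation to the paper's actual primitives):

```python
class BlockDecomposition:
    """A first-class decomposition object for a regular 1-D block mapping."""
    def __init__(self, n, nprocs):
        self.n = n
        self.nprocs = nprocs
        self.block = -(-n // nprocs)  # ceiling division gives the block size

    def owner(self, i):
        # Processor that owns global index i under the block mapping.
        return i // self.block

    def local_indices(self, p):
        # Global indices assigned to processor p.
        lo = p * self.block
        return range(lo, min(lo + self.block, self.n))

# Hypothetical usage: 10 elements distributed over 3 processors.
d = BlockDecomposition(n=10, nprocs=3)
```

Making the decomposition an ordinary object, rather than a compile-time annotation as in data-parallel Fortran dialects, is what lets such a mapping be computed and changed entirely at runtime.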
Vcal: A Calculus for the Compilation of Data Parallel Languages
 In 8th Intl. Workshop on Languages and Compilers for Parallel Computing, number 1033 in LNCS
, 1995
Cited by 5 (4 self)

Abstract:
Vcal is a calculus designed to support the compilation of data parallel languages; it makes it possible to describe program transformations and optimizations as semantics-preserving rewrite rules. In Vcal the program transformation and optimization phase of a compiler is organized in three independent passes: in the first pass, a set of rewrite rules is applied that attempts to identify the potential parallelism of an algorithm.
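The idea of expressing compiler optimizations as semantics-preserving rewrite rules can be illustrated with the classic map-fusion rule, map f (map g x) = map (f . g) x. A minimal Python sketch of a rewriter applying such a rule (Vcal's actual calculus and rule set are not shown in the snippet):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Map:
    f: str          # name of the function being mapped
    arg: object     # sub-expression the map is applied to

def fuse_maps(expr):
    """Rewrite map f (map g x) -> map (f . g) x, bottom-up."""
    if isinstance(expr, Map):
        inner = fuse_maps(expr.arg)
        if isinstance(inner, Map):
            # Two traversals collapse into one: same result, fewer passes.
            return Map(f"{expr.f}.{inner.f}", inner.arg)
        return Map(expr.f, inner)
    return expr

e = Map("f", Map("g", Var("x")))
```

Because each rule preserves semantics, rules like this one can be applied in any order and combination without changing what the program computes, which is what makes the three-pass organization possible.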
Data Field Haskell
, 1999
Cited by 2 (1 self)

Abstract:
Data fields provide a flexible and highly general model for indexed collections of data. Data Field Haskell is a Haskell dialect that provides an instance of data fields. It can be used for very generic collection-oriented programming, with a special emphasis on multidimensional structures. We give a brief description of the data field model and its underlying theory. We then describe Data Field Haskell, and an implementation.

1 Introduction

Indexed data structures are important in many computing applications. The canonical indexed data structure is the array, but other indexed structures like hash tables and explicitly parallel entities are also common. In many applications the indexing capability provides an important part of the model: when solving partial differential equations, for instance, the index is often closely related to a physical coordinate, and explicitly parallel algorithms often use processor IDs as indices. Since the time of APL [5] it has been recognised ...
Development and Verification of Parallel Algorithms in the Data Field Model
 Proc. 2nd Int. Workshop on Constructive Methods for Parallel Programming
, 2000
Cited by 1 (1 self)

Abstract:
Data fields are partial functions provided with explicit domain information. They provide a very general, formal model for collections of data. Algorithms computing data collections can be described in this formalism at various levels of abstraction: in particular, explicit data distributions are easy to model. Parallel versions of algorithms can then be formally verified against algorithm specifications in the model. Functions computing data fields can be directly programmed in the language Data Field Haskell. In this paper we give a brief introduction to the data field model. We then describe Data Field Haskell and make a small case study of how both an algorithm and a parallel version of it can be specified in the language. We then verify the correctness of the parallel version in the data field model.

1 Introduction

Many computing applications require indexed data structures. In many applications the indexing capability provides an important part of the model. On the ...
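The definition above, a data field as a partial function paired with explicit domain information, can be sketched directly in Python. The class and method names below are invented for illustration; Data Field Haskell's actual operators are not shown in the snippet:

```python
class DataField:
    """A partial function paired with an explicit, finite domain."""
    def __init__(self, func, domain):
        self.func = func                  # index -> value
        self.domain = frozenset(domain)   # explicit domain information

    def __call__(self, i):
        # Applying the field outside its domain is an error, since the
        # underlying function is only partial.
        if i not in self.domain:
            raise KeyError(f"index {i} is outside the field's domain")
        return self.func(i)

    def restrict(self, indices):
        # A new field defined on the intersection of the domains; this is
        # the kind of operation that makes data distributions easy to model.
        return DataField(self.func, self.domain & frozenset(indices))

# Hypothetical usage: the squares field on indices 0..4.
f = DataField(lambda i: i * i, range(5))
```

Restricting a field to the index set owned by one processor gives a natural model of an explicit data distribution, which is how the formalism accommodates parallel versions of an algorithm.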
Using a Calculus for the Compilation of Data Parallel Languages
Abstract:
Parallelizing compiler systems employ complex analyses and require a profound knowledge of the properties of a target architecture. In most systems this knowledge and analysis is hardcoded into a large program.