Results 1 - 5 of 5
Delinearization: an Efficient Way to Break Multiloop Dependence Equations
 In Proc. of the SIGPLAN '92 Conference on Programming Language Design and Implementation
, 1992
Abstract

Cited by 21 (0 self)
Exact and efficient data dependence testing is key to the success of a loop-parallelizing compiler for computationally intensive programs. A number of algorithms have been created to test array references contained in parameterized loops for dependence, but most of them are unable to answer the following question correctly: are the references C(i1 + 10*j1) and C(i2 + 10*j2 + 5), with 0 ≤ i1, i2 ≤ 4 and 0 ≤ j1, j2 ≤ 9, independent? The technique introduced in this paper recognizes that i1, i2 and j1, j2 make different-order contributions to the subscript index, and breaks the dependence equation i1 + 10*j1 = i2 + 10*j2 + 5 into two equations, i1 = i2 + 5 and 10*j1 = 10*j2, which can then be solved independently. Since the resulting equations contain fewer variables, they are less expensive to solve. We call this technique delinearization because it is the reverse of the linearization much discussed in the literature. In the introduction we demonstrate that linearized references are used not infrequently in ...
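The independence claim in the abstract above can be checked mechanically. The following sketch (illustrative names, not the paper's implementation) verifies both by brute force and by the delinearized split that the two references never touch the same element:

```python
# Sketch: verify the example from the delinearization abstract.
# References C(i1 + 10*j1) and C(i2 + 10*j2 + 5) are dependent iff some
# index values within the loop bounds make the two subscripts equal.

def dependent_bruteforce():
    """Exhaustively search 0 <= i1,i2 <= 4 and 0 <= j1,j2 <= 9 for a
    solution of i1 + 10*j1 == i2 + 10*j2 + 5."""
    return any(
        i1 + 10 * j1 == i2 + 10 * j2 + 5
        for i1 in range(5) for i2 in range(5)
        for j1 in range(10) for j2 in range(10)
    )

def dependent_delinearized():
    """Delinearized test: because 0 <= i1,i2 <= 4, the low-order (i) and
    high-order (10*j) contributions cannot interfere, so the equation
    splits into i1 == i2 + 5 and 10*j1 == 10*j2, solved over far smaller
    search spaces."""
    i_solvable = any(i1 == i2 + 5 for i1 in range(5) for i2 in range(5))
    j_solvable = any(10 * j1 == 10 * j2 for j1 in range(10) for j2 in range(10))
    return i_solvable and j_solvable

# Both tests report no dependence: i1 = i2 + 5 has no solution with i1 <= 4,
# so the references are independent.
print(dependent_bruteforce(), dependent_delinearized())  # False False
```

The delinearized form examines 25 + 100 index pairs instead of 2500 combinations, which is the cost advantage the abstract describes.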
Data Parallel Programming: A Survey and a Proposal for a New Model
, 1993
Abstract

Cited by 5 (0 self)
We give a brief description of what we consider to be data parallel programming and processing, trying to pinpoint the typical problems and pitfalls that occur. We then proceed with a short annotated history of data parallel programming, and sketch a taxonomy in which data parallel languages can be classified. Finally we present our own model of data parallel programming, which is based on the view of parallel data collections as functions. We believe that this model has a number of distinct advantages, such as being abstract, independent of implicitly assumed machine models, and general.
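The abstract's model of "parallel data collections as functions" can be illustrated with a minimal sketch (our reading of the idea, not the authors' notation; all names are illustrative):

```python
# Sketch: a parallel data collection viewed as a function from index to value.
# Operations like map then become function composition, independent of any
# implicitly assumed machine model.

def collection(values):
    """Wrap a concrete list as an index -> value function."""
    return lambda i: values[i]

def pmap(f, coll):
    """Data-parallel map: apply f to every element by composing functions.
    No element order or processor layout is implied."""
    return lambda i: f(coll(i))

xs = collection([1, 2, 3, 4])
ys = pmap(lambda x: x * x, xs)
print([ys(i) for i in range(4)])  # [1, 4, 9, 16]
```

Because the collection is abstract (a function), the same program text admits serial, SIMD, or distributed realizations, which is the generality the abstract claims.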
Achieving Speedups for APL on an SIMD Distributed Memory Machine
, 1990
Abstract

Cited by 5 (3 self)
The potential speedup for SIMD parallel implementations of APL programs is considered. Both analytical and (simulated) empirical studies are presented. The approach is to recognize that nearly 95% of the operators appearing in APL programs are either scalar primitives, reductions, or indexing, and so the performance of these operators gives a good estimate of the speedup a full program might receive. Substantial speedups are demonstrated for these operators, and the empirical evidence accords with the analytical estimates. Keywords: APL, data parallel, parallelism, parallel programming, SIMD computers. This research has been funded by the Office of Naval Research Contract No. N0001486K0264 and the National Science Foundation Grant No. DCR 8416878.
Semantic Analysis of Straight Line C Code with Pointers
, 1992
Abstract

Cited by 1 (1 self)
In this paper, however, we focus on the difficult problems of automatic detection of parallelism. The most important ones are: ...
Performance Implications of Virtualisation of Massively Parallel Algorithm Implementation
, 1994
Abstract
In this paper we investigate the accuracy of performance prediction for virtualised implementations of parallel algorithms on massively parallel SIMD architectures. Virtualisation is the process by which algorithms that assume n processors are implemented on a system with p processors, where n > p. Virtualisation is implemented in some form by any parallel environment that allows algorithms to assume more processors than are physically available on the machine. The main contributions of this paper are the adaptation and practical evaluation of the best known algorithms for merging and sorting. We show that the Valiant/Kruskal merging algorithm can be implemented efficiently on the MasPar system; the actual running times shadow the theoretical bounds. Our results also show that some algorithms perform closer to their theoretically predicted performance than others. This work has implications for both algorithm designers and compiler writers since it provides insights into the effects of ...
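The virtualisation scheme described above can be sketched as follows (a minimal, sequentialised model of an n-on-p mapping; function and parameter names are illustrative, not from the paper):

```python
import math

# Sketch: virtualisation of n virtual processors onto p physical ones.
# Each physical processor simulates a contiguous block of ceil(n/p)
# virtual processor ids; this n/p factor is the cost the paper's
# performance predictions must account for.

def run_virtualised(n, p, step):
    """Execute step(virtual_pid) for n virtual processors on p physical
    processors. Physical processor q handles virtual ids
    [q*b, min((q+1)*b, n)), where b = ceil(n/p)."""
    b = math.ceil(n / p)  # virtualisation ratio
    results = [None] * n
    for q in range(p):  # physical processors (modelled sequentially here)
        for v in range(q * b, min((q + 1) * b, n)):
            results[v] = step(v)  # work of one virtual processor
    return results

# 10 virtual processors on a 4-processor machine, each doubling its id:
print(run_virtualised(n=10, p=4, step=lambda v: v * 2))
```

Each parallel step of the original algorithm thus costs roughly ceil(n/p) physical steps, which is why running times of well-virtualised algorithms can shadow their theoretical bounds with a predictable slowdown factor.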