Results 1–10 of 12
The Omega Test: a fast and practical integer programming algorithm for dependence analysis
Communications of the ACM, 1992
"... The Omega testi s ani nteger programmi ng algori thm that can determi ne whether a dependence exi sts between two array references, and i so, under what condi7: ns. Conventi nalwi[A m holds thati nteger programmiB techni:36 are far too expensi e to be used for dependence analysi6 except as a method ..."
Abstract

Cited by 450 (15 self)
The Omega test is an integer programming algorithm that can determine whether a dependence exists between two array references, and if so, under what conditions. Conventional wisdom holds that integer programming techniques are far too expensive to be used for dependence analysis, except as a method of last resort for situations that cannot be decided by simpler methods. We present evidence that suggests this wisdom is wrong, and that the Omega test is competitive with approximate algorithms used in practice and suitable for use in production compilers. Experiments suggest that, for almost all programs, the average time required by the Omega test to determine the direction vectors for an array pair is less than 500 μsecs on a 12 MIPS workstation. The Omega test is based on an extension of Fourier–Motzkin variable elimination (a linear programming method) to integer programming, and has worst-case exponential time complexity. However, we show that for many situations in which ...
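The abstract's starting point, Fourier–Motzkin elimination, can be sketched in a few lines. The code below is our own illustration, not the paper's implementation: it decides feasibility over the rationals only, and omits the integer-specific machinery (the "dark shadow" reasoning) that the Omega test adds. The example system encodes a dependence question where the rational relaxation says "maybe" but integer reasoning proves independence.

```python
def fm_eliminate(constraints, k):
    """One Fourier-Motzkin step: eliminate variable k from a system of
    constraints, each written as (coeffs, bound) meaning coeffs . x <= bound."""
    lower, upper, rest = [], [], []
    for a, b in constraints:
        (upper if a[k] > 0 else lower if a[k] < 0 else rest).append((a, b))
    for al, bl in lower:              # al[k] < 0: a lower bound on x_k
        for au, bu in upper:          # au[k] > 0: an upper bound on x_k
            m_l, m_u = au[k], -al[k]  # positive multipliers that cancel x_k
            c = [m_l * x + m_u * y for x, y in zip(al, au)]
            rest.append((c, m_l * bl + m_u * bu))
    return rest

def rationally_feasible(constraints, nvars):
    """Feasible over the rationals iff no fully eliminated constraint reads 0 <= b with b < 0."""
    for k in range(nvars):
        constraints = fm_eliminate(constraints, k)
    return all(b >= 0 for _, b in constraints)

# Dependence between A(2*i) and A(2*j+1), 1 <= i, j <= 10, requires 2i = 2j + 1,
# encoded here as the pair 2i - 2j <= 1 and 2j - 2i <= -1 plus the loop bounds.
system = [([2, -2], 1), ([-2, 2], -1),
          ([-1, 0], -1), ([1, 0], 10),   # 1 <= i <= 10
          ([0, -1], -1), ([0, 1], 10)]   # 1 <= j <= 10
print(rationally_feasible(system, 2))    # True: the rational point i = j + 1/2 exists
```

Because the rational relaxation is feasible, a test based on elimination alone must conservatively report a possible dependence; an exact integer test of the kind the paper describes proves independence, since an even subscript never equals an odd one.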
Practical Dependence Testing
, 1991
"... Precise and efficient dependence tests are essential to the effectiveness of a parallelizing compiler. This paper proposes a dependence testing scheme based on classifying pairs of subscripted variable references. Exact yet fast dependence tests are presented for certain classes of array references, ..."
Abstract

Cited by 138 (16 self)
Precise and efficient dependence tests are essential to the effectiveness of a parallelizing compiler. This paper proposes a dependence testing scheme based on classifying pairs of subscripted variable references. Exact yet fast dependence tests are presented for certain classes of array references, as well as empirical results showing that these references dominate scientific Fortran codes. These dependence tests are being implemented at Rice University in both PFC, a parallelizing compiler, and ParaScope, a parallel programming environment.
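One class of references for which exact yet fast tests exist is the single-index-variable pair with equal coefficients (the "strong SIV" case of the dependence-testing literature). The sketch below is our own rendering under that assumption, not code from the paper.

```python
def strong_siv(a, c_write, c_read, lo, hi):
    """Exact dependence test for the subscript pair a*i + c_write (store)
    vs a*i + c_read (load) in a loop running from lo to hi.
    A dependence exists iff the distance d = (c_read - c_write) / a
    is an integer with |d| <= hi - lo."""
    q, r = divmod(c_read - c_write, a)
    return r == 0 and abs(q) <= hi - lo

# A(2*i) vs A(2*i + 1): distance would be 1/2 -> provably independent
print(strong_siv(2, 0, 1, 1, 100))   # False
# A(i) vs A(i + 3): integer distance 3 -> loop-carried dependence
print(strong_siv(1, 0, 3, 1, 100))   # True
```

The test is exact for this class: it never reports a spurious dependence and never misses a real one, which is why classifying subscript pairs first pays off.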
An Efficient Data Dependence Analysis for Parallelizing Compilers
, 1990
"... this paper, we extend the existing numerical methods to overcome these difficulties. A geometrical analysis reveals that we can take advantage of the regular shape of the convex sets derived from multidimensional arrays in a data dependence test. The general methods proposed before assume very gene ..."
Abstract

Cited by 52 (3 self)
this paper, we extend the existing numerical methods to overcome these difficulties. A geometrical analysis reveals that we can take advantage of the regular shape of the convex sets derived from multidimensional arrays in a data dependence test. The general methods proposed before assume very general convex sets; this assumption causes their inefficiency. We have implemented a new algorithm called the λ-test and performed some measurements. Results were quite encouraging (see Section 4). As in earlier numerical methods, the proposed scheme uses Diophantine equations and bounds of real functions. The major difference lies in the way multiple dimensions are treated. In earlier numerical methods, data areas accessed by two array references are examined dimension by dimension. If the examination of any dimension shows that the two areas representing the subscript expressions are disjoint, there is no data dependence between the two references. However, if each pair of areas appears to overlap in each individual dimension, it is unclear whether there is an overlapped area when all dimensions are considered simultaneously. In this case, a data dependence has to be assumed. Our algorithm treats all dimensions simultaneously. Based on the subscripts, it selects a few suitable "viewing angles" so that it gets an exact view of the data areas. Selection of the viewing angles is rather straightforward and only a few angles are needed in most cases. We present the rest of our paper as follows. In Section 2, we give some examples to illustrate the difficulties in data dependence analysis on multidimensional array references. Some measurement results on a large set of real programs are presented to show the actual frequency of such difficult cases. In Section 3, we describe...
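The difficulty the abstract describes, where every dimension overlaps in isolation but no iteration pair conflicts when all dimensions are considered together, can be reproduced with a brute-force check. This is an illustration only (the subscripts and bound are ours); the paper's test decides the same question without enumeration.

```python
N = 10  # illustrative loop bound
iters = range(1, N + 1)
# In the same loop nest: write A(i, i), read A(j, j + 1).
dim1_overlaps = any(i == j     for i in iters for j in iters)   # first subscripts can match
dim2_overlaps = any(i == j + 1 for i in iters for j in iters)   # second subscripts can match
both_at_once  = any(i == j and i == j + 1                       # but never simultaneously
                    for i in iters for j in iters)
print(dim1_overlaps, dim2_overlaps, both_at_once)   # True True False
```

A dimension-by-dimension test must conservatively assume a dependence here; treating the dimensions simultaneously proves independence, which is the gap the abstract's method closes.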
Semantical Analysis and Mathematical Programming Application to Parallelization and Vectorization
Parallel and Distributed Algorithms, 1989
"... This paper investigates a new algorithm for solving systems of linear inequalities in the presence of integer parameters. The applications are to various problems in the analysis of scientific programs. We give methods for computing dependences, for dataflow analysis and for several code genera ..."
Abstract

Cited by 12 (4 self)
This paper investigates a new algorithm for solving systems of linear inequalities in the presence of integer parameters. The applications are to various problems in the analysis of scientific programs. We give methods for computing dependences, for dataflow analysis and for several code generation questions. These techniques are all relevant to the automatic and semiautomatic construction of programs for parallel and vector supercomputers. (email: Paul.Feautrier@prism.uvsq.fr)
1 Introduction
It is a well-known fact that scientific programs spend most of their running time in executing loops operating on arrays. Hence if a restructuring compiler is to be a success, it must be able to do a very thorough analysis of the addressing patterns in such loops. If taken in full generality, the problem is intractable. In this paper, we delimit a class of programs for which this analysis is possible: programs with so-called static control and linear indices. There are reasons to bel...
Tests des Dépendances et Transformations de Programme (Dependence Tests and Program Transformations)
, 1993
"... The parallelization of sequential programs requires several stages : analysis of dependence relations, representation of these dependences and application of transformations using this representation to find a parallel schedule for the program instructions. The success of parallelization depends on ..."
Abstract

Cited by 4 (1 self)
The parallelization of sequential programs requires several stages: analysis of dependence relations, representation of these dependences, and application of transformations using this representation to find a parallel schedule for the program instructions. The success of parallelization depends on the precision of the dependence test and of the dependence representation used. In this thesis, we present and compare different dependence test algorithms and different data dependence abstractions. The algorithm of the PIPS parallelizer is based on an approximate feasibility test using Fourier–Motzkin elimination. Our experiments show that, in practice, it is accurate enough for handling dependence systems, and that its practical complexity is polynomial. Different dependence abstractions have different precision. For deciding whether a transformation is legal, several abstractions are admissible, meaning that they carry enough information to decide the transformation's legality. The minimal a...
Analysis Of Standard And New Algorithms For The Integer And Linear Constraint Satisfaction Problem
, 1992
"... The integer and linear constraint satisfaction problem, which consists in proving the emptiness of the set of integer points satisfying a set of linear constraints or the existence of a solution, is very frequent in the field of computer science (vectorization, code scheduling, etc.). Most methods p ..."
Abstract

Cited by 2 (0 self)
The integer and linear constraint satisfaction problem, which consists in proving the emptiness of the set of integer points satisfying a set of linear constraints or the existence of a solution, is very frequent in the field of computer science (vectorization, code scheduling, etc.). Most methods proposed in the literature deal with various specific instances of this problem. In this paper, the problem is considered in its general form. Some standard methods are analyzed. Some new algorithms are proposed, either to simplify the problem, or to solve exactly (by cutting plane methods) the reduced form of the problem. A sequence of such algorithms is implemented in the automatic parallelizer available under PIAF, an Interactive Programming environment for FORTRAN. However, these algorithms could be applied to more "difficult" problems than the usual ones appearing in data dependence analysis.
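One simplification step such solvers commonly apply is dividing a constraint by the gcd of its coefficients and flooring the bound, a Chvátal–Gomory-style cut; the same gcd decides emptiness for an equality outright. The sketch below is our own illustration of that standard step, not the paper's implementation.

```python
from math import gcd
from functools import reduce

def tighten(coeffs, bound):
    """For integer x, a . x <= b implies (a/g) . x <= floor(b/g), where g = gcd
    of the coefficients. This either simplifies the constraint or strictly
    tightens it, at no loss of integer solutions."""
    g = reduce(gcd, (abs(c) for c in coeffs))
    return [c // g for c in coeffs], bound // g   # // is floor division in Python

print(tighten([2, 4], 5))    # ([1, 2], 2): 2x + 4y <= 5 tightens to x + 2y <= 2

# The gcd also decides emptiness for equalities: 2x + 4y = 5 has no integer
# solution because gcd(2, 4) = 2 does not divide 5.
print(5 % gcd(2, 4))         # 1, i.e. infeasible over the integers
```

Cuts of this form remove rational points without removing any integer point, which is why they can turn an inconclusive relaxation into an exact answer.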
Run-Time Dependence Testing by Integer Sequence Analysis
, 1992
"... A simple runtime data dependence test is presented which is based on a new formulation of the dependence problem. This test makes it possible to discern independence in the case of a potential selfoutput dependence in a loop (a case where the GCD test is useless) and in certain potential anti and ..."
Abstract

Cited by 2 (1 self)
A simple run-time data dependence test is presented which is based on a new formulation of the dependence problem. This test makes it possible to discern independence in the case of a potential self-output dependence in a loop (a case where the GCD test is useless) and in certain potential anti- and flow-dependences. The test handles subscript expression forms which arise in linearized arrays, making it possible to handle coupled subscripts with ease and do dependence testing on multiple dimensions at once. The test is useful for arbitrarily deep loop nests, and even allows the testing of a group of dependences in one step.
Keywords: parallelizing compilers, data dependence, integer sequences, linearization.
1 Introduction
The parallelizing compilers of today still have not been generally successful in producing executable code for real programs which makes consistently good use of the expensive parallel hardware they compile for. This is apparently not because of any lack of parallel...
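The self-output case where the GCD test is useless is easy to reproduce by brute force (an illustration under our own example subscript; the paper's integer-sequence test itself is not reproduced here). For a statement writing A(2*i) on every iteration, a cross-iteration output dependence needs 2*i1 = 2*i2 with i1 != i2; the GCD test asks only whether gcd(2, 2) divides the constant 0, which it always does.

```python
from math import gcd

# GCD test on 2*i1 - 2*i2 = 0: gcd(2, 2) = 2 divides 0, so it answers "maybe".
print(0 % gcd(2, 2) == 0)    # True: the GCD test cannot disprove the dependence

# Exhaustive check over the iteration space: the subscript 2*i is injective,
# so no pair of distinct iterations ever writes the same element.
print(any(2*i1 == 2*i2 for i1 in range(100)
                       for i2 in range(100) if i1 != i2))   # False
```

Any test of a self-dependence compares an expression with itself, so its constant term is 0 and the GCD divisibility condition is vacuously satisfied; a finer formulation, such as the integer-sequence one proposed here, is needed to discern independence.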
Extracting data flow information for parallelizing FORTRAN nested loop kernels
, 1994
"... Currently available parallelizing FORTRAN compilers expend a large amount of effort in determining data independent statements in a program such that these statements can be scheduled in parallel without need for synchronisation. This thesis hypothesises that it is just as important to derive exact ..."
Abstract

Cited by 1 (0 self)
Currently available parallelizing FORTRAN compilers expend a large amount of effort in determining data-independent statements in a program such that these statements can be scheduled in parallel without need for synchronisation. This thesis hypothesises that it is just as important to derive exact data flow information about the data dependencies where they exist. We focus on the specific problem of imperative nested loop parallelization by describing a direct method for determining the distance vectors of the inter-loop data dependencies in an n-nested loop kernel. These distance vectors define dependence arcs between iterations which are represented as points in n-dimensional Euclidean space. To demonstrate some of the benefits gained from deriving such exact data flow information about a nested loop computation we show how implicit task graph information about the computation can be deduced. Deriving the implicit task graph of the computation enables the parallelization of a class ...
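What a distance vector records can be illustrated by brute force over a small 2-nested iteration space (subscripts, names, and bounds here are ours; the thesis's direct method computes the vectors without enumeration).

```python
def distance_vectors(n, write_sub, read_sub):
    """Collect (di, dj) such that iteration (i2, j2) reads the array element
    that a lexicographically earlier iteration (i1, j1) wrote."""
    found = set()
    space = [(i, j) for i in range(n) for j in range(n)]
    for i1, j1 in space:
        for i2, j2 in space:
            if (i2, j2) > (i1, j1) and write_sub(i1, j1) == read_sub(i2, j2):
                found.add((i2 - i1, j2 - j1))
    return found

# Loop body A(i+1, j) = A(i, j+1) + ...: write subscript (i+1, j), read (i, j+1).
print(distance_vectors(6, lambda i, j: (i + 1, j),
                          lambda i, j: (i, j + 1)))   # {(1, -1)}
```

Each vector is a dependence arc between iteration points in the n-dimensional space; here every arc has the same shape (1, -1), i.e. the value produced at (i, j) is consumed one outer iteration later and one inner iteration earlier.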