Results 1 - 4 of 4
Compilation Techniques for Sparse Matrix Computations
In Proceedings of the 1993 International Conference on Supercomputing, 1993
Abstract

Cited by 31 (5 self)
The problem of compiler optimization of sparse codes is well known and no satisfactory solutions have been found yet. One of the major obstacles is the fact that sparse programs deal explicitly with the particular data structures selected for storing sparse matrices. This explicit data structure handling obscures the functionality of a code to such a degree that optimization of the code is prohibited, e.g. by the introduction of indirect addressing. The method presented in this paper postpones data structure selection until the compile phase, thereby allowing the compiler to combine code optimization with explicit data structure selection. Not only does this method enable the compiler to generate efficient code for sparse computations, it also greatly reduces the complexity of the programmer's task. Index Terms: Compilation Techniques, Optimization, Program Transformations, Restructuring Compilers, Sparse Computations, Sparse Matrices.
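The obstacle the abstract describes can be illustrated with a minimal sketch (our own, not taken from the paper): the same matrix-vector product written on a dense array, whose access pattern a compiler can analyze, and hand-written on compressed row storage, where the subscript `x[col[k]]` introduces the indirect addressing that blocks optimization. All names (`dense_matvec`, `crs_matvec`, `val`/`col`/`ptr`) are illustrative.

```python
def dense_matvec(A, x):
    # The dense code the programmer would like to write: the loop
    # structure and access pattern are fully visible to a compiler.
    n = len(A)
    y = [0.0] * n
    for i in range(n):
        for j in range(n):
            y[i] += A[i][j] * x[j]
    return y

def crs_matvec(val, col, ptr, x, n):
    # The sparse code the programmer must write today: x[col[k]] is
    # indirect addressing, which obscures the access pattern.
    y = [0.0] * n
    for i in range(n):
        for k in range(ptr[i], ptr[i + 1]):
            y[i] += val[k] * x[col[k]]
    return y

# A 3x3 matrix with 4 nonzeros, stored densely and in compressed row form.
A = [[4.0, 0.0, 0.0],
     [0.0, 0.0, 2.0],
     [1.0, 0.0, 3.0]]
val = [4.0, 2.0, 1.0, 3.0]   # nonzero values, row by row
col = [0, 2, 0, 2]           # column index of each nonzero
ptr = [0, 1, 2, 4]           # start of each row in val/col

x = [1.0, 1.0, 1.0]
assert dense_matvec(A, x) == crs_matvec(val, col, ptr, x, 3)
```

Both kernels compute the same result; only the first is amenable to standard dependence analysis, which is exactly why the paper argues for deriving the second form automatically.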
Advanced Compiler Optimizations for Sparse Computations
Journal of Parallel and Distributed Computing, 1995
Abstract

Cited by 16 (3 self)
Regular data dependence checking on sparse codes usually results in very conservative estimates of the actual dependences that will occur at runtime. This is caused by the use of compact data structures, which are necessary to exploit sparsity in order to reduce storage requirements and computational time. However, if the compiler is presented with dense code and automatically converts it into code that operates on sparse data structures, then the dependence information obtained by analysis of the original code can be used to exploit potential concurrency in the generated code. In this paper we present synchronization generating and manipulating techniques that are based on this concept.
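Why dependence tests on sparse code are conservative can be sketched in a few lines (an illustration of ours, not the paper's): in the dense loop every iteration writes a distinct `y[i]`, so the iterations are provably independent; in the sparse version the write goes through an index array, so a compile-time test must assume any two iterations may collide. The function names and `idx` array are hypothetical.

```python
def dense_scale(y, a):
    # Each iteration writes y[i] for a distinct i: a compile-time
    # dependence test can prove the iterations independent.
    for i in range(len(y)):
        y[i] = a * y[i]

def sparse_scale(y, a, idx):
    # The same computation through an index array: whether two iterations
    # touch the same y element depends on the runtime contents of idx,
    # so the compiler must conservatively assume a dependence.
    for k in range(len(idx)):
        y[idx[k]] = a * y[idx[k]]

# If idx holds distinct indices (as sparse row/column structures do),
# both loops are in fact parallel; only the dense form lets a compiler
# prove it, which is the concurrency the paper seeks to recover.
y1 = [1.0, 2.0, 3.0, 4.0]
dense_scale(y1, 2.0)

y2 = [1.0, 2.0, 3.0, 4.0]
sparse_scale(y2, 2.0, [2, 0, 3, 1])
assert y1 == y2
```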
Implementation of Fourier-Motzkin Elimination
1994
Abstract

Cited by 15 (1 self)
Every transformation of a perfectly nested loop consisting of a combination of loop interchanging, loop skewing and loop reversal can be modeled by a linear transformation represented by a unimodular matrix. This modeling offers more flexibility than the traditional stepwise application of loop transformations because we can directly construct a unimodular matrix for a particular goal. In this paper, we present implementation issues that arise when this framework is incorporated in a compiler.
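In the unimodular framework, the bounds of the transformed loop nest are computed by projecting the iteration space, which is where Fourier-Motzkin elimination enters. A toy sketch of the elimination step (our own illustration under simplifying assumptions, not the paper's implementation): each constraint `(a, b, c)` encodes `a*i + b*j <= c`, and eliminating `j` pairs every lower bound on `j` with every upper bound to produce constraints on `i` alone.

```python
from fractions import Fraction

def eliminate_j(constraints):
    # Each constraint (a, b, c) means a*i + b*j <= c; return the
    # projection onto i as constraints of the form a*i <= c.
    lowers, uppers, result = [], [], []
    for a, b, c in constraints:
        if b > 0:       # j <= (c - a*i)/b : an upper bound on j
            uppers.append((Fraction(a, b), Fraction(c, b)))
        elif b < 0:     # j >= (c - a*i)/b : a lower bound (sign flips)
            lowers.append((Fraction(a, b), Fraction(c, b)))
        else:           # j does not occur: keep the constraint as-is
            result.append((Fraction(a), Fraction(c)))
    # cl - al*i <= j <= cu - au*i  implies  (au - al)*i <= cu - cl.
    for al, cl in lowers:
        for au, cu in uppers:
            result.append((au - al, cu - cl))
    return result

# Triangular iteration space: 0 <= i <= 5 and i <= j <= 5.
constraints = [(-1, 0, 0), (1, 0, 5), (1, -1, 0), (0, 1, 5)]
projected = eliminate_j(constraints)

def feasible(i, cons):
    return all(a * i <= c for a, c in cons)

# The projection admits exactly i = 0, ..., 5, i.e. the outer loop bounds.
assert [i for i in range(-2, 8) if feasible(i, projected)] == [0, 1, 2, 3, 4, 5]
```

A production implementation must additionally remove redundant constraints and handle the integer (rather than rational) nature of loop indices, which are exactly the implementation issues the paper addresses.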
On Automatic Data Structure Selection and Code Generation for Sparse Computations
Lecture Notes in Computer Science, 1993
Abstract

Cited by 11 (5 self)
Traditionally, restructuring compilers were only able to apply program transformations in order to exploit certain characteristics of the target architecture. Adaptation of data structures was limited to, e.g., linearization or transposing of arrays. However, as more complex data structures are required to exploit characteristics of the data operated on, current compiler support appears to be inadequate. In this paper we present the implementation issues of a restructuring compiler that automatically converts programs operating on dense matrices into sparse code, i.e. after a suitable data structure has been selected for every dense matrix that is in fact sparse, the original code is adapted to operate on these data structures. This simplifies the task of the programmer and, in general, enables the compiler to apply more optimizations. Index Terms: Restructuring Compilers, Sparse Computations, Sparse Matrices.
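The two-step process the abstract outlines (select a compact structure for each matrix that is in fact sparse, then adapt the code to it) can be sketched as follows. This is our own illustration with hypothetical names, assuming compressed row storage is the structure selected; it is not the paper's compiler output.

```python
def row_sums_dense(A):
    # The dense original the programmer writes: every element is visited.
    return [sum(row) for row in A]

def select_crs(A):
    # Step 1, "data structure selection": build the compressed row
    # storage a compiler might choose for a matrix that is in fact sparse.
    val, col, ptr = [], [], [0]
    for row in A:
        for j, a in enumerate(row):
            if a != 0.0:
                val.append(a)
                col.append(j)
        ptr.append(len(val))
    return val, col, ptr

def row_sums_sparse(val, col, ptr, n):
    # Step 2, "code adaptation": the generated loop visits only the
    # stored nonzeros of each row instead of all n columns.
    return [sum(val[ptr[i]:ptr[i + 1]]) for i in range(n)]

A = [[0.0, 7.0, 0.0],
     [0.0, 0.0, 0.0],
     [5.0, 0.0, 9.0]]
val, col, ptr = select_crs(A)
assert row_sums_dense(A) == row_sums_sparse(val, col, ptr, 3)
```

The programmer only ever writes and maintains the dense version; the sparse version, with its more intricate bookkeeping, is derived mechanically, which is the reduction in programming effort the paper claims.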