Results 1–10 of 15
Self-adapting numerical software for next generation applications
 Int. J. High Perf. Comput. Appl., 2002
Abstract

Cited by 24 (6 self)
The challenge for the development of next generation software is the successful management of the complex grid environment while delivering to the scientist the full power of flexible compositions of the available algorithmic alternatives. Self-Adapting Numerical Software (SANS) systems are intended to meet this significant challenge. A SANS system comprises intelligent next generation numerical software that domain scientists – with disparate levels of knowledge of the algorithmic and programmatic complexities of the underlying numerical software – can use to easily express and efficiently solve their problem. The components of a SANS system are:

• A SANS agent with:
  – An intelligent component that automates method selection based on data, algorithm, and system attributes.
  – A system component that provides intelligent management of and access to the computational grid.
  – A history database that records relevant information generated by the intelligent component and maintains past performance data of the interaction (e.g., algorithmic, hardware-specific, etc.) between SANS components.
• A simple scripting language that allows a structured, multilayered implementation of the SANS while ensuring portability and extensibility of the user interface and underlying libraries.
• An XML/CCA-based vocabulary of metadata to describe behavioural properties of both data and algorithms.
• System components, including a runtime adaptive scheduler, and prototype libraries that automate the process of architecture-dependent tuning to optimize performance on different platforms.

A SANS system can dramatically improve the ability of computational scientists to model complex, interdisciplinary phenomena with maximum efficiency and a minimum of extra-domain expertise.
SANS innovations (and their generalizations) will provide to the scientific and engineering community a dynamic computational environment in which the most effective library components are automatically selected based on the problem characteristics, data attributes, and the state of the grid.
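To make the automated-selection idea concrete, here is a minimal, hypothetical sketch of what the intelligent component of a SANS agent might do. The function name `select_solver`, its attribute parameters, and the dispatch rules are all illustrative assumptions for this sketch, not the actual SANS design.

```python
# Hypothetical sketch of the "intelligent component" of a SANS agent:
# pick a linear solver for Ax = b from simple data attributes.
# Names and rules here are illustrative assumptions, not the SANS design.

def select_solver(symmetric: bool, positive_definite: bool, sparse: bool) -> str:
    """Return the name of a plausible solver given matrix attributes."""
    if symmetric and positive_definite:
        # SPD systems: iterative CG for sparse, Cholesky for dense
        return "conjugate_gradient" if sparse else "cholesky"
    if symmetric:
        # symmetric indefinite
        return "minres" if sparse else "ldlt"
    # general nonsymmetric
    return "gmres" if sparse else "lu"
```

A history database, as described above, could record which choice actually performed best on past problems and override these static rules on later calls.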
Self-Adapting Software for Numerical Linear Algebra and LAPACK for Clusters
 Parallel Computing, 2003
Abstract

Cited by 23 (16 self)
This article describes the context, design, and recent development of the LAPACK for Clusters (LFC) project. It has been developed in the framework of Self-Adapting Numerical Software (SANS), since we believe such an approach can deliver the convenience and ease of use of existing sequential environments bundled with the power and versatility of highly-tuned parallel codes that execute on clusters. Accomplishing this task is far from trivial, as we argue in the paper by presenting pertinent case studies and possible usage scenarios.
A Relational Approach to the Compilation of Sparse Matrix Programs
 In Proceedings of Euro-Par, 1997
Abstract

Cited by 19 (4 self)
We present a relational algebra based framework for compiling efficient sparse matrix code from dense DOANY loops and a specification of the representation of the sparse matrix. We present experimental data that demonstrates that the code generated by our compiler achieves performance competitive with that of handwritten codes for important computational kernels.

1 Introduction

Sparse matrix computations are ubiquitous in computational science. However, the development of high-performance software for sparse matrix computations is a tedious and error-prone task, for two reasons. First, there are no standard ways of storing sparse matrices, since a variety of formats are used to avoid storing zeros, and the best choice of format depends on the problem and the architecture. Second, for most algorithms, it takes a lot of code reorganization to produce an efficient sparse program that is tuned to a particular format. We illustrate these points by describing two formats: a...
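The format-dependence this abstract describes can be illustrated with a small sketch (an illustration, not this compiler's actual output): the same y = Ax kernel written once for a dense array and once for Compressed Sparse Row (CSR) storage, showing the code reorganization that a format choice forces.

```python
# Illustration: the same y = A*x kernel for dense storage and for
# Compressed Sparse Row (CSR). Not generated code; hand-written to show
# how the loop structure changes with the storage format.

def matvec_dense(A, x):
    """y = A*x over a dense row-major list-of-lists matrix."""
    n, m = len(A), len(A[0])
    y = [0.0] * n
    for i in range(n):
        for j in range(m):
            y[i] += A[i][j] * x[j]
    return y

def matvec_csr(vals, col_idx, row_ptr, x):
    """y = A*x where A is stored as CSR: only nonzeros are kept.
    vals/col_idx hold one entry per nonzero; row_ptr[i]:row_ptr[i+1]
    delimits row i's nonzeros."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[col_idx[k]]
    return y
```

The inner loop of the CSR version enumerates stored entries rather than columns, which is exactly the kind of restructuring that must be derived anew for each format.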
The Automatic Generation of Sparse Primitives
 ACM Transactions on Mathematical Software, 1996
Abstract

Cited by 12 (1 self)
In this paper, we discuss some of our experiences with this new approach.
On Automatic Data Structure Selection and Code Generation for Sparse Computations
 Lecture Notes in Computer Science, 1993
Abstract

Cited by 11 (5 self)
Traditionally, restructuring compilers were only able to apply program transformations in order to exploit certain characteristics of the target architecture. Adaptation of data structures was limited to, e.g., linearization or transposition of arrays. However, as more complex data structures are required to exploit characteristics of the data operated on, current compiler support appears to be inadequate. In this paper we present the implementation issues of a restructuring compiler that automatically converts programs operating on dense matrices into sparse code; i.e., after a suitable data structure has been selected for every dense matrix that is in fact sparse, the original code is adapted to operate on these data structures. This simplifies the task of the programmer and, in general, enables the compiler to apply more optimizations.

Index Terms: Restructuring Compilers, Sparse Computations, Sparse Matrices

1 Introduction

Development and maintenance of sparse codes is a complex tas...
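As a rough illustration of the data-structure selection step described above, a compiler pass might inspect the nonzero pattern of a matrix that is declared dense but is in fact sparse, and pick a storage scheme. The thresholds and scheme names below are assumptions made for this sketch, not the paper's actual selection rules.

```python
# Hypothetical sketch of data-structure selection: inspect a dense
# matrix's nonzero pattern and pick a compact storage scheme.
# Threshold and scheme names are illustrative assumptions only.

def choose_storage(A, density_threshold=0.25):
    """Return a storage-scheme name for list-of-lists matrix A."""
    n, m = len(A), len(A[0])
    nnz = sum(1 for row in A for v in row if v != 0)
    if nnz / (n * m) > density_threshold:
        return "dense"      # too many nonzeros to benefit from compression
    if all(A[i][j] == 0 for i in range(n) for j in range(m) if i != j):
        return "diagonal"   # all nonzeros on the main diagonal
    return "csr"            # general sparse fallback
```

Once a scheme is chosen per matrix, the original dense loops would be rewritten to traverse that scheme's storage, as the abstract describes.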
Compiling Parallel Code for Sparse Matrix Applications
 In Supercomputing, 1997
Abstract

Cited by 10 (1 self)
We have developed a framework based on relational algebra for compiling efficient sparse matrix code from dense DOANY loops and a specification of the representation of the sparse matrix. In this paper, we show how this framework can be used to generate parallel code, and present experimental data that demonstrates that the code generated by our Bernoulli compiler achieves performance competitive with that of handwritten codes for important computational kernels.

Keywords: parallelizing compilers, sparse matrix computations

1 Introduction

Sparse matrix computations are ubiquitous in computational science. However, the development of high-performance software for sparse matrix computations is a tedious and error-prone task, for two reasons. First, there is no standard way of storing sparse matrices, since a variety of formats are used to avoid storing zeros, and the best choice of format depends on the problem and the architecture. Second, for most algorithms, it takes a lo...
A Framework for Sparse Matrix Code Synthesis from High-level Specifications
2000
Abstract

Cited by 10 (1 self)
We present compiler technology for synthesizing sparse matrix code from (i) dense matrix code, and (ii) a description of the index structure of a sparse matrix. Our approach is to embed statement instances into a Cartesian product of statement iteration and data spaces, and to produce efficient sparse code by identifying common enumerations for multiple references to sparse matrices. The approach works for imperfectly-nested codes with dependences, and produces sparse code competitive with handwritten library code for the Basic Linear Algebra Subroutines (BLAS).

1 Introduction

Many applications that require high-performance computing perform computations on sparse matrices. For example, the finite-element method for solving partial differential equations approximately requires the solution of large linear systems of the form Ax = b, where A is a large sparse matrix. Some web-search engines and data-mining codes compute eigenvectors of large sparse matrices that represent how often cer...
Reshaping Access Patterns for Generating Sparse Codes
 In Proc. 7th Ann. Workshop on Languages and Compilers for Parallel Computing, 1994
Abstract

Cited by 8 (3 self)
In a new approach to the development of sparse codes, the programmer defines a particular algorithm on dense matrices which are actually sparse. The sparsity of the matrices, as indicated by the programmer, is only dealt with at compile time. The compiler selects an appropriate compact data structure and automatically converts the algorithm into code that takes advantage of the sparsity of the matrices. In order to achieve efficient sparse codes, the compiler must be able to reshape some access patterns before a data structure is selected. In this paper, we discuss a reshaping method that is based on unimodular transformations.

Index Terms: Program Transformations, Restructuring Compilers, Sparse Matrices

1 Introduction

Because of the inherent complexity of sparse codes, it is worthwhile to consider whether sparse codes can be generated automatically. In [7, 9] we have proposed an approach in which the algorithm is defined on dense matrices and automatically converted into sparse code. ...
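Loop interchange is the textbook instance of a unimodular transformation. The following sketch (an illustration of the general idea, not the paper's method) applies the interchange matrix T = [[0, 1], [1, 0]], whose determinant is -1 and which is therefore unimodular, to the points of a 2-deep iteration space.

```python
# Illustration of a unimodular loop transformation: interchange of a
# 2-deep loop nest corresponds to T = [[0, 1], [1, 0]] (det = -1).
# Applying T to each iteration point (i, j) yields the point (j, i).

def apply_unimodular(T, points):
    """Map each iteration point (i, j) through the 2x2 matrix T."""
    return [(T[0][0] * i + T[0][1] * j, T[1][0] * i + T[1][1] * j)
            for (i, j) in points]

interchange = [[0, 1], [1, 0]]
# Original nest: i outer (0..1), j inner (0..2)
original = [(i, j) for i in range(2) for j in range(3)]
# Lexicographic order of the transformed points is the interchanged
# execution order: the former inner index j now varies in the outer position.
reshaped = sorted(apply_unimodular(interchange, original))
```

Because T is invertible over the integers, the transformed nest covers exactly the same iteration points, which is what makes the reshaping legal when dependences permit.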
Sparse Code Generation for Imperfectly Nested Loops With Dependences
 11th ACM Int’l Conf. on Supercomputing, 1997
Abstract

Cited by 8 (1 self)
Standard restructuring compiler tools are based on polyhedral algebra and cannot be used to analyze or restructure sparse matrix codes. We have recently shown that tools based on relational algebra can be used to generate an efficient sparse matrix program from the corresponding dense matrix program and a specification of the sparse matrix format. This work was restricted to DOALL loops and loops with reductions. In this paper, we extend this approach to loops with dependences. Although our results are restricted to Compressed Hyperplane Storage formats, they apply to both perfectly nested loops and imperfectly nested loops.

1 Introduction

Although sparse matrix computations are ubiquitous in computational science, research in restructuring compilers has focused almost exclusively on dense matrix programs. This is because the tools used in restructuring compilers are based on the algebra of polyhedra, and can be used only when array subscripts are affine functions of loop index vari...
Automatic Parallelization of the Conjugate Gradient Algorithm
 In The Eighth International Workshop on Languages and Compilers for Parallel Computing, LNCS #1033, 1995
Abstract

Cited by 1 (1 self)
The conjugate gradient (CG) method is a popular Krylov space method for solving systems of linear equations of the form Ax = b, where A is a symmetric positive-definite matrix. This method can be applied regardless of whether A is dense or sparse. In this paper, we show how restructuring compiler technology can be applied to transform a sequential, dense matrix CG program into a parallel, sparse matrix CG program. On the IBM SP2, the performance of our compiled code is comparable to that of handwritten code from the PETSc library at Argonne.
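For reference, here is a minimal sequential CG sketch in plain Python for a dense A: only the textbook iteration, with none of the sparse storage or parallel distribution that the compiled and PETSc versions discussed above provide.

```python
# Textbook conjugate gradient for Ax = b with A symmetric positive-definite,
# over plain list-of-lists matrices. A minimal sequential sketch only.

def cg(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b by conjugate gradients, starting from x = 0."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual b - A*x (x = 0)
    p = r[:]                                   # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:                       # squared residual small enough
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x
```

In exact arithmetic CG converges in at most n iterations; the sparse variants replace the dense matrix-vector product `Ap` with a format-specific kernel, which is precisely where the compilation techniques above come in.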