Results 1 - 10 of 11
Software libraries for linear algebra computations on high performance computers
 SIAM REVIEW
, 1995
Abstract

Cited by 73 (17 self)
This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct highe...
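The block-partitioned idea described in this abstract can be sketched in a few lines. The point is that the multiply is organized around small blocks, so each block of the operands is reused many times while it sits in fast memory. This is only an illustrative sketch in plain Python; the libraries themselves delegate this kernel to tuned Level 3 BLAS routines such as DGEMM, and the `blocked_matmul` name and list-of-lists representation are assumptions for the example.

```python
# Minimal sketch of a block-partitioned matrix multiply (illustrative only;
# LAPACK/ScaLAPACK delegate this kernel to tuned Level 3 BLAS).

def blocked_matmul(A, B, nb):
    """Multiply n x n lists-of-lists A and B using nb x nb blocks.

    Each block of C is updated by a sequence of small block products, so
    every block of A and B is reused while it is "hot" in fast memory,
    which is what reduces data movement between memory-hierarchy levels.
    """
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, nb):          # block row of C
        for jj in range(0, n, nb):      # block column of C
            for kk in range(0, n, nb):  # inner block dimension
                for i in range(ii, min(ii + nb, n)):
                    for k in range(kk, min(kk + nb, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + nb, n)):
                            C[i][j] += a * B[k][j]
    return C
```

On a distributed memory machine the same blocking also batches communication: one message moves a whole block rather than a single element, amortizing the message startup cost the abstract mentions.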
The Design of a Parallel Dense Linear Algebra Software Library: Reduction to Hessenberg, Tridiagonal, and Bidiagonal Form
, 1995
The Design of Linear Algebra Libraries for High Performance Computers
, 1993
Abstract

Cited by 20 (2 self)
This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct ...
Templates for Linear Algebra Problems
, 1995
Abstract

Cited by 5 (1 self)
The increasing availability of advanced-architecture computers is having a very significant effect on all spheres of scientific computation, including algorithm research and software development in numerical linear algebra. Linear algebra – in particular, the solution of linear systems of equations and eigenvalue problems – lies at the heart of most calculations in scientific computing. This chapter discusses some of the recent developments in linear algebra designed to help the user on advanced-architecture computers. Much of the work in developing linear algebra software for advanced-architecture computers is motivated by the need to solve large problems on the fastest computers available. In this chapter, we focus on four basic issues: (1) the motivation for the work; (2) the development of standards for use in linear algebra and the building blocks for a library; (3) aspects of templates for the solution of large sparse systems of linear equations; and (4) templates for the solu...
CRPC Research into Linear Algebra Software for High Performance Computers
, 1994
Abstract

Cited by 4 (2 self)
In this paper we look at a number of approaches being investigated in the Center for Research on Parallel Computation (CRPC) to develop linear algebra software for high-performance computers. These approaches are exemplified by the LAPACK, templates, and ARPACK projects. LAPACK is a software library for performing dense and banded linear algebra computations, and was designed to run efficiently on high-performance computers. We focus on the design of the distributed memory version of LAPACK, and on an object-oriented interface to LAPACK. The templates project aims at making the task of developing sparse linear algebra software simpler and easier. Reusable software templates are provided that the user can then customize to modify and optimize a particular algorithm, and hence build more complex applications. ARPACK is a software package for solving large-scale eigenvalue problems, and is based on an implicitly restarted variant of the Arnoldi scheme. The paper focuses on issues impact...
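The "template" approach mentioned in this abstract – a reusable algorithmic skeleton the user customizes – can be illustrated with an iterative solver that takes the matrix-vector product as a user-supplied callable, so the same skeleton serves dense, sparse, or matrix-free operators. The sketch below is a plain-Python conjugate gradient skeleton written for this listing; the `cg_template` name, its signature, and the stopping rule are assumptions for the example, not code from any of the cited projects.

```python
# Illustrative "template" for an iterative solver: the user supplies matvec,
# the skeleton supplies the conjugate gradient iteration (for SPD systems).

def cg_template(matvec, b, x0, tol=1e-10, maxiter=100):
    """Solve A x = b where A is symmetric positive definite.

    matvec: callable mapping a vector (list of floats) to A times it.
    The template never touches A's storage format directly, which is
    the customization point the templates projects exploit.
    """
    x = list(x0)
    r = [bi - ai for bi, ai in zip(b, matvec(x))]   # initial residual
    p = list(r)                                     # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        if rs ** 0.5 < tol:                         # converged
            break
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

A user would customize this by swapping in a matvec tuned to their matrix (banded, compressed sparse row, or distributed), while the iteration logic stays untouched.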
Numerical linear algebra algorithms and software
Abstract
The increasing availability of advanced-architecture computers has a significant effect on all spheres of scientific computation, including algorithm research and software development in numerical linear algebra. Linear algebra – in particular, the solution of linear systems of equations – lies at the heart of most calculations in scientific computing. This paper discusses some of the recent developments in linear algebra designed to exploit these advanced-architecture computers. We discuss two broad classes of algorithms: those for dense, and those for sparse matrices. © 2000 Elsevier Science
THE DESIGN OF A PARALLEL DENSE LINEAR ALGEBRA SOFTWARE LIBRARY: REDUCTION TO HESSENBERG, TRIDIAGONAL, AND BIDIAGONAL FORM
, 1995
Constructing Numerical Software Libraries for High-Performance Computing Environments
Abstract
In this paper we look at a number of approaches being investigated in the ScaLAPACK Project to develop linear algebra software for high-performance computers. The focus is on issues impacting the design of scalable libraries for performing dense and sparse linear algebra computations on multicomputers.

1 Introduction

Linear algebra lies at the heart of many problems in computational science. It provides critical underpinning for much of the work on higher-level optimization algorithms and numerical solution of partial differential equations. It has proved to be a rich source of basic problems for work on compiler management of memory hierarchies and compiling for distributed-memory machines. Finally, it is serving as a testbed for our ideas on how to design, build, an...