Results 1 - 3 of 3
Basic Concepts for Distributed Sparse Linear Algebra Operations, tech. report, 1994
Abstract

Cited by 2 (0 self)
We introduce basic concepts for describing the communication patterns in common operations such as the matrix times vector and matrix transpose times vector product, where the matrix is sparse and stored on distributed processors. At first we will describe
Solving Irregular Sparse Linear Systems On A Multicomputer Using The CGNR Method
Abstract
The efficient solution of irregular sparse linear systems on a distributed-memory parallel computer is still a major challenge. Direct methods suffer from unbalanced processing load or data distribution, as well as difficulties in reusing efficient sequential codes. Iterative methods of the Krylov family are well suited to parallel computing but can show disappointing convergence for general sparse problems; finding efficient parallel preconditioners is therefore often required to obtain acceptable convergence rates. In this paper we explore the use of a preconditioned Conjugate Gradient algorithm for the parallel solution of irregular sparse nonsymmetric systems. A first step is the choice of a high-quality matrix-partitioning algorithm; for this purpose we have selected the Metis package, developed by Karypis and Kumar at the University of Minnesota. A second step is the choice of the preconditioner. We have selected the Block Jacobi preconditioner for its in...
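The Block Jacobi idea mentioned in this abstract can be sketched in a few lines. This is a minimal dense illustration only, assuming contiguous equal-sized diagonal blocks; the function names are illustrative, not taken from the paper, and a real distributed code would invert only the local block owned by each processor.

```python
import numpy as np

def block_jacobi_preconditioner(A, block_size):
    # Invert each diagonal block of A independently (illustrative
    # dense version; hypothetical helper, not the paper's code).
    n = A.shape[0]
    inv_blocks = []
    for start in range(0, n, block_size):
        end = min(start + block_size, n)
        inv_blocks.append(np.linalg.inv(A[start:end, start:end]))
    return inv_blocks

def apply_preconditioner(inv_blocks, r, block_size):
    # Compute z = M^{-1} r, where M is the block-diagonal part of A.
    # Each block solve is local, which is why Block Jacobi
    # parallelizes without communication.
    z = np.empty_like(r)
    for i, inv in enumerate(inv_blocks):
        start = i * block_size
        end = start + inv.shape[0]
        z[start:end] = inv @ r[start:end]
    return z
```

Within a preconditioned CG iteration, `apply_preconditioner` would replace the plain residual `r` by `z = M^{-1} r` before computing the search direction.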
Basic Concepts for Distributed Sparse Linear Algebra Operations, 1994
Abstract
Introduction

We introduce basic concepts for describing the communication patterns in common operations such as the matrix times vector and matrix transpose times vector product, where the matrix is sparse and stored on distributed processors. At first we will describe a simple one-dimensional partitioning of the matrix, then we will describe the more general case where arbitrary elements are assigned to processors.

2 One-dimensional matrix partitioning

We start by describing a one-dimensional partitioning of the matrix, that is, a distribution of the matrix rows or columns to the processors. The discussion will describe only the distribution by rows, but the translation to a column partitioning is easily made. We assume that there exists a map

map : N = {1, ..., N} → P = {1, ..., P}

where N is the number of problem variables, and