Results 1–10 of 13
Parallel Numerical Linear Algebra
, 1993
Abstract

Cited by 542 (26 self)
We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the nonsymmetric eigenvalue problem, and the singular value decomposition. We consider dense, band and sparse matrices.
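The survey uses matrix multiplication to illustrate general principles for constructing efficient parallel algorithms. A minimal sketch of the blocked formulation it alludes to, in plain NumPy: each (i, j) block of C is an independent task on a parallel machine, and the block size is an illustrative tuning parameter, not a value from the paper.

```python
import numpy as np

def blocked_matmul(A, B, bs=64):
    """Blocked matrix multiplication C = A @ B.

    Each (i, j) block of C is a sum over k of an A-block times a B-block;
    the (i, j) blocks are mutually independent, which is what makes the
    scheme parallelizable. bs is a hypothetical tuning parameter.
    """
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "inner dimensions must agree"
    C = np.zeros((n, p))
    for i in range(0, n, bs):
        for j in range(0, p, bs):
            for k in range(0, m, bs):
                # accumulate the (i, j) block from the k-th panel pair
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C
```

On a real parallel machine the blocks would be distributed across processors; here the loop order only illustrates the data decomposition.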
Developments and Trends in the Parallel Solution of Linear Systems
, 1999
Abstract

Cited by 5 (0 self)
In this review paper, we consider some important developments and trends in algorithm design for the solution of linear systems concentrating on aspects that involve the exploitation of parallelism. We briefly discuss the solution of dense linear systems, before studying the solution of sparse equations by direct and iterative methods. We consider preconditioning techniques for iterative solvers and discuss some of the present research issues in this field.
Parallel iterative solution methods for linear systems arising from discretized PDE's
 Lecture Notes on Parallel Iterative Methods for discretized PDE's. AGARD Special Course on Parallel Computing in CFD, available from http://www.math.ruu.nl/people/vorst/#lec
, 1995
Abstract

Cited by 3 (0 self)
In these notes we will present an overview of a number of related iterative methods for the solution of linear systems of equations. These methods are so-called Krylov projection type methods, and they include popular methods such as Conjugate Gradients, Bi-Conjugate Gradients, CGS, Bi-CGSTAB, QMR, LSQR and GMRES. We will show how these methods can be derived from simple basic iteration formulas. We will not give convergence proofs, but we will refer for these, as far as available, to the literature. Iterative methods are often used in combination with so-called preconditioning operators (approximations for the inverses of the operator of the system to be solved). Since these preconditioners are not essential in the derivation of the iterative methods, we will not give much attention to them in these notes. However, in most of the actual iteration schemes we have included them, in order to facilitate the use of these schemes in actual computations. For the application of the iterative schemes one usually thinks of linear sparse systems, e.g., like those arising in the finite element or finite difference approximations of (systems of) partial differential equations. However, the structure of the operators plays no explicit role in any of these schemes, and these schemes might also successfully be used to solve certain large dense linear systems; depending on the situation, that might be attractive in terms of numbers of floating point operations. It will turn out that all of the iterative methods are parallelizable in a straightforward manner. However, especially for computers with a memory hierarchy (i.e., with cache or vector registers), and for distributed memory computers, the performance can often be improved significantly through rescheduling of the operations. We will discuss parallel implementations, and occasionally we will report on experimental findings.
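The abstract notes that preconditioners are included in the actual iteration schemes to make them directly usable. A minimal sketch of preconditioned Conjugate Gradients in that spirit, assuming a symmetric positive definite system; the preconditioner is passed as a routine applying an approximate inverse, and the names here are illustrative, not from the notes.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=200):
    """Preconditioned Conjugate Gradients for an SPD matrix A.

    M_inv is a routine applying an approximation of A^{-1}
    (the preconditioning operator described in the abstract).
    """
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    z = M_inv(r)             # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # new search direction
        rz = rz_new
    return x
```

With `M_inv = lambda r: r` this reduces to plain CG; a diagonal (Jacobi) preconditioner is `lambda r: r / np.diag(A)`.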
Linear System Solvers: Sparse Iterative Methods
 PARALLEL NUMERICAL ALGORITHMS, ICASE/LARC INTERDISCIPLINARY SERIES IN SCIENCE AND ENGINEERING
, 1997
Abstract

Cited by 2 (0 self)
In this chapter we will present an overview of a number of related iterative methods for the solution of linear systems of equations. These methods are so-called Krylov projection type methods, and they include popular methods such as Conjugate Gradients, Bi-Conjugate Gradients, LSQR and GMRES. We will sketch how these methods can be derived from simple basic iteration formulas, and how they are interrelated. Iterative schemes are usually considered as an alternative for the solution of linear sparse systems, like those arising in, e.g., finite element or finite difference approximation of (systems of) partial differential equations. The structure of the operators plays no explicit role in any of these schemes, and the operator may be given even as a rule or a subroutine. Although these methods seem to be almost trivially parallelizable at first glance, this is sometimes a point of concern because of the inner products involved. We will consider this point in some detail. Iterative methods ...
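The abstract stresses that the operator "may be given even as a rule or a subroutine." A sketch of that matrix-free setting, assuming a symmetric positive definite operator: CG receives only a matvec callable, illustrated with a 1-D finite-difference Laplacian applied as a stencil without ever forming the matrix. The function names are illustrative.

```python
import numpy as np

def cg_matfree(matvec, b, tol=1e-10, maxiter=500):
    """Conjugate Gradients where the operator is supplied only as a
    subroutine (matvec), never as an explicit matrix."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

def laplacian_1d(v):
    """Apply the 1-D finite-difference Laplacian (Dirichlet boundaries)
    as a stencil rule: (A v)_i = 2 v_i - v_{i-1} - v_{i+1}."""
    w = 2.0 * v
    w[:-1] -= v[1:]
    w[1:] -= v[:-1]
    return w
```

The inner products (`r @ r`, `p @ Ap`) are the global reductions the abstract flags as the parallel bottleneck; everything else is local to each processor's portion of the vectors.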
Lecture Notes on Iterative Methods
, 1994
Abstract

Cited by 2 (0 self)
Introduction In these notes we will present an overview of a number of related iterative methods for the solution of linear systems of equations. These methods are so-called Krylov projection type methods, and they include popular methods such as Conjugate Gradients, Bi-Conjugate Gradients, LSQR and GMRES. We will show how these methods can be derived from simple basic iteration formulas. We will not give convergence proofs, but we will refer for these, as far as available, to the literature. Iterative methods are often used in combination with so-called preconditioning operators (approximations for the inverses of the operator of the system to be solved). Since these preconditioners are not essential in the derivation of these iterative methods, we will not discuss them explicitly in these notes. However, in most of the actual iteration schemes we have included them, in order to facilitate the use of these schemes in actual computations. For the application of the iterative schemes ...
AND
, 1991
Abstract
The GMRES method by Saad and Schultz is one of the most popular iterative methods for the solution of large sparse nonsymmetric linear systems of equations. The implementation proposed by Saad and Schultz uses the Arnoldi process and the modified Gram-Schmidt (MGS) method to compute orthonormal bases of certain Krylov subspaces. The MGS method requires many vector-vector operations, which can be difficult to implement efficiently on vector and parallel computers due to the low granularity of these operations. We present a new implementation of the GMRES method in which, for each Krylov subspace used, we first determine a Newton basis, and then orthogonalize it by computing a QR factorization of the matrix whose columns are the vectors of the Newton basis. In this way we replace the vector-vector operations of the MGS method by the task of computing a QR factorization of a dense matrix. This makes the implementation more flexible, and provides a possibility to adapt the computations to the computer at hand in order to achieve better performance.
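The core idea of the abstract, building a Newton basis and then orthogonalizing it with one dense QR factorization instead of MGS, can be sketched briefly. The shifts would in practice be Ritz-value estimates; here they are illustrative inputs, and the function name is hypothetical.

```python
import numpy as np

def newton_basis_qr(A, r0, shifts):
    """Build a Newton basis for a Krylov subspace, then orthogonalize it
    with a single dense QR factorization (in place of modified
    Gram-Schmidt).

    z_0 = r0 / ||r0||,  z_{k+1} = (A - shifts[k] I) z_k,  normalized.
    """
    n = r0.size
    m = len(shifts)
    Z = np.empty((n, m + 1))
    Z[:, 0] = r0 / np.linalg.norm(r0)
    for k, s in enumerate(shifts):
        v = A @ Z[:, k] - s * Z[:, k]       # apply the shifted operator
        Z[:, k + 1] = v / np.linalg.norm(v)
    # One dense QR replaces the many low-granularity vector-vector
    # operations of MGS -- the point made in the abstract.
    Q, R = np.linalg.qr(Z)
    return Q, R
```

The basis vectors z_k can be generated with matrix-vector products only, so the communication-heavy orthogonalization is deferred to a single dense-matrix kernel.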
CONJUGATE GRADIENT (CG)-TYPE METHOD FOR THE SOLUTION OF NEWTON’S EQUATION WITHIN OPTIMIZATION FRAMEWORKS
, 2004
"... A conjugate gradient (CG)type algorithm CG Plan is introduced for calculating an approximate solution of Newton’s equation within largescale optimization frameworks. The approximate solution must satisfy suitable properties to ensure global convergence. In practice, the CG algorithm is widely used ..."
Abstract
A conjugate gradient (CG)-type algorithm CG Plan is introduced for calculating an approximate solution of Newton’s equation within large-scale optimization frameworks. The approximate solution must satisfy suitable properties to ensure global convergence. In practice, the CG algorithm is widely used, but it is not suitable when the Hessian matrix is indefinite, as it can stop prematurely. CG Plan is a symmetric variant of the composite step Bi-CG method of Bank and Chan, suitably adapted for optimization problems. It is an alternative to CG that copes with the indefinite case. We show convergence for CG Plan, then prove that the practical implementation always provides a gradient-related direction within a truncated Newton method (algorithm TN Plan). Some preliminary numerical results support the theory.
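The failure mode the abstract describes, CG stopping prematurely on an indefinite Hessian, is what the standard truncated-Newton safeguard detects. The sketch below is that common safeguard (stop when the curvature p^T H p is nonpositive), not the CG Plan algorithm itself, which instead handles indefiniteness via composite Bi-CG-type steps; the fallback choice is illustrative.

```python
import numpy as np

def truncated_cg(H, g, tol=1e-8, maxiter=100):
    """Approximately solve H d = -g by CG, stopping early if negative
    curvature is detected (p^T H p <= 0).

    This is the generic truncated-Newton safeguard, NOT CG Plan; it
    merely illustrates why plain CG is unsuitable for indefinite H.
    """
    d = np.zeros_like(g)
    r = -g.copy()          # residual of H d = -g at d = 0
    p = r.copy()
    rr = r @ r
    for _ in range(maxiter):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0:
            # indefinite direction found: return progress so far, or
            # fall back to steepest descent if no step was taken
            return d if d.any() else -g
        alpha = rr / curv
        d += alpha * p
        r -= alpha * Hp
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return d
```

Either return branch yields a descent direction for the objective, which is the kind of "gradient related direction" property the abstract proves for CG Plan's practical implementation.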