Applied Numerical Linear Algebra
 Society for Industrial and Applied Mathematics
, 1997
Abstract

Cited by 532 (26 self)
We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the nonsymmetric eigenvalue problem, and the singular value decomposition. We consider dense, band, and sparse matrices.
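The matrix-multiplication illustration the survey mentions rests on blocking for memory locality, the same principle that drives efficient parallel implementations. A minimal numpy sketch of the idea (an illustrative sketch, not the survey's actual code; the function name `blocked_matmul` is hypothetical):

```python
import numpy as np

def blocked_matmul(A, B, bs=2):
    """Blocked matrix multiply: works on bs x bs tiles so that each tile
    can stay in fast memory (cache, local memory) while it is reused.
    numpy slicing clips at the edges, so n need not be divisible by bs."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, bs):
        for j in range(0, n, bs):
            for k in range(0, n, bs):
                # one tile-level multiply-accumulate; on a parallel
                # machine the (i, j) tiles are independent tasks
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C
```

The (i, j) loop iterations touch disjoint tiles of C, which is what makes the blocked form a natural unit of parallel work.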
GMRESR: A family of nested GMRES methods
 Num. Lin. Alg. with Appl
, 1991
Abstract

Cited by 58 (16 self)
Recently Eirola and Nevanlinna have proposed an iterative solution method for unsymmetric linear systems, in which the preconditioner is updated from step to step. Following their ideas we suggest variants of GMRES, in which a preconditioner is constructed at each iteration step by a suitable approximation process, e.g., by GMRES itself. Keywords: GMRES, nonsymmetric linear systems, iterative solver, EN-method. This version is dated June 23, 1992. Introduction: The GMRES method, proposed in [13], is a popular method for the iterative solution of sparse linear systems with an unsymmetric nonsingular matrix. In its original form, so-called full GMRES, it is optimal in the sense that it minimizes the residual over the current Krylov subspace. However, it is often too expensive, since the required orthogonalization work per iteration step grows quadratically with the number of steps. For that reason, one often uses variants of GMRES in practice. The most well-known variant, already suggested i...
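The quadratic orthogonalization cost of full GMRES that motivates these variants is visible in a minimal sketch of the method (assuming a zero initial guess; an illustrative sketch, not the paper's implementation):

```python
import numpy as np

def gmres(A, b, m=20, tol=1e-10):
    """Minimal full GMRES: build an Arnoldi basis of the Krylov subspace
    and minimize the residual over it via a small least-squares solve."""
    n = len(b)
    Q = np.zeros((n, m + 1))     # orthonormal Krylov basis
    H = np.zeros((m + 1, m))     # Hessenberg recurrence coefficients
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):   # modified Gram-Schmidt: j+1 inner products,
            H[i, j] = Q[:, i] @ w   # so total work grows quadratically in j
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < tol:    # happy breakdown: exact solution found
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    # minimize ||beta * e1 - H y|| over the Krylov subspace
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y
```

GMRESR replaces the fixed operator in the inner orthogonalization structure with an update computed by an approximate solve (possibly GMRES itself), which is the nesting referred to in the title.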
Developments and Trends in the Parallel Solution of Linear Systems
 Parallel Computing
, 1999
Abstract

Cited by 5 (0 self)
In this review paper, we consider some important developments and trends in algorithm design for the solution of linear systems, concentrating on aspects that involve the exploitation of parallelism. We briefly discuss the solution of dense linear systems, before studying the solution of sparse equations by direct and iterative methods. We consider preconditioning techniques for iterative solvers and discuss some of the present research issues in this field. Keywords: linear systems, dense matrices, sparse matrices, tridiagonal systems, parallelism, direct methods, iterative methods, Krylov methods, preconditioning. AMS(MOS) subject classifications: 65F05, 65F50. 1 Introduction: Solution methods for systems of linear equations Ax = b, (1) where A is a coefficient matrix of order n and x and b are n-vectors, are usually grouped into two distinct classes: direct methods and iterative methods. However, CCLRC Rutherford Appleton Laboratory, Oxfordshire, England and CERFACS, Toulouse,...
Parallel iterative solution methods for linear systems arising from discretized PDE's
 Lecture Notes on Parallel Iterative Methods for discretized PDE's. AGARD Special Course on Parallel Computing in CFD, available from http://www.math.ruu.nl/people/vorst/#lec
, 1995
Abstract

Cited by 3 (0 self)
In these notes we will present an overview of a number of related iterative methods for the solution of linear systems of equations. These methods are so-called Krylov projection type methods, and they include popular methods such as Conjugate Gradients, Bi-Conjugate Gradients, CGS, Bi-CGSTAB, QMR, LSQR and GMRES. We will show how these methods can be derived from simple basic iteration formulas. We will not give convergence proofs, but will refer to the literature for these, as far as available. Iterative methods are often used in combination with so-called preconditioning operators (approximations for the inverses of the operator of the system to be solved). Since these preconditioners are not essential in the derivation of the iterative methods, we will not give much attention to them in these notes. However, in most of the actual iteration schemes we have included them, in order to facilitate the use of these schemes in actual computations. For the application of the iterative schemes one usually thinks of sparse linear systems, e.g., like those arising in finite element or finite difference approximations of (systems of) partial differential equations. However, the structure of the operators plays no explicit role in any of these schemes, and these schemes might also successfully be used to solve certain large dense linear systems. Depending on the situation, that might be attractive in terms of numbers of floating point operations. It will turn out that all of these iterative methods are parallelizable in a straightforward manner. However, especially for computers with a memory hierarchy (i.e., with cache or vector registers), and for distributed memory computers, the performance can often be improved significantly through rescheduling of the operations. We will discuss parallel implementations, and occasionally we will report on experimental findings.
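The way a preconditioning operator slots into such an iteration scheme is one extra apply per step. A minimal preconditioned Conjugate Gradients sketch (illustrative only; the preconditioner is passed as a generic apply-the-approximate-inverse callable, here named `M_inv`):

```python
import numpy as np

def pcg(A, b, M_inv, maxit=200, tol=1e-10):
    """Preconditioned Conjugate Gradients for symmetric positive definite A.
    M_inv applies an approximation to the inverse of A, exactly the kind of
    preconditioning operator the notes describe; M_inv = lambda r: r
    recovers plain CG."""
    x = np.zeros_like(b)
    r = b - A @ x                # residual
    z = M_inv(r)                 # preconditioned residual
    p = z.copy()                 # search direction
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)             # one preconditioner apply per iteration
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

For example, diagonal (Jacobi) scaling is `M_inv = lambda r: r / np.diag(A)`, which is trivially parallel, while stronger preconditioners trade parallelism for faster convergence.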
Parallel Krylov Methods for Econometric Model Simulation
 Computational Economics
, 2000
Abstract

Cited by 2 (1 self)
This paper investigates parallel solution methods to simulate large-scale macroeconometric models with forward-looking variables. The method chosen is the Newton-Krylov algorithm. We concentrate on a parallel solution to the sparse linear system arising in the Newton algorithm, and we empirically analyze the scalability of the GMRES method, which belongs to the class of so-called Krylov subspace methods. The results obtained using an implementation of the PETSc 2.0 software library on an IBM SP2 show near-linear scalability for the problem tested. Keywords: parallel computing, Newton-Krylov methods, sparse matrices, forward-looking models, GMRES, scalability. JEL Classification: C63, C88, C30. 1 Introduction: There are many engineering problems for which parallel computing has proven efficient. Economic problems are, however, often quite different in both structure and quantification. This is particularly true for systems of equations representing large economic models, wh...
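The Newton-Krylov structure described here is an outer Newton loop whose linear correction is computed by an inner Krylov solve. A toy sketch under stated assumptions (the `krylov_solve` helper, the function names, and the 2-by-2 test system are all illustrative stand-ins for the paper's PETSc GMRES on a large sparse Jacobian):

```python
import numpy as np

def krylov_solve(A, b, m):
    """Residual-minimizing Krylov projection (GMRES-like sketch):
    minimize ||A x - b|| over span{b, Ab, ..., A^(m-1) b}."""
    K = [b / np.linalg.norm(b)]
    for _ in range(m - 1):
        v = A @ K[-1]
        K.append(v / np.linalg.norm(v))   # scaled power basis, no orthogonalization
    K = np.stack(K, axis=1)
    y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
    return K @ y

def newton_krylov(F, J, x0, maxit=30, tol=1e-10):
    """Outer Newton loop: each step solves J(x) dx = -F(x) with the
    inner Krylov projection above, then updates x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        x = x + krylov_solve(J(x), -f, m=len(x))
    return x

# Toy nonlinear system (illustrative, not from the paper):
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 3.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
```

In the paper's setting the attraction is that the inner solve needs only Jacobian-vector products and inner products, both of which distribute well across processors.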
Linear System Solvers: Sparse Iterative Methods
 PARALLEL NUMERICAL ALGORITHMS, ICASE/LARC INTERDISCIPLINARY SERIES IN SCIENCE AND ENGINEERING
, 1997
Abstract

Cited by 2 (0 self)
In this chapter we will present an overview of a number of related iterative methods for the solution of linear systems of equations. These methods are so-called Krylov projection type methods, and they include popular methods such as Conjugate Gradients, Bi-Conjugate Gradients, LSQR and GMRES. We will sketch how these methods can be derived from simple basic iteration formulas, and how they are interrelated. Iterative schemes are usually considered as an alternative for the solution of sparse linear systems, like those arising in, e.g., finite element or finite difference approximation of (systems of) partial differential equations. The structure of the operators plays no explicit role in any of these schemes, and the operator may even be given as a rule or a subroutine. Although these methods seem to be almost trivially parallelizable at first glance, this is sometimes a point of concern because of the inner products involved. We will consider this point in some detail. Iterative methods ...
Lecture Notes on Iterative Methods
, 1994
Abstract

Cited by 2 (0 self)
Introduction: In these notes we will present an overview of a number of related iterative methods for the solution of linear systems of equations. These methods are so-called Krylov projection type methods, and they include popular methods such as Conjugate Gradients, Bi-Conjugate Gradients, LSQR and GMRES. We will show how these methods can be derived from simple basic iteration formulas. We will not give convergence proofs, but will refer to the literature for these, as far as available. Iterative methods are often used in combination with so-called preconditioning operators (approximations for the inverses of the operator of the system to be solved). Since these preconditioners are not essential in the derivation of these iterative methods, we will not discuss them explicitly in these notes. However, in most of the actual iteration schemes we have included them, in order to facilitate the use of these schemes in actual computations. For the application of the iterative schemes
Implementation Aspects
Abstract
The inner products, vector updates and matrix-vector product are easily parallelized and vectorized. The more successful preconditionings, i.e., those based upon incomplete LU decomposition, are not easily parallelizable. For that reason, one is often satisfied with the use of only diagonal scaling as a preconditioner on highly parallel computers, such as the CM-2 [24]. On distributed memory computers we need large-grained parallelism in order to reduce synchronization overhead. This can be achieved by combining the work required for a successive number of iteration steps. The idea is first to construct in parallel a straightforward Krylov basis for the search subspace in which an update for the current solution will be determined. Once this basis has been computed, the vectors are orthogonalized, as is done in Krylov subspace methods. The construction as well as the orthogonalization can be done with large-grained parallelism, and has a sufficient degree of parallelism in it. This approach has be...
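The two-phase scheme described above (build the Krylov basis first, orthogonalize afterwards as one block) can be sketched as follows; `s_step_update` is a hypothetical name, and a serial QR stands in for the distributed block orthogonalization:

```python
import numpy as np

def s_step_update(A, r, s):
    """Sketch of an s-step Krylov update for the residual r.
    Phase 1 computes the basis [r, Ar, ..., A^(s-1) r] as s back-to-back
    matrix-vector products with no intervening inner products, which is
    the large-grained, low-synchronization part; phase 2 orthogonalizes
    the whole basis at once and picks the residual-minimizing update.
    (No scaling of the power basis, so keep s small in this sketch.)"""
    V = np.empty((len(r), s))
    V[:, 0] = r
    for j in range(1, s):
        V[:, j] = A @ V[:, j - 1]   # basis construction: no synchronization
    Q, _ = np.linalg.qr(V)          # orthogonalize as one block afterwards
    y, *_ = np.linalg.lstsq(A @ Q, r, rcond=None)
    return Q @ y                    # update dx minimizing ||A dx - r||
```

Deferring all inner products to one block step is what buys the reduced synchronization overhead, at the cost of a potentially less well-conditioned basis.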
CONJUGATE GRADIENT (CG)-TYPE METHOD FOR THE SOLUTION OF NEWTON’S EQUATION WITHIN OPTIMIZATION FRAMEWORKS
, 2004
Abstract
A conjugate gradient (CG)-type algorithm, CG Plan, is introduced for calculating an approximate solution of Newton’s equation within large-scale optimization frameworks. The approximate solution must satisfy suitable properties to ensure global convergence. In practice, the CG algorithm is widely used, but it is not suitable when the Hessian matrix is indefinite, as it can stop prematurely. CG Plan is a symmetric variant of the composite step Bi-CG method of Bank and Chan, suitably adapted for optimization problems. It is an alternative to CG that copes with the indefinite case. We show convergence for CG Plan, then prove that the practical implementation always provides a gradient-related direction within a truncated Newton method (algorithm TN Plan). Some preliminary numerical results support the theory.
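The premature-stop issue with plain CG on an indefinite Hessian can be seen in the classical truncated-Newton safeguard, where CG is halted as soon as negative curvature appears (this is the standard safeguard the abstract contrasts against, not the CG Plan algorithm itself; the function name is illustrative):

```python
import numpy as np

def truncated_newton_cg(H, g, maxit=50, tol=1e-8):
    """CG applied to the Newton equation H d = -g. If a direction of
    negative curvature is encountered (possible when H is indefinite),
    stop and return the progress so far, falling back to steepest
    descent if no CG step was completed, so the result is always a
    descent direction."""
    d = np.zeros_like(g)
    r = -g.astype(float)         # residual of H d = -g at d = 0
    p = r.copy()
    rr = r @ r
    for _ in range(maxit):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0:            # negative curvature: plain CG would misbehave
            return -g if np.allclose(d, 0) else d
        alpha = rr / curv
        d += alpha * p
        r -= alpha * Hp
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return d
```

CG Plan instead follows the composite-step idea to continue through indefiniteness rather than truncating, while still guaranteeing a gradient-related direction.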