Results 1–10 of 20
Differences in the effects of rounding errors in Krylov solvers for symmetric indefinite linear systems
, 1999
"... The 3term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving the reduced system in one way or another. Thi ..."
Abstract

Cited by 15 (0 self)
 Add to MetaCart
The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving the reduced system in one way or another. This leads to well-known methods: MINRES, GMRES, and SYMMLQ. We will discuss in what way and to what extent these approaches differ in their sensitivity to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods, and we will not consider the errors in the Lanczos process itself. We will show that the method of solution may lead, under certain circumstances, to large additional errors that are not corrected by continuing the iteration process. Our findings are supported and illustrated by numerical examples. 1 Introduction We will consider iterative methods for the construction of approximate solutions, starting with...
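The 3-term recurrence at the heart of this abstract can be sketched in plain Python (illustrative helper names, not code from the paper; assumes a small dense symmetric matrix stored as nested lists):

```python
from math import sqrt

def matvec(A, v):
    # dense matrix-vector product for A stored as a list of rows
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lanczos(A, v1, k):
    """Run k steps of the symmetric Lanczos process from unit vector v1.

    Returns the orthonormal basis vectors V and the recurrence
    coefficients (alphas on the diagonal, betas off the diagonal).
    """
    n = len(v1)
    V = [list(v1)]
    alphas, betas = [], []
    v_prev, beta_prev = [0.0] * n, 0.0
    for j in range(k):
        w = matvec(A, V[j])
        w = [wi - beta_prev * vp for wi, vp in zip(w, v_prev)]  # 3-term part 1
        alpha = dot(w, V[j])
        w = [wi - alpha * vj for wi, vj in zip(w, V[j])]        # 3-term part 2
        beta = sqrt(dot(w, w))
        alphas.append(alpha)
        betas.append(beta)
        if beta == 0.0 or j == k - 1:
            break  # invariant subspace found, or requested dimension reached
        v_prev, beta_prev = V[j], beta
        V.append([wi / beta for wi in w])
    return V, alphas, betas

# tiny demo: a 3x3 symmetric tridiagonal matrix
A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
V, alphas, betas = lanczos(A, [1.0, 0.0, 0.0], 3)
```

Methods such as MINRES and SYMMLQ then differ only in how they solve the reduced tridiagonal system built from `alphas` and `betas`, which is exactly the "solution phase" whose rounding-error sensitivity the paper analyses.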
Residual Replacement Strategies for Krylov Subspace Iterative Methods for the Convergence of True Residuals
, 1999
"... In this paper, a strategy is proposed for alternative computations of the residual vectors in Krylov subspace methods, which improves the agreement of the computed residuals and the true residuals to the level of O(u)kAkkxk. Building on earlier ideas on residual replacement and on insights in the fi ..."
Abstract

Cited by 6 (0 self)
 Add to MetaCart
In this paper, a strategy is proposed for alternative computations of the residual vectors in Krylov subspace methods, which improves the agreement of the computed residuals and the true residuals to the level of O(u)‖A‖‖x‖. Building on earlier ideas on residual replacement and on insights into the finite precision behaviour of Krylov subspace methods, computable error bounds are derived for iterations that involve occasionally replacing the computed residuals by the true residuals. These bounds are used to monitor the deviation of the two residuals and hence to select residual replacement steps, so that the recurrence relations for the computed residuals, which control the convergence of the method, are perturbed only within safe bounds. Numerical examples are presented to demonstrate the effectiveness of this new residual replacement scheme. 1 Introduction Krylov subspace iterative methods for solving a large linear system Ax = b typically consist of iterations that recursively update appr...
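The replacement idea can be illustrated with a toy loop in which a cheap Richardson iteration stands in for a Krylov method (all names hypothetical; the paper's actual selection criterion uses computable error bounds, not the fixed period used here):

```python
def matvec(A, v):
    # dense matrix-vector product for A stored as a list of rows
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def richardson_with_replacement(A, b, omega, steps, replace_every):
    """Update x alongside a recursively computed residual r; every
    `replace_every` steps, overwrite r with the true residual b - A x."""
    x = [0.0] * len(b)
    r = list(b)                                    # residual for x = 0
    for k in range(1, steps + 1):
        x = [xi + omega * ri for xi, ri in zip(x, r)]
        Ar = matvec(A, r)
        r = [ri - omega * ari for ri, ari in zip(r, Ar)]        # recurrence
        if k % replace_every == 0:                 # residual replacement step
            r = [bi - yi for bi, yi in zip(b, matvec(A, x))]    # true residual
    return x, r

# demo: A = 2I, omega = 0.25, so the residual halves every step
x, r = richardson_with_replacement([[2.0, 0.0], [0.0, 2.0]],
                                   [1.0, 1.0], 0.25, 20, 5)
```

In exact arithmetic the recursive and true residuals coincide; in finite precision they drift apart, and the occasional replacement resets that drift without disturbing the recurrence more than the derived bounds allow.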
Developments and Trends in the Parallel Solution of Linear Systems
 Parallel Computing
, 1999
"... In this review paper, we consider some important developments and trends in algorithm design for the solution of linear systems concentrating on aspects that involve the exploitation of parallelism. We briefly discuss the solution of dense linear systems, before studying the solution of sparse equat ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
In this review paper, we consider some important developments and trends in algorithm design for the solution of linear systems, concentrating on aspects that involve the exploitation of parallelism. We briefly discuss the solution of dense linear systems, before studying the solution of sparse equations by direct and iterative methods. We consider preconditioning techniques for iterative solvers and discuss some of the present research issues in this field. Keywords: linear systems, dense matrices, sparse matrices, tridiagonal systems, parallelism, direct methods, iterative methods, Krylov methods, preconditioning. AMS(MOS) subject classifications: 65F05, 65F50. 1 Introduction Solution methods for systems of linear equations Ax = b, (1) where A is a coefficient matrix of order n and x and b are n-vectors, are usually grouped into two distinct classes: direct methods and iterative methods. However, CCLRC, Rutherford Appleton Laboratory, Oxfordshire, England, and CERFACS, Toulouse,...
BICGSTAB AS AN INDUCED DIMENSION REDUCTION METHOD
"... Abstract. The Induced Dimension Reduction method [12] was proposed in 1980 as an iterative method for solving large nonsymmetric linear systems of equations. IDR can be considered as the predecessor of methods like CGS (Conjugate Gradient Squared) [9]) and BiCGSTAB (BiConjugate Gradients STABilize ..."
Abstract

Cited by 5 (1 self)
 Add to MetaCart
Abstract. The Induced Dimension Reduction (IDR) method [12] was proposed in 1980 as an iterative method for solving large nonsymmetric linear systems of equations. IDR can be considered as the predecessor of methods like CGS (Conjugate Gradient Squared) [9] and BiCGSTAB (Bi-Conjugate Gradients STABilized) [11]. All three methods are based on efficient short recurrences. An important similarity between the methods is that they use orthogonalisations with respect to a fixed ‘shadow residual’. Of the three methods, BiCGSTAB has gained the most popularity, and is probably still the most widely used short recurrence method for solving nonsymmetric systems. Recently, Sonneveld and van Gijzen revived interest in IDR. In [10], they demonstrate that a higher-dimensional shadow space, defined by the n × s matrix R̃0, can easily be incorporated into IDR, yielding a highly effective method. The original IDR method is closely related to BiCGSTAB. It is therefore natural to ask whether BiCGSTAB can be extended in a way similar to IDR. To answer this question we explore the relation between IDR and BiCGSTAB and use our findings to derive a variant of BiCGSTAB that uses a higher-dimensional shadow space. Keywords: BiCGSTAB, BiCG, iterative linear solvers, Krylov subspace methods, IDR.
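For reference, the single shadow-vector BiCGSTAB recurrence that this paper generalises can be sketched as follows (a textbook-style sketch in plain Python, not the higher-dimensional variant derived in the paper; helper names are illustrative):

```python
from math import sqrt

def matvec(A, v):
    # dense matrix-vector product for A stored as a list of rows
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bicgstab(A, b, tol=1e-10, maxit=100):
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual for starting guess x = 0
    r_hat = list(r)                  # fixed 'shadow residual'
    rho = alpha = omega = 1.0
    p = [0.0] * n
    v = [0.0] * n
    for _ in range(maxit):
        rho_new = dot(r_hat, r)      # orthogonalisation against the shadow
        beta = (rho_new / rho) * (alpha / omega)
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(A, p)
        alpha = rho_new / dot(r_hat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]   # BiCG half-step
        if sqrt(dot(s, s)) < tol:    # half-step already converged
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            break
        t = matvec(A, s)
        omega = dot(t, s) / dot(t, t)                   # stabilisation step
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        if sqrt(dot(r, r)) < tol:
            break
        rho = rho_new
    return x

# demo on a small nonsymmetric system: solution is [1, 2]
x = bicgstab([[3.0, 1.0], [0.0, 2.0]], [5.0, 4.0])
```

The extension discussed in the abstract replaces the single vector `r_hat` by a block of s shadow vectors, which is where the n × s matrix R̃0 enters.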
Templates for Linear Algebra Problems
, 1995
"... The increasing availability of advancedarchitecture computers is having a very significant effect on all spheres of scientific computation, including algorithm research and software development in numerical linear algebra. Linear algebra  in particular, the solution of linear systems of equation ..."
Abstract

Cited by 5 (1 self)
 Add to MetaCart
The increasing availability of advanced-architecture computers is having a very significant effect on all spheres of scientific computation, including algorithm research and software development in numerical linear algebra. Linear algebra, in particular the solution of linear systems of equations and eigenvalue problems, lies at the heart of most calculations in scientific computing. This chapter discusses some of the recent developments in linear algebra designed to help the user on advanced-architecture computers. Much of the work in developing linear algebra software for advanced-architecture computers is motivated by the need to solve large problems on the fastest computers available. In this chapter, we focus on four basic issues: (1) the motivation for the work; (2) the development of standards for use in linear algebra and the building blocks for a library; (3) aspects of templates for the solution of large sparse systems of linear equations; and (4) templates for the solu...
Closer to the solution: Iterative linear solvers
 eds, ‘State of the Art in Numerical Analysis
, 1997
"... this paper, in particular Hermitian problems can be treated as real symmetric ones, by replacing A ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
this paper, in particular Hermitian problems can be treated as real symmetric ones, by replacing A
Parallel iterative solution methods for linear systems arising from discretized PDE's
 Lecture Notes on Parallel Iterative Methods for discretized PDE's. AGARD Special Course on Parallel Computing in CFD, available from http://www.math.ruu.nl/people/vorst/#lec
, 1995
"... In these notes we will present anoverview of a number of related iterative methods for the solution of linear systems of equations. These methods are socalled Krylov projection type methods and they include popular methods as Conjugate Gradients, BiConjugate Gradients, CGS, BiCGSTAB, QMR, LSQR an ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
In these notes we will present an overview of a number of related iterative methods for the solution of linear systems of equations. These methods are so-called Krylov projection type methods and they include popular methods such as Conjugate Gradients, Bi-Conjugate Gradients, CGS, BiCGSTAB, QMR, LSQR and GMRES. We will show how these methods can be derived from simple basic iteration formulas. We will not give convergence proofs, but we will refer for these, as far as available, to the literature. Iterative methods are often used in combination with so-called preconditioning operators (approximations for the inverses of the operator of the system to be solved). Since these preconditioners are not essential in the derivation of the iterative methods, we will not give much attention to them in these notes. However, in most of the actual iteration schemes, we have included them in order to facilitate the use of these schemes in actual computations. For the application of the iterative schemes one usually thinks of sparse linear systems, e.g., like those arising in the finite element or finite difference approximations of (systems of) partial differential equations. However, the structure of the operators plays no explicit role in any of these schemes, and these schemes might also successfully be used to solve certain large dense linear systems. Depending on the situation, that might be attractive in terms of numbers of floating point operations. It will turn out that all of the iterative methods are parallelizable in a straightforward manner. However, especially for computers with a memory hierarchy (i.e., with cache or vector registers), and for distributed memory computers, the performance can often be improved significantly through rescheduling of the operations. We will discuss parallel implementations, and occasionally we will report on experimental findings.
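As one concrete instance of the schemes surveyed, an unpreconditioned Conjugate Gradients loop in plain Python (a textbook sketch under the assumption that A is symmetric positive definite; helper names are illustrative, not taken from the notes):

```python
def matvec(A, v):
    # dense matrix-vector product for A stored as a list of rows
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg(A, b, tol=1e-10, maxit=100):
    """Conjugate Gradients for symmetric positive definite A."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                  # residual for starting guess x = 0
    p = list(b)                  # first search direction equals r
    rs = dot(r, r)
    for _ in range(maxit):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)                      # step length
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]  # new direction
        rs = rs_new
    return x

# demo: 2x2 SPD system with exact solution [1/11, 7/11]
x = cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

A preconditioner, as discussed in the notes, would be applied to `r` before the inner products and the direction update; the structure of the loop is otherwise unchanged.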
On Lanczos-type methods for Wilson fermions
"... . Numerical simulations of lattice gauge theories with fermions rely heavily on the iterative solution of huge sparse linear systems of equations. Due to short recurrences, which mean small memory requirement, Lanczostype methods (including suitable versions of the conjugate gradient method when ap ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Numerical simulations of lattice gauge theories with fermions rely heavily on the iterative solution of huge sparse linear systems of equations. Due to their short recurrences, which imply small memory requirements, Lanczos-type methods (including suitable versions of the conjugate gradient method when applicable) are best suited for this type of problem. The Wilson formulation of the lattice Dirac operator leads to a matrix with special symmetry properties that makes the application of the classical biconjugate gradient method (BiCG) particularly attractive, but other methods, for example BiCGStab and BiCGStab2, have also been widely used. We discuss some of the pros and cons of these methods. In particular, we review the specific simplification of BiCG, clarify some details, and discuss general results on the roundoff behavior. 1 The symmetry properties of the Wilson fermion matrix In the Wilson formulation of the lattice Dirac operator, where the Green's function of a single quark with bare mass ...
The Main Effects of Rounding Errors in Krylov Solvers for Symmetric Linear Systems
, 1997
"... The 3term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of linear systems, by solving the reduced system in one way or another. This leads to wellknown ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of linear systems, by solving the reduced system in one way or another. This leads to well-known methods: MINRES (GMRES), CG, CR, and SYMMLQ. We will discuss in what way and to what extent the various approaches are sensitive to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods (except CR), and we will not consider the errors in the Lanczos process itself. These errors may lead to large perturbations with respect to the exact process, but convergence still takes place. Our attention is focused on what happens in the solution phase. We will show that the method of solution may lead, under certain circumstances, to large additional errors that are not corrected by continuing the iteration process. Our findings are...