Results 11-20 of 47
New insights in GMRES-like methods with variable preconditioners
, 1993
"... In this paper we compare two recently proposed methods, FGMRES [5] and GMRESR [7], for the iterative solution of sparse linear systems with an unsymmetric nonsingular matrix. Both methods compute minimal residual approximations using preconditioners, which may be different from step to step. The ins ..."
Abstract

Cited by 20 (4 self)
In this paper we compare two recently proposed methods, FGMRES [5] and GMRESR [7], for the iterative solution of sparse linear systems with an unsymmetric nonsingular matrix. Both methods compute minimal residual approximations using preconditioners, which may be different from step to step. The insights resulting from this comparison lead to better variants of both methods. Keywords: FGMRES, GMRESR, nonsymmetric linear systems, iterative solver. AMS(MOS) subject classification: 65F10. 1 Introduction. Recently two new iterative methods, FGMRES [5] and GMRESR [7], have been proposed to solve sparse linear systems with an unsymmetric and nonsingular matrix. Both methods are based on the same idea: the use of a preconditioner that may be different in every iteration. However, the resulting algorithms lead to somewhat different results. In [5] the GMRES method is given for a fixed preconditioner. Thereafter, it is shown that a slightly adapted algorithm, FGMRES, can be used in combination...
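The key mechanical difference from standard GMRES is that a step-varying preconditioner forces the preconditioned vectors Z to be stored explicitly. The following is a minimal sketch (not the authors' implementation); `precondition(v, j)` is a hypothetical callback standing in for an arbitrary, possibly different preconditioning operation at every step j:

```python
import numpy as np

def fgmres(A, b, precondition, m=40, tol=1e-10):
    """FGMRES sketch: GMRES with a preconditioner that may change
    at every step j, so the vectors Z[:, j] = M_j^{-1} v_j are kept."""
    n = len(b)
    x0 = np.zeros(n)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V = np.zeros((n, m + 1))
    Z = np.zeros((n, m))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / beta
    for j in range(m):
        Z[:, j] = precondition(V[:, j], j)    # step-dependent preconditioner
        w = A @ Z[:, j]
        for i in range(j + 1):                # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        # minimize || beta*e1 - H y || over the small Hessenberg system
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        res = np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y)
        if H[j + 1, j] < 1e-14 or res < tol * beta:
            return x0 + Z[:, :j + 1] @ y      # note: update uses Z, not V
        V[:, j + 1] = w / H[j + 1, j]
    return x0 + Z @ y
```

The update `x = x0 + Z y` (rather than `x0 + M^{-1} V y`) is exactly what makes the method tolerate a different preconditioner per step.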
Convergence Properties of Block GMRES and Matrix Polynomials
, 1994
"... This paper studies convergence properties of the block gmres algorithm when applied to nonsymmetric systems with multiple righthand sides. A convergence theory is developed based on a representation of the method using matrixvalued polynomials. Relations between the roots of the residual polynomia ..."
Abstract

Cited by 19 (0 self)
This paper studies convergence properties of the block GMRES algorithm when applied to nonsymmetric systems with multiple right-hand sides. A convergence theory is developed based on a representation of the method using matrix-valued polynomials. Relations between the roots of the residual polynomial for block GMRES and the matrix pseudospectrum are derived, and illustrated with numerical experiments. The role of invariant subspaces in the effectiveness of block methods is also discussed. 1. INTRODUCTION AND SUMMARY. Block iterative methods have been proposed as an attractive approach for handling eigenvalue problems and linear systems [10, 21, 39]. They promise favorable convergence properties and effective exploitation of parallel computer architectures [21, 22]. Block methods are natural candidates... Work performed at the Center for Supercomputing Research and Development, University of Illinois at Urbana-Champaign, with support from fellowship #2043597 from the Consiglio Naziona...
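As a hedged sketch of the representation the abstract refers to: for a block system AX = B with s right-hand sides and block residual R_k = B - AX_k, block GMRES minimizes the Frobenius norm of R_k over the block Krylov space generated by R_0, and the residual can then be written with matrix-valued (s-by-s) polynomial coefficients, normalized at the origin:

```latex
R_k \;=\; B - A X_k \;=\; \sum_{i=0}^{k} A^i R_0\,\gamma_i ,
\qquad \gamma_i \in \mathbb{C}^{s\times s},\quad \gamma_0 = I_s ,
\qquad \|R_k\|_F \to \min .
```

The scalar GMRES residual polynomial (with p(0) = 1) is recovered for s = 1.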
Differences in the effects of rounding errors in Krylov solvers for symmetric indefinite linear systems
, 1999
"... The 3term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving the reduced system in one way or another. Thi ..."
Abstract

Cited by 15 (0 self)
The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving the reduced system in one way or another. This leads to well-known methods: MINRES, GMRES, and SYMMLQ. We will discuss in what way and to what extent these approaches differ in their sensitivity to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods, and we will not consider the errors in the Lanczos process itself. We will show that the method of solution may lead, under certain circumstances, to large additional errors that are not corrected by continuing the iteration process. Our findings are supported and illustrated by numerical examples. 1 Introduction. We will consider iterative methods for the construction of approximate solutions, starting with...
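The 3-term process the abstract builds on can be sketched as follows; this is a textbook formulation (no reorthogonalization, so the orthogonality loss the paper analyzes is visible here too), producing the basis V and the tridiagonal coefficients that MINRES/SYMMLQ-type solvers consume:

```python
import numpy as np

def lanczos(A, v0, k):
    """Three-term Lanczos sketch for symmetric A: returns V (n x (k+1))
    and the coefficients alpha (diagonal), beta (off-diagonal) of the
    tridiagonal reduced matrix T_k = V_k^T A V_k."""
    n = len(v0)
    V = np.zeros((n, k + 1))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]   # three-term recurrence
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-14:                   # invariant subspace found
            break
        V[:, j + 1] = w / beta[j]
    return V, alpha, beta
```

The recurrence identity A V_k = V_k T_k + beta_k v_{k+1} e_k^T holds to machine precision even when orthogonality of V degrades, which is the starting point of the rounding-error comparison above.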
Error Analysis Of Krylov Methods In A Nutshell
 SIAM J. Sci. Comput
, 1998
"... . Error and residual bounds for the matrix iteration methods BiCG, QMR, FOM, and GMRES are derived in a simple and unified way. Key words. Krylov subspace methods, conjugate gradient type methods, Arnoldi method, Lanczos method, error bounds AMS(MOS) subject classifications. 65F10 1. Introduction. ..."
Abstract

Cited by 13 (0 self)
Error and residual bounds for the matrix iteration methods BiCG, QMR, FOM, and GMRES are derived in a simple and unified way. Key words: Krylov subspace methods, conjugate gradient type methods, Arnoldi method, Lanczos method, error bounds. AMS(MOS) subject classifications: 65F10. 1. Introduction. In this short note we present error bounds for iterative methods for the solution of linear systems Ax = b, where A is a nonsingular complex matrix of (large) dimension N, and b ∈ C^N is a given vector. To simplify the presentation we assume throughout that b is of unit length, ||b|| = 1 in the Euclidean norm. In the literature, some residual and error bounds for the most common Krylov subspace methods have been obtained previously [2, 3, 4, 6, 9, 13, 14, 17, 18, 19, 22]. The derivations of these results appear rather unrelated to each other, and the existing results make it difficult to compare the approximation properties of the different methods. In the present note we derive in a un...
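A classical example of the kind of bound being unified (stated here as a sketch, under the abstract's assumptions x_0 = 0 and ||b|| = 1, with A diagonalizable as A = XΛX^{-1}):

```latex
\|r_k^{\mathrm{GMRES}}\|
\;=\; \min_{\substack{p \in \Pi_k \\ p(0)=1}} \|p(A)\, b\|
\;\le\; \kappa(X) \, \min_{\substack{p \in \Pi_k \\ p(0)=1}} \;\max_{\lambda \in \Lambda(A)} |p(\lambda)| ,
```

where Π_k is the set of polynomials of degree at most k and κ(X) is the condition number of the eigenvector matrix.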
Relaxation strategies for nested Krylov methods
 Journal of Computational and Applied Mathematics
, 2003
"... There are classes of linear problems for which the matrixvector product is a time consuming operation because an expensive approximation method is required to compute it to a given accuracy. In recent years di#erent authors have investigated the use of, what is called, relaxation strategies for ..."
Abstract

Cited by 11 (0 self)
There are classes of linear problems for which the matrix-vector product is a time-consuming operation, because an expensive approximation method is required to compute it to a given accuracy. In recent years different authors have investigated the use of what are called relaxation strategies for various Krylov subspace methods. These relaxation strategies aim to minimize the amount of work spent in the computation of the matrix-vector product without compromising the accuracy of the method or the convergence speed too much. In order to achieve this goal, the accuracy of the matrix-vector product is decreased as the iterative process comes closer to the solution. In this paper we show that a further significant reduction in computing time can be obtained by combining a relaxation strategy with the nesting of inexact Krylov methods. Flexible Krylov subspace methods allow variable preconditioning and therefore can be used in the outermost loop of our overall method. For several flexible Krylov methods we analyze strategies for controlling the accuracy of both the inexact matrix-vector products and the inner iterations. The results of our analysis are illustrated with an example that models global ocean circulation.
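A common form of such a relaxation rule (a sketch of the general idea, not necessarily the control strategy of this paper) lets the per-step matrix-vector tolerance grow inversely with the current residual norm, capped by a maximum tolerance:

```python
def matvec_tolerance(outer_tol, residual_norm, eta_max=1.0):
    """Relaxation rule sketch: the inexact product at the current step
    may be computed to tolerance ~ outer_tol / ||r_j||, so accuracy is
    relaxed (tolerance increases) as the residual norm decreases."""
    return min(eta_max, outer_tol / residual_norm)
```

For a target accuracy of 1e-8, a step with residual norm 1e-2 only needs the product to about 1e-6, and near convergence the product can be computed very crudely, which is where the computing-time savings come from.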
Treatment Of Near-Breakdown In The CGS Algorithm
 Numer. Algorithms
, 1994
"... Lanczos method for solving the system of linear equations Ax = b consists in constructing a sequence of vectors (x k ) such that r k = b \Gamma Ax k = P k (A)r 0 where r 0 = b \Gamma Ax 0 . P k is an orthogonal polynomial which is computed recursively. The conjugate gradient squared algorithm (CGS) ..."
Abstract

Cited by 9 (6 self)
Lanczos' method for solving the system of linear equations Ax = b consists in constructing a sequence of vectors (x_k) such that r_k = b - Ax_k = P_k(A)r_0, where r_0 = b - Ax_0. P_k is an orthogonal polynomial which is computed recursively. The conjugate gradient squared algorithm (CGS) consists in taking r_k = P_k^2(A)r_0. In the recurrence relation for P_k, the coefficients are given as ratios of scalar products. When a scalar product in a denominator is zero, a breakdown occurs in the algorithm. When such a scalar product is close to zero, rounding errors can seriously affect the algorithm, a situation known as near-breakdown. In this paper it is shown how to avoid near-breakdown in the CGS algorithm in order to obtain a more stable method.
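The denominators in question are visible in a plain CGS implementation; the sketch below detects a near-breakdown and raises (the paper's remedy of jumping over the offending polynomials is not implemented here):

```python
import numpy as np

def cgs(A, b, tol=1e-10, maxiter=200, eps=1e-14):
    """Plain CGS sketch with explicit near-breakdown checks: the
    recurrence coefficients alpha and beta are ratios of scalar
    products, and a denominator close to zero is a near-breakdown."""
    x = np.zeros_like(b)
    r = b - A @ x
    r_star = r.copy()        # shadow vector defining the scalar products
    u = r.copy()
    p = r.copy()
    rho = r_star @ r
    for _ in range(maxiter):
        Ap = A @ p
        sigma = r_star @ Ap
        if abs(sigma) < eps:
            raise ArithmeticError("near-breakdown: (A p, r*) ~ 0")
        alpha = rho / sigma
        q = u - alpha * Ap
        x += alpha * (u + q)
        r -= alpha * (A @ (u + q))
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x
        rho_new = r_star @ r
        if abs(rho_new) < eps:
            raise ArithmeticError("near-breakdown: (r, r*) ~ 0")
        beta = rho_new / rho
        rho = rho_new
        u = r + beta * q
        p = u + beta * (q + beta * p)
    return x
```

On a well-conditioned system the ratios stay safely away from zero; the stabilized variants of the paper matter precisely when they do not.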
Iterative Solution of Linear Systems in the 20th Century
 JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS
, 2000
"... This paper sketches the main research developments in the area of iterative methods for solving linear systems during the 20th century. Although iterative methods for solving linear systems find their origin in the early nineteenth century (work by Gauss),the field has seen an explosion of activity ..."
Abstract

Cited by 9 (0 self)
This paper sketches the main research developments in the area of iterative methods for solving linear systems during the 20th century. Although iterative methods for solving linear systems find their origin in the early nineteenth century (work by Gauss), the field has seen an explosion of activity spurred by demand due to extraordinary technological advances in engineering and sciences. The past five decades have been particularly rich in new developments, ending with the availability of a large toolbox of specialized algorithms for solving the very large problems which arise in scientific and industrial computational models. As in any other scientific area, research in iterative methods has been a journey characterized by a chain of contributions building on each other. It is the aim of this paper not only to sketch the most significant of these contributions during the past century, but also to relate them to one another.
New look-ahead Lanczos-type algorithms for linear systems
, 1997
"... A breakdown (due to a division by zero) can arise in the algorithms for implementing Lanczos' method because of the nonexistence of some formal orthogonal polynomials or because the recurrence relationship used is not appropriate. Such a breakdown can be avoided by jumping over the polynomials inv ..."
Abstract

Cited by 8 (6 self)
A breakdown (due to a division by zero) can arise in the algorithms for implementing Lanczos' method because of the nonexistence of some formal orthogonal polynomials or because the recurrence relationship used is not appropriate. Such a breakdown can be avoided by jumping over the polynomials involved. This strategy was already used in some algorithms such as the MRZ and its variants. In this paper, we propose new implementations of the recurrence relations of these algorithms which only need the storage of a fixed number of vectors, independent of the length of the jump. These new algorithms are based on Horner's rule and on a different way of computing the coefficients of the recurrence relationships. Moreover, the new algorithms seem to be more stable than the old ones and provide better numerical results. Numerical examples and comparisons with other algorithms are given.
By How Much Can Residual Minimization Accelerate The Convergence Of Orthogonal Residual Methods?
, 2001
"... We capitalize upon the known relationship between pairs of orthogonal and minimal residual methods (or, biorthogonal and quasiminimal residual methods) in order to estimate how much smaller the residuals or quasiresiduals of the minimizing methods can be compared to the those of the corresponding ..."
Abstract

Cited by 7 (6 self)
We capitalize upon the known relationship between pairs of orthogonal residual and minimal residual methods (or biorthogonal and quasi-minimal residual methods) in order to estimate how much smaller the residuals or quasi-residuals of the minimizing methods can be compared with those of the corresponding Galerkin or Petrov-Galerkin method. Examples of such pairs are the conjugate gradient (CG) and conjugate residual (CR) methods, the full orthogonalization method (FOM) and the generalized minimal residual (GMRes) method, the CGNE and CGNR versions of applying CG to the normal equations, as well as the biconjugate gradient (BiCG) and quasi-minimal residual (QMR) methods. The pair consisting of the conjugate gradient squared (CGS) and transpose-free QMR (TFQMR) methods can also be added to this list if the residuals at half-steps are included, and further examples can be created easily. The analysis is more generally applicable to the minimal residual (MR) and quasi-minimal residual (QMR) smoothing processes, which are known to provide the transition from the results of the first method of such a pair to those of the second one. By an interpretation of these smoothing processes in coordinate space we deepen the understanding of some of the underlying relationships and introduce a unifying framework for minimal residual and quasi-minimal residual smoothing. This framework includes the general notion of QMR-type methods.
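The best-known instance of the relationship being exploited, stated here as a sketch for the FOM/GMRES pair (the classical "peak/plateau" identity relating orthogonal residual and minimal residual norms at step k):

```latex
\|r_k^{\mathrm{OR}}\|
\;=\;
\frac{\|r_k^{\mathrm{MR}}\|}
     {\sqrt{\,1 - \bigl(\|r_k^{\mathrm{MR}}\| / \|r_{k-1}^{\mathrm{MR}}\|\bigr)^2\,}} ,
```

so the orthogonal residual method does well exactly when the minimal residual norm drops noticeably, and its residual spikes when the minimal residual method stagnates.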
Look-Ahead In BiCGSTAB And Other Product Methods For Linear Systems
, 1995
"... The Lanczos method for solving Ax = b consists in constructing the sequence of vectors x k such that r k = b \Gamma Ax k = P k (A)r 0 where P k is the orthogonal polynomial of degree at most k with respect to the linear functional c whose moments are c(¸ i ) = c i = (y ..."
Abstract

Cited by 6 (4 self)
The Lanczos method for solving Ax = b consists in constructing the sequence of vectors x_k such that r_k = b - Ax_k = P_k(A)r_0, where P_k is the orthogonal polynomial of degree at most k with respect to the linear functional c whose moments are c(ξ^i) = c_i = (y ...