Results 1-10 of 33
Truncation Strategies For Optimal Krylov Subspace Methods
SIAM J. Numer. Anal., 1999
Cited by 45 (6 self)
Abstract: Optimal Krylov subspace methods like GMRES and GCR have to compute an orthogonal basis for the entire Krylov subspace to compute the minimal residual approximation to the solution. Therefore, when the number of iterations becomes large, the amount of work and the storage requirements become excessive. In practice one has to limit the resources. The most obvious ways to do this are to restart GMRES after some number of iterations and to keep only some number of the most recent vectors in GCR. This may lead to very poor convergence and even stagnation. Therefore, we describe a method that reveals which subspaces of the Krylov space were important for convergence thus far, and exactly how important they are. This information is then used to select which subspace to keep for orthogonalizing future search directions. Numerical results indicate this to be a very effective strategy.
Key words: GMRES, GCR, restart, truncation, Krylov subspace methods, iterative methods, non-Hermitian linear systems
AMS subject classifications: Primary, 65F10; Secondary, 15A18, 65N22
PII: S0036142997315950
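For readers who want the baseline this paper improves on, restarted GMRES(m) can be sketched in a few lines. This is our own illustrative NumPy implementation (function name, defaults, and the breakdown tolerance are our choices), not the truncation strategy the paper proposes:

```python
import numpy as np

def gmres_restarted(A, b, m=20, max_cycles=50, tol=1e-8):
    """Minimal restarted GMRES(m): one Arnoldi cycle of length m,
    then a small least-squares solve, then restart from the new residual.
    Illustrative sketch only."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_cycles):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol * np.linalg.norm(b):
            return x
        # Arnoldi: build an orthonormal basis V for the Krylov subspace
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = r / beta
        k = m
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):          # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:         # happy breakdown
                k = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        # minimal-residual correction: min_y || beta*e1 - H y ||
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + V[:, :k] @ y
    return x
```

Taking m at least the problem dimension recovers full GMRES; the paper's point is that simply discarding the whole basis at each restart can stagnate, which motivates keeping selected subspaces instead.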
Restarted GMRES preconditioned by deflation
Journal of Computational and Applied Mathematics, 1995
Cited by 44 (6 self)
Abstract: This paper presents a new preconditioning technique for the restarted GMRES algorithm. It is based on an invariant subspace approximation which is updated at each cycle. Numerical examples show that this deflation technique gives a more robust scheme than the restarted algorithm, at a low cost in operations and memory.
Keywords: GMRES, preconditioning, invariant subspace, deflation
Subject Classification: 65F10, 65F15
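As a rough illustration of the invariant-subspace deflation idea (the exact operator in the paper may differ; the formula M^{-1} = I + U(|lambda_n| T^{-1} - I)U^T with T = U^T A U used below is an assumption modeled on this family of preconditioners), given an orthonormal basis U for an approximate invariant subspace:

```python
import numpy as np

def deflation_preconditioner(A, U, lam_abs_max):
    """Sketch of an invariant-subspace deflation preconditioner
    (assumed form): M^{-1} = I + U(|lambda_n| T^{-1} - I)U^T,
    T = U^T A U, which moves the deflated eigenvalues near |lambda_n|."""
    T = U.T @ A @ U
    def apply(r):
        c = U.T @ r
        return r + U @ (lam_abs_max * np.linalg.solve(T, c) - c)
    return apply
```

On the subspace spanned by U, the preconditioned operator maps the small eigenvalues to magnitude |lambda_n|, which is the effect deflation aims for.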
GMRES with deflated restarting
SIAM J. Sci. Comput.
Cited by 38 (7 self)
Abstract: A modification is given of the GMRES iterative method for nonsymmetric systems of linear equations. The new method deflates eigenvalues using Wu and Simon's thick restarting approach. It has the efficiency of implicit restarting, but is simpler and does not have the same numerical concerns. The deflation of small eigenvalues can greatly improve the convergence of restarted GMRES. Also, it is demonstrated that using harmonic Ritz vectors is important, because then the whole subspace is a Krylov subspace that contains certain important smaller subspaces.
On the performance of various adaptive preconditioned GMRES strategies
1997
Cited by 22 (1 self)
Abstract: This paper compares the performance on linear systems of equations of three similar adaptive accelerating strategies for restarted GMRES. The underlying idea is to adaptively use spectral information gathered from the Arnoldi process. The first strategy retains approximations to some eigenvectors from the previous restart and adds them to the Krylov subspace. The second strategy also uses approximate eigenvectors, to define a preconditioner at each restart. This paper designs a third, new strategy which combines elements of both previous approaches. Numerical results show that this new method is both more efficient and more robust.
On the occurrence of superlinear convergence of exact and inexact Krylov subspace methods
SIAM Rev., 2005
Cited by 20 (7 self)
Abstract: We present a general analytical model which describes the superlinear convergence of Krylov subspace methods. We take an invariant subspace approach, so that our results apply also to inexact methods and to nondiagonalizable matrices. Thus, we provide a unified treatment of the superlinear convergence of GMRES, Conjugate Gradients, block versions of these, and inexact subspace methods. Numerical experiments illustrate the bounds obtained.
New insights in GMRES-like methods with variable preconditioners
1993
Cited by 20 (4 self)
Abstract: In this paper we compare two recently proposed methods, FGMRES [5] and GMRESR [7], for the iterative solution of sparse linear systems with an unsymmetric nonsingular matrix. Both methods compute minimal residual approximations using preconditioners, which may differ from step to step. The insights resulting from this comparison lead to better variants of both methods.
Keywords: FGMRES, GMRESR, nonsymmetric linear systems, iterative solver
AMS(MOS) subject classification: 65F10
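The mechanical difference between GMRES with a fixed preconditioner and FGMRES is small but important: when the preconditioner may change every iteration, the preconditioned vectors must be stored explicitly alongside the Arnoldi basis. A single-cycle sketch (our own naming and defaults, not the notation of [5] or [7]):

```python
import numpy as np

def fgmres(A, b, precond, m=30):
    """Flexible GMRES, single cycle: precond(v, j) may be a different
    operator at every step j, so the preconditioned vectors Z[:, j]
    are stored and the solution is formed from Z, not from V.
    Illustrative sketch only."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    beta = np.linalg.norm(r)
    V = np.zeros((n, m + 1))
    Z = np.zeros((n, m))
    H = np.zeros((m + 1, m))
    V[:, 0] = r / beta
    k = m
    for j in range(m):
        Z[:, j] = precond(V[:, j], j)   # preconditioner may depend on j
        w = A @ Z[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    return x + Z[:, :k] @ y
```

With a fixed preconditioner the extra storage Z is redundant; it is exactly when the preconditioner varies (e.g. an inner iterative solve, as in GMRESR) that this formulation is needed.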
Differences in the effects of rounding errors in Krylov solvers for symmetric indefinite linear systems
1999
Cited by 15 (0 self)
Abstract: The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving the reduced system in one way or another. This leads to well-known methods: MINRES, GMRES, and SYMMLQ. We discuss in what way and to what extent these approaches differ in their sensitivity to rounding errors. In our analysis we assume that the Lanczos basis is generated in exactly the same way for the different methods, and we do not consider the errors in the Lanczos process itself. We show that the method of solution may lead, under certain circumstances, to large additional errors that are not corrected by continuing the iteration process. Our findings are supported and illustrated by numerical examples.
SOR as a Preconditioner
Appl. Numer. Math., 1995
Cited by 14 (1 self)
Abstract: It is well-known (see, e.g., [2] and [4]) that the use of red/black or multicolor orderings to parallelize SSOR or ILU preconditioning may seriously degrade the rate of convergence of the conjugate gradient method, as compared with the natural ordering. The SOR iteration itself, however, does not suffer this degradation. Indeed, if the coefficient matrix is consistently ordered with property A, the asymptotic rates of convergence of the natural and red/black orderings are identical (Young [9]); moreover, in practice one quite often sees faster convergence in the red/black ordering than in the natural ordering. This suggests the possible use of SOR as a parallel preconditioner. It cannot be a preconditioner for the conjugate gradient method on symmetric positive definite systems, since the corresponding preconditioned matrix is not symmetric. But this restriction does not apply to nonsymmetric systems and conjugate-gradient-type methods such as GMRES.
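The suggestion can be sketched concretely: with the splitting A = L + D + U (strictly lower, diagonal, strictly upper), one SOR sweep corresponds to applying M^{-1} with M = D/omega + L, and this M^{-1} can be handed to a nonsymmetric solver such as GMRES. A dense illustrative version (our own helper; a triangular solve would replace the general solve in practice):

```python
import numpy as np

def sor_preconditioner(A, omega=1.0):
    """Return a function applying M^{-1} r for the SOR splitting
    matrix M = D/omega + L (L = strictly lower part of A).
    Dense sketch for clarity; M is lower triangular, so in practice
    this is a forward substitution, which parallelizes well under
    red/black or multicolor orderings."""
    M = np.tril(A, -1) + np.diag(np.diag(A)) / omega
    def apply(r):
        return np.linalg.solve(M, r)
    return apply
```

The resulting preconditioned operator M^{-1}A is nonsymmetric even for symmetric positive definite A, which is exactly why the abstract restricts its use to GMRES-type methods rather than conjugate gradients.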
Recycling Subspace Information for Diffuse Optical Tomography
SIAM J. Sci. Comput., 2004
Cited by 13 (2 self)
Abstract: We discuss the efficient solution of a large sequence of slowly varying linear systems arising in computations for diffuse optical tomographic imaging. In particular, we analyze a number of strategies for recycling Krylov subspace information for the most efficient solution. We reconstruct three-dimensional...
Direct methods and ADI-preconditioned Krylov subspace methods for generalized Lyapunov equations
Numer. Lin. Alg. Appl.
Cited by 9 (1 self)