Results 1-10 of 16
Recycling Krylov Subspaces for Sequences of Linear Systems
SIAM J. Sci. Comput., 2004
Cited by 44 (3 self)
Abstract: Many problems in engineering and physics require the solution of a large sequence of linear systems. We can reduce the cost of solving subsequent systems in the sequence by recycling information from previous systems. We consider two different approaches. For several model problems, we demonstrate that we can reduce the iteration count required to solve a linear system by a factor of two. We consider both Hermitian and non-Hermitian problems, and present numerical experiments to illustrate the effects of subspace recycling.
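The paper recycles a subspace between solves; a much simpler form of information reuse, warm-starting each solve with the previous solution, can be sketched for a sequence of slowly varying right-hand sides. This is an illustration of the general idea, not the authors' method (the matrix, step size, and tolerances below are arbitrary):

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, maxiter=500):
    """Plain conjugate gradients for SPD A; returns (x, iterations used)."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(maxiter):
        if np.sqrt(rs) < tol:
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxiter

rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                 # SPD test matrix
b = rng.standard_normal(n)

x = np.zeros(n)
for step in range(5):
    b = b + 0.01 * rng.standard_normal(n)   # slowly varying right-hand side
    x, iters = cg(A, b, x0=x)               # warm start with previous solution
    print(step, iters, np.linalg.norm(b - A @ x))
```

When consecutive right-hand sides are close, the previous solution is already a good approximation, so later solves in the sequence need fewer iterations than a cold start from zero.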
Implicitly restarted GMRES and Arnoldi methods for nonsymmetric systems of equations
SIAM J. Matrix Anal. Appl.
Cited by 32 (7 self)
Abstract: The generalized minimum residual method (GMRES) is well known for solving large nonsymmetric systems of linear equations. It generally uses restarting, which slows the convergence. However, some information can be retained at the time of the restart and used in the next cycle. We present algorithms that use implicit restarting in order to retain this information. Approximate eigenvectors determined from the previous subspace are included in the new subspace. This deflates the smallest eigenvalues and thus improves the convergence. The subspace that contains the approximate eigenvectors is itself a Krylov subspace, but not with the usual starting vector. The implicitly restarted FOM algorithm includes standard Ritz vectors in the subspace. The eigenvalue portion of its calculations is equivalent to Sorensen's IRA algorithm. The implicitly restarted GMRES algorithm uses harmonic Ritz vectors. This algorithm also gives a new approach to computing interior eigenvalues. Key words: GMRES, implicit restarting, iterative methods, nonsymmetric systems, harmonic
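The building block that restarting (and the implicit-restart variants above) modify is a single GMRES sweep: an Arnoldi process plus a small least-squares solve. A textbook sketch of that sweep, not the authors' implementation, might look like:

```python
import numpy as np

def gmres(A, b, x0=None, m=50):
    """One full GMRES sweep of at most m Arnoldi steps (no restarting)."""
    n = len(b)
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:             # happy breakdown: exact solution reached
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[: m + 1, :m], e1, rcond=None)  # min ||beta*e1 - H y||
    return x0 + Q[:, :m] @ y

rng = np.random.default_rng(1)
n = 60
A = 4 * np.eye(n) + 0.2 * rng.standard_normal((n, n))        # nonsymmetric test matrix
b = rng.standard_normal(n)
x = gmres(A, b, m=n)
print(np.linalg.norm(b - A @ x))
```

Plain restarting would call this sweep repeatedly with `x0` set to the latest iterate; the implicit-restart methods above additionally carry approximate eigenvectors across the restart to deflate the smallest eigenvalues.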
Convergence of Restarted Krylov Subspaces to Invariant Subspaces
SIAM J. Matrix Anal. Appl., 2001
Cited by 29 (5 self)
Abstract: The performance of Krylov subspace eigenvalue algorithms for large matrices can be measured by the angle between a desired invariant subspace and the Krylov subspace. We develop general bounds for this convergence that include the effects of polynomial restarting and impose no restrictions concerning the diagonalizability of the matrix or its degree of nonnormality. Associated with a desired set of eigenvalues is a maximum "reachable invariant subspace" that can be developed from the given starting vector. Convergence for this distinguished subspace is bounded in terms involving a polynomial approximation problem. Elementary results from potential theory lead to convergence rate estimates and suggest restarting strategies based on optimal approximation points (e.g., Leja or Chebyshev points); exact shifts are evaluated within this framework. Computational examples illustrate the utility of these results. Origins of superlinear effects are also described.
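The convergence measure used above, the angle between a Krylov subspace and a target invariant subspace, can be computed directly via principal angles. A small numerical illustration (a sketch under simplifying assumptions: symmetric matrix, one-dimensional target spanned by the dominant eigenvector):

```python
import numpy as np

def containment_angle(K, target):
    """Angle between a one-dimensional target subspace and its best
    approximation inside span(K), via principal angles."""
    Qk, _ = np.linalg.qr(K)
    Qt, _ = np.linalg.qr(target)
    s = np.linalg.svd(Qk.T @ Qt, compute_uv=False)
    return float(np.arccos(np.clip(s.min(), -1.0, 1.0)))

rng = np.random.default_rng(2)
n = 100
A = np.diag(np.linspace(1.0, 2.0, n))
A[-1, -1] = 5.0                          # well-separated dominant eigenvalue
target = np.zeros((n, 1))
target[-1, 0] = 1.0                      # the corresponding invariant subspace

K = rng.standard_normal((n, 1))          # Krylov matrix, one column so far
angles = [containment_angle(K, target)]
for _ in range(14):
    K = np.hstack([K, A @ K[:, -1:]])
    angles.append(containment_angle(K, target))
print(angles[0], angles[-1])             # angle shrinks as the subspace grows
```

Because the Krylov subspace contains every polynomial in A applied to the starting vector, the angle decays at least as fast as the best polynomial that is large at the target eigenvalue and small on the rest of the spectrum, which is the polynomial approximation problem the paper's bounds formalize.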
Expressions And Bounds For The GMRES Residual
BIT, 1999
Cited by 20 (0 self)
Abstract: Expressions and bounds are derived for the residual norm in GMRES. It is shown that the minimal residual norm is large as long as the Krylov basis is well-conditioned. For scaled Jordan blocks the minimal residual norm is expressed in terms of eigenvalues and departure from normality. For normal matrices the minimal residual norm is expressed in terms of products of relative eigenvalue differences. Key words: linear system, Krylov methods, GMRES, MINRES, Vandermonde matrix, eigenvalues, departure from normality. AMS subject classification: 15A03, 15A06, 15A09, 15A12, 15A18, 15A60, 65F10, 65F15, 65F20, 65F35. 1. Introduction. The generalised minimal residual method (GMRES) [31, 36] (and MINRES for Hermitian matrices [30]) is an iterative method for solving systems of linear equations Ax = b. The approximate solution in iteration i minimises the two-norm of the residual b − Az over the Krylov space span{b, Ab, ..., A^{i−1} b}. The goal of this paper is to express this minimal residual norm...
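The quantity being bounded, the minimal residual over the Krylov space span{b, Ab, ..., A^{i−1} b}, can be computed brute-force as a least-squares problem against the Krylov matrix. A numerically naive sketch, suitable only for small examples (a practical GMRES uses the Arnoldi basis instead):

```python
import numpy as np

def min_residual_norms(A, b, imax):
    """min_y ||b - A K_i y||_2 for Krylov matrices K_i = [b, Ab, ..., A^{i-1} b]."""
    norms = []
    K = b.reshape(-1, 1)
    for _ in range(imax):
        y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
        norms.append(np.linalg.norm(b - A @ (K @ y)))
        K = np.hstack([K, A @ K[:, -1:]])     # append the next power of A
    return norms

rng = np.random.default_rng(3)
n = 40
A = np.diag(np.linspace(1.0, 10.0, n))        # normal (diagonal) test matrix
b = rng.standard_normal(n)
print(min_residual_norms(A, b, 8))
```

In exact arithmetic these are exactly the GMRES residual norms, so for the diagonal (normal) matrix above their decay can be checked against the eigenvalue-separation expressions the paper derives.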
Differences in the effects of rounding errors in Krylov solvers for symmetric indefinite linear systems
, 1999
Cited by 15 (0 self)
Abstract: The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving the reduced system in one way or another. This leads to well-known methods: MINRES, GMRES, and SYMMLQ. We will discuss in what way and to what extent these approaches differ in their sensitivity to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods, and we will not consider the errors in the Lanczos process itself. We will show that the method of solution may lead, under certain circumstances, to large additional errors that are not corrected by continuing the iteration process. Our findings are supported and illustrated by numerical examples. 1. Introduction. We will consider iterative methods for the construction of approximate solutions, starting with...
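The rounding-error story for Lanczos-based solvers starts with the 3-term recurrence itself: in floating point the computed basis loses orthogonality once Ritz values converge. A quick demonstration (a sketch, not the paper's experiments; the matrix and dimensions are arbitrary):

```python
import numpy as np

def lanczos(A, v0, m):
    """Plain 3-term Lanczos without reorthogonalization; returns the basis Q (n x m)."""
    n = len(v0)
    Q = np.zeros((n, m))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    beta = 0.0
    q_prev = np.zeros(n)
    for j in range(m - 1):
        w = A @ Q[:, j] - beta * q_prev       # 3-term recurrence
        alpha = Q[:, j] @ w
        w = w - alpha * Q[:, j]
        beta = np.linalg.norm(w)
        q_prev = Q[:, j]
        Q[:, j + 1] = w / beta
    return Q

rng = np.random.default_rng(4)
n = 300
A = np.diag(np.logspace(0.0, 4.0, n))         # wide spectrum: fast Ritz convergence
Q = lanczos(A, rng.standard_normal(n), 100)
loss = np.linalg.norm(Q.T @ Q - np.eye(100))
print(loss)                                   # far above machine precision
```

The paper's point is downstream of this: even taking the (inexact) Lanczos basis as given, MINRES, GMRES, and SYMMLQ propagate its errors into the solution differently.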
The influence of interface conditions on convergence of Krylov-Schwarz domain decomposition for the advection-diffusion equation
, 1995
Cited by 6 (2 self)
Abstract: Several variants of Schwarz domain decomposition, which differ in the choice of interface ...
A New Adaptive GMRES Algorithm For Achieving High Accuracy
Numer. Linear Algebra Appl.
Cited by 5 (3 self)
Abstract: GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm performance. An adaptive version of GMRES(k) which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem is proposed here. This adaptive GMRES(k) procedure outperforms standard GMRES(k), several other GMRES-like methods, and QMR on actual large-scale sparse structural mechanics post-buckling and analog circuit simulation problems. There are some applications, such as homotopy methods for high Reynolds number viscous flows, solid mechanics post-buckling analysis, and analog circuit simulation, where very high accuracy in the linear system solutions is essential. In this context, the modified Gram-Schmidt process in GMRES can fail, causing the entire GMR...
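The adaptive idea, enlarging the restart value k when a cycle fails to reduce the residual enough, can be sketched as a simple driver. This is not the paper's algorithm: the cycle below builds the Krylov basis by plain powers and solves a least-squares problem directly, which is numerically crude compared with the Arnoldi-based GMRES(k) the paper tunes, and the stall threshold and growth rule are illustrative placeholders:

```python
import numpy as np

def gmres_cycle(A, b, x, k):
    """One naive GMRES(k)-like cycle: minimize ||r - A K y|| over
    K = [r, Ar, ..., A^{k-1} r] (toy version; use Arnoldi in practice)."""
    r = b - A @ x
    K = r.reshape(-1, 1)
    for _ in range(k - 1):
        K = np.hstack([K, A @ K[:, -1:]])
    y, *_ = np.linalg.lstsq(A @ K, r, rcond=None)
    return x + K @ y

def adaptive_gmres(A, b, k=5, k_max=40, tol=1e-8, stall=0.9, max_cycles=100):
    x = np.zeros_like(b)
    bn = np.linalg.norm(b)
    res = bn
    for _ in range(max_cycles):
        if res <= tol * bn:
            break
        x = gmres_cycle(A, b, x, k)
        new_res = np.linalg.norm(b - A @ x)
        if new_res > stall * res and k < k_max:
            k = min(2 * k, k_max)        # convergence stalled: enlarge restart value
        res = new_res
    return x, k

rng = np.random.default_rng(5)
n = 120
A = 3 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x, k_final = adaptive_gmres(A, b)
print(k_final, np.linalg.norm(b - A @ x))
```

On this easy test matrix the initial k already converges, so k never grows; on a problem where GMRES(k) stagnates for small k, the driver doubles k until cycles make progress again, which is the behavior the adaptive criteria in the paper formalize.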
Comparison of multigrid and incomplete LU shifted-Laplace preconditioners for the inhomogeneous Helmholtz equation
Appl. Numer. Math.
A Different Approach To Bounding The Minimal Residual Norm In Krylov Methods
, 1998
Cited by 4 (2 self)
Abstract: In the context of Krylov methods for solving systems of linear equations, expressions and bounds are derived for the norm of the minimal residual, like the one produced by GMRES or MINRES. It is shown that the minimal residual norm is large as long as the Krylov basis is well-conditioned. In the context of nonnormal matrices, examples are given where the minimal residual norm is a function of the departure of the matrix from normality, and where the decrease of the residual norm depends on how large the departure from normality is compared to the eigenvalues. With regard to normal matrices, the Krylov matrix is unitarily equivalent to a row-scaled Vandermonde matrix and the minimal residual norm in iteration i is proportional to a product of i relative eigenvalue separations. Arguments are given for why normal matrices with complex eigenvalues can produce larger residual norms than Hermitian matrices, and why indefinite matrices can produce larger residual norms than definite matric...
The Main Effects of Rounding Errors in Krylov Solvers for Symmetric Linear Systems
, 1997
Cited by 1 (0 self)
Abstract: The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of linear systems, by solving the reduced system in one way or another. This leads to well-known methods: MINRES (GMRES), CG, CR, and SYMMLQ. We will discuss in what way and to what extent the various approaches are sensitive to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods (except CR), and we will not consider the errors in the Lanczos process itself. These errors may lead to large perturbations with respect to the exact process, but convergence still takes place. Our attention is focused on what happens in the solution phase. We will show that the way of solution may lead, under certain circumstances, to large additional errors that are not corrected by continuing the iteration process. Our findings are...