Results 1–10 of 47
Numerical solution of saddle point problems
Acta Numerica, 2005
Cited by 180 (30 self)
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for solving systems of this type. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
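To make the block structure concrete: a saddle point system couples an invertible (here SPD) block A with a constraint block B. The following Python/NumPy sketch is not taken from the survey; it uses hypothetical data and the classical Schur complement reduction, one of the segregated approaches such surveys cover:

```python
import numpy as np

# Hypothetical small saddle point system  [A  B^T; B  0] [x; y] = [f; g]
rng = np.random.default_rng(0)
A = np.diag([4.0, 5.0, 6.0])        # SPD (1,1) block
B = rng.standard_normal((2, 3))     # full-rank constraint block
f = rng.standard_normal(3)
g = rng.standard_normal(2)

# Schur complement reduction:  S = B A^{-1} B^T,  then  S y = B A^{-1} f - g
S = B @ np.linalg.solve(A, B.T)
y = np.linalg.solve(S, B @ np.linalg.solve(A, f) - g)
x = np.linalg.solve(A, f - B.T @ y)

# Cross-check against a direct solve of the full indefinite system
K = np.block([[A, B.T], [B, np.zeros((2, 2))]])
direct = np.linalg.solve(K, np.concatenate([f, g]))
```

For large sparse problems one would replace the dense solves with iterative methods such as preconditioned MINRES or GMRES, which is the setting the survey emphasizes.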
Recent computational developments in Krylov subspace methods for linear systems
Numer. Linear Algebra Appl., 2007
Cited by 48 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
Theory of inexact Krylov subspace methods and applications to scientific computing
2002
Cited by 47 (6 self)
Abstract. We provide a general framework for the understanding of inexact Krylov subspace methods for the solution of symmetric and nonsymmetric linear systems of equations, as well as for certain eigenvalue calculations. This framework allows us to explain the empirical results reported in a series of CERFACS technical reports by Bouras, Frayssé, and Giraud in 2000. Furthermore, assuming exact arithmetic, our analysis can be used to produce computable criteria to bound the inexactness of the matrix-vector multiplication in such a way as to maintain the convergence of the Krylov subspace method. The theory developed is applied to several problems including the solution of Schur complement systems, linear systems which depend on a parameter, and eigenvalue problems. Numerical experiments for some of these scientific applications are reported.
GMRES on (Nearly) Singular Systems
SIAM J. Matrix Anal. Appl., 1994
Cited by 39 (2 self)
We consider the behavior of the GMRES method for solving a linear system Ax = b when A is singular or nearly so, i.e., ill-conditioned. The (near) singularity of A may or may not affect the performance of GMRES, depending on the nature of the system and the initial approximate solution. For singular A, we give conditions under which the GMRES iterates converge safely to a least-squares solution or to the pseudoinverse solution. These results also apply to any residual-minimizing Krylov subspace method that is mathematically equivalent to GMRES. A practical procedure is outlined for efficiently and reliably detecting singularity or ill-conditioning when it becomes a threat to the performance of GMRES.
Key words: GMRES method, residual-minimizing methods, Krylov subspace methods, iterative linear algebra methods, singular or ill-conditioned linear systems. AMS(MOS) subject classifications: 65F10.
1. Introduction. The generalized minimal residual (GMRES) method of Saad and Schultz [1...
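The least-squares behavior described above can be demonstrated with a minimal, illustration-only GMRES implementation. This sketch (not the authors' code) assumes a symmetric singular A, so that range(A) = range(Aᵀ), one of the safe cases, and a consistent right-hand side; the iterate then coincides with the pseudoinverse solution:

```python
import numpy as np

def gmres_full(A, b, tol=1e-12, maxit=None):
    """Minimal unrestarted GMRES via the Arnoldi process (illustration only)."""
    n = len(b)
    maxit = maxit or n
    Q = np.zeros((n, maxit + 1))
    H = np.zeros((maxit + 1, maxit))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    x = np.zeros(n)
    for k in range(maxit):
        w = A @ Q[:, k]
        for j in range(k + 1):              # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ w
            w = w - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        # Solve the small least-squares problem  min || beta*e1 - H y ||
        e1 = np.zeros(k + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        x = Q[:, :k + 1] @ y
        if np.linalg.norm(b - A @ x) < tol or H[k + 1, k] < tol:
            return x                        # converged or (lucky) breakdown
        Q[:, k + 1] = w / H[k + 1, k]
    return x

A = np.diag([1.0, 2.0, 0.0])        # singular, symmetric: range(A) = range(A^T)
b = np.array([1.0, 2.0, 0.0])       # consistent right-hand side in range(A)
x = gmres_full(A, b)
pinv_sol = np.linalg.pinv(A) @ b    # the pseudoinverse solution, here (1, 1, 0)
```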
Geometric Aspects in the Theory of Krylov Subspace Methods
Acta Numerica, 1999
Cited by 29 (2 self)
The recent development of Krylov subspace methods for the solution of operator equations has shown that two basic construction principles, the orthogonal residual (OR) and minimal residual (MR) approaches, underlie the most commonly used algorithms. It is shown that these can both be formulated as techniques for solving an approximation problem on a sequence of nested subspaces of a Hilbert space, a problem not necessarily related to an operator equation. Most of the familiar Krylov subspace algorithms result when these subspaces form a Krylov sequence. The well-known relations among the iterates and residuals of OR/MR pairs are shown to hold also in this rather general setting. We further show that a common error analysis for these methods involving the canonical angles between subspaces allows many of the recently developed error bounds to be derived in a simple manner. An application of this analysis to compact perturbations of the identity shows that OR/MR pairs of Krylov subspace methods converge q-superlinearly when applied to such operator equations.
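For reference, the best known of the OR/MR residual relations mentioned above, stated here in its standard form for the classical FOM (OR) / GMRES (MR) pair rather than quoted from the paper:

```latex
\|r_k^{\mathrm{OR}}\| \;=\;
\frac{\|r_k^{\mathrm{MR}}\|}
     {\sqrt{1-\bigl(\|r_k^{\mathrm{MR}}\|/\|r_{k-1}^{\mathrm{MR}}\|\bigr)^{2}}}
```

so an MR plateau (‖r_k^MR‖ ≈ ‖r_{k-1}^MR‖) corresponds to a large or undefined OR residual, the familiar "peak/plateau" correspondence.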
A restarted Krylov subspace method for the evaluation of matrix functions
 SIAM J. Numer. Anal.
Cited by 29 (4 self)
Abstract. We show how the Arnoldi algorithm for approximating a function of a matrix times a vector can be restarted in a manner analogous to restarted Krylov subspace methods for solving linear systems of equations. The resulting restarted algorithm reduces to other known algorithms for the reciprocal and the exponential functions. We further show that the restarted algorithm inherits the superlinear convergence property of its unrestarted counterpart for entire functions and present the results of numerical experiments.
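The unrestarted approximation that the restarted algorithm builds on is f(A)b ≈ ‖b‖ V_m f(H_m) e_1, with V_m and H_m produced by the Arnoldi process. A hedged Python/NumPy sketch (illustration only, not the authors' code): a symmetric test matrix, f = exp, and m chosen equal to n so the Krylov approximation is exact:

```python
import numpy as np

def arnoldi(A, b, m):
    """m steps of the Arnoldi process (illustration only)."""
    n = len(b)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for k in range(m):
        w = A @ V[:, k]
        for j in range(k + 1):              # modified Gram-Schmidt
            H[j, k] = V[:, j] @ w
            w = w - H[j, k] * V[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] > 1e-12:
            V[:, k + 1] = w / H[k + 1, k]
    return V[:, :m], H[:m, :m]

def expm_sym(M):
    """Matrix exponential of a (nearly) symmetric matrix via eigendecomposition."""
    lam, U = np.linalg.eigh(M)
    return (U * np.exp(lam)) @ U.T

A = np.diag([-1.0, -2.0, -3.0, -4.0])   # small symmetric test matrix
b = np.ones(4)
m = 4                                   # full dimension: approximation is exact here
V, H = arnoldi(A, b, m)
e1 = np.zeros(m); e1[0] = 1.0
approx = np.linalg.norm(b) * (V @ (expm_sym(H) @ e1))  # f(A) b ~ ||b|| V_m f(H_m) e_1
exact = expm_sym(A) @ b
```

Restarting replaces the single long Arnoldi recurrence with a sequence of short ones, which is the contribution of the paper itself.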
Inexact Krylov subspace methods for linear systems
2002
Cited by 26 (4 self)
There is a class of linear problems for which the computation of the matrix-vector product is very expensive, since a time-consuming approximation method is necessary to compute it with some prescribed relative precision. In this paper we investigate the effect of an approximately computed matrix-vector product on the convergence and accuracy of several Krylov subspace solvers. The obtained insight is used to tune the precision of the matrix-vector product in every iteration so that an overall efficient process is obtained. This gives the empirical relaxation strategy of Bouras and Frayssé proposed in [2]. Such strategies can lead to considerable savings over the standard approach of using a fixed relative precision for the matrix-vector product in every step. We will argue that the success of a relaxation strategy depends on the underlying way the Krylov subspace is constructed and not on the optimality properties of the residuals. Our analysis leads to an improved version of a strategy of Bouras, Frayssé, and Giraud [3] for the Conjugate Gradient method in the case of Hermitian indefinite matrices.
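A hedged sketch of the idea (a simulation, not the authors' algorithm or code): run CG with a perturbed matrix-vector product whose allowed error η_k = ε/‖r_k‖ is relaxed, i.e. grows, as the residual shrinks, and observe that the attainable true residual is still of order ε:

```python
import numpy as np

def cg_inexact(A, b, eps, maxit=50):
    """CG in which each product A@p is deliberately perturbed, with a
    tolerance that is RELAXED as the recursive residual norm decreases."""
    rng = np.random.default_rng(0)
    n = len(b)
    x = np.zeros(n); r = b.copy(); p = r.copy()
    for _ in range(maxit):
        rnorm = np.linalg.norm(r)
        if rnorm < eps:
            break
        eta = eps / rnorm               # relaxation: looser matvec when r is small
        Ap = A @ p
        noise = rng.standard_normal(n)
        noise *= eta * np.linalg.norm(Ap) / np.linalg.norm(noise)
        q = Ap + noise                  # simulated inexact matrix-vector product
        alpha = rnorm**2 / (p @ q)
        x += alpha * p
        r_new = r - alpha * q
        beta = (np.linalg.norm(r_new) / rnorm) ** 2
        r = r_new
        p = r + beta * p
    return x

A = np.diag(np.arange(1.0, 9.0))        # well-conditioned SPD test matrix
b = np.ones(8)
x = cg_inexact(A, b, eps=1e-10)
true_res = np.linalg.norm(b - A @ x)    # stays near eps despite the relaxed matvecs
```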
Matrices that Generate the Same Krylov Residual Spaces
 in Recent Advances in Iterative Methods, 1994
Cited by 24 (2 self)
Given an n-by-n nonsingular matrix A and an n-vector v, we consider the spaces of the form A K_k(A, v), k = 1, ..., n, where K_k(A, v) is the k-th Krylov space, equal to span{v, Av, ..., A^{k-1} v}. We characterize the set of matrices B that, with the given vector v, generate the same spaces; i.e., those matrices B for which B K_k(B, v) = A K_k(A, v) for all k = 1, ..., n. It is shown that any such sequence of spaces can be generated by a unitary matrix. If zero is outside the field of values of A, then there is a Hermitian positive definite matrix that generates the same spaces, and, moreover, if A is close to Hermitian then there is a nearby Hermitian matrix that generates the same spaces. It is also shown that any such sequence of spaces can be generated by a matrix having any desired eigenvalues. Implications about the convergence rate of the GMRES method are discussed. A new proof is given that if zero is outside the field of values of A, then convergence of...
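The subspace equality B K_k(B, v) = A K_k(A, v) can be checked numerically. The sketch below (hypothetical, not from the paper) verifies it for the trivially admissible choice B = 2A, since scaling changes neither K_k(A, v) nor the span of A K_k(A, v):

```python
import numpy as np

def krylov_residual_space(A, v, k):
    """Columns spanning A * K_k(A, v) = span{A v, A^2 v, ..., A^k v}."""
    cols, w = [], v.copy()
    for _ in range(k):
        w = A @ w
        cols.append(w.copy())
    return np.column_stack(cols)

def same_span(X, Y):
    """True iff the column spaces of X and Y coincide (rank test)."""
    r = np.linalg.matrix_rank(X)
    return (np.linalg.matrix_rank(Y) == r
            and np.linalg.matrix_rank(np.hstack([X, Y])) == r)

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
v = rng.standard_normal(4)
B = 2.0 * A   # trivially admissible: scaling preserves every Krylov residual space
ok = all(same_span(krylov_residual_space(A, v, k),
                   krylov_residual_space(B, v, k)) for k in range(1, 5))
```

The paper's interest is of course in the far less obvious admissible B: unitary, Hermitian positive definite, or with prescribed eigenvalues.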
Flexible innerouter Krylov subspace methods
 SIAM J. Numer. Anal., 2003
Cited by 21 (2 self)
"Flexible Krylov methods" refers to a class of methods which accept preconditioning that can change from one step to the next. Given a Krylov subspace method, such as CG, GMRES, QMR, etc., for the solution of a linear system Ax = b, instead of having a fixed preconditioner M and the (right) preconditioned equation AM^{-1} y = b (Mx = y), one may have a different matrix, say M_k, at each step. In this paper, the case where the preconditioner itself is a Krylov subspace method is studied. There are several papers in the literature where such a situation is presented and numerical examples given. A general theory is provided encompassing many of these cases, including truncated methods. The overall space where the solution is approximated is no longer a Krylov subspace but a subspace of a larger Krylov space. We show how this subspace keeps growing as the outer iteration progresses, thus providing a convergence theory for these inner-outer methods. Numerical tests illustrate some important implementation aspects that make the discussed inner-outer methods very appealing in practical circumstances.
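A minimal sketch of a flexible method in this spirit: a toy FGMRES in Python/NumPy whose inner "preconditioner" applies a k-dependent number of Jacobi sweeps. All names and data are hypothetical; this is an illustration, not the paper's algorithm:

```python
import numpy as np

def fgmres(A, b, inner_solve, m=20, tol=1e-10):
    """Toy flexible GMRES: the preconditioner may change at every step."""
    n = len(b)
    V = np.zeros((n, m + 1)); Z = np.zeros((n, m))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    x = np.zeros(n)
    for k in range(m):
        Z[:, k] = inner_solve(V[:, k], k)   # z_k = M_k^{-1} v_k, k-dependent
        w = A @ Z[:, k]
        for j in range(k + 1):              # modified Gram-Schmidt
            H[j, k] = V[:, j] @ w
            w = w - H[j, k] * V[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        e1 = np.zeros(k + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        x = Z[:, :k + 1] @ y                # NB: solution built from Z, not V
        if np.linalg.norm(b - A @ x) < tol or H[k + 1, k] < 1e-14:
            return x
        V[:, k + 1] = w / H[k + 1, k]
    return x

rng = np.random.default_rng(2)
A = np.diag(np.arange(2.0, 10.0)) + 0.1 * rng.standard_normal((8, 8))
b = rng.standard_normal(8)

def jacobi_steps(v, k):
    """Inner 'preconditioner': a k-dependent number of Jacobi sweeps on A z = v."""
    D = np.diag(A)
    z = v / D
    for _ in range(min(k, 3)):
        z += (v - A @ z) / D
    return z

x = fgmres(A, b, jacobi_steps)
```

Because the x-update uses the stored Z columns rather than the orthonormal V, the iterates live in a subspace of a larger Krylov space, exactly the situation the paper analyzes.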
Expressions And Bounds For The GMRES Residual
 BIT, 1999
Cited by 20 (0 self)
Expressions and bounds are derived for the residual norm in GMRES. It is shown that the minimal residual norm is large as long as the Krylov basis is well-conditioned. For scaled Jordan blocks the minimal residual norm is expressed in terms of eigenvalues and departure from normality. For normal matrices the minimal residual norm is expressed in terms of products of relative eigenvalue differences.
Key words: linear system, Krylov methods, GMRES, MINRES, Vandermonde matrix, eigenvalues, departure from normality. AMS subject classification: 15A03, 15A06, 15A09, 15A12, 15A18, 15A60, 65F10, 65F15, 65F20, 65F35.
1. Introduction. The generalised minimal residual method (GMRES) [31, 36] (and MINRES for Hermitian matrices [30]) is an iterative method for solving systems of linear equations Ax = b. The approximate solution in iteration i minimises the two-norm of the residual b − Az over the Krylov space span{b, Ab, ..., A^{i−1} b}. The goal of this paper is to express this minimal residual norm...
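For context, the classical polynomial min-max bound for GMRES applied to a normal matrix A with eigenvalues λ_1, ..., λ_n is stated below; note this standard bound is background only, since the paper derives exact expressions rather than bounds:

```latex
\frac{\|r_k\|}{\|r_0\|} \;\le\;
\min_{\substack{p \in \Pi_k \\ p(0)=1}} \;
\max_{1 \le i \le n} \lvert p(\lambda_i)\rvert
```

Here Π_k denotes the polynomials of degree at most k; the products of relative eigenvalue differences mentioned in the abstract sharpen this to an equality for the minimal residual itself.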