Results 1–10 of 33
Numerical solution of saddle point problems
ACTA NUMERICA, 2005
Abstract

Cited by 275 (24 self)
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for systems of this type. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
Recent computational developments in Krylov subspace methods for linear systems
NUMER. LINEAR ALGEBRA APPL, 2007
Abstract

Cited by 65 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
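Among the variants reviewed, restarting is the most common in practice: the stored Krylov basis is capped at a fixed number of vectors and the method restarts from the current iterate. A toy run (the nonsymmetric test matrix and its size are my own assumptions) using SciPy's restarted GMRES:

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Synthetic well-conditioned nonsymmetric system
rng = np.random.default_rng(1)
n = 100
A = np.eye(n) + 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

# restart=10: at most 10 basis vectors are kept; maxiter counts
# restart cycles, so memory stays O(10 n) regardless of iteration count.
x, info = gmres(A, b, restart=10, maxiter=200)
rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

The trade-off the survey discusses: restarting bounds memory and orthogonalization cost, but discards convergence history, which augmented and deflated variants try to recover.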
Theory of inexact Krylov subspace methods and applications to scientific computing
2002
Abstract

Cited by 61 (7 self)
We provide a general framework for the understanding of inexact Krylov subspace methods for the solution of symmetric and nonsymmetric linear systems of equations, as well as for certain eigenvalue calculations. This framework allows us to explain the empirical results reported in a series of CERFACS technical reports by Bouras, Frayssé, and Giraud in 2000. Furthermore, assuming exact arithmetic, our analysis can be used to produce computable criteria to bound the inexactness of the matrix-vector multiplication in such a way as to maintain the convergence of the Krylov subspace method. The theory developed is applied to several problems including the solution of Schur complement systems, linear systems which depend on a parameter, and eigenvalue problems. Numerical experiments for some of these scientific applications are reported.
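The setting of an inexact matrix-vector product can be mimicked with a SciPy `LinearOperator` whose matvec is deliberately perturbed. This is only a sketch under my own assumptions (synthetic matrix, a fixed perturbation level `eta`); the relaxation strategies the paper analyzes actually let the inexactness grow as the residual decreases:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(2)
n = 80
A = np.eye(n) + rng.standard_normal((n, n)) / (4 * np.sqrt(n))
b = rng.standard_normal(n)
eta = 1e-8   # illustrative fixed inexactness level per product

def inexact_matvec(v):
    # each "A v" is computed only up to a relative error of order eta,
    # as if it came from an inner iteration stopped early
    return A @ v + eta * np.linalg.norm(v) * rng.standard_normal(n)

Aop = LinearOperator((n, n), matvec=inexact_matvec, dtype=float)
x, info = gmres(Aop, b, maxiter=200)
true_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

The point illustrated: with the inexactness kept below the requested tolerance, the Krylov iteration still converges, and the attainable true residual is limited by the accumulated perturbations.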
Analysis of Acceleration Strategies for Restarted Minimal Residual Methods
2000
Abstract

Cited by 35 (6 self)
We provide an overview of existing strategies which compensate for the deterioration of convergence of minimum residual (MR) Krylov subspace methods due to restarting. We evaluate the popular practice of using nearly invariant subspaces to either augment Krylov subspaces or to construct preconditioners which invert on these subspaces. In the case where these spaces are exactly invariant, the augmentation approach is shown to be superior. We further show how a strategy recently introduced by de Sturler for truncating the approximation space of an MR method can be interpreted as a controlled loosening of the condition for global MR approximation based on the canonical angles between subspaces. For the special case of Krylov subspace methods, we give a concise derivation of the role of Ritz and harmonic Ritz values and vectors in the polynomial description of Krylov spaces as well as of the use of the implicitly updated Arnoldi method for manipulating Krylov spaces.
Flexible inner-outer Krylov subspace methods
SIAM J. NUMER. ANAL, 2003
Abstract

Cited by 28 (2 self)
Flexible Krylov methods refers to a class of methods which accept preconditioning that can change from one step to the next. Given a Krylov subspace method, such as CG, GMRES, QMR, etc. for the solution of a linear system Ax = b, instead of having a fixed preconditioner M and the (right) preconditioned equation AM⁻¹y = b (with Mx = y), one may have a different matrix, say M_k, at each step. In this paper, the case where the preconditioner itself is a Krylov subspace method is studied. There are several papers in the literature where such a situation is presented and numerical examples given. A general theory is provided encompassing many of these cases, including truncated methods. The overall space where the solution is approximated is no longer a Krylov subspace but a subspace of a larger Krylov space. We show how this subspace keeps growing as the outer iteration progresses, thus providing a convergence theory for these inner-outer methods. Numerical tests illustrate some important implementation aspects that make the discussed inner-outer methods very appealing in practical circumstances.
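The inner-outer idea can be sketched with a toy flexible GMRES in which the "preconditioner" applied at step j is itself a short inner GMRES run. This is an assumption-laden illustration, not the paper's code: the key structural point it shows is that the preconditioned vectors Z must be stored explicitly, and the iterate is formed from Z, because no single M relates Z to the Arnoldi basis V:

```python
import numpy as np
from scipy.sparse.linalg import gmres

def fgmres(A, b, inner_iters=5, outer_iters=30, tol=1e-8):
    n = b.shape[0]
    V = np.zeros((n, outer_iters + 1))   # Arnoldi basis
    Z = np.zeros((n, outer_iters))       # preconditioned vectors (must be kept)
    H = np.zeros((outer_iters + 1, outer_iters))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    x = np.zeros(n)
    for j in range(outer_iters):
        # variable "preconditioner": one cycle of inner GMRES on A z = v_j
        Z[:, j], _ = gmres(A, V[:, j], restart=inner_iters, maxiter=1)
        w = A @ Z[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        # minimize ||beta e1 - H y||; form the iterate from Z, not from V
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)[0]
        x = Z[:, :j + 1] @ y
        if np.linalg.norm(b - A @ x) <= tol * beta or H[j + 1, j] < 1e-14:
            return x, 0
        V[:, j + 1] = w / H[j + 1, j]
    return x, 1

# synthetic well-conditioned nonsymmetric test problem (my own choice)
rng = np.random.default_rng(3)
n = 60
A = np.eye(n) + rng.standard_normal((n, n)) / (4 * np.sqrt(n))
b = rng.standard_normal(n)
x, info = fgmres(A, b)
```

Consistent with the abstract, the solution lives in span(Z), a subspace of a larger Krylov space rather than a Krylov space itself.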
On the occurrence of superlinear convergence of exact and inexact Krylov subspace methods
SIAM Rev, 2005
Abstract

Cited by 24 (9 self)
We present a general analytical model which describes the superlinear convergence of Krylov subspace methods. We take an invariant subspace approach, so that our results apply also to inexact methods, and to nondiagonalizable matrices. Thus, we provide a unified treatment of the superlinear convergence of GMRES, Conjugate Gradients, block versions of these, and inexact subspace methods. Numerical experiments illustrate the bounds obtained.
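The superlinear effect described above is easy to reproduce: run CG on an SPD matrix whose spectrum is a tight cluster plus a few large outliers. Once the Krylov space effectively contains the outliers' invariant subspace, the rate is governed by the cluster alone and convergence accelerates. The matrix below is a synthetic construction of mine, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
# spectrum: a tight cluster in [1, 2] plus three large outlier eigenvalues
eigs = np.concatenate([np.linspace(1.0, 2.0, n - 3), [50.0, 80.0, 120.0]])
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * eigs) @ Q.T                  # SPD with prescribed spectrum
b = rng.standard_normal(n)

# plain CG, recording residual norms to observe the changing rate
x = np.zeros(n)
r = b.copy()
p = r.copy()
hist = [np.linalg.norm(r)]
for _ in range(60):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
    hist.append(np.linalg.norm(r))
```

Plotting `hist` on a log scale shows a slow initial phase followed by the fast rate dictated by the condition number of the cluster, which is the behavior the paper's invariant subspace model explains.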
The many proofs of an identity on the norm of oblique projections
 Numer. Algorithms
Abstract

Cited by 14 (1 self)
Given an oblique projector P on a Hilbert space, i.e., an operator satisfying P² = P, which is neither null nor the identity, it holds that ‖P‖ = ‖I − P‖. This useful equality, while not widely known, has been proven repeatedly in the literature. Many published proofs are reviewed, and simpler ones are presented.
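The identity is easy to check numerically in the finite-dimensional case. A minimal sketch (the construction of a random oblique projector below is my own, standard choice):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 8, 3
X = rng.standard_normal((n, k))      # basis of range(P)
Y = rng.standard_normal((n, k))      # defines the direction of projection
# oblique projector onto range(X) along the orthogonal complement of range(Y)
P = X @ np.linalg.solve(Y.T @ X, Y.T)

assert np.allclose(P @ P, P)         # idempotent, hence a projector
nP = np.linalg.norm(P, 2)            # spectral norms
nIP = np.linalg.norm(np.eye(n) - P, 2)
```

For a generic oblique P both norms exceed 1 and, as the identity asserts, agree to machine precision; only for an orthogonal projector do they equal 1.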
How Descriptive Are GMRES Convergence Bounds?
Oxford University Computing Laboratory, 1999
Abstract

Cited by 13 (1 self)
Eigenvalues with the eigenvector condition number, the field of values, and pseudospectra have all been suggested as the basis for convergence bounds for minimum residual Krylov subspace methods applied to nonnormal coefficient matrices. This paper analyzes and compares these bounds, illustrating with six examples the success and failure of each one. Refined bounds based on eigenvalues and the field of values are suggested to handle low-dimensional nonnormality. It is observed that pseudospectral bounds can capture multiple convergence stages. Unfortunately, computation of pseudospectra can be rather expensive. This motivates an adaptive technique for estimating GMRES convergence based on approximate pseudospectra taken from the Arnoldi process that is the basis for GMRES.
FQMR: A flexible quasi-minimal residual method with inexact preconditioning
SIAM J. Sci. Comput, 2001
Abstract

Cited by 10 (1 self)
A flexible version of the QMR algorithm is presented which allows for the use of a different preconditioner at each step of the algorithm. In particular, inexact solutions of the preconditioned equations are allowed, as well as the use of an (inner) iterative method as a preconditioner. Several theorems are presented relating the norm of the residual of the new method with the norm of the residual of other methods, including QMR and flexible GMRES (FGMRES). In addition, numerical experiments are presented which illustrate the convergence of flexible QMR (FQMR), and show that in certain cases FQMR can produce approximations with lower residual norms than QMR.
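FQMR itself is not in standard libraries, but the baseline it is compared against is: SciPy ships a QMR solver for non-Hermitian systems. A toy run as a reference point (the test matrix is a synthetic assumption of mine, and no flexible preconditioning is involved here):

```python
import numpy as np
from scipy.sparse.linalg import qmr

# synthetic well-conditioned non-Hermitian system
rng = np.random.default_rng(5)
n = 80
A = np.eye(n) + rng.standard_normal((n, n)) / (4 * np.sqrt(n))
b = rng.standard_normal(n)

# plain QMR; FQMR would additionally allow the preconditioners
# (M1, M2 in SciPy's interface) to change or be applied inexactly per step
x, info = qmr(A, b, maxiter=300)
rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Note that QMR only quasi-minimizes the residual, so unlike GMRES its true residual norm need not decrease monotonically, which is why the paper's residual-norm comparison theorems are of interest.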