Results 1–10 of 16
Recent computational developments in Krylov subspace methods for linear systems
 Numer. Linear Algebra Appl., 2007
Abstract

Cited by 51 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
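As a concrete instance of the class of methods this survey covers, the prototypical Krylov subspace method for symmetric positive definite systems, conjugate gradients, can be sketched in a few lines. This is a plain textbook sketch, not one of the restarted, augmented, or deflated variants reviewed in the paper:

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=200):
    # Plain conjugate gradient for a symmetric positive definite A:
    # implicitly builds the Krylov subspace span{b, Ab, A^2 b, ...}.
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # first search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # step length (minimizes the A-norm of the error)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p      # next direction, A-conjugate to the previous ones
        rs = rs_new
    return x

# 1-D Laplacian test problem (SPD, tridiagonal)
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = cg(A, b)
print(np.linalg.norm(A @ x - b))       # tiny residual
```

The restarted, flexible, and inexact variants discussed in the survey all modify this basic loop (or its nonsymmetric analogues) rather than replace it.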
MINRES-QLP: A Krylov subspace method for indefinite or singular symmetric systems
 SIAM J. Sci. Comput. (to appear), 2011
Abstract

Cited by 3 (2 self)
CG, SYMMLQ, and MINRES are Krylov subspace methods for solving symmetric systems of linear equations. When these methods are applied to an incompatible system (that is, a singular symmetric least-squares problem), CG could break down and SYMMLQ’s solution could explode, while MINRES would give a least-squares solution but not necessarily the minimum-length (pseudoinverse) solution. This understanding motivates us to design a MINRES-like algorithm to compute minimum-length solutions to singular symmetric systems. MINRES uses QR factors of the tridiagonal matrix from the Lanczos process (where R is upper-tridiagonal). MINRES-QLP uses a QLP decomposition (where rotations on the right reduce R to lower-tridiagonal form). On ill-conditioned systems (singular or not), MINRES-QLP can give more accurate solutions than MINRES. We derive preconditioned MINRES-QLP, new stopping rules, and better estimates of the solution and residual norms, the matrix norm, and the condition number.
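The distinction the abstract draws, a least-squares solution versus the minimum-length (pseudoinverse) solution, can be illustrated directly with a dense pseudoinverse. This is a sketch of the target that MINRES-QLP computes iteratively, not of the algorithm itself:

```python
import numpy as np

# Singular symmetric system: A has a nontrivial null space, b is incompatible.
A = np.array([[2., 0., 0.],
              [0., 1., 0.],
              [0., 0., 0.]])            # symmetric, rank 2
b = np.array([2., 1., 1.])              # last component is unreachable -> least-squares problem

x_min = np.linalg.pinv(A) @ b           # minimum-length least-squares solution
x_other = x_min + np.array([0., 0., 5.])  # another LS solution: differs by a null-space vector

r1 = np.linalg.norm(A @ x_min - b)      # both residuals equal the minimal value 1.0 ...
r2 = np.linalg.norm(A @ x_other - b)
# ... but only x_min has minimal norm among all least-squares solutions.
print(r1, r2, np.linalg.norm(x_min), np.linalg.norm(x_other))
```

Plain MINRES applied to this system may return something like `x_other`; the point of MINRES-QLP is to return `x_min`.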
Complete Iterative Method for Computing Pseudospectra
, 1997
Abstract

Cited by 2 (0 self)
Efficient codes for computing pseudospectra of large sparse matrices usually use a Lanczos-type method with the shift-and-invert technique and a shift equal to zero. These codes are then very efficient for computing pseudospectra on regions where the matrix is non-normal (because ‖(A − zI)⁻¹‖₂ is large), but they lose their efficiency when they compute pseudospectra on regions where the spectrum of A is not sensitive (‖(A − zI)⁻¹‖₂ is small). A way to overcome this loss of efficiency using only iterative methods associated with an adaptive shift is proposed.
1 Introduction
The ε-pseudoeigenvalue and ε-pseudospectrum are defined as follows:
• z is an ε-pseudoeigenvalue of A if z is an eigenvalue of A + E with ‖E‖₂ ≤ ε‖A‖₂.
• The ε-pseudospectrum of A is defined by Λ_ε(A) = {z ∈ ℂ : z is an ε-pseudoeigenvalue of A}.
For a fixed ε, the contour of Λ_ε(A) can be defined as {z ∈ ℂ : ‖A‖₂ ‖(A − zI)⁻¹‖₂ = ε⁻¹}. The graphical representati...
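The membership test behind such codes follows from the equivalent singular-value characterization: z lies in the ε-pseudospectrum exactly when σ_min(A − zI) ≤ ε‖A‖₂. A minimal dense sketch of that test (the paper's contribution is doing this efficiently with iterative methods and an adaptive shift, which this toy version does not attempt):

```python
import numpy as np

A = np.array([[1., 100.],
              [0., 2.]])               # highly non-normal: large pseudospectra
normA = np.linalg.norm(A, 2)
eps = 1e-2

def in_pseudospectrum(z):
    # Smallest singular value of A - zI, computed densely (a stand-in for the
    # shift-and-invert Lanczos estimation used by the efficient codes).
    smin = np.linalg.svd(A - z * np.eye(2), compute_uv=False)[-1]
    return smin <= eps * normA

print(in_pseudospectrum(1.5 + 0.5j))   # between the eigenvalues of a non-normal A: True
print(in_pseudospectrum(50.0))         # far from the spectrum: False
```

Sweeping `in_pseudospectrum` over a grid of z values and contouring the boundary gives the usual pseudospectrum plots.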
Parallel Computational Magneto-Fluid Dynamics
, 1998
Abstract

Cited by 1 (1 self)
this report will be on the computationally challenging applications that we claimed to tackle at the start of our activities. Various hydrodynamic and magnetohydrodynamic physics issues can now be studied systematically.
Holocene tropical South American hydroclimate revealed from a decadally resolved ...
 Earth and Planetary Science Letters
How to Make Simpler GMRES and GCR More Stable
 Vol. 30, No. 4, pp. 1483–1499, 2008
Abstract
In this paper we analyze the numerical behavior of several minimum residual methods which are mathematically equivalent to the GMRES method. Two main approaches are compared: one that computes the approximate solution in terms of a Krylov space basis from an upper triangular linear system for the coordinates, and one where the approximate solutions are updated with a simple recursion formula. We show that a different choice of the basis can significantly influence the numerical behavior of the resulting implementation. While Simpler GMRES and ORTHODIR are less stable due to the ill-conditioning of the basis used, the residual basis is well-conditioned as long as we have a reasonable residual norm decrease. These results lead to a new implementation, which is conditionally backward stable, and they explain the experimentally observed fact that the GCR method delivers very accurate approximate solutions when it converges fast enough without stagnation.
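A bare-bones GCR loop shows the update structure under discussion: residual minimization via explicitly orthogonalized Ap directions and a simple recursive solution update. This is a textbook sketch, without the stability analysis of the paper:

```python
import numpy as np

def gcr(A, b, maxiter=100, tol=1e-10):
    # Generalized Conjugate Residual: minimizes ||b - A x||_2 over the Krylov space.
    n = len(b)
    x = np.zeros(n)
    r = b.copy()
    P, Q = [], []                      # search directions p_i and their images q_i = A p_i
    for _ in range(maxiter):
        p, q = r.copy(), A @ r
        for pi, qi in zip(P, Q):       # orthogonalize q against previous q_i
            beta = qi @ q
            p -= beta * pi
            q -= beta * qi
        nq = np.linalg.norm(q)
        if nq < 1e-14:
            break
        p, q = p / nq, q / nq
        alpha = q @ r                  # optimal step: minimizes the residual along p
        x += alpha * p                 # simple recursive update of the approximation
        r -= alpha * q
        P.append(p); Q.append(q)
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

# Nonsymmetric, well-conditioned test problem
rng = np.random.default_rng(0)
A = np.eye(30) + 0.1 * rng.standard_normal((30, 30))
b = rng.standard_normal(30)
x = gcr(A, b)
print(np.linalg.norm(A @ x - b))
```

The paper's point is that the choice of basis in loops like this (residual vectors versus other bases) governs how accurate the computed `x` can be in finite precision.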
A block preconditioning cost analysis for solving . . .
, 2005
Abstract
We investigate the cost of preconditioning when solving large sparse saddle-point linear systems with Krylov subspace methods. To use the block structure of the original matrix, we apply one of two block preconditioners. Algebraic eigenvalue analysis is given for a particular case of the preconditioners. We also give eigenvalue bounds for the preconditioned matrix when the preconditioner is block diagonal and positive definite. We treat linear solves involving the preconditioner as a subproblem which we solve iteratively. In order to minimize cost, we implement a fixed inner tolerance and a varying inner tolerance based on bounds developed by Simoncini and Szyld (2003) and van den Eshof, Sleijpen, and van Gijzen (2005). Numerical experiments compare the cost of preconditioning for various iterative solvers and block preconditioners. We also experiment with different tolerances for the iterative solution of linear solves involving the preconditioner.
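The flavor of such eigenvalue analysis can be seen in the classical Murphy–Golub–Wathen result: with the ideal block-diagonal preconditioner diag(A, B A⁻¹ Bᵀ), the preconditioned saddle-point matrix has only the three eigenvalues 1 and (1 ± √5)/2. A small numerical check of that known result (illustrative only; not the specific preconditioners analyzed in this paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)            # SPD (1,1) block
B = rng.standard_normal((m, n))        # full-rank constraint block

K = np.block([[A, B.T],
              [B, np.zeros((m, m))]])  # saddle-point matrix

S = B @ np.linalg.solve(A, B.T)        # Schur complement
P = np.block([[A, np.zeros((n, m))],
              [np.zeros((m, n)), S]])  # ideal block-diagonal preconditioner

eigs = np.linalg.eigvals(np.linalg.solve(P, K))
print(np.unique(np.round(eigs.real, 8)))   # clusters at 1 and (1 ± sqrt(5))/2
```

In practice the Schur complement S is itself approximated, and the inner solves with P are done iteratively, which is exactly the cost trade-off the paper studies.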
Inexact Krylov Subspace Methods for Linear Systems
Abstract
There is a class of linear problems for which the computation of the matrix-vector product is very expensive, since a time-consuming approximation method is necessary to compute it with some prescribed relative precision. In this paper we investigate the impact of approximately computed matrix-vector products on the convergence and attainable accuracy of several Krylov subspace solvers. We will argue that the success of a relaxation strategy depends on the underlying way the Krylov subspace is constructed and not on the optimality properties of the particular method. The obtained insight is used to tune the precision of the matrix-vector product in every iteration step in such a way that an overall efficient process is obtained. Our analysis confirms the empirically found relaxation strategy of Bouras and Frayssé for the GMRES method proposed in [2]. Furthermore, we give an improved version of a strategy of Bouras, Frayssé, and Giraud [3] for the Conjugate Gradient method.
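The relaxation idea, computing the matrix-vector product less accurately as the residual decreases, can be mimicked by injecting controlled noise into an otherwise standard CG loop. This is a toy model of inexactness with a hypothetical cap of 0.1 on the relative matvec error, not the exact Bouras–Frayssé rule:

```python
import numpy as np

def inexact_cg(A, b, tol=1e-8, maxiter=500):
    # CG in which each product A @ p is computed only approximately, modeled here
    # by adding noise of relative size eta; eta is *relaxed* (allowed to grow)
    # as the residual shrinks, in the spirit of the Bouras-Fraysse strategy.
    rng = np.random.default_rng(7)
    normA = np.linalg.norm(A, 2)
    normb = np.linalg.norm(b)
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        eta = min(0.1, tol * normb / np.sqrt(rs))    # looser precision as ||r|| drops
        e = rng.standard_normal(len(b))
        e *= eta * normA * np.linalg.norm(p) / np.linalg.norm(e)
        Ap = A @ p + e                               # inexact matrix-vector product
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * normb:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n = 40
M = rng.standard_normal((n, n))
A = M @ M.T / n + np.eye(n)            # SPD, modest condition number
b = rng.standard_normal(n)
x = inexact_cg(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

Despite the deliberately corrupted products, the attained true residual stays near the target tolerance, which is the phenomenon the relaxation analysis explains.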
for the conjugate gradient method of Bouras, Frayssé, and Giraud used in [A Relaxation Strategy for ...]
Abstract
There is a class of linear problems for which the computation of the matrix-vector product is very expensive, since a time-consuming method is necessary to approximate it with some prescribed relative precision. In this paper we investigate the impact of approximately computed matrix-vector products on the convergence and attainable accuracy of several Krylov subspace solvers. We will argue that the sensitivity towards perturbations is mainly determined by the underlying way the Krylov subspace is constructed and does not depend on the optimality properties of the particular method. The obtained insight is used to tune the precision of the matrix-vector product in every iteration step in such a way that an overall efficient process is obtained. Our analysis confirms the empirically found relaxation strategy of Bouras and Frayssé for the GMRES method proposed ...