Results 1–8 of 8
Recent computational developments in Krylov subspace methods for linear systems
 Numer. Linear Algebra Appl., 2007
"... Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are metho ..."
Abstract

Cited by 48 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
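Restarting is the most widely deployed of the developments this survey reviews. A minimal sketch of restarted GMRES using SciPy's built-in solver, where the `restart` parameter bounds the stored Krylov basis (the test matrix here is an arbitrary diagonally dominant example, not one from the survey):

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)
n = 200
# Diagonally dominant nonsymmetric test matrix, so restarted GMRES converges.
A = np.eye(n) * 5.0 + rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

# restart=20 caps the Krylov basis at 20 vectors; once full, the method
# restarts from the current iterate (this is GMRES(20)).
x, info = gmres(A, b, restart=20, maxiter=1000)
assert info == 0  # 0 means the requested tolerance was reached
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

Restarting trades the optimality of full GMRES for bounded memory; the survey's augmented and deflated variants aim to recover some of the lost convergence speed.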
MINRES-QLP: A Krylov subspace method for indefinite or singular symmetric systems, SIAM J. Sci. Comput., to appear
"... Abstract. CG, SYMMLQ, and MINRES are Krylov subspace methods for solving symmetric systems of linear equations. When these methods are applied to an incompatible system (that is, a singular symmetric leastsquares problem), CG could break down and SYMMLQ’s solution could explode, while MINRES would ..."
Abstract

Cited by 3 (2 self)
CG, SYMMLQ, and MINRES are Krylov subspace methods for solving symmetric systems of linear equations. When these methods are applied to an incompatible system (that is, a singular symmetric least-squares problem), CG could break down and SYMMLQ’s solution could explode, while MINRES would give a least-squares solution but not necessarily the minimum-length (pseudoinverse) solution. This understanding motivates us to design a MINRES-like algorithm to compute minimum-length solutions to singular symmetric systems. MINRES uses QR factors of the tridiagonal matrix from the Lanczos process (where R is upper-tridiagonal). MINRES-QLP uses a QLP decomposition (where rotations on the right reduce R to lower-tridiagonal form). On ill-conditioned systems (singular or not), MINRES-QLP can give more accurate solutions than MINRES. We derive preconditioned MINRES-QLP, new stopping rules, and better estimates of the solution and residual norms, the matrix norm, and the condition number.
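The distinction the abstract draws can be seen on a tiny singular example: the minimum-length least-squares solution (the target of MINRES-QLP) is the pseudoinverse solution, characterized by being orthogonal to the null space of A. The sketch below computes it directly with NumPy's `pinv` rather than with MINRES-QLP itself, and the matrix is an illustrative graph Laplacian, not one of the paper's test problems:

```python
import numpy as np

# Singular symmetric matrix: rank 2, null space spanned by (1, 1, 1)/sqrt(3).
A = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
b = np.array([1., 0., 0.])   # b has a component outside range(A): incompatible

# Minimum-length least-squares solution x = pinv(A) @ b.
x = np.linalg.pinv(A) @ b

# It satisfies the normal equations A^T A x = A^T b ...
assert np.allclose(A.T @ A @ x, A.T @ b)
# ... and is orthogonal to null(A), which is what makes it minimum-length.
null_vec = np.ones(3) / np.sqrt(3)
assert abs(null_vec @ x) < 1e-12
```

Any least-squares solution plus a null-space component still satisfies the normal equations but has a larger norm; a plain least-squares solver may return such a solution, which is the gap MINRES-QLP closes for MINRES.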
Complete Iterative Method for Computing Pseudospectra
, 1997
"... Efficient codes for computing pseudospectra of large sparse matrices usually use a Lanczos type method with the shift and invert technique and a shift equal to zero. Then, these codes are very efficient for computing pseudospectra on regions where the matrix is nonnormal (because k(A \Gamma zI) \G ..."
Abstract

Cited by 2 (0 self)
Efficient codes for computing pseudospectra of large sparse matrices usually use a Lanczos-type method with the shift-and-invert technique and a shift equal to zero. These codes are then very efficient for computing pseudospectra on regions where the matrix is non-normal (because ‖(A − zI)⁻¹‖₂ is large), but they lose their efficiency when they compute pseudospectra on regions where the spectrum of A is not sensitive (‖(A − zI)⁻¹‖₂ is small). A way to overcome this loss of efficiency using only iterative methods associated with an adaptive shift is proposed.
1 Introduction
The ε-pseudoeigenvalue and ε-pseudospectrum are defined as follows:
• λ is an ε-pseudoeigenvalue of A if λ is an eigenvalue of A + E with ‖E‖₂ ≤ ε‖A‖₂.
• The ε-pseudospectrum of A is defined by Λ_ε(A) = {z ∈ ℂ : z is an ε-pseudoeigenvalue of A}.
For a fixed ε, the contour of Λ_ε(A) can be defined as {z ∈ ℂ : ‖A‖₂ ‖(A − zI)⁻¹‖₂ = ε⁻¹}. The graphical representati...
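The contour characterization admits a direct dense check, useful for small matrices: since ‖(A − zI)⁻¹‖₂ = 1/σ_min(A − zI), a point z lies in the ε-pseudospectrum exactly when σ_min(A − zI) ≤ ε‖A‖₂. A sketch of this pointwise test (not the paper's adaptive-shift iterative method, which targets large sparse matrices):

```python
import numpy as np

def in_pseudospectrum(A, z, eps):
    """z is in the eps-pseudospectrum of A iff sigma_min(A - z I) <= eps * ||A||_2."""
    n = A.shape[0]
    smin = np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]
    return smin <= eps * np.linalg.norm(A, 2)

# Non-normal example: a nilpotent shift matrix (all eigenvalues 0) has a
# pseudospectrum that extends far beyond its spectrum.
A = np.diag(np.ones(9), k=1)   # 10x10, superdiagonal of ones
print(in_pseudospectrum(A, 0.5 + 0.0j, 1e-2))   # well inside, despite |z| >> 0
print(in_pseudospectrum(A, 3.0 + 0.0j, 1e-2))   # far enough away to be outside
```

Scanning this test over a grid of z values and contouring σ_min is the standard dense way to draw pseudospectra; the paper's contribution is doing the σ_min estimation iteratively with a well-chosen shift.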
Parallel Computational MagnetoFluid Dynamics
, 1998
"... this report will be on the computationally challenging applications that we claimed to tackle at the start of our activities. Various hydrodynamic and magnetohydrodynamic physics issues can now be studied systematically. iii iv 1 Update on the Cluster Project ..."
Abstract

Cited by 1 (1 self)
this report will be on the computationally challenging applications that we claimed to tackle at the start of our activities. Various hydrodynamic and magnetohydrodynamic physics issues can now be studied systematically.
How to Make Simpler GMRES and GCR More Stable
 Vol. 30, No. 4, pp. 1483–1499, 2008
"... In this paper we analyze the numerical behavior of several minimum residual methods which are mathematically equivalent to the GMRES method. Two main approaches are compared: one that computes the approximate solution in terms of a Krylov space basis from an upper triangular linear system for the co ..."
Abstract

Cited by 1 (0 self)
In this paper we analyze the numerical behavior of several minimum residual methods which are mathematically equivalent to the GMRES method. Two main approaches are compared: one that computes the approximate solution in terms of a Krylov space basis from an upper triangular linear system for the coordinates, and one where the approximate solutions are updated with a simple recursion formula. We show that a different choice of the basis can significantly influence the numerical behavior of the resulting implementation. While Simpler GMRES and ORTHODIR are less stable due to the ill-conditioning of the basis used, the residual basis is well-conditioned as long as we have a reasonable residual norm decrease. These results lead to a new implementation, which is conditionally backward stable, and they explain the experimentally observed fact that the GCR method delivers very accurate approximate solutions when it converges fast enough without stagnation.
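A minimal GCR implementation makes the "simple recursion formula" concrete: the search directions are built from the residuals (the well-conditioned basis the paper analyzes), the vectors A·p are kept orthonormal, and the iterate is updated directly instead of being recovered from a triangular solve. This is a plain sketch, not the paper's stabilized variant:

```python
import numpy as np

def gcr(A, b, tol=1e-10, maxiter=None):
    """Generalized Conjugate Residual: minimizes ||b - A x|| over the Krylov
    space, updating x by a simple recursion (mathematically equivalent to GMRES)."""
    n = len(b)
    maxiter = maxiter or n
    x = np.zeros(n)
    r = b.copy()
    P, Q = [], []                      # search directions p_k and q_k = A p_k
    for _ in range(maxiter):
        p = r.copy()                   # new direction starts from the residual
        q = A @ p
        for pi, qi in zip(P, Q):       # orthogonalize q against previous q_i
            beta = qi @ q
            p -= beta * pi
            q -= beta * qi
        nq = np.linalg.norm(q)
        if nq == 0:
            break
        p /= nq
        q /= nq                        # the q_k stay orthonormal
        P.append(p)
        Q.append(q)
        alpha = q @ r                  # residual-minimizing step length
        x += alpha * p
        r -= alpha * q                 # the recursive residual update
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x

rng = np.random.default_rng(1)
A = np.eye(50) * 4 + rng.standard_normal((50, 50)) / np.sqrt(50)
b = rng.standard_normal(50)
x = gcr(A, b)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

Because x and r are updated recursively, the computed residual can drift from the true residual b − Ax in finite precision; quantifying that drift for the different basis choices is exactly what the paper does.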
A block preconditioning cost analysis for . . .
, 2005
"... We investigate the cost of preconditioning when solving large sparse saddlepoint linear systems with Krylov subspace methods. To use the block structure of the original matrix, we apply one of two block preconditioners. Algebraic eigenvalue analysis is given for a particular case of the precondition ..."
Abstract
We investigate the cost of preconditioning when solving large sparse saddlepoint linear systems with Krylov subspace methods. To use the block structure of the original matrix, we apply one of two block preconditioners. Algebraic eigenvalue analysis is given for a particular case of the preconditioners. We also give eigenvalue bounds for the preconditioned matrix when the preconditioner is block diagonal and positive definite. We treat linear solves involving the preconditioner as a subproblem which we solve iteratively. In order to minimize cost, we implement a fixed inner tolerance and a varying inner tolerance based on bounds developed by Simoncini and Szyld (2003) and van den Eshof, Sleijpen, and van Gijzen (2005). Numerical experiments compare the cost of preconditioning for various iterative solvers and block preconditioners. We also experiment with different tolerances for the iterative solution of linear solves involving the preconditioner.
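The kind of algebraic eigenvalue analysis mentioned above has a well-known sharp instance (due to Murphy, Golub, and Wathen): with the ideal block-diagonal preconditioner P = diag(K, B K⁻¹ Bᵀ) applied to the saddle-point matrix [[K, Bᵀ], [B, 0]], the preconditioned matrix has exactly three eigenvalues, 1 and (1 ± √5)/2. A small numerical check on a randomly generated saddle-point system (not the paper's test problems, which use inexact, iteratively applied preconditioners):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 3
G = rng.standard_normal((n, n))
K = G @ G.T + n * np.eye(n)            # SPD (1,1) block
B = rng.standard_normal((m, n))        # full-rank constraint block

# Saddle-point matrix  M = [[K, B^T], [B, 0]].
M = np.block([[K, B.T], [B, np.zeros((m, m))]])

# Ideal block-diagonal preconditioner: P = diag(K, S), S = B K^{-1} B^T (SPD).
S = B @ np.linalg.solve(K, B.T)
P = np.block([[K, np.zeros((n, m))], [np.zeros((m, n)), S]])

eigs = np.linalg.eigvals(np.linalg.solve(P, M))
golden = (1 + np.sqrt(5)) / 2
expected = (1.0, golden, 1 - golden)
# Every eigenvalue matches one of the three predicted values.
for lam in eigs:
    assert min(abs(lam - t) for t in expected) < 1e-6
```

Three distinct eigenvalues means an outer Krylov method would converge in three iterations with the exact preconditioner; the cost question the paper studies is how cheaply S-solves and K-solves can be approximated (with fixed or varying inner tolerances) before that behavior degrades.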
On nonsymmetric saddle point matrices that allow conjugate gradient iterations
 Numerische Mathematik