Results 1–10 of 17
Recent computational developments in Krylov subspace methods for linear systems
 Numer. Linear Algebra Appl.
, 2007
Abstract

Cited by 73 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
MINRES-QLP: A Krylov subspace method for indefinite or singular symmetric systems
 SIAM J. Sci. Comput., to appear
, 2011
Abstract

Cited by 10 (4 self)
CG, SYMMLQ, and MINRES are Krylov subspace methods for solving symmetric systems of linear equations. When these methods are applied to an incompatible system (that is, a singular symmetric least-squares problem), CG could break down and SYMMLQ’s solution could explode, while MINRES would give a least-squares solution but not necessarily the minimum-length (pseudoinverse) solution. This understanding motivates us to design a MINRES-like algorithm to compute minimum-length solutions to singular symmetric systems. MINRES uses QR factors of the tridiagonal matrix from the Lanczos process (where R is upper-tridiagonal). MINRES-QLP uses a QLP decomposition (where rotations on the right reduce R to lower-tridiagonal form). On ill-conditioned systems (singular or not), MINRES-QLP can give more accurate solutions than MINRES. We derive preconditioned MINRES-QLP, new stopping rules, and better estimates of the solution and residual norms, the matrix norm, and the condition number.
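To illustrate the distinction this abstract draws between a least-squares solution and the minimum-length (pseudoinverse) solution, here is a minimal NumPy sketch; it uses the dense pseudoinverse rather than MINRES-QLP itself, and the tiny matrix and right-hand side are made up for illustration:

```python
import numpy as np

# Tiny singular symmetric system (dense pseudoinverse, not MINRES-QLP itself).
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])              # singular, symmetric
b = np.array([2.0, 3.0])               # incompatible: b is not in range(A)

x_min = np.linalg.pinv(A) @ b          # minimum-length least-squares solution
x_other = x_min + np.array([0.0, 5.0]) # same residual, but larger norm

r_min = np.linalg.norm(A @ x_min - b)
r_other = np.linalg.norm(A @ x_other - b)
```

Both residual norms equal 3 here, so both vectors are least-squares solutions; only `x_min` has minimal norm, which is the solution MINRES-QLP is designed to return.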
Complete Iterative Method for Computing Pseudospectra
, 1997
Abstract

Cited by 2 (0 self)
Efficient codes for computing pseudospectra of large sparse matrices usually use a Lanczos-type method with the shift-and-invert technique and a shift equal to zero. These codes are very efficient for computing pseudospectra on regions where the matrix is non-normal (because ||(A − zI)^{-1}||_2 is large), but they lose their efficiency when they compute pseudospectra on regions where the spectrum of A is not sensitive (||(A − zI)^{-1}||_2 is small). A way to overcome this loss of efficiency using only iterative methods associated with an adaptive shift is proposed.

1 Introduction

The ε-pseudoeigenvalue and ε-pseudospectrum are defined as follows:
- z is an ε-pseudoeigenvalue of A if z is an eigenvalue of A + E with ||E||_2 ≤ ε ||A||_2.
- The ε-pseudospectrum of A is defined by Λ_ε(A) = {z ∈ C : z is an ε-pseudoeigenvalue of A}.

For a fixed ε, the contour of Λ_ε(A) can be defined as {z ∈ C : ||A||_2 ||(A − zI)^{-1}||_2 = ε^{-1}}. The graphical representati...
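The contour characterization above is equivalent to σ_min(A − zI) = ε ||A||_2, which suggests a brute-force check: evaluate the smallest singular value of A − zI on a grid of points z. The sketch below does exactly that with dense SVDs; note the cited paper instead uses Lanczos with shift-and-invert, and the grid and test matrix here are purely illustrative:

```python
import numpy as np

def pseudospectrum_grid(A, xs, ys):
    # Evaluate sigma_min(A - z I) on a grid of points z = x + iy.
    # A point z lies in the (relative) eps-pseudospectrum precisely when
    # sigma_min(A - z I) <= eps * ||A||_2.  Dense-SVD sketch for
    # illustration only; large sparse problems need iterative methods.
    n = A.shape[0]
    I = np.eye(n)
    sig = np.empty((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            z = x + 1j * y
            sig[i, j] = np.linalg.svd(A - z * I, compute_uv=False)[-1]
    return sig

# A Jordan block is markedly non-normal: sigma_min stays small well
# away from its lone eigenvalue 0.
A = np.diag(np.ones(4), k=1)           # 5x5 Jordan block, ||A||_2 = 1
xs = ys = np.linspace(-1.0, 1.0, 21)
sig = pseudospectrum_grid(A, xs, ys)
```

Plotting the level set `sig == eps` then traces the pseudospectrum contour for that ε.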
HOW TO MAKE SIMPLER GMRES AND GCR MORE STABLE
 Vol. 30, No. 4, pp. 1483–1499
, 2008
Abstract

Cited by 2 (0 self)
In this paper we analyze the numerical behavior of several minimum residual methods which are mathematically equivalent to the GMRES method. Two main approaches are compared: one that computes the approximate solution in terms of a Krylov space basis from an upper triangular linear system for the coordinates, and one where the approximate solutions are updated with a simple recursion formula. We show that a different choice of the basis can significantly influence the numerical behavior of the resulting implementation. While Simpler GMRES and ORTHODIR are less stable due to the ill-conditioning of the basis used, the residual basis is well-conditioned as long as we have a reasonable residual norm decrease. These results lead to a new implementation, which is conditionally backward stable, and they explain the experimentally observed fact that the GCR method delivers very accurate approximate solutions when it converges fast enough without stagnation.
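The "simple recursion formula" this abstract refers to can be seen in a bare GCR iteration, sketched below under the assumption of a dense NumPy matrix; restarting, preconditioning, and breakdown handling are all omitted:

```python
import numpy as np

def gcr(A, b, tol=1e-10, maxit=None):
    """Bare GCR sketch: solution and residual are updated by a simple
    recursion, with the vectors A p_k kept mutually orthonormal."""
    n = len(b)
    maxit = maxit or n
    x = np.zeros(n)
    r = b.astype(float).copy()
    P, AP = [], []
    for _ in range(maxit):
        if np.linalg.norm(r) < tol:
            break
        p = r.copy()
        q = A @ p
        for pj, qj in zip(P, AP):      # orthogonalize A p against the
            beta = qj @ q              # previous directions A p_j
            p -= beta * pj
            q -= beta * qj
        nrm = np.linalg.norm(q)
        p /= nrm
        q /= nrm
        alpha = q @ r                  # simple recursive update of the
        x += alpha * p                 # approximate solution and residual
        r -= alpha * q
        P.append(p)
        AP.append(q)
    return x

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])
x = gcr(A, b)
```

The paper's point is that the conditioning of the basis held in `P`/`AP` (here, the residual-derived basis) governs how accurate this recursion is in finite precision.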
Parallel Computational Magneto-Fluid Dynamics
, 1998
Abstract

Cited by 1 (1 self)
This report will focus on the computationally challenging applications that we claimed to tackle at the start of our activities. Various hydrodynamic and magnetohydrodynamic physics issues can now be studied systematically.
PRECONDITIONED RECYCLING KRYLOV SUBSPACE METHODS FOR SELF-ADJOINT PROBLEMS
Abstract

Cited by 1 (0 self)
Abstract. The authors propose a recycling Krylov subspace method for the solution of a sequence of self-adjoint linear systems. Such problems appear, for example, in the Newton process for solving nonlinear equations. Ritz vectors are automatically extracted from one MINRES run and then used for self-adjoint deflation in the next. The method is designed to work with arbitrary inner products and arbitrary self-adjoint positive-definite preconditioners whose inverse can be computed with high accuracy. Numerical experiments with nonlinear Schrödinger equations indicate a substantial decrease in computation time when recycling is used.
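The deflation step described above can be sketched as follows; `deflated_start` is a hypothetical helper name, and the recycled basis `W` is taken to be an exact eigenvector for illustration rather than Ritz vectors extracted from an actual MINRES run:

```python
import numpy as np

def deflated_start(A, b, W):
    # Sketch of the deflation idea (names are illustrative): W holds
    # recycled (approximate) eigenvectors from an earlier solve.  The
    # component of the solution in span(W) is fixed by a Galerkin
    # condition, so the subsequent Krylov iteration only has to resolve
    # the remaining, better-conditioned part of the problem.
    AW = A @ W
    x0 = W @ np.linalg.solve(W.T @ AW, W.T @ b)
    r0 = b - A @ x0          # deflated residual, orthogonal to span(W)
    return x0, r0

A = np.diag([1.0, 2.0, 100.0])     # self-adjoint, one extreme eigenvalue
b = np.array([1.0, 1.0, 1.0])
W = np.eye(3)[:, [2]]              # recycled direction for eigenvalue 100
x0, r0 = deflated_start(A, b, W)
```

Starting MINRES from `x0` with residual `r0` then means the iteration never has to resolve the deflated direction again.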
Numerische Mathematik manuscript No.
Abstract
On nonsymmetric saddle point matrices that allow conjugate gradient iterations
The Jacobi-Davidson algorithm for solving large sparse symmetric eigenvalue problems with application to the design of accelerator cavities
, 1970
Abstract
The Jacobi-Davidson algorithm for solving large sparse symmetric eigenvalue problems with application to the design of accelerator cavities
INEXACT KRYLOV SUBSPACE METHODS FOR LINEAR SYSTEMS ∗
Abstract
Abstract. There is a class of linear problems for which the computation of the matrix-vector product is very expensive, since a time-consuming approximation method is necessary to compute it with some prescribed relative precision. In this paper we investigate the impact of approximately computed matrix-vector products on the convergence and attainable accuracy of several Krylov subspace solvers. We will argue that the success of a relaxation strategy depends on the underlying way the Krylov subspace is constructed and not on the optimality properties of the particular method. The obtained insight is used to tune the precision of the matrix-vector product in every iteration step in such a way that an overall efficient process is obtained. Our analysis confirms the empirically found relaxation strategy of Bouras and Frayssé for the GMRES method proposed in [2]. Furthermore, we give an improved version of a strategy of Bouras, Frayssé, and Giraud [3] for the Conjugate Gradient method.