Results 1–10 of 20
Recent computational developments in Krylov subspace methods for linear systems
Numer. Linear Algebra Appl., 2007
Cited by 50 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
Least squares residuals and minimal residual methods
SIAM J. Sci. Comput.
Cited by 14 (2 self)
We study Krylov subspace methods for solving unsymmetric linear algebraic systems that minimize the norm of the residual at each step (minimal residual (MR) methods). MR methods are often formulated in terms of a sequence of least squares (LS) problems of increasing dimension. We present several basic identities and bounds for the LS residual. These results are interesting in the general context of solving LS problems. When applied to MR methods, they show that the size of the MR residual is strongly related to the conditioning of different bases of the same Krylov subspace. Using different bases is useful in theory because relating convergence to the characteristics of different bases offers new insight into the behavior of MR methods. Different bases also lead to different implementations which are mathematically equivalent but can differ numerically. Our theoretical results are used for a finite precision analysis of implementations of the GMRES method [Y. Saad and M. H. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869]. We explain that the choice of the basis is fundamental for the numerical stability of the implementation. As demonstrated in the case of Simpler GMRES [H. F. Walker and L. Zhou, Numer. Linear Algebra Appl., 1 (1994), pp. 571–581], the best orthogonalization technique used for computing the basis does not compensate for the loss of accuracy due to an inappropriate choice of the basis. In particular, we prove that Simpler GMRES is inherently less numerically stable than …
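The least-squares formulation described in the abstract above can be illustrated with a minimal sketch (not from the paper): build the raw power-basis Krylov matrix and solve the growing LS problems directly. The matrix, sizes, and seed are illustrative assumptions; practical MR implementations use an orthonormal (Arnoldi) basis precisely because of the basis-conditioning issues the paper studies.

```python
import numpy as np

# A minimal-residual (MR) method realized as a sequence of least-squares
# problems of increasing dimension, using the raw power basis
# K_k = [b, Ab, ..., A^{k-1} b].  Illustrative data only.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + 3.0 * np.eye(n)  # shift keeps A well away from singular
b = rng.standard_normal(n)

res_norms = []
K = b.reshape(-1, 1)                                # Krylov matrix, one column so far
for k in range(1, n + 1):
    y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)   # solve min_y ||b - A K_k y||_2
    res_norms.append(np.linalg.norm(b - A @ K @ y))
    if k < n:
        K = np.hstack([K, (A @ K[:, -1]).reshape(-1, 1)])  # append A^k b

print(res_norms)   # nonincreasing; essentially zero once K_n spans R^n
```

The residual norms are nonincreasing because the search spaces are nested; for this small well-conditioned example the ill-conditioning of the power basis does no visible harm, which is exactly what fails at larger dimensions.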
Convergence of GMRES for tridiagonal Toeplitz matrices
SIAM J. Matrix Anal. Appl.
Cited by 10 (4 self)
… Statist. Comput., 7 (1986), pp. 856–869], when the method is applied to tridiagonal Toeplitz matrices. We first derive formulas for the residuals as well as their norms when GMRES is applied to scaled Jordan blocks. This problem has been studied previously by Ipsen [BIT, 40 (2000), pp. 524–535] and Eiermann and Ernst [private communication, 2002], but we formulate and prove our results in a different way. We then extend the (lower) bidiagonal Jordan blocks to tridiagonal Toeplitz matrices and study extensions of our bidiagonal analysis to the tridiagonal case. Intuitively, when a scaled Jordan block is extended to a tridiagonal Toeplitz matrix by a superdiagonal of small modulus (compared to the modulus of the subdiagonal), the GMRES residual norms for both matrices and the same initial residual should be close to each other. We confirm and quantify this intuitive statement. We also demonstrate principal difficulties of any GMRES convergence analysis which is based on eigenvector expansion of the initial residual when the eigenvector matrix is ill-conditioned. Such analyses are complicated by a cancellation of possibly huge components due to close eigenvectors, which can prevent achieving well-justified conclusions. Key words: Krylov subspace methods, GMRES, minimal residual methods, convergence analysis, …
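The intuition quantified in the abstract above, that adding a superdiagonal of small modulus to a scaled Jordan block barely changes the GMRES residual history, can be checked with a toy numerical sketch. The parameters, sizes, and the dense least-squares "GMRES" below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def gmres_res_norms(A, b):
    """GMRES residual norms via dense least squares on the raw Krylov
    matrix; fine for tiny illustrative problems, not a real solver."""
    n = len(b)
    K = b.reshape(-1, 1)
    out = []
    for k in range(1, n + 1):
        y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
        out.append(np.linalg.norm(b - A @ K @ y))
        if k < n:
            K = np.hstack([K, (A @ K[:, -1]).reshape(-1, 1)])
    return np.array(out)

# Illustrative parameters: eigenvalue 2, subdiagonal 1, small superdiagonal eps.
n, lam, mu, eps = 6, 2.0, 1.0, 1e-4
J = lam * np.eye(n) + mu * np.eye(n, k=-1)   # scaled (lower) Jordan block
T = J + eps * np.eye(n, k=1)                 # tridiagonal Toeplitz extension
b = np.ones(n) / np.sqrt(n)                  # same initial residual for both

rJ, rT = gmres_res_norms(J, b), gmres_res_norms(T, b)
print(float(np.max(np.abs(rJ - rT))))        # small: the two histories stay close
```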
The worst-case GMRES for normal matrices
 BIT
Cited by 10 (2 self)
We study the convergence of GMRES for linear algebraic systems with normal matrices. In particular, we explore the standard bound based on a min-max approximation problem on the discrete set of the matrix eigenvalues. This bound is sharp, i.e. it is attainable by the GMRES residual norm. The question is how to evaluate or estimate the standard bound, and if it is possible to characterize the GMRES-related quantities for which this bound is attained (worst-case GMRES). In this paper we completely characterize the worst-case GMRES-related quantities in the next-to-last iteration step and evaluate the standard bound in terms of explicit polynomials involving the matrix eigenvalues. For a general iteration step, we develop computable lower and upper bounds on the standard bound. Our bounds allow us to study the worst-case GMRES residual norm as a function of the eigenvalue distribution. For Hermitian matrices the lower bound is equal to the worst-case residual norm. In addition, numerical experiments show that the lower bound is generally very tight, and support our conjecture that it is to within a factor of 4/π of the actual worst-case residual norm. Since the worst-case residual norm in each step is to within a factor of the square root of the matrix size of what is considered an "average" residual norm, our results are of relevance beyond the worst case.
How Descriptive Are GMRES Convergence Bounds?
Oxford University Computing Laboratory, 1999
Cited by 7 (1 self)
Eigenvalues with the eigenvector condition number, the field of values, and pseudospectra have all been suggested as the basis for convergence bounds for minimum residual Krylov subspace methods applied to nonnormal coefficient matrices. This paper analyzes and compares these bounds, illustrating with six examples the success and failure of each one. Refined bounds based on eigenvalues and the field of values are suggested to handle low-dimensional nonnormality. It is observed that pseudospectral bounds can capture multiple convergence stages. Unfortunately, computation of pseudospectra can be rather expensive. This motivates an adaptive technique for estimating GMRES convergence based on approximate pseudospectra taken from the Arnoldi process that is the basis for GMRES. Key words: Krylov subspace methods, GMRES convergence, nonnormal matrices, pseudospectra, field of values. AMS subject classifications: 15A06, 65F10, 15A18, 15A60, 31A15.
ON MEINARDUS' EXAMPLES FOR THE CONJUGATE GRADIENT METHOD
Cited by 4 (4 self)
The conjugate gradient (CG) method is widely used to solve a positive definite linear system Ax = b of order N. It is well known that the relative residual of the kth approximate solution by CG (with the initial approximation x0 = 0) is bounded above by 2(∆κ^k + ∆κ^(−k))^(−1) with ∆κ = (√κ + 1)/(√κ − 1), where κ ≡ κ(A) = ‖A‖2 ‖A^(−1)‖2 is A's spectral condition number. In 1963, Meinardus (Numer. Math., 5 (1963), pp. 14–23) gave an example that achieves this bound for k = N − 1, but without saying anything about all other 1 ≤ k < N − 1. This very example can be used to show that the bound is sharp for any given k by constructing examples to attain the bound, but such examples depend on k and for them the (k + 1)th residual is exactly zero. Therefore it would be interesting to know if there is any example on which the CG relative residuals are comparable to the bound for all 1 ≤ k ≤ N − 1. There are two contributions in this paper: (1) a closed formula for the CG residuals for all 1 ≤ k ≤ N − 1 on Meinardus' example is obtained, and in particular it implies that the bound is always within a factor of √2 of the actual residuals; (2) a complete characterization of extreme positive linear systems for which the kth CG residual achieves the bound is also presented.
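The bound quoted in the abstract above is easy to evaluate against an actual CG run. The sketch below (diagonal test matrix, sizes, and seed are illustrative assumptions, not the paper's example) measures the error in the A-norm, the setting in which the Chebyshev-type bound 2(∆κ^k + ∆κ^(−k))^(−1) is classically stated:

```python
import numpy as np

# Check numerically that the k-th CG error satisfies
#   ||e_k||_A / ||e_0||_A  <=  2 (Delta^k + Delta^{-k})^{-1},
# Delta = (sqrt(kappa)+1)/(sqrt(kappa)-1); this sharper form lies below the
# familiar 2 ((sqrt(kappa)-1)/(sqrt(kappa)+1))^k.  Illustrative data only.
rng = np.random.default_rng(1)
d = np.linspace(1.0, 100.0, 40)          # eigenvalues of a diagonal SPD matrix
A = np.diag(d)
b = rng.standard_normal(len(d))
x_exact = b / d
kappa = d[-1] / d[0]
Delta = (np.sqrt(kappa) + 1) / (np.sqrt(kappa) - 1)

def a_norm(v):
    return np.sqrt(v @ (A @ v))

# plain conjugate gradients, tracking the relative A-norm of the error
x = np.zeros_like(b); r = b.copy(); p = r.copy()
errs, bounds = [], []
for k in range(1, 16):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
    errs.append(a_norm(x_exact - x) / a_norm(x_exact))
    bounds.append(2.0 / (Delta**k + Delta**(-k)))

print(all(e <= bb for e, bb in zip(errs, bounds)))   # True: every CG error sits below the bound
```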
Spectral Factorization of the Krylov Matrix and Convergence of GMRES
Cited by 4 (2 self)
Is it possible to use eigenvalues and eigenvectors to establish accurate results on GMRES performance? Existing convergence bounds, which are extensions of the analysis of Hermitian solvers like CG and MINRES, provide no useful information when the coefficient matrix is almost defective. In this paper we propose a new framework for using spectral information for convergence analysis. It is based on what we call the spectral factorization of the Krylov matrix. Using the new apparatus, we prove that two related matrices are equivalent in terms of GMRES convergence, and derive necessary conditions for the worst-case right-hand side vector. We also show that for a specific family of application problems, the worst-case vector has a compact form. In addition, we present numerical data showing that two matrices that yield the same worst-case GMRES behavior may differ significantly in their average behavior.
Complete stagnation of GMRES
2003
Cited by 3 (0 self)
We study problems for which the iterative method GMRES for solving linear systems of equations makes no progress in its initial iterations. Our tool for analysis is a nonlinear system of equations, the stagnation system, that characterizes this behavior. We focus on complete stagnation, for which there is no progress until the last iteration. We give necessary and sufficient conditions for complete stagnation of systems involving unitary matrices, and show that if a normal matrix completely stagnates then so does an entire family of nonnormal matrices with the same eigenvalues. Finally, we show that there are real matrices for which complete stagnation occurs for certain complex right-hand sides but not for real ones.
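A standard toy instance of complete stagnation (used here for illustration; the paper's characterization of unitary matrices is more general) is the cyclic shift matrix with right-hand side e1: every Krylov vector is a standard basis vector orthogonal to b until the space closes up, so GMRES makes no progress at all before the final step.

```python
import numpy as np

n = 5
A = np.roll(np.eye(n), 1, axis=0)   # cyclic shift: A e_j = e_{j+1 (mod n)}, a unitary matrix
b = np.zeros(n); b[0] = 1.0         # right-hand side e_1

# GMRES residual norms via least squares on the Krylov matrix (toy-sized only)
res_norms = []
K = b.reshape(-1, 1)
for k in range(1, n + 1):
    y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)   # min_y ||b - A K_k y||_2
    res_norms.append(np.linalg.norm(b - A @ K @ y))
    if k < n:
        K = np.hstack([K, (A @ K[:, -1]).reshape(-1, 1)])

print(np.round(res_norms, 12))   # [1. 1. 1. 1. 0.]: complete stagnation
```

Here A K_k spans {e_2, ..., e_{k+1}}, which stays orthogonal to b = e_1 for every k < n; only at k = n does the span include e_1 and the residual drop to zero.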
Some theoretical results derived from polynomial numerical hulls of Jordan blocks
Electron. Trans. Numer. Anal.
Cited by 3 (0 self)
The polynomial numerical hull of degree k for a square matrix A is a set in the complex plane designed to give useful information about the norms of functions of the matrix; it is defined as the set of points z such that |p(z)| ≤ ‖p(A)‖ for all polynomials p of degree k or less. In a previous paper [V. Faber, A. Greenbaum, and D. Marshall, The polynomial numerical hulls of Jordan blocks and related matrices, Linear Algebra Appl., 374 (2003), pp. 231–246] analytic expressions were derived for the polynomial numerical hulls of Jordan blocks. In this paper, we explore some consequences of these results. We derive lower bounds on the norms of functions of Jordan blocks and triangular Toeplitz matrices that approach equalities as the matrix size approaches infinity. We demonstrate that even for moderate size matrices these bounds give fairly good estimates of the behavior of matrix powers, the matrix exponential, and the resolvent norm. We give new estimates of the convergence rate of the GMRES algorithm applied to a Jordan block. We also derive a new estimate for the field of values of a general Toeplitz matrix. Key words: polynomial numerical hull, field of values, Toeplitz matrix. AMS subject classifications: 15A60, 65F15, 65F35.
Stagnation of GMRES
Cited by 3 (2 self)
We study problems for which the iterative method GMRES for solving linear systems of equations makes no progress in its initial iterations. Our tool for analysis is a nonlinear system of equations, the stagnation system, that characterizes this behavior. For problems of dimension 2 we can solve this system explicitly, determining that every choice of eigenvalues leads to a stagnating problem for eigenvector matrices that are sufficiently poorly conditioned. We partially extend this result to higher dimensions for a class of eigenvector matrices called extreme. We give necessary and sufficient conditions for stagnation of systems involving unitary matrices, and show that if a normal matrix stagnates then so does an entire family of nonnormal matrices with the same eigenvalues. Finally, we show that there are real matrices for which stagnation occurs for certain complex right-hand sides but not for real ones.