Results 1 – 9 of 9
Analysis of Acceleration Strategies for Restarted Minimal Residual Methods
, 2000
Cited by 35 (6 self)
We provide an overview of existing strategies which compensate for the deterioration of convergence of minimum residual (MR) Krylov subspace methods due to restarting. We evaluate the popular practice of using nearly invariant subspaces to either augment Krylov subspaces or to construct preconditioners which invert on these subspaces. In the case where these spaces are exactly invariant, the augmentation approach is shown to be superior. We further show how a strategy recently introduced by de Sturler for truncating the approximation space of an MR method can be interpreted as a controlled loosening of the condition for global MR approximation based on the canonical angles between subspaces. For the special case of Krylov subspace methods, we give a concise derivation of the role of Ritz and harmonic Ritz values and vectors in the polynomial description of Krylov spaces as well as of the use of the implicitly updated Arnoldi method for manipulating Krylov spaces.
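The restarting whose convergence penalty this paper analyzes can be made concrete with a short sketch. Below is a minimal GMRES(m) loop in Python/NumPy, with no augmentation or preconditioning; the function name `gmres_restarted` and its parameters are our own illustrative choices, not code from the paper.

```python
import numpy as np

def gmres_restarted(A, b, x0, m, restarts, tol=1e-10):
    """GMRES(m): run m Arnoldi steps, minimize the residual over the
    resulting Krylov subspace, then restart from the current iterate."""
    x = x0.astype(float)
    n = len(b)
    for _ in range(restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = r / beta
        k = m
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):            # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:           # lucky breakdown
                k = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        # small least-squares problem: min_y || beta*e1 - H y ||
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + V[:, :k] @ y
    return x
```

Each outer pass minimizes the residual over an m-dimensional Krylov subspace built from the current residual and then discards that subspace entirely — exactly the information loss that the augmentation and subspace-preconditioning strategies surveyed here try to compensate for.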
Least squares residuals and minimal residual methods
 SIAM J. Sci. Comput
Cited by 17 (2 self)
Abstract. We study Krylov subspace methods for solving unsymmetric linear algebraic systems that minimize the norm of the residual at each step (minimal residual (MR) methods). MR methods are often formulated in terms of a sequence of least squares (LS) problems of increasing dimension. We present several basic identities and bounds for the LS residual. These results are interesting in the general context of solving LS problems. When applied to MR methods, they show that the size of the MR residual is strongly related to the conditioning of different bases of the same Krylov subspace. Using different bases is useful in theory because relating convergence to the characteristics of different bases offers new insight into the behavior of MR methods. Different bases also lead to different implementations which are mathematically equivalent but can differ numerically. Our theoretical results are used for a finite precision analysis of implementations of the GMRES method [Y. Saad and M. H. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869]. We explain that the choice of the basis is fundamental for the numerical stability of the implementation. As demonstrated in the case of Simpler GMRES [H. F. Walker and L. Zhou, Numer. Linear Algebra Appl., 1 (1994), pp. 571–581], the best orthogonalization technique used for computing the basis does not compensate for the loss of accuracy due to an inappropriate choice of the basis. In particular, we prove that Simpler GMRES is inherently less numerically stable than ...
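The abstract's central point — that different bases of the same Krylov subspace can be conditioned very differently — is easy to see numerically. The sketch below compares the raw power basis of a Krylov subspace with an orthonormalized (QR) basis of the same space; the matrix and right-hand side are arbitrary illustrative choices, not examples from the paper.

```python
import numpy as np

# Two bases of the same Krylov subspace K_8(A, b) = span{b, Ab, ..., A^7 b}:
# the raw power basis versus an orthonormalized basis (as an Arnoldi-type
# process would produce).
A = np.diag(np.arange(1.0, 51.0))   # normal example matrix, eigenvalues 1..50
b = np.ones(50)

K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(8)])
Q, _ = np.linalg.qr(K)              # orthonormal basis of the same subspace

cond_power = np.linalg.cond(K)      # enormous: columns become nearly collinear
cond_orth = np.linalg.cond(Q)       # essentially 1
```

Both matrices span the identical subspace, yet an implementation working with the power basis loses accuracy long before one working with the orthonormal basis does — the phenomenon the finite precision analysis in this paper quantifies.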
The worst-case GMRES for normal matrices
 BIT
Cited by 12 (2 self)
We study the convergence of GMRES for linear algebraic systems with normal matrices. In particular, we explore the standard bound based on a min-max approximation problem on the discrete set of the matrix eigenvalues. This bound is sharp, i.e., it is attainable by the GMRES residual norm. The question is how to evaluate or estimate the standard bound, and whether it is possible to characterize the GMRES-related quantities for which this bound is attained (worst-case GMRES). In this paper we completely characterize the worst-case GMRES-related quantities in the next-to-last iteration step and evaluate the standard bound in terms of explicit polynomials involving the matrix eigenvalues. For a general iteration step, we develop a computable lower and upper bound on the standard bound. Our bounds allow us to study the worst-case GMRES residual norm as a function of the eigenvalue distribution. For Hermitian matrices the lower bound is equal to the worst-case residual norm. In addition, numerical experiments show that the lower bound is generally very tight, and support our conjecture that it is to within a factor of 4/π of the actual worst-case residual norm. Since the worst-case residual norm in each step is to within a factor of the square root of the matrix size of what is considered an “average” residual norm, our results are of relevance beyond the worst case.
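For a normal matrix the standard bound discussed here is the min-max value min over polynomials p of degree at most k with p(0) = 1 of max_i |p(λ_i)|. Any feasible polynomial gives an upper estimate of this value; the sketch below uses a least-squares surrogate to pick one. This is our own illustration of the quantity being bounded, not the computable bounds developed in the paper.

```python
import numpy as np

def minmax_upper_estimate(eigs, k):
    """Upper estimate of min_{deg p <= k, p(0)=1} max_i |p(eig_i)|,
    the standard worst-case GMRES bound for a normal matrix with
    eigenvalues eigs. We take the feasible polynomial minimizing the
    *sum of squares* of |p(eig_i)| (a least-squares surrogate); the max
    of any feasible polynomial over the spectrum bounds the true
    min-max value from above."""
    eigs = np.asarray(eigs, dtype=complex)
    # p(z) = 1 + c_1 z + ... + c_k z^k;  minimize || 1 + V c ||_2
    V = np.vander(eigs, k + 1, increasing=True)[:, 1:]  # columns z, ..., z^k
    c, *_ = np.linalg.lstsq(V, -np.ones(len(eigs), dtype=complex), rcond=None)
    return float(np.max(np.abs(1 + V @ c)))
```

With n distinct eigenvalues the estimate drops to zero at k = n, reflecting that GMRES converges in at most n steps, while for k < n it stays bounded away from zero.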
Faber Polynomials Corresponding to Rational Exterior Mapping Functions
, 1999
"... this paper we consider Faber polynomials for sets ..."
iterative method for solving systems of linear equations:
Abstract. The purpose of this paper is to derive new computable convergence bounds for GMRES. The new bounds depend on the initial guess and are thus conceptually different from standard “worstcase ” bounds. Most importantly, approximations to the new bounds can be computed from information generated during the run of a certain GMRES implementation. The approximations allow predictions of how the algorithm will perform. Heuristics for such predictions are given. Numerical experiments illustrate the behavior of the new bounds as well as the use of the heuristics.
Faber Polynomials Corresponding to Rational Exterior Mapping Functions
 Constructive Approximation, © 2001 Springer-Verlag New York Inc.
Abstract. Faber polynomials corresponding to rational exterior mapping functions of degree (m, m − 1) are studied. It is shown that these polynomials always satisfy an (m + 1)-term recurrence. For the special case m = 2, it is shown that the Faber polynomials can be expressed in terms of the classical Chebyshev polynomials of the first kind. In this case, explicit formulas for the Faber polynomials are derived.
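Since the m = 2 case is expressed through Chebyshev polynomials of the first kind, it may help to recall their own three-term recurrence, T_{k+1}(x) = 2x T_k(x) − T_{k−1}(x). A small pure-Python sketch (helper names are ours, not the paper's):

```python
import math

def chebyshev_T(n):
    """Coefficient lists (constant term first) of T_0, ..., T_n, built
    from the recurrence T_{k+1}(x) = 2x*T_k(x) - T_{k-1}(x)."""
    polys = [[1.0], [0.0, 1.0]]                      # T_0 = 1, T_1 = x
    for k in range(1, n):
        nxt = [0.0] + [2.0 * c for c in polys[k]]    # 2x * T_k
        for i, c in enumerate(polys[k - 1]):         # ... minus T_{k-1}
            nxt[i] -= c
        polys.append(nxt)
    return polys[:n + 1]

def eval_poly(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))
```

The defining identity T_n(cos θ) = cos(nθ) provides a quick sanity check on the generated coefficients.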
Convergence Analysis of Krylov . . .
One of the most powerful tools for solving large and sparse systems of linear algebraic equations is a class of iterative methods called Krylov subspace methods. Their significant advantages, such as low memory requirements and good approximation properties, make them very popular, and they are widely used in applications throughout science and engineering. The use of Krylov subspaces in iterative methods for linear systems is even counted among the “Top 10” algorithmic ideas of the 20th century. Convergence analysis of these methods is not only of great theoretical importance but can also help to answer practically relevant questions about improving their performance. As we show, the question about the convergence behavior leads to complicated nonlinear problems. Despite intense research efforts, these problems are not well understood in some cases. The goal of this survey is to summarize known convergence results for three well-known Krylov subspace methods (CG, MINRES and GMRES) and to formulate open questions in this area.
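Of the three methods this survey treats, CG is the shortest to write down. A minimal NumPy sketch for symmetric positive definite A follows; it is our own illustrative implementation, not code from the survey.

```python
import numpy as np

def conjugate_gradient(A, b, x0, maxiter=100, tol=1e-10):
    """Plain CG for symmetric positive definite A: at step k the iterate
    minimizes the A-norm of the error over x0 + K_k(A, r0)."""
    x = x0.astype(float)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)       # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p   # next A-conjugate search direction
        rs = rs_new
    return x
```

In exact arithmetic CG terminates in at most n steps; the convergence results the survey summarizes describe how fast the residual and error norms decrease long before that.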