@MISC{Botchev06numericallinear, author = {Mike Botchev}, title = {"Numerical Linear Algebra" course}, year = {2006} }


Abstract

1 Work and memory requirements in GMRES

At every step of GMRES, k+1 inner products and k+1 vector updates are carried out; this amounts to computational work of O((k+1)n) flops at iteration k. The storage requirement of GMRES is also O((k+1)n) floating-point memory places, which are needed for the matrix Vk+1. (Why do we not take into account the work for solving the (k+1) × k least-squares problem?) Similar estimates for the computational work and storage hold for FOM. Since the work and storage requirements grow with the iteration number k, GMRES and FOM are "inefficient" methods.

2 GMRES(M): restarted GMRES

One way to keep the work and storage requirements of GMRES bounded is to restart the method after, say, M iterations, taking the obtained approximation xM as the new initial guess: x0 := xM. This is known as the GMRES(M) method (see [3], page 73, and [1], Section 2.3.4, page 19). Since GMRES minimizes the residual norm, the residual norms (the Euclidean norms) produced by GMRES form a nonincreasing sequence: ‖r0‖2 ≥ ‖r1‖2 ≥ ‖r2‖2 ≥ .... This nice property is lost in GMRES(M), which is guaranteed to converge only for some
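The restart idea above can be sketched in code. The following is a minimal illustrative implementation (not the course's reference code): each outer pass runs at most M Arnoldi steps with modified Gram-Schmidt (note the k+1 inner products and k+1 vector updates at step k), solves the small (k+1) × k least-squares problem, and restarts from the updated approximation. The function name `gmres_restarted` and all parameter defaults are assumptions for this sketch.

```python
import numpy as np

def gmres_restarted(A, b, x0, M=5, tol=1e-8, max_restarts=50):
    """Sketch of GMRES(M): restart plain GMRES every M iterations,
    so storage stays bounded by the n x (M+1) basis matrix V."""
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        V = np.zeros((n, M + 1))      # orthonormal Krylov basis V_{k+1}
        H = np.zeros((M + 1, M))      # upper Hessenberg matrix
        V[:, 0] = r / beta
        k_used = M
        for k in range(M):
            w = A @ V[:, k]
            # modified Gram-Schmidt: k+1 inner products, k+1 vector updates,
            # i.e. O((k+1)n) flops at iteration k
            for i in range(k + 1):
                H[i, k] = V[:, i] @ w
                w -= H[i, k] * V[:, i]
            H[k + 1, k] = np.linalg.norm(w)
            if H[k + 1, k] < 1e-14:   # "lucky" breakdown: exact solution found
                k_used = k + 1
                break
            V[:, k + 1] = w / H[k + 1, k]
        # small (k+1) x k least-squares problem: min_y || beta*e1 - H y ||_2;
        # its cost does not depend on n
        e1 = np.zeros(k_used + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k_used + 1, :k_used], e1, rcond=None)
        x = x + V[:, :k_used] @ y     # restart from the new approximation
    return x
```

Note that within one restart cycle the residual norms are still nonincreasing, but across restarts the global minimization property of full GMRES is lost, which is exactly why GMRES(M) may stagnate on some problems.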