Results 1 – 9 of 9
Least squares residuals and minimal residual methods
 SIAM J. Sci. Comput
Abstract

Cited by 14 (2 self)
Abstract. We study Krylov subspace methods for solving unsymmetric linear algebraic systems that minimize the norm of the residual at each step (minimal residual (MR) methods). MR methods are often formulated in terms of a sequence of least squares (LS) problems of increasing dimension. We present several basic identities and bounds for the LS residual. These results are interesting in the general context of solving LS problems. When applied to MR methods, they show that the size of the MR residual is strongly related to the conditioning of different bases of the same Krylov subspace. Using different bases is useful in theory because relating convergence to the characteristics of different bases offers new insight into the behavior of MR methods. Different bases also lead to different implementations which are mathematically equivalent but can differ numerically. Our theoretical results are used for a finite precision analysis of implementations of the GMRES method [Y. Saad and M. H. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869]. We explain that the choice of the basis is fundamental for the numerical stability of the implementation. As demonstrated in the case of Simpler GMRES [H. F. Walker and L. Zhou, Numer. Linear Algebra Appl., 1 (1994), pp. 571–581], the best orthogonalization technique used for computing the basis does not compensate for the loss of accuracy due to an inappropriate choice of the basis. In particular, we prove that Simpler GMRES is inherently less numerically stable than ...
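The MR/LS connection described in this abstract can be illustrated with a short numpy sketch (illustrative, not the authors' code): each GMRES step extends an orthonormal Krylov basis via the Arnoldi process and solves a small least squares problem, and the resulting residual norms are non-increasing. Matrix sizes and entries below are arbitrary choices.

```python
import numpy as np

def gmres_residual_norms(A, b, m):
    """Run m GMRES steps (Arnoldi basis + small least squares problem)
    starting from the zero initial guess; return ||b - A x_k||, k = 1..m."""
    n = len(b)
    beta = np.linalg.norm(b)
    Q = np.zeros((n, m + 1))      # orthonormal basis of the Krylov subspace
    H = np.zeros((m + 1, m))      # upper Hessenberg matrix from Arnoldi
    Q[:, 0] = b / beta
    norms = []
    for k in range(m):
        w = A @ Q[:, k]
        for j in range(k + 1):    # modified Gram-Schmidt orthogonalization
            H[j, k] = Q[:, j] @ w
            w = w - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] > 1e-14:   # (lucky) breakdown check
            Q[:, k + 1] = w / H[k + 1, k]
        # LS problem of increasing dimension:  min_y || beta*e1 - H_k y ||
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        norms.append(np.linalg.norm(e1 - H[:k + 2, :k + 1] @ y))
    return norms

rng = np.random.default_rng(0)
A = np.eye(20) + 0.1 * rng.standard_normal((20, 20))   # unsymmetric test matrix
b = rng.standard_normal(20)
norms = gmres_residual_norms(A, b, 10)
print(norms)   # non-increasing: the minimal residual property
```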
Convergence of GMRES for tridiagonal Toeplitz matrices
 SIAM J. Matrix Anal. Appl
Abstract

Cited by 10 (4 self)
... Statist. Comput., 7 (1986), pp. 856–869], when the method is applied to tridiagonal Toeplitz matrices. We first derive formulas for the residuals as well as their norms when GMRES is applied to scaled Jordan blocks. This problem has been studied previously by Ipsen [BIT, 40 (2000), pp. 524–535] and Eiermann and Ernst [private communication, 2002], but we formulate and prove our results in a different way. We then extend the (lower) bidiagonal Jordan blocks to tridiagonal Toeplitz matrices and study extensions of our bidiagonal analysis to the tridiagonal case. Intuitively, when a scaled Jordan block is extended to a tridiagonal Toeplitz matrix by a superdiagonal of small modulus (compared to the modulus of the subdiagonal), the GMRES residual norms for both matrices and the same initial residual should be close to each other. We confirm and quantify this intuitive statement. We also demonstrate principal difficulties of any GMRES convergence analysis which is based on eigenvector expansion of the initial residual when the eigenvector matrix is ill-conditioned. Such analyses are complicated by a cancellation of possibly huge components due to close eigenvectors, which can prevent achieving well-justified conclusions. Key words. Krylov subspace methods, GMRES, minimal residual methods, convergence analysis, ...
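The intuitive statement quantified in this abstract is easy to reproduce numerically. The sketch below (illustrative, not from the paper) computes GMRES residual norms through the equivalent least squares problems min_y ||b - A K_k y|| over the Krylov matrix K_k, once for a scaled lower Jordan block and once for the tridiagonal Toeplitz matrix obtained by adding a small superdiagonal; the values lam = 2, mu = 1, eps = 1e-3 and the size are arbitrary.

```python
import numpy as np

def gmres_norms(A, b, m):
    """GMRES residual norms via the equivalent least squares problems
    min_y || b - A K_k y ||,  K_k = [b, Ab, ..., A^(k-1) b]."""
    K = np.empty((len(b), m))
    K[:, 0] = b
    for k in range(1, m):
        K[:, k] = A @ K[:, k - 1]
    norms = []
    for k in range(1, m + 1):
        y, *_ = np.linalg.lstsq(A @ K[:, :k], b, rcond=None)
        norms.append(np.linalg.norm(b - A @ K[:, :k] @ y))
    return norms

n = 30
lam, mu, eps = 2.0, 1.0, 1e-3                # illustrative values
J = lam * np.eye(n) + mu * np.eye(n, k=-1)   # scaled (lower) Jordan block
T = J + eps * np.eye(n, k=1)                 # tridiagonal Toeplitz extension
b = np.ones(n) / np.sqrt(n)                  # same initial residual for both
rJ = gmres_norms(J, b, 10)
rT = gmres_norms(T, b, 10)
print(max(abs(x - y) for x, y in zip(rJ, rT)))   # small when eps is small
```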
MODIFIED GRAM–SCHMIDT (MGS), LEAST SQUARES, AND BACKWARD STABILITY OF MGS-GMRES
, 2006
Abstract

Cited by 9 (1 self)
The generalized minimum residual method (GMRES) [Y. Saad and M. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869] for solving linear systems Ax = b is implemented as a sequence of least squares problems involving Krylov subspaces of increasing dimensions. The most usual implementation is modified Gram–Schmidt GMRES (MGS-GMRES). Here we show that MGS-GMRES is backward stable. The result depends on a more general result on the backward stability of a variant of the MGS algorithm applied to solving a linear least squares problem, and uses other new results on MGS and its loss of orthogonality, together with an important but neglected condition number, and a relation between residual norms and certain singular values.
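The loss of orthogonality of MGS on which this analysis builds is easy to observe: it grows roughly in proportion to the condition number of the columns being orthogonalized. A minimal sketch (the test matrices, sizes, and condition numbers below are arbitrary choices):

```python
import numpy as np

def mgs_qr(A):
    """QR factorization by column-oriented modified Gram-Schmidt."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        for j in range(k + 1, n):        # immediately update remaining columns
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]
    return Q, R

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((50, 12)))
V, _ = np.linalg.qr(rng.standard_normal((12, 12)))
losses = []
for kappa in (1e2, 1e6, 1e10):
    s = np.logspace(0, -np.log10(kappa), 12)   # singular values 1 .. 1/kappa
    A = U @ np.diag(s) @ V.T                   # matrix with condition number kappa
    Q, _ = mgs_qr(A)
    losses.append(np.linalg.norm(np.eye(12) - Q.T @ Q))
    print(f"kappa={kappa:.0e}  loss of orthogonality={losses[-1]:.2e}")
```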
GMRES convergence analysis for a convection-diffusion model problem
 SIAM J. Sci. Comput
Abstract

Cited by 8 (1 self)
... (1986), pp. 856–869] is applied to streamline upwind Petrov–Galerkin (SUPG) discretized convection-diffusion problems, it typically exhibits an initial period of slow convergence followed by a faster decrease of the residual norm. Several approaches have been made to understand this behavior. However, the existing analyses are based solely on the matrix of the discretized system and do not take into account any influence of the right-hand side (determined by the boundary conditions and/or the source term in the PDE). Therefore they cannot explain the length of the initial period of slow convergence, which is right-hand side dependent. We concentrate on a frequently used model problem with Dirichlet boundary conditions and with a constant velocity field parallel to one of the axes. Instead of the eigendecomposition of the system matrix, which is ill-conditioned, we use its orthogonal transformation into a block-diagonal matrix with nonsymmetric tridiagonal Toeplitz blocks and offer an explanation of GMRES convergence. We show how the initial period of slow convergence is related to the boundary conditions and address the question of why the convergence accelerates in the second stage. Key words. convection-diffusion problem, streamline upwind Petrov–Galerkin discretization, GMRES, rate of convergence, ill-conditioned eigenvectors, nonnormality, tridiagonal Toeplitz matrices
EXTENSIONS OF CERTAIN GRAPH-BASED ALGORITHMS FOR PRECONDITIONING
, 2007
Abstract

Cited by 5 (0 self)
The original TPABLO algorithms are a collection of algorithms which compute a symmetric permutation of a linear system such that the permuted system has a relatively full block diagonal with relatively large nonzero entries. This block diagonal can then be used as a preconditioner. We propose and analyze three extensions of this approach: we incorporate a nonsymmetric permutation to obtain a large diagonal, we use a more general parametrization for TPABLO, and we use a block Gauss–Seidel preconditioner which can be implemented to have the same execution time as the corresponding block Jacobi preconditioner. Experiments are presented showing that for certain classes of matrices, the block Gauss–Seidel preconditioner used with the system permuted with the new algorithm can outperform the best ILUT preconditioners in a large set of experiments.
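The remark that block Gauss–Seidel can match block Jacobi in execution time can be seen from the preconditioner application itself: both perform exactly one solve per diagonal block, and the Gauss–Seidel sweep only adds cheap updates with block components already computed. A schematic sketch (the uniform block size and the diagonally dominant test matrix are illustrative; this is not the TPABLO permutation itself):

```python
import numpy as np

def block_partition(n, bs):
    """Contiguous index blocks of size bs (illustrative uniform partition)."""
    return [slice(i, min(i + bs, n)) for i in range(0, n, bs)]

def apply_block_jacobi(A, r, blocks):
    """z = M^{-1} r with M = block diagonal of A: one solve per block."""
    z = np.zeros_like(r)
    for s in blocks:
        z[s] = np.linalg.solve(A[s, s], r[s])
    return z

def apply_block_gauss_seidel(A, r, blocks):
    """z = M^{-1} r with M = block lower triangle of A.
    Still one solve per block; only the update with previously computed
    block components is added (block forward substitution)."""
    z = np.zeros_like(r)
    for s in blocks:
        rhs = r[s] - A[s, :s.start] @ z[:s.start]
        z[s] = np.linalg.solve(A[s, s], rhs)
    return z

rng = np.random.default_rng(2)
n, bs = 12, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant test matrix
blocks = block_partition(n, bs)
r = rng.standard_normal(n)
zJ = apply_block_jacobi(A, r, blocks)
zGS = apply_block_gauss_seidel(A, r, blocks)
```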
The Arnoldi process and GMRES for nearly symmetric matrices
 SIAM J. Matrix Anal. Appl
, 2008
Abstract

Cited by 5 (1 self)
Abstract. Matrices with a skew-symmetric part of low rank arise in many applications, including path following methods and integral equations. This paper explores the properties of the Arnoldi process when applied to such a matrix. We show that an orthogonal Krylov subspace basis can be generated with short recursion formulas and that the Hessenberg matrix generated by the Arnoldi process has a structure which makes it possible to derive a progressive GMRES method. Eigenvalue computation is also considered. Key words. low-rank perturbation, iterative method, solution of linear system, eigenvalue computation. AMS subject classifications. 65F10, 65F15. DOI. 10.1137/060668274
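One structural consequence is directly checkable: since H_m = Q_m^T A Q_m, the skew-symmetric part of the Hessenberg matrix, H_m - H_m^T = Q_m^T (A - A^T) Q_m, has rank at most that of the skew-symmetric part of A. A small numpy sketch (the sizes and the rank-2 skew perturbation below are illustrative):

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi process: A Q[:, :m] = Q H with Q orthonormal (n x (m+1))
    and H upper Hessenberg ((m+1) x m)."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for k in range(m):
        w = A @ Q[:, k]
        for j in range(k + 1):          # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ w
            w = w - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        Q[:, k + 1] = w / H[k + 1, k]
    return Q, H

rng = np.random.default_rng(3)
n, m = 40, 12
S = rng.standard_normal((n, n))
S = S + S.T                              # symmetric part
u, v = rng.standard_normal(n), rng.standard_normal(n)
A = S + np.outer(u, v) - np.outer(v, u)  # rank-2 skew-symmetric part
Q, H = arnoldi(A, rng.standard_normal(n), m)
Hm = H[:m, :m]
# H_m - H_m^T = Q_m^T (A - A^T) Q_m has rank <= rank(A - A^T) = 2
print(np.linalg.matrix_rank(Hm - Hm.T, tol=1e-8))
```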
Analytic models of the quantum harmonic oscillator
 Contemp. Math
, 1997
Abstract

Cited by 3 (1 self)
Abstract. There are many examples where non-orthogonality of a basis for Krylov subspace methods arises naturally. These methods usually require less storage or computational effort per iteration than methods using an orthonormal basis (optimal methods), but the convergence may be delayed. Truncated Krylov subspace methods and other examples of non-optimal methods have been shown to converge in many situations, often with small delay, but not in others. We explore the question of what the effect of using a non-optimal basis is. We prove certain identities for the relative residual gap, i.e., the relative difference between the residuals of the optimal and non-optimal methods. These identities and related bounds provide insight into when the delay is small and convergence is achieved. Further understanding is gained by using a recently developed general theory of superlinear convergence. Our analysis confirms the observed fact that in exact arithmetic the orthogonality of the basis is not important; only linear independence needs to be maintained. Numerical examples illustrate our theoretical results.
BOUNDS FOR THE LEAST SQUARES RESIDUAL USING SCALED TOTAL LEAST SQUARES
Abstract

Cited by 1 (0 self)
Abstract. The standard approaches to solving overdetermined linear systems Ax ≈ b construct minimal corrections to the data to make the corrected system compatible. In ordinary least squares (LS) the correction is restricted to the right-hand side b, while in scaled total least squares (Scaled TLS) [10; 7] corrections to both b and A are allowed, and their relative sizes are determined by a real positive parameter γ. As γ → 0, the Scaled TLS solution approaches the LS solution. Fundamentals of the Scaled TLS problem are analyzed in our paper [7] and in the contribution in this book entitled Unifying least squares, total least squares and data least squares. This contribution is based on the paper [8]. It presents a theoretical analysis of the relationship between the sizes of the LS and Scaled TLS corrections (called the LS and Scaled TLS distances) in terms of γ. We give new upper and lower bounds on the LS distance in terms of the Scaled TLS distance, compare these to existing bounds, and examine the tightness of the new bounds. This work can be applied to the analysis of iterative methods which minimize the residual norm [9; 6].
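The two distances can be compared directly: the LS distance is the least squares residual norm, while the Scaled TLS distance can be computed as σ_min([A, γb])/γ. A small sketch (random data and illustrative γ values) showing that the Scaled TLS distance never exceeds the LS distance and approaches it as γ → 0:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 30, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# LS distance: norm of the least squares residual (correction to b only)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
ls_dist = np.linalg.norm(b - A @ x)

# Scaled TLS distance for parameter gamma: sigma_min([A, gamma*b]) / gamma
stls = []
for gamma in (1e-3, 1e-1, 1.0):
    sigma = np.linalg.svd(np.hstack([A, (gamma * b)[:, None]]), compute_uv=False)
    stls.append(sigma[-1] / gamma)
    print(f"gamma={gamma:g}  Scaled TLS distance={stls[-1]:.6f}  LS distance={ls_dist:.6f}")
```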