Results 1–10 of 38
From Potential Theory To Matrix Iterations In Six Steps
 SIAM REVIEW
"... The theory of the convergence of Krylov subspace iterations for linear systems of equations (conjugate gradients, biconjugate gradients, GMRES, QMR, BiCGSTAB, ...) is reviewed. For a computation of this kind, an estimated asymptotic convergence factor ae 1 can be derived by solving a problem of pot ..."
Cited by 45 (4 self)
The theory of the convergence of Krylov subspace iterations for linear systems of equations (conjugate gradients, biconjugate gradients, GMRES, QMR, BiCGSTAB, ...) is reviewed. For a computation of this kind, an estimated asymptotic convergence factor ρ ≤ 1 can be derived by solving a problem of potential theory or conformal mapping. Six approximations are involved in relating the actual computation to this scalar estimate. These six approximations are discussed in a systematic way and illustrated by a sequence of examples computed with tools of numerical conformal mapping and semidefinite programming.
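As background to the scalar estimate mentioned in this abstract, the asymptotic convergence factor is conventionally defined from the residual history as below; this is the standard definition, not a formula quoted from the review.

```latex
% Asymptotic convergence factor of a Krylov iteration (standard definition):
\rho \;=\; \limsup_{n\to\infty}\left(\frac{\|r_n\|}{\|r_0\|}\right)^{1/n},
\qquad\text{so that}\qquad
\frac{\|r_n\|}{\|r_0\|} \;\approx\; C\,\rho^{\,n}
\quad\text{for large } n .
```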
Lanczos-type solvers for nonsymmetric linear systems of equations
 Acta Numer
, 1997
"... Among the iterative methods for solving large linear systems with a sparse (or, possibly, structured) nonsymmetric matrix, those that are based on the Lanczos process feature short recurrences for the generation of the Krylov space. This means low cost and low memory requirement. This review article ..."
Cited by 40 (11 self)
Among the iterative methods for solving large linear systems with a sparse (or, possibly, structured) nonsymmetric matrix, those that are based on the Lanczos process feature short recurrences for the generation of the Krylov space. This means low cost and low memory requirements. This review article introduces the reader to the basic forms of the Lanczos process and some of the related theory, and it describes in detail a number of solvers that are based on it, including those considered to be the most efficient. Possible breakdowns of the algorithms and ways to cure them by look-ahead are also discussed.
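To make the short recurrences concrete, the following is a minimal NumPy sketch of the two-sided (nonsymmetric) Lanczos biorthogonalization without look-ahead; the function name, normalization choices, and breakdown tolerance are illustrative assumptions, not taken from the article.

```python
import numpy as np

def two_sided_lanczos(A, v1, w1, m):
    """Two-sided (nonsymmetric) Lanczos biorthogonalization, no look-ahead.

    Builds V and W with W^T V = I and W^T A V tridiagonal (in exact
    arithmetic), using only three-term recurrences.  Practical solvers add
    look-ahead to cure (near-)breakdowns, as discussed in the review.
    """
    n = A.shape[0]
    s = w1 @ v1
    if abs(s) < np.finfo(float).eps:
        raise RuntimeError("w1^T v1 = 0: cannot start the process")
    # scale so that w^T v = 1
    v = v1 / np.sqrt(abs(s))
    w = np.sign(s) * w1 / np.sqrt(abs(s))
    v_old = np.zeros(n)
    w_old = np.zeros(n)
    beta = delta = 0.0
    V, W, alphas = [], [], []
    for _ in range(m):
        V.append(v); W.append(w)
        Av = A @ v
        alpha = w @ Av
        alphas.append(alpha)
        vhat = Av - alpha * v - beta * v_old           # three-term recurrence
        what = A.T @ w - alpha * w - delta * w_old     # three-term recurrence
        s = what @ vhat
        if abs(s) < np.finfo(float).eps:               # (near-)breakdown
            break
        delta = np.sqrt(abs(s))
        beta = s / delta
        v_old, w_old = v, w
        v, w = vhat / delta, what / beta
    return np.column_stack(V), np.column_stack(W), np.array(alphas)
```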
Estimates in Quadratic Formulas
, 1994
"... Let A be a real symmetric positive definite matrix. We consider three particular questions, namely estimates for the error in linear systems Ax = b, minimizing quadratic functional min x (x T Ax \Gamma 2b T x) subject to the constraint k x k= ff, ff !k A \Gamma1 b k, and estimates for the e ..."
Cited by 33 (8 self)
Let A be a real symmetric positive definite matrix. We consider three particular questions, namely estimates for the error in linear systems Ax = b, minimizing the quadratic functional min_x (x^T A x − 2 b^T x) subject to the constraint ||x|| = α, α < ||A^{-1} b||, and estimates for the entries of the matrix inverse A^{-1}. All of these questions can be formulated as the problem of finding an estimate or an upper and lower bound on u^T F(A) u, where F(A) = A^{-1} resp. F(A) = A^{-2}, and u is a real vector. This problem can be considered in terms of estimates in Gauß-type quadrature formulas, which can be effectively computed by exploiting the underlying Lanczos process. Using this approach, we first recall the exact arithmetic solution of the questions formulated above and then analyze the effect of rounding errors in the quadrature calculations. It is proved that the basic relation between the accuracy of Gauß quadrature for f(λ) = λ^{-1} and the rate of ...
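A minimal sketch of the standard construction behind this approach: k steps of symmetric Lanczos started from u give a tridiagonal matrix T_k, and the k-node Gauss rule for u^T A^{-1} u equals ||u||^2 times the (1,1) entry of T_k^{-1}. The function name and the omission of reorthogonalization are illustrative choices, not taken from the paper.

```python
import numpy as np

def gauss_estimate_uT_Ainv_u(A, u, k):
    """Estimate u^T A^{-1} u for SPD A via k Lanczos steps started from u.

    The Gauss quadrature value is ||u||^2 * e_1^T T_k^{-1} e_1, where T_k is
    the Lanczos tridiagonal matrix.  Minimal sketch, no reorthogonalization.
    """
    n = A.shape[0]
    alphas, betas = [], []
    q = u / np.linalg.norm(u)
    q_old = np.zeros(n)
    beta = 0.0
    for _ in range(k):
        w = A @ q - beta * q_old
        alpha = q @ w
        w = w - alpha * q
        alphas.append(alpha)
        beta = np.linalg.norm(w)
        if beta == 0.0:                 # invariant subspace found: stop early
            break
        betas.append(beta)
        q_old, q = q, w / beta
    m = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:m - 1], 1) + np.diag(betas[:m - 1], -1)
    e1 = np.zeros(m); e1[0] = 1.0
    return (u @ u) * (e1 @ np.linalg.solve(T, e1))
```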
Thick-Restart Lanczos Method for Symmetric Eigenvalue Problems
 SIAM J. MATRIX ANAL. APPL
, 1998
"... For real symmetric eigenvalue problems, there are a number of algorithms that are mathematically equivalent, for example, the Lanczos algorithm, the Arnoldi method and the unpreconditioned Davidson method. The Lanczos algorithm is often preferred because it uses significantly fewer arithmetic ope ..."
Cited by 23 (3 self)
For real symmetric eigenvalue problems, there are a number of algorithms that are mathematically equivalent, for example, the Lanczos algorithm, the Arnoldi method, and the unpreconditioned Davidson method. The Lanczos algorithm is often preferred because it uses significantly fewer arithmetic operations per iteration. To limit the maximum memory usage, these algorithms are often restarted. In recent years, a number of effective restarting schemes have been developed for the Arnoldi method and the Davidson method. This paper describes a simple restarting scheme for the Lanczos algorithm. This restarted Lanczos algorithm uses as many arithmetic operations as the original algorithm. Theoretically, this restarted Lanczos method is equivalent to the implicitly restarted Arnoldi method and the thick-restart Davidson method. Because it uses fewer arithmetic operations than the others, it is an attractive alternative for solving symmetric eigenvalue problems.
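One way to picture the restart described here, assuming the standard thick-restart construction (retain k Ritz pairs after m Lanczos steps and continue the recurrence from the leftover basis vector); the notation below is ours, not quoted from the paper.

```latex
% After m Lanczos steps, A V_m = V_m T_m + \beta_m v_{m+1} e_m^T.
% A thick restart keeps k Ritz pairs (\theta_i, V_m y_i); continuing the
% Lanczos recurrence from v_{m+1} produces a projected matrix of the form
\widehat{T} \;=\;
\begin{pmatrix}
\theta_1 &        &          & s_1          &              &        \\
         & \ddots &          & \vdots       &              &        \\
         &        & \theta_k & s_k          &              &        \\
s_1      & \cdots & s_k      & \alpha_{k+1} & \beta_{k+1}  &        \\
         &        &          & \beta_{k+1}  & \alpha_{k+2} & \ddots \\
         &        &          &              & \ddots       & \ddots
\end{pmatrix},
\qquad s_i = \beta_m\,(y_i)_m ,
% i.e. an arrowhead block from the retained Ritz values followed by an
% ordinary tridiagonal continuation.
```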
Differences in the effects of rounding errors in Krylov solvers for symmetric indefinite linear systems
, 1999
"... The 3term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving the reduced system in one way or another. Thi ..."
Cited by 15 (1 self)
The three-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems by solving the reduced system in one way or another. This leads to well-known methods: MINRES, GMRES, and SYMMLQ. We will discuss in what way and to what extent these approaches differ in their sensitivity to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods, and we will not consider the errors in the Lanczos process itself. We will show that the method of solution may lead, under certain circumstances, to large additional errors that are not corrected by continuing the iteration process. Our findings are supported and illustrated by numerical examples.
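To illustrate how the same Lanczos data can be turned into different iterates, the sketch below builds the Lanczos basis and then extracts a minimum-residual iterate and a Galerkin iterate by small dense solves. The real methods use short recurrences (and, in SYMMLQ's case, an LQ factorization of the projected system); the helper names and the dense solves here are illustrative assumptions only.

```python
import numpy as np

def lanczos_basis(A, b, k):
    """Symmetric Lanczos: returns V (n x (k+1)), extended tridiagonal
    Tbar ((k+1) x k), and beta0 = ||b||.  Minimal sketch, x0 = 0."""
    n = len(b)
    V = np.zeros((n, k + 1))
    T = np.zeros((k + 1, k))
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    beta = 0.0
    for j in range(k):
        w = A @ V[:, j] - (beta * V[:, j - 1] if j > 0 else 0.0)
        alpha = V[:, j] @ w
        w -= alpha * V[:, j]
        beta = np.linalg.norm(w)        # breakdown (beta = 0) not handled here
        T[j, j] = alpha
        T[j + 1, j] = beta
        if j + 1 < k:
            T[j, j + 1] = beta
        V[:, j + 1] = w / beta
    return V, T, beta0

def minimum_residual_iterate(V, T, beta0):
    """Minimum-residual extraction (the MINRES/GMRES choice)."""
    k = T.shape[1]
    rhs = np.zeros(k + 1); rhs[0] = beta0
    y, *_ = np.linalg.lstsq(T, rhs, rcond=None)   # min ||beta0 e1 - Tbar y||
    return V[:, :k] @ y

def galerkin_iterate(V, T, beta0):
    """Galerkin extraction: solve T_k y = beta0 e1 (may be singular for
    indefinite A at some steps, which SYMMLQ handles stably via LQ)."""
    k = T.shape[1]
    rhs = np.zeros(k); rhs[0] = beta0
    y = np.linalg.solve(T[:k, :], rhs)
    return V[:, :k] @ y
```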
Analysis of the finite precision BiConjugate Gradient algorithm for nonsymmetric linear systems
 Math. Comp
, 1995
"... Abstract. In this paper we analyze the biconjugate gradient algorithm in finite precision arithmetic, and suggest reasons for its often observed robustness. By using a tridiagonal structure, which is preserved by the finite precision biconjugate gradient iteration, we are able to bound its residua ..."
Cited by 13 (4 self)
In this paper we analyze the biconjugate gradient algorithm in finite precision arithmetic, and suggest reasons for its often observed robustness. By using a tridiagonal structure, which is preserved by the finite precision biconjugate gradient iteration, we are able to bound its residual norm by a minimum polynomial of a perturbed matrix (i.e., the residual norm of the exact GMRES applied to a perturbed matrix) multiplied by an amplification factor. This shows that the occurrence of near-breakdowns or loss of biorthogonality does not necessarily deter convergence of the residuals, provided that the amplification factor remains bounded. Numerical examples are given to gain insights into these bounds.
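For reference, a minimal unpreconditioned BiCG iteration (the algorithm analyzed above) can be sketched as follows; the choice of shadow residual, the tolerance, and the iteration cap are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bicg(A, b, x0=None, tol=1e-10, maxiter=200):
    """Unpreconditioned BiCG, shadow residual chosen as r~ = r_0.

    Near-breakdown occurs when r~^T r or p~^T A p becomes tiny, which is
    the situation the finite precision analysis above is concerned with.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    rt = r.copy()                      # shadow residual
    p, pt = r.copy(), rt.copy()
    rho = rt @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rho / (pt @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rt -= alpha * (A.T @ pt)       # shadow recurrence uses A^T
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        rho_new = rt @ r
        beta = rho_new / rho
        p = r + beta * p
        pt = rt + beta * pt
        rho = rho_new
    return x
```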
Residual and backward error bounds in minimum residual Krylov subspace methods
 SIAM J. SCI. COMPUT
, 2002
"... Minimum residual norm iterative methods for solving linear systems Ax = b can be viewed as, and are often implemented as, sequences of least squares problems involving Krylov subspaces of increasing dimensions. The minimum residual method (MINRES) [C. Paige and M. Saunders, SIAM J. Numer. Anal., 12 ..."
Cited by 11 (2 self)
Minimum residual norm iterative methods for solving linear systems Ax = b can be viewed as, and are often implemented as, sequences of least squares problems involving Krylov subspaces of increasing dimensions. The minimum residual method (MINRES) [C. Paige and M. Saunders, SIAM J. Numer. Anal., 12 (1975), pp. 617–629] and generalized minimum residual method (GMRES) [Y. Saad and M. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869] represent typical examples. In [C. Paige and Z. Strakoš, Bounds for the least squares distance using scaled total least squares, Numer. Math., to appear] revealing upper and lower bounds on the residual norm of any linear least squares (LS) problem were derived in terms of the total least squares (TLS) correction of the corresponding scaled TLS problem. In this paper theoretical results of [C. Paige and Z. Strakoš, Bounds for the least squares distance using scaled total least squares, Numer. Math., to appear] are extended to the GMRES context. The bounds that are developed are important in theory, but they also have fundamental practical implications for the finite precision behavior of the modified Gram–Schmidt implementation of GMRES, and perhaps for other minimum norm methods.
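As background for the phrase "backward error bounds", the standard normwise relative backward error of an iterate is recalled below; this is textbook material given for orientation, not the specific bound derived in the paper.

```latex
% Normwise relative backward error of an approximate solution x_k
% (standard definition):
\beta(x_k) \;=\; \frac{\|b - A x_k\|}{\|A\|\,\|x_k\| + \|b\|},
% an iterate is considered backward stable once \beta(x_k) reaches the
% order of the unit roundoff u.
```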
Residual Replacement Strategies for Krylov Subspace Iterative Methods for the Convergence of True Residuals
 Delft University of Technology
"... In this paper, a strategy is proposed for alternative computations of the residual vectors in Krylov subspace methods, which improves the agreement of the computed residuals and the true residuals to the level of O(u)kAkkxk. Building on earlier ideas on residual replacement and on insights in the ni ..."
Cited by 10 (0 self)
In this paper, a strategy is proposed for alternative computations of the residual vectors in Krylov subspace methods, which improves the agreement of the computed residuals and the true residuals to the level of O(u)||A|| ||x||. Building on earlier ideas on residual replacement and on insights into the finite precision behaviour of Krylov subspace methods, computable error bounds are derived for iterations that involve occasionally replacing the computed residuals by the true residuals. These bounds are used to monitor the deviation of the two residuals and hence to select residual replacement steps, so that the recurrence relations for the computed residuals, which control the convergence of the method, are perturbed within safe bounds. Numerical examples are presented to demonstrate the effectiveness of this new residual replacement scheme.
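A heavily simplified sketch of the idea, using conjugate gradients: the recurred residual is occasionally replaced by the true residual b - Ax. The fixed replacement interval below is only a placeholder for the paper's computable-bound criterion, and the function name and tolerances are illustrative assumptions.

```python
import numpy as np

def cg_with_residual_replacement(A, b, tol=1e-12, maxiter=500, replace_every=50):
    """Conjugate gradients with occasional residual replacement (sketch).

    The paper selects replacement steps adaptively from computable error
    bounds; the fixed interval used here merely stands in for that rule.
    """
    n = len(b)
    x = np.zeros(n)
    r = b.copy()                       # recurred residual (x0 = 0)
    p = r.copy()
    rho = r @ r
    nb = np.linalg.norm(b)
    for k in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap                # cheap residual update by recurrence
        if k % replace_every == 0:
            r = b - A @ x              # replace by the true residual
        rho_new = r @ r
        if np.sqrt(rho_new) <= tol * nb:
            break
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x
```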