Results 1–10 of 23
From Potential Theory To Matrix Iterations In Six Steps
 SIAM REVIEW
Abstract

Cited by 35 (4 self)
The theory of the convergence of Krylov subspace iterations for linear systems of equations (conjugate gradients, biconjugate gradients, GMRES, QMR, BiCGSTAB, ...) is reviewed. For a computation of this kind, an estimated asymptotic convergence factor ρ ≤ 1 can be derived by solving a problem of potential theory or conformal mapping. Six approximations are involved in relating the actual computation to this scalar estimate. These six approximations are discussed in a systematic way and illustrated by a sequence of examples computed with tools of numerical conformal mapping and semidefinite programming.
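For a concrete instance of such a scalar convergence estimate, the classical bound for conjugate gradients on a symmetric positive definite matrix depends only on the condition number κ. A minimal sketch (the function name is mine, not from the paper):

```python
import numpy as np

# Illustration of one classical "asymptotic convergence factor": for CG on a
# symmetric positive definite matrix with condition number kappa, the error
# reduction per iteration is bounded asymptotically by
#     rho = (sqrt(kappa) - 1) / (sqrt(kappa) + 1).

def cg_convergence_factor(eigenvalues):
    """Estimated per-iteration error reduction factor for CG (SPD case)."""
    kappa = max(eigenvalues) / min(eigenvalues)
    s = np.sqrt(kappa)
    return (s - 1.0) / (s + 1.0)

# Example: a spectrum contained in [1, 100] gives kappa = 100, rho = 9/11.
rho = cg_convergence_factor([1.0, 100.0])
```

The six approximations discussed in the paper quantify how far an actual Krylov iteration can be from such a single-number estimate.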
Estimates in Quadratic Formulas
, 1994
Abstract

Cited by 26 (7 self)
Let A be a real symmetric positive definite matrix. We consider three particular questions, namely estimates for the error in linear systems Ax = b, minimizing the quadratic functional min_x (xᵀAx − 2bᵀx) subject to the constraint ‖x‖ = α, α < ‖A⁻¹b‖, and estimates for the entries of the matrix inverse A⁻¹. All of these questions can be formulated as the problem of finding an estimate or an upper and lower bound on uᵀF(A)u, where F(A) = A⁻¹ resp. F(A) = A⁻², and u is a real vector. This problem can be considered in terms of estimates in Gauß-type quadrature formulas, which can be effectively computed by exploiting the underlying Lanczos process. Using this approach, we first recall the exact-arithmetic solution of the questions formulated above and then analyze the effect of rounding errors in the quadrature calculations. It is proved that the basic relation between the accuracy of Gauß quadrature for f(λ) = λ⁻¹ and the rate of ...
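The quadrature/Lanczos connection can be sketched in a few lines: run k Lanczos steps with starting vector u, form the tridiagonal matrix T_k, and evaluate the Gauss rule ‖u‖²·(e₁ᵀT_k⁻¹e₁) as an estimate of uᵀA⁻¹u. The code below is my own minimal illustration of that idea, not the paper's implementation:

```python
import numpy as np

def lanczos_gauss_estimate(A, u, k):
    """Gauss-rule estimate of u^T A^{-1} u after k Lanczos steps (A SPD)."""
    n = len(u)
    alphas, betas = [], []
    q_prev = np.zeros(n)
    beta = 0.0
    q = u / np.linalg.norm(u)
    for _ in range(k):
        w = A @ q - beta * q_prev          # three-term Lanczos recurrence
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-14:                   # invariant subspace found
            break
        q_prev, q = q, w / beta
    m = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:m - 1], 1) + np.diag(betas[:m - 1], -1)
    e1 = np.zeros(m)
    e1[0] = 1.0
    return (u @ u) * (e1 @ np.linalg.solve(T, e1))

# Tiny check: for a diagonal SPD matrix the exact value is sum(u_i^2 / a_i),
# and with k = n the estimate is exact (up to rounding).
A = np.diag([1.0, 2.0, 4.0])
u = np.array([1.0, 1.0, 1.0])
est = lanczos_gauss_estimate(A, u, 3)
exact = float(np.sum(u**2 / np.diag(A)))   # 1 + 1/2 + 1/4 = 1.75
```

For k < n and SPD A, the Gauss rule gives a lower bound on uᵀA⁻¹u; the paper's analysis concerns what survives of such bounds in finite precision.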
Thick-Restart Lanczos Method for Symmetric Eigenvalue Problems
 SIAM J. MATRIX ANAL. APPL
, 1998
Abstract

Cited by 23 (3 self)
For real symmetric eigenvalue problems, there are a number of algorithms that are mathematically equivalent, for example, the Lanczos algorithm, the Arnoldi method, and the unpreconditioned Davidson method. The Lanczos algorithm is often preferred because it uses significantly fewer arithmetic operations per iteration. To limit the maximum memory usage, these algorithms are often restarted. In recent years, a number of effective restarting schemes have been developed for the Arnoldi method and the Davidson method. This paper describes a simple restarting scheme for the Lanczos algorithm. This restarted Lanczos algorithm uses as many arithmetic operations as the original algorithm. Theoretically, this restarted Lanczos method is equivalent to the implicitly restarted Arnoldi method and the thick-restart Davidson method. Because it uses fewer arithmetic operations than the others, it is an attractive alternative for solving symmetric eigenvalue problems.
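Restarted Lanczos is what one uses in practice through ARPACK, whose implicitly restarted method the paper's scheme is theoretically equivalent to for symmetric problems. A small usage sketch (problem setup is mine): the `ncv` parameter caps the basis size, i.e. the memory that restarting exists to limit.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# SciPy's eigsh wraps ARPACK's implicitly restarted Lanczos. `ncv` bounds
# the number of basis vectors kept, so memory stays fixed across restarts.
A = diags(np.arange(1.0, 101.0))   # diagonal test matrix, eigenvalues 1..100
vals = eigsh(A, k=3, ncv=10, which='LA', return_eigenvectors=False)
largest = np.sort(vals)            # expect the three largest: 98, 99, 100
```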
Error analysis of the Lanczos algorithm for the nonsymmetric eigenvalue problem
 In R
, 1994
Cited by 19 (3 self)
Differences in the effects of rounding errors in Krylov solvers for symmetric indefinite linear systems
, 1999
Abstract

Cited by 15 (0 self)
The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving the reduced system in one way or another. This leads to well-known methods: MINRES, GMRES, and SYMMLQ. We will discuss in what way and to what extent these approaches differ in their sensitivity to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods, and we will not consider the errors in the Lanczos process itself. We will show that the method of solution may lead, under certain circumstances, to large additional errors that are not corrected by continuing the iteration process. Our findings are supported and illustrated by numerical examples.

1 Introduction

We will consider iterative methods for the construction of approximate solutions, starting with...
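A quick experiment in this spirit (the setup is mine): solve a symmetric indefinite system with MINRES and evaluate the *true* residual b − Ax of the returned iterate rather than trusting the internally recurred quantity, since the two can differ in finite precision.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import minres

# Symmetric indefinite test problem: eigenvalues in [-2, -1] and [1, 2].
n = 200
spectrum = np.concatenate([np.linspace(-2.0, -1.0, n // 2),
                           np.linspace(1.0, 2.0, n // 2)])
A = diags(spectrum)
rng = np.random.default_rng(0)
b = rng.standard_normal(n)

x, info = minres(A, b, maxiter=5 * n)    # info == 0 means converged
true_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)  # true residual
```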
Residual and backward error bounds in minimum residual Krylov subspace methods
 SIAM J. SCI. COMPUT
, 2002
Abstract

Cited by 10 (2 self)
Minimum residual norm iterative methods for solving linear systems Ax = b can be viewed as, and are often implemented as, sequences of least squares problems involving Krylov subspaces of increasing dimensions. The minimum residual method (MINRES) [C. Paige and M. Saunders, SIAM J. Numer. Anal., 12 (1975), pp. 617–629] and generalized minimum residual method (GMRES) [Y. Saad and M. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869] represent typical examples. In [C. Paige and Z. Strakoš, Bounds for the least squares distance using scaled total least squares, Numer. Math., to appear] revealing upper and lower bounds on the residual norm of any linear least squares (LS) problem were derived in terms of the total least squares (TLS) correction of the corresponding scaled TLS problem. In this paper theoretical results of [C. Paige and Z. Strakoš, Bounds for the least squares distance using scaled total least squares, Numer. Math., to appear] are extended to the GMRES context. The bounds that are developed are important in theory, but they also have fundamental practical implications for the finite precision behavior of the modified Gram–Schmidt implementation of GMRES, and perhaps for other minimum norm methods.
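The practical quantity behind such bounds is the normwise relative backward error β(x) = ‖b − Ax‖ / (‖A‖‖x‖ + ‖b‖), which is small exactly when x solves a nearby system. A self-contained sketch (function name is mine):

```python
import numpy as np

def backward_error(A, b, x):
    """Normwise relative backward error of x as a solution of Ax = b."""
    r = b - A @ x
    return np.linalg.norm(r) / (np.linalg.norm(A, 2) * np.linalg.norm(x)
                                + np.linalg.norm(b))

# A backward-stable direct solve should give a backward error near machine eps.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)
beta = backward_error(A, b, x)
```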
Analysis of the finite precision BiConjugate Gradient algorithm for nonsymmetric linear systems
 Math. Comp
, 1995
Abstract

Cited by 9 (4 self)
In this paper we analyze the biconjugate gradient algorithm in finite precision arithmetic, and suggest reasons for its often-observed robustness. By using a tridiagonal structure, which is preserved by the finite precision biconjugate gradient iteration, we are able to bound its residual norm by a minimum polynomial of a perturbed matrix (i.e., the residual norm of exact GMRES applied to a perturbed matrix) multiplied by an amplification factor. This shows that the occurrence of near-breakdowns or loss of biorthogonality does not necessarily deter convergence of the residuals, provided that the amplification factor remains bounded. Numerical examples are given to gain insights into these bounds.
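BiCG is available in SciPy, so the robustness the paper explains is easy to observe on a toy nonsymmetric system (the test matrix below is my own choice): despite the possibility of near-breakdowns, the true residual tracks a steady decrease.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicg

# Nonsymmetric, diagonally dominant tridiagonal test matrix.
n = 100
A = diags([np.full(n - 1, -1.0),   # subdiagonal
           np.full(n, 4.0),        # diagonal
           np.full(n - 1, 2.0)],   # superdiagonal (!= subdiagonal)
          offsets=[-1, 0, 1])
b = np.ones(n)

x, info = bicg(A, b, maxiter=10 * n)   # info == 0 means converged
rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```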
Arnoldi versus Nonsymmetric Lanczos Algorithms for Solving Nonsymmetric Matrix Eigenvalue Problems
 BIT
, 1996
Abstract

Cited by 7 (1 self)
We obtain several results which may be useful in determining the convergence behavior of eigenvalue algorithms based upon Arnoldi and nonsymmetric Lanczos recursions. We derive a relationship between nonsymmetric Lanczos eigenvalue procedures and Arnoldi eigenvalue procedures. We demonstrate that the Arnoldi recursions preserve a property which characterizes normal matrices, and that if we could determine the appropriate starting vectors, we could mimic the nonsymmetric Lanczos eigenvalue convergence on a general diagonalizable matrix by its convergence on related normal matrices. Using a unitary equivalence for each of these Krylov subspace methods, we define sets of test problems where we can easily vary certain spectral properties of the matrices. We use these and other test problems to examine the behavior of an Arnoldi and of a nonsymmetric Lanczos procedure.
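For reference, the Arnoldi side of the comparison fits in a short sketch (my own illustration, not the paper's code): build an orthonormal Krylov basis with modified Gram–Schmidt and take the eigenvalues of the Hessenberg matrix H_k as eigenvalue estimates (Ritz values).

```python
import numpy as np

def arnoldi(A, v, k):
    """k steps of Arnoldi: returns orthonormal basis Q and Hessenberg H."""
    n = len(v)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:              # invariant subspace: stop early
            return Q[:, :j + 1], H[:j + 1, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

# With k = n the Ritz values reproduce the spectrum exactly (up to rounding).
A = np.diag([1.0, 2.0, 3.0, 4.0])
Q, H = arnoldi(A, np.ones(4), 4)
ritz = np.sort(np.linalg.eigvals(H[:4, :4]).real)
```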
Conjugate Gradient Algorithms with Reduced Synchronization Overhead on Distributed Memory Multiprocessors
, 1999
Abstract

Cited by 7 (0 self)
The standard formulation of the conjugate gradient algorithm involves two inner product computations. The results of these two inner products are needed to update the search direction and the computed solution. Since these inner products are mutually interdependent, in a distributed memory parallel environment their computation and subsequent distribution requires two separate communication and synchronization phases. In this paper, we present three related mathematically equivalent rearrangements of the standard algorithm that reduce the number of communication phases. We present empirical evidence that two of these rearrangements are numerically stable. This claim is further substantiated by a proof that one of the empirically stable rearrangements arises naturally in the symmetric Lanczos method for linear systems, which is equivalent to the conjugate gradient method.
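The two interdependent inner products are easy to point at in the standard algorithm. In this sketch (a serial illustration, mine) each marked dot product would be a separate global reduction on a distributed-memory machine; the paper's rearrangements merge these into a single communication phase.

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=1000):
    """Standard conjugate gradients with its two inner products marked."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rho = r @ r                          # inner product #1 (a global reduction)
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rho / (p @ Ap)           # inner product #2 (a second reduction,
        x += alpha * p                   # which cannot start until #1 is done)
        r -= alpha * Ap
        rho_new = r @ r                  # inner product #1 of next iteration
        if np.sqrt(rho_new) <= tol * np.linalg.norm(b):
            return x
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x

A = np.diag([1.0, 2.0, 3.0])
b = np.array([1.0, 1.0, 1.0])
x = cg(A, b)                             # converges in at most 3 iterations
```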
Residual Replacement Strategies for Krylov Subspace Iterative Methods for the Convergence of True Residuals
, 1999
Abstract

Cited by 6 (0 self)
In this paper, a strategy is proposed for alternative computations of the residual vectors in Krylov subspace methods, which improves the agreement of the computed residuals and the true residuals to the level of O(u)‖A‖‖x‖. Building on earlier ideas on residual replacement and on insights into the finite precision behaviour of Krylov subspace methods, computable error bounds are derived for iterations that involve occasionally replacing the computed residuals by the true residuals. These bounds are used to monitor the deviation of the two residuals and hence to select residual replacement steps, so that the recurrence relations for the computed residuals, which control the convergence of the method, are perturbed within safe bounds. Numerical examples are presented to demonstrate the effectiveness of this new residual replacement scheme.

1 Introduction

Krylov subspace iterative methods for solving a large linear system Ax = b typically consist of iterations that recursively update appr...
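The basic mechanism can be sketched in a few lines of CG. Note that the replacement schedule below is a naive fixed period of my own choosing, not the paper's monitored criterion; the point is only that the recurred residual is occasionally overwritten by the true residual b − Ax so the two cannot drift apart.

```python
import numpy as np

def cg_with_replacement(A, b, period=10, tol=1e-12, maxiter=500):
    """CG with periodic residual replacement (naive fixed schedule)."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rho = r @ r
    for k in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap                  # recurred (updated) residual
        if k % period == 0:
            r = b - A @ x                # residual replacement step
        rho_new = r @ r
        if np.sqrt(rho_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rho_new / rho) * p
        rho = rho_new
    true_rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
    return x, true_rel_res

A = np.diag(np.linspace(1.0, 50.0, 50))  # SPD test matrix, kappa = 50
b = np.ones(50)
x, true_rel_res = cg_with_replacement(A, b)
```

The paper's contribution is precisely the *selection* of replacement steps via computable bounds, so that the recurrence is perturbed only within safe limits.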