Results 1–10 of 10
Inexact Preconditioned Conjugate Gradient Method with Inner-Outer Iteration
SIAM J. Sci. Comput., 1997
Abstract

Cited by 51 (0 self)
An important variation of preconditioned conjugate gradient algorithms is the inexact preconditioner implemented with inner-outer iterations [5], where the preconditioner is solved by an inner iteration to a prescribed precision. In this paper, we formulate an inexact preconditioned conjugate gradient algorithm for a symmetric positive definite system and analyze its convergence property. We establish a linear convergence result using a local relation of residual norms. We also analyze the algorithm using a global equation and show that the algorithm may have the superlinear convergence property when the inner iteration is solved to high accuracy. The analysis is in agreement with the observed numerical behaviour of the algorithm. In particular, it suggests a heuristic choice of the stopping threshold for the inner iteration. Numerical examples are given to show the effectiveness of this choice and to compare the convergence bound.
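As a rough illustration of the inner-outer structure described in this abstract, the sketch below runs conjugate gradient with a preconditioner solve M z = r that is itself an iterative solve stopped at a prescribed relative precision `inner_tol`. The concrete choices here (M as the tridiagonal part of A, a Jacobi inner iteration, and the flexible Polak-Ribiere form of beta, which tolerates a varying preconditioner) are illustrative assumptions, not the formulation analyzed in the paper.

```python
import numpy as np

def inexact_pcg(A, b, inner_tol=1e-2, tol=1e-10, maxit=500):
    """CG in which the preconditioner solve M z = r is an iterative
    (inner) solve stopped at relative precision inner_tol.
    Illustrative choices: M = tridiagonal part of A, Jacobi inner
    sweeps, flexible (Polak-Ribiere) beta."""
    n = len(b)
    M = np.tril(np.triu(A, -1), 1)      # tridiagonal part of A (assumed SPD)
    d = np.diag(M)

    def inner_solve(rhs):
        z = np.zeros(n)
        for _ in range(100):            # inner iteration on M z = rhs
            res = rhs - M @ z
            if np.linalg.norm(res) <= inner_tol * np.linalg.norm(rhs):
                break
            z = z + res / d             # one Jacobi sweep
        return z

    x = np.zeros(n)
    r = b.copy()
    z = inner_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(b):
            break
        z = inner_solve(r_new)
        beta = (z @ (r_new - r)) / rz   # flexible form, robust to inexact z
        rz = z @ r_new
        p = z + beta * p
        r = r_new
    return x
```

The stopping threshold `inner_tol` plays the role of the prescribed inner precision whose heuristic choice the paper analyzes.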
Lanczos-type solvers for nonsymmetric linear systems of equations
Acta Numer., 1997
Abstract

Cited by 32 (11 self)
Among the iterative methods for solving large linear systems with a sparse (or, possibly, structured) nonsymmetric matrix, those that are based on the Lanczos process feature short recurrences for the generation of the Krylov space. This means low cost and low memory requirement. This review article introduces the reader not only to the basic forms of the Lanczos process and some of the related theory, but also describes in detail a number of solvers that are based on it, including those that are considered to be the most efficient ones. Possible breakdowns of the algorithms and ways to cure them by look-ahead are also discussed.
Residual Replacement Strategies for Krylov Subspace Iterative Methods for the Convergence of True Residuals
1999
Abstract

Cited by 7 (0 self)
In this paper, a strategy is proposed for alternative computations of the residual vectors in Krylov subspace methods, which improves the agreement of the computed residuals and the true residuals to the level of O(u)‖A‖‖x‖. Building on earlier ideas on residual replacement and on insights in the finite precision behaviour of Krylov subspace methods, computable error bounds are derived for iterations that involve occasionally replacing the computed residuals by the true residuals. These bounds are used to monitor the deviation of the two residuals and hence to select residual replacement steps, so that the recurrence relations for the computed residuals, which control the convergence of the method, are perturbed within safe bounds. Numerical examples are presented to demonstrate the effectiveness of this new residual replacement scheme.
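A minimal sketch of the idea in this abstract, assuming plain CG on a symmetric positive definite system: the recursively updated residual is occasionally overwritten by the true residual b - A x. Replacing at a fixed period `replace_every` is a simple stand-in for the paper's bound-driven selection of replacement steps.

```python
import numpy as np

def cg_replaced(A, b, tol=1e-12, maxit=1000, replace_every=25):
    """Plain CG whose recursively updated (computed) residual is
    periodically replaced by the true residual b - A x, keeping the
    two residuals in agreement.  Fixed-period replacement is an
    illustrative stand-in for bound-driven selection."""
    x = np.zeros_like(b)
    r = b.copy()                        # computed (recursive) residual
    p = r.copy()
    rr = r @ r
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap              # recurrence update of the residual
        if k % replace_every == 0:
            r = b - A @ x               # residual replacement step
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x
```

The point of the strategy is that each replacement is a small, controlled perturbation of the residual recurrence, so convergence is preserved while the final true residual is not limited by accumulated deviation.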
Semiduality in the two-sided Lanczos algorithm
1993
Abstract

Cited by 4 (1 self)
Lanczos vectors computed in finite precision arithmetic by the three-term recurrence tend to lose their mutual biorthogonality. One either accepts this loss and takes more steps or rebiorthogonalizes the Lanczos vectors at each step. For the symmetric case, there is a compromise approach. This compromise, known as maintaining semiorthogonality, minimizes the cost of reorthogonalization. This paper extends the compromise to the two-sided Lanczos algorithm, and justifies the new algorithm. The compromise is called maintaining semiduality. An advantage of maintaining semiduality is that the computed tridiagonal is a perturbation of a matrix that is exactly similar to the appropriate projection of the given matrix onto the computed subspaces. Another benefit is that the simple two-sided Gram-Schmidt procedure is a viable way to correct for loss of duality. Some numerical experiments show that our Lanczos code is significantly more efficient than Arnoldi's method.
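The two-sided Gram-Schmidt correction mentioned in this abstract can be sketched as follows: given bases V and W whose duality has degraded, restore W^T V = I column by column. This is a bare modified-Gram-Schmidt variant for illustration only; it assumes every pivot w_j^T v_j is safely nonzero and has none of the look-ahead or breakdown handling a real two-sided Lanczos code needs.

```python
import numpy as np

def two_sided_gram_schmidt(V, W):
    """Enforce biorthonormality W^T V = I (duality) between the columns
    of V and W by a modified two-sided Gram-Schmidt sweep.  Bare sketch:
    assumes every pivot w_j^T v_j is nonzero."""
    V = V.astype(float).copy()
    W = W.astype(float).copy()
    m = V.shape[1]
    for j in range(m):
        for i in range(j):
            # remove components against the already-biorthonormal pairs
            V[:, j] -= (W[:, i] @ V[:, j]) * V[:, i]
            W[:, j] -= (V[:, i] @ W[:, j]) * W[:, i]
        d = W[:, j] @ V[:, j]           # pivot (no breakdown handling)
        V[:, j] /= d                    # scale so that w_j^T v_j = 1
    return V, W
```

After the sweep, (W^T V)_{ij} = w_i^T v_j equals 1 for i = j and 0 otherwise, which is exactly the duality property the three-term recurrence loses in finite precision.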
A Mixed Product Krylov Subspace Method for Solving Nonsymmetric Linear Systems
Abstract

Cited by 1 (0 self)
In this paper, a product Krylov subspace method that we call mixed BiCGSTAB-CGS is derived. The method is built on the idea of the standard CGS and BiCGSTAB iterations but allows switching between the two at each iteration. This flexibility can be used, for example, to address the difficulty of excessive increase in residual norm in CGS, which may cause instability. In particular, a CGS-based implementation will be presented, which can be regarded as another way of using the BiCGSTAB idea to improve the stability of CGS. Numerical examples are given to demonstrate the stabilizing effect of the mixed algorithm. Iterative methods for solving large nonsymmetric linear systems Ax = b that extract approximate solutions from the Krylov subspace K_n = span{b, Ab, A^2 b, ..., A^n b} are usually called Krylov subspace methods. The BiCG algorithm [4, 9] is a classical Krylov subspace method that produces an approximation x_n with a residual reduction ...
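As a small self-contained illustration of extracting an approximation from the Krylov subspace defined above: build the power basis, orthonormalize it, and minimize ‖b - A x‖ over x in the subspace. The dense QR and least-squares projection here are purely for exposition (a GMRES-style projection), not any of the short-recurrence product methods this paper discusses; the test matrix is an arbitrary assumption.

```python
import numpy as np

# Build K_m = span{b, Ab, ..., A^{m-1} b} and project: minimize
# ||b - A x|| over x in K_m.  Dense and for exposition only.
rng = np.random.default_rng(0)
n, m = 40, 15
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # illustrative test matrix
b = rng.standard_normal(n)

V = np.empty((n, m))
V[:, 0] = b
for j in range(1, m):
    V[:, j] = A @ V[:, j - 1]           # power (monomial) Krylov basis
Q, _ = np.linalg.qr(V)                  # orthonormal basis of K_m

y, *_ = np.linalg.lstsq(A @ Q, b, rcond=None)   # residual minimization
x = Q @ y
rel = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Because the eigenvalues of this test matrix cluster near 1, the residual of the subspace minimizer `rel` drops quickly as m grows.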
A Residual Replacement Strategy for Improving the Maximum Attainable Accuracy of s-Step Krylov Subspace Methods
Numerical Investigation of Krylov Subspace Methods for Solving Nonsymmetric Systems of Linear Equations with Dominant Skew-Symmetric Part
Computing and Information
Abstract
(Communicated by Lubin Vulkov) A numerical investigation of the BiCG and GMRES methods for solving nonsymmetric systems of linear equations with dominant skew-symmetric part is presented. Numerical experiments were carried out for the linear system arising from a 5-point central difference approximation of the two-dimensional convection-diffusion problem with different velocity coefficients and a small parameter at the higher derivative. The behavior of BiCG and GMRES(10) has been compared for such systems.
Extending the eigCG algorithm to nonsymmetric Lanczos for linear systems with multiple right-hand sides
Implementation Aspects
Abstract
The inner products, vector updates and matrix-vector product are easily parallelized and vectorized. The more successful preconditioners, i.e., those based upon incomplete LU decomposition, are not easily parallelizable. For that reason one is often satisfied with the use of only diagonal scaling as a preconditioner on highly parallel computers, such as the CM-2 [24]. On distributed memory computers we need large-grained parallelism in order to reduce synchronization overhead. This can be achieved by combining the work required for a successive number of iteration steps. The idea is to first construct in parallel a straightforward Krylov basis for the search subspace in which an update for the current solution will be determined. Once this basis has been computed, the vectors are orthogonalized, as is done in Krylov subspace methods. The construction as well as the orthogonalization can be done with large-grained parallelism, and has a sufficient degree of parallelism in it. This approach has been ...
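The basis-first/orthogonalize-later split described above can be sketched as follows, assuming a simple monomial basis and a dense QR as the block orthogonalization. Practical s-step codes would use a communication-avoiding QR (TSQR) and a better-conditioned basis (Newton or Chebyshev) to keep the monomial basis from degenerating, so this is only an outline of the coarse-grained structure.

```python
import numpy as np

def build_then_orthogonalize(A, v, s):
    """First build a straightforward (monomial) Krylov basis
    [v, Av, ..., A^{s-1} v] with s matrix-vector products and no
    intervening inner products (the parallel-friendly phase), then
    orthogonalize the whole block in one QR (the synchronization-
    heavy phase).  Illustrative sketch only."""
    n = len(v)
    V = np.empty((n, s))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, s):
        V[:, j] = A @ V[:, j - 1]       # basis construction, no inner products
    Q, _ = np.linalg.qr(V)              # one block orthogonalization
    return Q
```

Grouping s matrix-vector products before a single orthogonalization is exactly the large-grained-parallelism trade: fewer synchronization points per basis vector, at the cost of working with a less well-conditioned intermediate basis.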
Residual Replacement Strategies for Krylov Subspace Iterative Methods for the Convergence of True Residuals
2000
Abstract
In this paper, a strategy is proposed for alternative computations of the residual vectors in Krylov subspace methods, which improves the agreement of the computed residuals and the true residuals to the level of O(u)‖A‖‖x‖. Building on earlier ideas on residual replacement and on insights in the finite precision behavior of Krylov subspace methods, computable error bounds are derived for iterations that involve occasionally replacing the computed residuals by the true residuals. These bounds are used to monitor the deviation of the two residuals and hence to select residual replacement steps, so that the recurrence relations for the computed residuals, which control the convergence of the method, are perturbed within safe bounds. Numerical examples are presented to demonstrate the effectiveness of this new residual replacement scheme.