Results 1–10 of 10
Residual Smoothing Techniques: Do They Improve The Limiting Accuracy Of Iterative Solvers?
, 1999
Abstract

Cited by 2 (0 self)
Many iterative methods for solving linear systems, in particular the biconjugate gradient (BiCG) method and its "squared" version CGS (or BiCGS), often produce residuals whose norms decrease far from monotonically and instead fluctuate strongly. Large intermediate residuals are known to reduce the ultimately attainable accuracy of the method, unless special measures are taken to counteract this effect. One measure that has been suggested is residual smoothing: by application of simple recurrences, the iterates xn and the corresponding residuals rn := b − Axn are replaced by smoothed iterates yn and corresponding residuals sn := b − Ayn. We address the question whether the smoothed residuals can ultimately become markedly smaller than the primary ones. To investigate this, we present a roundoff error analysis of the smoothing algorithms. It shows that the ultimately attainable accuracy of the smoothed iterates, measured in the norm of the corresponding residuals, is, in general, not higher t...
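The smoothing recurrences mentioned in this abstract can be sketched in a few lines. The following is an illustration of minimal-residual smoothing, one common variant of the technique, not necessarily the exact algorithm the paper analyzes; the function name `smooth_residuals` and the column-wise layout of the iterates are illustrative choices:

```python
import numpy as np

def smooth_residuals(X, R):
    """Minimal-residual smoothing: given primary iterates x_n (columns of X)
    and residuals r_n (columns of R), build smoothed pairs (y_n, s_n) via
        y_n = y_{n-1} + eta_n (x_n - y_{n-1})
        s_n = s_{n-1} + eta_n (r_n - s_{n-1}),
    where eta_n is chosen to minimize ||s_n||_2."""
    y, s = X[:, 0].copy(), R[:, 0].copy()
    Ys, Ss = [y.copy()], [s.copy()]
    for n in range(1, X.shape[1]):
        d = R[:, n] - s                       # direction of the update
        denom = d @ d
        eta = 0.0 if denom == 0 else -(s @ d) / denom
        y = y + eta * (X[:, n] - y)
        s = s + eta * d
        Ys.append(y.copy())
        Ss.append(s.copy())
    return np.array(Ys).T, np.array(Ss).T
```

Because eta = 0 (keep the old smoothed residual) and eta = 1 (take the primary residual) are both admissible, the minimizing choice guarantees ||s_n|| ≤ ||s_{n-1}|| and ||s_n|| ≤ ||r_n||: the smoothed norms are monotonically non-increasing, which is exactly the cosmetic effect the paper's roundoff analysis puts in question as a route to higher final accuracy.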
On Lanczos-type methods for Wilson fermions
Abstract

Cited by 1 (0 self)
Numerical simulations of lattice gauge theories with fermions rely heavily on the iterative solution of huge sparse linear systems of equations. Due to their short recurrences, which mean small memory requirements, Lanczos-type methods (including suitable versions of the conjugate gradient method, when applicable) are best suited for this type of problem. The Wilson formulation of the lattice Dirac operator leads to a matrix with special symmetry properties that make the application of the classical biconjugate gradient method (BiCG) particularly attractive, but other methods, for example BiCGStab and BiCGStab2, have also been widely used. We discuss some of the pros and cons of these methods. In particular, we review the specific simplification of BiCG, clarify some details, and discuss general results on the roundoff behavior. 1 The symmetry properties of the Wilson fermion matrix. In the Wilson formulation of the lattice Dirac operator, where the Green's function of a single quark with bare mass ...
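The symmetry property alluded to here is γ5-hermiticity of the Wilson matrix M, i.e. Γ5 M Γ5 = M†. A toy numpy sketch (the matrix below is not a real Wilson operator, only a small matrix engineered to share the same J-Hermitian structure, with J standing in for Γ5) illustrates why this simplifies BiCG: applying J maps the Krylov space of A onto that of A†, so the shadow recurrence needs no separate products with A†:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# J plays the role of gamma_5: a Hermitian involution (J = J^H, J^2 = I).
J = np.diag(np.where(np.arange(n) % 2 == 0, 1.0, -1.0))
# Any matrix A = J H with H Hermitian is J-Hermitian: J A J = A^H.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = B + B.conj().T
A = J @ H
assert np.allclose(J @ A @ J, A.conj().T)
# Consequence exploited by BiCG: A^H (J r) = J (A r), so choosing the shadow
# residual as J r0 keeps the two Krylov sequences related by J throughout,
# and each iteration needs products with A only.
r = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.allclose(A.conj().T @ (J @ r), J @ (A @ r))
```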
Variations of Zhang's Lanczos-Type Product Method
, 2001
Abstract

Cited by 1 (0 self)
Among the Lanczos-type product methods, which are characterized by residual polynomials pn·tn that are the product of the Lanczos polynomial pn and another polynomial tn of exact degree n with tn(0) = 1, Zhang's algorithm GPBiCG has the feature that the polynomials tn are implicitly built up by a pair of coupled two-term recurrences whose coefficients are chosen so that the new residual is minimized in a 2-dimensional space. There are several ways to achieve this. We discuss here alternative algorithms that are mathematically equivalent (that is, produce in exact arithmetic the same results). The goal is to find one where the ultimate accuracy of the iterates xn is guaranteed to be high and the cost is at most slightly increased. Key words: Krylov space method, biconjugate gradients, Lanczos-type product method, BiCGxMR2, GPBiCG
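The "minimized in a 2-dimensional space" step amounts to a tiny least-squares problem: the new residual is r minus a linear combination of two known vectors, and the two coefficients are chosen to minimize its 2-norm. A generic sketch (the function name `min_residual_2d` and the vector names are illustrative, not GPBiCG's actual recurrence variables):

```python
import numpy as np

def min_residual_2d(r, u, v):
    """Choose (alpha, beta) minimizing ||r - alpha*u - beta*v||_2.
    This is least squares with the 2-column matrix [u v], the kind of
    2-dimensional minimization used to fix a pair of recurrence
    coefficients per step."""
    M = np.column_stack([u, v])
    coef, *_ = np.linalg.lstsq(M, r, rcond=None)
    alpha, beta = coef
    return alpha, beta, r - M @ coef   # minimized residual
```

By the normal equations, the minimized residual is orthogonal to both u and v; the alternative algorithms the paper compares differ in which two vectors play the roles of u and v and in how the coefficients are propagated, which is where the finite-precision behavior diverges.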
A Residual Replacement Strategy for Improving the Maximum Attainable Accuracy of CommunicationAvoiding Krylov Subspace Methods
, 2012
ML(N)BICGSTAB: REFORMULATION, ANALYSIS AND IMPLEMENTATION
Abstract
With the aid of index functions, we rederive the ML(n)BiCGStab algorithm in [35] in a more systematic way. It turns out that there are n ways to define the ML(n)BiCGStab residual vector. Each definition leads to a different ML(n)BiCGStab algorithm. We demonstrate this by deriving a second algorithm which requires less storage. We also analyze the breakdown situations from a probabilistic point of view and summarize some useful properties of ML(n)BiCGStab. Implementation issues are also addressed. We discuss in detail the choices of the parameters in ML(n)BiCGStab and their effects on the performance of the algorithm.
On the convergence behaviour of IDR(s) (Delft University of Technology, The Netherlands)
, 2010
Abstract
An explanation is given of the convergence behaviour of the IDR(s) methods. The convergence of the IDR(s) algorithms has two components. The first consists of the damping properties of certain factors in the residual polynomials, which become less important for large values of s. The second component depends on the behaviour of quasi-Lanczos polynomials that occur in the theoretical description. In this paper, this second component is analysed, the convergence behaviour of the methods is explained, and an expectation is given for the rate of convergence.
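The "damping factors" referred to are linear factors of the form (I − ωj A) in the residual polynomial. Each ωj is typically chosen, as in BiCGStab, to minimize the norm of the damped residual at that step. A minimal sketch of that one-dimensional minimization (the function name is illustrative):

```python
import numpy as np

def damping_omega(A, r):
    """One BiCGStab/IDR(s)-style damping step: pick omega minimizing
    ||(I - omega*A) r||_2, which gives omega = <Ar, r> / <Ar, Ar>."""
    t = A @ r
    omega = (t @ r) / (t @ t)
    return omega, r - omega * t   # damped residual, orthogonal to A r
```

Since ω = 0 is admissible, the damped residual is never longer than r; the paper's point is that for larger s this local damping contributes less, and the quasi-Lanczos component dominates the observed convergence.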
A RESIDUAL REPLACEMENT STRATEGY FOR IMPROVING THE MAXIMUM ATTAINABLE ACCURACY OF S-STEP KRYLOV SUBSPACE METHODS
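The residual replacement idea named in this title (and the communication-avoiding variant above) addresses the same attainable-accuracy problem: in finite precision the recursively updated residual drifts away from the true residual b − Ax, so it is occasionally replaced by a freshly computed one. A crude sketch on plain CG, with a simple periodic trigger standing in for the drift-based replacement criteria these papers actually develop (names and the `replace_every` heuristic are illustrative):

```python
import numpy as np

def cg_with_residual_replacement(A, b, tol=1e-10, maxit=500, replace_every=50):
    """Plain CG with periodic residual replacement: every `replace_every`
    steps the recursive residual is overwritten by the true residual
    b - A x, resynchronizing the recurrence with the computed iterate."""
    x = np.zeros(len(b))
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if k % replace_every == 0:
            r = b - A @ x          # replacement: resync with the true residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            return x, k
        p = r + (rs_new / rs) * p  # standard CG direction update
        rs = rs_new
    return x, maxit
```

The practical strategies in the papers trigger replacement only when an error estimate says the drift has grown large, since replacing too often destroys the superlinear convergence the recurrence provides.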
ML(N)BICGSTAB: REFORMULATION, ANALYSIS AND IMPLEMENTATION, MAN-CHUNG YEUNG
Abstract
Abstract. With the aid of index functions, we rederive the ML(n)BiCGStab algorithm in [39] more systematically. There are n ways to define the ML(n)BiCGStab residual vector. Each definition leads to a different ML(n)BiCGStab algorithm. We demonstrate this by presenting a second algorithm which requires less storage. In theory, this second algorithm serves as a bridge that connects the Lanczos-based BiCGStab and the Arnoldi-based FOM, while ML(n)BiCG is a bridge connecting BiCG and FOM. We also analyze the breakdown situations from a probabilistic point of view and summarize some useful properties of ML(n)BiCGStab. Implementation issues are also addressed.
ERROR ANALYSIS OF THE S-STEP LANCZOS METHOD IN FINITE PRECISION
, 2014
ANALYSIS OF THE FINITE PRECISION S-STEP BICONJUGATE GRADIENT METHOD
, 2014