Results 1 – 7 of 7
QMR: a Quasi-Minimal Residual Method for Non-Hermitian Linear Systems
, 1991
Abstract

Cited by 334 (26 self)
... In this paper, we present a novel BCG-like approach, the quasi-minimal residual (QMR) method, which overcomes the problems of BCG. An implementation of QMR based on a look-ahead version of the nonsymmetric Lanczos algorithm is proposed. It is shown how BCG iterates can be recovered stably from the QMR process. Some further properties of the QMR approach are given and an error bound is presented. Finally, numerical experiments are reported.
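A QMR implementation ships with SciPy's sparse linear algebra module; a minimal sketch on a small non-Hermitian test system (the matrix, size, and seed below are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy.sparse.linalg import qmr

# Hypothetical test problem: a well-conditioned non-Hermitian matrix.
rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # nonsymmetric perturbation of I
b = rng.standard_normal(n)

x, info = qmr(A, b)                  # info == 0 signals convergence
residual = np.linalg.norm(b - A @ x)
```

`scipy.sparse.linalg.qmr` implements the quasi-minimal residual approach; for matrices this well-conditioned it converges at the default tolerance without preconditioning.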
Iterative Solution of Linear Systems
 Acta Numerica
, 1992
Abstract

Cited by 100 (8 self)
this paper is as follows. In Section 2, we present some background material on general Krylov subspace methods, of which CG-type algorithms are a special case. We recall the outstanding properties of CG and discuss the issue of optimal extensions of CG to non-Hermitian matrices. We also review GMRES and related methods, as well as CG-like algorithms for the special case of Hermitian indefinite linear systems. Finally, we briefly discuss the basic idea of preconditioning. In Section 3, we turn to Lanczos-based iterative methods for general non-Hermitian linear systems. First, we consider the nonsymmetric Lanczos process, with particular emphasis on the possible breakdowns and potential instabilities in the classical algorithm. Then we describe recent advances in understanding these problems and overcoming them by using look-ahead techniques. Moreover, we describe the quasi-minimal residual algorithm (QMR) proposed by Freund and Nachtigal (1990), which uses the look-ahead Lanczos process to obtain quasi-optimal approximate solutions. Next, a survey of transpose-free Lanczos-based methods is given. We conclude this section with comments on other related work and some historical remarks. In Section 4, we elaborate on CGNR and CGNE and we point out situations where these approaches are optimal. The general class of Krylov subspace methods also contains parameter-dependent algorithms that, unlike CG-type schemes, require explicit information on the spectrum of the coefficient matrix. In Section 5, we discuss recent insights in obtaining appropriate spectral information for parameter-dependent Krylov subspace methods. After that, ...
R.W. Freund, G.H. Golub and N.M. Nachtigal
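The CGNR scheme elaborated on in Section 4 amounts to running CG on the (symmetric positive definite) normal equations AᵀA x = Aᵀb; a minimal sketch using SciPy, assuming a random well-conditioned test matrix (names, size, and seed are illustrative):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)
n = 40
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # nonsingular, nonsymmetric
b = rng.standard_normal(n)

# CGNR: apply CG to A^T A x = A^T b. The operator A^T A is applied as two
# matvecs per iteration and never formed explicitly.
AtA = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v))
x, info = cg(AtA, A.T @ b)           # info == 0 signals convergence
```

Since squaring the matrix squares its condition number, CGNR is optimal only in special situations (as the survey points out), but the sketch shows how cheaply it can be set up.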
On conjugate gradient type methods and polynomial preconditioners for a class of complex non-Hermitian matrices
 NUMER. MATH
, 1990
Abstract

Cited by 28 (3 self)
We consider conjugate gradient type methods for the solution of large linear systems Az = b with complex coefficient matrices of the type A = T + iσI, where T is Hermitian and σ a real scalar. Three different conjugate gradient type approaches with iterates defined by a minimal residual property, a Galerkin type condition, and a Euclidean error minimization, respectively, are investigated. In particular, we propose numerically stable implementations based on the ideas behind Paige and Saunders's SYMMLQ and MINRES for real symmetric matrices and derive error bounds for all three methods. It is shown how the special shift structure of A can be preserved by using polynomial preconditioning, and results on the optimal choice of the polynomial preconditioner are given. Also, we report on some numerical experiments for matrices arising from finite difference approximations to the complex Helmholtz equation.
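The special shift structure can be illustrated directly: restricting to T real symmetric and σ real (a hypothetical test matrix and shift value below), A = T + iσI is complex symmetric (Aᵀ = A) but not Hermitian once σ ≠ 0:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
S = rng.standard_normal((n, n))
T = 0.5 * (S + S.T)               # real symmetric (hence Hermitian) part
sigma = 0.7                       # real shift (illustrative value)
A = T + 1j * sigma * np.eye(n)

symmetric = np.allclose(A, A.T)          # complex symmetric: holds
hermitian = np.allclose(A, A.conj().T)   # Hermitian: fails for sigma != 0
```

It is this complex symmetry, rather than Hermitian symmetry, that the short-recurrence methods in the paper exploit.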
Chebyshev approximation via polynomial mappings and the convergence behaviour of Krylov subspace methods
 Electr. Trans. Numer. Anal
, 2001
Abstract

Cited by 9 (0 self)
Abstract. Let ϕ_m be a polynomial satisfying some mild conditions. Given a set R ⊂ C, a continuous function f on R and its best approximation p*_{n−1} from Π_{n−1} with respect to the maximum norm, we show that p*_{n−1} ∘ ϕ_m is a best approximation to f ∘ ϕ_m on the inverse polynomial image S of R, i.e. ϕ_m(S) = R, where the extremal signature is given explicitly. A similar result is presented for constrained Chebyshev polynomial approximation. Finally, we apply the obtained results to the computation of the convergence rate of Krylov subspace methods when applied to a preconditioned linear system. We investigate pairs of preconditioners where the eigenvalues are contained in sets S and R, respectively, which are related by ϕ_m(S) = R. Key words. Chebyshev polynomial, optimal polynomial, extremal signature, Krylov subspace method, convergence rate. AMS subject classifications. 41A10, 30E10, 65F10.
1. Notations and statement of the problem. Let R ⊂ C denote a compact subset of the complex plane and let C(R) be the set of continuous functions on R. For f ∈ C(R) we denote by ‖f‖_R := max_{z∈R} |f(z)| the uniform norm on R. Furthermore, let g_1, g_2, ..., g_n ∈ C(R) be linearly independent functions with V_n := span{g_1, g_2, ..., g_n}. Then the best approximation g* of f with respect to V_n on R is the solution of the complex Chebyshev ...
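One half of the claim is immediate: since ϕ_m maps S onto R, composing with ϕ_m preserves uniform norms, so the error of p*_{n−1} ∘ ϕ_m on S equals the best-approximation error on R:

```latex
\left\| f\circ\varphi_m - p^{*}_{n-1}\circ\varphi_m \right\|_S
  = \max_{z\in S}\bigl|\,(f - p^{*}_{n-1})(\varphi_m(z))\,\bigr|
  = \max_{w\in R}\bigl|\,(f - p^{*}_{n-1})(w)\,\bigr|
  = \left\| f - p^{*}_{n-1} \right\|_R .
```

The substantive part of the paper is the converse direction, that this error cannot be improved on S, which is where the explicitly given extremal signature enters.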
The Chebyshev iteration revisited
, 2002
Abstract

Cited by 5 (0 self)
Compared to Krylov space methods based on orthogonal or oblique projection, the Chebyshev iteration does not require inner products and is therefore particularly suited for massively parallel computers with high communication cost. Here, six different algorithms that implement this method are presented and compared with respect to roundoff effects, in particular, the ultimately achievable accuracy. Two of these algorithms replace the three-term recurrences by more accurate coupled two-term recurrences and seem to be new. It is also shown that, for real data, the classical three-term Chebyshev iteration is never seriously affected by roundoff, in contrast to the corresponding version of the conjugate gradient method. Even for complex data, strong roundoff effects are seen to be limited to very special situations where convergence is anyway slow. The Chebyshev iteration is applicable to symmetric definite linear systems and to nonsymmetric matrices whose eigenvalues are known to be confined to an elliptic domain that does not include the origin. Also considered is a corresponding stationary 2-step method, which has the same asymptotic convergence behavior and is additionally suitable for mildly nonlinear problems.
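As a point of reference, the classical three-term variant discussed above can be sketched as follows; this is a minimal Python rendering of the standard recurrence (in the form given, e.g., by Saad), with a hypothetical test matrix and spectral interval:

```python
import numpy as np

def chebyshev_iteration(A, b, lam_min, lam_max, x0=None, maxiter=50):
    """Three-term Chebyshev iteration for A x = b, assuming the eigenvalues
    of A lie in the real interval [lam_min, lam_max] with 0 < lam_min.
    Note: no inner products are computed, only matrix-vector products."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    theta = 0.5 * (lam_max + lam_min)   # center of the spectral interval
    delta = 0.5 * (lam_max - lam_min)   # half-width of the interval
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    d = r / theta
    for _ in range(maxiter):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

# Tiny demo: diagonal SPD test matrix with spectrum inside [1, 10].
A = np.diag(np.linspace(1.0, 10.0, 10))
b = np.ones(10)
x = chebyshev_iteration(A, b, 1.0, 10.0)
```

The absence of inner products in the loop is exactly the property that makes the method attractive on machines with high communication cost.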
Krylov Subspace Methods for Large Linear Systems of Equations
, 1993
Abstract

Cited by 1 (0 self)
When solving PDEs by means of numerical methods one often has to deal with large systems of linear equations, specifically if the PDE is time-independent or if the time-integrator is implicit. For real-life problems, these large systems can often only be solved by means of some iterative method. Even if the systems are preconditioned, the basic iterative method often converges slowly or even diverges. We discuss and classify algebraic techniques to accelerate the basic iterative method. Our discussion includes methods like CG, GCR, ORTHODIR, GMRES, CGNR, BiCG and their modifications like GMRESR, CGS, BiCGSTAB. We place them in a common framework, discuss their convergence behavior and their advantages and drawbacks.
1. Introduction. Our aim is to compute acceptable approximations for the solution x of the equation Ax = b, (1) where A and b are given, A is a nonsingular, sparse n × n matrix, n is large, and b is an n-vector. We will assume A and b to be real, but our methods are easily ...
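Several of the methods surveyed here have ready-made SciPy implementations; a minimal sketch comparing restarted GMRES with the transpose-free BiCGSTAB on a hypothetical convection-diffusion-like tridiagonal system (size and coefficients are illustrative, chosen diagonally dominant so both solvers converge unpreconditioned):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, bicgstab

# Nonsymmetric, strictly diagonally dominant tridiagonal test matrix.
n = 100
A = diags([-1.2 * np.ones(n - 1), 3.0 * np.ones(n), -0.8 * np.ones(n - 1)],
          [-1, 0, 1], format="csr")
b = np.ones(n)

x_g, info_g = gmres(A, b)        # restarted GMRES, default settings
x_b, info_b = bicgstab(A, b)     # transpose-free BiCG-type alternative
```

For harder, less dominant systems the survey's point applies: the basic iterations may stall, and the choice among these accelerators (and of a preconditioner) becomes the real question.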
FABER POLYNOMIALS OF MATRICES FOR NON-CONVEX SETS
, 2013
Abstract
For Paul Sablonnière, on the occasion of his sixty-fifth birthday.
ABSTRACT. It has been recently shown that ‖F_n(A)‖ ≤ 2, where A is a linear continuous operator acting in a Hilbert space, and F_n is the Faber polynomial of degree n corresponding to some convex compact E ⊂ C containing the numerical range of A. Such an inequality is useful in numerical linear algebra; for instance, it allows one to derive error bounds for Krylov subspace methods. In the present paper we extend this result to not necessarily convex sets E.
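The previously known convex-set inequality that the paper starts from can be stated compactly; writing W(A) = {⟨Ax, x⟩ : ‖x‖ = 1} for the numerical range of A:

```latex
W(A) \subseteq E, \quad E \subset \mathbb{C} \text{ convex and compact}
  \;\Longrightarrow\;
  \left\| F_n(A) \right\| \le 2 \qquad (n \ge 1),
```

and the contribution here is to remove the convexity assumption on E.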