Results 1 - 10 of 21
Iterative Solution of Linear Systems
Acta Numerica, 1992
Abstract

Cited by 101 (8 self)
... this paper is as follows. In Section 2, we present some background material on general Krylov subspace methods, of which CG-type algorithms are a special case. We recall the outstanding properties of CG and discuss the issue of optimal extensions of CG to non-Hermitian matrices. We also review GMRES and related methods, as well as CG-like algorithms for the special case of Hermitian indefinite linear systems. Finally, we briefly discuss the basic idea of preconditioning. In Section 3, we turn to Lanczos-based iterative methods for general non-Hermitian linear systems. First, we consider the nonsymmetric Lanczos process, with particular emphasis on the possible breakdowns and potential instabilities in the classical algorithm. Then we describe recent advances in understanding these problems and overcoming them by using look-ahead techniques. Moreover, we describe the quasi-minimal residual algorithm (QMR) proposed by Freund and Nachtigal (1990), which uses the look-ahead Lanczos process to obtain quasi-optimal approximate solutions. Next, a survey of transpose-free Lanczos-based methods is given. We conclude this section with comments on other related work and some historical remarks. In Section 4, we elaborate on CGNR and CGNE and point out situations where these approaches are optimal. The general class of Krylov subspace methods also contains parameter-dependent algorithms that, unlike CG-type schemes, require explicit information on the spectrum of the coefficient matrix. In Section 5, we discuss recent insights in obtaining appropriate spectral information for parameter-dependent Krylov subspace methods. After that, ... (R. W. Freund, G. H. Golub and N. M. Nachtigal)
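As an illustrative aside (not part of the cited survey), the CG method mentioned above admits a very compact implementation. The following NumPy sketch of the classical conjugate gradient iteration for a Hermitian positive definite system is ours; all names and the tolerance are illustrative choices.

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=200):
    """Minimal conjugate gradient sketch for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # first search direction
    rho = r @ r
    for _ in range(maxiter):
        if np.sqrt(rho) < tol:
            break
        Ap = A @ p
        alpha = rho / (p @ Ap)        # step length minimizing the A-norm error
        x += alpha * p
        r -= alpha * Ap
        rho_new = r @ r
        p = r + (rho_new / rho) * p   # new A-conjugate search direction
        rho = rho_new
    return x

# small SPD example
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg(A, b)
```

In exact arithmetic CG terminates in at most n steps for an n-by-n system; the survey's point is that such optimal short-recurrence methods do not extend straightforwardly to non-Hermitian matrices.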
Estimates in Quadratic Formulas
, 1994
Abstract

Cited by 26 (7 self)
Let A be a real symmetric positive definite matrix. We consider three particular questions, namely estimates for the error in linear systems Ax = b, minimizing the quadratic functional min_x (x^T A x - 2 b^T x) subject to the constraint ||x|| = α, α < ||A^{-1} b||, and estimates for the entries of the matrix inverse A^{-1}. All of these questions can be formulated as the problem of finding an estimate, or an upper and lower bound, on u^T F(A) u, where F(A) = A^{-1} resp. F(A) = A^{-2} and u is a real vector. This problem can be considered in terms of estimates in Gauß-type quadrature formulas, which can be computed effectively by exploiting the underlying Lanczos process. Using this approach, we first recall the exact-arithmetic solution of the questions formulated above and then analyze the effect of rounding errors in the quadrature calculations. It is proved that the basic relation between the accuracy of Gauß quadrature for f(λ) = λ^{-1} and the rate of ...
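The Gauß-quadrature estimate of u^T A^{-1} u described above can be sketched directly: k Lanczos steps started from u yield a tridiagonal matrix T_k whose (1,1) inverse entry, scaled by ||u||^2, is the k-point Gauß estimate. This is our own minimal sketch, not the paper's algorithm; it ignores breakdown and the rounding-error issues the paper analyzes.

```python
import numpy as np

def gauss_estimate_inv(A, u, k):
    """Estimate u^T A^{-1} u via k Lanczos steps (Gauss quadrature rule)."""
    n = len(u)
    beta0 = np.linalg.norm(u)
    v_prev = np.zeros(n)
    v = u / beta0                      # first Lanczos vector
    alphas, betas = [], []
    for j in range(k):
        w = A @ v - (betas[-1] if betas else 0.0) * v_prev
        alpha = v @ w                  # diagonal entry of T
        w -= alpha * v
        beta = np.linalg.norm(w)       # off-diagonal entry of T
        alphas.append(alpha)
        if j < k - 1:
            betas.append(beta)
            v_prev, v = v, w / beta
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    e1 = np.zeros(k); e1[0] = 1.0
    return beta0**2 * (e1 @ np.linalg.solve(T, e1))   # ||u||^2 (T^{-1})_{11}

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
u = np.array([1.0, 2.0, 0.5])
est = gauss_estimate_inv(A, u, k=3)
exact = u @ np.linalg.solve(A, u)
```

With k equal to the dimension (and no breakdown), the estimate is exact in exact arithmetic; for k smaller it gives the lower bound of the Gauß rule for F(A) = A^{-1}.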
Differences in the effects of rounding errors in Krylov solvers for symmetric indefinite linear systems
, 1999
Abstract

Cited by 15 (0 self)
The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems by solving the reduced system in one way or another. This leads to well-known methods: MINRES, GMRES, and SYMMLQ. We will discuss in what way and to what extent these approaches differ in their sensitivity to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods, and we will not consider the errors in the Lanczos process itself. We will show that the method of solution may lead, under certain circumstances, to large additional errors that are not corrected by continuing the iteration process. Our findings are supported and illustrated by numerical examples.
1 Introduction
We will consider iterative methods for the construction of approximate solutions, starting with ...
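To make "solving the reduced system" concrete, here is our own explicit sketch (not the paper's code): k Lanczos steps produce A V_k = V_{k+1} T_ext with T_ext tridiagonal, and a MINRES-style iterate minimizes ||β e_1 - T_ext y|| over the Krylov space. Real MINRES uses short recurrences instead of storing the full basis; the names below are ours.

```python
import numpy as np

def lanczos(A, b, k):
    """k steps of the 3-term Lanczos process for symmetric A: returns V (n x k)
    with orthonormal columns and the (k+1) x k tridiagonal T_ext with A V = V_ext T_ext."""
    n = len(b)
    V = np.zeros((n, k + 1))
    T = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        if j > 0:
            w -= T[j, j - 1] * V[:, j - 1]   # subtract beta_j v_{j-1}
        T[j, j] = V[:, j] @ w                # alpha_j
        w -= T[j, j] * V[:, j]
        T[j + 1, j] = np.linalg.norm(w)      # beta_{j+1}
        if j + 1 < k:
            T[j, j + 1] = T[j + 1, j]        # symmetry of the tridiagonal
        if T[j + 1, j] > 1e-14:
            V[:, j + 1] = w / T[j + 1, j]
    return V[:, :k], T

def minres_reduced(A, b, k):
    """MINRES-style iterate: minimize ||beta e_1 - T_ext y||, then x = V y."""
    V, T = lanczos(A, b, k)
    rhs = np.zeros(k + 1)
    rhs[0] = np.linalg.norm(b)
    y, *_ = np.linalg.lstsq(T, rhs, rcond=None)
    return V @ y

# symmetric indefinite example
A = np.array([[2.0, 1.0, 0.0], [1.0, -3.0, 1.0], [0.0, 1.0, 1.0]])
b = np.array([1.0, 0.0, 0.0])
x = minres_reduced(A, b, 3)
```

A Galerkin variant (solving the square part of T directly, as SYMMLQ-related schemes effectively do) uses the same basis but a different reduced solve, which is exactly where the rounding-error behavior can differ.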
Iterative Solution Methods for Large Linear Discrete Ill-Posed Problems
, 1998
Abstract

Cited by 8 (5 self)
This paper discusses iterative methods for the solution of very large, severely ill-conditioned linear systems of equations that arise from the discretization of linear ill-posed problems. The right-hand side vector represents the given data and is assumed to be contaminated by errors. Solution methods proposed in the literature employ some form of filtering to reduce the influence of the error in the right-hand side on the computed approximate solution. The amount of filtering is determined by a parameter, often referred to as the regularization parameter. We discuss how the filtering affects the computed approximate solution and consider the selection of the regularization parameter. Methods in which a suitable value of the regularization parameter is determined during the computation, without user intervention, are emphasized. New iterative solution methods based on expanding explicitly chosen filter functions in terms of Chebyshev polynomials are presented. The properties of these methods are illustrated with applications to image restoration.
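The building block of such Chebyshev-expansion methods is applying a Chebyshev series of the matrix to a vector via the three-term recurrence, without forming any matrix function. The sketch below is ours, not the paper's method; the coefficients, interval, and names are illustrative assumptions.

```python
import numpy as np

def cheb_filter_apply(A, b, coeffs, lo, hi):
    """Evaluate p(A) b for the Chebyshev series p = sum_j coeffs[j] T_j,
    assuming the spectrum of A lies in [lo, hi] (mapped onto [-1, 1])."""
    # M = (2A - (hi+lo) I) / (hi - lo), applied matrix-free
    apply_M = lambda v: (2.0 * (A @ v) - (hi + lo) * v) / (hi - lo)
    t_prev, t = b, apply_M(b)                      # T_0(M) b and T_1(M) b
    y = coeffs[0] * t_prev + (coeffs[1] * t if len(coeffs) > 1 else 0)
    for c in coeffs[2:]:
        t_prev, t = t, 2.0 * apply_M(t) - t_prev   # three-term recurrence
        y = y + c * t
    return y

# diagonal example so the result is easy to verify entrywise
A = np.diag([0.5, 1.5, 2.5])
b = np.ones(3)
coeffs = [0.2, -0.3, 0.5, 0.1]   # hypothetical filter coefficients
y = cheb_filter_apply(A, b, coeffs, 0.0, 3.0)
```

For a diagonal A, each entry of the result equals the scalar Chebyshev series evaluated at the mapped eigenvalue, which makes the recurrence easy to check against `numpy.polynomial.chebyshev.chebval`.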
By How Much Can Residual Minimization Accelerate The Convergence Of Orthogonal Residual Methods?
, 2001
Abstract

Cited by 7 (6 self)
We capitalize upon the known relationship between pairs of orthogonal and minimal residual methods (or biorthogonal and quasi-minimal residual methods) in order to estimate how much smaller the residuals or quasi-residuals of the minimizing methods can be compared to those of the corresponding Galerkin or Petrov-Galerkin method. Examples of such pairs are the conjugate gradient (CG) and the conjugate residual (CR) methods, the full orthogonalization method (FOM) and the generalized minimal residual (GMRes) method, the CGNE and CGNR versions of applying CG to the normal equations, as well as the biconjugate gradient (BiCG) and the quasi-minimal residual (QMR) methods. The pair consisting of the (bi)conjugate gradient squared (CGS) and the transpose-free QMR (TFQMR) methods can also be added to this list if the residuals at half-steps are included, and further examples can be created easily. The analysis is more generally applicable to the minimal residual (MR) and quasi-minimal residual (QMR) smoothing processes, which are known to provide the transition from the results of the first method of such a pair to those of the second one. By an interpretation of these smoothing processes in coordinate space we deepen the understanding of some of the underlying relationships and introduce a unifying framework for minimal residual and quasi-minimal residual smoothing. This framework includes the general notion of QMR-type methods.
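MR smoothing, the transition mechanism mentioned above, is simple to state: given the residuals r_k of the first method of a pair, set s_k = s_{k-1} + η_k (r_k - s_{k-1}) with η_k minimizing ||s_k||, so that ||s_k|| can never exceed ||s_{k-1}|| or ||r_k||. The following sketch (our own illustration, applied to arbitrary vectors rather than residuals of a real solver) shows the update.

```python
import numpy as np

def mr_smoothing(residuals):
    """Minimal residual smoothing: s_k = s_{k-1} + eta_k (r_k - s_{k-1}),
    with eta_k chosen to minimize ||s_k||_2. Returns the smoothed norms."""
    s = residuals[0].copy()
    norms = [np.linalg.norm(s)]
    for r in residuals[1:]:
        d = r - s
        denom = d @ d
        if denom > 0:
            eta = -(s @ d) / denom   # minimizer of ||s + eta d||^2
            s = s + eta * d
        norms.append(np.linalg.norm(s))
    return norms

rng = np.random.default_rng(1)
residuals = [rng.standard_normal(5) for _ in range(8)]
norms = mr_smoothing(residuals)
```

Because η = 0 and η = 1 are always admissible, the smoothed norms are monotonically nonincreasing and bounded by the smallest input residual norm, which is exactly the relationship the paper quantifies.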
Deflated and augmented Krylov subspace methods: A framework for deflated . . .
, 2013
Abstract

Cited by 3 (1 self)
We present an extension of the framework of Gaul et al. (SIAM J. Matrix Anal. Appl. 34, 495-518 (2013)) for deflated and augmented Krylov subspace methods satisfying a Galerkin condition to more general Petrov-Galerkin conditions. The main goal is to apply the framework also to the biconjugate gradient method (BiCG) and some of its generalizations, including BiCGStab. The approach does not depend on particular recurrences and thus simplifies the derivation of theoretical results. It easily leads to a variety of realizations by specific algorithms. We do not go into algorithmic details, but we show that for every method there are two different approaches for extending it by augmentation and deflation: one that explicitly takes care of the augmentation space in every step, and one that applies the unchanged basic algorithm to a projected problem but requires a correction step at the end. Both typically generate a Krylov space for a singular operator that is associated with the projected problem. The deflated biconjugate gradient method requires two such Krylov spaces, but it also allows us to solve two dual linear systems at once. Deflated Lanczos-type product methods fit in our new framework too. The question of how to extract the augmentation and deflation subspace is not addressed here.
From Orthogonal Polynomials To Iteration Schemes For Linear Systems: CG and CR Revisited
Abstract

Cited by 2 (0 self)
Large systems of linear equations arise frequently in numerical analysis and form the basis of many models in engineering and other applied sciences. This note studies the solution of Hermitian linear systems. One feature that distinguishes this paper from the usual literature on polynomial-based iteration methods is its emphasis on the properties of the underlying polynomials rather than on more conventional matrix manipulations. In particular, a development and discussion of the properties of orthogonal polynomials leads to a unified analysis of the state-of-the-art conjugate gradient (CG) and conjugate residual (CR) methods.
A framework for generalized conjugate gradient methods, with special emphasis on contributions by Rüdiger Weiss
, 2002
Spectral deflation in Krylov solvers: A theory of coordinate space based methods
 ETNA
Abstract

Cited by 2 (1 self)
Abstract. For the iterative solution of large sparse linear systems we develop a theory for a family of augmented and deflated Krylov space solvers that are coordinate based in the sense that the given problem is transformed into one that is formulated in terms of the coordinates with respect to the augmented bases of the Krylov subspaces. Except for the augmentation, the basis is as usual generated by an Arnoldi or Lanczos process, but now with a deflated, singular matrix. The idea behind deflation is to explicitly annihilate certain eigenvalues of the system matrix, typically eigenvalues of small absolute value. The deflation of the matrix is based on an either orthogonal or oblique projection onto a subspace that is complementary to the deflated, approximately invariant subspace. While an orthogonal projection allows us to find minimal residual norm solutions, the oblique projections, which we favor when the matrix is non-Hermitian, allow us, in the case of an exactly invariant subspace, to correctly deflate both the right and the corresponding left (possibly generalized) eigenspaces of the matrix, so that convergence only depends on the non-deflated eigenspaces. The minimality of the residual is replaced by the minimality of a quasi-residual. Among the methods that we treat are primarily deflated versions of GMRES, MINRES, and QMR, but we also extend our approach to deflated, coordinate space based versions of other Krylov space methods, including variants of CG and BiCG. Numerical results will be published elsewhere.
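The core deflation idea, annihilating selected eigenvalues by projection, is easy to demonstrate in the symmetric case with an orthogonal projection. The example below is our own toy illustration (not the paper's construction): for an exactly invariant subspace spanned by orthonormal eigenvectors U, the deflated operator P A P with P = I - U U^T moves the corresponding eigenvalues to zero and leaves the rest untouched.

```python
import numpy as np

def deflate_symmetric(A, U):
    """Orthogonal-projection deflation: with orthonormal U spanning an exactly
    invariant subspace of symmetric A, P A P (P = I - U U^T) annihilates it."""
    P = np.eye(A.shape[0]) - U @ U.T
    return P @ A @ P

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))       # random orthonormal eigenvectors
A = Q @ np.diag([1e-3, 1e-2, 1.0, 2.0, 3.0, 4.0]) @ Q.T  # SPD with two tiny eigenvalues
U = Q[:, :2]             # exact eigenvectors of the two small eigenvalues
B = deflate_symmetric(A, U)
```

A Krylov solver applied to the singular operator B then converges as if the two tiny eigenvalues were absent; the oblique projections favored in the non-Hermitian case generalize this picture to left and right eigenspaces.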
A Chebyshev-like semi-iteration for inconsistent linear systems
Elec. Trans. Numer. Anal., 1993
Abstract

Cited by 2 (0 self)
Abstract. Semi-iterative methods are known as a powerful tool for the iterative solution of nonsingular linear systems of equations. For singular but consistent linear systems with a coefficient matrix of index one, one can still apply the methods designed for the nonsingular case. However, if the system is inconsistent, the approximations usually fail to converge. Nevertheless, it is still possible to modify classical methods like the Chebyshev semi-iterative method in order to fulfill the additional convergence requirements caused by the inconsistency. These modifications may suffer from instabilities, since they are based on the computation of the diverging Chebyshev iterates. In this paper we develop an alternative algorithm which allows one to construct more stable approximations. This algorithm can be implemented efficiently with short recurrences. There are several reasons indicating that the new algorithm is the most natural generalization of the Chebyshev semi-iteration to inconsistent linear systems. Key words. Semi-iterative methods, singular systems, Zolotarev problem, orthogonal polynomials.
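For background, here is the classical Chebyshev semi-iteration that the paper generalizes, in the standard short-recurrence form for a nonsingular system with spectrum inside a known interval [lo, hi], 0 < lo. This sketch is ours and handles only the nonsingular, consistent case; it is precisely the scheme whose iterates diverge on inconsistent singular systems, motivating the paper's modification.

```python
import numpy as np

def chebyshev_iteration(A, b, lo, hi, iters):
    """Classical Chebyshev semi-iteration for Ax = b, spectrum in [lo, hi], 0 < lo."""
    theta = (hi + lo) / 2.0   # center of the spectral interval
    delta = (hi - lo) / 2.0   # half-width of the interval
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    x = np.zeros_like(b)
    r = b - A @ x
    d = r / theta             # first correction
    for _ in range(iters):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r  # Chebyshev recurrence
        rho = rho_new
    return x

# diagonal test problem with known spectral bounds
A = np.diag([1.0, 2.0, 5.0, 10.0])
b = np.ones(4)
x = chebyshev_iteration(A, b, 1.0, 10.0, 60)
```

Unlike the CG-type methods above, this scheme needs the spectral bounds lo and hi explicitly; its error decreases like the inverse of a scaled Chebyshev polynomial evaluated at sigma1.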