## Preconditioning techniques for large linear systems: A survey (2002)

Venue: J. Comput. Phys.

Citations: 122 (5 self)

### BibTeX

```bibtex
@ARTICLE{Benzi02preconditioningtechniques,
  author  = {Michele Benzi},
  title   = {Preconditioning techniques for large linear systems: A survey},
  journal = {J. Comput. Phys.},
  year    = {2002},
  volume  = {182},
  pages   = {418--477}
}
```


### Abstract

This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.

### Citations

1569 | Practical Optimization
- Gill, Murray, et al.
Citation Context: ...e pi chosen is too small, the triangular solves with L and Lᵀ can become unstable; i.e., ‖I − L̄⁻¹AL̄⁻ᵀ‖F is large. Some heuristics can be found in [190, 277] and in the optimization literature [154, 260] (see also [204]). Unfortunately, it is frequently the case that even a handful of pivot shifts will result in a poor preconditioner. The problem is that if a nonpositive pivot occurs, the loss of inf...
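The pivot-shifting idea the excerpt alludes to can be sketched with a Manteuffel-style global diagonal shift: when factorization hits a nonpositive pivot, retry on A + αI with a growing α. This sketch is mine (a complete dense Cholesky stands in for the incomplete factorizations under discussion, and the doubling heuristic is an illustrative choice, not the schemes of [154, 260]):

```python
import numpy as np

def shifted_cholesky(A, alpha0=1e-3):
    """Retry Cholesky on A + alpha*I, doubling the shift after each
    breakdown (nonpositive pivot). Returns the factor and the shift used."""
    alpha = 0.0
    while True:
        try:
            L = np.linalg.cholesky(A + alpha * np.eye(A.shape[0]))
            return L, alpha
        except np.linalg.LinAlgError:   # matrix not positive definite
            alpha = alpha0 if alpha == 0.0 else 2.0 * alpha

# Indefinite example: one negative eigenvalue forces a shift larger than 0.5
A = np.diag([1.0] * 9 + [-0.5])
L, alpha = shifted_cholesky(A)
print(alpha)
```

As the excerpt notes, a large shift that makes the factorization succeed can still yield a poor preconditioner, since A + αI may be far from A.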

1517 | GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems
- Saad, Schultz
- 1986
Citation Context: ...earchers turned their attention to the nonsymmetric case. This work, which continued throughout the 1980s, culminated with the development of the GMRES algorithm by Saad and Schultz [253], the QMR algorithm by Freund and Nachtigal [148], and the Bi-CGSTAB method by van der Vorst [280]. For an excellent discussion of developments up to about 1990, see [147]. In the 1990s, research on K...

874 | A fast and high quality multilevel scheme for partitioning irregular graphs
- Karypis, Kumar
- 1998
Citation Context: ...rs being used (for a fixed problem size). Very briefly, these algorithms consist of the following steps: graph partitioning (e.g., using the highly efficient techniques described in [188]), incomplete elimination of interior nodes in a subdomain before boundary nodes, and coloring the subdomains to process the boundary nodes in parallel. For ILU(0), such an algorithm has been describe...

867 | Multigrid Methods and Applications
- Hackbusch
- 1985
Citation Context: ...ods, which allow for the use of variable preconditioners [157, 230, 247, 269]. Another crucial event in the area of iterative methods is the development of multigrid methods by Brandt [61], Hackbusch [166], and others (see also the early papers by Fedorenko [138, 139]). Up-to-date treatments can be found in [59, 68, 273]. These methods can be interpreted as (very sophisticated) stationary iterative sch...

794 | Methods of conjugate gradients for solving linear systems
- Hestenes, Stiefel
- 1952
Citation Context: ...y, of this field. The history of Krylov subspace methods can be briefly summed up as follows (see [155, 255] for more detailed historical information). In 1952, Lanczos [199] and Hestenes and Stiefel [171] discovered (independently and almost simultaneously) the method of conjugate gradients (CG) for solving linear systems with a symmetric and positive definite matrix A. This method was initially regar...

538 | Computer Solution of Large Sparse Positive Definite Systems
- George, Liu
- 1981
Citation Context: ... industrial codes, especially where reliability is the primary concern. Indeed, direct solvers are very robust, and they tend to require a predictable amount of resources in terms of time and storage [121, 150]. With a state-of-the-art sparse direct solver (see, e.g., [5]) it is possible to efficiently solve...

533 | Domain Decomposition: Parallel Multilevel Methods for Elliptic P.D.E.’s
- Smith, Bjørstad, et al.
- 1996
Citation Context: ...or class of iterative schemes that should be mentioned here is that of domain decomposition methods, which became popular in the 1980s, in part as a result of the emergence of parallel computing (see [240, 264] for recent surveys). Currently, domain decomposition methods (and their multilevel variants) are used almost exclusively as preconditioners. Mathematically, they can be regarded as an extension of si...

532 | Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd Edition
- Barrett, Berry, et al.
- 1994
Citation Context: ...k marks the transition from the old to the new era in the history of iterative methods. In the past few years, a number of books entirely devoted to iterative methods for linear systems have appeared [12, 26, 63, 71, 142, 160, 167, 217, 251, 289]. This is the culmination of several years of intensive development of iterative methods, particularly Krylov subspace methods. The very existence of these monographs is a testimony to the vitality, a...

416 | Multi-level adaptive solutions to boundary-value problems
- Brandt
- 1977
Citation Context: ...s of Krylov methods, which allow for the use of variable preconditioners [157, 230, 247, 269]. Another crucial event in the area of iterative methods is the development of multigrid methods by Brandt [61], Hackbusch [166], and others (see also the early papers by Fedorenko [138, 139]). Up-to-date treatments can be found in [59, 68, 273]. These methods can be interpreted as (very sophisticated) station...

367 | Iterative Solution of Large Linear Systems
- Young
- 1971
Citation Context: ... industry, iterative methods have always been popular. Indeed, these areas have historically provided the stimulus for much early research on iterative methods, as witnessed in the classic monographs [282, 286, 294]. In contrast, direct methods have been traditionally preferred in the areas of structural analysis and semiconductor device modeling, in most parts of computational fluid dynamics (CFD), and in virtu...

360 | Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems
- van der Vorst
- 1992
Citation Context: ...e 1980s, culminated with the development of the GMRES algorithm by Saad and Schultz [253], the QMR algorithm by Freund and Nachtigal [148], and the Bi-CGSTAB method by van der Vorst [280]. For an excellent discussion of developments up to about 1990, see [147]. In the 1990s, research on Krylov subspace methods proceeded at a more modest pace, mostly focusing on the analysis and refine...

354 | QMR: a quasi-minimal residual method for non-Hermitian linear systems
- Freund, Nachtigal
- 1991
Citation Context: ...ic case. This work, which continued throughout the 1980s, culminated with the development of the GMRES algorithm by Saad and Schultz [253], the QMR algorithm by Freund and Nachtigal [148], and the Bi-CGSTAB method by van der Vorst [280]. For an excellent discussion of developments up to about 1990, see [147]. In the 1990s, research on Krylov subspace methods proceeded at a more modest...

350 | Solution of sparse indefinite systems of linear equations
- Paige, Saunders
- 1975
Citation Context: ... the rate of convergence of the method [101]. The extension of conjugate gradients to symmetric indefinite systems led to the development, by Paige and Saunders, of the MINRES and SYMMLQ methods (see [236]). Important contributions were also made by Fletcher (see [143]). In the late 1970s and early 1980s several researchers turned their attention to the nonsymmetric case. This work, which continued thr...

324 | The University of Florida sparse matrix collection
- Davis, Hu
Citation Context: ...e not from PDEs. Except for SLIDE, which was provided by Ivan Otero of the Lawrence Livermore National Laboratory, these matrices are available from the University of Florida Sparse Matrix Collection [107]. In the table, n denotes the order of the matrix and nnz the number...

307 | Domain Decomposition Methods for Partial Differential Equations
- Quarteroni, Valli
- 1999
Citation Context: ...or class of iterative schemes that should be mentioned here is that of domain decomposition methods, which became popular in the 1980s, in part as a result of the emergence of parallel computing (see [240, 264] for recent surveys). Currently, domain decomposition methods (and their multilevel variants) are used almost exclusively as preconditioners. Mathematically, they can be regarded as an extension of si...

301 | A flexible inner-outer preconditioned GMRES algorithm
- Saad
- 1993
Citation Context: ... of several basic Krylov subspace methods (see, e.g., [118, 162]). Also worth mentioning is the development of flexible variants of Krylov methods, which allow for the use of variable preconditioners [157, 230, 247, 269]. Another crucial event in the area of iterative methods is the development of multigrid methods by Brandt [61], Hackbusch [166], and others (see also the early papers by Fedorenko [138, 139]). Up-to-...

279 | Algebraic multigrid
- Ruge, Stüben
- 1987
Citation Context: ...le examples of this trend are Dendy’s black box multigrid [110] and especially Ruge and Stüben’s algebraic multigrid (AMG) method [244] (see also the early papers [62, 159, 266] and the recent survey [267]). While not completely general purpose, AMG is widely applicable and it is currently the focus of intensive development (see, e.g., [97]). AMG is a promising technique for the solution of very large ...

276 | An approximate minimum degree ordering algorithm
- Amestoy, Davis, et al.
- 1996
Citation Context: ...ude bandwidth- and profile-reducing orderings, such as reverse Cuthill–McKee (RCM) [103], Sloan’s ordering [263], and the Gibbs–Poole–Stockmeyer ordering [152]; variants of the minimum degree ordering [4, 151, 207]; and (generalized) nested dissection [149, 206]. In Fig. 1 we show the sparsity pattern of a simple five-point finite difference discretization of a diffusion operator corresponding to four orderings...
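A concrete illustration of a bandwidth-reducing ordering: reverse Cuthill–McKee is available in SciPy. The random test pattern and its density below are my choices, not taken from the survey:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Random symmetric sparsity pattern with a nonzero diagonal
rng = np.random.default_rng(0)
n = 100
A = sp.random(n, n, density=0.02, random_state=rng, format="csr")
A = (A + A.T + sp.eye(n)).tocsr()

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]            # symmetrically permuted matrix

def bandwidth(M):
    coo = M.tocoo()
    return int(np.abs(coo.row - coo.col).max())

print(bandwidth(A), bandwidth(B))   # RCM typically shrinks the bandwidth a lot
```

The same symmetric permutation `A[perm][:, perm]` pattern applies to any of the orderings named in the excerpt, given the permutation vector.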

270 | An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix
- Meijerink, van der Vorst
Citation Context: ... out in 1972 by Axelsson [9]. A major breakthrough took place around the mid-1970s, with the introduction by Meijerink and van der Vorst of the incomplete Cholesky-conjugate gradient (ICCG) algorithm [215]. Incomplete factorization methods were introduced for the first time by Buleev in the then-Soviet Union in the late 1950s, and independently by Varga (see [72, 73, 179, 281]; see also [231]). However...
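The incomplete Cholesky idea behind ICCG can be sketched in a few lines: run the Cholesky recurrence but keep only positions where A itself is nonzero (IC(0)). This dense toy version is my own illustration, not Meijerink–van der Vorst's implementation; it verifies that L Lᵀ then matches A exactly on the kept positions, with error only on the discarded fill:

```python
import numpy as np

def ic0(A):
    """Incomplete Cholesky with zero fill: L has the sparsity pattern of
    the lower triangle of A, and (L L^T)_ij = A_ij on that pattern."""
    n = A.shape[0]
    keep = np.tril(A) != 0              # allowed positions (no fill-in)
    L = np.tril(A).astype(float)
    for k in range(n):
        L[k, k] = np.sqrt(L[k, k])
        for i in range(k + 1, n):
            L[i, k] = L[i, k] / L[k, k] if keep[i, k] else 0.0
        for j in range(k + 1, n):
            for i in range(j, n):
                if keep[i, j]:          # drop updates outside the pattern
                    L[i, j] -= L[i, k] * L[j, k]
    return L

# 5-point Laplacian on a 6x6 grid: IC(0) is inexact because fill is dropped
m = 6
T = np.diag(np.full(m, 4.0)) + np.diag(np.full(m - 1, -1.0), 1) + np.diag(np.full(m - 1, -1.0), -1)
A = np.kron(np.eye(m), T) + np.kron(np.diag(np.full(m - 1, -1.0), 1) + np.diag(np.full(m - 1, -1.0), -1), np.eye(m))
L = ic0(A)
E = A - L @ L.T
print(np.abs(E[np.tril(A) != 0]).max(), np.linalg.norm(E))
```

For symmetric M-matrices such as this one, the factorization is known not to break down, which is part of why ICCG was such a breakthrough.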

200 | Parallel preconditioning with sparse approximate inverses
- Grote, Huckle
- 1997
Citation Context: ...an approach was first proposed by Cosgrove et al. [102]. Slightly different strategies were also considered by Grote and Huckle [163] and by Gould and Scott [158]. The most successful of these approaches is the one proposed by Grote and Huckle, hereafter referred to as the SPAI preconditioner [163]. The algorithm runs as follows. A...
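The Frobenius-norm minimization underlying SPAI can be shown with a static-pattern toy version. Grote and Huckle's actual algorithm grows the pattern of each column adaptively; this sketch (function name and test matrix are mine) fixes the pattern to that of A and solves one small least-squares problem per column:

```python
import numpy as np

def spai_static(A, pattern):
    """Right approximate inverse: minimize ||A M - I||_F column by column,
    each column m_j restricted to a prescribed sparsity pattern."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        J = np.flatnonzero(pattern[:, j])                 # allowed entries of m_j
        I = np.flatnonzero(np.abs(A[:, J]).sum(axis=1))   # rows reached by A[:, J]
        rhs = (I == j).astype(float)                      # e_j restricted to rows I
        m, *_ = np.linalg.lstsq(A[np.ix_(I, J)], rhs, rcond=None)
        M[J, j] = m
    return M

# Tridiagonal test matrix; allow M the same pattern as A
n = 30
A = np.diag(np.full(n, 4.0)) + np.diag(np.full(n - 1, -1.0), 1) + np.diag(np.full(n - 1, -1.0), -1)
M = spai_static(A, pattern=A != 0)
print(np.linalg.norm(A @ M - np.eye(n)))
```

The independence of the columns is what makes this family of methods attractive for parallel computing, as the entry's title suggests.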

195 | The Theory of Matrices in Numerical Analysis
- Householder
- 1964
Citation Context: ...st [255]. 2.1. Iterative Methods The first instances of iterative methods for solving systems of linear equations appeared in works of Gauss, Jacobi, Seidel, and Nekrasov during the 19th century (see [173]). Important developments took place in the first half of the 20th century, but the systematic study of iterative methods for large linear systems began only after the development of digital electroni...

192 | Applied Iterative Methods
- Hageman, Young
- 1981
Citation Context: ...these limitations. Adaptive parameter estimation procedures, together with acceleration techniques based on several emerging Krylov subspace methods, are covered in the monograph by Hageman and Young [168]. In a sense, this book marks the transition from the old to the new era in the history of iterative methods. In the past few years, a number of books entirely devoted to iterative methods for linear ...

191 | Generalized nested dissection
- Lipton, Rose, et al.
- 1979
Citation Context: ...as reverse Cuthill–McKee (RCM) [103], Sloan’s ordering [263], and the Gibbs–Poole–Stockmeyer ordering [152]; variants of the minimum degree ordering [4, 151, 207]; and (generalized) nested dissection [149, 206]. In Fig. 1 we show the sparsity pattern of a simple five-point finite difference discretization of a diffusion operator corresponding to four orderings of the grid points: lexicographical, RCM, red–b...

189 | Conjugate gradient methods for indefinite systems
- Fletcher
- 1976
Citation Context: ...onjugate gradients to symmetric indefinite systems led to the development, by Paige and Saunders, of the MINRES and SYMMLQ methods (see [236]). Important contributions were also made by Fletcher (see [143]). In the late 1970s and early 1980s several researchers turned their attention to the nonsymmetric case. This work, which continued throughout the 1980s, culminated with the development of the GMRES ...

189 | Nested dissection of a regular finite element mesh
- George
- 1973
Citation Context: ...as reverse Cuthill–McKee (RCM) [103], Sloan’s ordering [263], and the Gibbs–Poole–Stockmeyer ordering [152]; variants of the minimum degree ordering [4, 151, 207]; and (generalized) nested dissection [149, 206]. In Fig. 1 we show the sparsity pattern of a simple five-point finite difference discretization of a diffusion operator corresponding to four orderings of the grid points: lexicographical, RCM, red–b...

186 | Solution of Systems of Linear Equations by Minimized Iterations
- Lanczos
- 1952
Citation Context: ... vitality, and also the maturity, of this field. The history of Krylov subspace methods can be briefly summed up as follows (see [155, 255] for more detailed historical information). In 1952, Lanczos [199] and Hestenes and Stiefel [171] discovered (independently and almost simultaneously) the method of conjugate gradients (CG) for solving linear systems with a symmetric and positive definite matrix A. ...

175 | A Fully Asynchronous Multifrontal Solver Using Distributed Dynamic Scheduling
- Amestoy, Duff, et al.
Citation Context: ...n. Indeed, direct solvers are very robust, and they tend to require a predictable amount of resources in terms of time and storage [121, 150]. With a state-of-the-art sparse direct solver (see, e.g., [5]) it is possible to efficiently solve, in a reasonable amount of time, linear systems of fairly...

173 | A sparse approximate inverse preconditioner for the conjugate gradient method
- Benzi, Meyer, et al.
- 1996
Citation Context: ...adient method (see pp. 425–427). Because this algorithm costs twice as much as the Cholesky factorization in the dense case, A-orthogonalization is never used to factor matrices. However, as noted in [38], A-orthogonalization also produces the inverse factorization A⁻¹ = ZD⁻¹Zᵀ (with Z unit upper triangular and D diagonal) and this fact has been exploited to construct factored sparse approximate in...
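The A-orthogonalization the excerpt refers to is easy to write out. The dense sketch below (my own illustration; it omits the dropping of small entries that makes AINV-type preconditioners sparse) A-orthogonalizes the unit basis vectors and recovers the inverse factorization A⁻¹ = Z D⁻¹ Zᵀ:

```python
import numpy as np

def a_orthogonalize(A):
    """Gram–Schmidt A-orthogonalization of e_1, ..., e_n: returns Z
    (unit upper triangular) and d with Z^T A Z = diag(d), so that
    A^{-1} = Z diag(1/d) Z^T."""
    n = A.shape[0]
    Z = np.eye(n)
    d = np.zeros(n)
    for i in range(n):
        for j in range(i):
            # make column i A-orthogonal to the already-processed column j
            Z[:, i] -= (Z[:, j] @ A @ Z[:, i]) / d[j] * Z[:, j]
        d[i] = Z[:, i] @ A @ Z[:, i]
    return Z, d

rng = np.random.default_rng(2)
B = rng.standard_normal((20, 20))
A = B @ B.T + 20.0 * np.eye(20)      # symmetric positive definite
Z, d = a_orthogonalize(A)
Ainv = Z @ np.diag(1.0 / d) @ Z.T
print(np.linalg.norm(Ainv @ A - np.eye(20)))
```

Because z_i starts from e_i and is corrected only by earlier columns, Z comes out unit upper triangular automatically, which is the structure the factored approximate inverses exploit.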

146 | Modification of the minimum-degree algorithm by multiple elimination
- Liu
- 1985
Citation Context: ...ude bandwidth- and profile-reducing orderings, such as reverse Cuthill–McKee (RCM) [103], Sloan’s ordering [263], and the Gibbs–Poole–Stockmeyer ordering [152]; variants of the minimum degree ordering [4, 151, 207]; and (generalized) nested dissection [149, 206]. In Fig. 1 we show the sparsity pattern of a simple five-point finite difference discretization of a diffusion operator corresponding to four orderings...

143 | A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems
- Freund
- 1993
Citation Context: ...iety of symmetric and nonsymmetric problems in conjunction with standard Krylov subspace methods such as conjugate gradients (for symmetric positive definite matrices) and GMRES, Bi-CGSTAB, and TFQMR [146] (for nonsymmetric problems). The preconditioner has been found to be comparable to ILU methods in terms of robustness and rates of convergence, with ILU methods being somewhat faster on average on se...

143 | The numerical solution of parabolic and elliptic differential equations
- Peaceman, Rachford
- 1955
Citation Context: ... matrix, c is a fixed vector, and x0 an initial guess. Early on, Chebyshev acceleration of symmetrizable iterations (like SSOR) was considered [261]. The alternating direction implicit (ADI) method [117, 237], a serious competitor to the SOR method (especially in the petroleum industry), also belongs to this period. A crucial event is the publication of Varga’s famous book [282] which, among other things,...

124 | Algebraic multilevel preconditioning methods II
- Axelsson, Vassilevski
- 1990
Citation Context: ... difference between incomplete factorizations and methods like AMG is not as deep as it might appear at first sight, as all these algorithms can be interpreted as approximate Schur complement methods [16, 17, 104, 284]. This realization has prompted the development of a number of algebraic multilevel algorithms which are rooted in standard ILU or approximate inverse techniques but somehow attempt to achieve algorit...

124 | Numerical Linear Algebra for High-Performance Computers
- Dongarra, Duff, et al.
- 1998
Citation Context: ...as which were previously the exclusive domain of direct solution methods. 3.4. Block Algorithms A standard technique to improve performance in dense matrix computations is to use blocking (see, e.g., [116]). By partitioning the matrices and vectors into blocks of suitable size (which usually depends on the target architecture) and by making such blocks the elementary entities on which the computations ...

124 | Factorized sparse approximate inverse preconditionings IV: Simple approaches to rising efficiency
- Kolotilina, Yeremin
- 1999
Citation Context: ...all lower triangular matrices with sparsity pattern SL, with the additional constraint that the diagonal entries of the preconditioned matrix XAXᵀ be all equal to 1 (see [196] and the related paper [185]; see [194] for improvements of the basic FSAI algorithm). As before, the main issue is the selection of a good sparsity pattern for G_L. A common choice is to allow nonzeros in G_L only in positions correspondi...

105 | Algebraic multigrid (AMG) for sparse matrix equations, in Sparsity and Its Applications, pp. 257–284, Cambridge Univ. Press
- Brandt, McCormick, et al.
- 1985
Citation Context: ...ions, geometries, and so forth. Notable examples of this trend are Dendy’s black box multigrid [110] and especially Ruge and Stüben’s algebraic multigrid (AMG) method [244] (see also the early papers [62, 159, 266] and the recent survey [267]). While not completely general purpose, AMG is widely applicable and it is currently the focus of intensive development (see, e.g., [97]). AMG is a promising technique for...

93 | Block preconditioning for the conjugate gradient method
- 1985
Citation Context: ...ion preconditioning have been used for many years in the solution of block tridiagonal linear systems arising from the discretization of partial differential equations on structured grids (see, e.g., [13, 100, 179, 195, 275], and the relevant chapters in [12, 217]). Here the blocks arise from some natural partitioning of the problem (grid lines, planes, or subdomains) and they are usually large and sparse. For structured...
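The simplest member of this family is a block Jacobi preconditioner: partition by grid lines and invert each diagonal block exactly. The sketch below is mine and far cruder than the block incomplete factorizations discussed in the cited references, but it shows the mechanics on a block tridiagonal model problem:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2D Poisson on an m-by-m grid: block tridiagonal, one m×m block per grid line
m = 20
n = m * m
T = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(m, m))
E = sp.diags([-1.0, -1.0], [-1, 1], shape=(m, m))
A = (sp.kron(sp.eye(m), T) + sp.kron(E, sp.eye(m))).tocsr()

# Block Jacobi: invert each diagonal block (one grid line) exactly
block_inv = [np.linalg.inv(A[k*m:(k+1)*m, k*m:(k+1)*m].toarray()) for k in range(m)]

def apply_M(r):
    return np.concatenate([block_inv[k] @ r[k*m:(k+1)*m] for k in range(m)])

M = spla.LinearOperator((n, n), matvec=apply_M)
b = np.ones(n)
x, info = spla.cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```

In practice the dense block inverses would themselves be replaced by sparse approximations, which is exactly the refinement the block methods in the excerpt pursue.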

93 | A generalized conjugate gradient method for the numerical solution of elliptic PDE
- Concus, Golub, et al.
- 1976
Citation Context: ...th indefinite and/or nonsymmetric matrices were sought; on the other, techniques to improve the conditioning of linear systems were developed in order to improve the rate of convergence of the method [101]. The extension of conjugate gradients to symmetric indefinite systems led to the development, by Paige and Saunders, of the MINRES and SYMMLQ methods (see [236]). Important contributions were also ma...

93 | ILUT: a dual threshold incomplete LU factorization
- Saad
- 1994
Citation Context: ...storage that will be needed to store the incomplete LU factors. An efficient, predictable algorithm is obtained by limiting the number of nonzeros allowed in each row of the triangular factors. Saad [248] has proposed the following dual threshold strategy: fix a drop tolerance τ and a number...

91 | Optimal convergence properties of the FETI domain decomposition method
- Farhat, Mandel, et al.
- 1994

89 | The effect of ordering on preconditioned conjugate gradients
- Duff, Meurant
- 1983
Citation Context: ...ty. Accuracy refers to how close the incomplete factors of A are to the exact ones and can be measured by N1 = ‖A − L̄Ū‖F. For some classes of problems, including symmetric M-matrices, it can be shown [14, 125] that the number of preconditioned CG iterations is almost directly related to N1, so that improving the accuracy of the incomplete factorization by allowing additional fill will result in a decrease ...

87 | BoomerAMG: A Parallel Algebraic Multigrid Solver and Preconditioner
- Henson, Yang
Citation Context: ... of AMG and other algebraic multilevel methods has proven to be difficult due to the fact that the coarsening strategy used in the original AMG algorithm is highly sequential in nature. Recent papers [170, 198] show that progress is being made in this direction as well, but much work remains to be done, especially for indefinite problems, for problems in 3D, and for systems of partial differential equations...

87 | A note on preconditioning for indefinite linear systems
- Murphy, Golub, et al.
Citation Context: ... with positive definite operator, much work remains to be done for more complicated problems. For example, there is a need for reliable and efficient preconditioners for symmetric indefinite problems [223]. Ideally, an optimal preconditioner for problem (1) would result in an O(n) solution algorithm, would be perfectly scalable when implemented on a parallel computer, and would behave robustly over lar...
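The result of Murphy, Golub, and Wathen cited here is easy to verify numerically: for a saddle-point matrix with SPD (1,1) block A and full-rank B, the block-diagonal preconditioner diag(A, BA⁻¹Bᵀ) produces a preconditioned matrix with exactly three eigenvalues, 1 and (1 ± √5)/2. The random blocks below are my own choice:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 8, 3
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)            # SPD (1,1) block
B = rng.standard_normal((m, n))        # full-row-rank constraint block

K = np.block([[A, B.T], [B, np.zeros((m, m))]])         # saddle-point matrix
S = B @ np.linalg.inv(A) @ B.T                          # Schur complement
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

eig = np.linalg.eigvals(np.linalg.inv(P) @ K)
print(np.sort(eig.real))   # 1 (n-m times) and (1 ± sqrt(5))/2 (m times each)
```

With only three distinct eigenvalues, a Krylov method such as MINRES terminates (in exact arithmetic) in three iterations; the practical difficulty is that S is dense and must itself be approximated.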

84 | Black box multigrid
- Dendy
- 1982
Citation Context: ...ore generally, multilevel methods has grown to include larger and larger classes of problems, discretizations, geometries, and so forth. Notable examples of this trend are Dendy’s black box multigrid [110] and especially Ruge and Stüben’s algebraic multigrid (AMG) method [244] (see also the early papers [62, 159, 266] and the recent survey [267]). While not completely general purpose, AMG is widely app...

81 | Approximate inverse preconditioners via sparse–sparse iterations
- Chow, Saad
- 1998
Citation Context: ...2, 23]). Therefore, some effort has been put into finding ways to reduce the construction cost of SPAI. This was the motivation for Chow and Saad’s development of the MR (for Minimal Residual) method [96]. In this algorithm, the exact minimization of ‖I − AM‖F is replaced by an approximate minimization obtained by performing a few iterations of a minimal residual-type method applied to Am_j = e_j. No...
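The MR iteration can be sketched column by column. This dense toy (my own; Chow and Saad work with sparse–sparse products and self-preconditioning, and the initial guess and drop tolerance here are illustrative choices) applies a few minimal-residual steps to A m_j = e_j with numerical dropping:

```python
import numpy as np

def mr_approx_inverse(A, n_iter=8, droptol=0.05):
    """Column-wise minimal-residual sketch: a few MR steps on A m_j = e_j,
    with numerical dropping of small entries to keep M sparse."""
    n = A.shape[0]
    M = np.eye(n) / np.linalg.norm(A, 1)      # scaled-identity initial guess
    for j in range(n):
        m = M[:, j].copy()
        e = np.zeros(n)
        e[j] = 1.0
        for _ in range(n_iter):
            r = e - A @ m
            Ar = A @ r
            m = m + (r @ Ar) / (Ar @ Ar) * r  # 1D minimization of ||e - A m||
            m[np.abs(m) < droptol * np.abs(m).max()] = 0.0  # numerical dropping
        M[:, j] = m
    return M

n = 30
A = np.diag(np.full(n, 4.0)) + np.diag(np.full(n - 1, -1.0), 1) + np.diag(np.full(n - 1, -1.0), -1)
M = mr_approx_inverse(A)
print(np.linalg.norm(np.eye(n) - A @ M))
```

The point of the method is that each step needs only matrix–vector products, avoiding the least-squares solves that dominate SPAI's construction cost.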

80 | Decay rates for inverses of band matrices
- Demko, Moss, et al.
- 1984
Citation Context: ...any of the entries in the inverse of a sparse matrix are small in absolute value, thus making the approximation of A⁻¹ with a sparse matrix possible. For instance, a classical result of Demko et al. [112] states that if A is a banded symmetric positive definite matrix, then the entries of A⁻¹ are bounded in an exponentially decaying manner along each row or column. More precisely, there exist 0 < q <...
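The decay is easy to observe numerically. For the tridiagonal SPD example below (my own choice of matrix), consecutive entry magnitudes along a row of A⁻¹ shrink by a nearly constant factor, consistent with a bound of the form C·qᵈ in the distance d from the diagonal:

```python
import numpy as np

# Tridiagonal SPD matrix: the inverse is dense, but its entries decay
# geometrically away from the diagonal
n = 60
A = np.diag(np.full(n, 4.0)) + np.diag(np.full(n - 1, -1.0), 1) + np.diag(np.full(n - 1, -1.0), -1)
Ainv = np.linalg.inv(A)

row = np.abs(Ainv[n // 2])                    # magnitudes along the middle row
ratios = row[n//2 + 2 : n//2 + 10] / row[n//2 + 1 : n//2 + 9]
print(ratios)   # ≈ 0.27 (= 2 - sqrt(3)), the decay rate for this matrix
```

This rapid decay is what makes a sparse approximation of A⁻¹, and hence sparse approximate inverse preconditioning, feasible at all.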

79 | An Algorithm for Reducing the Bandwidth and the Profile of a Sparse Matrix
- Gibbs, Poole, et al.
- 1976
Citation Context: ... [121, 150]. Classical ordering strategies include bandwidth- and profile-reducing orderings, such as reverse Cuthill–McKee (RCM) [103], Sloan’s ordering [263], and the Gibbs–Poole–Stockmeyer ordering [152]; variants of the minimum degree ordering [4, 151, 207]; and (generalized) nested dissection [149, 206]. In Fig. 1 we show the sparsity pattern of a simple five-point finite difference discretization ...

78 | On algorithms for permuting large entries to the diagonal of a sparse matrix
- Duff, Koster
Citation Context: ...derings aimed at permuting large entries to the main diagonal of a general sparse matrix (see [33, 54, 123, 124, 232]). (Footnote: these matrices are numerically nonsymmetric, but structurally symmetric.) Incomplete factorization preconditioners often fail on general sparse matrices that lack nice properties such as symmetry, positive definiteness, diagonal dominance, and so forth. Thus, failure rat...
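The weaker goal of merely obtaining a zero-free diagonal reduces to bipartite matching, which gives a feel for what these orderings do. The augmenting-path toy below is my own; MC64-style algorithms additionally maximize the product of the diagonal entries:

```python
import numpy as np

def zero_free_diagonal_perm(A):
    """Row permutation p with A[p[j], j] != 0 for every j, found by
    augmenting-path bipartite matching (assumes A structurally nonsingular)."""
    n = A.shape[0]
    match_row = [-1] * n               # match_row[i] = column assigned to row i

    def try_col(j, seen):
        for i in range(n):
            if A[i, j] != 0 and not seen[i]:
                seen[i] = True
                if match_row[i] == -1 or try_col(match_row[i], seen):
                    match_row[i] = j
                    return True
        return False

    for j in range(n):
        if not try_col(j, [False] * n):
            raise ValueError("structurally singular matrix")
    p = [0] * n
    for i, j in enumerate(match_row):
        p[j] = i
    return p

A = np.array([[0., 2., 0.],
              [3., 0., 1.],
              [0., 4., 5.]])
p = zero_free_diagonal_perm(A)
print(p, np.diag(A[p]))        # permuted rows give a zero-free diagonal
```

Moving large entries onto the diagonal in this way is what lets incomplete factorizations survive on matrices without diagonal dominance.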

75 | On the numerical solution of heat conduction problems in two and three space variables
- Douglas, Rachford
- 1956
Citation Context: ... matrix, c is a fixed vector, and x0 an initial guess. Early on, Chebyshev acceleration of symmetrizable iterations (like SSOR) was considered [261]. The alternating direction implicit (ADI) method [117, 237], a serious competitor to the SOR method (especially in the petroleum industry), also belongs to this period. A crucial event is the publication of Varga’s famous book [282] which, among other things,...

75 | The design and use of algorithms for permuting large entries to the diagonal of sparse matrices
- Duff, Koster
- 1999
Citation Context: ...derings aimed at permuting large entries to the main diagonal of a general sparse matrix (see [33, 54, 123, 124, 232]). (Footnote: these matrices are numerically nonsymmetric, but structurally symmetric.) Incomplete factorization preconditioners often fail on general sparse matrices that lack nice properties such as symmetry, positive definiteness, diagonal dominance, and so forth. Thus, failure rat...

75 | A class of first order factorization methods
- Gustafsson
- 1978
Citation Context: ...etric and indefinite matrices, such as those arising in many CFD applications. A hierarchy of ILU preconditioners may be obtained based on the “levels of fill-in” concept, as formalized by Gustafsson [164] for finite difference discretizations and by Watts [288] for more general problems. A level of fill is attributed to each matrix entry that occurs in the incomplete factorization process. Fill-ins ar...
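The level-of-fill bookkeeping can be made concrete with a small symbolic sketch (dense, O(n³), pattern-only; the code and test problem are my own illustration). Original nonzeros get level 0; fill created while eliminating pivot t gets level lev(i,t) + lev(t,j) + 1; only positions with level ≤ k are kept:

```python
import numpy as np

def ilu_pattern(A, k):
    """Symbolic ILU(k): boolean mask of positions kept in the incomplete
    factors, via lev(i,j) = min(lev(i,j), lev(i,t) + lev(t,j) + 1)."""
    n = A.shape[0]
    INF = 10**9
    lev = np.where(A != 0, 0, INF)
    for t in range(n):
        for i in range(t + 1, n):
            if lev[i, t] <= k:                  # a kept entry can create fill
                for j in range(t + 1, n):
                    lev[i, j] = min(lev[i, j], lev[i, t] + lev[t, j] + 1)
    return lev <= k

# 5-point Laplacian pattern on a 5x5 grid
m = 5
T = np.diag(np.full(m, 4.0)) + np.diag(np.full(m - 1, -1.0), 1) + np.diag(np.full(m - 1, -1.0), -1)
A = np.kron(np.eye(m), T) + np.kron(np.diag(np.full(m - 1, -1.0), 1) + np.diag(np.full(m - 1, -1.0), -1), np.eye(m))

p0, p1, p2 = (ilu_pattern(A, k) for k in (0, 1, 2))
print(p0.sum(), p1.sum(), p2.sum())   # kept positions grow with the level k
```

By construction ILU(0) keeps exactly the pattern of A, and each increment of k admits one more "generation" of fill, trading storage for accuracy.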