## A comparative study of sparse approximate inverse preconditioners (1999)

Citations: 55 (9 self)

### BibTeX

@MISC{Benzi99acomparative,
  author = {Michele Benzi and Miroslav Tůma},
  title = {A comparative study of sparse approximate inverse preconditioners},
  year = {1999}
}


### Citations

1518 |
Iterative methods for sparse linear systems
- Saad
- 2003
Citation Context ...cal use of these methods, we do not give a detailed description of their theoretical properties, for which the interested reader is referred to the original papers. See also the overviews in [5],[28],[67]. It is convenient to group the different methods into three categories. First, we consider approximate inverse methods based on Frobenius norm minimization. Second, we describe factorized sparse appr... |

1323 |
GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems
- Saad, Schultz
- 1986
Citation Context ...nditioned conjugate gradient (PCG) method is used for solving the problems in the first class. For the problems in the second class, three popular Krylov subspace methods were tested: restarted GMRES [68], Bi-CGSTAB [72], and TFQMR [47]. Good general references on iterative methods are [49] and [67]; see also [9] for a concise introduction. All codes developed for the tests 1 were written in Fortran77... |

538 |
Direct Methods for Sparse Matrices
- DUFF, ERISMAN, et al.
- 1986
Citation Context ... requires n multiplications at each step, where n is the problem size. For matrices which have zero entries on the main diagonal, nonsymmetric permutations can be used to produce a zero-free diagonal [39], as was done in [18]. The implementation of the Krylov subspace accelerators (conjugate gradients, GMRES, etc.) was fairly standard. All the matrix--vector products with the coefficient matrix A and ... |

494 |
Iterative Solution Methods
- Axelsson
- 1994
Citation Context ...he practical use of these methods, we do not give a detailed description of their theoretical properties, for which the interested reader is referred to the original papers. See also the overviews in [5],[28],[67]. It is convenient to group the different methods into three categories. First, we consider approximate inverse methods based on Frobenius norm minimization. Second, we describe factorized s... |

491 |
Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd Edition
- Barrett, Berry, Demmel, et al.
- 1994
Citation Context ... in the second class, three popular Krylov subspace methods were tested: restarted GMRES [68], Bi-CGSTAB [72], and TFQMR [47]. Good general references on iterative methods are [49] and [67]; see also [9] for a concise introduction. All codes developed for the tests 1 were written in Fortran77 and compiled using the optimization option -Zv. In the experiments, convergence is considered attained ... |

337 | QMR: A quasi-minimal residual method for non-Hermitian linear systems
- Freund, Nachtigal
Citation Context ...G) method is used for solving the problems in the first class. For the problems in the second class, three popular Krylov subspace methods were tested: restarted GMRES [68], Bi-CGSTAB [72], and TFQMR [47]. Good general references on iterative methods are [49] and [67]; see also [9] for a concise introduction. All codes developed for the tests 1 were written in Fortran77 and compiled using the optimiza... |

324 |
Iterative Methods for Solving Linear Systems
- Greenbaum
- 1997
Citation Context ... class. For the problems in the second class, three popular Krylov subspace methods were tested: restarted GMRES [68], Bi-CGSTAB [72], and TFQMR [47]. Good general references on iterative methods are [49] and [67]; see also [9] for a concise introduction. All codes developed for the tests 1 were written in Fortran77 and compiled using the optimization option -Zv. In the experiments, convergence ... |

310 |
Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems
- van der Vorst
- 1992
Citation Context ...ate gradient (PCG) method is used for solving the problems in the first class. For the problems in the second class, three popular Krylov subspace methods were tested: restarted GMRES [68], Bi-CGSTAB [72], and TFQMR [47]. Good general references on iterative methods are [49] and [67]; see also [9] for a concise introduction. All codes developed for the tests 1 were written in Fortran77 and compiled us... |

283 |
Sparse matrix test problems
- Duff, Grimes, et al.
- 1989
Citation Context ... analysis FDM1 6050 18028 Diffusion equation FDM2 32010 95738 Diffusion equation Table 1: SPD test problems information Matrices NOS, BUS, and BCSSTK are extracted from the Harwell--Boeing collection [40]; the NASA matrices are extracted from Tim Davis's collection [34], and the FDM matrices were kindly provided by Carsten Ullrich of CERFACS. Matrices NOS3, NASA and BCSSTK arise from finite element mo... |

250 |
An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix
- Meijerink, van der Vorst
- 1977
Citation Context ...ises of whether the preconditioner construction can be performed without breakdowns (divisions by zero): in [15] it is proved that a sufficient condition is that A be an H-matrix, similar to ILU [59],[60]. In the general case, diagonal modifications may be necessary. A third approach that can be used to compute a factorized sparse approximate inverse preconditioner directly from the input matrix A is ... |

184 | Parallel preconditioning with sparse approximate inverses
- Grote, Huckle
- 1997
Citation Context ...a maximum number of nonzeros in m_j has been reached. Such an approach was first proposed by Cosgrove, Díaz, and Griewank [32]. Slightly different strategies were also considered by Grote and Huckle [50] and by Gould and Scott [48]. The most successful of these approaches is the one proposed by Grote and Huckle, hereafter referred to as the SPAI preconditioner [50]. The algorithm runs as follows. Alg... |

154 | A sparse approximate inverse preconditioner for the conjugate gradient method
- Benzi, Meyer, et al.
- 1996
Citation Context ...ds in this class include the FSAI preconditioner introduced by Kolotilina and Yeremin [57], a related method due to Kaporin [55], incomplete (bi)conjugation schemes [15],[18], and bordering strategies [67]. Another class of methods first compute an incomplete triangular factorization of A using standard techniques, and then obtain a factorized sparse approximate inve... |

115 |
Factorized sparse approximate inverse preconditionings. III: Iterative construction of preconditioners
- Yeremin, Nikishin
- 2000
Citation Context ...e triangular factors of A: the factorized approximate inverse preconditioner is constructed directly from A. Methods in this class include the FSAI preconditioner introduced by Kolotilina and Yeremin [57], a related method due to Kaporin [55], incomplete (bi)conjugation schemes [15],[18], and bordering strategies [67]. Another class of methods first compute an incomp... |

83 |
The effect of ordering on preconditioned conjugate gradient
- Duff, Meurant
- 1989
Citation Context ...ivity of these preconditioners to reorderings. It is well known that incomplete factorization preconditioners are very sensitive to reorderings; see [16],[41]. On the other hand, the SPAI and MR preconditioners are scarcely sensitive to reorderings. This is, at the same time, good and bad. The advantage is that A can be partitioned and reordered in whichev... |

83 | ILUT: a dual threshold incomplete LU factorization’, Numerical Linear Algebra with Applications 1(4), 387–402
- Saad
- 1994
Citation Context ...ompleteness makes these methods difficult to use in practice, owing to the necessity to choose a large number of user-defined parameters. For instance, if an ILUT-like dual threshold approach is used [65], then the user is required to choose the values of four parameters, two for the ILUT factorization and the other two for the approximate inversion of the ILUT factors. Notice that the values of the param... |

78 | Approximate inverse preconditioners via sparsesparse iterations
- Chow, Saad
- 1998
Citation Context ...tion 3. As we shall see, the serial cost of computing the SPAI preconditioner can be very high, and the storage requirements rather stringent. In an attempt to alleviate these problems, Chow and Saad [26] proposed to use a few steps of an iterative method to reduce the residuals corresponding to each column of the approximate inverse. In other words, starting from a sparse initial guess, the n indepen... |

76 | Parallel algorithms for dense linear algebra computations - Gallivan, Plemmons, et al. - 1990 |

73 |
Decay rates for inverses of band matrices
- Demko, Moss, et al.
- 1984
Citation Context ...xception is the case where A is a banded symmetric positive definite (SPD) matrix. In this case, the entries of A^{-1} are bounded in an exponentially decaying manner along each row or column; see [35]. Specifically, there exist 0 < ρ < 1 and a constant C such that for all i, j: |(A^{-1})_{ij}| ≤ C ρ^{|i-j|}. The numbers ρ and C depend on the bandwidth and on the spectral condition number of ... |
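As a rough numerical illustration of the decay bound quoted in this context (the test matrix and the fitted constants below are my own choices, not taken from the paper or from [35]):

```python
import numpy as np

# Illustrative check of the Demko-Moss-Smith-type bound: entries of the
# inverse of a banded SPD matrix decay exponentially away from the diagonal,
# |(A^{-1})_{ij}| <= C * rho^{|i-j|} with 0 < rho < 1.  The shifted
# second-difference matrix below is simply a convenient banded SPD example.
n = 60
A = 2.5 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
Ainv = np.linalg.inv(A)

# Measure rho as the worst consecutive decay ratio moving away from the
# diagonal, and take C as the largest entry of A^{-1}; the interesting
# point is that rho comes out well below 1 (about 0.5 for this matrix).
ratios = [abs(Ainv[i, j + 1]) / abs(Ainv[i, j])
          for i in range(n) for j in range(i, n - 1)]
rho = max(ratios)
C = abs(Ainv).max()
print(rho < 1.0)
```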

68 |
Practical use of polynomial preconditionings for the conjugate gradient method
- Saad
- 1985
Citation Context ... For the latter, we estimated the end points of the spectrum using Gerschgorin circles, and Horner's scheme was used to compute the action of the polynomial preconditioner on a vector; see [63],[67] for details. Horner's scheme was also used with the truncated Neumann expansion methods. Because all the test matrices used in this study have a zero-free diagonal, the simple preconditioner bas... |
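Horner's scheme, as mentioned in this context, applies a polynomial preconditioner M = p(A) to a vector using only matrix-vector products with A, never explicit powers of A. A minimal sketch (the function name and the coefficient values are arbitrary illustrations, not the polynomials used in the paper):

```python
import numpy as np

def apply_poly_preconditioner(A, coeffs, v):
    """Return p(A) @ v, where p(t) = coeffs[0] + coeffs[1]*t + ...,
    evaluated Horner-style with one matvec per coefficient."""
    w = coeffs[-1] * v
    for c in reversed(coeffs[:-1]):
        w = A @ w + c * v          # w <- A*w + c*v peels off one Horner level
    return w

# Tiny check against the directly expanded polynomial.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
v = np.array([1.0, 2.0])
coeffs = [0.5, -0.1, 0.02]          # p(t) = 0.5 - 0.1 t + 0.02 t^2
direct = 0.5 * v - 0.1 * (A @ v) + 0.02 * (A @ (A @ v))
print(np.allclose(apply_poly_preconditioner(A, coeffs, v), direct))
```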

68 | Krylov subspace methods on supercomputers
- Saad
- 1989
Citation Context ...rse preconditioners are vectorizable operations. To this end, after the approximate inverse preconditioners have been computed, they are transformed into the JAD, or jagged diagonal, format (see [52],[64]). The same is done with the coefficient matrix A. Although the matrix--vector products still involve indirect addressing, using the JAD format results in good, if not outstanding, vector performance.... |
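A rough sketch of the jagged diagonal (JAD) idea mentioned in this context (names and data layout here are my own simplification: a real implementation permutes rows by decreasing nonzero count and packs each "jagged diagonal" into contiguous arrays, so that every pass is one long vectorizable sweep):

```python
import numpy as np

def jad_matvec(rows, x, n):
    """y = A x, where rows[i] = (col_indices, values) lists row i of A.
    Rows are visited in order of decreasing nonzero count, and pass j
    consumes the j-th nonzero of every row: one pass per jagged diagonal."""
    perm = sorted(range(n), key=lambda i: -len(rows[i][0]))
    maxnz = max(len(rows[i][0]) for i in range(n))
    y = np.zeros(n)
    for j in range(maxnz):              # one jagged diagonal per pass
        for p in perm:
            cols, vals = rows[p]
            if j < len(cols):
                y[p] += vals[j] * x[cols[j]]
    return y

# Tiny check against a dense product.
rows = [([0, 1], [2.0, -1.0]), ([0, 1, 2], [-1.0, 2.0, -1.0]), ([2], [2.0])]
A = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, 0.0, 2.0]])
x = np.array([1.0, 2.0, 3.0])
print(np.allclose(jad_matvec(rows, x, 3), A @ x))
```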

67 | Software Libraries for Linear Algebra Computations on High Performance Computers - Dongarra, Walker - 1995 |

62 |
Approximate inverse preconditioning for sparse linear systems
- Cosgrove, Díaz, et al.
- 1992
Citation Context ...ype ||e_j - A m_j||_2 < ε is satisfied for a given ε > 0 (for each j), or a maximum number of nonzeros in m_j has been reached. Such an approach was first proposed by Cosgrove, Díaz, and Griewank [32]. Slightly different strategies were also considered by Grote and Huckle [50] and by Gould and Scott [48]. The most successful of these approaches is the one proposed by Grote and Huckle, hereafter re... |

62 |
An incomplete factorization technique for positive definite linear systems
- Manteuffel
- 1980
Citation Context ...on arises of whether the preconditioner construction can be performed without breakdowns (divisions by zero): in [15] it is proved that a sufficient condition is that A be an H-matrix, similar to ILU [59],[60]. In the general case, diagonal modifications may be necessary. A third approach that can be used to compute a factorized sparse approximate inverse preconditioner directly from the input matrix ... |

58 | Experimental study of ILU preconditioners for indefinite matrices
- Chow, Saad
- 1997
Citation Context ...c and/or indefinite. The failure is usually due to some form of instability, either in the incomplete factorization itself (zero or very small pivots), or in the back substitution phase, or both; see [29]. Most approximate inverse techniques are largely immune from these problems, and therefore constitute an important complement to more standard preconditioning methods even on serial computers. We rem... |

50 | Orderings for incomplete factorization preconditioning of nonsymmetric problems
- Benzi, Szyld, et al.
- 1999
Citation Context ...ensitivity of these preconditioners to reorderings. It is well known that incomplete factorization preconditioners are very sensitive to reorderings; see [16],[41]. On the other hand, the SPAI and MR preconditioners are scarcely sensitive to reorderings. This is, at the same time, good and bad. The advantage is that A can be partitioned and reordered in wh... |

48 | Iterative Methods for Sparse Linear Systems. The PWS - Saad - 1996 |

46 |
Iterative solution of large sparse linear systems arising in certain multidimensional approximation problems
- Benson, Frederickson
- 1982
Citation Context ...trix, the computation of M reduces to solving n independent linear least squares problems, subject to sparsity constraints. This approach was first proposed by Benson [10]. Other early papers include [11],[12], and [46]. Notice that the above approach produces a right approximate inverse. A left approximate inverse can be computed by solving a constrained minimization problem for ... |
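The reduction described in this context can be sketched directly: minimizing ||I - AM||_F splits into n independent least-squares problems, one per column of M, each restricted to a chosen sparsity pattern. The fixed pattern-of-A choice and the function name below are my own simplification; the adaptive pattern selection of the cited papers is not shown.

```python
import numpy as np

def frobenius_approx_inverse(A):
    """Right approximate inverse M minimizing ||I - A M||_F columnwise,
    with each column m_j constrained to the sparsity pattern of A[:, j]."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        pattern = np.nonzero(A[:, j])[0]    # allowed nonzero rows of m_j
        e = np.zeros(n)
        e[j] = 1.0
        sub = A[:, pattern]                 # only these columns enter the LS
        mj, *_ = np.linalg.lstsq(sub, e, rcond=None)
        M[pattern, j] = mj
    return M

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
M = frobenius_approx_inverse(A)
M_jac = np.eye(3) / 4.0                     # diagonal (Jacobi) baseline
print(np.linalg.norm(np.eye(3) - A @ M)
      < np.linalg.norm(np.eye(3) - A @ M_jac))
```

The n column problems are fully independent, which is exactly what makes this family of preconditioners attractive for parallel construction.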

44 | Sparse approximate inverse preconditioning for dense linear systems arising in computational electromagnetics. Numerical Algorithms
- Alléon, Benzi, et al.
- 1997
Citation Context ...prescribing a sparsity pattern have been successfully used for constructing sparse approximate inverse preconditioners for dense linear systems arising in the numerical solution of integral equations [1],[56]. Because for general sparse matrices it is difficult to prescribe a good nonzero pattern for M, several authors have developed adaptive strategies which start with a simple initial guess ... |

44 |
MA28—a set of Fortran subroutines for sparse unsymmetric linear equations
- Duff
- 1977
Citation Context ...o pessimistic to be useful. Therefore, Z is stored by columns using dynamic data structures, similar to standard right-looking implementations of sparse unsymmetric Gaussian elimination; see, e.g., [37],[75]. Nonzero entries of each column are stored consecutively as a segment of a larger workspace. During the AINV algorithm the length of the individual ... |

43 | Approximate inverse techniques for block-partitioned matrices
- CHOW, SAAD
- 1997
Citation Context ...tion here the use of wavelet compression techniques for PDE problems [24], the combination of sparse approximate inverse methods with approximate Schur complement and other block partitioning schemes [27], and the use of reorderings for reducing fill-in and improving the quality of factorized approximate inverses [19],[21]. We also mention that very recently, parallelizable adaptive algorithms for con... |

42 |
Computational methods for general sparse matrices
- Zlatev
- 1991
Citation Context ...simistic to be useful. Therefore, Z is stored by columns using dynamic data structures, similar to standard right-looking implementations of sparse unsymmetric Gaussian elimination; see, e.g., [37],[75]. Nonzero entries of each column are stored consecutively as a segment of a larger workspace. During the AINV algorithm the length of the individual segme... |

40 |
A survey of preconditioned iterative methods for linear systems of algebraic equations
- Axelsson
- 1985
Citation Context ...reconditioning, the reduction in the number of iterations tends to be compensated by the additional matrix--vector products to be performed at each iteration. More precisely, it was shown by Axelsson [4] that the cost per iteration increases linearly with the number m+1 of terms in the polynomial, whereas the number of iterations decreases more slowly than O(1/(m+1)). Therefore, polynomial preconditioners... |

38 |
A survey of preconditioned iterative methods
- Bruaset
- 1995
Citation Context ...the diagonal entries of D are all positive. It is also clear that the nonsingularity of M is trivial to check when M is expressed in factorized form. Following [26] (and contrary to what is stated in [22], p. 109), it can be argued that factorized forms provide better approximations to A^{-1}, for the same amount of storage, than nonfactorized ones, because they can express denser matrices than th... |

37 |
Biorthogonality and its Applications to Numerical Analysis
- Brezinski
- 1992
Citation Context ...is algorithm can be interpreted as a (two-sided) generalized Gram--Schmidt orthogonalization process with respect to the bilinear form associated with A. Some references on this kind of algorithm are [20],[30],[44],[45]. If A is SPD, only the process for Z needs to be carried out (since in this case W = Z), and the algorithm is just a conjugate Gram--Schmidt process, i.e., orthogonalization of the uni... |
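The SPD case described in this context can be sketched densely: A-orthogonalize the unit vectors by a conjugate Gram--Schmidt process, giving a unit upper triangular Z with Z^T A Z = D diagonal, so that Z D^{-1} Z^T reproduces A^{-1}. This sketch does no dropping (the practical AINV preconditioner additionally discards small entries of Z to keep it sparse), and the function name is my own.

```python
import numpy as np

def conjugate_gram_schmidt(A):
    """A-orthogonalize the unit vectors (modified Gram-Schmidt in the
    A-inner product).  Returns unit upper triangular Z and the diagonal
    D of Z^T A Z; without dropping, Z diag(1/D) Z^T equals A^{-1}."""
    n = A.shape[0]
    Z = np.eye(n)
    for j in range(1, n):
        for i in range(j):
            Azi = A @ Z[:, i]
            # subtract the A-projection of column j onto earlier column i
            Z[:, j] -= (Azi @ Z[:, j]) / (Azi @ Z[:, i]) * Z[:, i]
    D = np.diag(Z.T @ A @ Z)
    return Z, D

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
Z, D = conjugate_gram_schmidt(A)
print(np.allclose(Z @ np.diag(1.0 / D) @ Z.T, np.linalg.inv(A)))
```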

37 |
Explicit preconditioning of systems of linear algebraic equations with dense matrices
- Kolotilina
- 1988
Citation Context ...cribing a sparsity pattern have been successfully used for constructing sparse approximate inverse preconditioners for dense linear systems arising in the numerical solution of integral equations [1],[56]. Because for general sparse matrices it is difficult to prescribe a good nonzero pattern for M, several authors have developed adaptive strategies which start with a simple initial guess ... |

36 |
A stability analysis of incomplete LU factorizations
- Elman
- 1986
Citation Context ...puted. This implies that these methods are not even applicable if an ILU factorization does not exist, or if it is unstable, as is sometimes the case for highly nonsymmetric, indefinite problems [29],[42]. Clearly, this assumption also limits the parallel efficiency of this class of methods, since the preconditioner construction phase is not entirely parallelizable (computing an ILU factorization is a... |

33 | Wavelet sparse approximate inverse preconditioned
- Chan, Tang, et al.
- 1997
Citation Context ...ditioners can be further enhanced in a number of ways, the exploration of which has just begun. Among possible improvements, we mention here the use of wavelet compression techniques for PDE problems [24], the combination of sparse approximate inverse methods with approximate Schur complement and other block partitioning schemes [27], and the use of reorderings for reducing fill-in and improving the q... |

32 | Sparse approximate-inverse preconditioners using norm-minimization techniques - Gould, Scott - 1998 |

29 |
Approximating the inverse of a matrix for use in iterative algorithms on vector processors
- Dubois, Greenbaum, et al.
- 1979
Citation Context ...e coefficient matrix A with a low-degree polynomial in the matrix. These methods have a long history (see, e.g., [23]), but came into vogue only after the first vector processors had become available [36],[54]. Polynomial preconditioners only require matrix--vector products with A and therefore have excellent potential for parallelization, but they are not as effective as incomplete factorization meth... |

26 |
Iterative solution of large scale linear systems
- Benson
- 1973
Citation Context ... a matrix M ≈ A^{-1} is explicitly computed and stored. The preconditioning operation reduces to a matrix--vector product with M. Methods of this kind were first proposed in the early 1970s (see [10],[46]), but they received little attention, due to the lack of effective strategies for automatically determining a good nonzero pattern for the sparse app... |

25 | On approximate-inverse preconditioners
- Gould, Scott
- 1995
Citation Context ... in m_j has been reached. Such an approach was first proposed by Cosgrove, Díaz, and Griewank [32]. Slightly different strategies were also considered by Grote and Huckle [50] and by Gould and Scott [48]. The most successful of these approaches is the one proposed by Grote and Huckle, hereafter referred to as the SPAI preconditioner [50]. The algorithm runs as follows. Algorithm 2.1. SPAI algorithm F... |

24 | An MPI implementation of the SPAI preconditioner on the T3E
- BARNARD, BERNARDO, et al.
- 1999
Citation Context ...e processors, then the processors which compute different columns need to communicate matrix elements during the course of the computation. For a sophisticated solution to this problem using MPI, see [7],[8]. Our implementation of Chow and Saad's MR preconditioner (with or without self-preconditioning) is based on the descriptions in [26]. The storage requirements for the basic MR technique (Algorithm... |

22 |
Parallel algorithms for the solution of certain large sparse linear systems
- Benson, Krettmann, et al.
- 1984
Citation Context ... the computation of M reduces to solving n independent linear least squares problems, subject to sparsity constraints. This approach was first proposed by Benson [10]. Other early papers include [11],[12], and [46]. Notice that the above approach produces a right approximate inverse. A left approximate inverse can be computed by solving a constrained minimization problem for ... |

20 |
A Portable MPI Implementation of the SPAI Preconditioner in ISIS
- Barnard, Clay
- 1997
Citation Context ...ocessors, then the processors which compute different columns need to communicate matrix elements during the course of the computation. For a sophisticated solution to this problem using MPI, see [7],[8]. Our implementation of Chow and Saad's MR preconditioner (with or without self-preconditioning) is based on the descriptions in [26]. The storage requirements for the basic MR technique (Algorithm 2.2... |

20 |
A vectorizable variant of some ICCG methods
- van der Vorst
- 1982
Citation Context ...ly by applying some kind of truncation. In particular, the forward and back substitutions are replaced by matrix--vector products with sparse triangular matrices. This idea goes back to van der Vorst [71], and has been recently applied to the SSOR preconditioner, which can be seen as a kind of incomplete factorization, by Gustafsson and Lindskog [51]. The truncated Neumann SSOR preconditioner for a sy... |
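The truncation idea described in this context can be sketched as follows (details are my own assumptions, not the exact Gustafsson-Lindskog scheme): for a unit lower triangular factor I - L, with L strictly lower triangular, the forward substitution (I - L)^{-1} v equals the Neumann series (I + L + L^2 + ...) v, which is exact after n terms since L is nilpotent. Truncating after a few terms replaces the sequential substitution with a short, vectorizable chain of matrix-vector products.

```python
import numpy as np

def truncated_neumann_solve(L, v, terms):
    """Approximate (I - L)^{-1} v with the first `terms` Neumann terms:
    v + L v + L^2 v + ... , using only matrix-vector products."""
    w = v.copy()
    acc = v.copy()
    for _ in range(terms - 1):
        w = L @ w                 # next power of L applied to v
        acc = acc + w
    return acc

n = 4
L = np.tril(0.1 * np.ones((n, n)), -1)    # strictly lower triangular
v = np.ones(n)
exact = np.linalg.solve(np.eye(n) - L, v)
approx = truncated_neumann_solve(L, v, terms=3)
# With terms = n the series is exact; fewer terms trade accuracy for
# parallelism, which is the point of the truncated substitution.
print(np.linalg.norm(exact - approx) < np.linalg.norm(exact - v))
```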

19 | A rank-one reduction formula and its applications to matrix factorizations
- Chu, Funderlic, et al.
- 1995
Citation Context ...gorithm can be interpreted as a (two-sided) generalized Gram--Schmidt orthogonalization process with respect to the bilinear form associated with A. Some references on this kind of algorithm are [20],[30],[44],[45]. If A is SPD, only the process for Z needs to be carried out (since in this case W = Z), and the algorithm is just a conjugate Gram--Schmidt process, i.e., orthogonalization of the unit vec... |

18 |
Notes on the solution of algebraic linear simultaneous equations, Quart
- Fox, Huskey, et al.
- 1948
Citation Context ...n be interpreted as a (two-sided) generalized Gram--Schmidt orthogonalization process with respect to the bilinear form associated with A. Some references on this kind of algorithm are [20],[30],[44],[45]. If A is SPD, only the process for Z needs to be carried out (since in this case W = Z), and the algorithm is just a conjugate Gram--Schmidt process, i.e., orthogonalization of the unit vectors with ... |

17 |
Sparsity structure and Gaussian elimination
- Duff, Erisman, et al.
- 1988
Citation Context ...ly dense. This means that for a given irreducible sparsity pattern, it is always possible to assign numerical values to the nonzeros in such a way that all entries of the inverse will be nonzero; see [38]. Nevertheless, it is often the case that many of the entries in the inverse of a sparse matrix are small in absolute value, thus making the approximation of A^{-1} with a sparse matrix possible. R... |