Results 1–10 of 15
Preconditioning techniques for large linear systems: A survey
J. Comput. Phys., 2002
Cited by 105 (5 self)
Abstract: This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
A Priori Sparsity Patterns For Parallel Sparse Approximate Inverse Preconditioners
1998
Cited by 53 (5 self)
Abstract: Parallel algorithms for computing sparse approximations to the inverse of a sparse matrix either use a prescribed sparsity pattern for the approximate inverse, or attempt to generate a good pattern as part of the algorithm. This paper demonstrates that for PDE problems, the patterns of powers of sparsified matrices (PSMs) can be used a priori as effective approximate inverse patterns, and that the additional effort of adaptive sparsity pattern calculations may not be required. PSM patterns are related to various other approximate inverse sparsity patterns through matrix graph theory and heuristics about the PDE's Green's function. A parallel implementation shows that PSM-patterned approximate inverses are significantly faster to construct than approximate inverses constructed adaptively, while often giving preconditioners of comparable quality.
Key words: preconditioned iterative methods, sparse approximate inverses, graph theory, parallel computing
AMS subject classifications: 65F10, ...
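The PSM idea summarized in this abstract can be sketched in a few lines: drop small entries of A, then take the sparsity pattern of a low power of what remains. The helper below is a hypothetical illustration of that idea on dense NumPy arrays (the drop rule and names are my assumptions, not the paper's code, which operates on sparse storage).

```python
import numpy as np

def sparsified_power_pattern(A, tau=0.1, k=2):
    """Boolean pattern of the k-th power of a sparsified matrix.

    Illustrative sketch of the PSM a-priori pattern: entries of A whose
    magnitude falls below tau * max|A| are dropped, the diagonal is kept,
    and the pattern of the k-th boolean power is returned.
    """
    # Sparsify: keep the diagonal plus entries above the drop tolerance.
    S = (np.abs(A) >= tau * np.abs(A).max()) | np.eye(A.shape[0], dtype=bool)
    # Boolean matrix power: nonzero structure of S^k.
    P = S.copy()
    for _ in range(k - 1):
        P = (P.astype(int) @ S.astype(int)) > 0
    return P
```

For a tridiagonal A, the k = 2 pattern is pentadiagonal: each application of S extends the reach of a row by one bandwidth, which is the graph-theoretic connection to powers mentioned in the abstract.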
Orderings for incomplete factorization preconditioning of nonsymmetric problems
SIAM J. Sci. Comput., 1999
Cited by 52 (11 self)
Abstract: Numerical experiments are presented showing the effect of reorderings on the convergence of preconditioned Krylov subspace methods for the solution of nonsymmetric linear systems. The preconditioners used in this study are different variants of incomplete factorizations. It is shown that certain reorderings for direct methods, such as reverse Cuthill–McKee, can be very beneficial. The benefit can be seen in the reduction of the number of iterations and also in measuring the deviation of the preconditioned operator from the identity.
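The reverse Cuthill–McKee ordering named in this abstract is a breadth-first search that visits neighbors in order of increasing degree and then reverses the result, shrinking matrix bandwidth. A minimal pure-Python sketch, assuming an adjacency-list graph (production codes add a pseudo-peripheral starting-node heuristic, omitted here):

```python
from collections import deque

def reverse_cuthill_mckee(adj):
    """Reverse Cuthill-McKee ordering of an undirected graph.

    `adj` maps each node to a list of its neighbors. BFS starts from a
    minimum-degree node, enqueues unvisited neighbors by increasing
    degree, and the final ordering is reversed.
    """
    order, seen = [], set()
    by_degree = sorted(adj, key=lambda v: len(adj[v]))
    for start in by_degree:            # loop handles disconnected graphs
        if start in seen:
            continue
        seen.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in sorted(adj[v], key=lambda u: len(adj[u])):
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    return order[::-1]
```

Relabeling a badly numbered path graph with this ordering brings every edge between consecutive positions, i.e. bandwidth 1, which is the kind of profile reduction that also helps the incomplete factorizations studied in the paper.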
An MPI implementation of the SPAI preconditioner on the T3E
Intl. J. High Perf. Comput. Appl., 1999
Cited by 24 (0 self)
Abstract: The authors describe and test spai_1.1, a parallel MPI implementation of the sparse approximate inverse (SPAI) preconditioner. They show that SPAI can be very effective for solving a set of very large and difficult problems on a Cray T3E. The results clearly show the value of SPAI (and approximate inverse methods in general) as a viable alternative to ILU-type methods when facing very large and difficult problems. The authors strengthen this conclusion by showing that spai_1.1 also has very good scaling behavior.
Parallel Implementation and Practical Use of Sparse Approximate Inverse Preconditioners With a Priori Sparsity Patterns
Int. J. High Perf. Comput. Appl., 2001
Cited by 20 (1 self)
Abstract: This paper describes and tests a parallel, message-passing code for constructing sparse approximate inverse preconditioners using Frobenius norm minimization. The sparsity patterns of the preconditioners are chosen as patterns of powers of sparsified matrices. Sparsification is necessary when powers of a matrix have a large number of nonzeros, making the approximate inverse computation expensive. For our test problems, the minimum solution time is achieved with approximate inverses having fewer than twice the number of nonzeros of the original matrix; the additional accuracy of denser approximate inverses does not compensate for the increased cost per iteration. The results lead to further understanding of how to use these methods and how well they work in practice. In addition, this paper describes programming techniques required for high performance, including one-sided communication, local coordinate numbering, and load repartitioning.
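Frobenius norm minimization, the construction named in this abstract, splits min ||AM − I||_F into one small least-squares problem per column of M, restricted to a prescribed pattern; the independence of the columns is what makes the method parallel. A dense NumPy sketch of that column-by-column formulation (an illustration of the general technique, not the paper's code):

```python
import numpy as np

def spai_frobenius(A, pattern):
    """Sparse approximate inverse M minimizing ||AM - I||_F columnwise.

    `pattern` is a boolean matrix; column j of M may be nonzero only
    where pattern[:, j] is True. Each column solves an independent
    small least-squares problem min ||A[:, J] m - e_j||_2.
    """
    n = A.shape[0]
    M = np.zeros((n, n))
    I = np.eye(n)
    for j in range(n):
        J = np.flatnonzero(pattern[:, j])   # allowed nonzeros in column j
        m_j, *_ = np.linalg.lstsq(A[:, J], I[:, j], rcond=None)
        M[J, j] = m_j
    return M
```

With a diagonal pattern this reproduces Jacobi-style scaling; with the full pattern and an invertible A it recovers the exact inverse. In practice the pattern (here, from powers of sparsified matrices) keeps each least-squares problem tiny.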
A Sparse Approximate Inverse Technique for Parallel Preconditioning of General Sparse Matrices
Appl. Math. Comput., 1998
Cited by 14 (6 self)
Abstract: A sparse approximate inverse technique is introduced to solve general sparse linear systems. The sparse approximate inverse is computed in factored form and used as a preconditioner with some Krylov subspace methods. The new technique is derived from a matrix decomposition algorithm for inverting dense nonsymmetric matrices. Several strategies and special data structures are proposed to implement the algorithm efficiently. Sparsity patterns of the factored inverse are exploited to reduce computational cost. Computing the factored sparse approximate inverse is cheaper than using techniques based on norm minimization. The new preconditioner possesses much greater inherent parallelism than traditional preconditioners based on incomplete LU factorizations. Numerical experiments show the effectiveness and efficiency of the new sparse approximate inverse preconditioner.
Numerical Experiments With Two Approximate Inverse Preconditioners
BIT, 1998
Cited by 14 (7 self)
Abstract: We present the results of numerical experiments aimed at comparing two recently proposed sparse approximate inverse preconditioners from the point of view of robustness, cost, and effectiveness. Results for a standard ILU preconditioner are also included. The numerical experiments were carried out on a Cray C98 vector processor.
Parallel Implementation and Performance Characteristics of Least Squares Sparse Approximate Inverse Preconditioners
Int. J. High Perf. Comput. Appl., 2000
Cited by 7 (0 self)
Abstract: This paper describes and tests a parallel, message-passing code for constructing sparse approximate inverse preconditioners using Frobenius norm minimization. The sparsity patterns of the preconditioners are chosen as patterns of powers of sparsified matrices. Sparsification is necessary when powers of a matrix have a large number of nonzeros, making the approximate inverse computation expensive. For our test problems, the minimum solution time is achieved with approximate inverses having fewer than twice the number of nonzeros of the original matrix; the additional accuracy of denser approximate inverses does not compensate for the increased cost per iteration. The results lead to further understanding of how to use these methods and how well they work in practice. In addition, this paper describes programming techniques required for high performance, including one-sided communication, local coordinate numbering, and load repartitioning.
Developments and Trends in the Parallel Solution of Linear Systems
Parallel Computing, 1999
Cited by 5 (0 self)
Abstract: In this review paper, we consider some important developments and trends in algorithm design for the solution of linear systems, concentrating on aspects that involve the exploitation of parallelism. We briefly discuss the solution of dense linear systems, before studying the solution of sparse equations by direct and iterative methods. We consider preconditioning techniques for iterative solvers and discuss some of the present research issues in this field.
Keywords: linear systems, dense matrices, sparse matrices, tridiagonal systems, parallelism, direct methods, iterative methods, Krylov methods, preconditioning
AMS(MOS) subject classifications: 65F05, 65F50