Results 1–10 of 21
Preconditioning techniques for large linear systems: A survey
 J. Comput. Phys.
, 2002
Abstract
Cited by 164 (5 self)
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
A Priori Sparsity Patterns For Parallel Sparse Approximate Inverse Preconditioners
, 1998
Abstract
Cited by 69 (6 self)
Parallel algorithms for computing sparse approximations to the inverse of a sparse matrix either use a prescribed sparsity pattern for the approximate inverse, or attempt to generate a good pattern as part of the algorithm. This paper demonstrates that for PDE problems, the patterns of powers of sparsified matrices (PSMs) can be used a priori as effective approximate inverse patterns, and that the additional effort of adaptive sparsity pattern calculations may not be required. PSM patterns are related to various other approximate inverse sparsity patterns through matrix graph theory and heuristics about the PDE's Green's function. A parallel implementation shows that PSM-patterned approximate inverses are significantly faster to construct than approximate inverses constructed adaptively, while often giving preconditioners of comparable quality. Key words. preconditioned iterative methods, sparse approximate inverses, graph theory, parallel computing AMS subject classifications. 65F10, ...
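As an illustration of the PSM idea described in this abstract, the following Python sketch (my own, not code from the paper; the row-relative drop tolerance is an assumption) builds the pattern of a power of a sparsified matrix with SciPy:

```python
import numpy as np
import scipy.sparse as sp

def psm_pattern(A, drop_tol=0.1, power=2):
    """Pattern of a power of a sparsified matrix (PSM).

    Sparsification rule (an assumption for this sketch): in row i,
    keep a_ij only if |a_ij| >= drop_tol * (largest |a_ik| in row i).
    The returned matrix holds ones on the pattern of S**power.
    """
    A = sp.csr_matrix(A)
    row_max = abs(A).max(axis=1).toarray().ravel()
    rows, cols = [], []
    for i in range(A.shape[0]):
        for k in range(A.indptr[i], A.indptr[i + 1]):
            if abs(A.data[k]) >= drop_tol * row_max[i]:
                rows.append(i)
                cols.append(A.indices[k])
    S = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=A.shape)
    P = S
    for _ in range(power - 1):
        P = P @ S          # structural product: pattern of the next power
        P.data[:] = 1.0    # keep it a 0/1 pattern matrix
    return P

# Example: 1-D Laplacian; squaring turns the tridiagonal pattern pentadiagonal.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(10, 10))
print(psm_pattern(A, power=1).nnz, psm_pattern(A, power=2).nnz)  # prints 28 44
```

For higher powers the pattern (and the cost of the least-squares solves that use it) grows quickly, which is exactly why the sparsification step matters.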
Orderings for incomplete factorization preconditioning of nonsymmetric problems
 SIAM J. Sci. Comput.
, 1999
Abstract
Cited by 54 (11 self)
Numerical experiments are presented whereby the effect of reorderings on the convergence of preconditioned Krylov subspace methods for the solution of nonsymmetric linear systems is shown. The preconditioners used in this study are different variants of incomplete factorizations. It is shown that certain reorderings for direct methods, such as reverse Cuthill–McKee, can be very beneficial. The benefit can be seen in the reduction of the number of iterations and also in measuring the deviation of the preconditioned operator from the identity.
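The reverse Cuthill–McKee reordering studied in this paper is available in SciPy; the following illustrative sketch (not from the paper; the random test matrix is an assumption) shows its typical bandwidth-reducing effect:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    """Largest |i - j| over the nonzeros of a sparse matrix."""
    A = sp.coo_matrix(A)
    return int(np.abs(A.row - A.col).max())

# Random matrix with a symmetric pattern (seed fixed for reproducibility).
n = 200
A = sp.random(n, n, density=0.02, format="csr", random_state=0)
A = sp.csr_matrix(A + A.T + sp.eye(n))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_rcm = A[perm, :][:, perm]   # symmetric permutation of rows and columns

print(bandwidth(A), "->", bandwidth(A_rcm))   # RCM shrinks the bandwidth
```

A smaller bandwidth tends to reduce fill in the incomplete factors and, as the paper reports, can also improve the convergence of the preconditioned Krylov iteration.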
Parallel Implementation and Practical Use of Sparse Approximate Inverse Preconditioners With a Priori Sparsity Patterns
 Int. J. High Perf. Comput. Appl.
, 2001
Abstract
Cited by 27 (2 self)
This paper describes and tests a parallel, message passing code for constructing sparse approximate inverse preconditioners using Frobenius norm minimization. The sparsity patterns of the preconditioners are chosen as patterns of powers of sparsified matrices. Sparsification is necessary when powers of a matrix have a large number of nonzeros, making the approximate inverse computation expensive. For our test problems, the minimum solution time is achieved with approximate inverses with fewer than twice the number of nonzeros of the original matrix. Additional accuracy is not compensated by the increased cost per iteration. The results lead to further understanding of how to use these methods and how well these methods work in practice. In addition, this paper describes programming techniques required for high performance, including one-sided communication, local coordinate numbering, and load repartitioning.
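The Frobenius norm minimization mentioned here decouples into one small, independent least-squares problem per column of the approximate inverse, which is what makes the construction so parallel. A minimal dense-subproblem sketch (illustrative only, not the paper's code; a production SPAI implementation also restricts the rows and works on local submatrices):

```python
import numpy as np
import scipy.sparse as sp

def frobenius_spai(A, pattern):
    """Right approximate inverse M minimizing ||A M - I||_F over a
    fixed sparsity pattern (illustrative dense-subproblem version).

    The Frobenius norm splits column-wise: for column j, with J the
    allowed row indices of m_j, solve  min ||A[:, J] m - e_j||_2.
    """
    A = sp.csc_matrix(A)
    pattern = sp.csc_matrix(pattern)
    n = A.shape[0]
    cols = []
    for j in range(n):
        J = pattern.indices[pattern.indptr[j]:pattern.indptr[j + 1]]
        e = np.zeros(n)
        e[j] = 1.0
        m, *_ = np.linalg.lstsq(A[:, J].toarray(), e, rcond=None)
        col = np.zeros(n)
        col[J] = m
        cols.append(col)
    return sp.csc_matrix(np.column_stack(cols))

# Example: use the pattern of A itself on a 1-D Laplacian.
n = 10
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
M = frobenius_spai(A, pattern=abs(A) > 0)
print(np.linalg.norm((A @ M).toarray() - np.eye(n)))
```

Because each column solve touches only its own data, the columns can be distributed across processes with no communication during the minimization itself.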
An MPI implementation of the SPAI preconditioner on the T3E
 Int. J. High Perf. Comput. Appl.
, 1999
Abstract
Cited by 24 (0 self)
The authors describe and test spai_1.1, a parallel MPI implementation of the sparse approximate inverse (SPAI) preconditioner. They show that SPAI can be very effective for solving a set of very large and difficult problems on a Cray T3E. The results clearly show the value of SPAI (and approximate inverse methods in general) as a viable alternative to ILU-type methods when facing very large and difficult problems. The authors strengthen this conclusion by showing that spai_1.1 also has very good scaling behavior.
A Sparse Approximate Inverse Technique for Parallel Preconditioning of General Sparse Matrices
 Appl. Math. Comput.
, 1998
Abstract
Cited by 18 (7 self)
A sparse approximate inverse technique is introduced to solve general sparse linear systems. The sparse approximate inverse is computed in factored form and used as a preconditioner with Krylov subspace methods. The new technique is derived from a matrix decomposition algorithm for inverting dense nonsymmetric matrices. Several strategies and special data structures are proposed to implement the algorithm efficiently. Sparsity patterns of the factored inverse are exploited to reduce computational cost. Computing the factored sparse approximate inverse is cheaper than techniques based on norm minimization. The new preconditioner possesses much greater inherent parallelism than traditional preconditioners based on incomplete LU factorizations. Numerical experiments are used to show the effectiveness and efficiency of the new sparse approximate inverse preconditioner.
Numerical Experiments With Two Approximate Inverse Preconditioners
 BIT
, 1998
Abstract
Cited by 17 (7 self)
We present the results of numerical experiments aimed at comparing two recently proposed sparse approximate inverse preconditioners from the point of view of robustness, cost, and effectiveness. Results for a standard ILU preconditioner are also included. The numerical experiments were carried out on a Cray C98 vector processor.
Prospects for CFD on Petaflops Systems
 CFD Review
, 1997
Abstract
Cited by 12 (2 self)
Abstract. With teraflops-scale computational modeling expected to be routine by 2003–04, under the terms of the Accelerated Strategic Computing Initiative (ASCI) of the U.S. Department of Energy, and with teraflops-capable platforms already available to a small group of users, attention naturally focuses on the next symbolically important milestone, computing at rates of 10^15 floating point operations per second, or “petaflop/s”. For architectural designs that are in any sense extrapolations of today’s, petaflops-scale computing will require approximately one-million-fold instruction-level concurrency. Given that cost-effective one-thousand-fold concurrency is challenging in practical computational fluid dynamics simulations today, algorithms are among the many possible bottlenecks to CFD on petaflops systems. After a general outline of the problems and prospects of petaflops computing, we examine the issue of algorithms for PDE computations in particular. A back-of-the-envelope parallel complexity analysis focuses on the latency of global synchronization steps in the implicit algorithm. We argue that the latency of synchronization steps is a fundamental, but addressable, challenge for PDE computations with static data structures, which are primarily determined by grids. We provide recent results with encouraging scalability for parallel implicit Euler simulations using the Newton-Krylov-Schwarz solver in the PETSc software library. The prospects for PDE simulations with dynamically evolving data structures are far less clear. Key words. Parallel scientific computing, computational fluid dynamics, petaflops architectures
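A back-of-the-envelope synchronization-latency estimate of the kind this abstract describes can be reproduced in a few lines (all parameter values below are hypothetical illustrations, not figures from the paper):

```python
from math import ceil, log2

# Hypothetical, illustrative parameters (assumptions, not the paper's data):
P = 10**6        # processor count for petaflops-scale concurrency
alpha = 1e-6     # latency of one message hop, in seconds
reductions = 2   # global inner products per Krylov iteration
iters = 1000     # Krylov iterations in the implicit solve

# A tree-based all-reduce needs about ceil(log2(P)) latency-bound steps,
# so the time spent purely in global synchronization is:
t_sync = iters * reductions * ceil(log2(P)) * alpha
print(f"synchronization latency alone: {t_sync:.3f} s")  # 0.040 s
```

The point of such estimates is that the log2(P) latency term is independent of problem size, so at million-way concurrency the global reductions in a Krylov method can dominate unless they are amortized or restructured.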
Parallel Implementation and Performance Characteristics of Least Squares Sparse Approximate Inverse Preconditioners
 Int. J. High Perf. Comput. Appl.
, 2000
Abstract
Cited by 7 (0 self)
This paper describes and tests a parallel, message passing code for constructing sparse approximate inverse preconditioners using Frobenius norm minimization. The sparsity patterns of the preconditioners are chosen as patterns of powers of sparsified matrices. Sparsification is necessary when powers of a matrix have a large number of nonzeros, making the approximate inverse computation expensive. For our test problems, the minimum solution time is achieved with approximate inverses with fewer than twice the number of nonzeros of the original matrix. Additional accuracy is not compensated by the increased cost per iteration. The results lead to further understanding of how to use these methods and how well these methods work in practice. In addition, this paper describes programming techniques required for high performance, including one-sided communication, local coordinate numbering, and load repartitioning.