Results 1-10 of 37
Preconditioning techniques for large linear systems: A survey
J. Comput. Phys., 2002
Cited by 103 (5 self)
Abstract: This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
ARMS: An Algebraic Recursive Multilevel Solver for general sparse linear systems
Numer. Linear Algebra Appl., 1999
Cited by 46 (24 self)
Abstract: This paper presents a general preconditioning method based on a multilevel partial solution approach. The basic step in constructing the preconditioner is to separate the initial points into two subsets. The first subset, which can be termed "coarse", is obtained by using "block" independent sets, or "aggregates". Two aggregates have no coupling between them, but nodes in the same aggregate may be coupled. The nodes not in the coarse set form what might be called the "fringe" set. The idea of the method is to form the Schur complement associated with the fringe set. This leads to a natural block LU factorization which can be used as a preconditioner for the system. This system is then solved recursively, using as a preconditioner the factorization obtained at the next level. Unlike other available multilevel preconditioners, iterations between levels are allowed. One interesting aspect of the method is that it provides a common framework for many other technique...
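The coarse/fringe splitting and the resulting block LU solve can be illustrated with a small dense sketch (the partition sizes, matrix, and function name here are illustrative; ARMS itself works on sparse block independent sets and applies the factorization recursively across levels rather than solving exactly):

```python
import numpy as np

# Illustrative 2x2 block partition A = [[B, F], [E, C]]:
# B couples the "coarse" (independent-set) nodes, C the "fringe" nodes.
rng = np.random.default_rng(0)
n1, n2 = 3, 2
A = rng.standard_normal((n1 + n2, n1 + n2)) + 5.0 * np.eye(n1 + n2)
B, F = A[:n1, :n1], A[:n1, n1:]
E, C = A[n1:, :n1], A[n1:, n1:]

# Schur complement of the coarse block: S = C - E B^{-1} F.
S = C - E @ np.linalg.solve(B, F)

# Block LU factorization  A = [[I, 0], [E B^{-1}, I]] @ [[B, F], [0, S]];
# applying it as a preconditioner costs one solve with B and one with S.
def apply_block_lu_solve(b):
    b1, b2 = b[:n1], b[n1:]
    y1 = np.linalg.solve(B, b1)            # solve with the coarse block B
    x2 = np.linalg.solve(S, b2 - E @ y1)   # solve with the Schur complement
    x1 = y1 - np.linalg.solve(B, F @ x2)   # back-substitution
    return np.concatenate([x1, x2])

b = rng.standard_normal(n1 + n2)
x = apply_block_lu_solve(b)
# With exact solves this reproduces A^{-1} b; ARMS instead treats the
# Schur system approximately/recursively at the next level.
assert np.allclose(A @ x, b)
```

In the multilevel setting the Schur system `S x2 = ...` is itself repartitioned and preconditioned the same way, which is where the recursion (and the paper's inter-level iterations) enters.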
Preconditioning highly indefinite and nonsymmetric matrices
SIAM J. Sci. Comput., 2000
Cited by 40 (4 self)
Abstract: Standard preconditioners, like incomplete factorizations, perform well when the coefficient matrix is diagonally dominant, but often fail on general sparse matrices. We experiment with nonsymmetric permutations and scalings aimed at placing large entries on the diagonal in the context of preconditioning for general sparse matrices. The permutations and scalings are those developed by Olschowka and Neumaier [Linear Algebra Appl., 240 (1996), pp. 131–151] and by Duff and ...
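The underlying idea, choosing a column permutation that maximizes the product of diagonal magnitudes, can be sketched with a dense assignment solver (a toy stand-in: the example matrix and the finite penalty for zero entries are illustrative, and production codes work directly on the sparse bipartite graph; the accompanying scalings are omitted here):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

A = np.array([[0.0, 3.0, 1.0],
              [2.0, 0.1, 0.0],
              [0.5, 0.0, 4.0]])

# Maximizing prod_i |a_{i,sigma(i)}| is the same as minimizing
# sum_i -log|a_{i,sigma(i)}|; structural zeros get a large penalty
# so they are never chosen as pivots.
cost = np.full(A.shape, 1e6)
nz = np.abs(A) > 0
cost[nz] = -np.log(np.abs(A[nz]))
rows, cols = linear_sum_assignment(cost)

# Apply the column permutation: entry A[i, cols[i]] lands on the diagonal.
Ap = A[:, cols]
print(np.diag(Ap))   # large entries now on the diagonal: [3. 2. 4.]
```

The permuted matrix `Ap` has a zero-free, large-magnitude diagonal, which is exactly the property that makes incomplete factorizations on it far more robust.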
pARMS: a Parallel Version of the Algebraic Recursive Multilevel Solver
2001
Cited by 26 (12 self)
Abstract: A parallel version of the Algebraic Recursive Multilevel Solver (ARMS) is developed for distributed computing environments. The method adopts the general framework of distributed sparse matrices and relies on solving the resulting distributed Schur complement system. Numerical experiments are presented which compare these approaches on regularly and irregularly structured problems.
Preconditioned Krylov Subspace Methods for Solving Nonsymmetric Matrices from CFD Applications
Comput. Methods Appl. Mech. Engrg., 1999
Cited by 21 (12 self)
Abstract: We conduct an experimental study of the behavior of several preconditioned iterative methods for solving nonsymmetric matrices arising from computational fluid dynamics (CFD) applications. The preconditioned iterative methods consist of Krylov subspace accelerators and a powerful general-purpose multilevel block ILU (BILUM) preconditioner. The BILUM preconditioner and an enhanced version of it are slightly modified versions of the originally proposed preconditioners. They are used in combination with different Krylov subspace methods. We test three popular transpose-free Krylov subspace methods: BiCGSTAB, GMRES, and TFQMR. Numerical experiments, using several sets of test matrices arising from various relevant CFD applications, are reported.
Key words: Multilevel preconditioner, Krylov subspace methods, nonsymmetric matrices, CFD applications.
AMS subject classifications: 65F10, 65F50, 65N06, 65N55.
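BILUM itself is not available in standard libraries, but the accelerator-plus-preconditioner pattern can be reproduced with SciPy's single-level ILU as a stand-in preconditioner and two of the transpose-free Krylov methods named above (the model problem, drop tolerance, and fill factor below are illustrative choices, not the paper's test set):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Nonsymmetric 1-D convection-diffusion model problem (tridiagonal).
n = 200
c = 0.3  # convection term makes the matrix nonsymmetric
A = sp.diags([-(1.0 + c) * np.ones(n - 1),
              2.0 * np.ones(n),
              -(1.0 - c) * np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csc")
b = np.ones(n)

# Incomplete LU factorization wrapped as a preconditioner operator M,
# a single-level stand-in for the multilevel BILUM preconditioner.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

# Two of the transpose-free Krylov accelerators tested in the paper.
x_g, info_g = spla.gmres(A, b, M=M)
x_b, info_b = spla.bicgstab(A, b, M=M)
assert info_g == 0 and info_b == 0   # 0 means converged
```

Swapping the preconditioner (or the accelerator) is a one-line change, which is precisely the kind of pairwise comparison the paper's experiments perform.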
Domain Decomposition and Multi-Level Type Techniques for General Sparse Linear Systems
1998
Cited by 17 (16 self)
Abstract: Domain decomposition and multilevel techniques are often formulated for linear systems that arise from the solution of elliptic-type partial differential equations. In this paper, generalizations of these techniques to irregularly structured sparse linear systems are considered. A common approach for deriving successful preconditioners is to resort to Schur complements. In particular, we discuss a multilevel domain decomposition-type algorithm for the iterative solution of large sparse linear systems based on independent subsets of nodes. We also discuss a Schur complement technique that utilizes incomplete LU factorizations of local matrices.
Key words: Schur complement techniques; Incomplete LU factorization; Schwarz iterations; Multi-elimination; Multilevel ILU preconditioners; Krylov subspace methods.
On the Approximate Cyclic Reduction Preconditioner
SIAM J. Sci. Comput., 2000
Cited by 16 (3 self)
Abstract: We present a preconditioning method for the iterative solution of large sparse systems of equations. The preconditioner is based on ideas from both ILU preconditioning and multigrid. The resulting preconditioning technique requires only the matrix. A multilevel structure is obtained by using maximal independent sets for graph coarsening. A Schur complement approximation is constructed using a sequence of point Gaussian elimination steps. The resulting preconditioner has a transparent modular structure similar to the algorithmic structure of a multigrid V-cycle.
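The graph-coarsening step via maximal independent sets can be sketched greedily (the dense boolean adjacency and the natural visiting order are simplifications; actual implementations traverse the sparse graph and use smarter node orderings):

```python
import numpy as np

def maximal_independent_set(adj):
    """Greedy maximal independent set of an undirected graph.

    adj: boolean adjacency matrix (symmetric, zero diagonal).
    Returns selected node indices: no two selected nodes are adjacent,
    and every unselected node is adjacent to some selected node.
    """
    n = adj.shape[0]
    selected, blocked = [], np.zeros(n, dtype=bool)
    for v in range(n):              # visiting order is a tunable heuristic
        if not blocked[v]:
            selected.append(v)
            blocked[v] = True
            blocked[adj[v]] = True  # neighbours can no longer be chosen
    return selected

# Sparsity graph of a 1-D chain (tridiagonal matrix): 0-1-2-3-4-5.
n = 6
adj = np.zeros((n, n), dtype=bool)
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = True

print(maximal_independent_set(adj))   # greedy order picks [0, 2, 4]
```

The selected nodes can be eliminated independently (they are mutually uncoupled), and the Schur complement on the remaining nodes becomes the next, coarser level.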
Diagonal Threshold Techniques in Robust Multi-Level ILU Preconditioners for General Sparse Linear Systems
Numer. Linear Algebra Appl., 1998
Cited by 14 (10 self)
Abstract: This paper introduces techniques based on diagonal threshold tolerance for developing multi-elimination and multilevel incomplete LU (ILUM) factorization preconditioners for solving general sparse linear systems. Existing heuristics based solely on the adjacency graph of the matrices have been used to find independent sets, and are not robust for matrices arising from certain applications in which the matrices may have small or zero diagonals. New heuristic strategies, based on both the adjacency graph and the diagonal values of the matrices, are introduced for finding independent sets. Analytical bounds for the factorization and preconditioning errors are obtained for a two-level analysis. These bounds provide useful information for designing robust ILUM preconditioners. Extensive numerical experiments are conducted to compare the robustness and efficiency of the various heuristic strategies.
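A minimal sketch of the diagonal-threshold idea, assuming a simple greedy traversal and a single relative tolerance (the function name, tolerance value, and example matrix are illustrative; the paper's heuristics are considerably more refined):

```python
import numpy as np

def diag_threshold_independent_set(A, tol=0.1):
    """Greedy independent set restricted to rows with acceptable diagonals.

    Node i is eligible only if |a_ii| >= tol * max_j |a_ij|, so rows with
    small or zero diagonals are deferred to the next-level (Schur
    complement) system instead of being eliminated first.
    """
    n = A.shape[0]
    absA = np.abs(A)
    eligible = np.diag(absA) >= tol * absA.max(axis=1)
    adj = (A != 0) | (A.T != 0)        # symmetrized sparsity graph
    np.fill_diagonal(adj, False)
    selected, blocked = [], np.zeros(n, dtype=bool)
    for v in range(n):
        if eligible[v] and not blocked[v]:
            selected.append(v)
            blocked[v] = True
            blocked[adj[v]] = True
    return selected

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 0.0, 2.0],   # zero diagonal: never enters the set
              [0.0, 2.0, 5.0]])
print(diag_threshold_independent_set(A))   # picks [0, 2], skips row 1
```

Eliminating only well-pivoted rows at each level is what keeps the resulting ILUM factorization stable when the original matrix has weak or zero diagonals.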
A Sparse Approximate Inverse Technique for Parallel Preconditioning of General Sparse Matrices
Appl. Math. Comput., 1998
Cited by 14 (6 self)
Abstract: A sparse approximate inverse technique is introduced to solve general sparse linear systems. The sparse approximate inverse is computed in factored form and used as a preconditioner with Krylov subspace methods. The new technique is derived from a matrix decomposition algorithm for inverting dense nonsymmetric matrices. Several strategies and special data structures are proposed to implement the algorithm efficiently. Sparsity patterns of the factored inverse are exploited to reduce computational cost. Computing the factored sparse approximate inverse is cheaper than techniques based on norm minimization. The new preconditioner possesses much greater inherent parallelism than traditional preconditioners based on incomplete LU factorizations. Numerical experiments demonstrate the effectiveness and efficiency of the new sparse approximate inverse preconditioner.
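The factored-form idea can be illustrated with a dense biconjugation sketch that produces A^{-1} = Z D^{-1} W^T, with optional dropping of small entries; this is a generic illustration of factored approximate inverses, not necessarily the specific decomposition algorithm the paper starts from:

```python
import numpy as np

def factored_inverse(A, drop_tol=0.0):
    """Biconjugation: build unit upper-triangular Z, W and a diagonal d
    with W^T A Z = diag(d), so that A^{-1} = Z diag(1/d) W^T.
    Dropping small entries (drop_tol > 0) yields a sparse approximate
    inverse in factored form.
    """
    n = A.shape[0]
    Z, W = np.eye(n), np.eye(n)
    d = np.empty(n)
    for i in range(n):
        d[i] = W[:, i] @ A @ Z[:, i]          # pivot (assumed nonzero)
        for j in range(i + 1, n):
            # A-biorthogonalize later columns against column i.
            Z[:, j] -= ((W[:, i] @ A @ Z[:, j]) / d[i]) * Z[:, i]
            W[:, j] -= ((W[:, j] @ A @ Z[:, i]) / d[i]) * W[:, i]
        if drop_tol > 0.0:                    # sparsification step
            Z[np.abs(Z) < drop_tol] = 0.0
            W[np.abs(W) < drop_tol] = 0.0
    return Z, W, d

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)
Z, W, d = factored_inverse(A)
# With no dropping the factored product is the exact inverse:
assert np.allclose(Z @ np.diag(1.0 / d) @ W.T, np.linalg.inv(A))
```

Applying the preconditioner is then two sparse triangular-matrix products and a diagonal scaling, with no triangular solves, which is the source of the parallelism the abstract highlights.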