Results 1–10 of 101
A sparse approximate inverse preconditioner for nonsymmetric linear systems
SIAM J. Sci. Comput., 1998
Abstract

Cited by 192 (22 self)
This paper is concerned with a new approach to preconditioning for large, sparse linear systems. A procedure for computing an incomplete factorization of the inverse of a nonsymmetric matrix is developed, and the resulting factorized sparse approximate inverse is used as an explicit preconditioner for conjugate gradient–type methods. Some theoretical properties of the preconditioner are discussed, and numerical experiments on test matrices from the Harwell–Boeing collection and from Tim Davis’s collection are presented. Our results indicate that the new preconditioner is cheaper to construct than other approximate inverse preconditioners. Furthermore, the new technique ensures convergence rates of the preconditioned iteration which are comparable with those obtained with standard implicit preconditioners.
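The factorized approximate inverse described in this abstract is built from a biconjugation process. As a rough illustration, here is the exact (dense, no dropping) A-biconjugation in NumPy; it produces Z, W, and a diagonal d with W^T A Z = diag(d), so that inv(A) = Z diag(1/d) W^T. The dense formulation and the function name are illustrative only; the actual preconditioner drops small entries to keep Z and W sparse.

```python
import numpy as np

def biconjugation(A):
    """Exact A-biconjugation sketch (no dropping, so Z and W come out dense).

    Returns Z, W, d with W.T @ A @ Z = diag(d), hence
    inv(A) = Z @ diag(1/d) @ W.T.
    """
    n = A.shape[0]
    Z = np.eye(n)   # columns z_i
    W = np.eye(n)   # columns w_i
    d = np.empty(n)
    for i in range(n):
        d[i] = W[:, i] @ A @ Z[:, i]          # pivot; assumed nonzero
        for j in range(i + 1, n):
            # make z_j A-orthogonal to w_i, and w_j A^T-orthogonal to z_i
            Z[:, j] -= ((W[:, i] @ A @ Z[:, j]) / d[i]) * Z[:, i]
            W[:, j] -= ((Z[:, i] @ A.T @ W[:, j]) / d[i]) * W[:, i]
    return Z, W, d
```

Dropping small entries of Z and W during the process yields the sparse factors; applying the resulting preconditioner then costs only two sparse matrix–vector products and a diagonal scaling, which is why it is attractive as an explicit preconditioner.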
Preconditioning techniques for large linear systems: A survey
J. Comput. Phys., 2002
Abstract

Cited by 189 (5 self)
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
Approximate Inverse Preconditioners Via Sparse-Sparse Iterations
1998
Abstract

Cited by 87 (17 self)
The standard incomplete LU (ILU) preconditioners often fail for general sparse indefinite matrices because they give rise to 'unstable' factors L and U. In such cases, it may be attractive to approximate the inverse of the matrix directly. This paper focuses on approximate inverse preconditioners based on minimizing ‖I − AM‖_F, where AM is the preconditioned matrix. An iterative descent-type method is used to approximate each column of the inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., with 'sparse-matrix by sparse-vector' operations. Numerical dropping is applied to maintain sparsity; compared to previous methods, this is a natural way to determine the sparsity pattern of the approximate inverse. This paper describes Newton, 'global', and column-oriented algorithms, and discusses options for initial guesses, self-preconditioning, and dropping strategies. Some limited theoretical results on the properties and convergence of approxima...
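The column-oriented descent idea can be sketched in a few lines of dense NumPy: each column m_j of M is improved by minimal-residual steps on ‖e_j − A m_j‖_2, optionally followed by numerical dropping. The function name, the diagonal initial guess, and the crude keep-k dropping rule are illustrative choices, not the paper's exact algorithm, and the matrix products here are dense where the paper would use sparse mode.

```python
import numpy as np

def mr_column(A, j, nsteps=5, keep=None):
    # One column of an approximate inverse M: minimize ||e_j - A m||_2 by
    # minimal-residual descent. `keep`, if given, retains only the `keep`
    # largest-magnitude entries of m after each step (numerical dropping).
    n = A.shape[0]
    m = np.zeros(n)
    m[j] = 1.0 / A[j, j]                     # diagonal initial guess (illustrative)
    for _ in range(nsteps):
        r = -A @ m
        r[j] += 1.0                          # r = e_j - A m
        Ar = A @ r
        m = m + ((r @ Ar) / (Ar @ Ar)) * r   # optimal step length along r
        if keep is not None:
            m[np.argsort(np.abs(m))[:-keep]] = 0.0   # drop small entries
    return m
```

In the paper, the products `A @ m` and `A @ r` are carried out in sparse mode, so each column costs only sparse-matrix-by-sparse-vector work, and the dropping determines the sparsity pattern adaptively.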
Krylov subspace methods on supercomputers
SIAM J. Sci. Stat. Comput., 1989
Abstract

Cited by 77 (4 self)
This paper presents a short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to deriving effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. We describe in detail a few approaches consisting of implementing efficient forward and backward triangular solutions. Then we discuss polynomial preconditioning as an alternative to standard incomplete factorization techniques. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. We give an overview of these ideas and others and attempt to comment on their effectiveness or potential for different types of architectures.
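The triangular-solve bottleneck mentioned in this abstract is commonly attacked by level scheduling: rows of the sparse lower-triangular factor are grouped into levels such that every row depends only on rows in earlier levels, so all rows within one level can be eliminated concurrently during forward substitution. A minimal sketch, where the dict-based sparsity representation is an illustrative choice:

```python
def level_schedule(lower_pattern, n):
    # lower_pattern: row i -> list of column indices j < i with L[i, j] != 0.
    # Returns the rows grouped by level; rows in the same level are mutually
    # independent in the forward substitution and can be solved in parallel.
    level = [0] * n
    for i in range(n):
        deps = lower_pattern.get(i, [])
        if deps:
            level[i] = 1 + max(level[j] for j in deps)
    groups = {}
    for i, lv in enumerate(level):
        groups.setdefault(lv, []).append(i)
    return [groups[lv] for lv in sorted(groups)]
```

Forward substitution then loops over levels sequentially but processes all rows of a level concurrently; the backward solve is scheduled the same way on the upper factor.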
A review on the inverse of symmetric tridiagonal and block tridiagonal matrices
SIAM J. Matrix Anal. Appl., 1992
Abstract

Cited by 68 (1 self)
In this paper some results are reviewed concerning the characterization of inverses of symmetric tridiagonal and block tridiagonal matrices, as well as results concerning the decay of the elements of the inverses. These results are obtained by relating the elements of the inverses to elements of the Cholesky decompositions of these matrices. This gives explicit formulas for the elements of the inverse and gives rise to stable algorithms to compute them. These expressions also lead to bounds for the decay of the elements of the inverse for problems arising from discretization schemes. Key words: block tridiagonal matrices, decay of elements. AMS(MOS) subject classifications: 15A09, 65F50.
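The explicit formulas alluded to in this abstract take a product form for a symmetric tridiagonal T: with leading and trailing principal-minor determinants computed by two-term recurrences, every entry of inv(T) is a ratio of such determinants. A NumPy sketch of the classical formula (e.g. Usmani's), given as an illustration rather than the paper's exact derivation:

```python
import numpy as np

def tridiag_inverse(a, b):
    # T symmetric tridiagonal with diagonal a (length n) and
    # off-diagonal b (length n-1), i.e. T[i, i+1] = T[i+1, i] = b[i].
    # th[k], ph[k] are determinants of leading / trailing principal minors;
    # 0-based: inv(T)[i, j] = (-1)**(i+j) * prod(b[i:j]) * th[i] * ph[j+1] / th[n]
    # for i <= j, and inv(T) is symmetric.
    n = len(a)
    th = np.empty(n + 1)
    th[0], th[1] = 1.0, a[0]
    for k in range(2, n + 1):
        th[k] = a[k - 1] * th[k - 1] - b[k - 2] ** 2 * th[k - 2]
    ph = np.empty(n + 1)
    ph[n], ph[n - 1] = 1.0, a[n - 1]
    for k in range(n - 2, -1, -1):
        ph[k] = a[k] * ph[k + 1] - b[k] ** 2 * ph[k + 2]
    Tinv = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):
            Tinv[i, j] = (-1) ** (i + j) * np.prod(b[i:j]) * th[i] * ph[j + 1] / th[n]
            Tinv[j, i] = Tinv[i, j]
    return Tinv
```

The ratios th[k] / th[k - 1] are exactly the pivots of the Cholesky factorization, which is how the paper ties the inverse entries, and their decay, to the factorization.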
ILUM: A Multi-Elimination ILU Preconditioner for General Sparse Matrices
SIAM J. Sci. Comput., 1999
Abstract

Cited by 59 (12 self)
Standard preconditioning techniques based on incomplete LU (ILU) factorizations offer a limited degree of parallelism, in general. A few of the alternatives advocated so far consist of either using some form of polynomial preconditioning, or applying the usual ILU factorization to a matrix obtained from a multicolor ordering. In this paper we present an incomplete factorization technique based on independent set orderings and multicoloring. We note that in order to improve robustness, it is necessary to allow the preconditioner to have an arbitrarily high accuracy, as is done with ILUs based on threshold techniques. The ILUM factorization described in this paper is in this category. It can be viewed as a multifrontal version of a Gaussian elimination procedure with threshold dropping which has a high degree of potential parallelism. The emphasis is on methods that deal specifically with general unstructured sparse matrices such as those arising from finite element methods on un...
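The independent set orderings underlying ILUM can be produced by a simple greedy traversal of the matrix's adjacency graph: unknowns in the set have no couplings among themselves and are eliminated simultaneously, after which the process recurses on the reduced system. A minimal sketch, where the graph representation and traversal order are illustrative choices:

```python
def greedy_independent_set(adj):
    # adj: node -> set of neighbours, i.e. j in adj[i] iff A[i, j] or A[j, i] != 0.
    chosen, excluded = [], set()
    for v in sorted(adj):            # any traversal order works
        if v not in excluded:
            chosen.append(v)         # eliminate v in the current level
            excluded.update(adj[v])  # its neighbours wait for the next level
    return chosen
```

Because the chosen unknowns are pairwise uncoupled, they form a diagonal block that can be eliminated in parallel; ILUM applies this recursively, with threshold dropping on the successive reduced systems.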
BILUM: Block versions of multi-elimination and multilevel ILU preconditioner for general sparse linear systems
SIAM J. Sci. Comput., 1999
Abstract

Cited by 54 (29 self)
We introduce block versions of the multi-elimination incomplete LU (ILUM) factorization preconditioning technique for solving general sparse unstructured linear systems. These preconditioners have a multilevel structure and, for certain types of problems, may exhibit properties that are typically enjoyed by multigrid methods. Several heuristic strategies for forming blocks of independent sets are introduced and their relative merits are discussed. The advantages of block ILUM over point ILUM include increased robustness and efficiency. We compare several versions of the block ILUM, point ILUM, and the dual-threshold-based ILUT preconditioners. In particular, tests with some convection-diffusion problems show that it may be possible to obtain convergence that is nearly independent of the Reynolds number as well as of the grid size.
Bounds for the entries of matrix functions with applications to preconditioning
BIT, 1999
Abstract

Cited by 44 (15 self)
Let A be a symmetric matrix and let f be a smooth function defined on an interval containing the spectrum of A. Generalizing a well-known result of Demko, Moss and Smith on the decay of the inverse, we show that when A is banded, the entries of f(A) are bounded in an exponentially decaying manner away from the main diagonal. Bounds obtained by representing the entries of f(A) in terms of Riemann–Stieltjes integrals and by approximating such integrals by Gaussian quadrature rules are also considered. Applications of these bounds to preconditioning are suggested and illustrated by a few numerical examples.
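The decay phenomenon is easy to observe numerically: for a banded symmetric A, form f(A) through the spectral decomposition and watch the entries fall off away from the diagonal. A small NumPy illustration with f(x) = 1/x (the Demko–Moss–Smith case) on tridiag(-1, 4, -1); the matrix and function are illustrative choices:

```python
import numpy as np

def matrix_function(A, f):
    # f(A) for symmetric A via A = V diag(w) V^T  =>  f(A) = V diag(f(w)) V^T.
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.T

n = 30
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # banded, SPD
F = matrix_function(A, lambda x: 1.0 / x)                # here f(A) = inv(A)
row = np.abs(F[0])   # entries decay roughly like r**j with r = 2 - sqrt(3)
```

The same experiment with `np.exp` in place of `lambda x: 1.0 / x` shows the generalization this paper proves: the exponential off-diagonal decay is a property of smooth functions of banded matrices, not of the inverse alone.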
Approximate Inverse Techniques for Block-Partitioned Matrices
SIAM J. Sci. Comput., 1995
Abstract

Cited by 43 (12 self)
This paper proposes some preconditioning options when the system matrix is in block-partitioned form. This form may arise naturally, for example from the incompressible Navier–Stokes equations, or may be imposed after a domain decomposition reordering. Approximate inverse techniques are used to generate sparse approximate solutions whenever these are needed in forming the preconditioner. The storage requirements for these preconditioners may be much less than for ILU preconditioners for tough, large-scale CFD problems. The numerical experiments reported show that these preconditioners can help us solve difficult linear systems whose coefficient matrices are highly indefinite. 1 Introduction. Consider the block partitioning of a matrix A, in the form A = [B F; E C] (1), where the blocking naturally occurs due to the ordering of the equations and the variables. Matrices of this form arise in many applications, such as in the incompressible Navier–Stokes equations, where the sc...
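Block elimination on a 2x2 partition [B F; E C] reduces the solve to the Schur complement S = C - E B^{-1} F. In the preconditioners proposed here, the exact B^{-1} (and the solve with S) would be replaced by sparse approximate inverses; the exact dense version below is a sketch of the structure only, with an illustrative function name:

```python
import numpy as np

def block_solve(B, F, E, C, f, g):
    # Solve [[B, F], [E, C]] @ [x; y] = [f; g] by block elimination.
    # In the block-preconditioner setting, Binv would be a *sparse
    # approximate* inverse of B, turning this exact solve into a
    # preconditioner application.
    Binv = np.linalg.inv(B)
    S = C - E @ Binv @ F                        # Schur complement
    y = np.linalg.solve(S, g - E @ (Binv @ f))  # reduced system
    x = Binv @ (f - F @ y)                      # back-substitution
    return x, y
```

The storage advantage comes from never forming ILU factors of the whole matrix: only a sparse approximation to B^{-1} and a (sparsified) Schur complement are kept.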
Approximation with Kronecker Products
Linear Algebra for Large Scale and Real Time Applications, 1993
Abstract

Cited by 41 (1 self)
Let A be an m-by-n matrix with m = m1 m2 and n = n1 n2. We consider the problem of finding B ∈ R^(m1×n1) and C ∈ R^(m2×n2) so that ‖A − B⊗C‖_F is minimized. This problem can be solved by computing the largest singular value and associated singular vectors of a permuted version of A. If A is symmetric, definite, nonnegative, or banded, then the minimizing B and C are similarly structured. The idea of using Kronecker product preconditioners is briefly discussed. 1 Introduction. Suppose A ∈ R^(m×n) with m = m1 m2 and n = n1 n2. This paper is about the minimization of φ_A(B, C) = ‖A − B⊗C‖_F² where B ∈ R^(m1×n1), C ∈ R^(m2×n2), and "⊗" denotes the Kronecker product. Our interest in this problem stems from preliminary experience with Kronecker product preconditioners in the conjugate gradient setting. Suppose A ∈ R^(n×n) with n = n1 n2 and that M is the preconditioner. For this solution process...
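The "permuted version of A" is the rearrangement R(A): reshaping each m2×n2 block of A into a row gives ‖A − B⊗C‖_F = ‖R(A) − vec(B) vec(C)^T‖_F, so the minimizing B and C come from the dominant singular triplet of R(A). A NumPy sketch, assuming a row-major vec convention and an illustrative function name:

```python
import numpy as np

def nearest_kronecker(A, m1, n1, m2, n2):
    # Minimize ||A - kron(B, C)||_F by the SVD of the rearrangement R(A):
    # row (i1, j1) of R is the m2-by-n2 block of A at block position (i1, j1),
    # flattened, so a rank-1 R corresponds exactly to A = kron(B, C).
    R = np.zeros((m1 * n1, m2 * n2))
    for i1 in range(m1):
        for j1 in range(n1):
            block = A[i1 * m2:(i1 + 1) * m2, j1 * n2:(j1 + 1) * n2]
            R[i1 * n1 + j1] = block.ravel()
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)   # best rank-1 part of R
    C = np.sqrt(s[0]) * Vt[0].reshape(m2, n2)
    return B, C
```

Since R(A) is only a rearrangement of A's entries, structure in A (symmetry, bandedness, nonnegativity) survives in the factors, which is the structural result the abstract mentions; as a preconditioner, kron(B, C) can be applied cheaply via solves with B and C separately.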