Results 1–10 of 44
Preconditioning techniques for large linear systems: A survey
J. Comput. Phys., 2002
Abstract
Cited by 102 (4 self)
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
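As a concrete illustration of the algebraic preconditioning the survey covers, the following sketch applies an incomplete LU factorization as a preconditioner for GMRES using SciPy. The matrix is an illustrative tridiagonal example, not one from the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Nonsymmetric tridiagonal test matrix (illustrative choice)
A = sp.diags([-1.2 * np.ones(n - 1), 2.0 * np.ones(n), -0.8 * np.ones(n - 1)],
             [-1, 0, 1], format="csc")
b = np.ones(n)

# Incomplete LU factorization of A, wrapped as a preconditioner M ~ A
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)
residual = np.linalg.norm(b - A @ x)
```

With a good preconditioner the Krylov iteration converges in a handful of steps; dropping `M` from the `gmres` call shows the difference immediately.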
Robust approximate inverse preconditioning for the conjugate gradient method
SIAM J. Sci. Comput., 2000
Abstract
Cited by 48 (11 self)
We present a variant of the AINV factorized sparse approximate inverse algorithm which is applicable to any symmetric positive definite matrix. The new preconditioner is breakdown-free and, when used in conjunction with the conjugate gradient method, results in a reliable solver for highly ill-conditioned linear systems. We also investigate an alternative approach to a stable approximate inverse algorithm, based on the idea of diagonally compensated reduction of matrix entries. The results of numerical tests on challenging linear systems arising from finite element modeling of elasticity and diffusion problems are presented.
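The idea behind a factorized approximate inverse can be sketched in a few lines: A-orthogonalize the unit basis vectors and drop small entries, giving inv(A) ~ Z diag(1/d) Z^T. This is a dense toy version (the real AINV works on sparse data structures and includes the robustness modifications the paper describes); matrix and drop tolerance are illustrative.

```python
import numpy as np
import scipy.sparse.linalg as spla

def ainv_spd(A, drop_tol=0.05):
    """Dense sketch of a factorized approximate inverse for SPD A:
    A-orthogonalize the unit basis so that Z^T A Z ~ D (diagonal),
    giving inv(A) ~ Z @ diag(1/d) @ Z^T. Dropping small entries is
    what keeps Z sparse in a real AINV code."""
    n = A.shape[0]
    Z = np.eye(n)
    d = np.zeros(n)
    for i in range(n):
        for j in range(i):
            Z[:, i] -= ((Z[:, j] @ (A @ Z[:, i])) / d[j]) * Z[:, j]
        Z[np.abs(Z[:, i]) < drop_tol, i] = 0.0  # dropping step
        d[i] = Z[:, i] @ (A @ Z[:, i])          # > 0: z_i != 0 and A is SPD
    return Z, d

n = 60
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # SPD 1-D Laplacian
Z, d = ainv_spd(A)
M = spla.LinearOperator((n, n), matvec=lambda v: Z @ ((Z.T @ v) / d))
b = np.ones(n)
x, info = spla.cg(A, b, M=M)
residual = np.linalg.norm(b - A @ x)
```

Because M is applied by two sparse matrix-vector products, this kind of preconditioner parallelizes much more easily than a triangular solve.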
Preconditioning highly indefinite and nonsymmetric matrices
SIAM J. Sci. Comput., 2000
Abstract
Cited by 40 (4 self)
Standard preconditioners, like incomplete factorizations, perform well when the coefficient matrix is diagonally dominant, but often fail on general sparse matrices. We experiment with nonsymmetric permutations and scalings aimed at placing large entries on the diagonal in the context of preconditioning for general sparse matrices. The permutations and scalings are those developed by Olschowka and Neumaier [Linear Algebra Appl., 240 (1996), pp. 131–151] and by Duff and
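A simpler relative of these weighted matchings is available in SciPy: `maximum_bipartite_matching` finds a column permutation giving a structurally zero-free diagonal (it ignores entry magnitudes, unlike the scalings the abstract refers to). A minimal sketch, with an illustrative matrix:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import maximum_bipartite_matching

# A structurally nonsingular matrix whose original diagonal contains zeros
A = sp.csr_matrix(np.array([[0.0, 3.0, 0.0],
                            [2.0, 0.0, 1.0],
                            [0.0, 4.0, 5.0]]))

# perm[i] is the column matched to row i; reordering the columns by perm
# places a nonzero in every diagonal position
perm = maximum_bipartite_matching(A, perm_type='column')
B = A[:, perm].toarray()
diag_is_zero_free = bool(np.all(np.diag(B) != 0))
```

Incomplete factorizations applied to the permuted matrix no longer encounter structurally zero pivots, which is the first step toward the robustness the paper studies.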
Encapsulating Multiple Communication-Cost Metrics in Partitioning Sparse Rectangular Matrices for Parallel Matrix-Vector Multiplies
Abstract
Cited by 35 (22 self)
This paper addresses the problem of one-dimensional partitioning of structurally unsymmetric square and rectangular sparse matrices for parallel matrix-vector and matrix-transpose-vector multiplies. The objective is to minimize the communication cost while maintaining the balance on computational loads of processors. Most of the existing partitioning models consider only the total message volume, hoping that minimizing this communication-cost metric is likely to reduce other metrics. However, the total message latency (startup time) may be more important than the total message volume. Furthermore, the maximum message volume and latency handled by a single processor are also important metrics. We propose a two-phase approach that encapsulates all four of these communication-cost metrics. The objective in the first phase is to minimize the total message volume while maintaining the computational-load balance. The objective in the second phase is to encapsulate the remaining three communication-cost metrics. We propose communication-hypergraph and partitioning models for the second phase. We then present several methods for partitioning communication hypergraphs. Experiments on a wide range of test matrices show that the proposed approach yields very effective partitioning results. A parallel implementation on a PC cluster verifies that the theoretical improvements shown by partitioning results hold in practice.
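The total-message-volume metric of a rowwise partition is easy to state concretely: in the column-net model, each column whose rows are spread over several parts forces the owner of the corresponding x entry to send it to every other such part. A small sketch under a simplifying assumption (square matrix, x and y partitioned conformally with the rows; the example matrix and partition are illustrative):

```python
import numpy as np
import scipy.sparse as sp

def total_message_volume(A, row_part):
    """Total communication volume of row-parallel y = A*x when row i of A,
    x_i and y_i all live on part row_part[i]: for each column j (a
    "column net"), every part other than x_j's owner whose rows touch
    column j must receive x_j once (connectivity minus one)."""
    A = sp.csc_matrix(A)
    volume = 0
    for j in range(A.shape[1]):
        rows = A.indices[A.indptr[j]:A.indptr[j + 1]]
        parts = {row_part[r] for r in rows}
        parts.add(row_part[j])   # x_j's owner already holds x_j
        volume += len(parts) - 1
    return volume

# 4x4 example split between two parts (illustrative)
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1]], dtype=float)
row_part = [0, 0, 1, 1]
vol = total_message_volume(A, row_part)
```

The paper's point is that this single number hides latency and per-processor maxima, which its second phase addresses separately.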
Bounds for the entries of matrix functions with applications to preconditioning
BIT, 1999
Abstract
Cited by 33 (14 self)
Let A be a symmetric matrix and let f be a smooth function defined on an interval containing the spectrum of A. Generalizing a well-known result of Demko, Moss and Smith on the decay of the inverse, we show that when A is banded, the entries of f(A) are bounded in an exponentially decaying manner away from the main diagonal. Bounds obtained by representing the entries of f(A) in terms of Riemann–Stieltjes integrals and by approximating such integrals by Gaussian quadrature rules are also considered. Applications of these bounds to preconditioning are suggested and illustrated by a few numerical examples.
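The decay phenomenon is easy to observe numerically. Taking f = exp and a tridiagonal A (an illustrative choice, not an example from the paper), the entries of f(A) shrink rapidly away from the diagonal even though f(A) is mathematically dense:

```python
import numpy as np
from scipy.linalg import expm

n = 30
# Banded (tridiagonal) symmetric matrix
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# exp(A) is dense, but its entries decay exponentially away from the
# diagonal because A is banded -- the behavior the paper's bounds capture
F = expm(A)
first_row = np.abs(F[0, :])
decay_ratio = first_row[10] / first_row[0]
```

It is this decay that justifies approximating f(A), or the inverse in particular, by a sparse (banded) matrix when building preconditioners.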
Orderings for factorized sparse approximate inverse preconditioners
SIAM J. Sci. Comput., 2000
Abstract
Cited by 24 (9 self)
The influence of reorderings on the performance of factorized sparse approximate inverse preconditioners is considered. Some theoretical results on the effect of orderings on the fill-in and decay behavior of the inverse factors of a sparse matrix are presented. It is shown experimentally that certain reorderings, like minimum degree and nested dissection, can be very beneficial. The benefit consists of a reduction in the storage and time required for constructing the preconditioner, and of faster convergence of the preconditioned iteration in many cases of practical interest.
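SciPy exposes one classical fill-reducing reordering, reverse Cuthill-McKee (a bandwidth reducer rather than the minimum degree or nested dissection orderings studied in the paper). The sketch below scrambles a grid Laplacian with a fixed permutation and shows RCM recovering a much smaller bandwidth; the matrix and scramble are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    r, c = A.nonzero()
    return int(np.max(np.abs(r - c)))

# 10x10 grid Laplacian (bandwidth 10 in its natural ordering), then
# deliberately scrambled by the relabeling i -> 37*i mod 100
m = 10
T = sp.diags([-np.ones(m - 1), 2 * np.ones(m), -np.ones(m - 1)], [-1, 0, 1])
A = (sp.kron(sp.eye(m), T) + sp.kron(T, sp.eye(m))).tocsr()
scramble = np.argsort((37 * np.arange(m * m)) % (m * m))
A_bad = A[scramble, :][:, scramble]

perm = reverse_cuthill_mckee(A_bad, symmetric_mode=True)
A_rcm = A_bad[perm, :][:, perm]
bw_before, bw_after = bandwidth(A_bad), bandwidth(A_rcm)
```

A smaller bandwidth concentrates the large entries of the inverse factors near the diagonal, which is exactly why ordering matters for approximate inverse preconditioners.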
Preconditioned All-at-Once Methods for Large, Sparse Parameter Estimation Problems
2000
Abstract
Cited by 24 (4 self)
The problem of recovering a parameter function based on measurements of solutions of a system of partial differential equations in several space variables leads to a number of computational challenges. Upon discretization of a regularized formulation, a large, sparse constrained optimization problem is obtained. Typically in the literature, the constraints are eliminated and the resulting unconstrained formulation is solved by some variant of Newton's method, usually the Gauss-Newton method. A preconditioned conjugate gradient algorithm is applied at each iteration for the resulting reduced Hessian system. In this paper we apply instead a preconditioned Krylov method directly to the KKT system arising from a Newton-type method for the constrained formulation (an "all-at-once" approach). A variant of symmetric QMR is employed, and an effective preconditioner is obtained by solving the reduced Hessian system approximately. Since the reduced Hessian system presents significa...
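The structure of an all-at-once solve can be sketched on a generic equality-constrained quadratic problem (not the paper's PDE system): assemble the symmetric indefinite KKT matrix and hand it to a symmetric Krylov solver. MINRES stands in here for the symmetric QMR variant the paper employs; blocks and sizes are illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
n, m = 40, 10
H = sp.eye(n) * 2.0                               # SPD Hessian block
C = sp.csr_matrix(rng.standard_normal((m, n)))    # constraint Jacobian
g = rng.standard_normal(n)
c = rng.standard_normal(m)

# Symmetric indefinite KKT system [[H, C^T], [C, 0]] [x; lam] = [-g; c],
# solved directly instead of eliminating the constraints
K = sp.bmat([[H, C.T], [C, None]], format="csr")
rhs = np.concatenate([-g, c])

sol, info = spla.minres(K, rhs)
x, lam = sol[:n], sol[n:]
kkt_residual = np.linalg.norm(K @ sol - rhs)
constraint_violation = np.linalg.norm(C @ x - c)
```

The payoff of the all-at-once formulation is that the expensive reduced Hessian only has to be applied approximately, inside the preconditioner, rather than solved accurately at every outer iteration.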
A Two-Level Parallel Preconditioner Based on Sparse Approximate Inverses
1999
Abstract
Cited by 21 (7 self)
We introduce a novel strategy for parallel preconditioning of large-scale linear systems by means of a two-level factorized sparse approximate inverse algorithm. Using graph partitioning and incomplete biconjugation, we are able to obtain a highly parallel preconditioner. The algorithm has been implemented using MPI on an SGI Origin 2000 computer at Los Alamos National Laboratory and is currently being used to solve unstructured linear systems with up to a few million unknowns from a variety of applications. The numerical experiments demonstrate the excellent scalability of the algorithm for sufficiently large problems.
Approximate inverse preconditioning in the parallel solution of sparse eigenproblems
Abstract
Cited by 19 (9 self)
A preconditioned scheme for solving sparse symmetric eigenproblems is proposed. The solution strategy relies upon the DACG algorithm, which is a preconditioned conjugate gradient algorithm for minimizing the Rayleigh quotient. A comparison with the well-established ARPACK code shows that when a small number of the leftmost eigenpairs is to be computed, DACG is more efficient than ARPACK. Effective convergence acceleration of DACG is shown to be performed by a suitable approximate inverse preconditioner (AINV). The performance of such a preconditioner is shown to be safe, i.e., not highly dependent on a drop tolerance parameter. On sequential machines, AINV preconditioning proves a practicable alternative to the effective incomplete Cholesky factorization, and is more efficient than block Jacobi. Due to its parallelizability, the AINV preconditioner is exploited for a parallel implementation of the DACG algorithm. Numerical tests account for the high degree of parallelization attainable on a Cray T3E machine and confirm the satisfactory scalability properties of the algorithm. A final comparison with PARPACK shows the (relative) higher efficiency of AINV-DACG. Key words: generalized eigenproblem, sparse approximate inverse, parallel algorithm.
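The core idea, minimizing the Rayleigh quotient with a preconditioned gradient method to reach the leftmost eigenpair, can be sketched as follows. This toy uses fixed-step steepest descent and the exact inverse as a stand-in preconditioner; DACG itself uses conjugate directions, an exact line search, and a sparse AINV factor. Matrix, step size, and iteration count are illustrative.

```python
import numpy as np

def rayleigh_quotient_descent(A, apply_minv, x0, iters=500):
    """Toy preconditioned gradient descent on the Rayleigh quotient
    q(x) = (x^T A x)/(x^T x); a structural sketch of DACG's strategy."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        q = x @ (A @ x)
        grad = 2.0 * (A @ x - q * x)    # gradient restricted to the sphere
        x = x - 0.1 * apply_minv(grad)  # small fixed step (illustrative)
        x /= np.linalg.norm(x)
    q = x @ (A @ x)
    return q, x

n = 30
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # SPD Laplacian
rng = np.random.default_rng(0)
# Stand-in for an AINV preconditioner: the exact inverse, applied densely
apply_minv = lambda v: np.linalg.solve(A, v)
q_min, _ = rayleigh_quotient_descent(A, apply_minv, rng.standard_normal(n))
lam_true = np.linalg.eigvalsh(A)[0]
```

Because the minimum of the Rayleigh quotient over the unit sphere is the smallest eigenvalue, the iteration drives q toward the leftmost eigenpair; the preconditioner determines how fast.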
A Parallel Solver for LargeScale Markov Chains
Appl. Numer. Math., 2002
Abstract
Cited by 17 (7 self)
We consider the parallel computation of the stationary probability distribution vector of ergodic Markov chains with large state spaces by preconditioned Krylov subspace methods. The parallel preconditioner is obtained as an explicit approximation, in factorized form, of a particular generalized inverse of the infinitesimal generator of the Markov process. Conditions that guarantee the existence of the preconditioner are given, and the results of a parallel implementation are presented.
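The defining equations can be illustrated on a tiny example: the stationary vector pi satisfies pi Q = 0 with pi summing to 1, where Q is the infinitesimal generator. A dense direct sketch (the paper, by contrast, targets huge state spaces with preconditioned Krylov methods); the generator below is an illustrative made-up chain:

```python
import numpy as np

# Tiny CTMC infinitesimal generator (each row sums to zero); illustrative
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.2, -0.6,  0.4],
              [ 0.1,  0.3, -0.4]])

# Stationary vector: pi Q = 0 with sum(pi) = 1. Since Q^T is singular,
# replace one redundant equation by the normalization constraint.
Aq = Q.T.copy()
Aq[-1, :] = 1.0
rhs = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(Aq, rhs)
residual = np.linalg.norm(pi @ Q)
```

For large ergodic chains this singular-system structure is exactly what makes a generalized inverse of the generator the natural object to approximate when building the preconditioner.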