Results 1–10 of 113
Preconditioning techniques for large linear systems: A survey
 J. Comput. Phys.
, 2002
Abstract

Cited by 105 (5 self)
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
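As a minimal illustration of one technique the survey covers, an incomplete LU factorization used as a preconditioner for a Krylov method, here is a hedged sketch using SciPy; the banded test matrix and the drop tolerance are arbitrary choices for demonstration, not examples from the paper:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Arbitrary nonsymmetric banded test matrix (illustrative, not from the survey).
n = 1024
A = sp.diags(
    [4.0 * np.ones(n), -1.0 * np.ones(n - 1), -1.0 * np.ones(n - 1),
     -1.3 * np.ones(n - 32), -0.5 * np.ones(n - 32)],
    [0, -1, 1, -32, 32], format="csc")
b = np.ones(n)

# Incomplete LU factorization, wrapped as a preconditioner M ~= A^{-1} for GMRES.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))  # info == 0 signals convergence
```

The same `LinearOperator` wrapping works for any of the algebraic preconditioners the survey discusses, since GMRES only needs the action of M on a vector.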
Iterative Solution of Linear Systems
 Acta Numerica
, 1992
Abstract

Cited by 100 (8 self)
this paper is as follows. In Section 2, we present some background material on general Krylov subspace methods, of which CG-type algorithms are a special case. We recall the outstanding properties of CG and discuss the issue of optimal extensions of CG to non-Hermitian matrices. We also review GMRES and related methods, as well as CG-like algorithms for the special case of Hermitian indefinite linear systems. Finally, we briefly discuss the basic idea of preconditioning. In Section 3, we turn to Lanczos-based iterative methods for general non-Hermitian linear systems. First, we consider the nonsymmetric Lanczos process, with particular emphasis on the possible breakdowns and potential instabilities in the classical algorithm. Then we describe recent advances in understanding these problems and overcoming them by using look-ahead techniques. Moreover, we describe the quasi-minimal residual algorithm (QMR) proposed by Freund and Nachtigal (1990), which uses the look-ahead Lanczos process to obtain quasi-optimal approximate solutions. Next, a survey of transpose-free Lanczos-based methods is given. We conclude this section with comments on other related work and some historical remarks. In Section 4, we elaborate on CGNR and CGNE and we point out situations where these approaches are optimal. The general class of Krylov subspace methods also contains parameter-dependent algorithms that, unlike CG-type schemes, require explicit information on the spectrum of the coefficient matrix. In Section 5, we discuss recent insights in obtaining appropriate spectral information for parameter-dependent Krylov subspace methods. After that, ...
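The CGNR approach the abstract mentions, running CG on the normal equations, can be sketched in a few lines of SciPy; the random well-conditioned matrix is an arbitrary illustration, not data from the paper:

```python
import numpy as np
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n = 30
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # arbitrary nonsingular matrix
b = rng.standard_normal(n)

# CGNR: apply CG to the normal equations A^T A x = A^T b.
# A^T A is symmetric positive definite whenever A has full rank,
# so plain CG applies even though A itself is non-Hermitian.
normal_op = spla.LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v))
x, info = spla.cg(normal_op, A.T @ b)
print(info, np.linalg.norm(A @ x - b))  # info == 0 signals convergence
```

The price, as the survey discusses, is that the effective condition number is squared, which is why CGNR is optimal only in special situations.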
The Partition of Unity Finite Element Method: Basic Theory and Applications
, 1996
Abstract

Cited by 97 (6 self)
The paper presents the basic ideas and the mathematical foundation of the partition of unity finite element method (PUFEM). We show how the PUFEM can exploit the structure of the differential equation under consideration to construct effective and robust methods. Although the method and its theory are valid in n dimensions, a detailed and illustrative analysis is given for a one-dimensional model problem. We identify some classes of nonstandard problems that can benefit greatly from the advantages of the PUFEM, and conclude the paper with some open questions concerning implementational aspects of the PUFEM.
A restarted GMRES method augmented with eigenvectors
 SIAM J. Matrix Anal. Appl
, 1995
Abstract

Cited by 77 (9 self)
Abstract. The GMRES method for solving nonsymmetric linear equations is generally used with restarting to reduce storage and orthogonalization costs. Restarting slows down the convergence. However, it is possible to save some important information at the time of the restart. It is proposed that approximate eigenvectors corresponding to a few of the smallest eigenvalues be formed and added to the subspace for GMRES. The convergence can be much faster, and the minimum residual property is retained. Key words. GMRES, conjugate gradient, Krylov subspaces, iterative methods, nonsymmetric systems AMS subject classifications. 65F15, 15A18
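The trade-off the abstract starts from, that restarting reduces storage but slows convergence, is easy to observe with SciPy's restarted GMRES; the 1-D Laplacian and the restart lengths below are illustrative choices, not experiments from the paper:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 40
# 1-D Laplacian: an arbitrary SPD test matrix, not one from the paper.
A = sp.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
             [0, -1, 1], format="csr")
b = np.ones(n)

def inner_iterations(restart):
    """Total inner GMRES iterations (one matrix-vector product each) to converge."""
    count = [0]
    _, info = spla.gmres(A, b, restart=restart, maxiter=2000,
                         callback=lambda pr_norm: count.__setitem__(0, count[0] + 1),
                         callback_type="pr_norm")
    return count[0], info

full, info_full = inner_iterations(restart=n)  # full GMRES: optimal per iteration
short, _ = inner_iterations(restart=5)         # heavy restarting: cheap cycles,
print(full, short)                             # but more total iterations here
```

Augmenting the restart subspace with approximate eigenvectors, as the paper proposes, aims to recover much of the lost convergence without storing the full basis.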
Object-oriented software for quadratic programming
 ACM Transactions on Mathematical Software
, 2001
Abstract

Cited by 60 (2 self)
The object-oriented software package OOQP for solving convex quadratic programming problems (QPs) is described. The primal-dual interior-point algorithms supplied by OOQP are implemented in a way that is largely independent of the problem structure. Users may exploit problem structure by supplying linear algebra, problem data, and variable classes that are customized to their particular applications. The OOQP distribution contains default implementations that solve several important QP problem types, including general sparse and dense QPs, bound-constrained QPs, and QPs arising from support vector machines and Huber regression. The implementations supplied with the OOQP distribution are based on such well-known linear algebra packages as MA27/57, LAPACK, and PETSc. OOQP demonstrates the usefulness of object-oriented design in optimization software development, and establishes standards that can be followed in the design of software packages for other classes of optimization problems. A number of the classes in OOQP may also be reusable directly in other codes.
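The design idea, an algorithm written against an abstract linear-algebra layer so users can swap in structure-exploiting implementations, can be sketched as follows. This is a hypothetical Python analogy with invented class names, not OOQP's actual C++ API:

```python
from abc import ABC, abstractmethod
import numpy as np

class KKTSolver(ABC):
    """Hypothetical abstract linear-algebra layer: the algorithm sees only this."""
    @abstractmethod
    def factor(self, K): ...
    @abstractmethod
    def solve(self, rhs): ...

class DenseKKTSolver(KKTSolver):
    """Default dense implementation; a user could swap in a sparse-backed one."""
    def factor(self, K):
        self._K = np.asarray(K, dtype=float)
    def solve(self, rhs):
        return np.linalg.solve(self._K, rhs)

def newton_step(solver: KKTSolver, K, residual):
    """One structure-independent step: the caller never touches K's storage."""
    solver.factor(K)
    return solver.solve(-residual)

step = newton_step(DenseKKTSolver(),
                   np.array([[4.0, 1.0], [1.0, 3.0]]),
                   np.array([1.0, 2.0]))
print(step)
```

The interior-point iteration calls only `factor` and `solve`, so replacing the dense class with a sparse or PETSc-backed one changes no algorithm code, which is the reuse the paper advocates.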
Orderings for incomplete factorization preconditioning of nonsymmetric problems
 SIAM J. Sci. Comput.
, 1999
Abstract

Cited by 52 (11 self)
Numerical experiments are presented that show the effect of reorderings on the convergence of preconditioned Krylov subspace methods for the solution of nonsymmetric linear systems. The preconditioners used in this study are different variants of incomplete factorizations. It is shown that certain reorderings developed for direct methods, such as reverse Cuthill–McKee, can be very beneficial. The benefit can be seen in the reduction of the number of iterations and also in measuring the deviation of the preconditioned operator from the identity.
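Reverse Cuthill–McKee itself is available in SciPy; a small hedged demo (the scrambled path-graph matrix is an arbitrary example, not one from the paper) shows the bandwidth reduction such a reordering provides:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

n = 100
# Tridiagonal matrix (bandwidth 1), deliberately scrambled by a random ordering.
A = sp.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
             [0, -1, 1], format="csr")
rng = np.random.default_rng(0)
q = rng.permutation(n)
B = A[q][:, q].tocsr()           # scrambled: large bandwidth

perm = reverse_cuthill_mckee(B, symmetric_mode=True)
C = B[perm][:, perm].tocsr()     # RCM-reordered: narrow band again

def bandwidth(M):
    M = M.tocoo()
    return int(np.abs(M.row - M.col).max())

print(bandwidth(B), bandwidth(C))
```

A narrower band limits fill-in, which is one reason such orderings can also help the incomplete factorizations studied in the paper.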
The Partition of Unity Method
 International Journal of Numerical Methods in Engineering
, 1996
Abstract

Cited by 52 (2 self)
A new finite element method is presented that features the ability to include in the finite element space knowledge about the partial differential equation being solved. This new method can therefore be more efficient than the usual finite element methods. An additional feature of the partition-of-unity method is that finite element spaces of any desired regularity can be constructed very easily. This paper includes a convergence proof of this method and illustrates its efficiency by an application to the Helmholtz equation for high wave numbers. The basic estimates for a posteriori error estimation for this new method are also proved. Key words: finite element method, meshless finite element method, finite element methods for highly oscillatory solutions.
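The defining property, local functions that sum to one so that local approximation quality carries over to the global blend, is easy to check for the standard piecewise-linear hat functions in 1-D. A small NumPy sketch (the mesh and the target function are arbitrary illustrations):

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 7)   # arbitrary 1-D mesh
x = np.linspace(0.0, 1.0, 501)

# Hat function phi_i: piecewise linear, equal to 1 at nodes[i], 0 at the others.
phis = [np.interp(x, nodes, np.eye(len(nodes))[i]) for i in range(len(nodes))]

total = sum(phis)
print(np.allclose(total, 1.0))     # partition of unity: sum_i phi_i(x) == 1

# Partition-of-unity idea: blend local approximations v_i with the phis.
# If every patch reproduced a function g exactly, the global blend would too,
# precisely because the phis sum to one.
g = np.sin(8 * x)
u = sum(phi * g for phi in phis)
print(np.allclose(u, g))
```

Replacing the local factor `g` with patch-wise plane waves is, roughly, how such methods target the high-wave-number Helmholtz problems mentioned in the abstract.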
Truncation Strategies For Optimal Krylov Subspace Methods
 SIAM J. Numer. Anal
, 1999
Abstract

Cited by 45 (6 self)
Optimal Krylov subspace methods like GMRES and GCR have to compute an orthogonal basis for the entire Krylov subspace to compute the minimal residual approximation to the solution. Therefore, when the number of iterations becomes large, the amount of work and the storage requirements become excessive. In practice one has to limit the resources. The most obvious ways to do this are to restart GMRES after some number of iterations and to keep only some number of the most recent vectors in GCR. This may lead to very poor convergence and even stagnation. Therefore, we describe a method that reveals which subspaces of the Krylov space were important for convergence thus far, and exactly how important they are. This information is then used to select which subspace to keep for orthogonalizing future search directions. Numerical results indicate this to be a very effective strategy. Key words. GMRES, GCR, restart, truncation, Krylov subspace methods, iterative methods, non-Hermitian linear systems AMS subject classifications. Primary, 65F10; Secondary, 15A18, 65N22
Preconditioning highly indefinite and nonsymmetric matrices
 SIAM J. Sci. Comput.
, 2000
Abstract

Cited by 43 (4 self)
Standard preconditioners, like incomplete factorizations, perform well when the coefficient matrix is diagonally dominant, but often fail on general sparse matrices. We experiment with nonsymmetric permutations and scalings aimed at placing large entries on the diagonal in the context of preconditioning for general sparse matrices. The permutations and scalings are those developed by Olschowka and Neumaier [Linear Algebra Appl., 240 (1996), pp. 131–151] and by Duff and ...
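The core step, a permutation that moves large entries onto the diagonal, can be reproduced with SciPy's weighted bipartite matching. This is a rough stand-in for the MC64-style matchings the paper builds on, and the 3×3 matrix is a toy example, not data from the paper:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import min_weight_full_bipartite_matching

A = np.array([[0.1, 5.0, 0.0],
              [4.0, 0.2, 0.0],
              [0.0, 0.3, 2.0]])

# Edge weights w_ij = const - log|a_ij| (shifted to be >= 0), so a minimum-weight
# perfect matching maximizes the product of the matched magnitudes |a_ij|.
C = sp.csr_matrix(A)
C.data = (np.log(np.abs(C.data)).max() + 1.0) - np.log(np.abs(C.data))
row, col = min_weight_full_bipartite_matching(C)

p = np.empty(3, dtype=int)
p[col] = row          # row permutation placing the matched entries on the diagonal
B = A[p]
print(np.diag(B))     # the large entries 4, 5, 2 now sit on the diagonal
```

After such a permutation (and a complementary scaling), an incomplete factorization sees a much more diagonally dominant matrix, which is the effect the paper studies experimentally.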
Solution of Shifted Linear Systems by Quasi-Minimal Residual Iterations
 in Numerical Linear Algebra
, 1993
Abstract

Cited by 41 (4 self)
High-order implicit methods for solving time-dependent partial differential equations and frequency response computations in control theory give rise to shifted systems of linear equations. Such systems have identical right-hand sides, and their coefficient matrices differ from each other only by scalar multiples of the identity matrix. This paper explores the use of two quasi-minimal residual iterations, the QMR and the TFQMR algorithm, for the solution of such shifted linear systems. It is shown that both algorithms can exploit the special structure, and that, for any family of shifted linear systems, the number of matrix-vector products and the number of inner products is the same as for a single linear system. Convergence results for the QMR and TFQMR algorithms are presented.
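The structural fact this exploits, that A and A + σI generate the same Krylov subspaces from a common right-hand side, is easy to verify numerically; the random matrix and shift below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 4
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
sigma = 2.5

def krylov_basis(M, v, k):
    """Columns v, Mv, ..., M^{k-1} v spanning the k-th Krylov subspace."""
    cols = [v]
    for _ in range(k - 1):
        cols.append(M @ cols[-1])
    return np.column_stack(cols)

K_A = krylov_basis(A, b, k)
K_s = krylov_basis(A + sigma * np.eye(n), b, k)

# Same subspace: stacking both bases does not increase the rank,
# since (A + sigma*I)^j b is a linear combination of b, Ab, ..., A^j b.
print(np.linalg.matrix_rank(K_A),
      np.linalg.matrix_rank(np.hstack([K_A, K_s])))
```

Because the subspaces coincide, one Lanczos/QMR recurrence driven by matrix-vector products with A can serve every shift at once, which is why the per-shift matvec count matches that of a single system.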