Results 1–10 of 18
Recent computational developments in Krylov subspace methods for linear systems
Numer. Linear Algebra Appl., 2007
Cited by 50 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
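Of the method families the survey reviews, restarting is the simplest to make concrete. The following is a minimal GMRES(m) sketch in Python/NumPy (the function name and parameters are illustrative, not taken from the survey): an m-step Arnoldi basis is built, the small least-squares problem is solved, and the iteration restarts from the updated iterate.

```python
import numpy as np

def gmres_restarted(A, b, m=20, tol=1e-8, max_restarts=50):
    """Minimal GMRES(m) sketch: Arnoldi to dimension m, solve the small
    least-squares problem, restart from the new iterate."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol * np.linalg.norm(b):
            break
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = r / beta
        k = m
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):              # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:             # happy breakdown
                k = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        # minimize || beta*e1 - H y || over the Krylov subspace
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + V[:, :k] @ y
    return x
```

Restarting caps memory and orthogonalization cost at m vectors per cycle; the augmented and deflated variants the survey describes exist precisely to recover information that plain restarting discards.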
Theory of inexact Krylov subspace methods and applications to scientific computing
2002
Cited by 48 (6 self)
We provide a general framework for the understanding of inexact Krylov subspace methods for the solution of symmetric and nonsymmetric linear systems of equations, as well as for certain eigenvalue calculations. This framework allows us to explain the empirical results reported in a series of CERFACS technical reports by Bouras, Frayssé, and Giraud in 2000. Furthermore, assuming exact arithmetic, our analysis can be used to produce computable criteria to bound the inexactness of the matrix-vector multiplication in such a way as to maintain the convergence of the Krylov subspace method. The theory developed is applied to several problems including the solution of Schur complement systems, linear systems which depend on a parameter, and eigenvalue problems. Numerical experiments for some of these scientific applications are reported.
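The counterintuitive phenomenon this theory explains can be seen in a toy sketch (function name and parameters are illustrative): a CG iteration whose matrix-vector product is deliberately perturbed by noise of size roughly eta / ||r_k||, i.e. the product is allowed to become *less* accurate as the residual shrinks, yet the method still converges.

```python
import numpy as np

def relaxed_cg(A, b, eta=1e-12, tol=1e-6, maxit=300, seed=0):
    """CG with a 'relaxed' matvec: each product A@p is perturbed by
    noise of size ~ eta / ||r_k||, standing in for a cheap/approximate
    matvec whose tolerance is loosened as convergence proceeds."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        # perturbation grows as the residual norm sqrt(rs) shrinks
        Ap = A @ p + (eta / np.sqrt(rs)) * rng.standard_normal(len(b))
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

With a sufficiently small eta the true residual still reaches roughly the requested tolerance, which is the behavior the paper's computable bounds predict.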
Using mixed precision for sparse matrix computations to enhance the performance while achieving 64-bit accuracy
ACM Trans. Math. Softw.
Cited by 12 (1 self)
By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. These ideas can be applied to sparse multifrontal and supernodal direct techniques and sparse iterative techniques such as Krylov subspace methods. The approach presented here can apply not only to conventional processors but also to exotic technologies such as …
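The core pattern is mixed-precision iterative refinement. A minimal dense sketch (a stand-in for the sparse solvers the paper targets; names are illustrative): the expensive solve runs in 32-bit, while residuals and the accumulated solution stay in 64-bit, recovering 64-bit accuracy for a well-conditioned matrix.

```python
import numpy as np

def mixed_refine(A, b, sweeps=4):
    """Mixed-precision iterative refinement sketch: 32-bit solves,
    64-bit residuals and solution updates.  (A production code would
    factor A32 once and reuse the factors for every correction.)"""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(sweeps):
        r = b - A @ x                                    # 64-bit residual
        d = np.linalg.solve(A32, r.astype(np.float32))   # 32-bit correction
        x += d.astype(np.float64)
    return x
```

Each sweep multiplies the error by roughly the 32-bit solve's relative accuracy, so a handful of cheap corrections reaches the 64-bit rounding floor.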
Iterative solution of augmented systems arising in interior methods
SIAM Journal on Optimization, 2007
Cited by 9 (1 self)
Iterative methods are proposed for certain augmented systems of linear equations that arise in interior methods for general nonlinear optimization. Interior methods define a sequence of KKT equations that represent the symmetrized (but indefinite) equations associated with Newton’s method for a point satisfying the perturbed optimality conditions. These equations involve both the primal and dual variables and become increasingly ill-conditioned as the optimization proceeds. In this context, an iterative linear solver must not only handle the ill-conditioning but also detect the occurrence of KKT matrices with the wrong matrix inertia. A one-parameter family of equivalent linear equations is formulated that includes the KKT system as a special case. The discussion focuses on a particular system from this family, known as the “doubly augmented system,” that is positive definite with respect to both the primal and dual variables. This property means that a standard preconditioned conjugate-gradient method involving both primal and dual variables will either terminate successfully or detect if the KKT matrix has the wrong inertia. Constraint preconditioning is a well-known technique for preconditioning the conjugate-gradient method on augmented systems. A family of constraint preconditioners is proposed that provably eliminates the inherent ill-conditioning in the augmented system. A considerable benefit of combining constraint preconditioning with the doubly augmented system is that the preconditioner need not be applied exactly. Two particular “active-set” constraint preconditioners are formulated that involve only a subset of the rows of the augmented system and thereby may be applied with considerably less work. Finally, some numerical experiments illustrate the numerical performance of the proposed preconditioners and highlight some theoretical properties of the preconditioned matrices.
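The doubly augmented idea can be checked numerically. The sketch below builds a small regularized KKT system [H J^T; J -D] and one common form of its doubly augmented counterpart (sign conventions vary between papers, so take the exact form as an assumption); it verifies that the primal solution is unchanged and that the augmented matrix is symmetric positive definite, which is what lets CG act on the full primal-dual system.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
M = rng.standard_normal((n, n))
H = M @ M.T + np.eye(n)                  # assume H symmetric positive definite
J = rng.standard_normal((m, n))
D = np.diag(rng.uniform(0.5, 2.0, m))    # SPD regularization block
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# Indefinite KKT system: [H J^T; J -D] [x; y] = [f; g]
K = np.block([[H, J.T], [J, -D]])

# One "doubly augmented" reformulation (illustrative sign convention):
#   [H + 2 J^T D^{-1} J   J^T] [x]   [f + 2 J^T D^{-1} g]
#   [J                     D ] [w] = [g]
# Same x as the KKT system, with w = -y, and the matrix is SPD.
Dinv = np.linalg.inv(D)
Kda = np.block([[H + 2 * J.T @ Dinv @ J, J.T], [J, D]])
rhs = np.concatenate([f + 2 * J.T @ Dinv @ g, g])

xy = np.linalg.solve(K, np.concatenate([f, g]))
xw = np.linalg.solve(Kda, rhs)

np.linalg.cholesky(Kda)                  # succeeds only because Kda is SPD
assert np.allclose(xw[:n], xy[:n])       # identical primal solution
assert np.allclose(xw[n:], -xy[n:])      # dual part flips sign
```

Positive definiteness follows from completing the square: the quadratic form equals x'Hx + u'D^{-1}u + ||D^{-1/2}u + D^{1/2}w||^2 with u = Jx, which is positive whenever H is.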
CHEBYSHEV SEMI-ITERATION IN PRECONDITIONING FOR PROBLEMS INCLUDING THE MASS MATRIX
Cited by 8 (3 self)
Dedicated to Víctor Pereyra on the occasion of his 70th birthday.
It is widely believed that Krylov subspace iterative methods are better than Chebyshev semi-iterative methods. When the solution of a linear system with a symmetric and positive definite coefficient matrix is required, the Conjugate Gradient method will compute the optimal approximate solution from the appropriate Krylov subspace, that is, it will implicitly compute the optimal polynomial. Hence a semi-iterative method, which requires eigenvalue bounds and computes an explicit polynomial, must, for just a little less computational work, give an inferior result. In this manuscript, we identify a specific situation in the context of preconditioning, where finite element mass matrices arise as certain blocks in a larger matrix problem, in which the Chebyshev semi-iterative method is the method of choice, since it has properties which make it superior to the Conjugate Gradient method. In particular, the Chebyshev method gives preconditioners which are linear operators, whereas corresponding use of conjugate gradients would be nonlinear. We give numerical results for two example problems, the Stokes problem and a PDE control problem, where such nonlinearity causes poor convergence.
Key words. Iteration, linear systems, preconditioning, finite elements, mass matrix
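The linearity property is easy to demonstrate. Below is a standard three-term Chebyshev iteration sketch (names illustrative; assumes A is SPD with spectrum inside [lmin, lmax]): every step coefficient depends only on the eigenvalue bounds, never on b, so the map b → x is a fixed polynomial in A, i.e. a linear operator, unlike CG, whose coefficients depend on b.

```python
import numpy as np

def chebyshev(A, b, lmin, lmax, steps=25):
    """Chebyshev semi-iteration sketch for A x = b, assuming A is SPD
    with spectrum in [lmin, lmax].  All coefficients below are fixed by
    (lmin, lmax, step), so b -> x is linear."""
    theta = (lmax + lmin) / 2.0
    delta = (lmax - lmin) / 2.0
    sigma = theta / delta
    rho = 1.0 / sigma
    x = np.zeros_like(b)
    r = b.copy()
    d = r / theta
    for _ in range(steps):
        x = x + d
        r = r - A @ d
        rho_next = 1.0 / (2.0 * sigma - rho)
        d = rho_next * rho * d + (2.0 * rho_next / delta) * r
        rho = rho_next
    return x
```

Because the operator is linear, a fixed number of Chebyshev steps can be used as an inner preconditioner without forcing a flexible outer method, which is the paper's point.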
Fault-tolerant iterative methods via selective reliability
2011
Cited by 7 (0 self)
Current iterative methods for solving linear equations assume reliability of data (no “bit flips”) and arithmetic (correct up to rounding error). If faults occur, the solver usually either aborts, or computes the wrong answer without indication. System reliability guarantees consume energy or reduce performance. As processor counts continue to grow, these costs will become unbearable. Instead, we show that if the system lets applications apply reliability selectively, we can develop iterations that compute the right answer despite faults. These “fault-tolerant” methods either converge eventually, at a rate that degrades gracefully with increased fault rate, or return a clear failure indication in the rare case that they cannot converge. If faults are infrequent, these algorithms spend most of their time in unreliable mode. This can save energy, improve performance, and avoid restarting from checkpoints. We illustrate convergence for a sample algorithm, Fault-Tolerant GMRES, for representative test problems and fault rates.
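A toy version of the selective-reliability pattern (not the paper's FT-GMRES; all names and the simulated fault are illustrative): an inner solver runs in "unreliable" mode and may silently return garbage, while a cheap outer loop runs reliably and only accepts a correction after checking that it actually reduced the residual.

```python
import numpy as np

def unreliable_solve(A, r, fault_at, call_count):
    """Inner correction in 'unreliable' mode: a few diagonal (Jacobi-style)
    sweeps, with a simulated silent fault injected on one call."""
    d = np.diag(A)
    z = r / d
    for _ in range(10):
        z = z + (r - A @ z) / d
    if call_count == fault_at:        # simulated bit flip: a huge wrong entry
        z = z.copy()
        z[0] = 1e8
    return z

def ft_refinement(A, b, tol=1e-10, maxit=100, fault_at=3):
    """Reliable outer loop: residuals and the acceptance test run
    reliably, so a faulty inner correction is simply discarded and the
    iteration continues (toy sketch of selective reliability)."""
    x = np.zeros_like(b)
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = unreliable_solve(A, r, fault_at, k)
        x_new = x + z
        # reliable acceptance test: reject corrections that grow the residual
        if np.linalg.norm(b - A @ x_new) < np.linalg.norm(r):
            x = x_new
    return x
```

Despite the injected fault, the iteration converges; only the small outer loop pays for reliability, which is the energy/performance argument in the abstract.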
Accelerating Scientific Computations with Mixed Precision Algorithms
2008
Cited by 4 (0 self)
On modern architectures, 32-bit floating point operations are often at least twice as fast as 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphics Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.
A flexible Generalized Conjugate Residual method with inner orthogonalization and deflated restarting
2010
Cited by 3 (2 self)
This work is concerned with the development and study of a minimum residual norm subspace method based on the Generalized Conjugate Residual method with inner Orthogonalization (GCRO) that allows flexible preconditioning and deflated restarting for the solution of nonsymmetric or non-Hermitian linear systems. First we recall the main features of the Flexible Generalized Minimum Residual method with deflated restarting (FGMRES-DR), a recently proposed algorithm of the same family but based on the GMRES method. Next we introduce the new inner-outer subspace method, named FGCRO-DR. A theoretical comparison of both algorithms is then made in the case of flexible preconditioning. It is proved that FGCRO-DR and FGMRES-DR are algebraically equivalent if a collinearity condition is satisfied. While being nearly as expensive as FGMRES-DR in terms of computational operations per cycle, FGCRO-DR offers the additional advantage of being suitable for the solution of sequences of slowly changing linear systems (where both the matrix and right-hand side can change) through subspace recycling. Numerical experiments on the solution of multidimensional elliptic partial differential equations show the efficiency of FGCRO-DR when solving sequences of linear systems.
Key words. flexible or inner-outer Krylov subspace methods, variable preconditioning, deflation, iterative solver
Exploiting Mixed Precision Floating Point Hardware in Scientific Computations
2007
Cited by 2 (0 self)
By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also …