Results 1–10 of 17
Recent computational developments in Krylov subspace methods for linear systems
Numer. Linear Algebra Appl., 2007
Cited by 48 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
Flexible inner-outer Krylov subspace methods
SIAM J. Numer. Anal., 2003
Cited by 21 (2 self)
Flexible Krylov methods refers to a class of methods which accept preconditioning that can change from one step to the next. Given a Krylov subspace method, such as CG, GMRES, QMR, etc., for the solution of a linear system Ax = b, instead of having a fixed preconditioner M and the (right) preconditioned equation AM⁻¹y = b (with Mx = y), one may have a different matrix, say Mk, at each step. In this paper, the case where the preconditioner itself is a Krylov subspace method is studied. There are several papers in the literature where such a situation is presented and numerical examples given. A general theory is provided encompassing many of these cases, including truncated methods. The overall space where the solution is approximated is no longer a Krylov subspace but a subspace of a larger Krylov space. We show how this subspace keeps growing as the outer iteration progresses, thus providing a convergence theory for these inner-outer methods. Numerical tests illustrate some important implementation aspects that make the discussed inner-outer methods very appealing in practical circumstances.
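The flexible preconditioning this abstract describes can be sketched in a few dozen lines. The dense NumPy FGMRES below is an illustrative sketch, not the paper's code: the function name `fgmres` and the step-dependent preconditioner callback `M(j, v)` are this sketch's own conventions. The key difference from ordinary GMRES is that the preconditioned directions Z must be stored, because the solution lives in span(Z) rather than in a single Krylov subspace.

```python
import numpy as np

def fgmres(A, b, M, tol=1e-10, maxit=50):
    """Minimal flexible GMRES sketch: the preconditioner application
    M(j, v) may change with the step j, so the preconditioned
    directions Z are stored explicitly (zero initial guess)."""
    n = b.size
    V = np.zeros((n, maxit + 1))      # Arnoldi basis
    Z = np.zeros((n, maxit))          # preconditioned directions
    H = np.zeros((maxit + 1, maxit))  # Hessenberg matrix
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    x = np.zeros(n)
    for j in range(maxit):
        Z[:, j] = M(j, V[:, j])       # z_j = M_j^{-1} v_j, step-dependent
        w = A @ Z[:, j]
        for i in range(j + 1):        # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-14:
            V[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)[0]
        x = Z[:, :j + 1] @ y          # solution lies in span(Z), not span(V)
        if np.linalg.norm(b - A @ x) <= tol * beta:
            break
    return x
```

A preconditioner that alternates between a diagonal solve and the identity exercises the "changing M_k" case that a fixed-preconditioner GMRES cannot accept.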
Domain decomposition for the incompressible Navier-Stokes equations: solving subdomain problems accurately and inaccurately
, 1995
Cited by 14 (3 self)
For the solution of practical flow problems in arbitrarily shaped domains, simple Schwarz …
Using mixed precision for sparse matrix computations to enhance the performance while achieving 64-bit accuracy
ACM Trans. Math. Softw.
Cited by 13 (1 self)
By using a combination of 32-bit and 64-bit floating-point arithmetic, the performance of many sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. These ideas can be applied to sparse multifrontal and supernodal direct techniques and to sparse iterative techniques such as Krylov subspace methods. The approach presented here can apply not only to conventional processors but also to exotic technologies such as …
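The standard way to realize this idea is iterative refinement: do the expensive solve in single precision and compute residuals and corrections in double. The sketch below is illustrative only, not the paper's implementation; a dense `np.linalg.solve` on the float32 copy stands in for the sparse factorization a real code would reuse, and the function name is this sketch's own.

```python
import numpy as np

def mixed_precision_refine(A, b, sweeps=5):
    """Mixed-precision iterative refinement sketch: the costly solve runs
    in float32, while residuals and corrections accumulate in float64.
    A dense solve stands in for the reused sparse factorization."""
    A32 = A.astype(np.float32)
    # initial solution entirely in single precision
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(sweeps):
        r = b - A @ x                                   # float64 residual
        d = np.linalg.solve(A32, r.astype(np.float32))  # cheap 32-bit solve
        x = x + d.astype(np.float64)                    # 64-bit correction
    return x
```

For a well-conditioned matrix, each sweep multiplies the error by roughly the single-precision unit roundoff times the condition number, so a handful of sweeps recovers full 64-bit accuracy at close to 32-bit cost.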
Relaxation strategies for nested Krylov methods
Journal of Computational and Applied Mathematics, 2003
Cited by 11 (0 self)
There are classes of linear problems for which the matrix-vector product is a time-consuming operation because an expensive approximation method is required to compute it to a given accuracy. In recent years different authors have investigated the use of so-called relaxation strategies for various Krylov subspace methods. These relaxation strategies aim to minimize the amount of work spent in the computation of the matrix-vector product without compromising the accuracy of the method or the convergence speed too much. In order to achieve this goal, the accuracy of the matrix-vector product is decreased as the iterative process comes closer to the solution. In this paper we show that a further significant reduction in computing time can be obtained by combining a relaxation strategy with the nesting of inexact Krylov methods. Flexible Krylov subspace methods allow variable preconditioning and can therefore be used in the outermost loop of our overall method. We analyze, for several flexible Krylov methods, strategies for controlling the accuracy of both the inexact matrix-vector products and the inner iterations. The results of our analysis are illustrated with an example that models global ocean circulation.
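A hedged sketch of the relaxation idea (not this paper's nested method): in the GMRES loop below, each matrix-vector product is perturbed by an amount eps·‖b‖/‖r‖, i.e. the product is allowed to become *less* accurate as the computed residual shrinks, which is the core of the relaxation strategies the abstract describes. The perturbation is modeled explicitly by random noise; the function name and tolerance rule are this sketch's own simplifications.

```python
import numpy as np

def relaxed_gmres(A, b, eps=1e-12, maxit=40, rng=None):
    """GMRES with an inexact matrix-vector product whose error tolerance
    is relaxed (enlarged) as the computed residual decreases; the
    inexactness is modeled by a random perturbation of size tol_j."""
    rng = rng or np.random.default_rng(1)
    n = b.size
    V = np.zeros((n, maxit + 1))
    H = np.zeros((maxit + 1, maxit))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    res = beta
    x = np.zeros(n)
    for j in range(maxit):
        tol_j = eps * beta / res      # relaxation: loosen as res shrinks
        e = rng.standard_normal(n)
        w = A @ V[:, j] + (tol_j / np.linalg.norm(e)) * e  # inexact product
        for i in range(j + 1):        # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-14:
            V[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)[0]
        res = np.linalg.norm(H[:j + 2, :j + 1] @ y - e1)  # computed residual
        x = V[:, :j + 1] @ y
        if res <= eps * beta:
            break
    return x
```

Late products may thus be orders of magnitude cheaper than early ones, yet the final true residual stays near eps·‖b‖, because late perturbations enter the solution weighted by the already-small residual.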
Accelerating Scientific Computations with Mixed Precision Algorithms
, 2008
Cited by 4 (0 self)
On modern architectures, 32-bit floating-point operations often run at least twice as fast as 64-bit operations. By using a combination of 32-bit and 64-bit floating-point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphics Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.
An investigation of Schwarz domain decomposition using accurate and inaccurate solution of subdomains
, 1995
Cited by 4 (2 self)
For the solution of practical complex problems in arbitrarily shaped domains, simple …
Exploiting Mixed Precision Floating Point Hardware in Scientific Computations
, 2007
Cited by 2 (0 self)
By using a combination of 32-bit and 64-bit floating-point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also …
The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces
Parallel Domain Decomposition with Incomplete Subdomain Solution
, 1996
In this paper we outline a parallel implementation of Krylov-accelerated Schwarz domain decomposition in which subdomain problems are solved to low precision. By so doing, computational time is focused on the convergence of the global iteration rather than wasted on ineffective subdomain iterations. We consider the GCR method using classical Gram-Schmidt and Householder orthogonalization methods. Our goal is to apply this approach to the incompressible Navier-Stokes equations. For the parallel implementation, we assume a distributed-memory system with message passing.
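The incomplete-subdomain-solution idea can be sketched in serial with a zero-overlap additive Schwarz (i.e. block Jacobi) iteration in which each subdomain system is "solved" with only a few Jacobi sweeps instead of a direct solve. This is a hedged stand-in, not the paper's parallel GCR implementation: the diagonally dominant tridiagonal test matrix, the function names, and the choice of Jacobi sweeps as the inexact inner solver are all this sketch's own assumptions.

```python
import numpy as np

def tridiag(n, d=3.0):
    """Diagonally dominant tridiagonal test matrix (illustrative stand-in
    for a discretized flow operator; d=3 keeps the sketch convergent)."""
    return d * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def schwarz_inexact(A, b, nblocks=4, inner=5, outer=500, tol=1e-8):
    """Zero-overlap additive Schwarz (block Jacobi) in which each
    subdomain problem is solved only approximately, by a few Jacobi
    sweeps -- the 'incomplete subdomain solution' idea."""
    n = b.size
    blocks = np.array_split(np.arange(n), nblocks)  # subdomain index sets
    x = np.zeros_like(b)
    nb = np.linalg.norm(b)
    for _ in range(outer):
        r = b - A @ x                    # global residual
        if np.linalg.norm(r) <= tol * nb:
            break
        for idx in blocks:
            Ai = A[np.ix_(idx, idx)]     # subdomain operator
            di = np.diag(Ai)
            yi = np.zeros(idx.size)
            for _ in range(inner):       # inexact subdomain solve
                yi = yi + (r[idx] - Ai @ yi) / di
            x[idx] += yi                 # additive correction
    return x
```

In the paper's setting the outer loop is GCR-accelerated and the subdomains run on separate processors; the point the sketch preserves is that cheap, low-precision inner solves still yield a convergent global iteration.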