Results 1-10 of 20
Recent computational developments in Krylov subspace methods for linear systems
Numer. Linear Algebra Appl., 2007
Cited by 51 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
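The most widely used of the variants this survey covers is the restarted method, where the Krylov basis is discarded after m steps and the iteration restarts from the current iterate. A minimal numpy sketch of restarted GMRES, GMRES(m), for illustration only (the function name and parameters are ours, not the survey's):

```python
import numpy as np

def gmres_restarted(A, b, m=20, tol=1e-10, max_restarts=50):
    """Minimal GMRES(m): run m Arnoldi steps, solve the small
    least-squares problem, update the iterate, then restart."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = r / beta
        k = m
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):          # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:         # happy breakdown: exact solve
                k = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x += V[:, :k] @ y
    return x
```

Restarting caps memory and orthogonalization cost at O(nm) per cycle, at the price of possibly slower convergence; the augmented and deflated variants the survey reviews exist largely to recover information lost at each restart.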
Flexible inner-outer Krylov subspace methods
SIAM J. Numer. Anal., 2003
Cited by 22 (2 self)
Flexible Krylov methods refers to a class of methods which accept preconditioning that can change from one step to the next. Given a Krylov subspace method such as CG, GMRES, or QMR for the solution of a linear system Ax = b, instead of having a fixed preconditioner M and the (right) preconditioned equation AM^-1 y = b (Mx = y), one may have a different matrix, say M_k, at each step. In this paper, the case where the preconditioner itself is a Krylov subspace method is studied. There are several papers in the literature where such a situation is presented and numerical examples given. A general theory is provided encompassing many of these cases, including truncated methods. The overall space where the solution is approximated is no longer a Krylov subspace but a subspace of a larger Krylov space. We show how this subspace keeps growing as the outer iteration progresses, thus providing a convergence theory for these inner-outer methods. Numerical tests illustrate some important implementation aspects that make the discussed inner-outer methods very appealing in practical circumstances.
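The step-dependent preconditioner M_k described above is exactly what flexible GMRES (FGMRES) accommodates: the preconditioned vectors Z are stored alongside the Krylov basis V, so the solution can be assembled even though no single M exists. A minimal numpy sketch; the variable number of Jacobi sweeps is an illustrative stand-in for an inner Krylov solve, not the paper's construction:

```python
import numpy as np

def jacobi_sweeps(A, v, j):
    """Illustrative inner 'preconditioner': a few Jacobi iterations on
    A z = v, with a sweep count that changes with the outer step j."""
    d = np.diag(A)
    z = np.zeros_like(v)
    for _ in range(2 + j % 3):          # step-dependent amount of inner work
        z += (v - A @ z) / d
    return z

def fgmres(A, b, precond, m=30):
    """Minimal flexible GMRES: because the preconditioner may differ at
    every step, the vectors Z[:, j] = M_j^{-1} v_j must be stored."""
    n = len(b)
    r = b - A @ np.zeros(n)
    beta = np.linalg.norm(r)
    V = np.zeros((n, m + 1))
    Z = np.zeros((n, m))
    H = np.zeros((m + 1, m))
    V[:, 0] = r / beta
    k = m
    for j in range(m):
        Z[:, j] = precond(A, V[:, j], j)    # changing preconditioner M_j
        w = A @ Z[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:             # breakdown: space is invariant
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    return Z[:, :k] @ y
```

Note the solution is a combination of the Z columns, not the V columns; this is the sense in which the approximation space is a subspace of a larger Krylov space rather than a Krylov subspace itself.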
Domain decomposition for the incompressible Navier–Stokes equations: solving subdomain problems accurately and inaccurately
1995
Using mixed precision for sparse matrix computations to enhance the performance while achieving 64-bit accuracy
ACM Trans. Math. Softw.
Cited by 15 (1 self)
By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. These ideas can be applied to sparse multifrontal and supernodal direct techniques and to sparse iterative techniques such as Krylov subspace methods. The approach presented here can apply not only to conventional processors but also to exotic technologies such as …
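The standard realization of this idea is mixed-precision iterative refinement: factor the matrix once in 32-bit arithmetic, then repeatedly compute the residual and a correction, with only the residual and update done in 64-bit. A dense-matrix sketch using scipy (illustrative; the paper's sparse multifrontal/supernodal machinery is replaced by a plain LU):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, max_iter=10, tol=1e-12):
    """Factor once in float32, then recover float64 accuracy by
    iterative refinement; only the residual and the update are float64."""
    lu = lu_factor(A.astype(np.float32))             # cheap 32-bit factorization
    x = lu_solve(lu, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                                # residual in float64
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        d = lu_solve(lu, r.astype(np.float32))       # correction via 32-bit factors
        x += d.astype(np.float64)
    return x
```

Each refinement step reuses the float32 factors, so the O(n^3) work runs at single-precision speed while the converged solution matches a double-precision solve (provided A is not too ill-conditioned for the float32 factorization).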
Relaxation strategies for nested Krylov methods
Journal of Computational and Applied Mathematics, 2003
Cited by 13 (0 self)
There are classes of linear problems for which the matrix-vector product is a time-consuming operation because an expensive approximation method is required to compute it to a given accuracy. In recent years different authors have investigated the use of so-called relaxation strategies for various Krylov subspace methods. These relaxation strategies aim to minimize the amount of work spent in the computation of the matrix-vector product without compromising the accuracy of the method or the convergence speed too much. In order to achieve this goal, the accuracy of the matrix-vector product is decreased as the iterative process comes closer to the solution. In this paper we show that a further significant reduction in computing time can be obtained by combining a relaxation strategy with the nesting of inexact Krylov methods. Flexible Krylov subspace methods allow variable preconditioning and can therefore be used in the outermost loop of our overall method. For several flexible Krylov methods we analyze strategies for controlling the accuracy both of the inexact matrix-vector products and of the inner iterations. The results of our analysis are illustrated with an example that models global ocean circulation.
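The relaxation idea, loosening the matvec accuracy as the residual drops, can be sketched in a plain full-recurrence GMRES where the permitted matvec error at step j is eta_j = eps / ||r_{j-1}||. This is a sketch of the general strategy, not the paper's nested algorithm; the random perturbation merely simulates an expensive product truncated at accuracy eta_j:

```python
import numpy as np

def relaxed_gmres(A, b, eps=1e-8, m=50, seed=0):
    """GMRES with an inexact matrix-vector product whose allowed error
    eta_j = eps / ||r_{j-1}|| grows as the residual shrinks: late matvecs
    may be computed very roughly without spoiling the final accuracy."""
    rng = np.random.default_rng(seed)
    n = len(b)
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / beta
    res = beta
    for j in range(m):
        eta = min(1.0, eps / res)                       # relaxed accuracy demand
        e = rng.standard_normal(n)
        w = A @ V[:, j] + (eta / np.linalg.norm(e)) * e  # inexact product
        for i in range(j + 1):                          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        k = j + 1
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        res = np.linalg.norm(e1 - H[:k + 1, :k] @ y)    # estimated residual
        if res < eps or H[j + 1, j] < 1e-13:
            break
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :k] @ y
```

The striking point of the theory is visible here: by the time eta approaches 1, the matvec is barely computed at all, yet the attainable true residual stays near eps (up to modest constants), because late perturbations enter the solution with small weights.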
Parallel Implementation of a Multiblock Method with Approximate Subdomain Solution
1998
Cited by 10 (4 self)
Solution of large linear systems encountered in computational fluid dynamics often naturally leads to some form of domain decomposition, especially when it is desired to use parallel machines. It has been proposed to use approximate solvers to obtain fast but rough solutions on the separate subdomains. In this paper a number of approximate solvers are considered, and numerical experiments are included showing speedups obtained on a cluster of workstations as well as on a distributed memory parallel computer. Additionally, some remarks are made pertaining to the practical application of Householder reflections as an orthogonalization procedure within Krylov subspace methods.
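The trade-off studied here, fast but rough solves on the separate subdomains inside an outer iteration, can be sketched with a non-overlapping block-Jacobi decomposition. The few inner Jacobi sweeps per block are an illustrative stand-in for whatever approximate subdomain solver is chosen; none of these names come from the paper:

```python
import numpy as np

def approx_block_jacobi(A, r, blocks, inner_iters=3):
    """One application of a block-Jacobi ('subdomain') preconditioner in
    which each block system is solved only approximately, by a few Jacobi
    sweeps instead of an exact factorization."""
    z = np.zeros_like(r)
    for idx in blocks:
        Ab, rb = A[np.ix_(idx, idx)], r[idx]
        d = np.diag(Ab)
        zb = np.zeros_like(rb)
        for _ in range(inner_iters):        # rough subdomain solve
            zb += (rb - Ab @ zb) / d
        z[idx] = zb
    return z

def dd_richardson(A, b, blocks, tol=1e-10, max_iter=200):
    """Outer Richardson iteration corrected by approximate subdomain solves."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        x += approx_block_jacobi(A, r, blocks)
    return x
```

The structure parallelizes naturally: each block solve touches only its own index set, so subdomains map directly onto processors, which is exactly why inexact subdomain solvers pay off on the cluster and distributed-memory machines the paper reports on.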
Accelerating Scientific Computations with Mixed Precision Algorithms
2008
Cited by 7 (0 self)
On modern architectures, 32-bit operations often run at least twice as fast as 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.
An investigation of Schwarz domain decomposition using accurate and inaccurate solution of subdomains
1995
Exploiting Mixed Precision Floating Point Hardware in Scientific Computations
2007
Cited by 3 (0 self)
By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also …
problems in industrial furnaces
Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier–Stokes equations are used to model the gas flow in the furnace. The discrete Navier–Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces. Copyright © 2000 John Wiley & Sons, Ltd.
KEY WORDS: combustion; efficiency; flow problem; Krylov acceleration; SIMPLE(R) method
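The GCR acceleration described here treats one SIMPLE(R) sweep as the preconditioner inside an outer Krylov loop. A generic numpy sketch of that structure, with a cheap Jacobi sweep standing in for the SIMPLE(R) sweep (illustrative only; the real method acts on the coupled velocity-pressure system):

```python
import numpy as np

def inner_sweep(A, r, n_sweeps=2):
    """Illustrative stand-in for one SIMPLE-type sweep: a couple of
    Jacobi iterations on A z = r."""
    d = np.diag(A)
    z = np.zeros_like(r)
    for _ in range(n_sweeps):
        z += (r - A @ z) / d
    return z

def gcr(A, b, precond, tol=1e-10, max_iter=100):
    """Minimal GCR: each step takes the preconditioned residual as a
    search direction, A-orthogonalizes it against the earlier directions,
    and updates the iterate so the residual norm is minimized."""
    x = np.zeros_like(b)
    r = b - A @ x
    S, AS = [], []                        # search directions s_k and A s_k
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        s = precond(A, r)                 # one cheap inner sweep
        As = A @ s
        for sk, ask in zip(S, AS):        # the stored A s_k are orthonormal
            beta = ask @ As
            s, As = s - beta * sk, As - beta * ask
        nrm = np.linalg.norm(As)
        s, As = s / nrm, As / nrm
        alpha = As @ r
        x += alpha * s
        r -= alpha * As
        S.append(s)
        AS.append(As)
    return x
```

Wrapping the stationary sweep in GCR this way is what turns "many SIMPLE(R) iterations" into far fewer outer steps: the Krylov loop recombines all previous sweep outputs optimally instead of applying them one after another.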