Results 1-10 of 204
Recent computational developments in Krylov subspace methods for linear systems
Numer. Linear Algebra Appl., 2007
Cited by 85 (12 self)
Abstract:
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
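The Krylov subspace idea that the survey reviews can be illustrated with a minimal, self-contained sketch (not taken from the survey): the conjugate gradient method, the prototypical Krylov method for symmetric positive definite systems, in plain Python on a tiny dense matrix.

```python
def cg(A, b, tol=1e-12, maxiter=100):
    """Conjugate gradients for a small dense SPD system A x = b.

    Each iterate x_k lies in the Krylov subspace span{b, Ab, ..., A^(k-1) b},
    the common thread of all the methods reviewed above.
    """
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = list(b)            # residual b - A*0
    p = list(r)            # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

For the 2x2 system [[4, 1], [1, 3]] x = [1, 2] this reaches the exact solution (1/11, 7/11), since CG terminates in at most n steps in exact arithmetic.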
What color is your Jacobian? Graph coloring for computing derivatives
SIAM Rev., 2005
Cited by 61 (11 self)
Abstract:
Graph coloring has been employed since the 1980s to efficiently compute sparse Jacobian and Hessian matrices using either finite differences or automatic differentiation. Several coloring problems occur in this context, depending on whether the matrix is a Jacobian or a Hessian, and on the specifics of the computational techniques employed. We consider eight variant vertex-coloring problems here. This article begins with a gentle introduction to the problem of computing a sparse Jacobian, followed by an overview of the historical development of the research area. Then we present a unifying framework for the graph models of the variant matrix-estimation problems. The framework is based upon the viewpoint that a partition of a matrix into structurally orthogonal groups of columns corresponds to a distance-2 coloring of an appropriate graph representation. The unified framework helps integrate earlier work and leads to fresh insights; enables the design of more efficient algorithms for many problems; leads to new algorithms for others; and eases the task of building graph models for new problems. We report computational results on two of the coloring problems to support our claims. Most of the methods for these problems treat a column or a row of a matrix as an atomic entity, and partition the columns or rows (or both). A brief review of methods that do not fit these criteria is provided. We also discuss results in discrete mathematics and theoretical computer science that intersect with the topics considered here.
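The central observation — columns that share no row can be estimated with a single finite-difference evaluation — can be sketched with a toy greedy partition (an illustration, not one of the paper's eight algorithms). The sparsity pattern is assumed given as a set of (row, column) index pairs.

```python
def partition_columns(pattern, ncols):
    """Greedily partition columns into structurally orthogonal groups.

    Two columns are structurally orthogonal if they have no nonzero in a
    common row; all columns of a group can then be perturbed together in
    one finite-difference evaluation of the Jacobian.  This is equivalent
    to a greedy distance-2 coloring of the associated graph.
    """
    rows_of = [set() for _ in range(ncols)]
    for i, j in pattern:
        rows_of[j].add(i)
    color = [None] * ncols
    groups = []  # per group: the set of rows already covered
    for j in range(ncols):
        for g, covered in enumerate(groups):
            if covered.isdisjoint(rows_of[j]):
                covered |= rows_of[j]
                color[j] = g
                break
        else:
            groups.append(set(rows_of[j]))
            color[j] = len(groups) - 1
    return color, len(groups)
```

On a 4x4 tridiagonal pattern this yields 3 groups (columns 0 and 3 share one), so the full Jacobian is recovered from 3 directional differences instead of 4.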
Newton-GMRES preconditioning for discontinuous Galerkin discretizations of the Navier-Stokes equations
SIAM J. Sci. Comput., 2008
Cited by 39 (11 self)
Abstract:
We study preconditioners for the iterative solution of the linear systems arising in the implicit time integration of the compressible Navier-Stokes equations. The spatial discretization is carried out using a discontinuous Galerkin method with fourth-order polynomial interpolations on triangular elements. The time integration is based on backward difference formulas, resulting in a nonlinear system of equations which is solved at each time step using Newton's method. The resulting linear systems are solved using a preconditioned GMRES iterative algorithm. We consider several existing preconditioners, such as block-Jacobi and Gauss-Seidel combined with multilevel schemes, which have been developed and tested for specific applications. While our results are consistent with the claims reported, we find that these preconditioners lack robustness when used in more challenging situations involving low Mach numbers, stretched grids, or high-Reynolds-number turbulent flows. We propose a preconditioner based on a coarse-scale correction with post-smoothing based on a block incomplete LU factorization with zero fill-in (ILU0) of the Jacobian matrix. The performance of the ILU0 smoother is found to depend critically on the element numbering. We propose a numbering strategy based on minimizing the discarded fill-in in a greedy fashion. The coarse-scale correction scheme is found to be important for diffusion-dominated ...
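As an illustration of the simplest preconditioner mentioned, here is a generic block-Jacobi application step in plain Python (a sketch, not the paper's DG-specific implementation), assuming the matrix is given as a dense list of lists:

```python
def solve_dense(M, rhs):
    """Gaussian elimination with partial pivoting; fine for tiny blocks."""
    n = len(rhs)
    M = [row[:] for row in M]
    rhs = list(rhs)
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        rhs[k], rhs[p] = rhs[p], rhs[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
            rhs[i] -= f * rhs[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def block_jacobi_apply(A, r, bs):
    """Apply the block-Jacobi preconditioner: solve each diagonal block of A
    against the matching slice of the residual r, ignoring off-diagonal
    coupling between blocks."""
    n = len(r)
    z = [0.0] * n
    for s in range(0, n, bs):
        e = min(s + bs, n)
        block = [row[s:e] for row in A[s:e]]
        z[s:e] = solve_dense(block, r[s:e])
    return z
```

When A is exactly block diagonal the preconditioner is the exact inverse; robustness degrades as the discarded off-block coupling grows, which is what motivates the ILU0 smoother above.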
Pseudo-transient continuation and differential-algebraic equations
SIAM J. Sci. Comput., 2003
Cited by 32 (10 self)
Abstract:
Pseudo-transient continuation is a practical technique for globalizing the computation of steady-state solutions of nonlinear differential equations. The technique employs adaptive time-stepping to integrate an initial value problem derived from an underlying ODE or PDE boundary value problem until sufficient accuracy in the desired steady-state root is achieved to switch over to Newton's method and gain rapid asymptotic convergence. The existing theory for pseudo-transient continuation includes a global convergence result for differential equations written in semi-discretized method-of-lines form. However, many problems are better formulated, or can only sensibly be formulated, as differential-algebraic equations (DAEs). These include systems in which some of the equations represent algebraic constraints, perhaps arising from the spatial discretization of a PDE constraint. Multirate systems, in particular, are often formulated as differential-algebraic systems to suppress fast time scales (acoustics, gravity waves, Alfvén waves, near-equilibrium chemical oscillations, etc.) that are irrelevant on the dynamical time scales of interest. In this paper we present a global convergence result for pseudo-transient continuation applied to DAEs of index 1, and we illustrate it with numerical experiments on model incompressible flow and reacting flow problems, in which a constraint is employed to step over acoustic waves.
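The mechanism can be sketched for a scalar equation (a hypothetical illustration, not the paper's algorithm): solve (1/Δt + F'(x)) δ = −F(x), and grow Δt by switched evolution relaxation (SER) as the residual falls, so the iteration morphs from a damped implicit-Euler step into a pure Newton step.

```python
def ptc_scalar(F, dF, x0, dt0=0.1, tol=1e-12, maxiter=200):
    """Pseudo-transient continuation for a scalar steady state F(x) = 0."""
    x, dt = x0, dt0
    fx = F(x)
    for _ in range(maxiter):
        if abs(fx) < tol:
            break
        x += -fx / (1.0 / dt + dF(x))   # regularized Newton step
        fnew = F(x)
        if fnew != 0.0:
            dt *= abs(fx) / abs(fnew)   # SER: grow dt as the residual drops
        fx = fnew
    return x
```

With F(x) = x² − 2 from x0 = 3, the iteration starts with small damped steps and finishes with Newton-like convergence to √2; as dt → ∞ the update is exactly Newton's.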
Parallel Newton-Krylov solver for the Euler equations discretized using simultaneous-approximation terms
AIAA Journal, 2008
Cited by 20 (6 self)
Abstract:
We present a parallel Newton-Krylov algorithm for solving the three-dimensional Euler equations on multi-block structured meshes. The Euler equations are discretized on each block independently using second-order-accurate summation-by-parts operators and scalar numerical dissipation. Boundary conditions are imposed and block interfaces are coupled using simultaneous-approximation terms. The summation-by-parts with simultaneous-approximation-terms approach is time-stable, requires only C0 mesh continuity at block interfaces, accommodates arbitrary block topologies, and has low inter-block communication overhead. The resulting discrete equations are solved iteratively using an inexact-Newton method. At each Newton iteration, the linear system is solved inexactly using a Krylov-subspace iterative method, and both additive Schwarz and approximate Schur preconditioners are investigated. The algorithm is tested on the ONERA M6 wing geometry. We conclude that the approximate Schur preconditioner is an efficient alternative to the Schwarz preconditioner. Overall, the results demonstrate that the Newton-Krylov algorithm is very efficient: using 24 processors, a transonic flow on a 96-block, 1-million-node mesh requires 12 minutes for a reduction of the residual norm by 10 orders of magnitude.
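"Solved inexactly" means the inner Krylov tolerance adapts to the outer Newton progress. One standard recipe in inexact-Newton codes generally (the paper's exact rule is not reproduced here) is the Eisenstat-Walker forcing term:

```python
def ew_forcing(fnorm, fnorm_prev, eta_prev, eta_max=0.9, gamma=0.9, alpha=2.0):
    """Eisenstat-Walker 'choice 2' forcing term: the inner Krylov solve is
    stopped once ||F(x) + J(x) d|| <= eta * ||F(x)||."""
    eta = gamma * (fnorm / fnorm_prev) ** alpha
    # Safeguard: keep eta from collapsing while the previous one was large.
    if gamma * eta_prev ** alpha > 0.1:
        eta = max(eta, gamma * eta_prev ** alpha)
    return min(eta, eta_max)
```

Far from the solution this keeps the inner solves cheap (loose eta); as the residual norm drops, eta shrinks and the outer iteration recovers fast Newton convergence.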
MOOSE: A parallel computational framework for coupled systems of nonlinear equations
Nuclear Engineering and Design, 2009
Cited by 19 (4 self)
Abstract:
Systems of coupled, nonlinear partial differential equations often arise in the simulation of nuclear processes. MOOSE (Multiphysics Object Oriented Simulation Environment), a parallel computational framework targeted at solving these systems, is presented. As opposed to traditional data-flow-oriented computational frameworks, MOOSE is founded on the mathematical principle of Jacobian-free Newton-Krylov (JFNK) solution methods. Utilizing the mathematical structure present in JFNK, physics are modularized into "Kernels," allowing for rapid production of new simulation tools. In addition, systems are solved fully coupled and fully implicit, employing physics-based preconditioning, which allows for great flexibility even with large variance in time scales. A summary of the mathematics, an inspection of the structure of MOOSE, and several representative solutions from applications built on the framework are presented.
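The JFNK trick this framework builds on is standard: the Krylov solver only needs Jacobian-vector products, which can be approximated by a directional finite difference of the residual, so the Jacobian is never formed. A generic sketch (not MOOSE code):

```python
def jfnk_matvec(F, u, v, eps=None):
    """Approximate the Jacobian-vector product J(u) v matrix-free as
    (F(u + eps*v) - F(u)) / eps."""
    vnorm = sum(vi * vi for vi in v) ** 0.5
    if vnorm == 0.0:
        return [0.0] * len(v)
    if eps is None:
        # Common heuristic: balance truncation against rounding error.
        eps = 1e-7 * (1.0 + sum(abs(ui) for ui in u)) / vnorm
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(Fp, Fu)]
```

Because only residual evaluations of F are needed, new physics "Kernels" plug in without supplying Jacobian code, at the cost of one extra residual evaluation per Krylov iteration.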
Globalization Techniques for Newton-Krylov Methods and Applications to the Fully Coupled Solution of the Navier-Stokes Equations
Cited by 18 (5 self)
Abstract:
A Newton-Krylov method is an implementation of Newton's method in which a Krylov subspace method is used to solve approximately the linear subproblems that determine Newton steps. To enhance robustness when good initial approximate solutions are not available, these methods are usually globalized, i.e., augmented with auxiliary procedures (globalizations) that improve the likelihood of convergence from a starting point that is not near a solution. In recent years, globalized Newton-Krylov methods have been used increasingly for the fully coupled solution of large-scale problems. In this paper, we review several representative globalizations, discuss their properties, and report on a numerical study aimed at evaluating their relative merits on large-scale two- and three-dimensional problems involving the steady-state Navier-Stokes equations.
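The most common globalization of this kind is a backtracking line search: halve the step until it yields sufficient residual decrease. A minimal sketch (assuming a caller-supplied `newton_step` callback standing in for the inner Krylov solve):

```python
def newton_backtracking(F, newton_step, x0, tol=1e-10, maxiter=50):
    """Globalized Newton: backtrack along d until the residual norm drops."""
    norm = lambda v: sum(t * t for t in v) ** 0.5
    x = list(x0)
    for _ in range(maxiter):
        fnorm = norm(F(x))
        if fnorm < tol:
            return x
        d = newton_step(x)              # solves J(x) d = -F(x)
        lam = 1.0
        while lam > 1e-10:
            trial = [xi + lam * di for xi, di in zip(x, d)]
            # Accept on sufficient decrease of the residual norm.
            if norm(F(trial)) <= (1.0 - 1e-4 * lam) * fnorm:
                break
            lam *= 0.5
        x = trial
    return x
```

On F(x) = arctan(x), plain Newton from x0 = 3 overshoots and diverges; with backtracking the same steps are damped and the iteration converges to the root at 0.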
Local/global model order reduction strategy for the simulation of quasi-brittle fracture
2011
Anderson acceleration for fixed-point iterations
2009
Cited by 15 (1 self)
Abstract:
This paper concerns an acceleration method for fixed-point iterations that originated in work of D. G. Anderson [J. Assoc. Comput. Mach., 12 (1965), pp. 547–560], which we accordingly call Anderson acceleration here. This method has enjoyed considerable success and wide usage in electronic structure computations, where it is known as Anderson mixing; however, it seems to have been untried or underexploited in many other important applications. Moreover, while other acceleration methods have been extensively studied by the mathematics and numerical analysis communities, this method has received relatively little attention from these communities over the years. A recent paper by H. Fang and Y. Saad [Numer. Linear Algebra Appl., 16 (2009), pp. 197–221] has clarified a remarkable relationship of Anderson acceleration to quasi-Newton (secant updating) methods and extended it to define a broader Anderson family of acceleration methods. In this paper, our goals are to shed additional light on Anderson acceleration and to draw further attention to its usefulness as a general tool. We first show that, on linear problems, Anderson acceleration without truncation is "essentially equivalent" in a certain sense to the generalized minimal residual (GMRES) method. We also show that the Type 1 variant in the Fang–Saad Anderson family is similarly essentially equivalent to the Arnoldi (full orthogonalization) method. We then discuss practical considerations for implementing Anderson acceleration and illustrate its performance through numerical experiments involving a variety of applications. Key words: acceleration methods, fixed-point iterations, generalized minimal residual method, Arnoldi (full orthogonalization) method, iterative methods, expectation-maximization algorithm, ...
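For a scalar fixed point, Anderson acceleration with history depth m = 1 reduces to a secant-like update, which makes a compact illustration (a sketch of the general idea, not the paper's full least-squares implementation):

```python
def anderson1(g, x0, tol=1e-12, maxiter=100):
    """Anderson acceleration with depth m = 1 for a scalar problem x = g(x).

    Each new iterate combines the last two g-values with the weight that
    zeros the extrapolated residual (1 - a)*f_k + a*f_{k-1}.
    """
    x_prev, gx_prev = x0, g(x0)
    f_prev = gx_prev - x_prev
    x = gx_prev                      # first step: plain fixed-point iteration
    for _ in range(maxiter):
        gx = g(x)
        f = gx - x                   # residual of the fixed-point map
        if abs(f) < tol:
            return x
        denom = f - f_prev
        a = 0.0 if denom == 0.0 else f / denom
        x_next = (1.0 - a) * gx + a * gx_prev
        x_prev, gx_prev, f_prev = x, gx, f
        x = x_next
    return x
```

On g = cos starting from 1.0 this converges to the fixed point near 0.739 in far fewer iterations than the plain fixed-point loop, consistent with the secant-updating view discussed above.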