Results 1–10 of 188
Recent computational developments in Krylov subspace methods for linear systems
 Numer. Linear Algebra Appl.
, 2007
Abstract

Cited by 86 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
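This listing contains no code, but the family of methods the survey covers can be illustrated with a minimal Krylov solver. The following pure-Python conjugate gradient sketch is a generic illustration, not taken from the paper; the function names and the sample 2×2 system are made up. It shows the structure shared by Krylov methods: the approximate solution is built from matrix–vector products alone.

```python
def matvec(A, x):
    # Dense matrix-vector product for a list-of-lists matrix.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    # Classic Krylov method for symmetric positive definite A:
    # each iteration needs only one product with A.
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x for x = 0
    p = r[:]
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]      # illustrative SPD system
b = [5.0, 4.0]                    # exact solution is [1.0, 1.0]
x = conjugate_gradient(A, b)
```

For a 2×2 system CG converges in at most two iterations in exact arithmetic, which makes the behavior easy to check by hand.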
What color is your Jacobian? Graph coloring for computing derivatives
 SIAM Rev.
, 2005
Abstract

Cited by 59 (11 self)
Graph coloring has been employed since the 1980s to efficiently compute sparse Jacobian and Hessian matrices using either finite differences or automatic differentiation. Several coloring problems occur in this context, depending on whether the matrix is a Jacobian or a Hessian, and on the specifics of the computational techniques employed. We consider eight variant vertex-coloring problems here. This article begins with a gentle introduction to the problem of computing a sparse Jacobian, followed by an overview of the historical development of the research area. Then we present a unifying framework for the graph models of the variant matrix-estimation problems. The framework is based upon the viewpoint that a partition of a matrix into structurally orthogonal groups of columns corresponds to distance-2 coloring an appropriate graph representation. The unified framework helps integrate earlier work and leads to fresh insights; enables the design of more efficient algorithms for many problems; leads to new algorithms for others; and eases the task of building graph models for new problems. We report computational results on two of the coloring problems to support our claims. Most of the methods for these problems treat a column or a row of a matrix as an atomic entity, and partition the columns or rows (or both). A brief review of methods that do not fit these criteria is provided. We also discuss results in discrete mathematics and theoretical computer science that intersect with the topics considered here.
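As a purely illustrative sketch of the core idea (not code from the article), the following pure-Python function greedily partitions the columns of a sparse matrix into structurally orthogonal groups, i.e. performs a greedy distance-2 coloring over the row/column structure; the tridiagonal sparsity pattern in the example is made up.

```python
def color_columns(sparsity):
    # sparsity: one set of column indices per matrix row.
    # Columns that share a row must land in different groups; grouping
    # columns this way is a greedy distance-2 coloring of the bipartite
    # row/column graph, so each group can be probed with one
    # finite-difference evaluation.
    n_cols = 1 + max(c for row in sparsity for c in row)
    conflicts = [set() for _ in range(n_cols)]
    for row in sparsity:
        for col in row:
            conflicts[col] |= row - {col}
    colors = [-1] * n_cols        # -1 means "not yet colored"
    for col in range(n_cols):
        used = {colors[other] for other in conflicts[col]}
        color = 0
        while color in used:
            color += 1            # smallest color unused by neighbors
        colors[col] = color
    return colors

# Tridiagonal 4x4 sparsity pattern: three groups suffice.
pattern = [{0, 1}, {0, 1, 2}, {1, 2, 3}, {2, 3}]
colors = color_columns(pattern)
```

A valid coloring means that within every row, all nonzero columns carry distinct colors, which is exactly the structural-orthogonality condition described above.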
Newton-GMRES preconditioning for discontinuous Galerkin discretizations of the Navier-Stokes equations
 SIAM J. Sci. Comput.
, 2008
Abstract

Cited by 39 (11 self)
Abstract. We study preconditioners for the iterative solution of the linear systems arising in the implicit time integration of the compressible Navier-Stokes equations. The spatial discretization is carried out using a Discontinuous Galerkin method with fourth-order polynomial interpolations on triangular elements. The time integration is based on backward difference formulas resulting in a nonlinear system of equations which is solved at each time step. This is accomplished using Newton’s method. The resulting linear systems are solved using a preconditioned GMRES iterative algorithm. We consider several existing preconditioners such as block-Jacobi and Gauss-Seidel combined with multilevel schemes which have been developed and tested for specific applications. While our results are consistent with the claims reported, we find that these preconditioners lack robustness when used in more challenging situations involving low Mach numbers, stretched grids, or high Reynolds number turbulent flows. We propose a preconditioner based on a coarse-scale correction with post-smoothing based on a block incomplete LU factorization with zero fill-in (ILU0) of the Jacobian matrix. The performance of the ILU0 smoother is found to depend critically on the element numbering. We propose a numbering strategy based on minimizing the discarded fill-in in a greedy fashion. The coarse-scale correction scheme is found to be important for diffusion-dominated ...
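As a much-simplified stand-in for the block preconditioners discussed above (not the paper's method), the following pure-Python sketch applies the point-Jacobi preconditioner M = diag(A) inside a Richardson iteration; the diagonally dominant 2×2 system is an illustrative assumption.

```python
def jacobi_preconditioned_richardson(A, b, tol=1e-10, max_iter=500):
    # Richardson iteration x <- x + M^{-1}(b - A x) with the point-Jacobi
    # preconditioner M = diag(A): a toy analogue of the block-Jacobi and
    # ILU(0) preconditioners studied in the paper.  Converges when the
    # Jacobi iteration matrix has spectral radius below one, e.g. for
    # diagonally dominant A.
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        r = [bi - sum(aij * xj for aij, xj in zip(row, x))
             for bi, row in zip(b, A)]
        if max(abs(ri) for ri in r) < tol:
            break
        # Apply the diagonal preconditioner to the residual.
        x = [xi + ri / A[i][i] for i, (xi, ri) in enumerate(zip(x, r))]
    return x

A = [[3.0, 1.0], [1.0, 2.0]]      # illustrative, diagonally dominant
b = [5.0, 5.0]                    # exact solution is [1.0, 2.0]
x = jacobi_preconditioned_richardson(A, b)
```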
Pseudo-transient continuation and differential-algebraic equations
 SIAM J. Sci. Comput.
, 2003
Abstract

Cited by 31 (10 self)
Abstract. Pseudo-transient continuation is a practical technique for globalizing the computation of steady-state solutions of nonlinear differential equations. The technique employs adaptive time-stepping to integrate an initial value problem derived from an underlying ODE or PDE boundary value problem until sufficient accuracy in the desired steady-state root is achieved to switch over to Newton’s method and gain rapid asymptotic convergence. The existing theory for pseudo-transient continuation includes a global convergence result for differential equations written in semidiscretized method-of-lines form. However, many problems are better formulated, or can only sensibly be formulated, as differential-algebraic equations (DAEs). These include systems in which some of the equations represent algebraic constraints, perhaps arising from the spatial discretization of a PDE constraint. Multirate systems, in particular, are often formulated as differential-algebraic systems to suppress fast time scales (acoustics, gravity waves, Alfvén waves, near-equilibrium chemical oscillations, etc.) that are irrelevant on the dynamical time scales of interest. In this paper we present a global convergence result for pseudo-transient continuation applied to DAEs of index 1, and we illustrate it with numerical experiments on model incompressible flow and reacting flow problems, in which a constraint is employed to step over acoustic waves.
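The core mechanism can be sketched in a few lines. The following pure-Python scalar version is an illustration under simplifying assumptions, not the paper's DAE algorithm: it uses the switched evolution relaxation (SER) time-step update, and f(x) = arctan(x) with x0 = 10 is a standard test where plain Newton diverges but pseudo-transient continuation converges.

```python
import math

def ptc_scalar(f, df, x0, dt0=1.0, tol=1e-10, max_iter=100):
    # Pseudo-transient continuation for a scalar equation f(x) = 0:
    # each step solves (1/dt + f'(x)) s = -f(x), and dt is adapted by
    # switched evolution relaxation (SER), growing as |f| shrinks so
    # the iteration turns into Newton's method near the root.
    x, dt = x0, dt0
    fnorm = abs(f(x))
    for _ in range(max_iter):
        s = -f(x) / (1.0 / dt + df(x))
        x += s
        fnew = abs(f(x))
        if fnew < tol:
            break
        dt *= fnorm / max(fnew, 1e-300)   # SER time-step update
        fnorm = fnew
    return x

# arctan(x) = 0: plain Newton from x0 = 10 overshoots and diverges,
# while the damped early steps here keep the iterates under control.
root = ptc_scalar(math.atan, lambda t: 1.0 / (1.0 + t * t), 10.0)
```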
Globalization Techniques for Newton-Krylov Methods and Applications to the Fully Coupled Solution of the Navier-Stokes Equations
Abstract

Cited by 17 (5 self)
A Newton-Krylov method is an implementation of Newton's method in which a Krylov subspace method is used to solve approximately the linear subproblems that determine Newton steps. To enhance robustness when good initial approximate solutions are not available, these methods are usually globalized, i.e., augmented with auxiliary procedures (globalizations) that improve the likelihood of convergence from a starting point that is not near a solution. In recent years, globalized Newton-Krylov methods have been used increasingly for the fully coupled solution of large-scale problems. In this paper, we review several representative globalizations, discuss their properties, and report on a numerical study aimed at evaluating their relative merits on large-scale two- and three-dimensional problems involving the steady-state Navier-Stokes equations.
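A minimal example of one such globalization is a backtracking line search, sketched here for a scalar equation (a generic illustration, not the paper's implementation): the Newton step is halved until the residual magnitude decreases sufficiently.

```python
import math

def newton_backtracking(f, df, x0, tol=1e-10, max_iter=100):
    # Newton's method with a simple backtracking globalization: damp
    # the step by halving until |f| satisfies an Armijo-like decrease
    # condition, so far-from-solution steps cannot blow up.
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        step = -fx / df(x)            # full (undamped) Newton step
        lam = 1.0
        while abs(f(x + lam * step)) > (1.0 - 1e-4 * lam) * abs(fx):
            lam *= 0.5                # backtrack
            if lam < 1e-12:
                break                 # safeguard: accept tiny step
        x += lam * step
    return x

# Same illustrative test as above: undamped Newton on arctan(x) = 0
# from x0 = 10 diverges; the line search restores convergence.
root = newton_backtracking(math.atan, lambda t: 1.0 / (1.0 + t * t), 10.0)
```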
Anderson acceleration for fixed-point iterations
, 2009
Abstract

Cited by 15 (1 self)
Abstract. This paper concerns an acceleration method for fixed-point iterations that originated in work of D. G. Anderson [J. Assoc. Comput. Mach., 12 (1965), pp. 547–560], which we accordingly call Anderson acceleration here. This method has enjoyed considerable success and wide usage in electronic structure computations, where it is known as Anderson mixing; however, it seems to have been untried or underexploited in many other important applications. Moreover, while other acceleration methods have been extensively studied by the mathematics and numerical analysis communities, this method has received relatively little attention from these communities over the years. A recent paper by H. Fang and Y. Saad [Numer. Linear Algebra Appl., 16 (2009), pp. 197–221] has clarified a remarkable relationship of Anderson acceleration to quasi-Newton (secant updating) methods and extended it to define a broader Anderson family of acceleration methods. In this paper, our goals are to shed additional light on Anderson acceleration and to draw further attention to its usefulness as a general tool. We first show that, on linear problems, Anderson acceleration without truncation is “essentially equivalent” in a certain sense to the generalized minimal residual (GMRES) method. We also show that the Type 1 variant in the Fang–Saad Anderson family is similarly essentially equivalent to the Arnoldi (full orthogonalization) method. We then discuss practical considerations for implementing Anderson acceleration and illustrate its performance through numerical experiments involving a variety of applications. Key words. acceleration methods, fixed-point iterations, generalized minimal residual method, Arnoldi (full orthogonalization) method, iterative methods, expectation-maximization algorithm, …
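The method is simple to state. Here is a pure-Python sketch of Anderson acceleration with window m = 1, where the least-squares mixing coefficient reduces to a secant update; the fixed-point map g(x) = cos(x) is a made-up example, not from the paper.

```python
import math

def anderson_m1(g, x0, tol=1e-12, max_iter=100):
    # Anderson acceleration with window m = 1 for a scalar fixed-point
    # iteration x = g(x); residual f(x) = g(x) - x.  The new iterate is
    # (1 - theta) * g(x_k) + theta * g(x_{k-1}), with theta chosen to
    # minimize |(1 - theta) * f_k + theta * f_{k-1}|.
    x_prev = x0
    x = g(x0)
    f_prev = x - x0                   # residual at x0
    for _ in range(max_iter):
        f = g(x) - x
        if abs(f) < tol:
            break
        denom = f - f_prev
        theta = f / denom if denom != 0.0 else 0.0
        # x + f == g(x) and x_prev + f_prev == g(x_prev)
        x_new = (1.0 - theta) * (x + f) + theta * (x_prev + f_prev)
        x_prev, f_prev = x, f
        x = x_new
    return x

# Fixed point of cos(x), reached far faster than plain iteration.
x_star = anderson_m1(math.cos, 1.0)
```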
Local/global model order reduction strategy for the simulation of quasi-brittle fracture
 , 2011
DIMENSIONAL REDUCTION OF THE FOKKER-PLANCK EQUATION FOR STOCHASTIC CHEMICAL REACTIONS
, 2005
Abstract

Cited by 14 (4 self)
The Fokker-Planck equation models chemical reactions at the mesoscale. The solution is a probability density function for the copy number of the different molecules. The number of dimensions of the problem can be large, making numerical simulation of the reactions computationally intractable. The number of dimensions is reduced here by deriving partial differential equations for the first moments of some of the species and coupling them to a Fokker-Planck equation for the remaining species. With more simplifying assumptions, another system of equations is derived consisting of integro-differential equations and a Fokker-Planck equation. In this way, the simulation of the chemical networks is possible without the exponential growth in computational work and memory of the original equation and with better modelling accuracy than the macroscopic reaction rate equations. Some terms in the equations are small and are ignored. Conditions are given under which the influence of these terms on the equations and the solutions is small. The difference between the models is illustrated in a numerical example.
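As a toy illustration of moment-based reduction (not the paper's model: for linear decay the first-moment equation closes exactly, which is generally not true), the following pure-Python sketch integrates the full master equation for the decay reaction A → ∅ and compares its mean to the one-dimensional reduced model m'(t) = -k m(t).

```python
import math

def decay_master_equation_mean(n0=20, k=1.0, t_end=1.0, dt=1e-4):
    # Full (unreduced) description: probabilities p_n(t) over copy
    # numbers n = 0..n0 for A -> 0 at rate k, evolving by
    #   p_n' = k*(n+1)*p_{n+1} - k*n*p_n,
    # integrated with forward Euler.  Returns the mean copy number,
    # computed from the whole (n0+1)-dimensional distribution.
    p = [0.0] * (n0 + 1)
    p[n0] = 1.0                       # start with exactly n0 molecules
    for _ in range(int(round(t_end / dt))):
        new_p = []
        for n, pn in enumerate(p):
            inflow = k * (n + 1) * p[n + 1] if n < n0 else 0.0
            new_p.append(pn + dt * (inflow - k * n * pn))
        p = new_p
    return sum(n * pn for n, pn in enumerate(p))

# Reduced model: the first moment alone obeys m' = -k*m, so
# m(t) = n0 * exp(-k*t) -- a single ODE instead of n0+1 equations.
full_mean = decay_master_equation_mean()
reduced_mean = 20 * math.exp(-1.0)
```

The point of the comparison is that the reduced model tracks the full distribution's mean while avoiding the dimensional blow-up; for nonlinear kinetics the moment equations no longer close and approximations like those in the paper are needed.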