Results 1–10 of 33
Choosing the Forcing Terms in an Inexact Newton Method
 SIAM J. SCI. COMPUT
, 1994
"... An inexact Newton method is a generalization of Newton's method for solving F(x) = 0, F:/ /, in which, at the kth iteration, the step sk from the current approximate solution xk is required to satisfy a condition ]lF(x) + F'(x)s]l _< /]lF(xk)]l for a "forcing term" / [0,1). I ..."
Abstract

Cited by 156 (6 self)
An inexact Newton method is a generalization of Newton's method for solving F(x) = 0, F: Rⁿ → Rⁿ, in which, at the kth iteration, the step s_k from the current approximate solution x_k is required to satisfy a condition ‖F(x_k) + F′(x_k)s_k‖ ≤ η_k‖F(x_k)‖ for a "forcing term" η_k ∈ [0, 1). In typical applications, the choice of the forcing terms is critical to the efficiency of the method and can affect robustness as well. Promising choices of the forcing terms are given, their local convergence properties are analyzed, and their practical performance is shown on a representative set of test problems.
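A minimal sketch of the iteration this abstract describes, on an illustrative 2-D system with a fixed forcing term η_k = 0.1 (the paper's point is precisely that better, adaptive choices of η_k exist). The test problem, the inner solver, and all tolerances are our own assumptions, not the paper's:

```python
import math

def F(x):
    # illustrative system: F(x, y) = (x^2 + y^2 - 4, x - y),
    # with a root at (sqrt(2), sqrt(2))
    return [x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]]

def J(x):
    # Jacobian of F
    return [[2.0*x[0], 2.0*x[1]], [1.0, -1.0]]

def norm(v):
    return math.sqrt(sum(vi*vi for vi in v))

def inexact_solve(A, b, eta, nb):
    # Approximate solve of A s = b: stop once ||b - A s|| <= eta * nb.
    # Here: gradient iteration on the normal equations; any inner
    # iterative solver (e.g. a Krylov method) plays this role in practice.
    s = [0.0, 0.0]
    omega = 0.05  # illustrative step size for this small, well-scaled A
    for _ in range(2000):
        r = [b[i] - (A[i][0]*s[0] + A[i][1]*s[1]) for i in range(2)]
        if norm(r) <= eta * nb:
            break
        At_r = [A[0][0]*r[0] + A[1][0]*r[1], A[0][1]*r[0] + A[1][1]*r[1]]
        s = [s[i] + omega * At_r[i] for i in range(2)]
    return s

def inexact_newton(x, eta=0.1, tol=1e-8, maxit=50):
    # each step only requires ||F(xk) + F'(xk) sk|| <= eta * ||F(xk)||
    for k in range(maxit):
        fx = F(x)
        if norm(fx) < tol:
            return x, k
        s = inexact_solve(J(x), [-fi for fi in fx], eta, norm(fx))
        x = [x[i] + s[i] for i in range(2)]
    return x, maxit

x, k = inexact_newton([1.0, 2.0])  # converges to (sqrt(2), sqrt(2))
```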
Continuation and Path Following
, 1992
"... CONTENTS 1 Introduction 1 2 The Basics of PredictorCorrector Path Following 3 3 Aspects of Implementations 7 4 Applications 15 5 PiecewiseLinear Methods 34 6 Complexity 41 7 Available Software 44 References 48 1. Introduction Continuation, embedding or homotopy methods have long served as useful ..."
Abstract

Cited by 95 (6 self)
CONTENTS 1 Introduction 1 2 The Basics of Predictor-Corrector Path Following 3 3 Aspects of Implementations 7 4 Applications 15 5 Piecewise-Linear Methods 34 6 Complexity 41 7 Available Software 44 References 48 1. Introduction Continuation, embedding or homotopy methods have long served as useful theoretical tools in modern mathematics. Their use can be traced back at least to such venerated works as those of Poincaré (1881–1886), Klein (1882–1883) and Bernstein (1910). Leray and Schauder (1934) refined the tool and presented it as a global result in topology, viz., the homotopy invariance of degree. The use of deformations to solve nonlinear systems of equations may be traced back at least to Lahaye (1934). The classical embedding methods were the [Preprint by E. Allgower and K. Georg, Colorado State University; partially supported by the National Science Foundation via grant DMS-9104058.]
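The predictor-corrector idea surveyed above can be sketched on a toy scalar path (our example, not from the survey): trace x(t) defined by H(x, t) = x³ + x − t = 0 with an Euler tangent predictor followed by a Newton corrector at each parameter step.

```python
def trace_path(t_end=10.0, steps=100):
    # follow x(t) with H(x, t) = x^3 + x - t = 0 from (x, t) = (0, 0)
    x, t = 0.0, 0.0
    dt = t_end / steps
    for _ in range(steps):
        t += dt
        # predictor: Euler step along the tangent, dx/dt = 1 / H_x
        x += dt / (3.0*x*x + 1.0)
        # corrector: Newton iteration on H(., t) at the new t
        for _ in range(20):
            h = x**3 + x - t
            if abs(h) < 1e-12:
                break
            x -= h / (3.0*x*x + 1.0)
    return x

x_end = trace_path()  # at t = 10, x^3 + x = 10 gives x = 2
```

Real path followers parametrize by arclength so they can round turning points; this natural-parameter version only illustrates the predict-then-correct structure.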
Trust-Region Interior-Point Algorithms for Minimization Problems with Simple Bounds
 SIAM J. CONTROL AND OPTIMIZATION
, 1995
"... Two trustregion interiorpoint algorithms for the solution of minimization problems with simple bounds are analyzed and tested. The algorithms scale the local model in a way similar to Coleman and Li [1]. The first algorithm is more usual in that the trust region and the local quadratic model are c ..."
Abstract

Cited by 56 (18 self)
Two trust-region interior-point algorithms for the solution of minimization problems with simple bounds are analyzed and tested. The algorithms scale the local model in a way similar to Coleman and Li [1]. The first algorithm is more usual in that the trust region and the local quadratic model are consistently scaled. The second algorithm proposed here uses an unscaled trust region. A global convergence result for these algorithms is given and dogleg and conjugate-gradient algorithms to compute trial steps are introduced. Some numerical examples that show the advantages of the second algorithm are presented.
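A plain (unscaled) dogleg trial step for a 2-D quadratic model m(s) = gᵀs + ½sᵀBs inside ‖s‖ ≤ Δ, sketched under the assumption that B is symmetric positive definite. This illustrates the generic dogleg computation the abstract refers to, not the authors' scaled variant:

```python
import math

def dot(a, b): return a[0]*b[0] + a[1]*b[1]
def nrm(a): return math.sqrt(dot(a, a))

def dogleg(g, B, Delta):
    # full (Newton) step sN = -B^{-1} g, via Cramer's rule for 2x2 B
    det = B[0][0]*B[1][1] - B[0][1]*B[1][0]
    sN = [-( B[1][1]*g[0] - B[0][1]*g[1]) / det,
          -(-B[1][0]*g[0] + B[0][0]*g[1]) / det]
    if nrm(sN) <= Delta:
        return sN                        # model minimizer lies inside region
    Bg = [B[0][0]*g[0] + B[0][1]*g[1], B[1][0]*g[0] + B[1][1]*g[1]]
    sC = [-dot(g, g) / dot(g, Bg) * gi for gi in g]   # Cauchy point
    if nrm(sC) >= Delta:
        # even the Cauchy point is outside: clipped steepest descent
        return [-Delta / nrm(g) * gi for gi in g]
    # otherwise walk from sC toward sN until ||s|| = Delta
    d = [sN[i] - sC[i] for i in range(2)]
    a, b, c = dot(d, d), 2.0*dot(sC, d), dot(sC, sC) - Delta*Delta
    tau = (-b + math.sqrt(b*b - 4.0*a*c)) / (2.0*a)
    return [sC[i] + tau*d[i] for i in range(2)]

s_in = dogleg([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], 2.0)   # interior: full step
s_bd = dogleg([1.0, 1.0], [[1.0, 0.0], [0.0, 10.0]], 0.6)  # dogleg leg, ||s|| = 0.6
```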
Krylov subspace acceleration of nonlinear multigrid schemes
 Electronic Transactions on Numerical Analysis, 6:271–290
, 1997
"... Abstract. In this paper we present a Krylov acceleration technique for nonlinear PDEs. As a ‘preconditioner’ we use nonlinear multigrid schemes such as the Full Approximation Scheme (FAS) [1]. The benefits of nonlinear multigrid used in combination with the new accelerator are illustrated by difficu ..."
Abstract

Cited by 23 (3 self)
Abstract. In this paper we present a Krylov acceleration technique for nonlinear PDEs. As a ‘preconditioner’ we use nonlinear multigrid schemes such as the Full Approximation Scheme (FAS) [1]. The benefits of nonlinear multigrid used in combination with the new accelerator are illustrated by difficult nonlinear elliptic scalar problems, such as the Bratu problem, and for systems of nonlinear equations, such as the Navier-Stokes equations. Key words. nonlinear Krylov acceleration, nonlinear multigrid, robustness, restarting conditions. AMS subject classifications. 65N55, 65H10, 65Bxx. 1. Introduction. It is well known that multigrid solution methods are optimal O(N) solvers, when all components in a method are chosen correctly. For difficult problems, such as some systems of nonlinear equations, it is far from trivial to choose these optimal components. The influence on the multigrid convergence of combinations of complicated factors, like convection-dominance, anisotropies, nonlinearities or non-M-matrix properties (the
Anderson acceleration for fixed-point iterations
, 2009
"... Abstract. This paper concerns an acceleration method for fixedpoint iterations that originated in work of D. G. Anderson [J. Assoc. Comput. Mach., 12 (1965), pp. 547–560], which we accordingly call Anderson acceleration here. This method has enjoyed considerable success and wide usage in electronic ..."
Abstract

Cited by 15 (1 self)
Abstract. This paper concerns an acceleration method for fixed-point iterations that originated in work of D. G. Anderson [J. Assoc. Comput. Mach., 12 (1965), pp. 547–560], which we accordingly call Anderson acceleration here. This method has enjoyed considerable success and wide usage in electronic structure computations, where it is known as Anderson mixing; however, it seems to have been untried or underexploited in many other important applications. Moreover, while other acceleration methods have been extensively studied by the mathematics and numerical analysis communities, this method has received relatively little attention from these communities over the years. A recent paper by H. Fang and Y. Saad [Numer. Linear Algebra Appl., 16 (2009), pp. 197–221] has clarified a remarkable relationship of Anderson acceleration to quasi-Newton (secant updating) methods and extended it to define a broader Anderson family of acceleration methods. In this paper, our goals are to shed additional light on Anderson acceleration and to draw further attention to its usefulness as a general tool. We first show that, on linear problems, Anderson acceleration without truncation is “essentially equivalent” in a certain sense to the generalized minimal residual (GMRES) method. We also show that the Type 1 variant in the Fang–Saad Anderson family is similarly essentially equivalent to the Arnoldi (full orthogonalization) method. We then discuss practical considerations for implementing Anderson acceleration and illustrate its performance through numerical experiments involving a variety of applications. Key words. acceleration methods, fixed-point iterations, generalized minimal residual method, Arnoldi (full orthogonalization) method, iterative methods, expectation-maximization algorithm,
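A minimal sketch of the technique on a scalar fixed-point problem x = cos(x) (our example, not the paper's). With mixing depth m = 1, Anderson acceleration combines the two most recent iterates with the weight that minimizes the mixed residual, which in the scalar case reduces to a secant-like update:

```python
import math

def anderson_m1(g, x0, tol=1e-12, maxit=100):
    # depth-1 Anderson: combine the two most recent iterates with the
    # weight gamma minimizing |(1 - gamma)*r_k + gamma*r_{k-1}|,
    # where r = g(x) - x is the fixed-point residual
    x_prev, x = x0, g(x0)            # one plain fixed-point step to start
    for _ in range(maxit):
        r, r_prev = g(x) - x, g(x_prev) - x_prev
        if abs(r) < tol or r == r_prev:
            return x
        gamma = r / (r - r_prev)     # scalar least-squares weight
        x_prev, x = x, g(x) - gamma * (g(x) - g(x_prev))
    return x

root = anderson_m1(math.cos, 1.0)  # Dottie number, about 0.739085
```

The unaccelerated iteration x ← cos(x) converges only linearly; the depth-1 accelerated version above converges in a handful of steps. General Anderson acceleration keeps a window of m previous residuals and solves a small least-squares problem for the mixing weights.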
Analysis of Inexact Trust-Region Interior-Point SQP Algorithms
, 1995
"... In this paper we analyze inexact trustregion interiorpoint (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applicati ..."
Abstract

Cited by 11 (7 self)
In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applications, in particular in optimal control problems with bounds on the control. The nonlinear constraints often come from the discretization of partial differential equations. In such cases the calculation of derivative information and the solution of linearized equations is expensive. Often, the solution of linear systems and derivatives are computed inexactly, yielding nonzero residuals. This paper analyzes the effect of the inexactness on the convergence of TRIP SQP and gives practical rules to control the size of the residuals of these inexact calculations. It is shown that if the size of the residuals is of the order of both the size of the constraints and the trust-region radius, t...
Accelerated Inexact Newton Schemes for Large Systems of Nonlinear Equations
, 1995
"... Classical iteration methods for linear systems, such as Jacobi Iteration, can be accelerated considerably by Krylov subspace methods like GMRES. In this paper, we describe how Inexact Newton methods for nonlinear problems can be accelerated in a similar way and how this leads to a general framework ..."
Abstract

Cited by 6 (0 self)
Classical iteration methods for linear systems, such as Jacobi iteration, can be accelerated considerably by Krylov subspace methods like GMRES. In this paper, we describe how Inexact Newton methods for nonlinear problems can be accelerated in a similar way and how this leads to a general framework that includes many well-known techniques for solving linear and nonlinear systems, as well as new ones. Inexact Newton methods are frequently used in practice to avoid the expensive exact solution of the large linear system arising in the (possibly also inexact) linearization step of Newton's process. Our framework includes acceleration techniques for the "linear steps" as well as for the "nonlinear steps" in Newton's process. The described class of methods, the AIN (Accelerated Inexact Newton) methods, contains methods like GMRES and GMRESR for linear systems, Arnoldi and Jacobi-Davidson for linear eigenproblems, and many variants of Newton's method, like Damped Newton, for general nonlin...
A parallel Newton multigrid method for high order finite elements and its application to numerical existence proofs for elliptic boundary value equations
, 1996
"... . We describe a parallel algorithm for the numerical computation of guaranteed bounds for solutions of elliptic boundary value equations of second order. We use C 2 Hermite elements and a parallel Newton multigrid method to produce approximations of high accuracy. Then, we compute upper bounds fo ..."
Abstract

Cited by 5 (1 self)
We describe a parallel algorithm for the numerical computation of guaranteed bounds for solutions of elliptic boundary value equations of second order. We use C² Hermite elements and a parallel Newton multigrid method to produce approximations of high accuracy. Then, we compute upper bounds for the defect and enclosures for the eigenvalues of the linearization. In order to obtain verified bounds, these computations are realized in interval arithmetic. The application of the Newton-Kantorovich theorem yields the existence of a solution and error bounds for the approximation. The method is implemented on a 256-processor transputer grid and tested for the Bratu problem −Δu = exp(u). AMS subject classifications: 65N55, 65N30, 65Y05, 65N15 Key words: parallel multigrid, nonlinear elliptic boundary equations, error bounds, Bratu problem 1 Introduction We describe a numerical procedure to solve nonlinear boundary equations of the form −Δu(x) + F(x, u(x)) = 0 for...
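The Newton-Kantorovich existence test this abstract relies on can be illustrated on a scalar example (ours, in plain floating point rather than the paper's verified interval arithmetic). With β ≥ |f′(x₀)⁻¹|, η the length of the first Newton step, and L a Lipschitz constant for f′, the condition h = βLη ≤ 1/2 certifies a root within radius t* = (1 − √(1 − 2h))/h · η of x₀:

```python
import math

def kantorovich_radius(f, fprime, L, x0):
    # beta bounds |f'(x0)^{-1}|, eta is the length of the first Newton
    # step, L is a Lipschitz constant for f' near x0; h <= 1/2 certifies
    # a root within radius (1 - sqrt(1 - 2h)) / h * eta of x0
    beta = 1.0 / abs(fprime(x0))
    eta = beta * abs(f(x0))
    h = beta * L * eta
    if h > 0.5:
        return None          # test inconclusive at this x0
    if h == 0.0:
        return eta           # x0 already a root; limit of the formula
    return (1.0 - math.sqrt(1.0 - 2.0*h)) / h * eta

# f(x) = x^2 - 2 at x0 = 1.5: f' is 2-Lipschitz everywhere
r = kantorovich_radius(lambda x: x*x - 2.0, lambda x: 2.0*x, 2.0, 1.5)
# for a quadratic f the bound is sharp: r equals |1.5 - sqrt(2)|
```

The paper evaluates the analogous quantities (defect bound, bound on the inverse of the linearization, Lipschitz constant) rigorously in interval arithmetic so that the certified enclosure is mathematically guaranteed.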
Numerical enclosures for solutions of the Navier-Stokes equation for small Reynolds numbers
"... . We describe a method to compute verified enclosures for solutions of the stationary NavierStokes equation in twodimensional bounded domains. In order to obtain error bounds for numerical approximations, we use the theorem of NewtonKantorovich. Therefore, we compute approximations for the strea ..."
Abstract

Cited by 3 (2 self)
We describe a method to compute verified enclosures for solutions of the stationary Navier-Stokes equation in two-dimensional bounded domains. In order to obtain error bounds for numerical approximations, we use the theorem of Newton-Kantorovich. Therefore, we compute approximations for the stream function of the flow and upper bounds of the defect in H⁻²(Ω); we determine an upper bound for the norm of the inverse of the linearization by a perturbation argument; upper bounds for some embedding constants yield a Lipschitz constant. For small Reynolds numbers we apply our method to the driven-cavity problem. We consider solutions (u, p) ∈ V × L²(Ω) of the Navier-Stokes equation −Δu + Re(u·∇)u + ∇p = f, where V = {v ∈ H¹₀(Ω; R²) | ∇·v = 0}, Re denotes the Reynolds number and Ω ⊂ R² is a bounded Lipschitz domain. We explain a method to determine error bounds ‖∇(u − ũ)‖_{L²(Ω)} ≤ r for some ...
Krylov Subspace Acceleration Method for Nonlinear Multigrid Schemes
 ETNA, Electron. Trans. Numer. Anal
, 1997
"... : In this paper we present a Krylov acceleration technique for nonlinear PDEs. As a `preconditioner ' we use nonlinear multigrid schemes, like FAS [1]. The benefits of the combination of nonlinear multigrid and the new proposed accelerator is shown for difficult nonlinear elliptic scalar probl ..."
Abstract

Cited by 2 (0 self)
In this paper we present a Krylov acceleration technique for nonlinear PDEs. As a 'preconditioner' we use nonlinear multigrid schemes, like FAS [1]. The benefits of the combination of nonlinear multigrid and the newly proposed accelerator are shown for difficult nonlinear elliptic scalar problems, like the Bratu problem, and for systems of nonlinear equations, like the Navier-Stokes equations. 1 Introduction It is well-known that multigrid solution methods are the optimal O(N) solvers, when all components in a method are chosen properly. For difficult problems, like for certain systems of nonlinear equations, however, it is far from trivial to choose these optimal components. The influence on the multigrid convergence of combinations of complicated phenomena, like convection-dominance, anisotropies, nonlinearities or non-M-matrix properties (Bratu problem) is often hard to predict. Problems might then occur with the choice of the best underrelaxation parameter in the smoother, with th...