Results 1–10 of 15
Analysis of Inexact Trust-Region Interior-Point SQP Algorithms
, 1995
Abstract

Cited by 11 (7 self)
In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applications, in particular in optimal control problems with bounds on the control. The nonlinear constraints often come from the discretization of partial differential equations. In such cases the calculation of derivative information and the solution of linearized equations is expensive. Often, the solution of linear systems and derivatives are computed inexactly, yielding nonzero residuals. This paper analyzes the effect of the inexactness on the convergence of TRIP SQP and gives practical rules to control the size of the residuals of these inexact calculations. It is shown that if the size of the residuals is of the order of both the size of the constraints and the trust-region radius, t...
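The residual-control rule alluded to here can be sketched as follows. This is a hedged illustration, not the paper's actual algorithm: the constant kappa, the toy Richardson inner solver, and all function names are assumptions chosen to show the shape of the idea — the inexact linear solve is stopped once its residual is of the order of both the constraint size and the trust-region radius.

```python
def residual_tolerance(constraint_norm, tr_radius, kappa=0.1):
    """Allowed residual for the inexact linear solve: of the order of
    both the constraint size and the trust-region radius (illustrative)."""
    return kappa * min(constraint_norm, tr_radius)

def solve_inexactly(apply_A, b, tol, x0=None, max_iter=100):
    """Toy Richardson iteration on A x = b: stop once the residual
    infinity-norm drops below tol, returning a nonzero residual r."""
    x = [0.0] * len(b) if x0 is None else list(x0)
    r = list(b)
    for _ in range(max_iter):
        r = [bi - ai for bi, ai in zip(b, apply_A(x))]
        if max(abs(ri) for ri in r) <= tol:
            break
        x = [xi + 0.5 * ri for xi, ri in zip(x, r)]
    return x, r
```

The point of the rule is that as the iterates approach feasibility and the trust region shrinks, the inner solves are forced to become progressively more accurate, which is what permits the convergence theory to tolerate the nonzero residuals.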
Consistent Initial Condition Calculation For Differential-Algebraic Systems
, 1995
Abstract

Cited by 9 (1 self)
In this paper we describe a new algorithm for the calculation of consistent initial conditions for a class of systems of differential-algebraic equations which includes semi-explicit index-one systems. We consider initial condition problems of two types: one where the differential variables are specified, and one where the derivative vector is specified. The algorithm requires a minimum of additional information from the user. We outline the implementation in a general-purpose solver DASPK for differential-algebraic equations, and present some numerical experiments which illustrate its effectiveness.
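For a semi-explicit index-one system y' = f(y, z), 0 = g(y, z), the first type of initialization problem reduces to solving the algebraic constraint for z given the specified differential variables. A minimal sketch of that idea, assuming a scalar constraint and a Newton iteration — the functions f, g, and the starting guess are illustrative, not DASPK's interface:

```python
def consistent_z(g, dg_dz, y0, z_guess, tol=1e-12, max_iter=50):
    """Newton iteration on the algebraic constraint g(y0, z) = 0,
    holding the specified differential variable y0 fixed."""
    z = z_guess
    for _ in range(max_iter):
        r = g(y0, z)
        if abs(r) <= tol:
            break
        z -= r / dg_dz(y0, z)  # Newton step on the algebraic variable only
    return z

# Example system: y' = -y + z, 0 = z**2 - y (positive root chosen by the guess).
y0 = 4.0
z0 = consistent_z(lambda y, z: z * z - y, lambda y, z: 2 * z, y0, z_guess=1.0)
dy0 = -y0 + z0  # consistent initial slope y'(0) = f(y0, z0)
```

A consistent pair (y0, z0) is essential because an index-one DAE solver started from inconsistent values typically fails in its first corrector iteration.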
Newton-Krylov-Multigrid Solvers for Large-Scale, Highly Heterogeneous, Variably Saturated Flow Problems
, 2000
On the Convergence Theory of Trust-Region-Based Algorithms for Equality-Constrained Optimization
, 1995
Abstract

Cited by 8 (0 self)
In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applications, in particular in optimal control problems with bounds on the control. The nonlinear constraints often come from the discretization of partial differential equations. In such cases the calculation of derivative information and the solution of linearized equations is expensive. Often, the solution of linear systems and derivatives are computed inexactly, yielding nonzero residuals. This paper ...
On Acceleration Methods for Coupled Nonlinear Elliptic Systems
, 2002
Abstract

Cited by 7 (0 self)
We compare both numerically and theoretically three techniques for accelerating the convergence of a nonlinear fixed-point iteration arising from a system of coupled partial differential equations: Chebyshev acceleration, a second-order stationary method, and a nonlinear version of the Generalized Minimal Residual Algorithm (GMRES) which we call NLGMR. All three approaches are implemented in 'Jacobian-free' mode, i.e., only a subroutine which returns T(u) as a function of u is required. We present a set of numerical comparisons for the drift-diffusion semiconductor model. For the mapping T which corresponds to the nonlinear block Gauss-Seidel algorithm for the solution of this nonlinear elliptic system, NLGMR is found to be superior to the second-order stationary method and the Chebyshev acceleration. We analyze the local convergence of the nonlinear iterations in terms of the spectrum σ[T_u(u*)] of the derivative T_u at the solution u*. The convergence of the original iteration is governed by the spectral radius ρ[T_u(u*)]. In contrast, the convergence of the two second-order accelerations is related to the convex hull of σ[T_u(u*)], while the convergence of the GMRES-based approach is related to the local clustering in σ[I − T_u(u*)].
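The contrast between the plain iteration and a second-order stationary acceleration can be sketched on a toy linear contraction. This is an illustrative assumption, not the paper's NLGMR: the map T, the bounds [m, M] on the spectrum of I − T', and the heavy-ball-style parameter choice are all made up for the demonstration, and only a subroutine returning T(u) is required ("Jacobian-free" mode).

```python
import math

def plain_iterate(T, x0, x_star, tol=1e-8, max_iter=1000):
    """Iterations of x <- T(x) needed to reach tol (against known x_star)."""
    x = list(x0)
    for k in range(1, max_iter + 1):
        x = T(x)
        if max(abs(a - b) for a, b in zip(x, x_star)) <= tol:
            return k
    return max_iter

def second_order_iterate(T, x0, alpha, beta, x_star, tol=1e-8, max_iter=1000):
    """Second-order stationary method:
    x_{k+1} = x_k + alpha*(T(x_k) - x_k) + beta*(x_k - x_{k-1})."""
    x_prev, x = list(x0), list(x0)
    for k in range(1, max_iter + 1):
        t = T(x)
        x_new = [xi + alpha * (ti - xi) + beta * (xi - pi)
                 for xi, ti, pi in zip(x, t, x_prev)]
        x_prev, x = x, x_new
        if max(abs(a - b) for a, b in zip(x, x_star)) <= tol:
            return k
    return max_iter

# Toy contraction T(x) = Ax + c with A = diag(0.5, 0.9); fixed point (1, 1).
T = lambda x: [0.5 * x[0] + 0.5, 0.9 * x[1] + 0.1]
m, M = 0.1, 0.5  # assumed bounds on the spectrum of I - T'
alpha = 4.0 / (math.sqrt(m) + math.sqrt(M)) ** 2
beta = ((math.sqrt(M) - math.sqrt(m)) / (math.sqrt(M) + math.sqrt(m))) ** 2
```

On this example the accelerated iteration reaches the tolerance in far fewer steps than the plain one, mirroring the dependence on the convex hull of the spectrum rather than on the spectral radius alone.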
Trust Region SQP Methods With Inexact Linear System Solves For Large-Scale Optimization
, 2006
Adaptive Stable Finite Element Methods for the Compressible Navier-Stokes Equations
, 1995
Abstract

Cited by 4 (2 self)
Many problems involving fluid flow can now be simulated numerically, providing a useful predictive tool for a wide range of engineering applications. Of particular interest in this thesis are computational methods for solving the problem of compressible fluid flow around aerodynamic configurations. A finite element method is presented for solving the compressible Navier-Stokes equations in two dimensions on unstructured meshes. The method is stabilized by the addition of a least-squares operator (an inexpensive simplification of the Galerkin least-squares method), leading to solutions free of spurious oscillations. Convergence to steady state is reached via a backward Euler time-stepping scheme, and the use of local time-steps allows convergence to be accelerated. The choice of both the nonlinear solver, which is employed to solve the algebraic system arising at each time-step, and the iterative method used within this solver, is fully discussed, along with an inexpensive technique for...
Globally convergent techniques in nonlinear Newton-Krylov algorithms
, 1989
Abstract

Cited by 3 (0 self)
This paper presents some convergence theory for nonlinear Krylov subspace methods. The basic idea of these methods, which have been described by the authors in an earlier paper, is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimension. The main focus of this paper is to analyze these methods when they are combined with global strategies such as line-search techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
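The combination of an inexact Newton direction with a backtracking line search can be sketched in a few lines. This is a hedged, scalar illustration under assumptions of my own (the inexactness is simulated by damping the exact Newton step so its linear residual stays below a forcing term eta; the sufficient-decrease constant 1e-4 is a conventional choice), not the subspace-projection methods analyzed in the paper:

```python
def inexact_newton_backtracking(F, J, x, eta=0.5, tol=1e-10, max_iter=50):
    """Inexact Newton with backtracking on ||F||.
    The direction d satisfies |J(x)*d + F(x)| <= eta*|F(x)| (simulated by
    damping the exact step); the step length s is halved until the
    sufficient-decrease test |F(x + s*d)| <= (1 - 1e-4*s)*|F(x)| holds."""
    for _ in range(max_iter):
        f = F(x)
        if abs(f) <= tol:
            break
        d = -(1.0 - 0.5 * eta) * f / J(x)  # linear residual = 0.5*eta*|f|
        s = 1.0
        while abs(F(x + s * d)) > (1 - 1e-4 * s) * abs(f):
            s *= 0.5  # backtrack: shorten the step until decrease is achieved
        x += s * d
    return x
```

Because the inexact direction still reduces the linearized residual, it is a descent direction for the merit function 0.5*F(x)**2, so the backtracking loop always terminates for smooth F.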
Some convergence results for the Newton-GMRES algorithm
, 1993
Abstract

Cited by 2 (2 self)
In this paper, we consider both local and global convergence of the Newton algorithm to solve nonlinear problems when GMRES is used to invert the Jacobian at each Newton iteration. Under weak assumptions, we give a sufficient condition for an inexact solution of GMRES to be a descent direction in order to apply a backtracking technique. Moreover, we extend this result to a finite difference scheme, considering also the use of preconditioners. Then we show the impact of the condition number of the Jacobian on the local convergence of the Newton-GMRES algorithm. Keywords: convergence, descent direction, finite difference, GMRES, Newton, preconditioning.
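One well-known sufficient condition of this flavor — if the GMRES iterate d leaves a linear residual strictly smaller than ||F(u)||, then d is a descent direction for the merit function 0.5*||F(u)||^2 — can be checked matrix-free with a finite-difference Jacobian-vector product. A minimal sketch, assuming a forward-difference step eps and my own function names (this is not the paper's precise condition, which also handles preconditioning):

```python
import math

def jac_vec_fd(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product J(u) v by forward differences."""
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(Fp, Fu)]

def is_descent(F, u, d):
    """If ||F(u) + J(u) d|| < ||F(u)|| then grad(0.5*||F||^2)^T d
    = F^T J d <= ||F||*||F + J d|| - ||F||^2 < 0, i.e. d is descent."""
    Fu = F(u)
    Jd = jac_vec_fd(F, u, d)
    res = math.sqrt(sum((f + j) ** 2 for f, j in zip(Fu, Jd)))
    return res < math.sqrt(sum(f * f for f in Fu))
```

In a Newton-GMRES code this test costs one extra function evaluation, which is why it pairs naturally with backtracking globalization.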
A Parallel Inexact Newton Method Using A Krylov Multisplitting Algorithm
, 1994
Abstract

Cited by 1 (0 self)
We present a parallel variant of the inexact Newton algorithm that uses the Krylov multisplitting algorithm (KMS) to compute the approximate Newton direction. The algorithm can be used for solving unconstrained optimization problems or systems of nonlinear equations. The KMS algorithm is a more efficient parallel implementation of Krylov subspace methods (GMRES, Arnoldi, etc.) with multisplitting preconditioners. The work of the KMS algorithm is divided into the multisplitting tasks and a direction forming task. There is a great deal of parallelism within each task and the number of synchronization points between the tasks is greatly reduced. We study the local and global convergence properties of the algorithm and present results of numerical examples on a sequential computer.
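The multisplitting idea — several splittings A = M_l − N_l iterated independently and combined with diagonal weights E_l summing to the identity — can be sketched sequentially on a tiny system. This is an illustrative toy, not the KMS algorithm: the matrix A = [[4, 1], [1, 3]], the choice of a Jacobi and a Gauss-Seidel splitting, and the equal weights are all assumptions; in the parallel setting each splitting task would run concurrently.

```python
# Splitting 1: M = diag(A) (Jacobi).
def jacobi_step(x, b):
    return [(b[0] - 1.0 * x[1]) / 4.0,
            (b[1] - 1.0 * x[0]) / 3.0]

# Splitting 2: M = lower triangle of A (Gauss-Seidel).
def gauss_seidel_step(x, b):
    x0 = (b[0] - 1.0 * x[1]) / 4.0
    x1 = (b[1] - 1.0 * x0) / 3.0
    return [x0, x1]

def multisplitting_solve(b, iters=60):
    """x_{k+1} = sum_l E_l M_l^{-1} (N_l x_k + b), with E_l = 0.5*I."""
    x = [0.0, 0.0]
    for _ in range(iters):
        j = jacobi_step(x, b)       # in parallel: task 1
        g = gauss_seidel_step(x, b)  # in parallel: task 2
        x = [0.5 * (a + c) for a, c in zip(j, g)]  # weighted combination
    return x
```

The inexact Newton layer described in the abstract would call a solver of this shape on each Jacobian system, accepting the approximate direction it returns.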