Results 1–6 of 6
Analysis of Inexact Trust-Region SQP Algorithms
 Rice University, Department of
, 2000
Abstract

Cited by 24 (2 self)
In this paper we extend the design of a class of composite-step trust-region SQP methods and their global convergence analysis to allow inexact problem information. The inexact problem information can result from iterative linear system solves within the trust-region SQP method or from approximations of first-order derivatives. Accuracy requirements in our trust-region SQP methods are adjusted based on feasibility and optimality of the iterates. Our accuracy requirements are stated in general terms, but we show how they can be enforced using information that is already available in matrix-free implementations of SQP methods. In the absence of inexactness, our global convergence theory is equal to that of Dennis, El-Alem, and Maciel (SIAM J. Optim., 7 (1997), pp. 177–207). If all iterates are feasible, i.e., if all iterates satisfy the equality constraints, then our results are related to the known convergence analyses for trust-region methods with inexact gradient information fo...
Analysis of Inexact Trust-Region Interior-Point SQP Algorithms
, 1995
Abstract

Cited by 11 (7 self)
In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applications, in particular in optimal control problems with bounds on the control. The nonlinear constraints often come from the discretization of partial differential equations. In such cases the calculation of derivative information and the solution of linearized equations is expensive. Often, the solution of linear systems and derivatives are computed inexactly, yielding nonzero residuals. This paper analyzes the effect of the inexactness on the convergence of TRIP SQP and gives practical rules to control the size of the residuals of these inexact calculations. It is shown that if the size of the residuals is of the order of both the size of the constraints and the trust-region radius, t...
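The residual-control rule mentioned at the end of the abstract, that inner residuals be of the order of both the size of the constraints and the trust-region radius, can be written as a simple stopping tolerance. A minimal sketch in Python; the function name and the forcing constant `eta` are illustrative, not taken from the paper:

```python
import numpy as np

def residual_tolerance(c_val, delta, eta=1e-2):
    """Stopping tolerance for inner iterative (e.g. Krylov) solves:
    keep the residual norm of the order of both the constraint norm
    ||c(x_k)|| and the trust-region radius delta.

    c_val : constraint values c(x_k) at the current iterate
    delta : current trust-region radius
    eta   : illustrative forcing constant
    """
    return eta * min(np.linalg.norm(c_val), delta)
```

An inner CG or GMRES solve for the linearized step would then be stopped once its residual norm falls below this tolerance, so the inexactness shrinks automatically as the iterates approach feasibility.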
Convergence to a Second-Order Point of a Trust-Region Algorithm with a Nonmonotonic Penalty Parameter for Constrained Optimization
 Rice University
, 1996
Abstract

Cited by 5 (0 self)
In a recent paper, the author (Ref. 1) proposed a trust-region algorithm for solving the problem of minimizing a nonlinear function subject to a set of equality constraints. The main feature of the algorithm is that the penalty parameter in the merit function can be decreased whenever it is warranted. He studied the behavior of the penalty parameter and proved several global and local convergence results. One of these results is that there exists a subsequence of the iterates generated by the algorithm that converges to a point satisfying the first-order necessary conditions. In the current paper, we show that, for this algorithm, there exists a subsequence of iterates that converges to a point satisfying both the first-order and the second-order necessary conditions. Key Words: constrained optimization, equality constraints, penalty parameter, nonmonotonic penalty parameter, convergence, trust-region methods, first-order point, second-order point, necessary conditions.
On Global Convergence of A Trust Region and Affine Scaling Method for Nonlinearly Constrained Minimization
 A: Math. Gen
, 1994
Abstract

Cited by 2 (0 self)
A nonlinearly constrained optimization problem can be solved by the exact penalty approach involving the nondifferentiable functions ∑_i |c_i(x)| and ∑_i max(0, c_i(x)). In [11], a trust region affine scaling approach based on a 2-norm subproblem is proposed for solving a nonlinear l1 problem. The (quadratic) approximation and the trust region subproblem are defined using affine scaling techniques. Explicit sufficient decrease conditions are proposed to obtain a limit point satisfying complementarity, dual feasibility, and second-order optimality. In this paper, we present the global convergence properties of this new approach. Key Words: nonlinearly constrained minimization, trust region, sufficient decrease conditions, affine scaling, exact penalty, nonlinear l1 problem, global convergence. Research partially supported by the Applied Mathematical Sciences Research Program (KC0402) of the Office of Energy Research of the U.S. Department of Energy under grant DE-FG02-90ER25...
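The two nondifferentiable exact penalty terms above are straightforward to evaluate. A minimal sketch in Python, assuming `f` and `c` are callables returning the objective value and the constraint vector (all names and the choice of penalty weight are illustrative, not from the paper):

```python
import numpy as np

def l1_penalty(f, c, x, mu):
    """Exact l1 penalty for equality constraints c(x) = 0:
    f(x) + mu * sum_i |c_i(x)|."""
    return f(x) + mu * np.sum(np.abs(c(x)))

def max_penalty(f, c, x, mu):
    """Exact penalty for inequality constraints c(x) <= 0:
    f(x) + mu * sum_i max(0, c_i(x))."""
    return f(x) + mu * np.sum(np.maximum(0.0, c(x)))
```

Both terms are nondifferentiable wherever some c_i(x) = 0, which is exactly why the paper's trust region and affine scaling machinery, rather than plain gradient-based methods, is used to minimize them.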
A Trust Region and Affine Scaling Method for Nonlinearly Constrained Minimization
, 1994
Abstract
Abstract. A nonlinearly constrained minimization problem can be solved by the exact penalty approach involving nondifferentiable functions
A Worst-Case Example Using Linesearch Methods for Numerical Optimization with Inexact Gradient Evaluations.
, 1991
Abstract
Two approaches often used to improve the robustness of numerical optimization algorithms are linesearch and trust region methods. Trust region methods have previously been shown to be extremely forgiving of high levels of noise and inaccuracy in gradient evaluations. We present a worst-case example demonstrating that linesearch methods can be very fragile with respect to such noise.

1 Introduction

Given the unconstrained minimization problem

minimize f(x), f : R^n → R, (1)

we consider iterative numerical algorithms such as Newton's method or quasi-Newton methods. These algorithms compute a local quadratic model about a given iterate x_k and generate new iterates x_{k+1} = x_k + s_k using this model. For instance, if the quadratic model is

ψ_k(x_k + s) = f(x_k) + g_k^T s + (1/2) s^T B_k s, (2)

(where g_k approximates ∇f(x_k), the gradient of f at x_k, and B_k is a symmetric positive definite matrix approximating ∇²f(x_k), the Hessian of f at x_k), then the simplest quasi-Ne...
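The quadratic model (2) and the step that minimizes it can be sketched directly. A minimal Python illustration, assuming `g_k` and `B_k` are given as NumPy arrays (function names are ours, not the paper's):

```python
import numpy as np

def quadratic_model(f_k, g_k, B_k, s):
    """Local quadratic model (2):
    psi_k(x_k + s) = f(x_k) + g_k^T s + (1/2) s^T B_k s."""
    return f_k + g_k @ s + 0.5 * s @ (B_k @ s)

def quasi_newton_step(g_k, B_k):
    """Minimizer of the model for symmetric positive definite B_k:
    solve B_k s = -g_k."""
    return np.linalg.solve(B_k, -g_k)
```

When g_k is an inexact gradient, this step direction inherits the noise; the linesearch performed along it is what the paper's worst-case example shows to be fragile.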