Results 1–10 of 21
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
, 1997
Cited by 327 (18 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse.
Sequential Quadratic Programming
, 1995
Cited by 114 (2 self)
… this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can …
A New Trust Region Algorithm For Equality Constrained Optimization
, 1995
Cited by 51 (7 self)
We present a new trust region algorithm for solving nonlinear equality constrained optimization problems. At each iterate a change of variables is performed to improve the ability of the algorithm to follow the constraint level sets. The algorithm employs L2 penalty functions for obtaining global convergence. Under certain assumptions we prove that this algorithm globally converges to a point satisfying the second-order necessary optimality conditions; the local convergence rate is quadratic. Results of preliminary numerical experiments are presented. 1. Introduction. We consider the equality constrained optimization problem

    minimize f(x)  subject to  c(x) = 0        (1.1)

where x ∈ R^n, f : R^n → R, and c : R^n → R^m are smooth nonlinear functions. Problem (1.1) is often solved by successive quadratic programming (SQP) methods. At a current point x_k ∈ R^n, SQP methods determine a search direction d_k by solving a quadratic programming problem

    minimize ∇f(x_k)^T d + (1/2) …
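The QP subproblem quoted above, with equality constraints and an exact Hessian, amounts to solving one Newton-KKT linear system per iteration. The following is a generic minimal sketch of that step, not the paper's trust-region algorithm; the problem data (a convex quadratic with one linear constraint) is made up for the example.

```python
import numpy as np

def sqp_step(x, grad_f, hess_f, c, jac_c):
    """One Newton-SQP step for: minimize f(x) subject to c(x) = 0.
    Solves the KKT system of the QP subproblem
        minimize  grad_f(x)^T d + 0.5 d^T H d   subject to  A d + c(x) = 0."""
    g, H = grad_f(x), hess_f(x)
    cv, A = c(x), jac_c(x)
    n, m = x.size, cv.size
    # KKT matrix: [[H, A^T], [A, 0]]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -cv]))
    d, lam = sol[:n], sol[n:]
    return x + d, lam

# Made-up illustration: minimize x1^2 + x2^2 subject to x1 + x2 = 1
grad_f = lambda x: 2.0 * x
hess_f = lambda x: 2.0 * np.eye(2)
c = lambda x: np.array([x[0] + x[1] - 1.0])
jac_c = lambda x: np.array([[1.0, 1.0]])

x, lam = np.array([0.0, 0.0]), None
for _ in range(3):
    x, lam = sqp_step(x, grad_f, hess_f, c, jac_c)
# Quadratic objective + linear constraint: the first step already lands
# on the minimizer (0.5, 0.5) with multiplier -1.
```

Because the objective is quadratic and the constraint linear, the QP subproblem is the original problem and the iteration converges in a single step.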
An SQP Method for General Nonlinear Programs Using Only Equality Constrained Subproblems
 MATHEMATICAL PROGRAMMING
, 1993
Cited by 46 (2 self)
In this paper we describe a new version of a sequential equality constrained quadratic programming method for general nonlinear programs with mixed equality and inequality constraints. Compared with an older version [34], it is much simpler to implement and allows any kind of change of the working set in every step. Our method relies on a strong regularity condition. As far as it is applicable, the new approach is superior to conventional SQP methods, as demonstrated by extensive numerical tests.
A reduced Hessian method for large-scale constrained optimization
 SIAM JOURNAL ON OPTIMIZATION
, 1995
Formulation and Analysis of a Sequential Quadratic Programming Method for the Optimal Dirichlet Boundary Control of Navier-Stokes Flow
, 1997
Cited by 15 (1 self)
The optimal boundary control of Navier-Stokes flow is formulated as a constrained optimization problem and a sequential quadratic programming (SQP) approach is studied for its solution. Since SQP methods treat states and controls as independent variables and do not insist on satisfying the constraints during the iterations, care must be taken to avoid a possible incompatibility of the Dirichlet boundary conditions and the incompressibility constraint. In this paper, compatibility is enforced by choosing appropriate function spaces. The resulting optimization problem is analyzed. Differentiability of the constraints and surjectivity of linearized constraints are verified and adjoints are computed. An SQP method is applied to the optimization problem and compared with other approaches.
Inexact SQP methods for equality constrained optimization
 SIAM J. Opt
Cited by 9 (5 self)
We present an algorithm for large-scale equality constrained optimization. The method is based on a characterization of inexact sequential quadratic programming (SQP) steps that can ensure global convergence. Inexact SQP methods are needed for large-scale applications for which the iteration matrix cannot be explicitly formed or factored and the arising linear systems must be solved using iterative linear algebra techniques. We address how to determine when a given inexact step makes sufficient progress toward a solution of the nonlinear program, as measured by an exact penalty function. The method is globalized by a line search. An analysis of the global convergence properties of the algorithm and numerical results are presented. Key words: large-scale optimization, constrained optimization, sequential quadratic programming, inexact linear system solvers, Krylov subspace methods. AMS subject classifications: 49M37, 65K05, 90C06, 90C30, 90C55. 1. Introduction. In …
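The exact-penalty progress measure mentioned in the abstract can be illustrated with an l1 merit function and a crude Armijo-style backtracking line search. This is a simplified sketch, not the paper's inexact-step acceptance test; the toy problem, the tolerance `eta`, and the sufficient-decrease rule are all assumptions made for the example.

```python
import numpy as np

def l1_merit(f, c, x, pi):
    """Exact l1 penalty (merit) function: phi(x; pi) = f(x) + pi * ||c(x)||_1."""
    return f(x) + pi * np.sum(np.abs(c(x)))

def backtrack(f, c, x, d, pi, eta=1e-4):
    """Simplified backtracking on the merit function. (Illustrative only:
    the paper measures progress against a model reduction rather than
    this crude sufficient-decrease rule.)"""
    phi0 = l1_merit(f, c, x, pi)
    alpha = 1.0
    while l1_merit(f, c, x + alpha * d, pi) > phi0 - eta * alpha and alpha > 1e-12:
        alpha *= 0.5
    return alpha

# Toy example: f(x) = x^2, constraint x = 1, full step d = 1 from x = 0
f = lambda x: float(x[0] ** 2)
c = lambda x: np.array([x[0] - 1.0])
alpha = backtrack(f, c, np.array([0.0]), np.array([1.0]), pi=2.0)
# The full step reduces phi from 2.0 to 1.0, so alpha = 1.0 is accepted.
```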
Methods for nonlinear constraints in optimization calculations
 THE STATE OF THE ART IN NUMERICAL ANALYSIS
, 1996
Flexible Penalty Functions for Nonlinear Constrained Optimization
, 2007
Cited by 8 (1 self)
We propose a globalization strategy for nonlinear constrained optimization. The method employs a “flexible” penalty function to promote convergence, where during each iteration the penalty parameter can be chosen as any number within a prescribed interval, rather than a fixed value. This increased flexibility in the step acceptance procedure is designed to promote long productive steps for fast convergence. An analysis of the global convergence properties of the approach in the context of a line search Sequential Quadratic Programming method and numerical results for the KNITRO software package are presented.
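Because a penalty function phi(x; pi) = f(x) + pi * v(x) (with v the constraint violation) is affine in pi, accepting a step whenever it decreases phi for *some* pi in a prescribed interval reduces to checking the interval's two endpoints. A minimal sketch of that acceptance test, under that assumption; the paper's rules for updating the interval are omitted.

```python
def flexible_accept(f_old, v_old, f_new, v_new, pi_low, pi_high):
    """Accept a step if phi(.; pi) = f + pi * v decreases for SOME pi in
    [pi_low, pi_high]. Since phi is affine in pi, decrease anywhere in the
    interval implies decrease at one of the endpoints."""
    dec_low = f_new + pi_low * v_new < f_old + pi_low * v_old
    dec_high = f_new + pi_high * v_new < f_old + pi_high * v_old
    return dec_low or dec_high

# A step that sharply reduces f while slightly increasing the violation:
# accepted by the flexible test (small pi works) ...
ok_flexible = flexible_accept(1.0, 1.0, 0.2, 1.1, pi_low=0.1, pi_high=10.0)
# ... but rejected by a fixed large penalty (degenerate interval).
ok_fixed = flexible_accept(1.0, 1.0, 0.2, 1.1, pi_low=10.0, pi_high=10.0)
```

This is exactly the behavior the abstract describes: the interval admits long, productive steps that a single conservative penalty parameter would reject.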
On the realization of the Wolfe conditions in reduced quasi-Newton methods for equality constrained optimization
 SIAM Journal on Optimization
, 1997
Cited by 5 (0 self)
This paper describes a reduced quasi-Newton method for solving equality constrained optimization problems. A major difficulty encountered by this type of algorithm is the design of a consistent technique for maintaining the positive definiteness of the matrices approximating the reduced Hessian of the Lagrangian. A new approach is proposed in this paper. The idea is to search for the next iterate along a piecewise linear path. The path is designed so that some generalized Wolfe conditions can be satisfied. These conditions allow the algorithm to sustain the positive definiteness of the matrices from iteration to iteration by a mechanism that has turned out to be efficient in unconstrained optimization.
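In the unconstrained setting, the mechanism alluded to is the BFGS update, which preserves positive definiteness exactly when the curvature condition s^T y > 0 holds, and the Wolfe conditions guarantee that inequality. A minimal sketch of that property, assuming the standard BFGS formula; the paper's reduced-Hessian and piecewise-path machinery is not reproduced here.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of a Hessian approximation B.
    Preserves positive definiteness whenever the curvature condition
    s^T y > 0 holds -- the inequality the (generalized) Wolfe conditions
    are designed to secure. If it fails, the update is skipped."""
    sy = float(s @ y)
    if sy <= 1e-12:
        return B  # curvature condition fails: keep the old matrix
    Bs = B @ s
    return B - np.outer(Bs, Bs) / float(s @ Bs) + np.outer(y, y) / sy

# Illustration: with s^T y = 2 > 0, the updated matrix stays
# symmetric positive definite.
B = bfgs_update(np.eye(2), np.array([1.0, 0.0]), np.array([2.0, 0.0]))
```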