Results 1 – 6 of 6
REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS
, 2011
"... We present the formulation and analysis of a new sequential quadratic programming (SQP) method for general nonlinearly constrained optimization. The method pairs a primaldual generalized augmented Lagrangian merit function with a flexible line search to obtain a sequence of improving estimates of t ..."
Abstract

Cited by 7 (2 self)
We present the formulation and analysis of a new sequential quadratic programming (SQP) method for general nonlinearly constrained optimization. The method pairs a primal-dual generalized augmented Lagrangian merit function with a flexible line search to obtain a sequence of improving estimates of the solution. This function is a primal-dual variant of the augmented Lagrangian proposed by Hestenes and Powell in the early 1970s. A crucial feature of the method is that the QP subproblems are convex, but formed from the exact second derivatives of the original problem. This is in contrast to methods that use a less accurate quasi-Newton approximation. Additional benefits of this approach include the following: (i) each QP subproblem is regularized; (ii) the QP subproblem always has a known feasible point; and (iii) a projected gradient method may be used to identify the QP active set when far from the solution.
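For readers unfamiliar with the merit function family referenced above, the following LaTeX sketch records the classical Hestenes-Powell augmented Lagrangian for equality constraints and one common primal-dual generalization of it; the symbols (penalty parameter \mu, multiplier estimate y_e, weight \nu) are generic, and the exact function analyzed in the paper may differ in detail.

  % Classical augmented Lagrangian of Hestenes and Powell for
  %   minimize f(x)  subject to  c(x) = 0
  L_A(x; y_e, \mu) = f(x) - c(x)^T y_e + \frac{1}{2\mu}\,\|c(x)\|_2^2

  % One common primal-dual generalization, a function of (x, y) jointly,
  % whose minimizers (for suitable \mu) recover a primal-dual KKT pair:
  M_\nu(x, y; y_e, \mu) = f(x) - c(x)^T y_e + \frac{1}{2\mu}\,\|c(x)\|_2^2
      + \frac{\nu}{2\mu}\,\|c(x) + \mu\,(y - y_e)\|_2^2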
On the Use of Piecewise Linear Models in Nonlinear Programming
, 2010
"... This paper presents an activeset algorithm for largescale optimization that occupies the middle ground between sequential quadratic programming (SQP) and sequential linearquadratic programming (SLQP) methods. It consists of two phases. The algorithm first minimizes a piecewise linear approximati ..."
Abstract

Cited by 1 (0 self)
This paper presents an active-set algorithm for large-scale optimization that occupies the middle ground between sequential quadratic programming (SQP) and sequential linear-quadratic programming (SLQP) methods. It consists of two phases. The algorithm first minimizes a piecewise linear approximation of the Lagrangian, subject to a linearization of the constraints, to determine a working set. Then, an equality constrained subproblem based on this working set and using second derivative information is solved in order to promote fast convergence. A study of the local and global convergence properties of the algorithm highlights the importance of the placement of the interpolation points that determine the piecewise linear model of the Lagrangian.
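As a concrete illustration of the first (working-set) phase described above, the sketch below builds a max-of-affine (piecewise linear) model of the Lagrangian from a handful of interpolation points and minimizes it subject to linearized equality constraints and an infinity-norm trust region via a standard LP epigraph reformulation. The function names, the sample-point placement, and the toy data are illustrative assumptions, not the paper's algorithm; a working set would then be extracted from the constraints active at the LP solution.

import numpy as np
from scipy.optimize import linprog

def lp_phase(lagr, grad_lagr, c, J, x, sample_pts, radius):
    """Minimize a piecewise linear (max-of-affine) model of the Lagrangian,
    built from interpolation points s_i, subject to the linearized constraints
    J d = -c and the trust region |d_j| <= radius.

    Decision variables are z = (d, t); t majorizes every affine piece:
        t >= L(x + s_i) + grad_L(x + s_i)^T (d - s_i)  for each sample s_i.
    """
    n, m = x.size, c.size
    pieces = [(lagr(x + s), grad_lagr(x + s), s) for s in sample_pts]

    obj = np.concatenate([np.zeros(n), [1.0]])                # minimize t
    A_ub = np.array([np.concatenate([g, [-1.0]]) for (_, g, _) in pieces])
    b_ub = np.array([g @ s - L for (L, g, s) in pieces])      # g^T d - t <= g^T s - L
    A_eq = np.hstack([J, np.zeros((m, 1))])                   # J d = -c
    b_eq = -c
    bounds = [(-radius, radius)] * n + [(None, None)]

    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:n], res

# Toy use: Lagrangian taken as f(x) = x1^2 + x2^2 with constraint x1 + x2 = 1.
f = lambda z: z[0]**2 + z[1]**2
gradf = lambda z: np.array([2.0 * z[0], 2.0 * z[1]])
J = np.array([[1.0, 1.0]])
x0 = np.zeros(2)
c0 = np.array([x0[0] + x0[1] - 1.0])
samples = [np.zeros(2), np.array([0.3, 0.0]), np.array([0.0, 0.3])]
d, _ = lp_phase(f, gradf, c0, J, x0, samples, radius=1.0)     # d is approximately (0.5, 0.5)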
A SECOND-DERIVATIVE TRUST-REGION SQP METHOD WITH A “TRUST-REGION-FREE” PREDICTOR STEP
, 2009
"... ..."
An Interior-Point Trust-Funnel Algorithm for Nonlinear Optimization
, 2013
"... Abstract We present an interiorpoint trustfunnel algorithm for solving largescale nonlinear optimization problems. The method is based on an approach proposed by Gould and Toint (Math. Prog., 122(1):155196, 2010) that focused on solving equality constrained problems. Our method is similar in tha ..."
Abstract
We present an interior-point trust-funnel algorithm for solving large-scale nonlinear optimization problems. The method is based on an approach proposed by Gould and Toint (Math. Prog., 122(1):155–196, 2010) that focused on solving equality constrained problems. Our method is similar in that it achieves global convergence guarantees by combining a trust-region methodology with a funnel mechanism, but has the additional capability that it solves problems with both equality and inequality constraints. The prominent features of our algorithm are that (i) the subproblems that define each search direction may be solved approximately, (ii) criticality measures for feasibility and optimality aid in determining which subset of computations will be performed during each iteration, (iii) no merit function or filter is used, (iv) inexact sequential quadratic optimization steps may be utilized when advantageous, and (v) it may be implemented matrix-free so that derivative matrices need not be formed or factorized so long as matrix-vector products with them can be performed. Keywords: nonlinear optimization · constrained optimization · large-scale optimization · interior-point
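To give a rough sense of the funnel mechanism mentioned above (following the general idea in Gould and Toint's equality-constrained method rather than this paper's exact rules), the Python sketch below keeps a monotonically shrinking bound on the constraint violation: trial iterates are admissible only while their violation stays inside the funnel, and the bound is tightened after iterations devoted mainly to feasibility. The constants kappa1 and kappa2 are illustrative.

def funnel_accepts(v_trial, v_max):
    """A trial iterate is admissible only if its constraint violation
    lies within the current funnel radius v_max."""
    return v_trial <= v_max

def shrink_funnel(v_max, v_old, v_new, kappa1=0.9, kappa2=0.5):
    """Tighten the funnel after a feasibility ('c-type') iteration: the new
    radius is the larger of a fixed fraction of the old radius and the new
    violation plus a fraction of the decrease just achieved."""
    return max(kappa1 * v_max, v_new + kappa2 * (v_old - v_new))

# Example: violation drops from 4.0 to 1.0 inside a funnel of radius 10.0.
v_max, v_old, v_new = 10.0, 4.0, 1.0
if funnel_accepts(v_new, v_max):
    v_max = shrink_funnel(v_max, v_old, v_new)   # v_max becomes 9.0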
A Sequential Quadratic . . . WITH RAPID INFEASIBILITY DETECTION
, 2012
"... We present a sequential quadratic optimization (SQO) algorithm for nonlinear constrained optimization. The method attains all of the strong global and fast local convergence guarantees of classical SQO methods, but has the important additional feature that fast local convergence is guaranteed when ..."
Abstract
We present a sequential quadratic optimization (SQO) algorithm for nonlinear constrained optimization. The method attains all of the strong global and fast local convergence guarantees of classical SQO methods, but has the important additional feature that fast local convergence is guaranteed when the algorithm is employed to solve infeasible instances. A two-phase strategy, carefully constructed parameter updates, and a line search are employed to promote such convergence. The first phase subproblem determines the highest level of improvement in linearized feasibility that can be attained locally. The second phase subproblem then seeks optimality in such a way that the resulting search direction attains a level of improvement in linearized feasibility that is proportional to that attained in the first phase. The subproblem formulations and parameter updates ensure that near an optimal solution, the algorithm reduces to a classical SQO method for optimization, and near an infeasible stationary point, the algorithm reduces to a (perturbed) SQO method for minimizing constraint violation. Global and local convergence guarantees for the algorithm are proved under common assumptions and numerical results are presented for a large set of test problems.
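One plausible way to formalize the two-phase step computation sketched in this abstract (our notation and simplifications, not necessarily the paper's exact subproblems) is the following, where c_k = c(x_k), J_k is the constraint Jacobian at x_k, g_k = \nabla f(x_k), and H_k, W_k are symmetric model Hessians:

  % Phase 1: best locally attainable improvement in linearized feasibility
  d_k^{1} \in \arg\min_{d}\; \|c_k + J_k d\|_1 + \tfrac{1}{2} d^T H_k d

  % Phase 2: pursue optimality while retaining a fraction \theta \in (0,1]
  % of the phase-1 reduction in linearized infeasibility
  \min_{d}\; g_k^T d + \tfrac{1}{2} d^T W_k d
  \quad\text{s.t.}\quad
  \|c_k\|_1 - \|c_k + J_k d\|_1 \;\ge\; \theta\left(\|c_k\|_1 - \|c_k + J_k d_k^{1}\|_1\right)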
An Inexact Sequential Quadratic Optimization Algorithm for Large-Scale Nonlinear Optimization
STEP COMPUTATIONS, SIAM JOURNAL ON SCIENTIFIC COMPUTING
"... We propose a sequential quadratic optimization method for solving nonlinear constrained optimization problems. The novel feature of the algorithm is that, during each iteration, the primaldual search direction is allowed to be an inexact solution of a given quadratic optimization subproblem. We p ..."
Abstract
We propose a sequential quadratic optimization method for solving nonlinear constrained optimization problems. The novel feature of the algorithm is that, during each iteration, the primal-dual search direction is allowed to be an inexact solution of a given quadratic optimization subproblem. We present a set of generic, loose conditions that the search direction (i.e., inexact subproblem solution) must satisfy so that global convergence of the algorithm for solving the nonlinear problem is guaranteed. The algorithm can be viewed as a globally convergent inexact Newton-based method. The results of numerical experiments are provided to illustrate the reliability and efficiency of the proposed numerical method.
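The sketch below illustrates what an inexact primal-dual search direction can look like in the equality-constrained case: the QP subproblem's KKT system is solved only approximately with a truncated Krylov method, and the residual is then checked against a loose relative tolerance. The model problem, the choice of MINRES, and the single residual test are illustrative assumptions; the paper's actual termination conditions on the subproblem solver are more elaborate.

import numpy as np
from scipy.sparse.linalg import minres

def inexact_sqp_direction(H, J, g, c, max_krylov_iters=5, kappa=0.1):
    """Approximately solve the QP subproblem's (symmetric, indefinite) KKT system
        [ H  J^T ] [ d ]   [ -g ]
        [ J   0  ] [ y ] = [ -c ]
    by truncating MINRES early. The resulting primal-dual direction (d, y) is
    deemed acceptable here if its relative residual is at most kappa."""
    n, m = H.shape[0], J.shape[0]
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -c])
    z, _ = minres(K, rhs, maxiter=max_krylov_iters)
    residual = np.linalg.norm(K @ z - rhs)
    acceptable = residual <= kappa * np.linalg.norm(rhs)
    return z[:n], z[n:], acceptable

# Tiny example with a 2-variable, 1-constraint quadratic model.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
J = np.array([[1.0, 1.0]])
g = np.array([1.0, -2.0])
c = np.array([0.5])
d, y, ok = inexact_sqp_direction(H, J, g, c)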