Results 1–8 of 8
REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS
2011
Abstract

Cited by 8 (2 self)
We present the formulation and analysis of a new sequential quadratic programming (SQP) method for general nonlinearly constrained optimization. The method pairs a primal-dual generalized augmented Lagrangian merit function with a flexible line search to obtain a sequence of improving estimates of the solution. This function is a primal-dual variant of the augmented Lagrangian proposed by Hestenes and Powell in the early 1970s. A crucial feature of the method is that the QP subproblems are convex, but formed from the exact second derivatives of the original problem. This is in contrast to methods that use a less accurate quasi-Newton approximation. Additional benefits of this approach include the following: (i) each QP subproblem is regularized; (ii) the QP subproblem always has a known feasible point; and (iii) a projected gradient method may be used to identify the QP active set when far from the solution.
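The Hestenes–Powell function that this merit function generalizes can be written down in a few lines. The sketch below evaluates the classical (primal-only) augmented Lagrangian on a toy equality-constrained problem; all names and the toy problem are my own illustration, and the primal-dual variant used in the paper adds further terms in the dual variables that are not reproduced here.

```python
import numpy as np

def augmented_lagrangian(f, c, x, lam, mu):
    """Classical Hestenes-Powell augmented Lagrangian for equality
    constraints: L_A(x) = f(x) - lam^T c(x) + (1/(2*mu)) * ||c(x)||^2."""
    cx = c(x)
    return f(x) - lam @ cx + 0.5 / mu * (cx @ cx)

# toy problem: minimize x0^2 + x1^2 subject to x0 + x1 = 1
f = lambda x: x[0] ** 2 + x[1] ** 2
c = lambda x: np.array([x[0] + x[1] - 1.0])
x_star = np.array([0.5, 0.5])   # the constrained minimizer
lam_star = np.array([1.0])      # its optimal multiplier
print(augmented_lagrangian(f, c, x_star, lam_star, mu=0.1))  # 0.5
```

At the solution the constraint term vanishes and the merit function equals the objective value; away from feasibility the quadratic penalty dominates, which is what makes the function usable for line-search globalization.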
A Second Derivative SQP Method: Local Convergence and Practical Issues
SIAM Journal on Optimization
Abstract

Cited by 6 (0 self)
results for a second-derivative SQP method for minimizing the exact ℓ1 merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were intended to improve the efficiency of the algorithm. Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk: a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1 penalty function over a sequence of increasing values of the penalty parameter. Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and, therefore, results in asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
Key words. Nonlinear programming, nonlinear inequality constraints, sequential quadratic programming, ℓ1 penalty function, nonsmooth optimization
AMS subject classifications. 49J52, 49M37, 65F22, 65K05, 90C26, 90C30, 90C55
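For readers unfamiliar with the merit function being minimized here, the exact ℓ1 penalty function is simple to state. The sketch below evaluates it on an equality-constrained toy problem; the names and the toy problem are my own, and the penalty-parameter update strategy analyzed in the paper is not reproduced.

```python
import numpy as np

def l1_penalty(f, c, x, rho):
    """Exact l1 penalty function: phi(x; rho) = f(x) + rho * ||c(x)||_1.
    For rho large enough (above the dual norm of the optimal multipliers),
    unconstrained minimizers of phi solve the constrained problem exactly,
    which is what makes the penalty "exact"."""
    return f(x) + rho * np.abs(c(x)).sum()

# toy problem: minimize x0^2 + x1^2 subject to x0 + x1 = 1
f = lambda x: x[0] ** 2 + x[1] ** 2
c = lambda x: np.array([x[0] + x[1] - 1.0])
x_star = np.array([0.5, 0.5])   # constrained minimizer
x_bad = np.array([0.0, 0.0])    # infeasible point
print(l1_penalty(f, c, x_star, rho=2.0))  # 0.5 (feasible: no penalty)
print(l1_penalty(f, c, x_bad, rho=2.0))   # 2.0 (pure penalty term)
```

The kink of the absolute value at feasibility is also the source of the Maratos effect mentioned in the abstract: a unit SQP step can increase this nonsmooth function even arbitrarily close to a solution, which is why a nonmonotone acceptance rule is needed.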
On the Use of Piecewise Linear Models in Nonlinear Programming
2010
Abstract

Cited by 1 (0 self)
This paper presents an active-set algorithm for large-scale optimization that occupies the middle ground between sequential quadratic programming (SQP) and sequential linear-quadratic programming (SLQP) methods. It consists of two phases. The algorithm first minimizes a piecewise linear approximation of the Lagrangian, subject to a linearization of the constraints, to determine a working set. Then, an equality-constrained subproblem based on this working set and using second-derivative information is solved in order to promote fast convergence. A study of the local and global convergence properties of the algorithm highlights the importance of the placement of the interpolation points that determine the piecewise linear model of the Lagrangian.
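Since the placement of interpolation points is the abstract's key observation, it may help to see what a piecewise linear model built from such points looks like. The one-dimensional sketch below is my own simplification; the paper works with models of the full Lagrangian in many variables.

```python
import numpy as np

def pl_model(phi, points):
    """Build a piecewise linear interpolant of phi through the given
    interpolation points (one-dimensional illustration only). How well
    the model tracks phi between points depends entirely on where the
    points are placed, which is the sensitivity the abstract highlights."""
    xs = np.sort(np.asarray(points, dtype=float))
    ys = np.array([phi(x) for x in xs])
    return lambda x: np.interp(x, xs, ys)

m = pl_model(lambda x: x * x, points=[-1.0, 0.0, 1.0])
print(m(0.5))  # 0.5: linear interpolation between (0, 0) and (1, 1)
```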
A SECOND-DERIVATIVE TRUST-REGION SQP METHOD WITH A “TRUST-REGION-FREE” PREDICTOR STEP
2009
A Sequential Quadratic . . . WITH RAPID INFEASIBILITY DETECTION
2012
Abstract
We present a sequential quadratic optimization (SQO) algorithm for nonlinear constrained optimization. The method attains all of the strong global and fast local convergence guarantees of classical SQO methods, but has the important additional feature that fast local convergence is guaranteed when the algorithm is employed to solve infeasible instances. A twophase strategy, carefully constructed parameter updates, and a line search are employed to promote such convergence. The first phase subproblem determines the highest level of improvement in linearized feasibility that can be attained locally. The second phase subproblem then seeks optimality in such a way that the resulting search direction attains a level of improvement in linearized feasibility that is proportional to that attained in the first phase. The subproblem formulations and parameter updates ensure that near an optimal solution, the algorithm reduces to a classical SQO method for optimization, and near an infeasible stationary point, the algorithm reduces to a (perturbed) SQO method for minimizing constraint violation. Global and local convergence guarantees for the algorithm are proved under common assumptions and numerical results are presented for a large set of test problems.
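The proportional linkage between the two phases can be made concrete. The acceptance test below is a generic sketch of that idea; the function names, the 1-norm infeasibility measure, and the fraction beta are my own choices, not the paper's exact conditions.

```python
import numpy as np

def phase2_step_ok(c, J, d1, d2, beta=0.1):
    """Two-phase linkage, sketched: the optimality (phase-2) step d2 is
    acceptable only if its reduction in linearized infeasibility,
    ||c||_1 - ||c + J d||_1, is at least the fraction beta of the
    reduction attained by the feasibility (phase-1) step d1."""
    reduction = lambda d: np.linalg.norm(c, 1) - np.linalg.norm(c + J @ d, 1)
    return reduction(d2) >= beta * reduction(d1)

c = np.array([1.0])           # current constraint violation
J = np.array([[1.0, 0.0]])    # constraint Jacobian
d1 = np.array([-1.0, 0.0])    # phase-1 step: removes all linearized infeasibility
print(phase2_step_ok(c, J, d1, np.array([-0.5, 1.0])))   # True: keeps half the gain
print(phase2_step_ok(c, J, d1, np.array([0.05, 0.0])))   # False: moves away from feasibility
```

Near an infeasible stationary point the phase-1 reduction itself tends to zero, and a test of this form is what lets the method collapse gracefully into pure infeasibility minimization, as the abstract describes.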
An Inexact Sequential Quadratic Optimization Algorithm for Large-Scale Nonlinear Optimization
STEP COMPUTATIONS, SIAM Journal on Scientific Computing
Abstract
We propose a sequential quadratic optimization method for solving nonlinear constrained optimization problems. The novel feature of the algorithm is that, during each iteration, the primal-dual search direction is allowed to be an inexact solution of a given quadratic optimization subproblem. We present a set of generic, loose conditions that the search direction (i.e., inexact subproblem solution) must satisfy so that global convergence of the algorithm for solving the nonlinear problem is guaranteed. The algorithm can be viewed as a globally convergent inexact Newton-based method. The results of numerical experiments are provided to illustrate the reliability and efficiency of the proposed numerical method.
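The generic conditions on the inexact subproblem solution are in the spirit of inexact Newton termination tests. The check below is a minimal illustration of that flavor; the constant kappa and this particular residual test are my simplification, not the paper's actual conditions.

```python
import numpy as np

def inexact_direction_ok(H, g, d, kappa=0.1):
    """Inexact-Newton-style test: a candidate direction d for the linear
    system H d = -g is accepted once its residual norm has dropped to a
    fraction kappa of the right-hand-side norm, so an iterative solver
    can stop early instead of solving the subproblem exactly."""
    return np.linalg.norm(H @ d + g) <= kappa * np.linalg.norm(g)

H = np.eye(2)
g = np.array([1.0, 0.0])
print(inexact_direction_ok(H, g, np.array([-0.95, 0.0])))  # True:  residual 0.05
print(inexact_direction_ok(H, g, np.array([-0.50, 0.0])))  # False: residual 0.50
```

Tests of this kind are what make the method attractive at large scale: only matrix-vector products with H are needed, never a factorization.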
An Interior-Point Trust-Funnel Algorithm for Nonlinear Optimization
2013
Abstract
We present an interior-point trust-funnel algorithm for solving large-scale nonlinear optimization problems. The method is based on an approach proposed by Gould and Toint (Math. Prog., 122(1):155–196, 2010) that focused on solving equality constrained problems. Our method is similar in that it achieves global convergence guarantees by combining a trust-region methodology with a funnel mechanism, but has the additional capability that it solves problems with both equality and inequality constraints. The prominent features of our algorithm are that (i) the subproblems that define each search direction may be solved approximately, (ii) criticality measures for feasibility and optimality aid in determining which subset of computations will be performed during each iteration, (iii) no merit function or filter is used, (iv) inexact sequential quadratic optimization steps may be utilized when advantageous, and (v) it may be implemented matrix-free, so that derivative matrices need not be formed or factorized so long as matrix-vector products with them can be performed.
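The funnel mechanism that replaces a merit function or filter is easy to sketch: iterates are accepted against a monotonically shrinking upper bound on constraint violation. The update rule below is a generic illustration, not the exact scheme of Gould and Toint or of this paper.

```python
def funnel_step(v_new, v_max, kappa=0.5):
    """Generic trust-funnel acceptance: a trial point with constraint
    violation v_new must lie inside the funnel (v_new <= v_max); on
    acceptance the funnel radius is tightened toward v_new, driving the
    violation of accepted iterates to zero over the run."""
    if v_new > v_max:
        return False, v_max                       # reject: outside the funnel
    return True, v_new + kappa * (v_max - v_new)  # accept and shrink the funnel

print(funnel_step(0.5, 1.0))  # (True, 0.75)
print(funnel_step(2.0, 1.0))  # (False, 1.0)
```

Because acceptance depends only on this one scalar bound, the mechanism needs no penalty parameter to balance objective against infeasibility, which is the point of feature (iii) in the abstract.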
A Suboptimal and Analytical Solution to Mobile Robot Trajectory Generation amidst Moving Obstacles
Abstract
In this paper, we present a suboptimal and analytical solution to the trajectory generation of mobile robots operating in a dynamic environment with moving obstacles. The proposed solution explicitly addresses both the robot kinodynamic constraints and the geometric constraints due to obstacles, while ensuring suboptimal performance with respect to a combined performance metric. In particular, the proposed design is based on a family of parameterized trajectories, which provides a unified way to embed the kinodynamic constraints, geometric constraints, and performance index into a set of parameterized constraint equations. As a result, the suboptimal solution to the constrained optimization problem can be obtained analytically. The solvability conditions for the constraint equations are explicitly established, and the proposed solution enhances the methodologies of real-time path planning for mobile robots with kinodynamic constraints. Both simulation and experiment results verify the effectiveness of the proposed method.
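The simplest instance of the parameterized-trajectory idea is a polynomial whose coefficients are fixed analytically by boundary conditions. The cubic below is my own illustration of that pattern, not the authors' parameterization: four coefficients are determined in closed form by the endpoint positions and velocities, and any remaining freedom (such as the horizon T) can then be tuned against kinodynamic and obstacle constraints.

```python
def cubic_trajectory(p0, pf, v0, vf, T):
    """One-dimensional cubic p(t) = a0 + a1*t + a2*t^2 + a3*t^3 solved
    analytically from the boundary conditions p(0)=p0, p'(0)=v0,
    p(T)=pf, p'(T)=vf."""
    a0, a1 = p0, v0
    a2 = (3.0 * (pf - p0) - (2.0 * v0 + vf) * T) / T**2
    a3 = (-2.0 * (pf - p0) + (v0 + vf) * T) / T**3
    return lambda t: a0 + a1 * t + a2 * t**2 + a3 * t**3

# rest-to-rest move from 0 to 1 over T = 2 seconds
p = cubic_trajectory(0.0, 1.0, 0.0, 0.0, 2.0)
print(p(0.0), p(2.0))  # 0.0 1.0: boundary conditions met exactly
```

Because the coefficients come from solving a small linear system in closed form, no numerical optimizer is needed at run time, which mirrors the abstract's emphasis on an analytical, real-time-capable solution.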