Results 1–8 of 8
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
, 1997
"... Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first deriv ..."
Abstract
Cited by 327 (18 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse.
Trust-Region SQP Methods with Inexact Linear System Solves for Large-Scale Optimization
, 2006
"... by ..."
A TRUNCATED SQP METHOD BASED ON INEXACT INTERIOR-POINT SOLUTIONS OF SUBPROBLEMS
"... Abstract. We consider sequential quadratic programming (SQP) methods applied to optimization problems with nonlinear equality constraints and simple bounds. In particular, we propose and analyze a truncated SQP algorithm in which subproblems are solved approximately by an infeasible predictorcorrec ..."
Abstract
Cited by 4 (4 self)
We consider sequential quadratic programming (SQP) methods applied to optimization problems with nonlinear equality constraints and simple bounds. In particular, we propose and analyze a truncated SQP algorithm in which subproblems are solved approximately by an infeasible predictor-corrector interior-point method, followed by setting to zero some variables and some multipliers so that complementarity conditions for approximate solutions are enforced. Verifiable truncation conditions based on the residual of optimality conditions of subproblems are developed to ensure both global and fast local convergence. Global convergence is established under assumptions that are standard for line-search SQP with exact solution of subproblems. The local superlinear convergence rate is shown under the weakest assumptions that guarantee this property for pure SQP with exact solution of subproblems, namely, the strict Mangasarian–Fromovitz constraint qualification and second-order sufficiency. Local convergence results for our truncated method are presented as a special case of the local convergence for a more general perturbed SQP framework, which is of independent interest and is applicable even to some algorithms whose subproblems are not quadratic programs. For example, the framework can also be used to derive sharp local convergence results for linearly constrained Lagrangian methods. Preliminary numerical results confirm that it can indeed be beneficial to solve subproblems approximately, especially on early iterations.
Key words: sequential quadratic programming, inexact sequential quadratic programming, truncated sequential quadratic programming, interior-point method, superlinear convergence
Recent Developments in Interior-Point Methods
, 1999
"... The modern era of interiorpoint methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadrati ..."
Abstract
Cited by 3 (1 self)
The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite programming, and nonconvex and nonlinear problems, have reached varying levels of maturity. Interior-point methodology has been used as part of the solution strategy in many other optimization contexts as well, including analytic-center methods and column-generation algorithms for large linear programs. We review some core developments in the area and discuss their impact on these other problem areas.
Some Reflections on the Current State of Active-Set and Interior-Point Methods for Constrained Optimization
, 2003
"... We reect on the current state of activeset and interiorpoint methods for convex and nonconvex constrained optimization. We voice some concerns about current SQP methods. ..."
Abstract
Cited by 3 (1 self)
We reflect on the current state of active-set and interior-point methods for convex and nonconvex constrained optimization. We voice some concerns about current SQP methods.
GLOBAL AND FINITE TERMINATION OF A TWO-PHASE AUGMENTED LAGRANGIAN FILTER METHOD FOR GENERAL QUADRATIC PROGRAMS
, 2007
"... We present a twophase algorithm for solving largescale quadratic programs (QPs). In the first phase, gradientprojection iterations approximately minimize a boundconstrained augmented Lagrangian function and provide an estimate of the optimal active set. In the second phase, an equalityconstrai ..."
Abstract
Cited by 1 (0 self)
We present a two-phase algorithm for solving large-scale quadratic programs (QPs). In the first phase, gradient-projection iterations approximately minimize a bound-constrained augmented Lagrangian function and provide an estimate of the optimal active set. In the second phase, an equality-constrained QP defined by the current active set is approximately minimized in order to generate a second-order search direction. A filter determines the required accuracy of the subproblem solutions and provides an acceptance criterion for the search directions. The resulting algorithm is globally and finitely convergent. The algorithm is suitable for large-scale problems with many degrees of freedom, and provides an alternative to interior-point methods when iterative methods must be used to solve the underlying linear systems. Numerical experiments on a subset of the CUTEr QP test problems demonstrate the effectiveness of the approach.
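The phase-1 idea in this abstract (projected-gradient steps on a bound-constrained augmented Lagrangian, with the active set read off the final iterate) can be sketched roughly as follows. This is a minimal illustration under assumed defaults (fixed step size, fixed penalty rho, fixed multipliers, zero starting point); the function name and parameters are illustrative, not the paper's actual algorithm.

```python
import numpy as np

def projected_gradient_alqp(H, c, A, b, l, u, lam, rho=10.0,
                            alpha=0.05, iters=500):
    """Phase-1 sketch: projected-gradient minimization of the
    bound-constrained augmented Lagrangian of the QP
        min 0.5 x'Hx + c'x   s.t.  Ax = b,  l <= x <= u,
    for fixed multiplier estimate lam and penalty rho. Returns the
    final iterate and the estimated active set (components at a bound)."""
    x = np.clip(np.zeros_like(c), l, u)        # feasible-in-bounds start
    for _ in range(iters):
        r = A @ x - b                          # equality residual
        g = H @ x + c + A.T @ (lam + rho * r)  # gradient of aug. Lagrangian
        x = np.clip(x - alpha * g, l, u)       # gradient step, then project
    active = (x <= l + 1e-8) | (x >= u - 1e-8)
    return x, active
```

On the toy QP min 0.5||x||^2 s.t. x1 + x2 = 1, 0 <= x <= 1 with lam = 0, the iterates approach the augmented-Lagrangian minimizer (10/21, 10/21) and no bound is flagged active; a full two-phase method would then update lam and pass the active-set estimate to the equality-constrained phase 2.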
SQP and PDIP algorithms for Nonlinear Programming
, 2007
"... Penalty and barrier methods are indirect ways of solving constrained optimization problems, using techniques developed in the unconstrained optimization realm. In what follows we shall give the foundation of two more direct ways of solving constrained optimization problems, namely Sequential Quadrat ..."
Abstract
Penalty and barrier methods are indirect ways of solving constrained optimization problems, using techniques developed in the unconstrained optimization realm. In what follows we shall give the foundation of two more direct ways of solving constrained optimization problems, namely Sequential Quadratic Programming (SQP) methods and Primal-Dual Interior Point (PDIP) methods.

1 Sequential Quadratic Programming

For the derivation of the Sequential Quadratic Programming method we shall use the equality-constrained problem

    minimize_x f(x)   subject to   h(x) = 0,          (ECP)

where f: R^n → R and h: R^n → R^m are smooth functions. An understanding of this problem is essential in the design of SQP methods for general nonlinear programming problems. The KKT conditions for this problem are given by

    ∇f(x) + Σ_{i=1}^{m} λ_i ∇h_i(x) = 0,              (1a)
    h(x) = 0,                                          (1b)

where λ ∈ R^m are the Lagrange multipliers associated with the equality constraints. If we use the Lagrangian

    L(x, λ) = f(x) + Σ_{i=1}^{m} λ_i h_i(x),           (2)

we can write the KKT conditions (1) more compactly as

    [∇_x L(x, λ); ∇_λ L(x, λ)] = 0.                    (EQKKT)

As with Newton's method for unconstrained optimization, the main idea behind SQP is to model problem (ECP) at a given point x^(k) by a quadratic programming subproblem and then use the solution to this problem to construct a more accurate approximation x^(k+1). If we perform a Taylor series expansion of system (EQKKT) about (x^(k), λ^(k)) we obtain ∇_x L(x^(k), λ^(k))
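The Newton step on the KKT system sketched in this excerpt amounts to solving one symmetric linear system per iteration for the primal step and the new multipliers. A minimal sketch, assuming a dense solve and exact derivatives; the function names and the toy problem below are illustrative assumptions, not taken from the text.

```python
import numpy as np

def sqp_step(x, lam, grad_f, hess_L, h, jac_h):
    """One Newton iteration on the KKT system of
        minimize f(x)  subject to  h(x) = 0.
    Solves [[H, J'], [J, 0]] [dx; dlam] = -[grad_x L; h(x)],
    where H is the Lagrangian Hessian and J = Jacobian of h,
    then updates both the primal point and the multipliers."""
    J = jac_h(x)
    m, n = J.shape
    KKT = np.block([[hess_L(x, lam), J.T],
                    [J, np.zeros((m, m))]])
    rhs = -np.concatenate([grad_f(x) + J.T @ lam, h(x)])
    step = np.linalg.solve(KKT, rhs)
    return x + step[:n], lam + step[n:]
```

For the quadratic example minimize x1^2 + x2^2 subject to x1 + x2 = 1, a single step from any starting point lands on the solution x = (0.5, 0.5), λ = -1, since the KKT system is already linear.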
ON MUTUAL IMPACT OF NUMERICAL LINEAR ALGEBRA AND LARGE-SCALE OPTIMIZATION WITH FOCUS ON INTERIOR POINT METHODS
"... and largescale optimization with focus on interior point methods ..."