SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
, 2002
Cited by 597 (24 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse. We discuss …
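The QP subproblem at the heart of every SQP method reduces, for equality constraints, to a linear KKT system. A minimal sketch on a toy two-variable problem (an illustrative instance, not SNOPT's sparse large-scale implementation; the problem data and the dense 3x3 solver are assumptions made for the example):

```python
# Toy SQP step (illustrative, not SNOPT):
# minimize f(x) = x0^2 + x1^2  subject to  c(x) = x0 + x1 - 1 = 0.
# Each iteration solves the KKT system of the QP subproblem
#   [ H  A^T ] [ d   ]   [ -grad f ]
#   [ A   0  ] [ lam ] = [ -c      ]

def solve3(M, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def sqp_step(x):
    g = [2 * x[0], 2 * x[1]]           # gradient of f
    H = [[2.0, 0.0], [0.0, 2.0]]       # Hessian of f (exact here)
    A = [1.0, 1.0]                     # Jacobian of c
    c = x[0] + x[1] - 1.0
    KKT = [[H[0][0], H[0][1], A[0]],
           [H[1][0], H[1][1], A[1]],
           [A[0],    A[1],    0.0]]
    d0, d1, lam = solve3(KKT, [-g[0], -g[1], -c])
    return [x[0] + d0, x[1] + d1], lam

x, lam = sqp_step([2.0, -3.0])
print(x, lam)  # one step reaches (0.5, 0.5), lam = -1: the QP model is exact here
```

Because the objective is quadratic and the constraint linear, a single SQP step lands on the solution; on genuinely nonlinear problems the step is iterated with a globalization strategy.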
Sequential Quadratic Programming
, 1995
Cited by 166 (4 self)
… this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can …
A Computationally Efficient Feasible Sequential Quadratic Programming Algorithm
 SIAM Journal on Optimization
, 2001
Cited by 56 (0 self)
A sequential quadratic programming (SQP) algorithm generating feasible iterates is described and analyzed. What distinguishes this algorithm from previous feasible SQP algorithms proposed by various authors is a reduction in the amount of computation required to generate a new iterate, while the proposed scheme still enjoys the same global and fast local convergence properties. A preliminary implementation has been tested and some promising numerical results are reported. Key words: sequential quadratic programming, SQP, feasible iterates, feasible SQP, FSQP. AMS subject classifications: 49M37, 65K05, 65K10, 90C30, 90C53. PII S1052623498344562.
On the constant positive linear dependence condition and its application to SQP methods
 SIAM Journal on Optimization
, 2000
Cited by 50 (3 self)
In this paper, we introduce a constant positive linear dependence condition (CPLD), which is weaker than the Mangasarian–Fromovitz constraint qualification (MFCQ) and the constant rank constraint qualification (CRCQ). We show that a limit point of a sequence of approximating Karush–Kuhn–Tucker (KKT) points is a KKT point if the CPLD holds there. We show that a KKT point satisfying the CPLD and the strong second-order sufficiency conditions (SSOSC) is an isolated KKT point. We then establish convergence of a general sequential quadratic programming (SQP) method under the CPLD and the SSOSC. Finally, we apply these results to analyze the feasible SQP method proposed by Panier and Tits in 1993 for inequality constrained optimization problems. We establish its global convergence under the SSOSC and a condition slightly weaker than the Mangasarian–Fromovitz constraint qualification, and we prove superlinear convergence of a modified version of this algorithm under the SSOSC and a condition slightly weaker than the linear independence constraint qualification.
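The approximate-KKT notion behind results like this can be made concrete as a residual that vanishes exactly at KKT points. A hedged sketch on a one-variable toy problem (the problem data and the particular residual measure are illustrative choices for the example, not the paper's):

```python
def kkt_residual(x, lam):
    # Toy instance: minimize x^2 subject to g(x) = 1 - x <= 0.
    grad_L = 2.0 * x - lam          # d/dx [x^2 + lam*(1 - x)]: stationarity
    feas = max(1.0 - x, 0.0)        # primal infeasibility
    comp = abs(lam * (1.0 - x))     # complementarity violation
    dual = max(-lam, 0.0)           # multiplier sign violation
    return max(abs(grad_L), feas, comp, dual)

print(kkt_residual(1.0, 2.0))  # 0.0: (x, lam) = (1, 2) is a KKT point
```

A sequence of points driving this residual to zero is a sequence of approximating KKT points; the paper's contribution is identifying the weakest constraint qualification (CPLD) under which its limit is guaranteed to be a genuine KKT point.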
On the implementation of an algorithm for large-scale equality constrained optimization
 SIAM Journal on Optimization
, 1998
Cited by 49 (12 self)
This paper describes a software implementation of Byrd and Omojokun's trust region algorithm for solving nonlinear equality constrained optimization problems. The code is designed for the efficient solution of large problems and provides the user with a variety of linear algebra techniques for solving the subproblems occurring in the algorithm. Second derivative information can be used, but when it is not available, limited-memory quasi-Newton approximations are made. The performance of the code is studied using a set of difficult test problems from the CUTE collection.
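The Byrd–Omojokun algorithm splits each trust-region step into a normal step that reduces constraint violation and a tangential step that reduces the objective model within the constraint null space. A minimal dense sketch on a two-variable toy problem, assuming the trust-region bounds are inactive (the paper's code handles large sparse problems and the trust region properly; all problem data below are assumptions for the example):

```python
def byrd_omojokun_step(x):
    # Toy instance: minimize f = x0^2 + 2*x1^2  s.t.  c = x0 + x1 - 1 = 0.
    g = [2 * x[0], 4 * x[1]]
    H = [[2.0, 0.0], [0.0, 4.0]]
    A = [1.0, 1.0]
    c = x[0] + x[1] - 1.0
    # Normal step: minimum-norm v with c + A v = 0 (trust region assumed inactive)
    s = c / (A[0] ** 2 + A[1] ** 2)
    v = [-A[0] * s, -A[1] * s]
    # Tangential step: minimize the quadratic model along the null space of A
    t = [-A[1], A[0]]
    gv = [g[0] + H[0][0] * v[0] + H[0][1] * v[1],
          g[1] + H[1][0] * v[0] + H[1][1] * v[1]]
    num = t[0] * gv[0] + t[1] * gv[1]
    den = (H[0][0] * t[0] * t[0] + 2 * H[0][1] * t[0] * t[1]
           + H[1][1] * t[1] * t[1])
    alpha = -num / den
    return [x[0] + v[0] + alpha * t[0], x[1] + v[1] + alpha * t[1]]

print(byrd_omojokun_step([0.0, 0.0]))  # the model is exact here: [2/3, 1/3]
```

The decomposition is what lets the algorithm use different linear algebra for the two subproblems, which is exactly the flexibility the paper's software exposes.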
A reduced Hessian method for large-scale constrained optimization
 SIAM Journal on Optimization
, 1995
Smooth SQP Methods for Mathematical Programs with Nonlinear Complementarity Constraints
 SIAM Journal on Optimization
, 1997
Cited by 47 (0 self)
Mathematical programs with nonlinear complementarity constraints are reformulated using better-posed but nonsmooth constraints. We introduce a class of functions, parameterized by a real scalar, to approximate these nonsmooth problems by smooth nonlinear programs. This smoothing procedure has the extra benefit that it often improves the prospect of feasibility and stability of the constraints of the associated nonlinear programs and their quadratic approximations. We present two globally convergent algorithms based on sequential quadratic programming, SQP, as applied in exact penalty methods for nonlinear programs. Global convergence of the implicit smooth SQP method depends on existence of a lower-level nondegenerate (strictly complementary) limit point of the iteration sequence. Global convergence of the explicit smooth SQP method depends on a weaker property, i.e., existence of a limit point at which a generalized constraint qualification holds. We also discuss some practical matter …
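One well-known instance of such a parameterized smoothing is the "smoothed min": the complementarity condition 0 ≤ a ⊥ b ≥ 0 is equivalent to the nonsmooth equation min(a, b) = 0, and min can be approximated by a smooth function that converges to it as the parameter goes to zero. The specific formula below is a standard example from the smoothing literature, not necessarily the class the paper uses:

```python
import math

def smoothed_min(a, b, mu):
    # Smooth approximation of min(a, b); recovers min exactly at mu = 0.
    # Replacing min(a, b) = 0 by smoothed_min(a, b, mu) = 0 turns the
    # nonsmooth complementarity constraint into a smooth equation.
    return 0.5 * (a + b - math.sqrt((a - b) ** 2 + 4.0 * mu ** 2))

# complementarity 0 <= a, 0 <= b, a*b = 0 is equivalent to min(a, b) = 0
for mu in (1.0, 0.1, 0.01, 0.001):
    print(mu, smoothed_min(2.0, 0.0, mu))   # approaches min(2, 0) = 0
```

Driving mu toward zero across SQP iterations is the usual way such smoothings are embedded in a globally convergent scheme.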
Quadratically and Superlinearly Convergent Algorithms for the Solution of Inequality Constrained Minimization Problems
, 1995
Cited by 35 (12 self)
In this paper some Newton and quasi-Newton algorithms for the solution of inequality constrained minimization problems are considered. All the algorithms described produce sequences {x_k} converging q-superlinearly to the solution. Furthermore, under mild assumptions, a q-quadratic convergence rate in x is also attained. Other features of these algorithms are that only the solution of linear systems of equations is required at each iteration and that the strict complementarity assumption is never invoked. First the superlinear or quadratic convergence rate of a Newton-like algorithm is proved. Then, a simpler version of this algorithm is studied and it is shown that it is superlinearly convergent. Finally, quasi-Newton versions of the previous algorithms are considered and, provided the sequence defined by the algorithms converges, a characterization of superlinear convergence extending the result of Boggs, Tolle and Wang is given. Key words: inequality constrained optimization, New…
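Quasi-Newton variants like these maintain a Hessian approximation via secant updates. A minimal sketch of the classical BFGS update on a 2x2 matrix (illustrative only; the paper's algorithms and the Boggs–Tolle–Wang characterization involve considerably more than the bare update):

```python
def bfgs_update(H, s, y):
    # BFGS update of a 2x2 Hessian approximation H (list of lists):
    #   H_new = H - (H s s^T H) / (s^T H s) + (y y^T) / (y^T s)
    # The updated matrix satisfies the secant condition H_new s = y.
    def mv(M, v):
        return [M[0][0] * v[0] + M[0][1] * v[1],
                M[1][0] * v[0] + M[1][1] * v[1]]
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    Hs = mv(H, s)
    sHs = dot(s, Hs)
    ys = dot(y, s)
    return [[H[i][j] - Hs[i] * Hs[j] / sHs + y[i] * y[j] / ys
             for j in range(2)] for i in range(2)]

H = [[1.0, 0.0], [0.0, 1.0]]
s = [1.0, 0.0]          # step  x_{k+1} - x_k
y = [2.0, 1.0]          # gradient difference; y.s = 2 > 0 preserves definiteness
H2 = bfgs_update(H, s, y)
print(H2)               # H2 @ s equals y (secant condition)
```

Superlinear convergence of such methods is typically characterized by a Dennis–Moré-type condition on how well these updates approximate the true Hessian along the step directions.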
The U-Lagrangian of a convex function
 Trans. Amer. Math. Soc.
, 1999
Cited by 34 (7 self)
At a given point p, a convex function f is differentiable in a certain subspace U (the subspace along which ∂f(p) has 0-breadth). This property opens the way to defining a suitably restricted second derivative of f at p. We do this via an intermediate function, convex on U. We call this function the U-Lagrangian; it coincides with the ordinary Lagrangian in composite cases: exact penalty, semidefinite programming. Also, we use this new theory to design a conceptual pattern for superlinearly convergent minimization algorithms. Finally, we establish a connection with the Moreau–Yosida regularization.
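As the construction is usually stated in the literature (a hedged reconstruction from the abstract's ingredients, not a quotation from the paper), one picks a subgradient g in the relative interior of ∂f(p), sets V = lin(∂f(p) − g) and U = V⊥, and obtains the U-Lagrangian by partial minimization over V:

```latex
% U-Lagrangian (sketch; notation assumed, not quoted from the paper)
\[
  L_U(u) \;=\; \inf_{v \in V}\bigl\{\, f(p + u + v) - \langle g, v \rangle \,\bigr\},
  \qquad u \in U,
\]
% where $g \in \operatorname{ri}\partial f(p)$,
% $V = \operatorname{lin}\bigl(\partial f(p) - g\bigr)$, and $U = V^{\perp}$.
```

The point of the construction is that L_U is convex and differentiable at the origin, so a classical second-order expansion of L_U supplies the "restricted second derivative" of f along U that the abstract describes.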