Results 1 – 10 of 20
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
, 1997
Abstract

Cited by 328 (18 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse.
Sequential Quadratic Programming
, 1995
Abstract

Cited by 114 (2 self)
In this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can …
A reduced Hessian method for large-scale constrained optimization
 SIAM JOURNAL ON OPTIMIZATION
, 1995
Quadratically And Superlinearly Convergent Algorithms For The Solution Of Inequality Constrained Minimization Problems
, 1995
Abstract

Cited by 17 (6 self)
In this paper some Newton and quasi-Newton algorithms for the solution of inequality constrained minimization problems are considered. All the algorithms described produce sequences {x_k} converging q-superlinearly to the solution. Furthermore, under mild assumptions, a q-quadratic convergence rate in x is also attained. Other features of these algorithms are that only the solution of linear systems of equations is required at each iteration and that the strict complementarity assumption is never invoked. First the superlinear or quadratic convergence rate of a Newton-like algorithm is proved. Then, a simpler version of this algorithm is studied and it is shown that it is superlinearly convergent. Finally, quasi-Newton versions of the previous algorithms are considered and, provided the sequence defined by the algorithms converges, a characterization of superlinear convergence extending the result of Boggs, Tolle and Wang is given. Key words: inequality constrained optimization, New…
A Global Convergence Analysis Of An Algorithm For Large Scale Nonlinear Optimization Problems
, 1996
Abstract

Cited by 17 (4 self)
In this paper we give a global convergence analysis of a basic version of an SQP algorithm described in [2] for the solution of large-scale nonlinear inequality-constrained optimization problems. Several procedures and options have been added to the basic algorithm to improve the practical performance; some of these are also analyzed. The important features of the algorithm include the use of a constrained merit function to assess the progress of the iterates and a sequence of approximate merit functions that are less expensive to evaluate. It also employs an interior-point quadratic programming solver that can be terminated early to produce a truncated step. Key words: sequential quadratic programming, global convergence, merit function, large-scale problems. AMS subject classifications: 49M37, 65K05, 90C30. 1. Introduction. In this report we consider an algorithm to solve the inequality-constrained minimization problem, min_x f(x) subject to g(x) ≥ 0, (1.1) where x ∈ R^n, and …
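The merit-function idea mentioned in the abstract above — a single scalar that trades objective decrease against constraint violation — can be sketched with a standard l1 penalty merit function. This is a generic illustration, not the paper's own (constrained) merit function; the toy problem and the parameter `nu` are invented for the example.

```python
import numpy as np

# Hypothetical sketch: an l1 penalty merit function for
#   min f(x)  subject to  g(x) >= 0,
# defined as  phi(x; nu) = f(x) + nu * sum_i max(0, -g_i(x)).
# A line search can accept a step only if it decreases phi.
def merit(f, g, x, nu):
    violation = np.maximum(0.0, -g(x)).sum()  # total constraint violation
    return f(x) + nu * violation

# Toy data (not from the paper): f(x) = ||x||^2, constraint x - 1 >= 0
f = lambda x: float(x @ x)
g = lambda x: x - 1.0

x_feas = np.array([1.5])    # feasible point: penalty term vanishes
x_infeas = np.array([0.5])  # violates the constraint by 0.5

m_feas = merit(f, g, x_feas, nu=10.0)      # 1.5^2 = 2.25
m_infeas = merit(f, g, x_infeas, nu=10.0)  # 0.25 + 10 * 0.5 = 5.25
```

With a sufficiently large `nu`, the infeasible point scores worse than the feasible one even though its raw objective value is smaller, which is exactly the bias a merit function is meant to provide.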
Sequential Quadratic Programming for Large-Scale Nonlinear Optimization
, 1999
Abstract

Cited by 5 (0 self)
The sequential quadratic programming (SQP) algorithm has been one of the most successful general methods for solving nonlinear constrained optimization problems. We provide an introduction to the general method and show its relationship to recent developments in interior-point approaches. We emphasize large-scale aspects. Key words: sequential quadratic programming, nonlinear optimization, Newton methods, interior-point methods, local convergence, global convergence. (Contribution of Sandia National Laboratories and not subject to copyright in the United States. Preprint submitted to Elsevier, 1 July 1999.) 1 Introduction. In this article we consider the general method of Sequential Quadratic Programming (hereafter denoted SQP) for solving the nonlinear programming problem min_x f(x) subject to h(x) = 0, g(x) ≥ 0 (NLP), where f : R^n → R, h : R^n → R^m, and g : R^n → R^p. Broadly defined, the SQP method is a procedure that generates iterates converging …
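For the equality-constrained special case of the (NLP) problem in the abstract above, the basic SQP iteration has a very compact form: each step solves the KKT system of a local quadratic model. The sketch below is a minimal toy (exact Hessian, no line search or inequalities — not any paper's full algorithm); the problem data are invented for the example.

```python
import numpy as np

# Minimal SQP sketch for  min f(x)  subject to  h(x) = 0:
# each iteration solves the KKT system of the quadratic subproblem
#   [ H   A^T ] [ dx  ]   [ -(grad_f + A^T lam) ]
#   [ A    0  ] [ dlam] = [ -h(x)               ]
# Toy problem: minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0.
def grad_f(x):
    return 2.0 * x

def hess_lagrangian(x, lam):
    return 2.0 * np.eye(len(x))  # exact Hessian of the Lagrangian here

def h(x):
    return np.array([x[0] + x[1] - 1.0])

def jac_h(x):
    return np.array([[1.0, 1.0]])

def sqp_equality(x, lam, iters=5):
    n, m = len(x), len(lam)
    for _ in range(iters):
        H, A = hess_lagrangian(x, lam), jac_h(x)
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        rhs = -np.concatenate([grad_f(x) + A.T @ lam, h(x)])
        step = np.linalg.solve(K, rhs)   # Newton step on the KKT conditions
        x = x + step[:n]
        lam = lam + step[n:]
    return x, lam

x_star, lam_star = sqp_equality(np.array([3.0, -1.0]), np.zeros(1))
# x_star is (0.5, 0.5), the constrained minimizer; lam_star is -1
```

Because the toy objective is quadratic and the constraint linear, the very first QP subproblem is exact and the method converges in one step; on general nonlinear problems this local model is what yields the quadratic convergence rate discussed throughout these papers.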
Convergence Results on an Algorithm for Norm Constrained Regularization and Related Problems
, 1997
Abstract

Cited by 5 (1 self)
The constrained least-squares regularization of nonlinear ill-posed problems is a nonlinear programming problem for which trust-region methods have been developed. In this paper the convergence theory of one of those methods is addressed. It will be proved that, under suitable hypotheses, local (superlinear or quadratic) convergence holds and every accumulation point is second-order stationary. Key words: trust-region methods, regularization, ill conditioning, ill-posed problems, constrained minimization, fixed-point quasi-Newton methods. 1 Introduction. Many practical problems in applied sciences and engineering give rise to ill-conditioned (linear or nonlinear) systems F(x) = y (1), where F : R^n → R^m. Neither "exact solutions" of (1) (when they exist), nor global minimizers of ‖F(x) − y‖ have physical meaning, since they are, to a great extent, contaminated by the influence of measuring and rounding errors and, perhaps, uncertainty in the model formulation. From the …
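The abstract's point that exact solutions of an ill-conditioned system F(x) = y are "contaminated" by measurement noise can be shown concretely in the linear case. The sketch below is a generic Tikhonov-regularized solve, not the paper's trust-region method; the matrix construction, noise level, and parameter `mu` are all invented for the illustration.

```python
import numpy as np

# Hypothetical demo: for ill-conditioned A x = y with noisy y, the naive
# solve amplifies noise by 1/sigma_i, while the Tikhonov-regularized solve
#   (A^T A + mu I) x = A^T y
# damps the unstable directions at the cost of a small bias.
rng = np.random.default_rng(0)
n = 12
# Build a matrix with geometrically decaying singular values 1, 0.1, ..., 1e-11
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(10.0 ** -np.arange(n)) @ V.T

x_true = np.ones(n)
y = A @ x_true + 1e-6 * rng.standard_normal(n)  # small measurement noise

x_naive = np.linalg.solve(A, y)  # noise divided by tiny singular values
mu = 1e-8
x_reg = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ y)

err_naive = np.linalg.norm(x_naive - x_true)  # huge: noise dominates
err_reg = np.linalg.norm(x_reg - x_true)      # modest: bias, but stable
```

The regularized error stays orders of magnitude below the naive one; the constrained (norm-bounded) formulation studied in the paper plays the same stabilizing role as the penalty term `mu` here.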
SHARP PRIMAL SUPERLINEAR CONVERGENCE RESULTS FOR SOME NEWTONIAN METHODS FOR CONSTRAINED OPTIMIZATION
, 2009
Abstract

Cited by 5 (5 self)
As is well known, superlinear or quadratic convergence of the primal-dual sequence generated by an optimization algorithm does not, in general, imply superlinear convergence of the primal part. Primal convergence, however, is often of particular interest. For the sequential quadratic programming (SQP) algorithm, local primal-dual quadratic convergence can be established under the assumptions of uniqueness of the Lagrange multiplier associated with the solution and the second-order sufficient condition. At the same time, previous primal superlinear convergence results for SQP required strengthening the first assumption to the linear independence constraint qualification. In this paper, we show that this strengthening of assumptions is actually not necessary. Specifically, we show that once primal-dual convergence is assumed or already established, for a primal superlinear rate one only needs a certain error bound estimate. This error bound holds, for example, under the second-order sufficient condition, which is needed for primal-dual local analysis in any case. Moreover, in some situations even second-order sufficiency can be relaxed to the weaker assumption that the multiplier in question is noncritical. Our study is performed for a rather general perturbed SQP framework, which covers, in addition to SQP and quasi-Newton SQP, some other algorithms as well. For example, as a byproduct, …
A Robust Algorithm for Optimization With General Equality and Inequality Constraints
Abstract

Cited by 5 (4 self)
An algorithm for general nonlinearly constrained optimization is presented, which solves an unconstrained piecewise quadratic subproblem and a quadratic programming subproblem at each iterate. The algorithm is robust, since it can circumvent the difficulties associated with the possible inconsistency of the QP subproblem of the original SQP method. Moreover, the algorithm can converge to a point which satisfies a certain first-order necessary optimality condition even when the original problem is itself infeasible, which is a feature of Burke and Han's methods (1989). Unlike Burke and Han's methods (1989), however, we do not introduce additional bound constraints. The algorithm solves the same subproblems as the Han-Powell SQP algorithm at feasible points of the original problem. Under certain assumptions, it is shown that the algorithm coincides with the Han-Powell method when the iterates are sufficiently close to the solution. Some global convergence results are proved and local superlinear convergence …
Infeasibility Detection and SQP Methods for Nonlinear Optimization
, 2008
Abstract

Cited by 4 (2 self)
This paper addresses the need for nonlinear programming algorithms that provide fast local convergence guarantees regardless of whether a problem is feasible or infeasible. We present an active-set sequential quadratic programming method, derived from an exact penalty approach, that adjusts the penalty parameter appropriately to emphasize optimality over feasibility, or vice versa. Conditions are presented under which superlinear convergence is achieved in the infeasible case. Numerical experiments illustrate the practical behavior of the method.