Results 1–10 of 15
A Second Derivative SQP Method: Local Convergence and Practical Issues
 SIAM Journal on Optimization
Abstract

Cited by 13 (3 self)
results for a second-derivative SQP method for minimizing the exact ℓ1 merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were intended to improve the efficiency of the algorithm. Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk: a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1 penalty function over a sequence of increasing values of the penalty parameter. Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and, therefore, results in asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set. Key words. nonlinear programming, nonlinear inequality constraints, sequential quadratic programming, ℓ1 penalty function, nonsmooth optimization. AMS subject classifications. 49J52, 49M37, 65F22, 65K05, 90C26, 90C30, 90C55
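The abstract above mentions a limited-memory BFGS update for the positive-definite matrix Bk. As a hedged illustration of the general technique (a generic sketch, not the paper's specific Bk strategy), the standard two-loop recursion applies an inverse-Hessian approximation, built from the m most recent pairs (s_i, y_i) = (x_{i+1} - x_i, ∇f_{i+1} - ∇f_i), to a gradient vector:

```python
# Generic L-BFGS two-loop recursion (plain-Python vectors for self-containment).
# Given curvature pairs (s_i, y_i), computes H*grad, where H approximates the
# inverse Hessian. This is an illustrative sketch, not the paper's algorithm.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def lbfgs_direction(grad, s_list, y_list):
    """Return H*grad using the standard two-loop recursion."""
    q = list(grad)
    alphas = []
    # First loop: newest pair to oldest
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / dot(y, s)
        alpha = rho * dot(s, q)
        alphas.append(alpha)
        q = [qi - alpha * yi for qi, yi in zip(q, y)]
    # Initial scaling gamma = s'y / y'y from the most recent pair
    if s_list:
        gamma = dot(s_list[-1], y_list[-1]) / dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0
    r = [gamma * qi for qi in q]
    # Second loop: oldest pair to newest
    for (s, y), alpha in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / dot(y, s)
        beta = rho * dot(y, r)
        r = [ri + (alpha - beta) * si for ri, si in zip(r, s)]
    return r
```

For a quadratic with Hessian diag(2, 8) and exact pairs along both coordinate axes, the recursion reproduces the exact inverse-Hessian action: applied to the gradient [2, 8] it returns [1, 1].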
A SECOND DERIVATIVE SQP METHOD WITH IMPOSED DESCENT
, 2008
Abstract

Cited by 2 (0 self)
Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be computationally nonviable. This paper presents a second-derivative Sℓ1QP method based on quadratic subproblems that are either convex, and thus may be solved efficiently, or need not be solved globally. Additionally, an explicit descent constraint is imposed on certain QP subproblems, which “guides” the iterates through areas in which nonconvexity is a concern. Global convergence of the resulting algorithm is established.
A SECOND DERIVATIVE SQP METHOD: THEORETICAL ISSUES
, 2008
Abstract

Cited by 2 (2 self)
Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be computationally nonviable. This paper presents a second-derivative SQP method based on quadratic subproblems that are either convex, and thus may be solved efficiently, or need not be solved globally. Additionally, an explicit descent constraint is imposed on certain QP subproblems, which “guides” the iterates through areas in which nonconvexity is a concern. Global convergence of the resulting algorithm is established.
Multilevel algorithms for large-scale interior point methods in bound constrained optimization
, 2006
Abstract

Cited by 2 (0 self)
We develop and compare multilevel algorithms for solving bound constrained nonlinear variational problems via interior point methods. Several equivalent formulations of the linear systems arising at each iteration of the interior point method are compared from the point of view of conditioning and iterative solution. Furthermore, we show how a multilevel continuation strategy can be used to obtain good initial guesses (“hot starts”) for each nonlinear iteration. A minimal surface problem is used to illustrate the various approaches.
A SECOND-DERIVATIVE TRUST-REGION SQP METHOD WITH A “TRUST-REGION-FREE” PREDICTOR STEP
, 2009
Acknowledgements
Abstract
mathematical suggestions. I would also like to acknowledge the support of the Centre of Algebra at the University of Lisbon, and of
A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization
, 2011
Abstract
Penalty and interior-point methods for nonlinear optimization problems have enjoyed great successes for decades. Penalty methods have proved to be effective for a variety of problem classes due to their regularization effects on the constraints. They have also been shown to allow for rapid infeasibility detection. Interior-point methods have become the workhorse in large-scale optimization due to their Newton-like qualities, both in terms of their scalability and convergence behavior. Each of these two strategies, however, has certain disadvantages that make its use either impractical or inefficient for certain classes of problems. The goal of this paper is to present a penalty-interior-point method that possesses the advantages of penalty and interior-point techniques but does not suffer from their disadvantages. Numerous attempts have been made along these lines in recent years, each with varying degrees of success. The novel feature of the algorithm in this paper is that our focus is not only on the formulation of the penalty-interior-point subproblem itself, but on the design of updates for the penalty and interior-point parameters. The updates we propose are designed so that rapid convergence to a solution of the nonlinear optimization problem or an infeasible stationary point is attained. We motivate the convergence properties of our algorithm and illustrate its practical performance on a large set of problems, including sets of problems that exhibit degeneracy or are infeasible.
Adaptive Augmented Lagrangian Methods for Large-Scale Equality Constrained Optimization
, 2012
Abstract
We propose an augmented Lagrangian algorithm for solving large-scale equality constrained optimization problems. The novel feature of the algorithm is an adaptive update for the penalty parameter motivated by recently proposed techniques for exact penalty methods. This adaptive updating scheme greatly improves the overall performance of the algorithm without sacrificing the strengths of the core augmented Lagrangian framework, such as its attractive local convergence behavior and ability to be implemented matrix-free. This latter strength is particularly important due to interest in employing augmented Lagrangian algorithms for solving large-scale optimization problems. We focus on a trust-region algorithm, but also propose a line-search algorithm that employs the same adaptive penalty parameter updating scheme. We provide theoretical results related to the global convergence behavior of our algorithms and illustrate by a set of numerical experiments that they outperform traditional augmented Lagrangian methods in terms of critical performance measures.
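For orientation, the classical augmented Lagrangian iteration that this abstract builds on can be sketched on a toy equality-constrained problem. The sketch below uses a crude doubling rule for the penalty parameter, not the adaptive update the paper proposes, and all names and the problem are illustrative:

```python
# Minimal augmented Lagrangian sketch on a toy problem:
#   minimize f(x, y) = x^2 + y^2   subject to   c(x, y) = x + y - 1 = 0,
# whose solution is x = y = 0.5 with multiplier lam = -1.
# Outer loop: first-order multiplier update plus a crude penalty increase.
# Inner loop: gradient descent on L_A(x, y; lam, mu) = f + lam*c + (mu/2)*c^2.

def al_solve(outer_iters=20):
    x = y = 0.0
    lam, mu = 0.0, 1.0                    # multiplier estimate, penalty parameter
    for _ in range(outer_iters):
        step = 1.0 / (2.0 + 2.0 * mu)     # safe step size for this quadratic
        for _ in range(100):              # approximately minimize L_A in (x, y)
            cval = x + y - 1.0
            gx = 2.0 * x + lam + mu * cval
            gy = 2.0 * y + lam + mu * cval
            x -= step * gx
            y -= step * gy
        cval = x + y - 1.0
        lam += mu * cval                  # first-order multiplier update
        if abs(cval) > 1e-8:              # crude (non-adaptive) penalty increase
            mu *= 2.0
    return x, y, lam
```

The adaptive schemes discussed in the abstract replace the naive doubling rule with updates driven by measured progress toward feasibility, which avoids driving mu unnecessarily high and ill-conditioning the subproblems.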
Optimization
, 2010
unknown title
Abstract
convergence of a second-derivative SQP method for minimizing the exact ℓ1 merit function for a fixed value of the penalty parameter. This result required the properties of a so-called Cauchy step, which was itself computed from a so-called predictor step. In addition, they allowed for the computation of a variety of (optional) accelerator steps that were intended to improve the efficiency of the algorithm. The main purpose of this paper is to prove that a nonmonotone variant of the algorithm is quadratically convergent for two specific realizations of the accelerator step; this is verified with preliminary numerical results on the Hock and Schittkowski test set. Once fast local convergence is established, we consider two specific aspects of the algorithm that are important for an efficient implementation. First, we discuss a strategy for defining the positive-definite matrix Bk used in computing the predictor step that is based on a limited-memory BFGS update. Second, we provide a simple strategy for updating the penalty parameter based on approximately minimizing the ℓ1 penalty function over a sequence of increasing values of the penalty parameter.
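The increasing-penalty strategy mentioned above can be illustrated on a one-dimensional toy problem. This is a hedged sketch of the general idea only (the problem, names, and the closed-form inner solve are illustrative, not from the paper): minimize each ℓ1 penalty function in turn, and increase the penalty parameter whenever its minimizer is still infeasible.

```python
# Toy problem:  minimize f(x) = (x - 2)^2  subject to  x <= 1,
# via the exact l1 penalty  phi(x; pi) = f(x) + pi * max(0, x - 1).
# For pi >= 2 the penalty is exact: the minimizer of phi is the solution x = 1.

def min_phi(pi):
    """Minimizer of phi(.; pi) for this toy problem, in closed form.

    For x <= 1, phi'(x) = 2(x - 2) < 0, so phi decreases toward x = 1.
    For x >= 1, phi'(x) = 2(x - 2) + pi, which vanishes at x = 2 - pi/2.
    """
    if pi < 2.0:
        return 2.0 - pi / 2.0   # minimizer still violates x <= 1
    return 1.0                  # penalty exact: the kink at x = 1 is optimal

def increasing_penalty_loop(pi=1.0, tol=1e-8, max_rounds=30):
    for _ in range(max_rounds):
        x = min_phi(pi)
        if max(0.0, x - 1.0) <= tol:   # minimizer is (near-)feasible: done
            return x, pi
        pi *= 10.0                     # otherwise increase pi and re-minimize
    raise RuntimeError("penalty parameter grew without reaching feasibility")

x, pi = increasing_penalty_loop()      # -> x = 1.0 with pi = 10.0
```

Here pi = 1 yields the infeasible minimizer x = 1.5, so the penalty is increased once, after which the minimizer is the feasible solution x = 1. The paper's strategy minimizes each penalty function only approximately and chooses the sequence of penalty values more carefully.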