Results 1–10 of 11
Sequential Quadratic Programming
1995
Cited by 115 (2 self)
Abstract: "... this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ..."
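The quadratic subproblem at the heart of the SQP framework is concrete enough to sketch. The toy problem, function names, and iteration count below are illustrative assumptions, not taken from the paper: for an equality-constrained problem, each SQP step solves the KKT system of a quadratic model of the Lagrangian.

```python
# A minimal SQP sketch (assumed toy problem, not from the paper):
#   minimize f(x) = x1^2 + x2^2   subject to  h(x) = x1 + x2 - 1 = 0.

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def sqp_step(x):
    """One SQP step: solve the KKT system of the quadratic subproblem."""
    grad = [2 * x[0], 2 * x[1]]           # gradient of f
    H = [[2.0, 0.0], [0.0, 2.0]]          # Hessian of the Lagrangian (exact here)
    a = [1.0, 1.0]                        # constraint Jacobian (one row)
    h = x[0] + x[1] - 1.0                 # constraint value
    # KKT system: [H a^T; a 0] [d; lam] = [-grad; -h]
    K = [[H[0][0], H[0][1], a[0]],
         [H[1][0], H[1][1], a[1]],
         [a[0],    a[1],    0.0]]
    d0, d1, lam = solve_linear(K, [-grad[0], -grad[1], -h])
    return [x[0] + d0, x[1] + d1], lam

x = [3.0, -2.0]
for _ in range(3):
    x, lam = sqp_step(x)
print(x)  # -> close to [0.5, 0.5]
```

Because this toy problem is itself quadratic with a linear constraint, a single step already lands on the solution (0.5, 0.5) with multiplier -1; on genuinely nonlinear problems the iteration must be repeated and paired with a globalization strategy, which is what the surveyed theory addresses.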
On the implementation of an algorithm for large-scale equality constrained optimization
SIAM Journal on Optimization, 1998
Cited by 39 (11 self)
Abstract: This paper describes a software implementation of Byrd and Omojokun's trust region algorithm for solving nonlinear equality constrained optimization problems. The code is designed for the efficient solution of large problems and provides the user with a variety of linear algebra techniques for solving the subproblems occurring in the algorithm. Second derivative information can be used, but when it is not available, limited memory quasi-Newton approximations are made. The performance of the code is studied using a set of difficult test problems from the CUTE collection.
On Combining Feasibility, Descent and Superlinear Convergence in Inequality Constrained Optimization
Mathematical Programming, 1993
Cited by 29 (1 self)
Abstract: Extension of quasi-Newton techniques from unconstrained to constrained optimization via Sequential Quadratic Programming (SQP) presents several difficulties. Among these are the possible inconsistency, away from the solution, of first order approximations to the constraints, resulting in infeasibility of the quadratic programs; and the task of selecting a suitable merit function to induce global convergence. In the case of inequality constrained optimization, both of these difficulties disappear if the algorithm is forced to generate iterates that all satisfy the constraints and that yield monotonically decreasing objective function values. (Feasibility of the successive iterates is in fact required in many contexts, such as in real-time applications or when the objective function is not well defined outside the feasible set.) It has recently been shown that this can be achieved while preserving local two-step superlinear convergence. In this note, the essential ingredients for an S...
A Merit Function for Inequality Constrained Nonlinear Programming Problems
Internal Report 4702, National Institute of Standards and Technology, 1993
Cited by 5 (4 self)
Abstract: We consider the use of the sequential quadratic programming (SQP) technique for solving the inequality constrained minimization problem min_x f(x) subject to g_i(x) ≥ 0, i = 1, ..., m. SQP methods require the use of an auxiliary function, called a merit function or line-search function, for assessing the steps that are generated. We derive a merit function by adding slack variables to create an equality constrained problem and then using the merit function developed earlier by the authors for the equality constrained case. We stress that we do not solve the slack variable problem, but only use it to construct the merit function. The resulting function is simplified in a certain way that leads to an effective procedure for updating the squares of the slack variables. A globally convergent algorithm, based on this merit function, is suggested and is demonstrated to be effective in practice. Contribution of the National Institute of Standards and Technology and not subject to copyr...
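The slack-variable mechanism this abstract builds on can be sketched. The paper's actual merit function is more elaborate (it involves multiplier estimates and a specific simplification); the code below only illustrates the underlying idea, with a made-up toy problem: rewrite g_i(x) ≥ 0 as g_i(x) - z_i² = 0 and penalize the equality residual, choosing the residual-minimizing slacks z_i² = max(g_i(x), 0).

```python
# Hedged sketch of a slack-variable merit function (illustrative only, not
# the merit function derived in the paper).

def merit(f, g, x, mu):
    """phi(x; mu) = f(x) + mu * sum_i (g_i(x) - z_i^2)^2 with optimal slacks.

    With z_i^2 = max(g_i(x), 0), the residual g_i(x) - z_i^2 equals
    min(g_i(x), 0), so only violated constraints are penalized.
    """
    residuals = [gi(x) - max(gi(x), 0.0) for gi in g]  # = min(g_i(x), 0)
    return f(x) + mu * sum(r * r for r in residuals)

# assumed toy problem: minimize f(x) = (x - 2)^2 subject to g(x) = 1 - x >= 0
f = lambda x: (x - 2.0) ** 2
g = [lambda x: 1.0 - x]

print(merit(f, g, 0.5, 10.0))  # feasible point: penalty vanishes -> 2.25
print(merit(f, g, 1.5, 10.0))  # infeasible: 0.25 + 10 * 0.25 -> 2.75
```

The point the abstract stresses carries over: the slack problem is never solved; the slacks exist only to define the merit value at each candidate step.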
Exact Penalty Methods
In I. Ciocco (Ed.), Algorithms for Continuous Optimization, 1994
Cited by 3 (1 self)
Abstract: Exact penalty methods for the solution of constrained optimization problems are based on the construction of a function whose unconstrained minimizing points are also solutions of the constrained problem. In the first part of this paper we recall some definitions concerning exactness properties of penalty functions, barrier functions, and augmented Lagrangian functions, and discuss under which assumptions on the constrained problem these properties can be ensured. In the second part of the paper we consider algorithmic aspects of exact penalty methods; in particular, we show that, by making use of continuously differentiable functions that possess exactness properties, it is possible to define implementable algorithms that are globally convergent with a superlinear convergence rate towards KKT points of the constrained problem. 1 Introduction. "It would be a major theoretic breakthrough in nonlinear programming if a simple continuously differentiable function could be exhibited with th...
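Exactness is easy to illustrate numerically with the classical nonsmooth l1 penalty P(x) = f(x) + c·|h(x)| (the nonsmooth case, as opposed to the differentiable functions this paper focuses on). The toy problem and grid search below are illustrative assumptions, not the paper's: minimize f(x) = x² subject to h(x) = x - 1 = 0, with solution x* = 1 and multiplier λ* = -2. For any c > |λ*| = 2 the unconstrained minimizer of P is exactly x*.

```python
# Hedged numeric check of exact-penalty behavior (toy problem is assumed).

def penalty(x, c):
    return x * x + c * abs(x - 1.0)

def argmin_on_grid(func, lo, hi, steps=200001):
    """Brute-force minimizer on an even grid, good enough for a 1-D demo."""
    h = (hi - lo) / (steps - 1)
    best = min(range(steps), key=lambda i: func(lo + i * h))
    return lo + best * h

x_small_c = argmin_on_grid(lambda x: penalty(x, 1.0), -2.0, 2.0)  # c below threshold
x_large_c = argmin_on_grid(lambda x: penalty(x, 4.0), -2.0, 2.0)  # c > |lambda*|

print(x_small_c)  # -> about 0.5 (= c/2): the minimizer is infeasible
print(x_large_c)  # -> about 1.0: exactly the constrained solution
```

This is the trade-off the paper addresses: the l1 penalty is exact but nonsmooth, while the continuously differentiable exact functions it surveys recover exactness without the kink.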
A Nonsmooth Equation Based BFGS Method for Solving KKT Systems in Mathematical Programming
Journal of Optimization Theory and Applications, 1998
Cited by 2 (1 self)
Abstract: In this paper, we present a BFGS method for solving a KKT system in mathematical programming, based on a nonsmooth equation reformulation of the KKT system. We successively split the nonsmooth equation into equivalent equations with a particular structure. Based on the splitting, we develop a BFGS method in which the subproblems are systems of linear equations with symmetric and positive definite coefficient matrices. A suitable line search is introduced under which the generated iterates exhibit an approximate norm descent property. The method is well defined and, under suitable conditions, converges to a KKT point globally and superlinearly without any convexity assumption on the problem.
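The claim that the subproblems have symmetric positive definite coefficient matrices rests on a standard property of the BFGS update: it preserves symmetry and positive definiteness whenever the curvature condition yᵀs > 0 holds. A textbook sketch of that update (not the paper's specialized splitting):

```python
# Generic BFGS update: B+ = B - (B s s^T B)/(s^T B s) + (y y^T)/(y^T s).

def bfgs_update(B, s, y):
    n = len(s)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    sBs = sum(s[i] * Bs[i] for i in range(n))
    ys = sum(y[i] * s[i] for i in range(n))
    assert ys > 0, "curvature condition y^T s > 0 required"
    return [[B[i][j] - Bs[i] * Bs[j] / sBs + y[i] * y[j] / ys
             for j in range(n)] for i in range(n)]

B = [[1.0, 0.0], [0.0, 1.0]]
s = [1.0, 0.0]
y = [2.0, 1.0]          # satisfies y^T s = 2 > 0
B1 = bfgs_update(B, s, y)
print(B1)               # symmetric, positive definite, and B1 s = y
```

The updated matrix satisfies the secant condition B⁺s = y, which is what lets the linear-equation subproblems stand in for Newton steps on the reformulated KKT system.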
Pseudo-Linear Programming
1998
Cited by 2 (0 self)
Abstract: This short note revisits an algorithm previously sketched by Mathis and Mathis (SIAM Review, 1995) and used to solve a nonlinear hospital fee optimization problem. An analysis of the problem structure reveals how the Simplex algorithm, viewed under the correct light, can be the driving force behind a successful algorithm for a nonlinear problem. 1 A seemingly nonlinear program In a past Classroom Notes column [6], Mathis and Mathis introduce an interesting optimization problem. Practical in this era of budget constraints, their model describes a facet of hospital revenue and is used by managers in Texas to help in decision making. Even more interesting, for the theoretically minded, is the fact that a trivial algorithm seems to solve, albeit without a convergence proof, a nonlinear, arguably difficult problem. We revisit this problem to give a strong mathematical foundation to a slightly modified algorithm and explain, along the way, why the problem is much simpler than expected at f...
Switching Stepsize Strategies for SQP
2010
Abstract: An SQP algorithm is presented for solving constrained nonlinear programming problems. The algorithm uses three stepsize strategies in order to achieve global and superlinear convergence. Switching rules are implemented that combine the merits and avoid the drawbacks of the three stepsize strategies. A penalty parameter is determined using an adaptive strategy that aims to achieve sufficient decrease of the activated merit function. Global convergence is established, and it is also shown that, locally, unit step sizes are accepted, so that superlinear convergence is not impeded under standard assumptions. Global convergence and convergence of the stepsizes are demonstrated on test problems from the Hock and Schittkowski collection.
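One of the classical stepsize strategies that such switching rules draw on is Armijo backtracking on a merit function: shrink the step until a sufficient-decrease condition holds. A minimal sketch, with illustrative parameter values not taken from the paper:

```python
# Armijo backtracking line search on a merit function (sketch).

def armijo(phi, dphi0, alpha0=1.0, beta=0.5, sigma=1e-4, max_iter=50):
    """Backtrack from alpha0 until phi(alpha) <= phi(0) + sigma*alpha*dphi0.

    phi    -- merit function along the search direction, phi(alpha)
    dphi0  -- directional derivative of phi at alpha = 0 (must be < 0)
    """
    phi0 = phi(0.0)
    alpha = alpha0
    for _ in range(max_iter):
        if phi(alpha) <= phi0 + sigma * alpha * dphi0:
            return alpha
        alpha *= beta
    raise RuntimeError("no acceptable stepsize found")

# toy merit: phi(alpha) = (1 - alpha)^2, i.e. f(x) = x^2 from x = 1 along d = -1
alpha = armijo(lambda a: (1.0 - a) ** 2, -2.0)
print(alpha)  # -> 1.0: the full step already gives sufficient decrease
```

Accepting the unit step here mirrors the abstract's local result: near the solution the full SQP step should pass the test, so backtracking does not spoil superlinear convergence.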
A decomposition method based on SQP for a class of multistage nonlinear stochastic programs
Abstract: Multistage stochastic programming problems arise in many practical situations, such as production and manpower planning, portfolio selection, and so on. Generally, the size of the deterministic equivalent of a stochastic program can be very large and may not be solvable directly by optimization approaches. Sequential quadratic programming methods are iterative and very effective for solving medium-size nonlinear programs. Based on scenario analysis, a decomposition method based on SQP for solving a class of multistage nonlinear stochastic programs is proposed, which generates the search direction by solving in parallel a set of quadratic programming subproblems, each much smaller than the original problem, at each iteration. Conjugate gradient methods can be introduced to derive estimates of the dual multipliers associated with the nonanticipativity constraints. By selecting the stepsize to reduce an exact penalty function sufficiently, the algorithm terminates finitely at an approxim...
A Quasi-Newton L2-Penalty Method for Minimization Subject to Nonlinear Equality Constraints
Abstract: We present a modified L2 penalty function method for equality constrained optimization problems. The pivotal feature of our algorithm is that at every iterate we invoke a special change of variables to improve the ability of the algorithm to follow the constraint level sets. This change of variables gives rise to a suitable block diagonal approximation to the Hessian, which is then used to construct a quasi-Newton method. We show that the complete algorithm is globally convergent with a local Q-superlinear convergence rate. Preliminary computational results are given for a few problems. 1. Introduction. We construct a quasi-Newton L2 penalty method for solving the equality constrained optimization problem: minimize f(x) subject to c(x) = 0 (1.1), where x ∈ ℝⁿ, f : ℝⁿ → ℝ, and c : ℝⁿ → ℝᵐ are smooth nonlinear functions. This method possesses both strong global convergence properties and a local superlinear convergence rate by combining an L2 penalty function method ...
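The plain quadratic (L2) penalty idea underlying such methods can be sketched on a toy 1-D instance of problem (1.1). The paper's method adds a change of variables and a quasi-Newton model; the code below, with an assumed toy problem and a brute-force inner solve, only illustrates the penalty loop P(x; μ) = f(x) + (μ/2)·c(x)², whose minimizers approach the constrained solution as μ grows.

```python
# Quadratic (L2) penalty loop on an assumed toy problem:
#   minimize f(x) = x^2  subject to  c(x) = x - 1 = 0, solution x* = 1.
# Exact penalty minimizers are mu/(mu + 2), so feasibility is only reached
# in the limit mu -> infinity (unlike exact penalty functions).

def minimize_1d(func, lo, hi, steps=100001):
    """Brute-force grid minimizer; a stand-in for the paper's inner solver."""
    h = (hi - lo) / (steps - 1)
    best = min(range(steps), key=lambda i: func(lo + i * h))
    return lo + best * h

f = lambda x: x * x            # objective
c = lambda x: x - 1.0          # equality constraint

xs = [minimize_1d(lambda t, m=mu: f(t) + 0.5 * m * c(t) ** 2, -2.0, 2.0)
      for mu in [1.0, 10.0, 100.0, 1000.0]]
print(xs)  # -> increasing toward 1.0 as mu grows
```

The slow drift toward feasibility and the ill-conditioning of P for large μ are exactly what motivate the paper's change of variables and block diagonal Hessian approximation.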