Results 1–10 of 26
A New Trust Region Algorithm For Equality Constrained Optimization
, 1995
"... . We present a new trust region algorithm for solving nonlinear equality constrained optimization problems. At each iterate a change of variables is performed to improve the ability of the algorithm to follow the constraint level sets. The algorithm employs L 2 penalty functions for obtaining global ..."
Abstract

Cited by 51 (7 self)
We present a new trust region algorithm for solving nonlinear equality constrained optimization problems. At each iterate a change of variables is performed to improve the ability of the algorithm to follow the constraint level sets. The algorithm employs L2 penalty functions for obtaining global convergence. Under certain assumptions we prove that this algorithm globally converges to a point satisfying the second-order necessary optimality conditions; the local convergence rate is quadratic. Results of preliminary numerical experiments are presented. 1. Introduction. We consider the equality constrained optimization problem: minimize f(x) subject to c(x) = 0 (1.1), where x ∈ R^n, and f : R^n → R and c : R^n → R^m are smooth nonlinear functions. Problem (1.1) is often solved by successive quadratic programming (SQP) methods. At a current point x_k ∈ R^n, SQP methods determine a search direction d_k by solving a quadratic programming problem: minimize ∇f(x_k)^T d + (1/2) ...
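For reference, the formula truncated above is the standard SQP quadratic subproblem; a reconstruction of its usual form (the symbol B_k below is our label for an approximation to the Hessian of the Lagrangian, not a symbol quoted from the paper):

  \min_{d \in \mathbb{R}^n} \; \nabla f(x_k)^T d + \tfrac{1}{2} d^T B_k d
  \quad \text{subject to} \quad c(x_k) + \nabla c(x_k)^T d = 0.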
An Algorithm for Nonlinear Optimization Using Linear Programming and Equality Constrained Subproblems
, 2003
"... This paper describes an activeset algorithm for largescale nonlinear programming based on the successive linear programming method proposed by Fletcher and Sainz de la Maza [10]. The step computation is performed in two stages. In the first stage a linear program is solved to estimate the activ ..."
Abstract

Cited by 41 (12 self)
This paper describes an active-set algorithm for large-scale nonlinear programming based on the successive linear programming method proposed by Fletcher and Sainz de la Maza [10]. The step computation is performed in two stages. In the first stage a linear program is solved to estimate the active set at the solution. The linear program is obtained by making a linear approximation to the ℓ1 penalty function inside a trust region. In the second stage, an equality constrained quadratic program (EQP) is solved involving only those constraints that are active at the solution of the linear program.
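For orientation, the first-stage linear program has, for equality constraints, the following generic shape (a sketch in our notation: σ is the penalty parameter and Δ_k the trust-region radius; neither symbol is taken from the paper):

  \min_{d} \; \nabla f(x_k)^T d + \sigma \sum_{i=1}^{m} |c_i(x_k) + \nabla c_i(x_k)^T d|
  \quad \text{subject to} \quad \|d\|_\infty \le \Delta_k.

The ℓ∞ trust region keeps the problem a linear program once the absolute values are rewritten with auxiliary variables, and the constraints whose linearizations are active at the LP solution define the EQP of the second stage.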
On the implementation of an algorithm for large-scale equality constrained optimization
 SIAM Journal on Optimization
, 1998
"... Abstract. This paper describes a software implementation of Byrd and Omojokun’s trust region algorithm for solving nonlinear equality constrained optimization problems. The code is designed for the efficient solution of large problems and provides the user with a variety of linear algebra techniques ..."
Abstract

Cited by 38 (11 self)
This paper describes a software implementation of Byrd and Omojokun's trust region algorithm for solving nonlinear equality constrained optimization problems. The code is designed for the efficient solution of large problems and provides the user with a variety of linear algebra techniques for solving the subproblems occurring in the algorithm. Second derivative information can be used, but when it is not available, limited memory quasi-Newton approximations are made. The performance of the code is studied using a set of difficult test problems from the CUTE collection.
On attraction of Newton-type iterates to multipliers violating second-order sufficiency conditions
, 2009
"... Assuming that the primal part of the sequence generated by a Newtontype (e.g., SQP) method applied to an equalityconstrained problem converges to a solution where the constraints are degenerate, we investigate whether the dual part of the sequence is attracted by those Lagrange multipliers which s ..."
Abstract

Cited by 16 (15 self)
Assuming that the primal part of the sequence generated by a Newton-type (e.g., SQP) method applied to an equality-constrained problem converges to a solution where the constraints are degenerate, we investigate whether the dual part of the sequence is attracted by those Lagrange multipliers which satisfy the second-order sufficient condition (SOSC) for optimality, or by those multipliers which violate it. This question is relevant for at least two reasons: one is the speed of convergence of standard methods; the other is the applicability of some recently proposed approaches for handling degenerate constraints. We show that for the class of damped Newton methods, convergence of the dual sequence to multipliers satisfying SOSC is unlikely to occur. We support our findings by numerical experiments. We also suggest a simple auxiliary procedure for computing multiplier estimates which does not have this ...
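For concreteness, the condition in question is the standard second-order sufficient condition for equality-constrained problems (a textbook statement, not quoted from the paper): with the Lagrangian L(x, λ) = f(x) + λ^T c(x), a multiplier λ* satisfies SOSC at a solution x* if

  d^T \nabla^2_{xx} L(x^*, \lambda^*) \, d > 0
  \quad \text{for all } d \ne 0 \text{ with } \nabla c(x^*)^T d = 0.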
Methods for nonlinear constraints in optimization calculations
 The State of the Art in Numerical Analysis
, 1996
"... ..."
On the sequential quadratically constrained quadratic programming methods
 Math. Oper. Res.
, 2004
"... doi 10.1287/moor.1030.0069 ..."
On Local Convergence of Sequential Quadratically-Constrained Quadratic-Programming Type Methods, with an Extension to Variational Problems
, 2005
"... We consider the class of quadraticallyconstrained quadraticprogramming methods in the framework extended from optimization to more general variational problems. Previously, in the optimization case, Anitescu (2002) showed superlinear convergence of the primal sequence under the MangasarianFromovi ..."
Abstract

Cited by 6 (4 self)
We consider the class of quadratically-constrained quadratic-programming methods in the framework extended from optimization to more general variational problems. Previously, in the optimization case, Anitescu (2002) showed superlinear convergence of the primal sequence under the Mangasarian-Fromovitz constraint qualification and the quadratic growth condition. Quadratic convergence of the primal-dual sequence was established by Fukushima, Luo and Tseng (2003) under the assumption of convexity, the Slater constraint qualification, and a strong second-order sufficient condition. We obtain a new local convergence result, which complements the above (it is neither stronger nor weaker): we prove primal-dual quadratic convergence under the linear independence constraint qualification, strict complementarity, and a second-order sufficiency condition. Additionally, our results apply to variational problems beyond the optimization case. Finally, we provide a necessary and sufficient condition for superlinear convergence of the primal sequence under a Dennis-Moré type condition. Key words: quadratically constrained quadratic programming, Karush-Kuhn-Tucker system, variational inequality, quadratic convergence, superlinear convergence, Dennis-Moré condition.
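For orientation, the subproblems that give this class its name replace the linearized constraints of SQP with full second-order models; in the standard inequality-constrained setting the step d solves (a generic sketch in our notation, not a formula from the paper):

  \min_{d} \; \nabla f(x_k)^T d + \tfrac{1}{2} d^T \nabla^2 f(x_k) d
  \quad \text{s.t.} \quad g_i(x_k) + \nabla g_i(x_k)^T d + \tfrac{1}{2} d^T \nabla^2 g_i(x_k) d \le 0, \quad i = 1, \dots, m.

Retaining the constraint curvature is what permits the sharper local convergence rates discussed above, at the cost of a harder subproblem than a QP.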
Relaxing Convergence Conditions To Improve The Convergence Rate
, 1999
"... Standard global convergence proofs are examined to determine why some algorithms perform better than other algorithms. We show that relaxing the conditions required to prove global convergence can improve an algorithm's performance. Further analysis indicates that minimizing an estimate of the dista ..."
Abstract

Cited by 3 (0 self)
Standard global convergence proofs are examined to determine why some algorithms perform better than others. We show that relaxing the conditions required to prove global convergence can improve an algorithm's performance. Further analysis indicates that minimizing an estimate of the distance to the minimum relaxes the convergence conditions in such a way as to improve an algorithm's convergence rate. A new line-search algorithm based on these ideas is presented that does not force a reduction in the objective function at each iteration, yet it allows the objective function to increase during an iteration only if this will result in faster convergence. Unlike the nonmonotone algorithms in the literature, these new functions dynamically adjust to account for changes between the influence of curvature and descent. The result is an optimal algorithm in the sense that an estimate of the distance to the minimum is minimized at each iteration. The algorithm is shown to be well defined ...
Exact Penalty Methods
 In I. Ciocco (Ed.), Algorithms for Continuous Optimization
, 1994
"... . Exact penalty methods for the solution of constrained optimization problems are based on the construction of a function whose unconstrained minimizing points are also solution of the constrained problem. In the first part of this paper we recall some definitions concerning exactness properties of ..."
Abstract

Cited by 3 (1 self)
Exact penalty methods for the solution of constrained optimization problems are based on the construction of a function whose unconstrained minimizing points are also solutions of the constrained problem. In the first part of this paper we recall some definitions concerning exactness properties of penalty functions, of barrier functions, and of augmented Lagrangian functions, and discuss under which assumptions on the constrained problem these properties can be ensured. In the second part of the paper we consider algorithmic aspects of exact penalty methods; in particular we show that, by making use of continuously differentiable functions that possess exactness properties, it is possible to define implementable algorithms that are globally convergent with a superlinear convergence rate towards KKT points of the constrained problem. 1. Introduction. "It would be a major theoretic breakthrough in nonlinear programming if a simple continuously differentiable function could be exhibited with th...
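The classical prototype of such a construction is the nondifferentiable ℓ1 exact penalty function; for the equality-constrained problem of minimizing f(x) subject to c(x) = 0 it reads (standard background, stated here for orientation rather than taken from the paper):

  P_\sigma(x) = f(x) + \sigma \sum_{i=1}^{m} |c_i(x)|,

whose unconstrained minimizers include the solutions of the constrained problem once the penalty parameter σ exceeds the largest optimal Lagrange multiplier in absolute value (under standard constraint qualifications). The paper's focus is on continuously differentiable functions with the same exactness property, which the ℓ1 prototype achieves only at the price of nondifferentiability.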
Cost Approximation Algorithms With Nonmonotone Line Searches for a General Class of Nonlinear Programs
, 1996
"... . When solving illconditioned nonlinear programs by descent algorithms, the descent requirement may induce the step lengths to become very small, thus resulting in very poor performances. Recently, suggestions have been made to circumvent this problem, among which is a class of approaches in which ..."
Abstract

Cited by 1 (1 self)
When solving ill-conditioned nonlinear programs by descent algorithms, the descent requirement may induce the step lengths to become very small, thus resulting in very poor performance. Recently, suggestions have been made to circumvent this problem, among which is a class of approaches in which the objective value may be allowed to increase temporarily. Grippo et al. [GLL91] introduce nonmonotone line searches in the class of deflected gradient methods in unconstrained differentiable optimization; this technique allows for longer steps (typically of unit length) to be taken, and is successfully applied to some ill-conditioned problems. This paper extends their nonmonotone approach and convergence results to the large class of cost approximation algorithms of Patriksson [Pat93b], and to optimization problems with both convex constraints and nondifferentiable objective functions. Key words: nondifferentiable optimization, cost approximation, nonmonotone algorithms. Abbreviated title ...
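For orientation, a minimal sketch of the nonmonotone Armijo-type acceptance test in the style of Grippo et al. [GLL91], reconstructed from its standard textbook description rather than from either paper; the function and parameter names below are ours:

import numpy as np

def nonmonotone_line_search(f, grad_f, x, d, f_history,
                            gamma=1e-4, beta=0.5, max_backtracks=30):
    """Backtracking line search with a nonmonotone (GLL-style) test:
    a step is accepted if it improves on the worst of the last few
    objective values, not necessarily the current one, so the
    objective may rise temporarily."""
    f_ref = max(f_history)        # worst objective value in the memory window
    slope = grad_f(x).dot(d)      # directional derivative; assumed negative
    alpha = 1.0                   # try the full (unit) step first
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= f_ref + gamma * alpha * slope:
            return alpha          # nonmonotone sufficient decrease holds
        alpha *= beta             # otherwise shrink the step and retry
    return alpha                  # fall back to the last trial step

Here f_history would be maintained by the caller as the last M objective values (e.g., a collections.deque with maxlen=M, appended to at every iterate); taking M = 1 recovers the classical monotone Armijo rule.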