Results 1 - 7 of 7
Sequential Quadratic Programming
, 1995
Abstract

Cited by 117 (3 self)
... this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ...
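The SQP framework surveyed above computes each step by solving a quadratic programming subproblem. A minimal sketch of one such step for an equality-constrained problem, where the QP subproblem reduces to a single KKT linear solve (the problem data below are a hypothetical toy example, not taken from the paper):

```python
import numpy as np

# Toy problem: minimize f(x) = x0^2 + x1^2
# subject to the equality constraint c(x) = x0 + x1 - 1 = 0.
def f_grad(x):
    return 2.0 * x               # gradient of f
def f_hess(x):
    return 2.0 * np.eye(2)       # Hessian of f (constant here)
def c_val(x):
    return np.array([x[0] + x[1] - 1.0])
def c_jac(x):
    return np.array([[1.0, 1.0]])

def sqp_step(x):
    """One SQP step: solve the KKT system of the QP subproblem
         [H  A^T] [p  ]   [-g]
         [A   0 ] [lam] = [-c]
       (equality-constrained case, so the QP is a linear solve)."""
    g, H = f_grad(x), f_hess(x)
    c, A = c_val(x), c_jac(x)
    m = A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -c])
    sol = np.linalg.solve(K, rhs)
    p = sol[:2]                  # step; sol[2:] are the multipliers
    return x + p

x = np.array([2.0, 0.0])
for _ in range(5):
    x = sqp_step(x)
# For this convex quadratic problem the first step already lands on
# the solution (0.5, 0.5).
```

In practice the Hessian H is replaced by a quasi-Newton approximation and the step is safeguarded by a line search or trust region, which is where most of the theory discussed in the paper enters.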
A Global Convergence Theory for General Trust-Region-Based Algorithms for Equality Constrained Optimization
 SIAM Journal on Optimization
, 1992
Abstract

Cited by 44 (10 self)
This work presents a global convergence theory for a broad class of trust-region algorithms for the smooth nonlinear programming problem with equality constraints. The main result generalizes Powell's 1975 result for unconstrained trust-region algorithms.
Nonlinear programming algorithms using trust regions and augmented Lagrangians with nonmonotone penalty parameters
, 1997
Abstract

Cited by 20 (8 self)
A model algorithm based on the successive quadratic programming method for solving the general nonlinear programming problem is presented. The objective function and the constraints of the problem are only required to be differentiable and their gradients to satisfy a Lipschitz condition. The strategy for obtaining global convergence is based on the trust region approach. The merit function is a type of augmented Lagrangian. A new updating scheme is introduced for the penalty parameter, by means of which monotone increase is not necessary. Global convergence results are proved and numerical experiments are presented.
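The merit function described above is an augmented Lagrangian. A sketch of its evaluation, together with a simple nonmonotone penalty update in the spirit the abstract describes (the paper's actual update rule is more elaborate; the heuristic thresholds below are illustrative assumptions):

```python
import numpy as np

# Augmented-Lagrangian merit function
#   L_A(x, lam; rho) = f(x) + lam . c(x) + (rho / 2) * ||c(x)||^2
def merit(f, c, x, lam, rho):
    cx = c(x)
    return f(x) + lam @ cx + 0.5 * rho * (cx @ cx)

def update_rho(rho, cnorm_old, cnorm_new,
               increase=10.0, decrease=0.5, floor=1.0):
    """Heuristic: raise rho when feasibility stalls, and let it shrink
    when the constraints improve fast enough -- the point of a
    *nonmonotone* scheme is that rho need not increase monotonically."""
    if cnorm_new > 0.25 * cnorm_old:
        return rho * increase
    return max(rho * decrease, floor)

# Toy data: at the feasible point (0.5, 0.5), c(x) = 0, so the merit
# value equals f(x) = 0.5 for any rho.
f = lambda x: x[0] ** 2 + x[1] ** 2
c = lambda x: np.array([x[0] + x[1] - 1.0])
x, lam = np.array([0.5, 0.5]), np.array([-1.0])
```

Allowing the penalty parameter to decrease avoids the ill-conditioning that a monotonically increasing penalty can cause, which is the practical motivation the abstract points to.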
Practical aspects of variable reduction formulations and reduced basis algorithms in multidisciplinary design optimization
 in Multidisciplinary Design Optimization: State-of-the-Art, N. Alexandrov and
, 1997
Abstract

Cited by 12 (7 self)
This paper discusses certain connections between nonlinear programming algorithms and the formulation of optimization problems for systems governed by state constraints. I work through the calculation of the sensitivities associated with the different formulations and present some useful relationships between them. These relationships have practical consequences; if one uses a reduced basis nonlinear programming algorithm, then the implementations for the different formulations need only differ in a single step.
Convergence to a Second-Order Point of a Trust-Region Algorithm with a Nonmonotonic Penalty Parameter for Constrained Optimization
 Rice University
, 1996
Abstract

Cited by 2 (0 self)
In a recent paper, the author (Ref. 1) proposed a trust-region algorithm for solving the problem of minimizing a nonlinear function subject to a set of equality constraints. The main feature of the algorithm is that the penalty parameter in the merit function can be decreased whenever it is warranted. He studied the behavior of the penalty parameter and proved several global and local convergence results. One of these results is that there exists a subsequence of the iterates generated by the algorithm that converges to a point satisfying the first-order necessary conditions. In the current paper, we show that, for this algorithm, there exists a subsequence of iterates that converges to a point satisfying both the first-order and the second-order necessary conditions. Key Words: Constrained optimization, equality constrained, penalty parameter, nonmonotonic penalty parameter, convergence, trust-region methods, first-order point, second-order point, necessary conditions.
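For equality-constrained problems, the second-order necessary condition the abstract refers to requires the Hessian of the Lagrangian to be positive semidefinite on the null space of the constraint Jacobian. A small sketch of that check on hypothetical toy data:

```python
import numpy as np

def second_order_necessary(W, A, tol=1e-10):
    """W: Hessian of the Lagrangian; A: constraint Jacobian (m x n).
    Returns True if the reduced Hessian Z^T W Z is positive
    semidefinite, where the columns of Z form an orthonormal basis
    for the null space of A."""
    # Null-space basis via SVD: rows of Vt beyond rank(A) span null(A).
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    Z = Vt[rank:].T
    if Z.shape[1] == 0:
        return True                       # trivial null space
    reduced = Z.T @ W @ Z
    return bool(np.min(np.linalg.eigvalsh(reduced)) >= -tol)

W = 2.0 * np.eye(2)            # Lagrangian Hessian for f(x) = ||x||^2
A = np.array([[1.0, 1.0]])     # Jacobian of c(x) = x0 + x1 - 1
# The reduced Hessian here is the 1x1 matrix [2], so the check passes.
```

A first-order point can fail this test (e.g. a saddle of the Lagrangian restricted to the feasible set), which is why convergence to a second-order point is the stronger guarantee.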
Enlarging the Region of Convergence of Newton's Method for Constrained Optimization
, 1982
Abstract

Cited by 1 (0 self)
In this paper, we consider Newton's method for solving the system of necessary optimality conditions of optimization problems with equality and inequality constraints. The principal drawbacks of the method are the need for a good starting point, the inability to distinguish between local maxima and local minima, and, when inequality constraints are present, the necessity to solve a quadratic programming problem at each iteration. We show that all these drawbacks can be overcome to a great extent without sacrificing the superlinear convergence rate by making use of exact differentiable penalty functions introduced by Di Pillo and Grippo (Ref. 1). We also show that there is a close relationship between the class of penalty functions of Di Pillo and Grippo and the class of Fletcher (Ref. 2), and that the region of convergence of a variation of Newton's method can be enlarged by making use of one of Fletcher's penalty functions.
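Fletcher's class of exact differentiable penalty functions, which the abstract relates to that of Di Pillo and Grippo, replaces the multipliers with a least-squares estimate computed from the point itself. A sketch of one common form (sign conventions vary across references, and the toy problem data are assumptions for illustration):

```python
import numpy as np

# Fletcher-style exact penalty,
#   phi(x) = f(x) - lam(x) . c(x) + (sigma / 2) * ||c(x)||^2,
# with lam(x) the least-squares multiplier estimate at x.
def ls_multipliers(g, A):
    """Least-squares multipliers: argmin over lam of ||g - A^T lam||."""
    return np.linalg.lstsq(A.T, g, rcond=None)[0]

def fletcher_penalty(f, grad_f, c, jac_c, x, sigma):
    g, A, cx = grad_f(x), jac_c(x), c(x)
    lam = ls_multipliers(g, A)
    return f(x) - lam @ cx + 0.5 * sigma * (cx @ cx)

# Toy problem: f(x) = x0^2 + x1^2, c(x) = x0 + x1 - 1 = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
grad_f = lambda x: 2.0 * x
c = lambda x: np.array([x[0] + x[1] - 1.0])
jac_c = lambda x: np.array([[1.0, 1.0]])
# At the constrained minimizer (0.5, 0.5) the constraint vanishes,
# so phi equals f(x) = 0.5 for any sigma.
val = fletcher_penalty(f, grad_f, c, jac_c, np.array([0.5, 0.5]),
                       sigma=10.0)
```

Because such a penalty is differentiable and exact for sufficiently large sigma, unconstrained descent on phi can distinguish minima from maxima and avoids solving a QP at every iteration, which is the mechanism the abstract exploits to enlarge Newton's region of convergence.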