Results 1-10 of 16
Sequential Quadratic Programming
1995. Cited by 114 (2 self).
In this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ...
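The framework this survey describes can be illustrated on the equality-constrained case, where one SQP iteration reduces to a Newton step on the KKT conditions of the quadratic subproblem. A minimal sketch, with a toy problem chosen purely for illustration (none of the names below come from the paper):

```python
import numpy as np

def sqp_step(x, grad_f, hess_f, c, jac_c):
    """One local SQP (Newton-KKT) step for min f(x) s.t. c(x) = 0.
    Solves the KKT system [B A^T; A 0][p; lam] = [-grad_f; -c]."""
    B = hess_f(x)
    A = jac_c(x)
    g = grad_f(x)
    h = c(x)
    n, m = B.shape[0], A.shape[0]
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -h])
    sol = np.linalg.solve(K, rhs)
    p, lam = sol[:n], sol[n:]
    return x + p, lam

# illustrative problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1
x, lam = sqp_step(
    np.array([1.0, 1.0]),
    grad_f=lambda x: 2.0 * x,
    hess_f=lambda x: 2.0 * np.eye(2),
    c=lambda x: np.array([x[0] + x[1] - 1.0]),
    jac_c=lambda x: np.array([[1.0, 1.0]]),
)
print(x)  # one step reaches the minimizer here: [0.5 0.5]
```

Because the toy problem is a quadratic with a linear constraint, a single step lands on the constrained minimizer; for general nonlinear problems the step is repeated and safeguarded by a merit function or trust region, as the papers below discuss.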
A Global Convergence Theory for General Trust-Region-Based Algorithms for Equality Constrained Optimization
SIAM Journal on Optimization, 1992. Cited by 42 (10 self).
This work presents a global convergence theory for a broad class of trust-region algorithms for the smooth nonlinear programming problem with equality constraints. The main result generalizes Powell's 1975 result for unconstrained trust-region algorithms.
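The trust-region mechanism common to this class of algorithms can be sketched for the unconstrained case Powell treated: a model step is accepted or rejected by comparing actual to predicted reduction, and the radius is adjusted accordingly. A minimal sketch using the Cauchy point; all constants are generic textbook choices, not this paper's:

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Minimizer of the quadratic model along -g within ||p|| <= delta."""
    gBg = g @ B @ g
    gn = np.linalg.norm(g)
    tau = 1.0 if gBg <= 0 else min(1.0, gn**3 / (delta * gBg))
    return -tau * (delta / gn) * g

def trust_region(f, grad, hess, x, delta=1.0, eta=0.1, tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        B = hess(x)
        p = cauchy_point(g, B, delta)
        pred = -(g @ p + 0.5 * p @ B @ p)   # reduction predicted by the model
        ared = f(x) - f(x + p)              # actual reduction
        rho = ared / pred
        if rho < 0.25:
            delta *= 0.25                   # poor agreement: shrink the region
        elif rho > 0.75 and abs(np.linalg.norm(p) - delta) < 1e-12:
            delta *= 2.0                    # good agreement at the boundary: expand
        if rho > eta:
            x = x + p                       # accept the step
    return x

x = trust_region(
    f=lambda x: (x - 1.0) @ (x - 1.0),
    grad=lambda x: 2.0 * (x - 1.0),
    hess=lambda x: 2.0 * np.eye(2),
    x=np.array([0.0, 0.0]),
)
```

The equality-constrained algorithms the paper analyzes add a constraint-reduction requirement on each step, but the accept/shrink/expand logic above is the shared skeleton.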
On the implementation of an algorithm for large-scale equality constrained optimization
SIAM Journal on Optimization, 1998. Cited by 38 (11 self).
This paper describes a software implementation of Byrd and Omojokun's trust-region algorithm for solving nonlinear equality constrained optimization problems. The code is designed for the efficient solution of large problems and provides the user with a variety of linear algebra techniques for solving the subproblems occurring in the algorithm. Second derivative information can be used, but when it is not available, limited-memory quasi-Newton approximations are made. The performance of the code is studied using a set of difficult test problems from the CUTE collection.
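The normal/tangential decomposition at the heart of Byrd and Omojokun's algorithm can be sketched with dense linear algebra. The production code described in the paper uses iterative solvers for large problems; this toy version is only illustrative, and it omits the trust-region restriction on the tangential step:

```python
import numpy as np

def byrd_omojokun_step(g, B, A, h, delta, zeta=0.8):
    """One simplified Byrd-Omojokun trust-region SQP step for
    min f s.t. c(x) = 0, given gradient g, Hessian B, constraint
    Jacobian A, and constraint values h at the current point."""
    # Normal step: reduce ||h + A v|| via the minimum-norm least-squares
    # solution, scaled back into a fraction zeta of the trust region.
    v = -np.linalg.pinv(A) @ h
    nv = np.linalg.norm(v)
    if nv > zeta * delta:
        v *= zeta * delta / nv
    # Tangential step: minimize the quadratic model over null(A),
    # which preserves the constraint reduction of the normal step.
    _, _, vt = np.linalg.svd(A)
    Z = vt[A.shape[0]:].T                  # orthonormal basis of null(A)
    u = np.linalg.solve(Z.T @ B @ Z, -Z.T @ (g + B @ v))
    return v + Z @ u

# illustrative use: min x1^2 + x2^2  s.t.  x1 + x2 = 1, from x = (2, 0)
x = np.array([2.0, 0.0])
p = byrd_omojokun_step(
    g=2.0 * x,
    B=2.0 * np.eye(2),
    A=np.array([[1.0, 1.0]]),
    h=np.array([x[0] + x[1] - 1.0]),
    delta=10.0,
)
print(x + p)  # -> [0.5 0.5], the constrained minimizer
```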
A Practical Algorithm for General Large Scale Nonlinear Optimization Problems
SIAM Journal on Optimization, 1994. Cited by 22 (10 self).
We provide an effective and efficient implementation of a sequential quadratic programming (SQP) algorithm for the general large-scale nonlinear programming problem. In this algorithm the quadratic programming subproblems are solved by an interior-point method that can be prematurely halted by a trust-region constraint. Numerous computational enhancements to improve the numerical performance are presented. These include a dynamic procedure for adjusting the merit function parameter and procedures for adjusting the trust-region radius. Numerical results and comparisons are presented.
Key words: nonlinear programming, interior point, SQP, merit function, trust region, large scale.
1. Introduction. In a series of recent papers, [3], [6], and [8], the authors have developed a new algorithmic approach for solving large, nonlinear, constrained optimization problems. This proposed procedure is, in essence, a sequential quadratic programming (SQP) method that uses an interior-point algorithm ...
Nonlinear programming algorithms using trust regions and augmented Lagrangians with nonmonotone penalty parameters
1997. Cited by 21 (8 self).
A model algorithm based on the successive quadratic programming method for solving the general nonlinear programming problem is presented. The objective function and the constraints of the problem are only required to be differentiable and their gradients to satisfy a Lipschitz condition. The strategy for obtaining global convergence is based on the trust-region approach. The merit function is a type of augmented Lagrangian. A new updating scheme is introduced for the penalty parameter, by means of which monotone increase is not necessary. Global convergence results are proved and numerical experiments are presented.
Key words: nonlinear programming, successive quadratic programming, trust regions, augmented Lagrangians, Lipschitz conditions.
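The merit function described here has the standard augmented Lagrangian form. A minimal sketch of evaluating it, together with a hypothetical nonmonotone update rule; the paper's actual scheme is more refined, and the rule below only illustrates the point that the penalty parameter need not increase monotonically:

```python
import numpy as np

def aug_lag_merit(f, h, x, lam, rho):
    """Augmented Lagrangian merit: f(x) + lam . h(x) + (rho/2)||h(x)||^2."""
    hv = h(x)
    return f(x) + lam @ hv + 0.5 * rho * (hv @ hv)

def nonmonotone_penalty(rho, feas_improved, factor=2.0, rho_min=1.0):
    """Hypothetical update: raise rho when constraint violation stalls,
    but let it come back down once feasibility improves (classical
    schemes only ever increase the penalty)."""
    if not feas_improved:
        return factor * rho
    return max(rho_min, rho / factor)

# illustrative evaluation: f(x) = ||x||^2, constraint x1 - 1 = 0
phi = aug_lag_merit(
    f=lambda x: x @ x,
    h=lambda x: np.array([x[0] - 1.0]),
    x=np.array([2.0, 0.0]),
    lam=np.array([1.0]),
    rho=2.0,
)
print(phi)  # 4 + 1 + 1 = 6.0
```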
Inexact-Restoration Algorithm for Constrained Optimization
Journal of Optimization Theory and Applications, 1999. Cited by 19 (6 self).
We introduce a new model algorithm for solving nonlinear programming problems. No slack variables are introduced for dealing with inequality constraints. Each iteration of the method proceeds in two phases. In the first phase, feasibility of the current iterate is improved, and in the second phase the objective function value is reduced in an approximate feasible set. The point that results from the second phase is compared with the current point using a nonsmooth merit function that combines feasibility and optimality. This merit function includes a penalty parameter that changes between different iterations. A suitable updating procedure for this penalty parameter is included, by means of which it can be increased or decreased along different iterations. The conditions for feasibility improvement at the first phase and for optimality improvement at the second phase are mild, and large-scale implementations of the resulting method are possible. We prove that, under suitable conditions, ...
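The two-phase structure can be made concrete on a toy problem: minimize x1 on the circle x1^2 + x2^2 = 1. The restoration phase below projects onto the circle and the optimality phase moves along its tangent; the fixed step size and merit weighting are ad hoc choices for illustration, not the paper's conditions:

```python
import numpy as np

def restore(x):
    # Phase 1: improve feasibility (here: exact projection onto ||x|| = 1)
    return x / np.linalg.norm(x)

def tangent_step(y, t=0.5):
    # Phase 2: reduce f(x) = x[0] along the tangent of the circle at y
    d = np.array([-y[1], y[0]])
    if d[0] > 0:            # orient the tangent so f decreases
        d = -d
    return y + t * d

def merit(x, theta=0.5):
    # Nonsmooth merit combining optimality (f) and feasibility
    return theta * x[0] + (1.0 - theta) * abs(x @ x - 1.0)

x = np.array([2.0, 1.0])
for _ in range(50):
    z = tangent_step(restore(x))
    if merit(z) < merit(x):
        x = z               # accept: merit improved
    else:
        break               # a real method would shrink t and retry
print(restore(x))  # close to the solution (-1, 0)
```

With the fixed step size the iterates stall slightly short of the minimizer; the algorithms in the paper adapt the phase-2 step and the penalty parameter to drive convergence.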
Multilevel Algorithms for Nonlinear Optimization
in Optimal Design and Control, 1994. Cited by 11 (5 self).
Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel ...
Methods for nonlinear constraints in optimization calculations
in The State of the Art in Numerical Analysis, 1996.
A Two-Phase Model Algorithm with Global Convergence for Nonlinear Programming
Journal of Optimization Theory and Applications, 1998. Cited by 7 (4 self).
The family of feasible methods for minimization with nonlinear constraints includes Rosen's Nonlinear Projected Gradient Method, the Generalized Reduced Gradient Method (GRG), and many variants of the Sequential Gradient Restoration Algorithm (SGRA). Generally speaking, a particular iteration of any of these methods proceeds in two phases. In the Restoration Phase, feasibility is restored by means of the resolution of an auxiliary nonlinear problem, generally a nonlinear system of equations. In the Minimization Phase, optimality is improved by means of the consideration of the objective function, or its Lagrangian, on the tangent subspace to the constraints. In this paper, minimal assumptions are stated on the Restoration Phase and the Minimization Phase that ensure that the resulting algorithm is globally convergent. The key point is the possibility of comparing two successive nonfeasible iterates by means of a suitable merit function that combines feasibility and optimality. The merit ...
Sequential Quadratic Programming for Large-Scale Nonlinear Optimization
1999. Cited by 5 (0 self).
The sequential quadratic programming (SQP) algorithm has been one of the most successful general methods for solving nonlinear constrained optimization problems. We provide an introduction to the general method and show its relationship to recent developments in interior-point approaches. We emphasize large-scale aspects.
Key words: sequential quadratic programming, nonlinear optimization, Newton methods, interior-point methods, local convergence, global convergence.
Contribution of Sandia National Laboratories and not subject to copyright in the United States. Preprint submitted to Elsevier, 1 July 1999.
1. Introduction. In this article we consider the general method of Sequential Quadratic Programming (hereafter denoted SQP) for solving the nonlinear programming problem

    minimize f(x)  subject to  h(x) = 0,  g(x) <= 0        (NLP)

where f : R^n -> R, h : R^n -> R^m, and g : R^n -> R^p. Broadly defined, the SQP method is a procedure that generates iterates converging ...