Results 1 - 10 of 10
Sequential Quadratic Programming
, 1995
"... this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ..."
Abstract

Cited by 121 (3 self)
this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can
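In the equality-constrained case, the SQP framework surveyed above amounts to applying Newton's method to the first-order optimality conditions: each iteration solves one linear KKT system for the step and a new multiplier estimate. A minimal sketch in Python (the test problem, names, and fixed iteration count are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical test problem: minimize x0^2 + x1^2 subject to x0 + x1 = 1,
# with solution x* = (0.5, 0.5) and multiplier lam* = -1 (for L = f + lam^T c).
def grad_f(x):
    return 2.0 * x                        # gradient of the objective

def hess_L(x, lam):
    return 2.0 * np.eye(2)                # Hessian of the Lagrangian (constant here)

def c(x):
    return np.array([x[0] + x[1] - 1.0])  # equality constraint residual

def jac_c(x):
    return np.array([[1.0, 1.0]])         # constraint Jacobian

def sqp(x, lam, iters=10):
    """Local SQP: each iteration solves the KKT system of the QP subproblem."""
    for _ in range(iters):
        H, A = hess_L(x, lam), jac_c(x)
        m = A.shape[0]
        # [H  A^T] [ p ]   [-grad_f(x)]
        # [A   0 ] [lam] = [-c(x)     ]
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        rhs = np.concatenate([-grad_f(x), -c(x)])
        sol = np.linalg.solve(K, rhs)
        p, lam = sol[: x.size], sol[x.size :]
        x = x + p
    return x, lam

x, lam = sqp(np.array([2.0, -1.0]), np.zeros(1))
```

For this convex quadratic test problem the iteration reaches the solution in a few steps; a practical SQP code would add a globalization strategy (line search or trust region) and a convergence test rather than a fixed iteration count.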
Trust-Region Interior-Point SQP Algorithms For A Class Of Nonlinear Programming Problems
 SIAM J. CONTROL OPTIM
, 1997
"... In this paper a family of trust-region interior-point SQP algorithms for the solution of a class of minimization problems with nonlinear equality constraints and simple bounds on some of the variables is described and analyzed. Such nonlinear programs arise e.g. from the discretization of optimal co ..."
Abstract

Cited by 37 (8 self)
In this paper a family of trust-region interior-point SQP algorithms for the solution of a class of minimization problems with nonlinear equality constraints and simple bounds on some of the variables is described and analyzed. Such nonlinear programs arise e.g. from the discretization of optimal control problems. The algorithms treat states and controls as independent variables. They are designed to take advantage of the structure of the problem. In particular they do not rely on matrix factorizations of the linearized constraints, but use solutions of the linearized state equation and the adjoint equation. They are well suited for large-scale problems arising from optimal control problems governed by partial differential equations. The algorithms keep strict feasibility with respect to the bound constraints by using an affine-scaling method proposed for a different class of problems by Coleman and Li and they exploit trust-region techniques for equality-constrained optimizatio...
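The affine-scaling idea credited above to Coleman and Li is what keeps the iterates strictly feasible: the scaling damps step components that move toward a nearby active bound. A hedged sketch of one common form of that scaling for bounds l <= x <= u (the function names and the fraction-to-boundary safeguard are illustrative choices, not the paper's algorithm):

```python
import numpy as np

# Sketch of a Coleman-Li-style affine scaling for bounds l <= x <= u:
# the scaling shrinks components of the step that approach an active bound.
def coleman_li_scaling(x, g, l, u):
    # distance to the bound each component of the negative-gradient step moves toward
    v = np.where(g < 0.0, u - x, x - l)
    return np.sqrt(v)                            # D = diag(sqrt(v))

def scaled_step(x, g, l, u, tau=0.995):
    """One scaled steepest-descent step, kept strictly inside the bounds."""
    d = coleman_li_scaling(x, g, l, u)
    p = -(d ** 2) * g                            # direction D^2 * (-g)
    with np.errstate(divide="ignore", invalid="ignore"):
        step_to_bound = np.where(
            p > 0.0, (u - x) / p, np.where(p < 0.0, (l - x) / p, np.inf)
        )
    alpha = min(1.0, tau * step_to_bound.min())  # fraction-to-boundary rule
    return x + alpha * p

x_new = scaled_step(
    np.array([0.5, 0.9]), np.array([1.0, -1.0]), np.zeros(2), np.ones(2)
)
```

Because tau < 1, the returned iterate stays strictly interior to the box even when the unscaled step would hit a bound exactly.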
On the Convergence Theory of Trust-Region-Based Algorithms for Equality-Constrained Optimization
, 1995
"... In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applicati ..."
Abstract

Cited by 8 (0 self)
In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applications, in particular in optimal control problems with bounds on the control. The nonlinear constraints often come from the discretization of partial differential equations. In such cases the calculation of derivative information and the solution of linearized equations is expensive. Often, the solution of linear systems and derivatives are computed inexactly, yielding nonzero residuals. This paper
On Interior-Point Newton Algorithms For Discretized Optimal Control Problems With State Constraints
 OPTIM. METHODS SOFTW
, 1998
"... In this paper we consider a class of nonlinear programming problems that arise from the discretization of optimal control problems with bounds on both the state and the control variables. For this class of problems, we analyze constraint qualifications and optimality conditions in detail. We derive ..."
Abstract

Cited by 7 (2 self)
In this paper we consider a class of nonlinear programming problems that arise from the discretization of optimal control problems with bounds on both the state and the control variables. For this class of problems, we analyze constraint qualifications and optimality conditions in detail. We derive an affine-scaling and two primal-dual interior-point Newton algorithms by applying, in an interior-point way, Newton's method to equivalent forms of the first-order optimality conditions. Under appropriate assumptions, the interior-point Newton algorithms are shown to be locally well-defined with a q-quadratic rate of local convergence. By using the structure of the problem, the linear algebra of these algorithms can be reduced to the null space of the Jacobian of the equality constraints. The similarities between the three algorithms are pointed out, and their corresponding versions for the general nonlinear programming problem are discussed.
An interior point algorithm for the general nonlinear programming problem with trust region globalization
, 1996
"... ..."
New Penalty Functions and Multipliers Method for Nonlinear Programming
"... In this paper we introduce an exact penalty function, a corresponding multipliers method, and an inexact penalty function for the solution of nonlinear programming problems. We motivate and introduce the multipliers method for a class of nonlinear programming problems where the equality constraints ..."
Abstract
In this paper we introduce an exact penalty function, a corresponding multipliers method, and an inexact penalty function for the solution of nonlinear programming problems. We motivate and introduce the multipliers method for a class of nonlinear programming problems where the equality constraints have a particular structure. The class models optimal control and engineering design problems with bounds on the state and control variables and has wide applicability. The case of general nonlinear programming is also considered. The multipliers method updates multipliers corresponding to inequality constraints (maintaining their nonnegativity) instead of dealing with multipliers associated with equality constraints. The basic local convergence properties of the method are proved and a dual framework is introduced. We also analyze the properties of the penalized problems related with the two penalty functions. Keywords. Nonlinear programming, optimal control problems, state constraints, ex...
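The classical augmented-Lagrangian multipliers method for inequality constraints keeps the multiplier estimates nonnegative by an explicit projection, lam <- max(0, lam + mu*g(x)); the entry above introduces an update with a different structure. As background only, here is a sketch of the classical projected scheme on a made-up one-dimensional problem (all names and constants are illustrative, and this is not the paper's method):

```python
# Hypothetical toy problem: minimize (x - 2)^2 subject to g(x) = x - 1 <= 0,
# with solution x* = 1 and multiplier lam* = 2.  The augmented Lagrangian is
#   L_A(x, lam; mu) = (x - 2)^2 + (max(0, lam + mu*g(x))^2 - lam^2) / (2*mu),
# whose x-derivative is 2*(x - 2) + max(0, lam + mu*g(x)) * g'(x), g'(x) = 1.

def g(x):
    return x - 1.0

def solve_inner(lam, mu, x, steps=200, lr=0.1):
    """Crude gradient descent on the (smooth) augmented Lagrangian in x."""
    for _ in range(steps):
        grad = 2.0 * (x - 2.0) + max(0.0, lam + mu * g(x))
        x -= lr * grad
    return x

def multipliers_method(mu=10.0, outer=20):
    x, lam = 0.0, 0.0
    for _ in range(outer):
        x = solve_inner(lam, mu, x)          # inner minimization
        lam = max(0.0, lam + mu * g(x))      # classical projected multiplier update
    return x, lam

x_sol, lam_sol = multipliers_method()
```

The outer loop drives (x, lam) toward the KKT pair (1, 2); the max(0, ...) projection is exactly the step the paper seeks to avoid.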
An Analysis of Newton's Method for Equivalent Karush-Kuhn-Tucker Systems
, 1999
"... In this paper we analyze the application of Newton's method to the solution of systems of nonlinear equations arising from equivalent forms of the first-order Karush-Kuhn-Tucker necessary conditions for constrained optimization. The analysis is carried out by using an abstract model for the ..."
Abstract
In this paper we analyze the application of Newton's method to the solution of systems of nonlinear equations arising from equivalent forms of the first-order Karush-Kuhn-Tucker necessary conditions for constrained optimization. The analysis is carried out by using an abstract model for the original system of nonlinear equations and for an equivalent form of this system obtained by a reformulation that appears often when dealing with first-order Karush-Kuhn-Tucker necessary conditions. The model is used to determine the quantities that bound the difference between the Newton steps corresponding to the two equivalent systems of equations. The model is sufficiently abstract to include the cases of equality-constrained optimization, minimization with simple bounds, and also a class of discretized optimal control problems. Keywords. Nonlinear programming, Newton's method, first-order Karush-Kuhn-Tucker necessary conditions. AMS subject classifications. 49M37, 90C06, 90C30 1 In...
On the Convergence Analysis of a Multipliers Method
"... This paper adds to the development of the field of augmented Lagrangian multipliers methods for general nonlinear programming by introducing a new update for multipliers corresponding to inequality constraints. The update naturally maintains the nonnegativity of the multipliers without the need for a ..."
Abstract
This paper adds to the development of the field of augmented Lagrangian multipliers methods for general nonlinear programming by introducing a new update for multipliers corresponding to inequality constraints. The update naturally maintains the nonnegativity of the multipliers without the need for a positive-orthant projection, as a result of the verification of the first-order necessary conditions for the minimization of the augmented Lagrangian penalty function. It is shown that the basic properties of local convergence of the traditional multipliers method are also valid for the proposed method. The analysis of global convergence provided here for the new method is not totally satisfactory, but it is not clear to the authors, due to the type of update, how global convergence can be ensured without imposing a priori conditions on the sequence of iterates generated by the method. Numerical results obtained for small-scale problems are included and show that the method shares some of the advantages and disadvantages of the class of augmented Lagrangian multipliers methods. Key words. Nonlinear Programming, Multipliers Methods, Augmented Lagrangian. AMS subject classifications. 49M37, 90C06, 90C30 1.
On Eigenvalue Optimization
Alexander Shapiro
"... Abstract. In this paper we study optimization problems involving eigenvalues of symmetric matrices. One of the difficulties with numerical analysis of such problems is that the eigenvalues, considered as functions of a symmetric matrix, are not differentiable at those points where they coalesce. We ..."
Abstract
Abstract. In this paper we study optimization problems involving eigenvalues of symmetric matrices. One of the difficulties with numerical analysis of such problems is that the eigenvalues, considered as functions of a symmetric matrix, are not differentiable at those points where they coalesce. We present a general framework for a smooth (differentiable) approach to such problems. It is based on the concept of transversality borrowed from differential geometry. In that framework we discuss first- and second-order optimality conditions and rates of convergence of the corresponding second-order algorithms. Finally we present some results on the sensitivity analysis of such problems. Key words. nonsmooth optimization, transversality condition, first- and second-order optimality conditions, Newton's algorithm, quadratic rate of convergence, semi-infinite programming, sensitivity analysis. AMS subject classifications.
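The nondifferentiability at coalescing eigenvalues that motivates this paper is easy to see on a 2x2 example: for A(t) = diag(1 + t, 1 - t), the largest eigenvalue is 1 + |t|, which is kinked at t = 0. A small NumPy check (illustrative, not from the paper):

```python
import numpy as np

# For A(t) = diag(1 + t, 1 - t) the largest eigenvalue is 1 + |t|: smooth for
# t != 0 but nondifferentiable at t = 0, where the two eigenvalues coalesce.
def lam_max(t):
    A = np.diag([1.0 + t, 1.0 - t])
    return np.linalg.eigvalsh(A)[-1]        # eigvalsh sorts eigenvalues ascending

h = 1e-6
right = (lam_max(h) - lam_max(0.0)) / h     # one-sided slope from the right: ~ +1
left = (lam_max(0.0) - lam_max(-h)) / h     # one-sided slope from the left:  ~ -1
```

The two one-sided difference quotients disagree, so no derivative exists at the coalescence point; the transversality framework of the paper is one way to recover a smooth formulation.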
Least-change quasi-Newton updates for equality-constrained optimization
, 1999
"... This paper investigates quasi-Newton updates for equality-constrained optimization in abstract vector spaces. Using a least-change argument we derive a class of rank-3 updates to approximations of the one-sided projection of the Hessian of the Lagrangian which keeps the symmetric part positive def ..."
Abstract
This paper investigates quasi-Newton updates for equality-constrained optimization in abstract vector spaces. Using a least-change argument we derive a class of rank-3 updates to approximations of the one-sided projection of the Hessian of the Lagrangian which keeps the symmetric part positive definite. By imposing the usual assumptions we are able to prove 1-step superlinear convergence for one of these updates. Encouraging numerical results and comparisons with other previously analyzed updates are presented. Key words. quasi-Newton update, equality-constrained optimization, superlinear convergence, variable metric method. 1. Introduction and Background. Quasi-Newton methods for nonlinear optimization problems have been studied extensively since the late 60s. While there are a number of updates and convergence analyses for the unconstrained case (see, e.g., [10] and [4]), the constrained case has only been discussed more recently, e.g., in [6], [22], [7], [19] and [14], and in Pe...
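The rank-3 update analyzed in this entry is specialized to the projected Lagrangian Hessian; as generic background on the least-change idea, the classical BFGS formula is itself the least-change symmetric secant update in a weighted Frobenius norm. A sketch (the curvature pair below is a made-up example, not the paper's update):

```python
import numpy as np

def bfgs_update(B, s, y):
    """Classical BFGS update: the least-change symmetric secant update (in a
    weighted Frobenius norm); it satisfies the secant condition B_new @ s == y
    and preserves positive definiteness whenever s^T y > 0."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# made-up curvature pair (s, y) with s^T y = 2.75 > 0, so the update is well defined
B = np.eye(2)
s = np.array([1.0, 0.5])
y = np.array([2.0, 1.5])
B_new = bfgs_update(B, s, y)
```

The update changes B only on the two-dimensional subspace spanned by B @ s and y, which is the sense in which it is "least change"; the paper's rank-3 construction plays the analogous role for its one-sided projected Hessian approximations.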