Results 11-20 of 32
Methods for nonlinear constraints in optimization calculations
 The State of the Art in Numerical Analysis
, 1996
"... ..."
Nonmonotone Trust Region Methods for Nonlinear Equality Constrained Optimization without a Penalty Function
 MATH. PROGRAM., SER. B
, 2000
"... We propose and analyze a class of penaltyfunctionfree nonmonotone trustregion methods for nonlinear equality constrained optimization problems. The algorithmic framework yields global convergence without using a merit function and allows nonmonotonicity independently for both, the constraint viol ..."
Abstract

Cited by 9 (5 self)
We propose and analyze a class of penalty-function-free nonmonotone trust-region methods for nonlinear equality constrained optimization problems. The algorithmic framework yields global convergence without using a merit function and allows nonmonotonicity independently for both the constraint violation and the value of the Lagrangian function. Similar to the Byrd-Omojokun class of algorithms, each step is composed of a quasi-normal and a tangential step. Both steps are required to satisfy a decrease condition for their respective trust-region subproblems. The proposed mechanism for accepting steps combines nonmonotone decrease conditions on the constraint violation and/or the Lagrangian function, which leads to flexibility and acceptance behavior comparable to filter-based methods. We establish the global convergence of the method. Furthermore, transition to quadratic local convergence is proved. Numerical tests are presented that confirm the robustness and efficiency of the approach.
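The quasi-normal/tangential decomposition this abstract describes can be sketched as follows. This is a hypothetical illustration only, not the paper's algorithm: the pseudoinverse quasi-normal step, the reduced Newton tangential step, and all names are illustrative choices, assuming the Hessian model `H` is positive definite on the null space of the constraint Jacobian `A`.

```python
import numpy as np

def composite_step(grad_f, c, A, H, delta, zeta=0.8):
    """One Byrd-Omojokun-style trial step (illustrative sketch)."""
    # Quasi-normal step: minimum-norm step toward the linearized
    # constraints c + A n = 0, confined to a fraction zeta of the radius.
    n = -np.linalg.pinv(A) @ c
    nn = np.linalg.norm(n)
    if nn > zeta * delta:
        n *= zeta * delta / nn

    # Basis Z for the null space of the constraint Jacobian A.
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-12))
    Z = Vt[rank:].T

    # Tangential step: reduce the Lagrangian model g^T d + 0.5 d^T H d
    # over d = n + Z u, keeping the total step inside the trust region.
    g_n = grad_f + H @ n
    Hr = Z.T @ H @ Z
    u = -np.linalg.solve(Hr, Z.T @ g_n)
    t = Z @ u
    remaining = np.sqrt(max(delta**2 - nn**2, 0.0))
    tn = np.linalg.norm(t)
    if tn > remaining:
        t *= remaining / max(tn, 1e-16)
    return n, t
```

For min x1^2 + x2^2 subject to x1 + x2 = 2 from the origin, the quasi-normal step already lands on the constrained minimizer (1, 1) and the tangential step is zero.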
Large-Scale Nonlinear Constrained Optimization: A Current Survey
, 1994
"... . Much progress has been made in constrained nonlinear optimization in the past ten years, but most largescale problems still represent a considerable obstacle. In this survey paper we will attempt to give an overview of the current approaches, including interior and exterior methods and algorithm ..."
Abstract

Cited by 9 (0 self)
Much progress has been made in constrained nonlinear optimization in the past ten years, but most large-scale problems still represent a considerable obstacle. In this survey paper we will attempt to give an overview of the current approaches, including interior and exterior methods and algorithms based upon trust regions and line searches. In addition, the importance of software, numerical linear algebra and testing will be addressed. We will try to explain why the difficulties arise, how attempts are being made to overcome them, and some of the problems that still remain. Although there will be some emphasis on the LANCELOT and CUTE projects, the intention is to give a broad picture of the state of the art. ...
SQP methods for large-scale nonlinear programming
, 1999
"... We compare and contrast a number of recent sequential quadratic programming (SQP) methods that have been proposed for the solution of largescale nonlinear programming problems. Both linesearch and trustregion approaches are considered, as are the implications of interiorpoint and quadratic progr ..."
Abstract

Cited by 9 (0 self)
We compare and contrast a number of recent sequential quadratic programming (SQP) methods that have been proposed for the solution of large-scale nonlinear programming problems. Both line-search and trust-region approaches are considered, as are the implications of interior-point and quadratic programming methods.
Inexact SQP methods for equality constrained optimization
 SIAM J. Opt
"... Abstract. We present an algorithm for largescale equality constrained optimization. The method is based on a characterization of inexact sequential quadratic programming (SQP) steps that can ensure global convergence. Inexact SQP methods are needed for largescale applications for which the iterati ..."
Abstract

Cited by 9 (5 self)
We present an algorithm for large-scale equality constrained optimization. The method is based on a characterization of inexact sequential quadratic programming (SQP) steps that can ensure global convergence. Inexact SQP methods are needed for large-scale applications for which the iteration matrix cannot be explicitly formed or factored and the arising linear systems must be solved using iterative linear algebra techniques. We address how to determine when a given inexact step makes sufficient progress toward a solution of the nonlinear program, as measured by an exact penalty function. The method is globalized by a line search. An analysis of the global convergence properties of the algorithm and numerical results are presented. Key words: large-scale optimization, constrained optimization, sequential quadratic programming, inexact linear system solvers, Krylov subspace methods. AMS subject classifications: 49M37, 65K05, 90C06, 90C30, 90C55.
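The globalization this abstract mentions, a line search on an exact penalty function, can be sketched as a backtracking loop on phi(x) = f(x) + pi*||c(x)||_1. This is a hedged sketch with illustrative names and constants; in particular, the predicted decrease below assumes the step fully corrects the linearized constraints, which an inexact step need not do.

```python
import numpy as np

def accept_step(f, c, grad_f, x, d, pi, eta=1e-4, tau=0.5, max_backtracks=30):
    """Backtracking line search on phi(x) = f(x) + pi*||c(x)||_1 (sketch)."""
    phi = lambda z: f(z) + pi * np.linalg.norm(c(z), 1)
    # Predicted decrease of phi along d from a linear model.
    pred = -(grad_f(x) @ d) + pi * np.linalg.norm(c(x), 1)
    alpha = 1.0
    for _ in range(max_backtracks):
        # Armijo-style sufficient-decrease test on the penalty function.
        if phi(x + alpha * d) <= phi(x) - eta * alpha * pred:
            return alpha
        alpha *= tau
    return 0.0  # no acceptable step length found
```

For a full SQP step on min x1^2 + x2^2 subject to x1 + x2 = 2 with a sufficiently large pi, the unit step passes the test immediately.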
On the Convergence Theory of Trust-Region-Based Algorithms for Equality-Constrained Optimization
, 1995
"... In this paper we analyze incxact trust region interior point (TRIP) sequential quadr tic programming (SOP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applicati ..."
Abstract

Cited by 8 (0 self)
In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applications, in particular in optimal control problems with bounds on the control. The nonlinear constraints often come from the discretization of partial differential equations. In such cases the calculation of derivative information and the solution of linearized equations is expensive. Often, the solution of linear systems and derivatives are computed inexactly, yielding nonzero residuals. This paper ...
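The inexact solves with nonzero residuals that this abstract describes are usually controlled by a relative-residual ("forcing term") tolerance. A minimal illustration, with hypothetical names: the solver below is plain steepest descent on the least-squares residual, standing in for the Krylov methods such codes actually use, and it stops as soon as ||J d + r|| falls below eta*||r||.

```python
import numpy as np

def inexact_solve(J, r, eta=0.1, max_iter=500):
    """Solve J d = -r only to relative residual eta (illustrative sketch)."""
    d = np.zeros(J.shape[1])
    res = J @ d + r
    tol = eta * np.linalg.norm(r)
    for _ in range(max_iter):
        if np.linalg.norm(res) <= tol:
            break                           # inexactness criterion met
        g = J.T @ res                       # gradient of 0.5*||J d + r||^2
        Jg = J @ g
        d -= (g @ g) / (Jg @ Jg) * g        # exact line search along -g
        res = J @ d + r
    return d
```

The point of the forcing term is that the returned step is cheaper than an exact solve while still accurate enough for the outer algorithm's progress tests.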
Feasibility Control in Nonlinear Optimization
 in Foundations of Computational Mathematics
, 2000
"... We analyze the properties that optimization algorithms must possess in order to prevent convergence to nonstationary points for the merit function. We show that demanding the exact satisfaction of constraint linearizations results in difficulties in a wide range of optimization algorithms. Feasi ..."
Abstract

Cited by 5 (1 self)
We analyze the properties that optimization algorithms must possess in order to prevent convergence to nonstationary points of the merit function. We show that demanding the exact satisfaction of constraint linearizations results in difficulties in a wide range of optimization algorithms. Feasibility control is a mechanism that prevents convergence to spurious solutions by ensuring that sufficient progress towards feasibility is made, even in the presence of certain rank deficiencies. The concept of feasibility control is studied in this paper in the context of Newton methods for nonlinear systems of equations and equality constrained optimization, as well as in interior methods for nonlinear programming. To appear in the proceedings of the Foundations of Computational Mathematics meeting held in Oxford, England, in July 1999.
Sequential Quadratic Programming for Large-Scale Nonlinear Optimization
, 1999
"... The sequential quadratic programming (SQP) algorithm has been one of the most successful general methods for solving nonlinear constrained optimization problems. We provide an introduction to the general method and show its relationship to recent developments in interiorpoint approaches. We emph ..."
Abstract

Cited by 5 (0 self)
The sequential quadratic programming (SQP) algorithm has been one of the most successful general methods for solving nonlinear constrained optimization problems. We provide an introduction to the general method and show its relationship to recent developments in interior-point approaches. We emphasize large-scale aspects. Key words: sequential quadratic programming, nonlinear optimization, Newton methods, interior-point methods, local convergence, global convergence. 1 Introduction. In this article we consider the general method of Sequential Quadratic Programming (hereafter denoted SQP) for solving the nonlinear programming problem

  minimize f(x)  subject to  h(x) = 0,  g(x) ≤ 0,   (NLP)

where f : R^n → R, h : R^n → R^m, and g : R^n → R^p. Broadly defined, the SQP method is a procedure that generates iterates converging ...
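At each iterate, SQP computes its step from a quadratic programming subproblem. A minimal sketch for the equality-constrained case (no inequalities), solving the subproblem's KKT system directly; it assumes the constraint Jacobian A has full row rank and the Hessian model H is positive definite on null(A), and all names are illustrative.

```python
import numpy as np

def sqp_kkt_step(g, A, c, H):
    """QP subproblem of SQP with equality constraints only:
    minimize g^T d + 0.5 d^T H d subject to A d + c = 0,
    solved via its KKT system (illustrative sketch)."""
    n, m = H.shape[0], A.shape[0]
    # KKT matrix: [[H, A^T], [A, 0]], right-hand side [-g; -c].
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([g, c]))
    return sol[:n], sol[n:]  # primal step d, multiplier estimate y
```

For a quadratic objective with a linear constraint the single step is exact: min x1^2 + x2^2 subject to x1 + x2 = 2 from the origin gives d = (1, 1) and multiplier -2.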
Steering Exact Penalty Methods
, 2004
"... This paper reviews the development of exact penalty methods for nonlinear optimization and discusses their increasingly important role in optimization algorithms and software. In their most recent stage of development, penalty methods adjust the penalty parameter dynamically. By controlling the deg ..."
Abstract

Cited by 4 (2 self)
This paper reviews the development of exact penalty methods for nonlinear optimization and discusses their increasingly important role in optimization algorithms and software. In their most recent stage of development, penalty methods adjust the penalty parameter dynamically. By controlling the degree of linear feasibility achieved at every iteration, these methods balance progress toward optimality and feasibility. The choice of the penalty parameter thus ceases to be a heuristic and is determined, instead, by a subproblem with clearly defined objectives. The new penalty update strategy is presented in the context of sequential linear-quadratic penalty methods, and is then extended to sequential quadratic programming. The paper concludes with a discussion of penalty parameters for merit functions used in line search methods.
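A dynamic penalty update of the kind this abstract describes can be sketched as follows. This is a minimal illustration with made-up names and constants, not the paper's exact test: pi is raised until the model of phi = f + pi*||c|| predicts a decrease of at least a fraction rho of pi times the step's linearized feasibility improvement.

```python
def steer_penalty(pi, obj_model_drop, feas_drop, rho=0.1, factor=10.0):
    """Raise pi until the penalty model predicts enough decrease (sketch).

    obj_model_drop: predicted drop in the objective model (may be negative),
    feas_drop: drop in the linearized constraint violation achieved by the step.
    """
    if feas_drop <= 0.0:
        return pi  # step makes no feasibility progress; handled elsewhere
    while obj_model_drop + pi * feas_drop < rho * pi * feas_drop:
        pi *= factor  # penalize infeasibility more strongly
    return pi
```

A step that trades objective decrease for feasibility forces pi up; a step that improves both leaves pi unchanged, which is the non-heuristic behavior the paper advocates.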
A Global Convergence Theory for a General Class of Trust-Region-Based Algorithms for Constrained Optimization Without Assuming Regularity
 SIAM Journal on Optimization
, 1997
"... This work presents a convergence theory for a general class of trustregionbased algorithms for solving the smooth nonlinear programming problem with equality constraints. The results are proved under very mild conditions on the quasinormal and tangential components of the trial steps. The Lagrang ..."
Abstract

Cited by 3 (0 self)
This work presents a convergence theory for a general class of trust-region-based algorithms for solving the smooth nonlinear programming problem with equality constraints. The results are proved under very mild conditions on the quasi-normal and tangential components of the trial steps. The Lagrange multiplier estimates and the Hessian estimates are assumed to be bounded. In addition, the regularity assumption is not made. In particular, the linear independence of the gradients of the constraints is not assumed. The theory proves global convergence for the class. In particular, it shows that a subsequence of the iteration sequence satisfies one of four types of Mayer-Bliss stationary conditions in the limit. This theory holds for Dennis, El-Alem, and Maciel's class of trust-region-based algorithms. Key words: nonlinear programming, equality constrained problems, constrained optimization, global convergence, regularity assumption, augmented Lagrangian, Mayer-Bliss points, stationary p...