Results 1–10 of 21
Sequential Quadratic Programming
, 1995
Abstract (Cited by 114, 2 self):
In this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ...
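The heart of an SQP iteration is a quadratic subproblem whose solution, for equality constraints, comes from one KKT linear system. A minimal sketch (the problem data and the tiny solver are illustrative, not from the paper) for min x1^2 + x2^2 subject to x1 + x2 = 1:

```python
def solve(A, b):
    """Tiny dense linear solver (Gaussian elimination with partial pivoting)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            fct = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= fct * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# One SQP step for  min x1^2 + x2^2  s.t.  x1 + x2 = 1,  starting at x = (2, -1).
x = [2.0, -1.0]
g = [2.0 * x[0], 2.0 * x[1]]      # gradient of the objective
c = x[0] + x[1] - 1.0             # constraint residual
K = [[2.0, 0.0, 1.0],             # KKT matrix [H  A^T; A  0]
     [0.0, 2.0, 1.0],
     [1.0, 1.0, 0.0]]
dx1, dx2, lam = solve(K, [-g[0], -g[1], -c])
x = [x[0] + dx1, x[1] + dx2]
print(x, lam)   # exact after one step here, since the problem is quadratic
```

Because the objective is quadratic and the constraint linear, a single step lands on the solution (0.5, 0.5) with multiplier -1; on a general nonlinear program this step is repeated, which is where the convergence theory surveyed in the paper comes in.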
Nonlinear programming algorithms using trust regions and augmented Lagrangians with nonmonotone penalty parameters
, 1997
Abstract (Cited by 21, 8 self):
A model algorithm based on the successive quadratic programming method for solving the general nonlinear programming problem is presented. The objective function and the constraints of the problem are only required to be differentiable and their gradients to satisfy a Lipschitz condition. The strategy for obtaining global convergence is based on the trust region approach. The merit function is a type of augmented Lagrangian. A new updating scheme is introduced for the penalty parameter, by means of which monotone increase is not necessary. Global convergence results are proved and numerical experiments are presented. Key words: Nonlinear programming, successive quadratic programming, trust regions, augmented Lagrangians, Lipschitz conditions. Department of Applied Mathematics, IMECC-UNICAMP, University of Campinas, CP 6065, 13081-970 Campinas SP, Brazil (chico@ime.unicamp.br). This author was supported by FAPESP (Grant 90/3724-6), FINEP and FAEP-UNICAMP. † Department of Mathematics...
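For orientation, the augmented Lagrangian framework the paper builds on alternates inner minimizations with first-order multiplier updates. The sketch below uses plain gradient descent for the inner solves and a fixed penalty parameter, i.e. it deliberately omits the trust region and the nonmonotone penalty scheme that are the paper's contributions:

```python
# Augmented Lagrangian sketch for  min x1^2 + x2^2  s.t.  x1 + x2 = 1.
# L_A(x; lam, rho) = f(x) + lam*c(x) + (rho/2)*c(x)^2  with  c(x) = x1 + x2 - 1.
lam, rho = 0.0, 10.0
x = [0.0, 0.0]
for outer in range(15):
    for inner in range(2000):                 # crude inner minimization of L_A
        c = x[0] + x[1] - 1.0
        m = lam + rho * c                     # multiplier estimate in the gradient
        gradL = [2.0 * x[0] + m, 2.0 * x[1] + m]
        x = [x[0] - 0.04 * gradL[0], x[1] - 0.04 * gradL[1]]
    lam += rho * (x[0] + x[1] - 1.0)          # first-order multiplier update
print(x, lam)   # x approaches [0.5, 0.5], lam approaches -1
```

Each outer iteration shrinks the multiplier error by roughly a factor 1/(1 + rho) on this problem, which is why the constraint violation dies out without driving rho to infinity.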
Analysis and implementation of a dual algorithm for constrained optimization
 Journal of Optimization Theory and Applications
, 1993
Abstract (Cited by 19, 3 self):
This paper analyzes a constrained optimization algorithm that combines an unconstrained minimization scheme like the conjugate gradient method, an augmented Lagrangian, and multiplier updates to obtain global quadratic convergence. Some of the issues that we focus on are the treatment of rigid constraints that must be satisfied during the iterations and techniques for balancing the error associated with constraint violation against the error associated with optimality. A preconditioner is constructed with the property that the rigid constraints are satisfied while ill-conditioning due to penalty terms is alleviated. Various numerical linear algebra techniques required for the efficient implementation of the algorithm are presented, and convergence behavior is illustrated in a series of numerical experiments.
Quadratically And Superlinearly Convergent Algorithms For The Solution Of Inequality Constrained Minimization Problems
, 1995
Abstract (Cited by 17, 6 self):
In this paper some Newton and quasi-Newton algorithms for the solution of inequality constrained minimization problems are considered. All the algorithms described produce sequences {x_k} converging q-superlinearly to the solution. Furthermore, under mild assumptions, a q-quadratic convergence rate in x is also attained. Other features of these algorithms are that only the solution of linear systems of equations is required at each iteration and that the strict complementarity assumption is never invoked. First the superlinear or quadratic convergence rate of a Newton-like algorithm is proved. Then, a simpler version of this algorithm is studied and it is shown to be superlinearly convergent. Finally, quasi-Newton versions of the previous algorithms are considered and, provided the sequence defined by the algorithms converges, a characterization of superlinear convergence extending the result of Boggs, Tolle and Wang is given. Key Words. Inequality constrained optimization, New...
A generating set direct search augmented Lagrangian algorithm for optimization with a combination of general and linear constraints
, 2006
Abstract (Cited by 9, 5 self):
We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
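A generating set (compass) search on an unconstrained toy problem shows how the step-length control parameter doubles as a stopping test: the step is contracted only when no polling direction improves, and the algorithm stops once the step falls below a tolerance. This simplified sketch omits the linear constraints and the augmented Lagrangian outer loop discussed in the abstract:

```python
def compass_search(f, x, alpha=1.0, tol=1e-6):
    """Poll +/- each coordinate direction; alpha is both the step length
    and the stopping criterion, as in step-length-based stationarity tests."""
    n = len(x)
    dirs = [[0] * i + [s] + [0] * (n - i - 1) for i in range(n) for s in (1, -1)]
    fx = f(x)
    while alpha > tol:
        for d in dirs:
            trial = [xi + alpha * di for xi, di in zip(x, d)]
            ft = f(trial)
            if ft < fx:                 # accept the first improving poll point
                x, fx = trial, ft
                break
        else:
            alpha *= 0.5                # no improving direction: contract step
    return x

x = compass_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
print(x)   # converges to the minimizer [1.0, -2.0]
```

No derivatives are evaluated anywhere; a small final alpha certifies (approximate) stationarity, which is exactly the role the paper assigns to the step-length parameter in the subproblem stopping rule.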
A Merit Function for Inequality Constrained Nonlinear Programming Problems
 Internal Report 4702, National Institute of Standards and Technology
, 1993
Abstract (Cited by 5, 4 self):
We consider the use of the sequential quadratic programming (SQP) technique for solving the inequality constrained minimization problem min_x f(x) subject to g_i(x) ≥ 0, i = 1, ..., m. SQP methods require the use of an auxiliary function, called a merit function or line-search function, for assessing the steps that are generated. We derive a merit function by adding slack variables to create an equality constrained problem and then using the merit function developed earlier by the authors for the equality constrained case. We stress that we do not solve the slack variable problem, but only use it to construct the merit function. The resulting function is simplified in a certain way that leads to an effective procedure for updating the squares of the slack variables. A globally convergent algorithm, based on this merit function, is suggested, and is demonstrated to be effective in practice. Contribution of the National Institute of Standards and Technology and not subject to copyright...
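The slack-variable device can be seen in a much simpler setting than the authors' construction. Rewriting g_i(x) ≥ 0 as g_i(x) - z_i = 0 with z_i ≥ 0 and choosing the penalty-minimizing value z_i = max(g_i(x), 0) (the "squares of the slacks") turns an equality-constrained quadratic penalty into an inequality merit function. A hedged sketch, not the paper's merit function:

```python
def merit(f, g, x, mu=0.1):
    """Quadratic-penalty merit for g_i(x) >= 0 via eliminated squared slacks."""
    z = [max(gi, 0.0) for gi in g(x)]                  # optimal slack squares
    viol = sum((gi - zi) ** 2 for gi, zi in zip(g(x), z))
    return f(x) + viol / (2.0 * mu)

f = lambda x: (x[0] - 2.0) ** 2
g = lambda x: [1.0 - x[0]]                             # constraint x0 <= 1
print(merit(f, g, [0.0]))   # feasible point: penalty term vanishes
print(merit(f, g, [2.0]))   # infeasible point: f plus 1/(2*mu) penalty
```

Only violated constraints contribute to the penalty, so the merit function decreases along steps that trade objective reduction against constraint violation, which is the property a line-search function needs.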
On the realization of the Wolfe conditions in reduced quasi-Newton methods for equality constrained optimization
 SIAM Journal on Optimization
, 1997
Abstract (Cited by 5, 0 self):
This paper describes a reduced quasi-Newton method for solving equality constrained optimization problems. A major difficulty encountered by this type of algorithm is the design of a consistent technique for maintaining the positive definiteness of the matrices approximating the reduced Hessian of the Lagrangian. A new approach is proposed in this paper. The idea is to search for the next iterate along a piecewise linear path. The path is designed so that some generalized Wolfe conditions can be satisfied. These conditions allow the algorithm to sustain the positive definiteness of the matrices from iteration to iteration by a mechanism that has turned out to be efficient in unconstrained optimization.
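The classical (weak) Wolfe conditions the paper generalizes are easy to state in code: a sufficient-decrease (Armijo) test plus a curvature test that, in quasi-Newton methods, guarantees the secant update preserves positive definiteness. A minimal unconstrained checker (names and test functions are illustrative):

```python
def wolfe_ok(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the weak Wolfe conditions for step length alpha along d."""
    phi0 = f(x)
    dphi0 = sum(gi * di for gi, di in zip(grad(x), d))   # directional derivative
    xn = [xi + alpha * di for xi, di in zip(x, d)]
    armijo = f(xn) <= phi0 + c1 * alpha * dphi0          # sufficient decrease
    curvature = sum(gi * di for gi, di in zip(grad(xn), d)) >= c2 * dphi0
    return armijo and curvature

f = lambda x: x[0] ** 2
grad = lambda x: [2 * x[0]]
print(wolfe_ok(f, grad, [1.0], [-1.0], 0.5))    # step to the minimizer: accepted
print(wolfe_ok(f, grad, [1.0], [-1.0], 1e-8))   # tiny step fails curvature test
```

The curvature condition is what rules out vanishingly small steps; satisfying both along a constrained path is the nontrivial part the paper addresses with its piecewise linear search.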
A primal-dual augmented Lagrangian
 Computational Optimization and Applications
, 2010
Abstract (Cited by 5, 1 self):
Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we discuss the formulation of subproblems in which the objective is a primal-dual generalization of the Hestenes-Powell augmented Lagrangian function. This generalization has the crucial feature that it is minimized with respect to both the primal and the dual variables simultaneously. A benefit of this approach is that the quality of the dual variables is monitored explicitly during the solution of the subproblem. Moreover, each subproblem may be regularized by imposing explicit bounds on the dual variables. Two primal-dual variants of conventional primal methods are proposed: a primal-dual bound constrained Lagrangian (pdBCL) method and a primal-dual ℓ1 linearly constrained Lagrangian (pdℓ1-LCL) method. Key words. Nonlinear programming, nonlinear inequality constraints, augmented Lagrangian methods, bound constrained Lagrangian methods, linearly constrained Lagrangian methods, primal-dual methods. AMS subject classifications. 49J20, 49J15, 49M37, 49D37, 65F05, 65K05, 90C30
A Nonlinear Programming Perspective on Sensitivity Calculations for Systems Governed By State Equations
, 1997
Abstract (Cited by 4, 2 self):
This paper discusses the calculation of sensitivities, or derivatives, for optimization problems involving systems governed by differential equations and other state relations. The subject is examined from the point of view of nonlinear programming, beginning with the analytical structure of the first and second derivatives associated with such problems and the relation of these derivatives to implicit differentiation and equality constrained optimization. We also outline an error analysis of the analytical formulae and compare the results with similar results for finite-difference estimates of derivatives. We then investigate the nature of the adjoint method and the adjoint equations and their relation to directions of steepest descent. We illustrate the points discussed with an optimization problem in which the variables are the coefficients in a differential operator. This research was supported by the National Aeronautics and Space Administration under NASA Contra...
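The adjoint calculation is visible even in a scalar toy problem. For a state relation r(u, p) = 0 and objective J(u), one solves (∂r/∂u)ᵀ w = ∂J/∂u and gets dJ/dp = -wᵀ (∂r/∂p). A hedged sketch with state equation p·u = 1 and J(u) = u², so the analytic answer is dJ/dp = -2/p³:

```python
def adjoint_sensitivity(p):
    """dJ/dp via the adjoint method for r(u, p) = p*u - 1 = 0, J(u) = u**2."""
    u = 1.0 / p          # solve the state equation p*u = 1
    dJdu = 2.0 * u       # partial of J with respect to the state
    w = dJdu / p         # adjoint solve: (dr/du)^T w = dJ/du, here dr/du = p
    drdp = u             # partial of the residual with respect to p
    return -w * drdp     # total sensitivity dJ/dp

p = 2.0
print(adjoint_sensitivity(p))   # matches the analytic value -2/p**3
```

With one state variable the adjoint and direct approaches cost the same; the adjoint route pays off when there are many parameters p but few objectives, since a single adjoint solve yields the entire gradient.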
Preconditioned Techniques For Large Eigenvalue Problems
, 1997
Abstract (Cited by 4, 3 self):
This research focuses on finding a large number of eigenvalues and eigenvectors of a sparse symmetric or Hermitian matrix, for example, finding 1000 eigenpairs of a 100,000 × 100,000 matrix. These eigenvalue problems are challenging because the matrix size is too large for traditional QR-based algorithms and the number of desired eigenpairs is too large for most common sparse eigenvalue algorithms. In this thesis, we approach this problem in two steps. First, we identify a sound preconditioned eigenvalue procedure for computing multiple eigenpairs. Second, we improve the basic algorithm through new preconditioning schemes and spectrum transformations. Through careful analysis, we see that both the Arnoldi and Davidson methods have an appropriate structure for computing a large number of eigenpairs with preconditioning. We also study three variations of these two basic algorithms. Without preconditioning, these methods are mathematically equivalent but they differ in numerical stab...
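To see in miniature what "computing multiple eigenpairs" means, the crudest scheme is power iteration with deflation: once an eigenpair is found, its direction is projected out so the iteration converges to the next one. This is only an illustration of the multiple-eigenpair bookkeeping, not the preconditioned Davidson or Arnoldi procedures the thesis studies:

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_eig(A, deflate=(), iters=1000):
    """Dominant eigenpair of symmetric A, after deflating known eigenvectors."""
    v = [1.0] + [0.5] * (len(A) - 1)
    for _ in range(iters):
        for u in deflate:                       # project out found eigenvectors
            d = sum(vi * ui for vi, ui in zip(v, u))
            v = [vi - d * ui for vi, ui in zip(v, u)]
        w = matvec(A, v)
        nrm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / nrm for wi in w]
    lam = sum(vi * wi for vi, wi in zip(v, matvec(A, v)))   # Rayleigh quotient
    return lam, v

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]                 # eigenvalues 3 + sqrt(3), 3, 3 - sqrt(3)
l1, v1 = power_eig(A)
l2, v2 = power_eig(A, deflate=[v1])
print(l1, l2)                          # the two largest eigenvalues
```

Deflation scales poorly as the number of wanted eigenpairs grows, which is one motivation for the subspace (Arnoldi/Davidson) and preconditioning machinery the thesis develops.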