Results 1 - 4 of 4
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
1997
"... Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first deriv ..."
Cited by 328 (18 self)

Abstract:
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse.
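As background for how such methods work (a textbook-form sketch of the generic SQP subproblem, not SNOPT's exact formulation): at an iterate x_k, a search direction p_k is obtained by solving a quadratic program built from the objective gradient and the linearized constraints,

\[
\min_{p} \; \nabla f(x_k)^\top p + \tfrac{1}{2}\, p^\top H_k\, p
\qquad \text{subject to} \qquad c(x_k) + J(x_k)\, p \ge 0,
\]

where H_k approximates the Hessian of the Lagrangian and J(x_k) is the constraint Jacobian, sparse under the assumption stated above.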
Large-Scale Linearly Constrained Optimization
1978
"... An algorithm for solving largescale nonlinear ' programs with linear constraints is presented. The method combines efficient sparsematrix techniques as in the revised simplex method with stable quasiNewton methods for handling the nonlinearities. A generalpurpose production code (MINOS) is descr ..."
Cited by 75 (11 self)

Abstract:
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques as in the revised simplex method with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.
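For orientation, a standard statement of this problem class (notation assumed here, not quoted from the paper) is

\[
\min_{x} \; f(x) \qquad \text{subject to} \qquad A x = b, \quad \ell \le x \le u,
\]

with f smooth and nonlinear and A large and sparse; the revised-simplex machinery exploits the sparsity of A, while the quasi-Newton updates handle the nonlinearity of f.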
Automatic Decrease of the Penalty Parameter in Exact Penalty Function Methods
European Journal of Operational Research
1995
"... This paper presents an analysis of the involvement of the penalty parameter in exact penalty function methods that yields modifications to the standard outer loop which decreases the penalty parameter (typically dividing it by a constant). The procedure presented is based on the simple idea of makin ..."
Cited by 5 (0 self)

Abstract:
This paper presents an analysis of the involvement of the penalty parameter in exact penalty function methods that yields modifications to the standard outer loop, which decreases the penalty parameter (typically dividing it by a constant). The procedure presented is based on the simple idea of making explicit the dependence of the penalty function upon the penalty parameter, and is illustrated on a linear programming problem with the l1 exact penalty function and an active-set approach. The procedure decreases the penalty parameter, when needed, to the maximal value allowing the inner minimization algorithm to leave the current iterate. It moreover avoids unnecessary calculations in the iteration following the step in which the penalty parameter is decreased. We report on preliminary computational results which show that this method can require fewer iterations than the standard way to update the penalty parameter. This approach permits a better understanding of the performance of exact ...
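To make the role of the penalty parameter concrete, one common convention (assumed here for illustration; the paper's own notation may differ) writes the l1 exact penalty function for the linear program min c^T x subject to Ax >= b as

\[
P_\mu(x) \;=\; \mu\, c^\top x \;+\; \sum_{i} \max\bigl(0,\; b_i - a_i^\top x\bigr),
\]

where \mu > 0 is the penalty parameter. For all sufficiently small \mu, the minimizers of P_\mu coincide with the solutions of the linear program (exactness), so the outer loop's job is to drive \mu below that threshold; the procedure above decreases it only as far as needed to let the inner minimization make progress from the current iterate.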
An Analysis of a Class of Neural Networks for Solving Linear Programming Problems
IEEE Trans. Auto. Contr.
1995
"... Abstract — A class of neural networks that solve linear programming problems is analyzed. The neural networks considered are modeled by dynamic gradient systems that are constructed using a parametric family of exact (nondifferentiable) penalty functions. It is proved that for a given linear program ..."
Cited by 4 (2 self)

Abstract:
A class of neural networks that solve linear programming problems is analyzed. The neural networks considered are modeled by dynamic gradient systems that are constructed using a parametric family of exact (nondifferentiable) penalty functions. It is proved that for a given linear programming problem and sufficiently large penalty parameters, any trajectory of the neural network converges in finite time to its solution set. For the analysis, Lyapunov-type theorems are developed for finite-time convergence of nonsmooth sliding-mode dynamic systems to invariant sets. The results are illustrated via numerical simulation examples.

Index Terms: Invariant sets, linear programming, neural networks, nondifferentiable optimization, penalty functions, sliding modes.
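A hedged sketch of the underlying idea (a toy forward-Euler simulation of subgradient descent on an exact penalty function for a made-up two-variable LP; the data, step size, and penalty weight are illustrative assumptions, not the paper's network model):

    import numpy as np

    # Hypothetical toy LP: minimize c^T x  subject to  A x >= b.
    c = np.array([1.0, 2.0])
    A = np.array([[1.0, 1.0],
                  [1.0, 0.0],
                  [0.0, 1.0]])
    b = np.array([2.0, 0.5, 0.0])

    K = 10.0    # penalty weight, assumed large enough for exactness
    dt = 1e-3   # Euler step for integrating the gradient system
    x = np.array([5.0, 5.0])  # arbitrary starting point

    for _ in range(20_000):
        viol = np.maximum(0.0, b - A @ x)   # constraint violations
        active = (viol > 0).astype(float)   # subgradient of max(0, .)
        grad = c - K * (A.T @ active)       # subgradient of penalized objective
        x = x - dt * grad                   # Euler step of x' = -grad

    print(x)  # lands near (2, 0), the solution of this toy LP

For a large enough penalty weight, the trajectory reaches the feasible region in finite time and then slides along the active constraints toward the solution set; the small chattering around the constraint boundaries in the discrete simulation is the numerical trace of the sliding mode analyzed in the paper.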