Results 1 – 3 of 3
LARGE-SCALE LINEARLY CONSTRAINED OPTIMIZATION
, 1978
Abstract

Cited by 108 (21 self)
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques, as in the revised simplex method, with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.
An algorithm for minimization using exact second derivatives
, 1973
Abstract

Cited by 35 (0 self)
A review of the methods currently available for the minimization of a function whose first and second derivatives can be calculated shows either that the method requires the eigensolution of the Hessian or, with one exception, that a simple example can be found which causes the method to fail. In this paper one of the successful methods that requires the eigensolution is modified so that at each iteration the solution of a small number (approximately two) of systems of linear equations is required, instead of the eigenvalue calculation.
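The idea of replacing the eigendecomposition with a couple of linear solves per iteration can be illustrated with a simple shifted-Cholesky sketch (this is a generic modified-Newton device, not the specific modification proposed in the paper; the function name and shift schedule are assumptions):

```python
import numpy as np

def modified_newton_step(hess, grad, beta=1e-3, max_tries=60):
    """Compute a descent step p from (H + tau*I) p = -g.

    Rather than computing the eigensolution of H, we attempt a
    Cholesky factorization and inflate the diagonal shift tau until
    H + tau*I is positive definite.  Each attempt costs one
    factorization plus two triangular solves, so typically only a
    few linear-equation solves are needed per iteration.
    """
    tau = 0.0 if np.all(np.diag(hess) > 0) else beta
    for _ in range(max_tries):
        try:
            L = np.linalg.cholesky(hess + tau * np.eye(len(grad)))
            # Solve L (L^T p) = -g via two triangular solves.
            y = np.linalg.solve(L, -grad)
            return np.linalg.solve(L.T, y)
        except np.linalg.LinAlgError:
            # Not positive definite yet: double the shift and retry.
            tau = max(2.0 * tau, beta)
    raise RuntimeError("could not make the shifted Hessian positive definite")
```

Because H + tau*I is positive definite at the accepted shift, the returned p always satisfies p·g < 0, i.e. it is a descent direction even when the Hessian itself is indefinite.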
A new method to compute second derivatives
, 2001
Abstract
In this article we consider the problem of computing approximations to the second derivatives of functions of n variables using finite differences. We show how to derive different formulas and how to compute the errors of those approximations as functions of the increment h, both for first and second derivatives. Based upon those results we describe the method of Gill and Murray and the gradient-difference method. We also introduce a new algorithm which uses conjugate-direction methods for minimizing functions without derivatives, and present numerical comparisons with the other two methods. Finally, numerical experiments are given and the corresponding conclusions are discussed.