SNOPT: An SQP Algorithm For Large-Scale Constrained Optimization
, 2002
Abstract
Cited by 597 (24 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse. We discuss ...
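The core of an SQP iteration is solving a quadratic programming subproblem built from the Lagrangian Hessian and linearised constraints. The following sketch (a toy equality-constrained problem chosen for illustration, not SNOPT's actual algorithm) shows one such step via the QP's KKT system:

```python
import numpy as np

# One SQP iteration: linearise the constraint, build a QP with the Hessian
# of the Lagrangian, and solve the resulting KKT system for the step.
# Hypothetical toy problem (an assumption for illustration):
#   minimize x1^2 + x2^2  subject to  x1 + x2 - 1 = 0.
def sqp_step(x):
    g = 2.0 * x                       # objective gradient
    H = 2.0 * np.eye(2)               # Hessian of the Lagrangian
    A = np.array([[1.0, 1.0]])        # constraint Jacobian (sparse in general)
    c = np.array([x[0] + x[1] - 1.0]) # constraint residual
    # KKT system of the QP subproblem: [H A^T; A 0] [p; lam] = [-g; -c]
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -c]))
    return x + sol[:2]

x = sqp_step(np.array([2.0, -1.0]))
print(x)  # quadratic objective, linear constraint: one step gives [0.5, 0.5]
```

Because the toy objective is quadratic and the constraint linear, the QP subproblem is exact and a single step reaches the minimiser; on general nonlinear problems the iteration is repeated.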
SOME PRACTICAL PROCEDURES FOR THE SOLUTION OF NONLINEAR FINITE ELEMENT EQUATIONS
, 1980
Abstract
Cited by 19 (2 self)
Procedures for the solution of incremental finite element equations in practical nonlinear analysis are described and evaluated. The methods discussed are employed in static analysis and in dynamic analysis using implicit time integration. The solution procedures are implemented, and practical guidelines for their use are given.
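The incremental procedures evaluated in such work are typically built around Newton-Raphson iteration on the equilibrium residual R(u) = f_int(u) - f_ext. A minimal one-degree-of-freedom sketch, with a hypothetical cubic internal force chosen only for illustration:

```python
import numpy as np

# Newton-Raphson on the equilibrium equations R(u) = f_int(u) - f_ext = 0,
# the basic iteration underlying incremental nonlinear FE solution schemes.
# The cubic "spring" below is an assumption for illustration.
def f_int(u):
    return u + 0.1 * u**3       # nonlinear internal force

def tangent(u):
    return 1.0 + 0.3 * u**2     # tangent stiffness dR/du

def newton(f_ext, u=0.0, tol=1e-10, max_it=50):
    for _ in range(max_it):
        r = f_int(u) - f_ext    # out-of-balance force
        if abs(r) < tol:
            break
        u -= r / tangent(u)     # full Newton step with the current tangent
    return u

u = newton(f_ext=2.0)
print(u, f_int(u))  # f_int(u) matches the applied load at convergence
```

In practice the tangent stiffness is a large sparse matrix and variants (modified Newton, quasi-Newton updates, line searches) trade re-factorisation cost against iteration count.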
Historical Development of the BFGS Secant Method and Its Characterization Properties
, 2009
Rank Modifications Of Semi-Definite Matrices With Applications To Secant Updates
Abstract
Cited by 1 (0 self)
The BFGS and DFP updates are perhaps the most successful Hessian and inverse Hessian approximations respectively for unconstrained minimization problems. This paper describes these methods in terms of two successive steps: rank reduction and rank restoration. From rank subtractivity and a powerful spectral result, the first step must necessarily result in a positive semidefinite matrix; and the second step is designed to restore positive definiteness. The goal of the research is to better understand the workings of the BFGS and DFP updates to see how they may be modified and yet retain their basic rank and spectral characteristics. The class of BFGS and DFP updates is generalized both in terms of choices for update vectors and rank of the modifications in the formulas. The rank restoration step generalizes naturally to rectangular matrices. Key words. Rank-one reduction, Wedderburn theorem, BFGS update, DFP update, quasi-Newton methods, rank subtractivity, rank additivity, AMS(MOS)...
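The two-step view of the BFGS update described above can be made concrete: subtracting the rank-one term B s sᵀB / (sᵀB s) leaves a positive semidefinite matrix, and adding y yᵀ / (yᵀs) restores positive definiteness whenever yᵀs > 0. A minimal sketch (the specific numbers are assumptions for illustration):

```python
import numpy as np

def bfgs_update(B, s, y):
    """One BFGS update of a Hessian approximation B, written as the two
    steps the paper describes: a rank-one reduction followed by a rank-one
    restoration. Requires s^T B s > 0 and the curvature condition y^T s > 0."""
    Bs = B @ s
    # Rank reduction: by rank subtractivity the result is positive semidefinite.
    B_reduced = B - np.outer(Bs, Bs) / (s @ Bs)
    # Rank restoration: adding y y^T / (y^T s) restores positive definiteness
    # and enforces the secant condition B_new @ s == y.
    return B_reduced + np.outer(y, y) / (y @ s)

B = np.array([[2.0, 0.3], [0.3, 1.0]])   # positive definite start
s = np.array([1.0, -0.5])
y = np.array([1.5, 0.2])                  # y^T s = 1.4 > 0
B_new = bfgs_update(B, s, y)
print(np.linalg.eigvalsh(B_new))          # both eigenvalues positive
```

Note that the secant condition holds exactly: the reduction annihilates B s, and the restoration term maps s to y.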
AND
, 1981
Abstract
Two recent methods (Shanno, 1978; Toint, 1980) for revising estimates of sparse second derivative matrices in quasi-Newton optimization algorithms reduce to variable metric formulae when there are no sparsity conditions. It is proved that these methods are equivalent. Further, some examples are given to show that the procedure may make the second derivative approximations worse when the objective function is quadratic. Therefore the convergence properties of the procedure are sometimes weaker than those of other published methods for revising sparse second derivative approximations.
A RANK-ONE UPDATING APPROACH FOR SOLVING SYSTEMS OF LINEAR EQUATIONS IN THE LEAST SQUARES SENSE
Abstract
Abstract. The solution of a linear system whose matrix has maximal rank is considered in the least squares sense. The method generates a sequence of matrices and vectors such that the matrices are positive semidefinite and approximate the pseudoinverse of the system matrix, while the vectors approximate its least squares solution. The method is of the type of Broyden’s rank-one updates and yields the pseudoinverse in finitely many steps.
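The flavour of a Broyden-type rank-one update, and of finite termination on linear problems, can be seen in a small sketch. This is the classic Broyden update enforcing a secant condition, not the paper's specific pseudoinverse algorithm; the matrices and directions below are assumptions for illustration:

```python
import numpy as np

def broyden_update(B, s, y):
    """Broyden's rank-one update: given an approximation B and a new pair
    (s, y = A s), modify B by a rank-one term so that B_new @ s == y."""
    return B + np.outer(y - B @ s, s) / (s @ s)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))  # the matrix being approximated
B = np.eye(3)                    # initial approximation

# Updating along the n coordinate directions recovers A column by column,
# so the exact matrix is reached after n = 3 rank-one updates.
for i in range(3):
    s = np.eye(3)[:, i]
    B = broyden_update(B, s, A @ s)

print(np.allclose(B, A))  # True
```

Each update fixes the action of B on the new direction without disturbing the previously matched orthogonal directions, which is why the iteration terminates in n steps here.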
ANZIAM J. 45(2004), 511–522 PERFORMANCE OF VARIOUS BFGS IMPLEMENTATIONS WITH LIMITED PRECISION SECOND-ORDER INFORMATION
, 2003
Abstract
The BFGS formula is arguably the best known and most widely used update method for quasi-Newton algorithms. Some authors have claimed that updating approximate Hessian information via the BFGS formula with a Cholesky factorisation offers greater numerical stability than the more straightforward approach of performing the update directly. Other authors have claimed that no such advantage exists and that any such improvement is probably due to early implementations of the DFP formula in conjunction with low-accuracy line searches. This paper supports the claim that there is no discernible advantage in choosing factorised implementations (over non-factorised implementations) of BFGS methods when approximate Hessian information is available to full machine precision. However, the results presented in this paper show that a factorisation strategy has clear advantages when approximate Hessian information is available only to limited precision. These results show that a conjugate directions factorisation outperforms the other methods considered in this paper (including Cholesky factorisation).
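The distinction the paper studies is between applying the BFGS formula to B directly and carrying a factor with B = L Lᵀ, which keeps the approximation positive definite by construction. A minimal sketch of the two routes (the re-factorisation below is a stand-in assumption for the O(n²) factor-update schemes actually compared, and the data are chosen only for illustration):

```python
import numpy as np

def bfgs_direct(B, s, y):
    """Direct BFGS update applied to the Hessian approximation B itself."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

def bfgs_factorised(L, s, y):
    """Factorised variant: update and return a Cholesky factor, so the
    represented matrix L L^T is positive definite by construction.
    Re-factorising here is a sketch standing in for cheaper factor updates."""
    B_new = bfgs_direct(L @ L.T, s, y)
    return np.linalg.cholesky(B_new)

B = np.array([[4.0, 1.0], [1.0, 3.0]])
L = np.linalg.cholesky(B)
s, y = np.array([1.0, 0.0]), np.array([2.0, 0.5])  # y^T s = 2 > 0

L_new = bfgs_factorised(L, s, y)
print(np.allclose(L_new @ L_new.T, bfgs_direct(B, s, y)))  # True
```

In full precision both routes represent the same matrix, consistent with the paper's finding; the factorised form earns its keep when the curvature data (s, y) or B are held only to limited precision, where a directly updated B can drift out of positive definiteness.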