Results 1–10 of 66
On the limited memory BFGS method for large scale optimization
 MATHEMATICAL PROGRAMMING
, 1989
Representations of Quasi-Newton Matrices and Their Use in Limited Memory Methods
, 1994
Abstract

Cited by 112 (8 self)
We derive compact representations of BFGS and symmetric rank-one matrices for optimization. These representations allow us to efficiently implement limited memory methods for large constrained optimization problems. In particular, we discuss how to compute projections of limited memory matrices onto subspaces. We also present a compact representation of the matrices generated by Broyden's update for solving systems of nonlinear equations. Key words: quasi-Newton method, constrained optimization, limited memory method, large-scale optimization. Abbreviated title: Representation of quasi-Newton matrices. 1. Introduction. Limited memory quasi-Newton methods are known to be effective techniques for solving certain classes of large-scale unconstrained optimization problems (Buckley and Le Nir (1983), Liu and Nocedal (1989), Gilbert and Lemaréchal (1989)). They make simple approximations of Hessian matrices, which are often good enough to provide a fast rate of linear convergence, and re...
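The limited-memory machinery that these compact representations accelerate can be illustrated with the standard L-BFGS two-loop recursion, which applies the inverse Hessian approximation to a vector using only the m most recent curvature pairs. This is a generic textbook sketch, not code from the paper; function and variable names are illustrative.

```python
import numpy as np

def lbfgs_two_loop(grad, s_list, y_list):
    """Compute H_k @ grad implicitly from stored curvature pairs
    s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i (oldest first),
    without ever forming the n-by-n matrix H_k."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair first.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # Initial Hessian scaling gamma = (s^T y) / (y^T y) from the newest pair.
    s, y = s_list[-1], y_list[-1]
    r = (s @ y) / (y @ y) * q
    # Second loop: oldest pair first (alphas were stored newest-first).
    for s, y, rho, a in zip(s_list, y_list, rhos, reversed(alphas)):
        b = rho * (y @ r)
        r += (a - b) * s
    return r  # approximates H_k @ grad
```

By construction the result satisfies the secant equation for the newest pair: feeding in `grad = y_list[-1]` returns `s_list[-1]`.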
Theory of Algorithms for Unconstrained Optimization
, 1992
Abstract

Cited by 92 (1 self)
In this article I will attempt to review the most recent advances in the theory of unconstrained optimization, and will also describe some important open questions. Before doing so, I should point out that the value of the theory of optimization is not limited to its capacity for explaining the behavior of the most widely used techniques. The question ...
L-BFGS-B: Fortran Subroutines for Large-Scale Bound Constrained Optimization
, 1994
Abstract

Cited by 43 (2 self)
L-BFGS-B is a limited memory algorithm for solving large nonlinear optimization problems subject to simple bounds on the variables. It is intended for problems in which information on the Hessian matrix is difficult to obtain, or for large dense problems. L-BFGS-B can also be used for unconstrained problems, and in this case performs similarly to its predecessor, algorithm L-BFGS (Harwell routine VA15). The algorithm is implemented in Fortran 77.
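The Fortran routine described above is wrapped by SciPy, which gives a convenient way to try the bound-constrained setting. A minimal usage sketch (assuming `scipy` is installed; the objective here is the standard Rosenbrock test function, chosen for illustration):

```python
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    """Rosenbrock test function; unconstrained minimum at (1, 1)."""
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

# Simple bounds on each variable, as L-BFGS-B expects.
res = minimize(rosen, x0=np.array([0.0, 0.0]),
               method="L-BFGS-B",
               bounds=[(-2.0, 2.0), (-2.0, 2.0)])
```

Because the unconstrained minimizer (1, 1) lies inside the box, `res.x` converges to it; tightening a bound below 1 would instead produce a solution on the boundary.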
Automatic preconditioning by limited memory quasi-Newton updating
 SIAM J. Optim
Abstract

Cited by 34 (2 self)
The paper proposes a preconditioner for the conjugate gradient method (CG) that is designed for solving systems of equations Ax = bi with different right hand side vectors, or for solving a sequence of slowly varying systems Akx = bk. The preconditioner has the form of a limited memory quasi-Newton matrix and is generated using information from the CG iteration. The automatic preconditioner does not require explicit knowledge of the coefficient matrix A and is therefore suitable for problems where only products of A times a vector can be computed. Numerical experiments indicate that the preconditioner has most to offer when these matrix-vector products are expensive to compute, and when low accuracy in the solution is required. The effectiveness of the preconditioner is tested within a Hessian-free Newton method for optimization, and by solving certain linear systems arising in finite element models.
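The matrix-free setting the abstract describes can be sketched with a generic preconditioned CG loop that touches A only through a matvec callback. This is a textbook PCG sketch, not the paper's algorithm; in the paper's scheme, the `apply_M` callback would be the limited-memory quasi-Newton preconditioner built from earlier CG iterations.

```python
import numpy as np

def pcg(matvec, b, apply_M, tol=1e-8, maxiter=200):
    """Preconditioned conjugate gradients for A x = b, where A is
    accessed only via matvec(v) = A @ v and the preconditioner only
    via apply_M(r) ~= M^{-1} r."""
    x = np.zeros_like(b)
    r = b - matvec(x)          # initial residual
    z = apply_M(r)             # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)  # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # A-conjugate search direction
        rz = rz_new
    return x
```

With `apply_M = lambda r: r` this reduces to plain CG; any symmetric positive definite approximation of A^{-1} can be dropped in without the caller ever forming A.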
Sensitivity analysis in variational data assimilation
 J. Meteorol. Soc. Japan
, 1997
Abstract

Cited by 20 (3 self)
Optimal control theory is applied to a variational data assimilation problem in the context of the assimilation of altimeter data in a quasi-geostrophic ocean model. Related to the issue of the minimization of the cost function, a sensitivity analysis is applied to the optimality system to derive the sensitivity of the retrieved control variable (here the initial condition) with respect to the observations. The derivation of the sensitivity of a response function in the case of data assimilation is reviewed and a new method of performing the derivation of this sensitivity is proposed.
Testing a marine ecosystem model: sensitivity analysis and parameter optimization
 Journal of Marine Systems
, 2001
Performance of 4DVar with Different Strategies for the Use of Adjoint Physics with the FSU Global Spectral Model
, 2000
Abstract

Cited by 12 (2 self)
A set of four-dimensional variational data assimilation (4D-Var) experiments was conducted using both a standard method and an incremental method in an identical twin framework. The full physics adjoint model of the Florida State University global spectral model (FSUGSM) was used in the standard 4D-Var, while the adjoint of only a few selected physical parameterizations was used in the incremental method. The impact of physical processes on 4D-Var was examined in detail by comparing the results of these experiments. The inclusion of full physics turned out to be significantly beneficial in terms of assimilation error to the lower troposphere during the entire minimization process. The beneficial impact was found to be primarily related to boundary layer physics. The precipitation physics in the adjoint model also tended to have a beneficial impact after an intermediate number (50) of minimization iterations. Experiment results confirmed that the forecast from assimilation analyses with the full physics adjoint model displays a shorter precipitation spinup period. The beneficial impact on precipitation spinup did not result solely from the inclusion of the precipitation physics in the adjoint model, but rather from the combined impact of several physical processes. The inclusion of full physics in the adjoint model exhibited a detrimental impact on the rate of convergence at an early stage of the minimization process, but did not affect the final convergence.
Towards a Discrete Newton Method with Memory for Large-Scale Optimization
, 1996
Abstract

Cited by 8 (6 self)
A new method for solving large nonlinear optimization problems is outlined. It attempts to combine the best properties of the discrete truncated Newton method and the limited memory BFGS method, to produce an algorithm that is both economical and capable of handling ill-conditioned problems. The key idea is to use the curvature information generated during the computation of the discrete Newton step to improve the limited memory BFGS approximations. The numerical performance of the new method is studied using a family of functions whose nonlinearity and condition number can be controlled.