Results 1–10 of 11
On the limited memory BFGS method for large scale optimization
Mathematical Programming, 1989
"... this paper has appeared in ..."
A survey of nonlinear conjugate gradient methods
Pacific Journal of Optimization, 2006
Cited by 26 (3 self)
Abstract. This paper reviews the development of different versions of nonlinear conjugate gradient methods, with special attention given to global convergence properties.
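The survey's subject can be illustrated with a minimal nonlinear conjugate gradient iteration, here using the nonnegative Polak–Ribière update and a backtracking Armijo line search (a sketch for orientation only; the survey covers many variants with different update formulas and line-search conditions):

```python
import numpy as np

def nonlinear_cg(f, grad, x0, max_iter=1000, tol=1e-8):
    """Nonlinear CG with the PR+ beta and Armijo backtracking.
    A minimal sketch; practical codes use stronger (Wolfe) line searches."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking line search enforcing the Armijo condition
        alpha, c, shrink = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
            alpha *= shrink
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ update
        d = -g_new + beta * d
        if g_new @ d >= 0:          # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# Toy problem: convex quadratic f(x) = 0.5 x^T A x - b^T x
A = np.diag([1.0, 4.0, 9.0])
b = np.array([1.0, 1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = nonlinear_cg(f, grad, np.zeros(3))
```

The nonnegativity cap on beta and the descent-direction restart are the standard safeguards tied to the global convergence properties the survey discusses.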
Numerical experience with limited-memory quasi-Newton methods and truncated Newton methods
SIAM J. Optimization, 1992
Cited by 13 (9 self)
Abstract. Computational experience with several limited-memory quasi-Newton and truncated Newton methods for unconstrained nonlinear optimization is described. Comparative tests were conducted on a well-known test library [J. J. Moré, B. S. Garbow, and K. E. Hillstrom, ACM Trans. Math. Software, 7 (1981), pp. 17–41], on several synthetic problems allowing control of the clustering of eigenvalues in the Hessian spectrum, and on some large-scale problems in oceanography and meteorology. The results indicate that among the tested limited-memory quasi-Newton methods, the L-BFGS method [D. C. Liu and J. Nocedal, Math. Programming, 45 (1989), pp. 503–528] has the best overall performance for the problems examined. The numerical performance of two truncated Newton methods, differing in the inner-loop solution for the search vector, is competitive with that of L-BFGS.
Key words. limited-memory quasi-Newton methods, truncated Newton methods, synthetic cluster functions, large-scale unconstrained minimization
AMS subject classifications. 90C30, 93C20, 93C75, 65K10, 76C20
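The L-BFGS direction in such comparisons is computed from the most recently stored (s, y) pairs by the standard two-loop recursion. A minimal sketch of that recursion with a toy driver (an illustration of the algorithm, not the codes tested in the paper):

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: returns -H g for the L-BFGS inverse Hessian
    approximation H built from the stored (s, y) pairs (oldest first)."""
    q = g.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q = q - a * y
    if s_list:  # common initial scaling H0 = gamma * I from the newest pair
        gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ r)
        r = r + (a - b) * s
    return -r

# Toy driver: full memory and exact line searches on a small quadratic.
A = np.diag([1.0, 3.0, 5.0, 7.0, 9.0])
b = np.ones(5)
x = np.zeros(5)
S, Y = [], []
for _ in range(5):
    g = A @ x - b
    d = lbfgs_direction(g, S, Y)
    alpha = -(g @ d) / (d @ A @ d)      # exact step for the quadratic
    x_new = x + alpha * d
    S.append(x_new - x)
    Y.append(A @ (x_new - x))           # y = g_new - g = A s for a quadratic
    x = x_new
```

With all pairs retained and exact line searches the iteration terminates on this 5-dimensional quadratic in 5 steps; limiting the lists S and Y to the m most recent pairs gives the usual L-BFGS direction.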
BFGS with update skipping and varying memory
SIAM J. Optim., 1998
Cited by 13 (3 self)
Abstract. We give conditions under which limited-memory quasi-Newton methods with exact line searches will terminate in n steps when minimizing n-dimensional quadratic functions. We show that although all Broyden family methods terminate in n steps in their full-memory versions, only BFGS does so with limited memory. Additionally, we show that full-memory Broyden family methods with exact line searches terminate in at most n + p steps when p matrix updates are skipped. We introduce new limited-memory BFGS variants and test them on nonquadratic minimization problems.
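The n-step termination claim can be checked numerically in the extreme limited-memory case: memoryless BFGS (a single stored pair, H0 = I) with exact line searches on a quadratic. A sketch, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # random symmetric positive definite
b = rng.standard_normal(n)

x = np.zeros(n)
g = A @ x - b
s = y = None
for _ in range(n):                   # at most n steps should be needed
    if s is None:
        d = -g                       # first iteration: steepest descent
    else:
        # Memoryless BFGS direction: single (s, y) pair, H0 = I
        rho = 1.0 / (y @ s)
        a = rho * (s @ g)
        q = g - a * y
        d = -(q + (a - rho * (y @ q)) * s)
    alpha = -(g @ d) / (d @ A @ d)   # exact line search on the quadratic
    s = alpha * d
    x = x + s
    g_new = A @ x - b
    y = g_new - g
    g = g_new
```

With exact line searches the memoryless BFGS direction reduces to a conjugate gradient direction, so the gradient vanishes after n steps, in line with the termination result.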
Limited-memory reduced-Hessian methods for unconstrained optimization, Numerical Analysis
SIAM J. Optim., 1997
Cited by 2 (0 self)
Abstract. Limited-memory BFGS quasi-Newton methods approximate the Hessian matrix of second derivatives by the sum of a diagonal matrix and a fixed number of rank-one matrices. These methods are particularly effective for large problems in which the approximate Hessian cannot be stored explicitly. It can be shown that the conventional BFGS method accumulates approximate curvature in a sequence of expanding subspaces. This allows an approximate Hessian to be represented using a smaller reduced matrix that increases in dimension at each iteration. When the number of variables is large, this feature may be used to define limited-memory reduced-Hessian methods in which the dimension of the reduced Hessian is limited to save storage. Limited-memory reduced-Hessian methods have the benefit of requiring half the storage of conventional limited-memory methods. In this paper, we propose a particular reduced-Hessian method with substantial computational advantages over previous reduced-Hessian methods. Numerical results from a set of unconstrained problems in the CUTE test collection indicate that our implementation is competitive with the limited-memory codes L-BFGS and L-BFGS-B.
Key words. unconstrained optimization, quasi-Newton methods, BFGS method, reduced-Hessian methods, conjugate-direction methods
AMS subject classifications. 65K05, 90C30
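The expanding-subspace behavior that motivates reduced-Hessian methods is easy to observe: with H0 = I, the inverse BFGS approximation differs from the identity only on the span of the gradients generated so far, which is why a small reduced matrix suffices. A numerical illustration of the property (not the method proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)

H = np.eye(n)                        # inverse BFGS approximation, H0 = I
x = np.zeros(n)
g = A @ x - b
grads = [g]
for _ in range(k):
    d = -H @ g
    alpha = -(g @ d) / (d @ A @ d)   # exact line search on the quadratic
    s = alpha * d
    x = x + s
    g_new = A @ x - b
    y = g_new - g
    rho = 1.0 / (y @ s)
    V = np.eye(n) - rho * np.outer(s, y)
    H = V @ H @ V.T + rho * np.outer(s, s)   # inverse BFGS update
    g = g_new
    grads.append(g)

# On the orthogonal complement of span{g_0, ..., g_k}, H still acts as
# the identity: all accumulated curvature lives in the gradient subspace.
G = np.column_stack(grads)
v = rng.standard_normal(n)
v = v - G @ np.linalg.lstsq(G, v, rcond=None)[0]   # project out the span
```

Since H acts as the identity off the gradient subspace, only the reduced matrix on that subspace needs to be stored, which is the storage saving the paper exploits.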
ON A CLASS OF LIMITED MEMORY PRECONDITIONERS FOR LARGE SCALE LINEAR SYSTEMS WITH MULTIPLE
Cited by 2 (1 self)
Abstract. This work is concerned with the development and study of a class of limited memory preconditioners for the solution of sequences of linear systems. To this aim, we consider linear systems with the same symmetric positive definite matrix and multiple right-hand sides available in sequence. We first propose a general class of preconditioners, called Limited Memory Preconditioners (LMP), whose construction involves only a small number of linearly independent vectors and their product with the matrix to be preconditioned. After exploring and illustrating the theoretical properties of this new class of preconditioners, we study in particular three members of the class, named spectral-LMP, quasi-Newton-LMP, and Ritz-LMP, and show that the first two correspond to two well-known preconditioners (see [8] and [20], respectively), while the third appears to be a new and quite promising preconditioner, as illustrated by numerical experiments.
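As a concrete instance, a spectral preconditioner built from k eigenpairs (λ_i, v_i) of A takes the form H = I + Σ_i (1/λ_i − 1) v_i v_iᵀ, so that H A fixes each captured eigenvector. The sketch below assumes exact eigenpairs for clarity; the paper's LMP class is more general, and in practice Ritz approximations would be used:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 10, 3
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)              # symmetric positive definite

# k extremal eigenpairs; a practical code would use Ritz approximations
lam, V = np.linalg.eigh(A)
lam_k, V_k = lam[-k:], V[:, -k:]     # largest k eigenvalues / eigenvectors

# Assumed spectral form: H = I + sum_i (1/lambda_i - 1) v_i v_i^T
H = np.eye(n) + V_k @ np.diag(1.0 / lam_k - 1.0) @ V_k.T

# H A maps each captured eigenvector to itself, clustering those
# eigenvalues of the preconditioned operator at 1.
```

Only the k vectors and their products with A enter the construction, which is the limited-memory character the class is named for.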
ON THE BEHAVIOR OF THE CONJUGATE-GRADIENT METHOD ON ILL-CONDITIONED PROBLEMS
2006
Cited by 1 (0 self)
We study the behavior of the conjugate-gradient method for solving a set of linear equations, where the matrix is symmetric and positive definite with one set of eigenvalues that are large and the remaining eigenvalues small. We characterize the behavior of the residuals associated with the large eigenvalues throughout the iterations, and also the behavior of the residuals associated with the small eigenvalues for the early iterations. Our results show that the residuals associated with the large eigenvalues are made small first, without much change to the residuals associated with the small eigenvalues. A conclusion is that the ill-conditioning of the matrix is not reflected in the conjugate-gradient iterations until the residuals associated with the large eigenvalues have been made small.
Key words. conjugate-gradient method, symmetric positive-definite matrix, ill-conditioning
AMS subject classifications. 65F10, 65F22, 65K05
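The described behavior can be reproduced in a small experiment. Here the right-hand side is taken to have strong components in the large-eigenvalue eigenspace and weak components in the small one (an assumption made for this illustration; it is not the paper's precise setting):

```python
import numpy as np

def cg_residual(A, b, iters):
    """Plain conjugate gradient from x0 = 0; returns the residual b - A x."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return r

# Three large eigenvalues, three small ones; right-hand side dominated
# by the large-eigenvalue components (assumption for this demo).
eigs = np.array([900.0, 1000.0, 1100.0, 1.0, 2.0, 3.0])
A = np.diag(eigs)
b = np.array([1.0, 1.0, 1.0, 0.01, 0.01, 0.01])

r = cg_residual(A, b, 3)
ratio_large = np.abs(r[:3]) / np.abs(b[:3])
ratio_small = np.abs(r[3:]) / np.abs(b[3:])
# The large-eigenvalue residuals collapse after three iterations, while
# the small-eigenvalue residuals are almost unchanged.
```

Because A is diagonal, each residual entry is exactly the residual component associated with one eigenvalue, so the two clusters can be compared directly.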
LARGE-SCALE KALMAN FILTERING USING THE LIMITED MEMORY BFGS METHOD
Abstract. The standard formulations of the Kalman filter (KF) and extended Kalman filter (EKF) require the storage and multiplication of matrices of size n × n, where n is the size of the state space, and the inversion of matrices of size m × m, where m is the size of the observation space. Thus when both m and n are large, implementation issues arise. In this paper, we advocate the use of the limited memory BFGS method (L-BFGS) to address these issues. A detailed description of how to use L-BFGS within both the KF and EKF methods is given. The methodology is then tested on two examples: the first is large-scale and linear, and the second is small-scale and nonlinear. Our results indicate that the resulting methods, which we denote LBFGS-KF and LBFGS-EKF, yield results comparable with those obtained using KF and EKF, respectively, and can be applied to much larger scale problems.
Key words. Kalman filter, Bayesian estimation, large-scale optimization
AMS subject classifications. 65K10, 15A29
ON THE CONNECTION BETWEEN THE CONJUGATE GRADIENT METHOD AND QUASI-NEWTON METHODS ON QUADRATIC PROBLEMS
2013
It is well known that the conjugate gradient method and a quasi-Newton method, using any well-defined update matrix from the one-parameter Broyden family of updates, produce the same iterates on a quadratic problem with positive-definite Hessian. This equivalence does not hold for an arbitrary quasi-Newton method. We discuss more precisely the conditions on the update matrix that give rise to this behavior, and show that the crucial fact is that the components of each update matrix are chosen in the last two dimensions of the Krylov subspaces defined by the conjugate gradient method. In the framework based on a sufficient condition for obtaining mutually conjugate search directions, we show that the one-parameter Broyden family is complete. We also show that the update matrices from the one-parameter Broyden family are almost always well-defined on a quadratic problem with positive-definite Hessian. The only exception arises when the symmetric rank-one update is used and the unit step length is taken in the same iteration; in this case, it is the Broyden parameter that becomes undefined.
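For the BFGS member of the Broyden family, the equivalence is easy to verify numerically: with H0 = I and exact line searches on a quadratic, BFGS and the conjugate gradient method produce the same iterates. A sketch (BFGS stands in for the general Broyden family here):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)             # symmetric positive definite
b = rng.standard_normal(n)

# Conjugate gradient iterates
x_cg = np.zeros(n)
r = b - A @ x_cg
p = r.copy()
cg_iterates = []
for _ in range(n):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x_cg = x_cg + alpha * p
    cg_iterates.append(x_cg.copy())
    r_new = r - alpha * Ap
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new

# BFGS with H0 = I and exact line searches
H = np.eye(n)
x = np.zeros(n)
g = A @ x - b
bfgs_iterates = []
for _ in range(n):
    d = -H @ g
    alpha = -(g @ d) / (d @ A @ d)      # exact line search on the quadratic
    s = alpha * d
    x = x + s
    bfgs_iterates.append(x.copy())
    g_new = A @ x - b
    y = g_new - g
    rho = 1.0 / (y @ s)
    V = np.eye(n) - rho * np.outer(s, y)
    H = V @ H @ V.T + rho * np.outer(s, s)   # inverse BFGS update
    g = g_new
```

The two iterate sequences coincide to rounding error, and both solve the system in n steps.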