Results 1 – 10 of 61
On the limited memory BFGS method for large scale optimization
 MATHEMATICAL PROGRAMMING
, 1989
"... ..."
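The limited-memory BFGS method this entry refers to avoids storing a dense inverse-Hessian approximation by keeping only the most recent curvature pairs (s_i, y_i) and reconstructing the search direction with the two-loop recursion. A minimal sketch (the initial-scaling heuristic and the guard for an empty memory are illustrative choices, not taken from the paper):

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: compute an approximation of -H^{-1} g from the
    stored curvature pairs (s_i, y_i), without forming any matrix.
    Pairs are assumed to satisfy the curvature condition s_i^T y_i > 0."""
    if not s_list:
        return -g  # no curvature information yet: steepest descent
    q = g.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    # Scale the initial Hessian by gamma = s^T y / y^T y (common heuristic).
    s, y = s_list[-1], y_list[-1]
    r = (np.dot(s, y) / np.dot(y, y)) * q
    # Second loop: oldest pair to newest.
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * np.dot(y, r)
        r += (a - b) * s
    return -r  # quasi-Newton descent direction
```

By construction the implicit inverse-Hessian approximation satisfies the secant condition for the newest pair, so feeding `y_list[-1]` in as the gradient returns exactly `-s_list[-1]`.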
On the formulation and theory of Newton interior-point methods for nonlinear programming
 Journal of Optimization Theory and Applications
, 1996
Abstract

Cited by 110 (5 self)
Abstract. In this work, we first study in detail the formulation of the primal-dual interior-point method for linear programming. We show that, contrary to popular belief, it cannot be viewed as a damped Newton method applied to the Karush-Kuhn-Tucker conditions for the logarithmic barrier function problem. Next, we extend the formulation to general nonlinear programming, and then validate this extension by demonstrating that this algorithm can be implemented so that it is locally and Q-quadratically convergent under only the standard Newton method assumptions. We also establish a global convergence theory for this algorithm and include promising numerical experimentation. Key Words. Interior-point methods, primal-dual methods, nonlinear programming, superlinear and quadratic convergence, global convergence.
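The perturbed KKT system behind the primal-dual iteration can be illustrated on a tiny equality-constrained linear program: one damped Newton step on the primal, dual, and complementarity residuals reduces the duality gap while keeping the iterates strictly positive. A hedged sketch (the problem data, centering parameter `sigma`, and step rule are illustrative choices, not the nonlinear-programming algorithm of the paper):

```python
import numpy as np

# Tiny LP:  min c^T x  s.t.  A x = b, x >= 0  (illustrative data).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 1.0])

def pd_newton_step(x, lam, s, sigma=0.1, tau=0.99):
    """One damped Newton step on the perturbed KKT residuals of the
    standard primal-dual interior-point iteration for LP (sketch)."""
    n, m = len(x), A.shape[0]
    mu = x @ s / n                 # current duality measure
    r_d = A.T @ lam + s - c        # dual feasibility residual
    r_p = A @ x - b                # primal feasibility residual
    r_c = x * s - sigma * mu       # perturbed complementarity residual
    K = np.block([
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    d = np.linalg.solve(K, -np.concatenate([r_d, r_p, r_c]))
    dx, dlam, ds = d[:n], d[n:n + m], d[n + m:]

    # Fraction-to-the-boundary damping keeps x and s strictly positive.
    def alpha(v, dv):
        neg = dv < 0
        return 1.0 if not neg.any() else min(1.0, tau * np.min(-v[neg] / dv[neg]))

    a = min(alpha(x, dx), alpha(s, ds))
    return x + a * dx, lam + a * dlam, s + a * ds
```

Starting from the strictly feasible point x = s = (0.5, 0.5), one step shrinks the duality gap x^T s toward the centering target sigma * mu.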
Theory of Algorithms for Unconstrained Optimization
, 1992
Abstract

Cited by 104 (1 self)
this article I will attempt to review the most recent advances in the theory of unconstrained optimization, and will also describe some important open questions. Before doing so, I should point out that the value of the theory of optimization is not limited to its capacity for explaining the behavior of the most widely used techniques. The question ...
A reduced Hessian method for large-scale constrained optimization
 SIAM JOURNAL ON OPTIMIZATION
, 1995
"... ..."
A Class of Gradient Unconstrained Minimization Algorithms With Adaptive Stepsize
, 1999
Abstract

Cited by 27 (16 self)
In this paper the development, convergence theory and numerical testing of a class of gradient unconstrained minimization algorithms with adaptive stepsize are presented. The proposed class comprises four algorithms: the first two incorporate techniques for the adaptation of a common stepsize for all coordinate directions and the other two allow an individual adaptive stepsize along each coordinate direction. All the algorithms are computationally efficient and possess interesting convergence properties, utilizing estimates of the Lipschitz constant that are obtained without additional function or gradient evaluations. The algorithms have been implemented and tested on some well-known test cases as well as on real-life artificial neural network applications and the results have been very satisfactory.
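The key idea the abstract describes, estimating a local Lipschitz constant from quantities already computed, can be sketched with a common-stepsize variant: the ratio of successive gradient and iterate differences gives an estimate L_k, and the classical 1/(2L) rule sets the next stepsize. This is an illustrative sketch under those assumptions, not the exact scheme of the cited paper:

```python
import numpy as np

def grad_descent_adaptive(grad, x0, iters=100):
    """Gradient descent whose stepsize is adapted from the local Lipschitz
    estimate L_k = ||g_k - g_{k-1}|| / ||x_k - x_{k-1}||, so adaptation
    costs no extra function or gradient evaluations (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    step = 1e-3  # conservative bootstrap stepsize for the first move
    for _ in range(iters):
        x_new = x - step * g
        g_new = grad(x_new)
        dx, dg = x_new - x, g_new - g
        L = np.linalg.norm(dg) / max(np.linalg.norm(dx), 1e-12)
        if L > 0:
            step = 1.0 / (2.0 * L)  # classical 1/(2L) stepsize rule
        x, g = x_new, g_new
    return x
```

On a well-conditioned quadratic the estimate L_k matches the true curvature after one iteration and the method contracts geometrically.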
A Family of Variable Metric Proximal Methods
, 1993
Abstract

Cited by 26 (2 self)
We consider conceptual optimization methods combining two ideas: the Moreau-Yosida regularization in convex analysis, and quasi-Newton approximations of smooth functions. We outline several approaches based on this combination, and establish their global convergence. Then we study theoretically the local convergence properties of one of these approaches, which uses quasi-Newton updates of the objective function itself. Also, we obtain a globally and superlinearly convergent BFGS proximal method. At each step of our study, we single out the assumptions that are useful to derive the result concerned.
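The Moreau-Yosida regularization underlying this entry replaces a nonsmooth convex f with its smooth envelope F(x) = min_y f(y) + ||y - x||^2 / (2*lam), whose gradient is (x - prox(x)) / lam. A one-dimensional sketch for f(t) = |t|, whose proximal operator has the closed-form soft-thresholding expression (the function choice is illustrative, not from the paper):

```python
import numpy as np

def prox_abs(x, lam):
    """Proximal operator of f(t) = |t|: soft-thresholding (closed form)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_grad(x, lam):
    """Gradient of the Moreau-Yosida envelope of |t|:
    grad F(x) = (x - prox_{lam f}(x)) / lam.  The envelope is C^1 with
    Lipschitz gradient even though |t| is nonsmooth at 0."""
    return (x - prox_abs(x, lam)) / lam
```

Gradient descent on the envelope with stepsize lam is exactly the proximal point iteration x_{k+1} = prox_{lam f}(x_k), which drives x to the minimizer of |t| at the origin.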
Superlinear Convergence And Implicit Filtering
, 1999
Abstract

Cited by 25 (3 self)
In this note we show how the implicit filtering algorithm can be coupled with the BFGS quasi-Newton update to obtain a superlinearly convergent iteration if the noise in the objective function decays sufficiently rapidly as the optimal point is approached. We show how known theory for the noise-free case can be extended and thereby provide a partial explanation for the good performance of quasi-Newton methods when coupled with implicit filtering. Key words. noisy optimization, implicit filtering, BFGS algorithm, superlinear convergence. AMS subject classifications. 65K05, 65K10, 90C30. 1. Introduction. In this paper we examine the local and global convergence behavior of the combination of the BFGS [4], [20], [17], [23] quasi-Newton method with the implicit filtering algorithm. The resulting method is intended to minimize smooth functions that are perturbed with low-amplitude noise. Our results, which extend those of [5], [15], and [6], show that if the amplitude of the noise decays ...
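Implicit filtering, in its basic form, is steepest descent on a finite-difference gradient whose stencil radius shrinks when a step fails, so that small-amplitude noise is averaged out at coarse scales. A simplified sketch without the BFGS coupling the note analyzes (the step rule and shrink factor are illustrative choices):

```python
import numpy as np

def fd_grad(f, x, h):
    """Centered-difference gradient of f at x with stencil radius h."""
    n = len(x)
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def implicit_filtering(f, x0, h0=0.5, shrink=0.5, iters=50):
    """Implicit-filtering-style loop (simplified): take a descent step of
    length h along the finite-difference gradient; when the step fails to
    decrease f, reduce the stencil radius h, filtering at a finer scale."""
    x, h = np.asarray(x0, dtype=float), h0
    for _ in range(iters):
        g = fd_grad(f, x, h)
        step = h / max(np.linalg.norm(g), 1e-12)  # crude step-length rule
        x_new = x - step * g
        if f(x_new) < f(x):
            x = x_new
        else:
            h *= shrink  # step failed: refine the difference stencil
    return x
```

On a smooth quadratic the stencil shrinks only near the solution, and the final error is on the order of the last stencil radius.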
An overview of unconstrained optimization
 [Online]. Available: citeseer.ist.psu.edu/fletcher93overview.html
, 1993
"... bundle filter method for nonsmooth nonlinear ..."
A Modified BFGS Method and Its Global Convergence in Nonconvex Minimization
, 1998
Abstract

Cited by 18 (5 self)
In this paper, we propose a modification of the BFGS method for unconstrained optimization. A remarkable feature of the proposed method is that it possesses a global convergence property even without a convexity assumption on the objective function. Under certain conditions, we also establish superlinear convergence of the method. Key words: BFGS method, global convergence, superlinear convergence. 1 Present address (available before October, 1999): Department of Applied Mathematics and Physics, Graduate School of Engineering, Kyoto University, Kyoto 606, Japan, e-mail: lidh@kuamp.kyoto-u.ac.jp. 1 Introduction. Let f : R^n → R be continuously differentiable. Consider the following unconstrained optimization problem: min f(x), x ∈ R^n. (1.1) Among numerous iterative methods for solving (1.1), quasi-Newton methods constitute a particularly important class. Throughout the paper, we assume that f in (1.1) has Lipschitz continuous gradients, i.e., there is a constant L > 0 such that ‖g(x) − g(y)‖ ...
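The difficulty on nonconvex problems that the abstract addresses is that the curvature condition s^T y > 0 can fail, breaking positive definiteness of the BFGS approximation. A sketch of the standard inverse-Hessian update with a simple skip safeguard; the cited paper instead modifies y itself to guarantee global convergence, which is not reproduced here:

```python
import numpy as np

def bfgs_update(H, s, y, eps=1e-8):
    """Standard BFGS update of the inverse-Hessian approximation H from
    the step s = x_{k+1} - x_k and gradient change y = g_{k+1} - g_k.
    When s^T y is not sufficiently positive (possible for nonconvex f),
    the update is skipped (a common safeguard, not the paper's rule)."""
    sy = float(np.dot(s, y))
    if sy <= eps * np.linalg.norm(s) * np.linalg.norm(y):
        return H  # curvature condition fails: keep previous approximation
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    # H_{k+1} = V H V^T + rho s s^T  satisfies the secant condition H y = s.
    return V @ H @ V.T + rho * np.outer(s, s)
```

The secant condition H_{k+1} y = s holds exactly after every accepted update, which is what drives the superlinear convergence theory.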