Results 1–10 of 145
On the limited memory BFGS method for large scale optimization
 Mathematical Programming
, 1989
"... ..."
A trust region method based on interior point techniques for nonlinear programming
 Mathematical Programming
, 1996
"... Jorge Nocedal z An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direc ..."
Abstract

Cited by 113 (19 self)
Jorge Nocedal. An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direct use of second order derivatives. This framework permits primal and primal-dual steps, but the paper focuses on the primal version of the new algorithm. An analysis of the convergence properties of this method is presented. Key words: constrained optimization, interior point method, large-scale optimization, nonlinear programming, primal method, primal-dual method, SQP iteration, barrier method, trust region method.
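The barrier problems the algorithm iterates over can be illustrated with the standard logarithmic barrier for inequality constraints; this is a generic sketch (names and the sign convention c(x) >= 0 are assumptions), not the paper's implementation:

```python
import numpy as np

def barrier_objective(f, c, x, mu):
    # Log-barrier function for min f(x) s.t. c(x) >= 0:
    #   phi_mu(x) = f(x) - mu * sum_i log(c_i(x)).
    # The barrier method solves a sequence of these with mu -> 0.
    cx = c(x)
    if np.any(cx <= 0):
        return np.inf          # barrier is defined only strictly inside
    return f(x) - mu * np.sum(np.log(cx))
```

Each barrier subproblem is smooth and unconstrained on the interior, which is what lets SQP and trust-region machinery be applied to it.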
Theory of Algorithms for Unconstrained Optimization
, 1992
"... this article I will attempt to review the most recent advances in the theory of unconstrained optimization, and will also describe some important open questions. Before doing so, I should point out that the value of the theory of optimization is not limited to its capacity for explaining the behavio ..."
Abstract

Cited by 92 (1 self)
this article I will attempt to review the most recent advances in the theory of unconstrained optimization, and will also describe some important open questions. Before doing so, I should point out that the value of the theory of optimization is not limited to its capacity for explaining the behavior of the most widely used techniques. The question
Newton's Method for Large Bound-Constrained Optimization Problems
 SIAM Journal on Optimization
, 1998
"... We analyze a trust region version of Newton's method for boundconstrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearlyconstrained problems, and yields global and super ..."
Abstract

Cited by 82 (4 self)
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems, and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
Parallel Lagrange-Newton-Krylov-Schur methods for PDE-constrained optimization. Part I: The Krylov-Schur solver
 SIAM J. Sci. Comput
, 2000
"... Abstract. Large scale optimization of systems governed by partial differential equations (PDEs) is a frontier problem in scientific computation. The stateoftheart for such problems is reduced quasiNewton sequential quadratic programming (SQP) methods. These methods take full advantage of existin ..."
Abstract

Cited by 78 (11 self)
Abstract. Large-scale optimization of systems governed by partial differential equations (PDEs) is a frontier problem in scientific computation. The state-of-the-art for such problems is reduced quasi-Newton sequential quadratic programming (SQP) methods. These methods take full advantage of existing PDE solver technology and parallelize well. However, their algorithmic scalability is questionable; for certain problem classes they can be very slow to converge. In this two-part article we propose a new method for steady-state PDE-constrained optimization, based on the idea of full space SQP with reduced space quasi-Newton SQP preconditioning. The basic components of the method are: Newton solution of the first-order optimality conditions that characterize stationarity of the Lagrangian function; Krylov solution of the Karush-Kuhn-Tucker (KKT) linear systems arising at each Newton iteration using a symmetric quasi-minimum residual method; preconditioning of the KKT system using an approximate state/decision variable decomposition that replaces the forward PDE Jacobians by their own preconditioners, and the decision space Schur complement (the reduced Hessian) by a BFGS approximation or by a two-step stationary method. Accordingly, we term the new method Lagrange-Newton-Krylov-Schur (LNKS). It is fully parallelizable, exploits the structure of available parallel algorithms for the PDE forward problem, and is locally quadratically convergent. In the first part of the paper we investigate the effectiveness of the KKT linear system solver. We test the method on two optimal control problems in which the flow is described by the steady-state Stokes equations.
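The KKT systems solved at each Newton iteration have the familiar symmetric saddle-point structure. A small dense sketch of their assembly, for orientation only (the paper of course never forms this matrix explicitly and works matrix-free in parallel; names here are illustrative):

```python
import numpy as np

def kkt_matrix(W, A):
    # Symmetric KKT matrix for a Newton step on the Lagrangian:
    #   [[W, A'], [A, 0]] [dx, dlam] = -[grad_L, c]
    # where W is the Lagrangian Hessian and A the constraint Jacobian.
    n, m = W.shape[0], A.shape[0]
    K = np.zeros((n + m, n + m))
    K[:n, :n] = W
    K[:n, n:] = A.T
    K[n:, :n] = A
    return K
```

The zero (2,2) block is what makes these systems indefinite and motivates the symmetric quasi-minimum residual solver and the Schur-complement preconditioning described in the abstract.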
Trust region Newton method for large-scale logistic regression
 In Proceedings of the 24th International Conference on Machine Learning (ICML
, 2007
"... Largescale logistic regression arises in many applications such as document classification and natural language processing. In this paper, we apply a trust region Newton method to maximize the loglikelihood of the logistic regression model. The proposed method uses only approximate Newton steps in ..."
Abstract

Cited by 69 (12 self)
Large-scale logistic regression arises in many applications such as document classification and natural language processing. In this paper, we apply a trust region Newton method to maximize the log-likelihood of the logistic regression model. The proposed method uses only approximate Newton steps in the beginning, but achieves fast convergence in the end. Experiments show that it is faster than the commonly used quasi-Newton approach for logistic regression. We also compare it with existing linear SVM implementations.
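The structural fact exploited by truncated Newton methods for logistic regression is that Hessian-vector products need only matrix-vector multiplies with the data matrix, so the Hessian is never formed. A sketch under standard L2-regularized logistic loss (the function name and regularization convention are assumptions, not taken from the paper):

```python
import numpy as np

def logistic_hess_vec(X, w, v, C=1.0):
    # Hessian-vector product for the L2-regularized logistic loss:
    #   H = I + C * X' D X,  D = diag(sigma * (1 - sigma)),
    # computed as two matrix-vector products with X, without forming H.
    sigma = 1.0 / (1.0 + np.exp(-X @ w))
    d = sigma * (1.0 - sigma)
    return v + C * (X.T @ (d * (X @ v)))
```

Products like this are exactly what a conjugate-gradient inner solver consumes when computing the approximate Newton steps inside the trust region.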
A reflective Newton method for minimizing a quadratic function subject to bounds on some of the variables
, 1992
"... . We propose a new algorithm, a reflective Newton method, for the minimization of a quadratic function of many variables subject to upper and lower bounds on some of the variables. The method applies to a general (indefinite) quadratic function, for which a local minimizer subject to bounds is requi ..."
Abstract

Cited by 56 (1 self)
We propose a new algorithm, a reflective Newton method, for the minimization of a quadratic function of many variables subject to upper and lower bounds on some of the variables. The method applies to a general (indefinite) quadratic function, for which a local minimizer subject to bounds is required, and is particularly suitable for the large-scale problem. Our new method exhibits strong convergence properties, global and quadratic convergence, and appears to have significant practical potential. Strictly feasible points are generated. Experimental results on moderately large and sparse problems support the claim of practicality for large-scale problems. Research partially supported by the Applied Mathematical Sciences Research Program (KC-04-02) of the Office of Energy Research of the U.S. Department of Energy under grant DE-FG02-86ER25013.A000, and by the Computational Mathematics Program of the National Science Foundation under grant DMS-8706133, and by the Cornell Theory Cen...
An Implicit Filtering Algorithm For Optimization Of Functions With Many Local Minima
 SIAM J. Optim
, 1995
"... . In this paper we describe and analyze an algorithm for certain box constrained optimization problems that may have several local minima. A paradigm for these problems is one in which the function to be minimized is the sum of a simple function, such as a convex quadratic, and high frequency, low a ..."
Abstract

Cited by 54 (16 self)
In this paper we describe and analyze an algorithm for certain box-constrained optimization problems that may have several local minima. A paradigm for these problems is one in which the function to be minimized is the sum of a simple function, such as a convex quadratic, and high frequency, low amplitude terms which cause local minima away from the global minimum of the simple function. Our method is gradient based and therefore the performance can be improved by use of quasi-Newton methods. Key words: filtering, projected gradient algorithm, quasi-Newton method. AMS(MOS) subject classifications: 65H10, 65K05, 65K10.
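Implicit filtering is built on difference gradients whose stencil size is shrunk over the course of the optimization, so that high-frequency, low-amplitude noise is averaged away in the early iterations. A minimal sketch of the centered-difference gradient it relies on (helper name is illustrative):

```python
import numpy as np

def fd_gradient(f, x, h):
    # Centered finite-difference gradient with stencil size h.
    # Implicit filtering calls this with a sequence of decreasing h values.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g
```

For a smooth quadratic the centered difference is exact up to rounding, while for a noisy objective a large h effectively differentiates a smoothed version of the function.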
A New Matrix-Free Algorithm for the Large-Scale Trust-Region Subproblem
, 1995
"... The trustregion subproblem arises frequently in linear algebra and optimization applications. Recently, matrixfree methods have been introduced to solve large scale trustregion subproblems. These methods only require a matrixvector product and do not rely on matrix factorizations [4, 7]. The ..."
Abstract

Cited by 53 (9 self)
The trust-region subproblem arises frequently in linear algebra and optimization applications. Recently, matrix-free methods have been introduced to solve large-scale trust-region subproblems. These methods only require a matrix-vector product and do not rely on matrix factorizations [4, 7]. These approaches recast the trust-region subproblem in terms of a parameterized eigenvalue problem and then adjust the parameter to find the optimal solution from the eigenvector corresponding to the smallest eigenvalue of the parameterized eigenvalue problem. This paper presents a new matrix-free algorithm for the large-scale trust-region subproblem. The new algorithm improves upon the previous algorithms by introducing a unified iteration that naturally includes the so-called hard case. The new iteration is shown to be superlinearly convergent in all cases. Computational results are presented to illustrate convergence properties and robustness of the method.
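The parameterized eigenvalue idea can be illustrated densely: border the Hessian with the gradient, take the eigenvector of the smallest eigenvalue, and adjust the border parameter until the step length matches the trust-region radius. This sketch uses dense np.linalg.eigh in place of the matrix-free Lanczos solver, handles neither the interior solution nor the hard case, and its bracketing interval and names are assumptions for illustration:

```python
import numpy as np

def trs_borders(H, g, delta, lo=-10.0, hi=10.0, iters=60):
    # Solve min g'p + 0.5 p'Hp s.t. ||p|| <= delta (boundary case) via the
    # bordered matrix B(alpha) = [[alpha, g'], [g, H]]: if (1, p') is an
    # eigenvector of B(alpha) with eigenvalue lam, then (H - lam I) p = -g.
    n = len(g)

    def p_of(alpha):
        B = np.empty((n + 1, n + 1))
        B[0, 0] = alpha
        B[0, 1:] = B[1:, 0] = g
        B[1:, 1:] = H
        lam, V = np.linalg.eigh(B)        # eigenvalues in ascending order
        v = V[:, 0]                       # eigenvector of the smallest one
        return v[1:] / v[0]

    for _ in range(iters):                # ||p(alpha)|| grows with alpha
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(p_of(mid)) < delta:
            lo = mid
        else:
            hi = mid
    return p_of(0.5 * (lo + hi))
```

For H = I and g = (2, 0) with delta = 1, the boundary solution is p = (-1, 0) with multiplier 1, which the bisection recovers.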
Trust-Region Interior-Point Algorithms for Minimization Problems with Simple Bounds
 SIAM J. Control and Optimization
, 1995
"... . Two trustregion interiorpoint algorithms for the solution of minimization problems with simple bounds are analyzed and tested. The algorithms scale the local model in a way similar to Coleman and Li [1]. The first algorithm is more usual in that the trust region and the local quadratic model a ..."
Abstract

Cited by 51 (19 self)
Two trust-region interior-point algorithms for the solution of minimization problems with simple bounds are analyzed and tested. The algorithms scale the local model in a way similar to Coleman and Li [1]. The first algorithm is more usual in that the trust region and the local quadratic model are consistently scaled. The second algorithm proposed here uses an unscaled trust region. A global convergence result for these algorithms is given and dogleg and conjugate-gradient algorithms to compute trial steps are introduced. Some numerical examples that show the advantages of the second algorithm are presented. Keywords: trust-region methods, interior-point algorithms, Dikin-Karmarkar ellipsoid, Coleman and Li affine scaling, simple bounds. AMS subject classification: 49M37, 90C20, 90C30. 1. Introduction. In this note we consider the box-constrained minimization problem: minimize f(x) subject to a ≤ x ≤ b, (1) where x ∈ R^n, a ∈ (R ∪ {−∞})^n, b ∈ (R ∪ {+∞})^n, and...
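The Coleman-Li affine scaling mentioned in the abstract can be sketched directly from its standard definition (a minimal illustration with hypothetical names, not the authors' code):

```python
import numpy as np

def coleman_li_scaling(x, grad, lower, upper):
    # Coleman-Li vector v(x): per coordinate, the signed distance to the
    # bound that steepest descent moves toward; +-1 when that bound is
    # infinite. The affine scaling matrix is D(x) = diag(|v(x)|)**0.5.
    v = np.empty_like(x)
    for i in range(len(x)):
        if grad[i] < 0:      # descent increases x_i, toward the upper bound
            v[i] = x[i] - upper[i] if np.isfinite(upper[i]) else -1.0
        else:                # descent decreases x_i, toward the lower bound
            v[i] = x[i] - lower[i] if np.isfinite(lower[i]) else 1.0
    return v
```

The scaling vanishes as an active bound is approached, which is what keeps the scaled Newton-like steps strictly feasible.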