Results 1–10 of 15
Curvilinear Stabilization Techniques for Truncated Newton Methods in Large Scale Unconstrained Optimization: the . . .
 SIAM J. Optim., 1998
Abstract

Cited by 21 (7 self)
The aim of this paper is to define a new class of minimization algorithms for solving large scale unconstrained problems. In particular, we describe a stabilization framework, based on a curvilinear linesearch, which uses a combination of a Newton-type direction and a negative curvature direction. The motivation for using a negative curvature direction is to take into account local nonconvexity of the objective function. On the basis of this framework, we propose an algorithm which uses the Lanczos method for determining at each iteration both a Newton-type direction and an effective negative curvature direction. The results of an extensive numerical testing are reported, together with a comparison with the LANCELOT package. These results show that the algorithm is very competitive, which seems to indicate that the proposed approach is promising. 1 Introduction In this work, we deal with the definition of new efficient unconstrained minimization algorithms for solving large scal...
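The curvilinear linesearch described above combines a Newton-type direction s and a negative curvature direction d along a path of the form x(alpha) = x + alpha^2 s + alpha d. A minimal backtracking sketch follows; the function names, the Armijo-like acceptance test, and the parameter values are illustrative assumptions, not the paper's exact stabilization conditions:

```python
import numpy as np

def curvilinear_search(f, grad, x, s, d, sigma=1e-4, beta=0.5, max_iter=50):
    """Backtracking search along the curve x(alpha) = x + alpha^2 * s + alpha * d.

    s : Newton-type (descent) direction; d : negative curvature direction.
    The acceptance test below is a simple Armijo-like decrease condition,
    assumed for illustration only.
    """
    fx = f(x)
    g = grad(x)
    # First-order decrease contribution of d along the curve (an assumption);
    # ignore d's slope term if it is not a descent contribution.
    slope = g @ d if g @ d < 0 else 0.0
    alpha = 1.0
    for _ in range(max_iter):
        x_new = x + alpha**2 * s + alpha * d
        # Accept when the curvilinear Armijo-like condition holds.
        if f(x_new) <= fx + sigma * (alpha**2 * (g @ s) + alpha * slope):
            return x_new, alpha
        alpha *= beta  # otherwise shrink the step and retry
    return x, 0.0
```

On a convex quadratic the Newton direction gives the minimizer in one unit step, so the search accepts alpha = 1 immediately.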
Convergence to Second Order Stationary Points in Inequality Constrained Optimization
 Mathematics of Operations Research, 1998
Abstract

Cited by 9 (3 self)
We propose a new algorithm for the nonlinear inequality constrained minimization problem, and prove that it generates a sequence converging to points satisfying the KKT second order necessary conditions for optimality. The algorithm is a line search algorithm using directions of negative curvature, and it can be viewed as a nontrivial extension of corresponding known techniques from unconstrained to constrained problems. The main tools employed in the definition and in the analysis of the algorithm are a differentiable exact penalty function and results from the theory of LC^1 functions. Key Words: Inequality constrained optimization, KKT second order necessary conditions, penalty function, LC^1 function, negative curvature direction. 1 Introduction We are concerned with the inequality constrained minimization problem (P) min f(x) s.t. g(x) <= 0, where f : R^n -> R and g : R^n -> R^m are three times continuously differentiable. Our aim is to develop an algorithm that g...
WOMBAT: A Program for Mixed Model Analyses by Restricted Maximum Likelihood (Version 1.0 User Notes)
Abstract

Cited by 6 (0 self)
This document has been typeset using LaTeX2e with the hyperref package. This gives a document which is fully navigable: all references to other sections and citations are 'clickable' within-document links, and all links to external URLs can be accessed from within the PDF viewer (if it is configured to do so). © Karin Meyer 2006–2010. Permission is granted to make and distribute verbatim copies of this document, provided it is preserved complete and unmodified.
Exploiting Negative Curvature Directions in Linesearch Methods for Unconstrained Optimization
, 1997
Relaxing Convergence Conditions To Improve The Convergence Rate
, 1999
Abstract

Cited by 3 (0 self)
Standard global convergence proofs are examined to determine why some algorithms perform better than other algorithms. We show that relaxing the conditions required to prove global convergence can improve an algorithm's performance. Further analysis indicates that minimizing an estimate of the distance to the minimum relaxes the convergence conditions in such a way as to improve an algorithm's convergence rate. A new linesearch algorithm based on these ideas is presented that does not force a reduction in the objective function at each iteration, yet it allows the objective function to increase during an iteration only if this will result in faster convergence. Unlike the nonmonotone algorithms in the literature, these new functions dynamically adjust to account for changes between the influence of curvature and descent. The result is an optimal algorithm in the sense that an estimate of the distance to the minimum is minimized at each iteration. The algorithm is shown to be well defi...
Cost Approximation Algorithms With Nonmonotone Line Searches for a General Class of Nonlinear Programs
, 1996
Abstract

Cited by 1 (1 self)
When solving ill-conditioned nonlinear programs by descent algorithms, the descent requirement may force the step lengths to become very small, resulting in very poor performance. Recently, suggestions have been made to circumvent this problem, among which is a class of approaches in which the objective value may be allowed to increase temporarily. Grippo et al. [GLL91] introduce nonmonotone line searches in the class of deflected gradient methods in unconstrained differentiable optimization; this technique allows longer steps (typically of unit length) to be taken, and is successfully applied to some ill-conditioned problems. This paper extends their nonmonotone approach and convergence results to the large class of cost approximation algorithms of Patriksson [Pat93b], and to optimization problems with both convex constraints and nondifferentiable objective functions. Key Words: Nondifferentiable optimization, cost approximation, nonmonotone algorithms. Abbreviated Title...
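The nonmonotone line search of Grippo et al. referenced above relaxes the Armijo test by comparing against the maximum of the last few function values rather than the current one, so occasional increases are tolerated. A minimal sketch under that idea (parameter names and values are assumptions for illustration):

```python
from collections import deque

def nonmonotone_armijo(f, fx_hist, x, g, p, gamma=1e-4, beta=0.5,
                       memory=10, max_backtracks=50):
    """Nonmonotone Armijo backtracking in the spirit of Grippo, Lampariello
    and Lucidi: accept a step if f decreases relative to the max of the last
    `memory` function values, not necessarily relative to f(x).

    fx_hist : deque of recent objective values (most recent last).
    g, p    : gradient and search direction at x (lists of floats).
    """
    f_ref = max(fx_hist)                       # nonmonotone reference value
    gTp = sum(gi * pi for gi, pi in zip(g, p)) # directional derivative g^T p
    alpha = 1.0
    for _ in range(max_backtracks):
        x_new = [xi + alpha * pi for xi, pi in zip(x, p)]
        f_new = f(x_new)
        if f_new <= f_ref + gamma * alpha * gTp:   # relaxed Armijo test
            fx_hist.append(f_new)
            if len(fx_hist) > memory:
                fx_hist.popleft()              # keep a sliding window
            return x_new, alpha
        alpha *= beta
    return x, 0.0
```

Because the reference value is a running maximum, a full step of unit length is accepted far more often than under the standard monotone test.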
A Randomized Nonmonotone Block Proximal Gradient Method for a Class of Structured Nonlinear Programming
, 2015
Abstract
We propose a randomized nonmonotone block proximal gradient (RNBPG) method for minimizing the sum of a smooth (possibly nonconvex) function and a block-separable (possibly nonconvex nonsmooth) function. At each iteration, this method randomly picks a block according to any prescribed probability distribution and solves typically several associated proximal subproblems, which usually have a closed-form solution, until a certain progress on the objective value is achieved. In contrast to the usual randomized block coordinate descent method [23, 20], our method has a nonmonotone flavor and uses variable stepsizes that can partially utilize the local curvature information of the smooth component of the objective function. We show that any accumulation point of the solution sequence of the method is almost surely a stationary point of the problem, and that the method is capable of finding an approximate stationary point with high probability. We also establish a sublinear rate of convergence for the method in terms of the minimal expected squared norm of certain proximal gradients over the iterations. When the problem under consideration is convex, we show that the expected objective val
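The block proximal gradient step described above, specialized to an l1 regularizer whose proximal operator is soft-thresholding, can be sketched as follows. This is a simplified monotone variant for illustration; the paper's RNBPG adds a nonmonotone acceptance test and variable stepsizes, and all names and parameters here are assumptions:

```python
import random
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def randomized_block_prox_grad(grad_f, x0, blocks, step=0.1, lam=0.1,
                               n_iters=200, seed=0):
    """Randomized block proximal gradient for F(x) = f(x) + lam * ||x||_1.

    blocks : list of index lists partitioning the coordinates.
    Each iteration picks one block uniformly at random and applies a
    proximal gradient update to that block only.
    """
    rng = random.Random(seed)
    x = x0.copy()
    for _ in range(n_iters):
        idx = blocks[rng.randrange(len(blocks))]  # random block of coordinates
        g = grad_f(x)[idx]                        # partial gradient for block
        # Closed-form proximal step on the chosen block.
        x[idx] = soft_threshold(x[idx] - step * g, step * lam)
    return x
```

For a separable quadratic f(x) = 0.5 * ||x - c||^2 the iterates approach the analytic solution soft_threshold(c, lam), which makes the sketch easy to sanity-check.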
Optimization over Sparse Symmetric Sets via a Nonmonotone Projected Gradient Method
, 2015
Abstract
We consider the problem of minimizing a Lipschitz differentiable function over a class of sparse symmetric sets that has wide applications in engineering and science. For this problem, it is known that any accumulation point of the classical projected gradient (PG) method with a constant stepsize 1/L satisfies the L-stationarity optimality condition that was introduced in [3]. In this paper we introduce a new optimality condition that is stronger than the L-stationarity optimality condition. We also propose a nonmonotone projected gradient (NPG) method for this problem by incorporating support-changing and coordinate-swapping strategies into a projected gradient method with variable stepsizes. It is shown that any accumulation point of NPG satisfies the new optimality condition and, moreover, is a coordinate-wise stationary point. Under suitable assumptions, we further show that it is a global or a local minimizer of the problem. Numerical experiments are conducted to compare the performance of PG and NPG. The computational results demonstrate that NPG attains substantially better solution quality than PG, and moreover is at least comparable to, and sometimes much faster than, PG in terms of speed.
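The classical PG method referenced above alternates a gradient step with a Euclidean projection onto the sparsity constraint, which for the plain set {x : ||x||_0 <= k} just keeps the k largest-magnitude entries. A minimal sketch of that baseline (the paper's NPG adds nonmonotone acceptance and support-changing moves on top; names and parameters here are assumptions):

```python
import numpy as np

def project_sparse(x, k):
    """Euclidean projection onto {x : ||x||_0 <= k}: keep the k entries of
    largest magnitude and zero out the rest."""
    if np.count_nonzero(x) <= k:
        return x.copy()
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]   # indices of the k largest |x_i|
    out[keep] = x[keep]
    return out

def projected_gradient_sparse(grad_f, x0, k, step, n_iters=100):
    """Classical projected gradient with a constant stepsize over the
    sparsity set; each iterate is feasible by construction."""
    x = project_sparse(x0, k)
    for _ in range(n_iters):
        x = project_sparse(x - step * grad_f(x), k)
    return x
```

With f(x) = 0.5 * ||x - c||^2 and step 1/L = 1, a single iteration already lands on the projection of c, so the fixed point is easy to verify.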
Abstract
We present a general arc search algorithm for linearly constrained optimization. The method constructs and searches along smooth arcs that satisfy a small and practical set of properties. An active-set strategy is used to manage linear inequality constraints. When second derivatives are used, the method is shown to converge to a second-order critical point with a quadratic rate of convergence under standard conditions. The theory is applied to the methods of line search, curvilinear search, and modified gradient flow that have previously been proposed for unconstrained problems. A key issue when generalizing unconstrained methods to linearly constrained problems using an active-set strategy is the complexity of how the arc intersects hyperplanes. We introduce a new arc that is derived from the regularized Newton equation. Computing the intersection between this arc and a linear constraint reduces to finding the roots of a quadratic polynomial. The new arc scales to large problems, does not require modification of the Hessian, and is rarely dependent on the scaling of directions of negative curvature. Numerical experiments show the effectiveness of this arc search method on problems from the CUTEr test set and on a specific class of problems for which identifying negative curvature is critical. A second set of experiments demonstrates that when using SR1 quasi-Newton updates, this arc search method is competitive with a line search method using
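The key computational step named in the abstract, intersecting a quadratic arc with a linear constraint by finding the roots of a quadratic polynomial, can be sketched as follows. The arc parametrization x(alpha) = x + alpha*d + alpha^2*s is an illustrative assumption, not the paper's regularized-Newton arc:

```python
import numpy as np

def arc_hyperplane_intersections(x, d, s, a, b):
    """Steps alpha >= 0 at which the arc x(alpha) = x + alpha*d + alpha^2*s
    meets the hyperplane {y : a.y = b}.

    Substituting the arc into the constraint gives the scalar quadratic
        (a.s) * alpha^2 + (a.d) * alpha + (a.x - b) = 0.
    """
    c2, c1, c0 = a @ s, a @ d, a @ x - b
    if abs(c2) < 1e-15:                 # arc is affine along this normal
        if abs(c1) < 1e-15:
            return []                   # arc parallel to the hyperplane
        roots = [-c0 / c1]
    else:
        disc = c1 * c1 - 4.0 * c2 * c0
        if disc < 0:
            return []                   # arc never crosses the hyperplane
        sq = np.sqrt(disc)
        roots = [(-c1 - sq) / (2 * c2), (-c1 + sq) / (2 * c2)]
    return sorted(r for r in roots if r >= 0)  # only forward steps matter
```

Because each constraint contributes at most two candidate roots, the maximum feasible step along the arc is found by taking the smallest nonnegative root over all active-set candidates.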
A Nonmonotone Truncated Newton-Krylov Method Exploiting Negative Curvature Directions, for Large Scale Unconstrained Optimization: Complete Results
Giovanni Fasano & Stefano Lucidi