Results 1–10 of 38
R.J.: Interior-point methods for nonconvex nonlinear programming: orderings and higher-order methods
Mathematical Programming Ser. B, 2000
Abstract

Cited by 117 (8 self)
Abstract. In this paper, we present the formulation and solution of optimization problems with complementarity constraints using an interior-point method for nonconvex nonlinear programming. We identify possible difficulties that could arise, such as unbounded faces of dual variables, linear dependence of constraint gradients and initialization issues. We suggest remedies. We include encouraging numerical results on the MacMPEC test suite of problems.
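The interior-point approach named in this abstract can be illustrated minimally by the classical log-barrier scheme. The sketch below is not the paper's MPEC formulation; it is a toy 1-D problem with an assumed barrier-shrinking schedule, meant only to show the central idea of replacing an inequality constraint by a logarithmic barrier term and driving the barrier parameter to zero.

```python
def barrier_solve(mu0=1.0, shrink=0.2, tol=1e-8):
    """Log-barrier interior-point sketch for: min (x-2)^2  s.t.  x <= 1.

    For each barrier parameter mu we minimize
        phi(x) = (x-2)^2 - mu*log(1-x)
    with damped Newton steps, then shrink mu. As mu -> 0 the barrier
    minimizers approach the constrained solution x* = 1.
    """
    x, mu = 0.0, mu0              # start strictly inside the feasible region
    while mu > 1e-10:
        for _ in range(50):       # inner Newton loop on phi
            g = 2*(x - 2) + mu/(1 - x)        # phi'(x)
            h = 2 + mu/(1 - x)**2             # phi''(x) > 0
            step = -g/h
            while x + step >= 1:              # damp the step to stay interior
                step *= 0.5
            x += step
            if abs(g) < tol:
                break
        mu *= shrink              # tighten the barrier and re-center
    return x

x_star = barrier_solve()          # approaches the constrained minimizer x* = 1
```

The damping loop is the usual fraction-to-the-boundary safeguard: a full Newton step that would leave the interior is halved until it stays strictly feasible.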
Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function and . . .
, 2009
Adaptive cubic overestimation methods for unconstrained optimization
Abstract

Cited by 9 (3 self)
An Adaptive Cubic Overestimation (ACO) algorithm for unconstrained optimization, generalizing a method due to Nesterov & Polyak (Math. Programming 108, 2006, pp 177–205), is proposed. At each iteration of Nesterov & Polyak’s approach, the global minimizer of a local cubic overestimator of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is Lipschitz continuous and its Lipschitz constant is available. The twin requirements of global model optimality and the availability of Lipschitz constants somewhat limit the applicability of such an approach, particularly for large-scale problems. However the promised powerful worst-case theoretical guarantees prompt us to investigate variants in which estimates of the required Lipschitz constant are refined and in which computationally viable approximations to the global model minimizer are sought. We show that the excellent global and local convergence properties and worst-case iteration complexity bounds obtained by Nesterov & Polyak are retained, and sometimes extended to a wider class of problems, by our ACO approach. Numerical experiments with small-scale test problems from the CUTEr set show superior performance of the ACO algorithm when compared to a trust-region implementation.
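The adaptive cubic regularization loop this abstract describes can be sketched in one dimension, where the cubic model's global minimizer has a closed form. This is an illustrative sketch, not the authors' ACO implementation: the acceptance threshold, the σ update factors, and the test function are all assumptions.

```python
import math

def cubic_step(g, h, sigma):
    """Global minimizer of the 1-D cubic model g*s + h*s^2/2 + sigma*|s|^3/3.

    Setting the derivative to zero on the side opposite to g gives a
    quadratic in |s|; the discriminant is nonnegative even when h < 0,
    which is exactly why cubic regularization handles nonconvexity.
    """
    disc = math.sqrt(h*h + 4.0*sigma*abs(g))
    return -math.copysign((disc - h) / (2.0*sigma), g)

def aco_1d(f, grad, hess, x, sigma=1.0, tol=1e-8, max_iter=200):
    """Adaptive cubic regularization sketch: accept the step and shrink
    sigma when the model predicts the actual decrease well; otherwise
    reject the step and inflate sigma."""
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        h = hess(x)
        s = cubic_step(g, h, sigma)
        pred = -(g*s + 0.5*h*s*s + sigma*abs(s)**3/3.0)  # model decrease > 0
        rho = (f(x) - f(x + s)) / pred if pred > 0 else -1.0
        if rho > 0.1:                    # successful step: accept, relax sigma
            x += s
            sigma = max(1e-8, 0.5*sigma)
        else:                            # unsuccessful: increase regularization
            sigma *= 2.0
    return x

# Nonconvex example: f(x) = (x^2 - 1)^2 has minimizers at x = +/-1.
f = lambda x: (x*x - 1.0)**2
g = lambda x: 4.0*x*(x*x - 1.0)
H = lambda x: 12.0*x*x - 4.0
x_min = aco_1d(f, g, H, x=3.0)   # converges to the minimizer at 1
```

Adapting σ plays the role of the unknown Hessian Lipschitz constant: unsuccessful steps reveal that the cubic model underestimates the function and force σ up, mirroring the refinement of Lipschitz estimates the abstract mentions.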
On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization
, 2010
Abstract

Cited by 6 (1 self)
Optimal Newton-type methods for nonconvex smooth optimization problems
, 2011
Abstract

Cited by 2 (1 self)
We consider a general class of second-order iterations for unconstrained optimization that includes regularization and trust-region variants of Newton's method. For each method in this class, we exhibit a smooth, bounded-below objective function, whose gradient is globally Lipschitz continuous within an open convex set containing any iterates encountered and whose Hessian is α-Hölder continuous (for given α ∈ [0,1]) on the path of the iterates, for which the method in question takes at least ⌊ε^(−(2+α)/(1+α))⌋ function evaluations to generate a first iterate whose gradient is smaller than ε in norm. This provides a lower bound on the evaluation complexity of second-order methods in our class when applied to smooth problems satisfying our assumptions. Furthermore, for α = 1, this lower bound is of the same order in ε as the upper bound on the evaluation complexity of cubic regularization, thus implying that cubic regularization has optimal worst-case evaluation complexity within our class of second-order methods.
Nonlinear Stepsize Control, Trust Regions and Regularizations for Unconstrained Optimization
, 2008
Abstract

Cited by 2 (0 self)
A general class of algorithms for unconstrained optimization is introduced, which subsumes the classical trust-region algorithm and two of its newer variants, as well as the cubic and quadratic regularization methods. A unified theory of global convergence to first-order critical points is then described for this class. An extension to projection-based trust-region algorithms for nonlinear optimization over convex sets is also presented.
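The classical trust-region algorithm subsumed by this framework can be sketched in one dimension, where the constrained model minimizer is either the Newton step or a boundary step. The radius-update thresholds (0.75, 0.25, 0.1) and factors below are conventional choices, not the paper's; the test function is an illustrative assumption.

```python
def trust_region_1d(f, grad, hess, x, delta=1.0, tol=1e-8, max_iter=200):
    """Classical trust-region sketch (1-D): minimize the quadratic model
    on |s| <= delta, then grow or shrink delta from the agreement ratio
    between actual and predicted decrease."""
    for _ in range(max_iter):
        g, h = grad(x), hess(x)
        if abs(g) < tol:
            break
        # Model minimizer on [-delta, delta]:
        if h > 0 and abs(g/h) <= delta:
            s = -g/h                        # interior Newton step
        else:
            s = -delta if g > 0 else delta  # step to the boundary
        pred = -(g*s + 0.5*h*s*s)           # predicted decrease (> 0)
        rho = (f(x) - f(x + s)) / pred if pred > 0 else -1.0
        if rho > 0.75 and abs(s) == delta:
            delta *= 2.0                    # very successful: expand region
        elif rho < 0.25:
            delta *= 0.5                    # poor agreement: shrink region
        if rho > 0.1:
            x += s                          # accept the step
    return x

# Same nonconvex example: f(x) = (x^2 - 1)^2, minimizers at x = +/-1.
f = lambda x: (x*x - 1.0)**2
g = lambda x: 4.0*x*(x*x - 1.0)
H = lambda x: 12.0*x*x - 4.0
x_min = trust_region_1d(f, g, H, x=3.0)
```

The unifying observation of the abstract is visible here: the trust radius δ plays the same stepsize-control role as the regularization weight σ in cubic or quadratic regularization, both being adapted from the same actual-versus-predicted decrease ratio.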
Edinburgh Research Explorer

Optimal Newton-type methods for nonconvex smooth optimization problems. Citation for published version: Cartis, C, Gould, NIM & Toint, PL 2011.
Algebraic rules for quadratic regularization of Newton's method
Abstract
In this work we propose a class of quasi-Newton methods to minimize a twice differentiable function with Lipschitz continuous Hessian. These methods are based on the quadratic regularization of Newton's method, with algebraic explicit rules for computing the regularizing parameter. The convergence properties of this class of methods are analysed. We show that if the sequence generated by the algorithm converges then its limit point is stationary. We also establish local quadratic convergence in a neighborhood of a stationary point with positive definite Hessian. Encouraging preliminary numerical experiments are presented.
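A quadratically regularized Newton step of the kind this abstract describes can be sketched as follows. The specific rule λ = √(L·|f′(x)|) is a common algebraic choice of this type, used here as an assumption; the paper's precise rules, and the test function, differ.

```python
import math

def reg_newton_1d(grad, hess, x, L=1.0, tol=1e-10, max_iter=100):
    """Quadratic regularization of Newton's method, 1-D sketch:
    the step solves (f''(x) + lam) * s = -f'(x), with lam set by an
    explicit algebraic rule rather than a line search or trust region.
    As the gradient vanishes, lam -> 0 and the iteration approaches a
    pure Newton step, giving fast local convergence."""
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        lam = math.sqrt(L * abs(g))     # algebraic regularization rule
        x -= g / (hess(x) + lam)        # regularized Newton step
    return x

# Convex example: f(x) = x^2 + e^x, minimized where f'(x) = 2x + e^x = 0.
g = lambda x: 2*x + math.exp(x)
H = lambda x: 2 + math.exp(x)
root = reg_newton_1d(g, H, x=2.0)       # stationary point near -0.35
```

Tying λ to the gradient norm is what makes the rule "algebraic": the regularization weight is computed in closed form from quantities already at hand, with no inner iteration to select it.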