Results 1 - 5 of 5
Newton's Method for Large Bound-Constrained Optimization Problems
 SIAM Journal on Optimization
, 1998
Abstract

Cited by 74 (4 self)
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems, and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
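The abstract above describes a trust-region Newton method that respects the geometry of the box constraints. A minimal sketch of one ingredient of such methods, a projected Cauchy step clipped to the trust region (the helper name and the omitted Newton refinement on the free variables are illustrative, not the paper's algorithm):

```python
import numpy as np

def projected_cauchy_step(x, g, H, lo, hi, delta):
    """One projected Cauchy step on a box (hypothetical helper).

    Moves along -g for the quadratic model 0.5 s'Hs + g's, clips the
    step to the trust region of radius delta, then projects the trial
    point onto the bounds [lo, hi]. A full method would refine this
    with a Newton step on the free variables.
    """
    # Unconstrained Cauchy step length along -g for the quadratic model.
    gHg = g @ H @ g
    t = (g @ g) / gHg if gHg > 0 else 1.0
    step = -t * g
    # Respect the trust region, then project onto the box.
    norm = np.linalg.norm(step)
    if norm > delta:
        step *= delta / norm
    return np.clip(x + step, lo, hi) - x

# Example: quadratic model with bounds [0, 2]^2 and radius 0.5.
H = np.array([[2.0, 0.0], [0.0, 4.0]])
g = np.array([1.0, -2.0])
x0 = np.array([1.0, 1.0])
s = projected_cauchy_step(x0, g, H, 0.0, 2.0, 0.5)
```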
A Convergent Infeasible Interior-Point Trust-Region Method for Constrained Minimization
 SIAM Journal on Optimization
, 1999
Abstract

Cited by 10 (0 self)
We study an infeasible interior-point trust-region method for constrained minimization. This method uses a logarithmic-barrier function for the slack variables and updates the slack variables using a second-order correction. We show that if a certain set containing the iterates is bounded and the origin is not in the convex hull of the nearly active constraint gradients everywhere on this set, then any cluster point of the iterates is a first-order stationary point. If the cluster point satisfies an additional assumption (which holds when the constraints are linear or when the cluster point satisfies strict complementarity and a local error bound holds), then it is a second-order stationary point. Key words. Nonlinear program, logarithmic-barrier function, interior-point method, trust-region strategy, first- and second-order stationary points, semidefinite programming. 1 Introduction We consider the nonlinear program with inequality constraints: minimize f(x) subject to g(x) = [g_1(x) ... g_m(...
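The logarithmic-barrier slack formulation mentioned above can be illustrated with a short sketch: for min f(x) s.t. g(x) <= 0, one introduces slacks s > 0 with g(x) + s = 0 and minimizes a barrier merit. The function and problem below are illustrative, not the paper's specific method:

```python
import numpy as np

def barrier_merit(f, x, s, mu):
    """Logarithmic-barrier merit for the slack formulation (sketch).

    For min f(x) s.t. g(x) <= 0, write g(x) + s = 0 with s > 0 and
    minimize f(x) - mu * sum(log(s)). The barrier keeps the slacks
    strictly positive as mu is driven to zero.
    """
    assert np.all(s > 0), "slacks must stay strictly positive"
    return f(x) - mu * np.sum(np.log(s))

# Toy problem: min x^2 s.t. 1 - x <= 0 (i.e., x >= 1).
f = lambda x: float(x[0] ** 2)
g = lambda x: np.array([1.0 - x[0]])
x = np.array([1.5])
s = -g(x)          # feasible slack: g(x) + s = 0  ->  s = 0.5
val = barrier_merit(f, x, s, mu=0.1)
```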
Convergence to Second Order Stationary Points in Inequality Constrained Optimization
 Mathematics of Operations Research
, 1998
Abstract

Cited by 8 (1 self)
We propose a new algorithm for the nonlinear inequality constrained minimization problem, and prove that it generates a sequence converging to points satisfying the KKT second-order necessary conditions for optimality. The algorithm is a line search algorithm using directions of negative curvature, and it can be viewed as a nontrivial extension of corresponding known techniques from unconstrained to constrained problems. The main tools employed in the definition and in the analysis of the algorithm are a differentiable exact penalty function and results from the theory of LC¹ functions. Key Words: Inequality constrained optimization, KKT second-order necessary conditions, penalty function, LC¹ function, negative curvature direction. 1 Introduction We are concerned with the inequality constrained minimization problem (P) min f(x) s.t. g(x) ≤ 0, where f : ℝⁿ → ℝ and g : ℝⁿ → ℝᵐ are three times continuously differentiable. Our aim is to develop an algorithm that g...
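A direction of negative curvature, the key object in the abstract above, is any d with d'Hd < 0 for the Hessian H. A minimal sketch of extracting one via a dense eigendecomposition (the helper is illustrative; large-scale codes would use a Lanczos-type iteration instead):

```python
import numpy as np

def negative_curvature_direction(H, tol=1e-8):
    """Return a unit direction d with d'Hd < 0 if one exists, else None.

    Illustrative sketch: takes the eigenvector of the smallest
    eigenvalue of the (symmetric) Hessian H. In a line search method
    this direction would also be oriented to be a descent direction.
    """
    w, V = np.linalg.eigh(H)       # eigenvalues in ascending order
    if w[0] < -tol:
        return V[:, 0]
    return None

# A saddle-point Hessian: curvature -1 along the second coordinate.
H = np.array([[2.0, 0.0], [0.0, -1.0]])
d = negative_curvature_direction(H)
```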
Second-order negative-curvature methods for box-constrained and general constrained optimization
, 2009
Abstract

Cited by 6 (0 self)
A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples showing that the new method converges to second-order stationary points in situations in which first-order methods fail are exhibited.
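The PHR augmented Lagrangian named above has a standard form for inequality constraints g(x) <= 0; a minimal sketch of its merit function and first-order multiplier update (the toy problem and function names are illustrative, not the paper's implementation):

```python
import numpy as np

def phr_augmented_lagrangian(f, g, x, lam, rho):
    """PHR augmented Lagrangian for min f(x) s.t. g(x) <= 0 (sketch).

    L(x, lam, rho) = f(x) + (1/(2 rho)) * sum( max(0, lam + rho g(x))^2 - lam^2 )
    """
    gx = g(x)
    penalty = np.maximum(0.0, lam + rho * gx) ** 2 - lam ** 2
    return f(x) + np.sum(penalty) / (2.0 * rho)

def phr_multiplier_update(g, x, lam, rho):
    """First-order multiplier update used by PHR-type methods."""
    return np.maximum(0.0, lam + rho * g(x))

# Toy problem: min x^2 s.t. 1 - x <= 0; the solution is x = 1.
f = lambda x: float(x[0] ** 2)
g = lambda x: np.array([1.0 - x[0]])
lam = phr_multiplier_update(g, np.array([0.9]), np.array([1.8]), rho=10.0)
```

At a strictly feasible point with zero multipliers the penalty term vanishes and the merit reduces to f(x).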
Abstract
We present a general arc search algorithm for linearly constrained optimization. The method constructs and searches along smooth arcs that satisfy a small and practical set of properties. An active-set strategy is used to manage linear inequality constraints. When second derivatives are used, the method is shown to converge to a second-order critical point and have a quadratic rate of convergence under standard conditions. The theory is applied to the methods of line search, curvilinear search, and modified gradient flow that have previously been proposed for unconstrained problems. A key issue when generalizing unconstrained methods to linearly constrained problems using an active-set strategy is the complexity of how the arc intersects hyperplanes. We introduce a new arc that is derived from the regularized Newton equation. Computing the intersection between this arc and a linear constraint reduces to finding the roots of a quadratic polynomial. The new arc scales to large problems, does not require modification to the Hessian, and is rarely dependent on the scaling of directions of negative curvature. Numerical experiments show the effectiveness of this arc search method on problems from the CUTEr test set and on a specific class of problems for which identifying negative curvature is critical. A second set of experiments demonstrates that when using SR1 quasi-Newton updates, this arc search method is competitive with a line search method using ...
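The reduction to a scalar quadratic claimed in this abstract can be sketched for any arc that is quadratic in its parameter: substituting x(t) into a linear constraint a'x = b yields a quadratic in t. The specific regularized-Newton arc of the thesis is not reproduced here; the arc below is a generic illustration:

```python
import numpy as np

def arc_hyperplane_hits(x0, p, q, a, b):
    """Parameters t >= 0 where x(t) = x0 + t*p + t^2*q meets a'x = b.

    Substituting the arc into the constraint gives the scalar quadratic
        (a'q) t^2 + (a'p) t + (a'x0 - b) = 0,
    so the intersection reduces to finding its nonnegative real roots.
    """
    c2, c1, c0 = a @ q, a @ p, a @ x0 - b
    if abs(c2) > 1e-14:
        roots = np.roots([c2, c1, c0])
    else:
        roots = np.array([-c0 / c1])   # arc is linear along a
    return sorted(t.real for t in roots
                  if abs(t.imag) < 1e-10 and t.real >= 0)

# Arc from the origin; constraint x1 = 1 gives 0.5 t^2 + t - 1 = 0.
x0 = np.zeros(2)
p = np.array([1.0, 0.0])
q = np.array([0.5, 0.0])
ts = arc_hyperplane_hits(x0, p, q, np.array([1.0, 0.0]), 1.0)
```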