Results 1–10 of 22
Newton's Method for Large Bound-Constrained Optimization Problems
SIAM Journal on Optimization, 1998
Abstract

Cited by 74 (4 self)
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems, and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
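The projection machinery behind such bound-constrained methods can be illustrated with a minimal fixed-step gradient projection sketch (a hypothetical toy example, not the paper's trust-region Newton method): step along the negative gradient, then project back onto the box.

```python
def project_box(x, lo, hi):
    """Project a point onto the box {lo <= x <= hi}."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def grad(x):
    """Gradient of the toy objective f(x) = (x0 - 3)^2 + (x1 + 1)^2."""
    return [2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)]

def projected_gradient(x, lo, hi, step=0.25, iters=100):
    """Fixed-step gradient projection: x <- P(x - step * grad(x))."""
    for _ in range(iters):
        g = grad(x)
        x = project_box([xi - step * gi for xi, gi in zip(x, g)], lo, hi)
    return x

# Minimize f over the box [0, 2]^2; the unconstrained minimizer (3, -1)
# is infeasible, so the solution sits on the boundary at (2, 0).
x_star = projected_gradient([1.0, 1.0], [0.0, 0.0], [2.0, 2.0])
```

A fixed point of this projection step is exactly a stationary point of the bound-constrained problem, which is the geometric notion of stationarity the convergence theory works with.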
Global Convergence of a Class of Trust Region Algorithms for Optimization Using Inexact Projections on Convex Constraints
1995
Abstract

Cited by 51 (4 self)
A class of trust region based algorithms is presented for the solution of nonlinear optimization problems with a convex feasible set. At variance with previously published analyses of this type, the theory presented allows for the use of general norms. Furthermore, the proposed algorithms do not require the explicit computation of the projected gradient, and can therefore be adapted to cases where the projection onto the feasible domain may be expensive to calculate. Strong global convergence results are derived for the class. It is also shown that the set of linear and nonlinear constraints that are binding at the solution is identified by the algorithms of the class in a finite number of iterations.
Global Methods For Nonlinear Complementarity Problems
Math. Oper. Res., 1994
Abstract

Cited by 28 (1 self)
Global methods for nonlinear complementarity problems either formulate the problem as a system of nonsmooth nonlinear equations or use continuation to trace a path defined by a smooth system of nonlinear equations. We formulate the nonlinear complementarity problem as a bound-constrained nonlinear least squares problem. Algorithms based on this formulation are applicable to general nonlinear complementarity problems, can be started from any nonnegative starting point, and each iteration only requires the solution of systems of linear equations. Convergence to a solution of the nonlinear complementarity problem is guaranteed under reasonable regularity assumptions. The convergence rate is Q-linear, Q-superlinear, or Q-quadratic, depending on the tolerances used to solve the subproblems.
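For a one-dimensional illustration, the complementarity conditions x >= 0, F(x) >= 0, x*F(x) = 0 can be encoded through the min-function residual min(x, F(x)), which vanishes exactly at a solution. The following is a hedged sketch: this particular residual and the damped Newton-type treatment are illustrative assumptions, not necessarily the paper's exact least-squares formulation.

```python
def F(x):
    """A toy NCP map; the complementarity solution is x = 2 (F(2) = 0, x > 0)."""
    return x - 2.0

def residual(x):
    """Min-function reformulation: residual(x) = 0 iff x solves the NCP."""
    return min(x, F(x))

def solve_ncp(x=0.5, iters=50):
    """Newton-type iteration on the (piecewise smooth) residual,
    with iterates kept in the bound x >= 0."""
    for _ in range(iters):
        r = residual(x)
        h = 1e-7
        J = (residual(x + h) - r) / h   # one-sided finite-difference slope
        if abs(J) < 1e-12:
            break
        x = max(x - r / J, 0.0)         # Newton step, projected onto x >= 0
    return x

x_star = solve_ncp()
```

Each step only needs a linear solve (here a scalar division), matching the abstract's claim that iterations reduce to systems of linear equations.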
A new active set algorithm for box constrained optimization
SIAM Journal on Optimization, 2006
Abstract

Cited by 26 (6 self)
An active set algorithm (ASA) for box constrained optimization is developed. The algorithm consists of a nonmonotone gradient projection step, an unconstrained optimization step, and a set of rules for branching between the two steps. Global convergence to a stationary point is established. For a nondegenerate stationary point, the algorithm eventually reduces to unconstrained optimization without restarts. Similarly, for a degenerate stationary point, where the strong second-order sufficient optimality condition holds, the algorithm eventually reduces to unconstrained optimization without restarts. A specific implementation of the ASA is given which exploits the recently developed cyclic Barzilai–Borwein (CBB) algorithm for the gradient projection step and the recently developed conjugate gradient algorithm CG_DESCENT for unconstrained optimization. Numerical experiments are presented using box constrained problems in the CUTEr and MINPACK-2 test problem libraries. Key words: nonmonotone gradient projection, box constrained optimization, active set algorithm,
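The gradient projection building block can be sketched with the plain Barzilai–Borwein step length (a simplified stand-in for the paper's cyclic CBB rule, and without the nonmonotone line-search safeguard a real implementation needs):

```python
def grad(x):
    """Gradient of the toy objective f(x) = x0^2 + 10 * x1^2."""
    return [2.0 * x[0], 20.0 * x[1]]

def project(x):
    """Projection onto the box [0, 1]^2."""
    return [min(max(xi, 0.0), 1.0) for xi in x]

def bb_projected_gradient(x, iters=30):
    g, alpha = grad(x), 1e-3
    for _ in range(iters):
        x_new = project([xi - alpha * gi for xi, gi in zip(x, g)])
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]   # step taken
        y = [a - b for a, b in zip(g_new, g)]   # gradient change
        sy = sum(a * b for a, b in zip(s, y))
        ss = sum(a * a for a in s)
        if sy > 1e-12:
            alpha = ss / sy                      # Barzilai-Borwein step length
        x, g = x_new, g_new
    return x

# Minimize x0^2 + 10*x1^2 over [0, 1]^2; the solution is the corner (0, 0).
x_star = bb_projected_gradient([1.0, 1.0])
```

The BB step adapts to local curvature without any line search, which is why the gradient projection phase is cheap; the full ASA then switches to the conjugate gradient phase once the active face settles down.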
Exposing Constraints
1992
Abstract

Cited by 25 (1 self)
The development of algorithms and software for the solution of large-scale optimization problems has been the main motivation behind the research on the identification properties of optimization algorithms. The aim of an identification result for a linearly constrained problem is to show that if the sequence generated by an optimization algorithm converges to a stationary point, then there is a nontrivial face F of the feasible set such that after a finite number of iterations, the iterates enter and remain in the face F. This paper develops the identification properties of linearly constrained optimization algorithms without any nondegeneracy or linear independence assumptions. The main result shows that the projected gradient converges to zero if and only if the iterates enter and remain in the face exposed by the negative gradient. This result generalizes results of Burke and Moré obtained for nondegenerate cases.
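The two objects this result ties together are standard in the Burke–Moré line of work (a sketch of the usual definitions; the notation is assumed, not quoted from the paper): the face of a convex set Ω exposed by a direction d, and the projected gradient at a feasible point x.

```latex
% Face of \Omega exposed by the direction d: the maximizers of <., d> over \Omega
E[d] \;=\; \arg\max \,\{\, \langle x, d \rangle \;:\; x \in \Omega \,\}

% Projected gradient at x: projection of -\nabla f(x) onto the tangent cone
\nabla_{\Omega} f(x) \;=\; P_{T_{\Omega}(x)}\bigl(-\nabla f(x)\bigr)
```

With these definitions, the main result reads: the iterates eventually enter and remain in the exposed face E[-∇f(x*)] precisely when the projected gradients ∇_Ω f(x_k) converge to zero.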
Automatic Determination of an Initial Trust Region in Nonlinear Programming
 Department of
, 1995
Abstract

Cited by 18 (1 self)
This paper presents a simple but efficient way to find a good initial trust region radius in trust region methods for nonlinear optimization. The method consists of monitoring the agreement between the model and the objective function along the steepest descent direction, computed at the starting point. Further improvements for the starting point are also derived from the information gleaned during the initialization phase. Numerical results on a large set of problems show the impact the initial trust region radius may have on the behaviour of trust region methods and the usefulness of the proposed strategy. Key words: nonlinear optimization, trust region methods, initial trust region, numerical results.
1. Introduction. Trust region methods for unconstrained optimization were first introduced by Powell in [14]. Since then, these methods have enjoyed a good reputation on the basis of their remarkable numerical reliability in conjunction with a sound and complete convergence theory. They have...
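The radius-initialization idea can be sketched as follows (a hypothetical heuristic in the spirit described, not the paper's exact algorithm): grow a trial step along the steepest descent direction for as long as the actual decrease agrees well with the decrease predicted by the local model.

```python
def initial_radius(f, x0, g, t0=0.01, eta=0.75, max_doublings=20):
    """Grow a trial step length t along -g while the linear model,
    which predicts a decrease of t * g^2, matches the actual decrease
    to within a fraction eta. (A real implementation would also shrink
    t when even the first trial fails; omitted for brevity.)"""
    t, radius = t0, t0
    fx = f(x0)
    for _ in range(max_doublings):
        pred = t * g * g           # decrease predicted by the linear model
        ared = fx - f(x0 - t * g)  # actual decrease along steepest descent
        if pred <= 0.0 or ared / pred < eta:
            break
        radius = t                 # the model is trusted out to this length
        t *= 2.0
    return radius

# Toy objective f(x) = x^4 at x0 = 1 (so f'(x0) = 4): the linear model is
# accurate for short steps and degrades as the step grows.
delta0 = initial_radius(lambda x: x ** 4, 1.0, 4.0)
```

The returned length is then a cheap, informed choice for the first trust region radius, in place of an arbitrary default.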
Convergence Properties of an Augmented Lagrangian Algorithm for Optimization with a Combination of General Equality and Linear Constraints
SIAM Journal on Optimization, 1996
Abstract

Cited by 17 (0 self)
We consider the global and local convergence properties of a class of augmented Lagrangian methods for solving nonlinear programming problems. In these methods, linear and more general constraints are handled in different ways. The general constraints are combined with the objective function in an augmented Lagrangian. The iteration consists of solving a sequence of subproblems; in each subproblem the augmented Lagrangian is approximately minimized in the region defined by the linear constraints. A subproblem is terminated as soon as a stopping condition is satisfied. The stopping rules that we consider here encompass practical tests used in several existing packages for linearly constrained optimization. Our algorithm also allows different penalty parameters to be associated with disjoint subsets of the general constraints. In this paper, we analyze the convergence of the sequence of iterates generated by such an algorithm and prove global and fast linear convergence as well as showin...
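The outer iteration described here can be illustrated on a tiny equality-constrained problem (a hypothetical example with no linear constraints, so the subproblem is solved in closed form rather than approximately over a linearly constrained region):

```python
def solve_subproblem(lam, mu):
    """Exact minimizer of the augmented Lagrangian for the toy problem
        minimize x0^2 + x1^2  subject to  c(x) = x0 + x1 - 1 = 0,
    i.e. L(x) = x0^2 + x1^2 + lam*c(x) + (mu/2)*c(x)^2.
    By symmetry x0 = x1 = t with 2t + lam + mu*(2t - 1) = 0."""
    t = (mu - lam) / (2.0 + 2.0 * mu)
    return [t, t]

def augmented_lagrangian(iters=20, mu=10.0):
    lam = 0.0
    for _ in range(iters):
        x = solve_subproblem(lam, mu)
        c = x[0] + x[1] - 1.0   # constraint violation at the subproblem solution
        lam += mu * c           # first-order multiplier update
    return x, lam

# The solution is x = (0.5, 0.5) with multiplier lam = -1; the multiplier
# error contracts by a factor 1/(1 + mu) per outer iteration.
x_star, lam_star = augmented_lagrangian()
```

The linear contraction of the multiplier error visible here is the toy analogue of the fast linear convergence the paper proves; inexact subproblem stopping rules and per-constraint penalty parameters are refinements this sketch omits.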
A Trust Region Strategy for Minimization on Arbitrary Domains
Abstract

Cited by 15 (6 self)
We present a trust region method for minimizing a general differentiable function restricted to an arbitrary closed set. We prove a global convergence theorem. The trust region method defines difficult subproblems that are solvable in some particular cases. We analyze in detail the case where the domain is a Euclidean ball. For this case we present numerical experiments where we consider different Hessian approximations. Key words: Nonlinear programming, Trust-region methods, Global convergence. Abbreviated title: Trust region on arbitrary domains.
April 14, 1994. Work partially supported by FAPESP (Grants 9037246 and 9124413), FINEP, CNPq and FAEP-UNICAMP. This paper was published in Mathematical Programming 68 (1995) 267-302.
Department of Applied Mathematics, State University of Campinas, IMECC-UNICAMP, CP 6065, 13081 Campinas SP, Brazil. E-mail: MARTINEZ@CCVAX.UNICAMP.BR
1. Introduction. The problem considered in this paper is the minimization of a differentiable...
Non-Monotone Trust-Region Methods for Bound-Constrained Semismooth Equations with Applications to Nonlinear Mixed Complementarity Problems
, 1999
Abstract

Cited by 14 (4 self)
We develop and analyze a class of trust-region methods for bound-constrained semismooth systems of equations. The algorithm is based on a simply constrained differentiable minimization reformulation. Our global convergence results are developed in a very general setting that allows for nonmonotonicity of the function values at subsequent iterates. We propose a way of computing trial steps by a semismooth Newton-like method that is augmented by a projection onto the feasible set. Under a Dennis–Moré-type condition we prove that close to a BD-regular solution the trust-region algorithm turns into this projected Newton method, which is shown to converge locally q-superlinearly or quadratically, respectively, depending on the quality of the approximate BD-subdifferentials used. As an important application we discuss in detail how the developed algorithm can be used to solve nonlinear mixed complementarity problems (MCPs). Hereby, the MCP is converted into a bound-constrained semismooth...
Methods for nonlinear constraints in optimization calculations
The State of the Art in Numerical Analysis, 1996