Results 1–10 of 15
Newton's Method for Large Bound-Constrained Optimization Problems
SIAM Journal on Optimization, 1998
Cited by 77 (4 self)

Abstract
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems, and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
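The bound-constrained setting described above can be illustrated with a toy sketch. The code below uses plain projected gradient descent on a quadratic over a box — far simpler than the paper's trust-region Newton method, but it shows how the geometry of the feasible set (here, projection onto the box) drives the iteration. The objective, bounds, and step size are invented for illustration.

```python
# Minimize f(x, y) = (x - 2)^2 + (y + 1)^2 over the box [0, 1] x [0, 1]
# via projected gradient descent. The unconstrained minimizer (2, -1) is
# infeasible, so the iterates converge to the feasible solution (1, 0),
# where both bound constraints are active.

def clip(v, lo, hi):
    """Project a scalar onto the interval [lo, hi]."""
    return max(lo, min(hi, v))

def projected_gradient(x, y, step=0.1, iters=100):
    for _ in range(iters):
        gx, gy = 2.0 * (x - 2.0), 2.0 * (y + 1.0)  # gradient of f
        x = clip(x - step * gx, 0.0, 1.0)           # gradient step, then
        y = clip(y - step * gy, 0.0, 1.0)           # project back onto box
    return x, y

x_star, y_star = projected_gradient(0.5, 0.5)
print(x_star, y_star)  # (1.0, 0.0): both bounds active at the solution
```

Note that the projection, not the constraint representation, is what the iteration actually uses — echoing the abstract's emphasis on the geometry of the feasible set.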
Global Convergence of a Class of Trust Region Algorithms for Optimization Using Inexact Projections on Convex Constraints
1995
Cited by 54 (4 self)

Abstract
A class of trust-region-based algorithms is presented for the solution of nonlinear optimization problems with a convex feasible set. At variance with previously published analyses of this type, the theory presented allows for the use of general norms. Furthermore, the proposed algorithms do not require the explicit computation of the projected gradient, and can therefore be adapted to cases where the projection onto the feasible domain may be expensive to calculate. Strong global convergence results are derived for the class. It is also shown that the set of linear and nonlinear constraints that are binding at the solution is identified by the algorithms of the class in a finite number of iterations.
Convergence Properties of an Augmented Lagrangian Algorithm for Optimization with a Combination of General Equality and Linear Constraints
SIAM Journal on Optimization, 1996
Cited by 18 (0 self)

Abstract
We consider the global and local convergence properties of a class of augmented Lagrangian methods for solving nonlinear programming problems. In these methods, linear and more general constraints are handled in different ways. The general constraints are combined with the objective function in an augmented Lagrangian. The iteration consists of solving a sequence of subproblems; in each subproblem the augmented Lagrangian is approximately minimized in the region defined by the linear constraints. A subproblem is terminated as soon as a stopping condition is satisfied. The stopping rules that we consider here encompass practical tests used in several existing packages for linearly constrained optimization. Our algorithm also allows different penalty parameters to be associated with disjoint subsets of the general constraints. In this paper, we analyze the convergence of the sequence of iterates generated by such an algorithm and prove global and fast linear convergence as well as showing ...
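The augmented Lagrangian iteration described in this abstract can be sketched on a tiny problem. The example below treats a single equality constraint as the "general" constraint; there are no linear constraints left, so each subproblem is an unconstrained minimization that happens to have a closed-form solution. The problem and parameters are invented for illustration, and a real implementation solves the subproblem only approximately, as the abstract describes.

```python
# Augmented Lagrangian sketch for: minimize x^2 + y^2 subject to x + y = 1.
# Each outer iteration minimizes the augmented Lagrangian
#   x^2 + y^2 + lam*(x + y - 1) + (rho/2)*(x + y - 1)^2
# (by symmetry the minimizer has x = y, giving a closed form), then applies
# the first-order multiplier update lam <- lam + rho * (constraint violation).

def augmented_lagrangian(rho=10.0, iters=20):
    lam = 0.0                        # multiplier estimate for x + y = 1
    x = 0.0
    for _ in range(iters):
        # closed-form subproblem solution with x = y:
        x = (rho - lam) / (2.0 + 2.0 * rho)
        c = 2.0 * x - 1.0            # constraint violation x + y - 1
        lam += rho * c               # first-order multiplier update
    return x, lam

x, lam = augmented_lagrangian()
print(x, lam)  # approaches the solution x = y = 0.5 with multiplier -1
```

With fixed penalty rho the multiplier error contracts by a factor 1/(1 + rho) per outer iteration, which is the kind of fast linear convergence the abstract refers to.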
Minimum Principle Sufficiency
Mathematical Programming 57, 1992
Cited by 10 (5 self)

Abstract
We characterize the property of obtaining a solution to a convex program by minimizing over the feasible region a linearization of the objective function at any of its solution points (Minimum Principle Sufficiency). For the case of a monotone linear complementarity problem, this MPS property is completely equivalent to the existence of a nondegenerate solution to the problem. For the case of a convex quadratic program, the MPS property is equivalent to the span of the Hessian of the objective function being contained in the normal cone to the feasible region at any solution point, plus the cone generated by the gradient of the objective function at any solution point. This in turn is equivalent to the quadratic program having a weak sharp minimum. An important application of the MPS property is that minimizing over the feasible region a linearization of the objective function at a point in a neighborhood of a solution point gives an exact solution of the convex program. This leads to finite termination of convergent algorithms that periodically minimize such a linearization.

Key words: minimum principle, convex programs, linear complementarity
Convergence Properties of Minimization Algorithms for Convex Constraints Using a Structured Trust Region
1992
Cited by 4 (0 self)

Abstract
We present in this paper a class of trust region algorithms in which the structure of the problem is explicitly used in the very definition of the trust region itself. This development is intended to reflect the possibility that some parts of the problem may be more "trusted" than others, a commonly occurring situation in large-scale nonlinear applications. After describing the structured trust region mechanism, we prove global convergence for all algorithms in our class. We also prove that, when convex constraints are present, the correct set of such constraints active at the problem's solution is identified by these algorithms after a finite number of iterations.
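The idea of structuring the trust region itself can be sketched very simply: give each variable "block" its own radius, limit each block's step independently, and update each radius from how well the model predicts that block. The toy below uses an exact quadratic model (so every step is accepted and the radii only grow); the objective, initial radii, and update rules are invented for illustration and are far cruder than the paper's mechanism.

```python
# Structured trust-region sketch: each block (here, single coordinates
# x and y) has its own trust radius limiting its own step. The model is
# the exact quadratic f(x, y) = (x - 3)^2 + 10*(y - 1)^2, so each trial
# step is accepted and each radius is expanded.

def structured_tr(x=0.0, y=0.0, rx=0.25, ry=0.25, iters=30):
    for _ in range(iters):
        sx, sy = 3.0 - x, 1.0 - y        # per-block Newton steps
        sx = max(-rx, min(rx, sx))       # clip each step to its
        sy = max(-ry, min(ry, sy))       # block's own radius
        x, y = x + sx, y + sy
        rx, ry = 2.0 * rx, 2.0 * ry      # model agrees: expand radii
    return x, y

print(structured_tr())  # converges to the minimizer (3.0, 1.0)
```

The point is that a badly scaled or less "trusted" block can be held to a small radius without throttling progress in the other blocks.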
Nondegenerate Solutions and Related Concepts in Affine Variational Inequalities
SIAM Journal on Control and Optimization, 1996
Cited by 4 (1 self)

Abstract
The notion of a strictly complementary solution for complementarity problems is extended to that of a nondegenerate solution of variational inequalities. Several equivalent formulations of nondegeneracy are given. In the affine case, an existence theorem for a nondegenerate solution is given in terms of several related concepts which are shown to be equivalent in this context. These include a weak sharp minimum, the minimum principle sufficiency, and error bounds. The gap function associated with the variational inequality plays a central role in this existence theorem.
Geometry and Local Optimality Conditions for Bilevel Programs with Quadratic Strictly Convex Lower Levels
In: D. Du & M. Pardalos (Eds.), Minimax, 1995
Cited by 2 (0 self)

Abstract
This paper describes necessary and sufficient optimality conditions for bilevel programming problems with quadratic strictly convex lower levels. By examining the local geometry of these problems we establish that the set of feasible directions at a given point is composed of a finite union of convex cones. Based on this result, we show that the optimality conditions are simple generalizations of the first- and second-order optimality conditions for mathematical (one-level) programming problems.

1. Introduction. A bilevel program is defined as the problem of minimizing a function f (the upper-level function) in two different vectors of variables x and y subject to (upper-level) constraints, where the vector y is an optimal solution of another constrained optimization problem (the lower-level problem) parameterized by the vector x. References [2] and [17] survey the extensive research that has been done in bilevel programming. It is interesting to note that any minimax problem ...
An Interior Point Algorithm for Linearly Constrained Optimization
SIAM Journal on Optimization, 1992
Cited by 2 (0 self)

Abstract
We describe an algorithm for optimization of a smooth function subject to general linear constraints. An algorithm of the gradient projection class is used, with the important feature that the "projection" at each iteration is performed using a primal-dual interior point method for convex quadratic programming. Convergence properties can be maintained even if the projection is done inexactly in a well-defined way. Higher-order derivative information on the manifold defined by the apparently active constraints can be used to increase the rate of local convergence.

Key words: potential reduction algorithm, gradient projection algorithm, linearly constrained optimization

AMS(MOS) subject classifications: 65K10, 90C30

1. Introduction. We address the problem

    min_x f(x)   subject to   A^T x ≤ b,    (1)

where x ∈ R^n and b ∈ R^m, and f is assumed throughout to be twice continuously differentiable on the level set L = {x | A^T x ≤ b, f(x) ≤ f(x_0)}, where x_0 is some given initial choice ...
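The gradient projection outer loop this abstract describes can be sketched on a tiny instance. The paper computes the projection with a primal-dual interior-point QP solver; the toy below keeps a single half-space so the projection has a closed form. The objective f(x) = (x1 - 2)^2 + (x2 - 2)^2, the constraint x1 + x2 <= 2, and the step size are invented for illustration; the constrained minimizer is (1, 1).

```python
# Gradient projection sketch for  min f(x)  s.t.  a^T x <= b.
# Each iteration takes a gradient step, then projects the trial point
# back onto the feasible half-space (here in closed form; the paper
# instead solves this projection subproblem with an interior-point method).

def project_halfspace(x, a, b):
    """Euclidean projection of x onto {z : a^T z <= b}."""
    viol = a[0] * x[0] + a[1] * x[1] - b
    if viol <= 0.0:
        return x                               # already feasible
    scale = viol / (a[0] ** 2 + a[1] ** 2)
    return (x[0] - scale * a[0], x[1] - scale * a[1])

def gradient_projection(x=(0.0, 0.0), step=0.25, iters=50):
    a, b = (1.0, 1.0), 2.0                     # constraint x1 + x2 <= 2
    for _ in range(iters):
        g = (2.0 * (x[0] - 2.0), 2.0 * (x[1] - 2.0))  # gradient of f
        y = (x[0] - step * g[0], x[1] - step * g[1])  # gradient step
        x = project_halfspace(y, a, b)                # then project
    return x

print(gradient_projection())  # close to the constrained minimizer (1.0, 1.0)
```

Replacing the closed-form projection with an approximate one (as the paper allows) leaves the loop structure unchanged, which is why inexact projections fit naturally into this algorithm class.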
Local Convergence Properties of Two Augmented Lagrangian Algorithms for Optimization with a Combination of General Equality and Linear Constraints
1993
Cited by 2 (0 self)

Abstract
We consider the local convergence properties of the class of augmented Lagrangian methods for solving nonlinear programming problems whose global convergence properties are analyzed by Conn et al. (1993a). In these methods, linear constraints are treated separately from more general constraints. These latter constraints are combined with the objective function in an augmented Lagrangian, while the subproblem then consists of (approximately) minimizing this augmented Lagrangian subject to the linear constraints. The stopping rule that we consider for the inner iteration covers practical tests used in several existing packages for linearly constrained optimization. Our algorithmic class allows several distinct penalty parameters to be associated with different subsets of the general equality constraints. In this paper, we analyze the local convergence of the sequence of iterates generated by this technique and prove fast linear convergence and boundedness of the potentially troublesome ...
Orthogonal Invariance and Identifiability
Cited by 1 (1 self)
Abstract. Orthogonally invariant functions of symmetric matrices often inherit properties from their diagonal restrictions: von Neumann’s theorem on matrix norms is an early example. We discuss the example of “identifiability”, a common property of nonsmooth functions associated with the existence of a smooth manifold of approximate critical points. Identifiability (or its synonym, “partial smoothness”) is the key idea underlying active set methods in optimization. Polyhedral functions, in particular, are always partly smooth, and hence so are many standard examples from eigenvalue optimization.

Key words: eigenvalues, symmetric matrix, partial smoothness, identifiable set, polyhedra, duality

AMS subject classifications: 15A18, 53B25, 15A23, 05A05

1. Introduction. Nonsmoothness is inherently present ...