Results 1–10 of 17
Newton's Method For Large Bound-Constrained Optimization Problems
SIAM Journal on Optimization, 1998
Cited by 82 (4 self)
Abstract:
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems, and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
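In the bound-constrained setting above, the feasible set is a box, so projecting onto it is a componentwise clip, and the bounds active at a point can be read off directly. The following sketch is purely illustrative (the bounds, point, and tolerance are made-up data, not taken from the paper):

```python
def project_onto_bounds(x, lower, upper):
    """P_[l,u](x): project x onto the box {v : lower <= v <= upper}, componentwise."""
    return [min(max(xi, lo), hi) for xi, lo, hi in zip(x, lower, upper)]

def active_bounds(x, lower, upper, tol=1e-12):
    """Indices at which a lower or upper bound is active (binding) at x."""
    return [i for i, (xi, lo, hi) in enumerate(zip(x, lower, upper))
            if xi - lo <= tol or hi - xi <= tol]

# Illustrative box and infeasible point.
lower, upper = [0.0, 0.0, -1.0], [1.0, 2.0, 1.0]
x = project_onto_bounds([1.7, 0.5, -3.0], lower, upper)
print(x)                                # [1.0, 0.5, -1.0]
print(active_bounds(x, lower, upper))   # [0, 2]
```

Identifying this active set in finitely many iterations, without strict complementarity, is exactly the kind of property the convergence theory above addresses.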
Global Convergence of a Class of Trust Region Algorithms for Optimization Using Inexact Projections on Convex Constraints
1995
Cited by 60 (4 self)
Abstract:
A class of trust-region-based algorithms is presented for the solution of nonlinear optimization problems with a convex feasible set. At variance with previously published analyses of this type, the theory presented allows for the use of general norms. Furthermore, the proposed algorithms do not require the explicit computation of the projected gradient, and can therefore be adapted to cases where the projection onto the feasible domain may be expensive to calculate. Strong global convergence results are derived for the class. It is also shown that the set of linear and nonlinear constraints that are binding at the solution is identified by the algorithms of the class in a finite number of iterations.
Convergence Properties of an Augmented Lagrangian Algorithm for Optimization with a Combination of General Equality and Linear Constraints
SIAM Journal on Optimization, 1996
Cited by 19 (0 self)
Abstract:
We consider the global and local convergence properties of a class of augmented Lagrangian methods for solving nonlinear programming problems. In these methods, linear and more general constraints are handled in different ways. The general constraints are combined with the objective function in an augmented Lagrangian. The iteration consists of solving a sequence of subproblems; in each subproblem the augmented Lagrangian is approximately minimized in the region defined by the linear constraints. A subproblem is terminated as soon as a stopping condition is satisfied. The stopping rules that we consider here encompass practical tests used in several existing packages for linearly constrained optimization. Our algorithm also allows different penalty parameters to be associated with disjoint subsets of the general constraints. In this paper, we analyze the convergence of the sequence of iterates generated by such an algorithm and prove global and fast linear convergence as well as showin...
Minimum principle sufficiency
Mathematical Programming 57, 1992
Cited by 12 (5 self)
Abstract:
We characterize the property of obtaining a solution to a convex program by minimizing over the feasible region a linearization of the objective function at any of its solution points (Minimum Principle Sufficiency). For the case of a monotone linear complementarity problem this MPS property is completely equivalent to the existence of a nondegenerate solution to the problem. For the case of a convex quadratic program, the MPS property is equivalent to the span of the Hessian of the objective function being contained in the normal cone to the feasible region at any solution point, plus the cone generated by the gradient of the objective function at any solution point. This in turn is equivalent to the quadratic program having a weak sharp minimum. An important application of the MPS property is that minimizing on the feasible region a linearization of the objective function at a point in a neighborhood of a solution point gives an exact solution of the convex program. This leads to finite termination of convergent algorithms that periodically minimize such a linearization. Key words: Minimum principle, convex programs, linear complementarity.
Nondegenerate Solutions and Related Concepts in Affine Variational Inequalities
SIAM J. on Control and Optimization, 1996
Cited by 7 (1 self)
Abstract:
The notion of a strictly complementary solution for complementarity problems is extended to that of a nondegenerate solution of variational inequalities. Several equivalent formulations of nondegeneracy are given. In the affine case, an existence theorem for a nondegenerate solution is given in terms of several related concepts which are shown to be equivalent in this context. These include a weak sharp minimum, the minimum principle sufficiency, and error bounds. The gap function associated with the variational inequality plays a central role in this existence theorem.
Convergence Properties of Minimization Algorithms for Convex Constraints Using a Structured Trust Region
1992
Cited by 5 (0 self)
Abstract:
We present in this paper a class of trust region algorithms in which the structure of the problem is explicitly used in the very definition of the trust region itself. This development is intended to reflect the possibility that some parts of the problem may be more "trusted" than others, a commonly occurring situation in large-scale nonlinear applications. After describing the structured trust region mechanism, we prove global convergence for all algorithms in our class. We also prove that, when convex constraints are present, the correct set of such constraints active at the problem's solution is identified by these algorithms after a finite number of iterations.
Geometry And Local Optimality Conditions For Bilevel Programs With Quadratic Strictly Convex Lower Levels
In: D. Du & M. Pardalos (Eds.), Minimax, 1995
Cited by 2 (0 self)
Abstract:
This paper describes necessary and sufficient optimality conditions for bilevel programming problems with quadratic strictly convex lower levels. By examining the local geometry of these problems we establish that the set of feasible directions at a given point is composed of a finite union of convex cones. Based on this result, we show that the optimality conditions are simple generalizations of the first- and second-order optimality conditions for mathematical (one-level) programming problems. 1 INTRODUCTION A bilevel program is defined as the problem of minimizing a function f (the upper level function) in two different vectors of variables x and y subject to (upper level) constraints, where the vector y is an optimal solution of another constrained optimization problem (the lower level problem) parameterized by the vector x. References [2] and [17] survey the extensive research that has been done in bilevel programming. It is interesting to note that any minimax probl...
An Interior Point Algorithm For Linearly Constrained Optimization
SIAM J. Optim., 1992
Cited by 2 (0 self)
Abstract:
We describe an algorithm for optimization of a smooth function subject to general linear constraints. An algorithm of the gradient projection class is used, with the important feature that the "projection" at each iteration is performed using a primal-dual interior point method for convex quadratic programming. Convergence properties can be maintained even if the projection is done inexactly in a well-defined way. Higher-order derivative information on the manifold defined by the apparently active constraints can be used to increase the rate of local convergence. Key words: potential reduction algorithm, gradient projection algorithm, linearly constrained optimization. AMS(MOS) subject classifications: 65K10, 90C30. 1. Introduction. We address the problem

    min_x f(x)   subject to   A^T x ≥ b,   (1)

where x ∈ R^n and b ∈ R^m, and f is assumed throughout to be twice continuously differentiable on the level set L = {x | A^T x ≥ b, f(x) ≤ f(x_0)}, where x_0 is some given initial choice...
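The paper computes each projection onto the polyhedron {x : A^T x ≥ b} with an interior-point QP solver; when there is only a single linear constraint a·x ≥ b, the projection has the closed form P(x) = x + max(0, (b − a·x)/‖a‖²) a, which makes the outer gradient-projection loop easy to sketch. The objective and data below are illustrative, not the paper's algorithm or test problem:

```python
def project_halfspace(x, a, b):
    """Project x onto the halfspace {v : a.v >= b} (closed form for one constraint)."""
    ax = sum(ai * xi for ai, xi in zip(a, x))
    t = max(0.0, (b - ax) / sum(ai * ai for ai in a))
    return [xi + t * ai for xi, ai in zip(x, a)]

# Illustrative problem: minimize f(x) = x0^2 + x1^2 subject to x0 + x1 >= 1;
# by symmetry the solution is (0.5, 0.5), on the constraint boundary.
a, b = [1.0, 1.0], 1.0
x = project_halfspace([3.0, -1.0], a, b)   # start from a feasible point
for _ in range(200):
    grad = [2.0 * x[0], 2.0 * x[1]]
    x = [xi - 0.1 * gi for xi, gi in zip(x, grad)]  # gradient step
    x = project_halfspace(x, a, b)                   # project back onto the feasible set
print(x)  # close to [0.5, 0.5]
```

With several constraints the projection is itself a convex QP with no closed form, which is where the paper's inexact interior-point projection comes in.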
Convergence of the Gradient Projection Method for Generalized Convex Minimization
1996
Cited by 2 (1 self)
Abstract:
Abstract. This paper develops convergence theory for the gradient projection method of Calamai and Moré (Math. Programming, vol. 39, 93–116, 1987) which, for minimizing a continuously differentiable function over a nonempty closed convex set Ω, i.e. for the problem min{ f(x) : x ∈ Ω }, generates a sequence x_{k+1} = P_Ω(x_k − α_k ∇f(x_k)), where the stepsize α_k > 0 is chosen suitably. It is shown that, when f(x) is a pseudoconvex (quasiconvex) function, this method has strong convergence results: either x_k → x* and x* is a minimizer (stationary point); or ...
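The gradient projection iteration this abstract analyzes can be sketched concretely. Here Ω is taken to be a box, so P_Ω is a componentwise clip, and a constant stepsize is used; the objective, set, and stepsize are illustrative choices, not data from the paper:

```python
def proj(x, lower, upper):
    """P_Omega for a box Omega = [lower, upper] (componentwise clip)."""
    return [min(max(xi, lo), hi) for xi, lo, hi in zip(x, lower, upper)]

def gradient_projection(grad_f, x, lower, upper, alpha=0.25, iters=100):
    """Iterate x_{k+1} = P_Omega(x_k - alpha_k * grad f(x_k)), constant alpha_k."""
    for _ in range(iters):
        g = grad_f(x)
        x = proj([xi - alpha * gi for xi, gi in zip(x, g)], lower, upper)
    return x

# Illustrative convex f(x) = (x0 - 2)^2 + (x1 + 1)^2 on Omega = [0, 1]^2.
# The unconstrained minimizer (2, -1) is infeasible; the constrained
# minimizer is its projection onto the box, (1, 0).
grad_f = lambda x: [2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)]
x_star = gradient_projection(grad_f, [0.5, 0.5], [0.0, 0.0], [1.0, 1.0])
print(x_star)  # [1.0, 0.0]
```

The convergence theory above concerns exactly this scheme, with suitably chosen (e.g. Armijo-type) stepsizes rather than a fixed one, and with general closed convex Ω.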
A linearly convergent dual-based gradient projection algorithm for quadratically constrained convex minimization
Mathematics of Operations Research, 2006
Cited by 2 (0 self)
Abstract:
Abstract. This paper presents a new dual formulation for quadratically constrained convex programs (QCCP). The special structure of the derived dual problem allows one to apply the gradient projection algorithm to produce a simple explicit method, involving only elementary vector-matrix operations, that is proven to converge at a linear rate.