Results 1–10 of 27
Newton's Method for Large Bound-Constrained Optimization Problems
 SIAM Journal on Optimization
, 1998
Abstract

Cited by 110 (5 self)
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems, and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
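The projected trust-region idea described in this abstract can be sketched in a few lines. The step below is a toy illustration under simplifying assumptions (exact Newton direction, Euclidean trust region, naive projection onto the box); it is not the authors' algorithm, and the function name is hypothetical:

```python
import numpy as np

def tr_newton_box_step(x, grad, hess, lo, hi, radius):
    """One illustrative trust-region step for min f(x) s.t. lo <= x <= hi.
    A sketch, NOT the paper's method: take a Newton (or fallback
    steepest-descent) direction, clip it to the trust region, then
    project the trial point onto the box."""
    try:
        d = -np.linalg.solve(hess, grad)   # Newton direction
    except np.linalg.LinAlgError:
        d = -grad                          # fall back to steepest descent
    norm = np.linalg.norm(d)
    if norm > radius:                      # enforce ||d|| <= radius
        d *= radius / norm
    return np.clip(x + d, lo, hi)          # project trial point onto the box
```

For example, minimizing f(x) = ||x - (2, 2)||^2 over [0, 1]^2 from x = (0.5, 0.5) with radius 1 produces a trial point that is clipped onto the active bounds at (1, 1).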
Global Convergence of a Class of Trust Region Algorithms for Optimization Using Inexact Projections on Convex Constraints
, 1995
Abstract

Cited by 71 (7 self)
A class of trust-region-based algorithms is presented for the solution of nonlinear optimization problems with a convex feasible set. At variance with previously published analyses of this type, the theory presented allows for the use of general norms. Furthermore, the proposed algorithms do not require the explicit computation of the projected gradient, and can therefore be adapted to cases where the projection onto the feasible domain may be expensive to calculate. Strong global convergence results are derived for the class. It is also shown that the set of linear and nonlinear constraints that are binding at the solution is identified by the algorithms of the class in a finite number of iterations.
Identifiable surfaces in constrained optimization
 SIAM Journal on Control and Optimization
, 1993
"... Abstract. The concept of a class-C ..."
Convergence Properties of an Augmented Lagrangian Algorithm for Optimization with a Combination of General Equality and Linear Constraints
 SIAM Journal on Optimization
, 1996
Abstract

Cited by 24 (0 self)
We consider the global and local convergence properties of a class of augmented Lagrangian methods for solving nonlinear programming problems. In these methods, linear and more general constraints are handled in different ways. The general constraints are combined with the objective function in an augmented Lagrangian. The iteration consists of solving a sequence of subproblems; in each subproblem the augmented Lagrangian is approximately minimized in the region defined by the linear constraints. A subproblem is terminated as soon as a stopping condition is satisfied. The stopping rules that we consider here encompass practical tests used in several existing packages for linearly constrained optimization. Our algorithm also allows different penalty parameters to be associated with disjoint subsets of the general constraints. In this paper, we analyze the convergence of the sequence of iterates generated by such an algorithm and prove global and fast linear convergence as well as showing ...
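The outer loop this abstract describes (fold the general constraints into an augmented Lagrangian, approximately minimize, update the multipliers) can be sketched as follows. The function name, the fixed penalty `mu`, and the crude gradient-descent inner solver are all illustrative choices, not the paper's algorithm (which also handles linear constraints separately and varies the penalty parameters):

```python
import numpy as np

def augmented_lagrangian(f_grad, c, c_jac, x0, outer=20, mu=10.0):
    """Toy augmented-Lagrangian loop for min f(x) s.t. c(x) = 0.
    The constraint is folded into
        L(x) = f(x) + lam^T c(x) + (mu/2) ||c(x)||^2,
    approximately minimized by a few gradient steps, after which the
    multiplier is updated by the first-order rule lam <- lam + mu*c(x)."""
    x = np.asarray(x0, float)
    lam = np.zeros(len(c(x)))
    for _ in range(outer):
        for _ in range(200):                       # crude inner minimization
            g = f_grad(x) + c_jac(x).T @ (lam + mu * c(x))
            x -= 0.01 * g
        lam += mu * c(x)                           # multiplier update
    return x, lam
```

On the toy problem min x1^2 + x2^2 subject to x1 + x2 = 1, the iterates approach the solution (0.5, 0.5) with multiplier near -1.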
Minimum principle sufficiency
 Mathematical Programming 57
, 1992
Abstract

Cited by 12 (5 self)
We characterize the property of obtaining a solution to a convex program by minimizing over the feasible region a linearization of the objective function at any of its solution points (Minimum Principle Sufficiency). For the case of a monotone linear complementarity problem this MPS property is completely equivalent to the existence of a nondegenerate solution to the problem. For the case of a convex quadratic program, the MPS property is equivalent to the span of the Hessian of the objective function being contained in the normal cone to the feasible region at any solution point, plus the cone generated by the gradient of the objective function at any solution point. This in turn is equivalent to the quadratic program having a weak sharp minimum. An important application of the MPS property is that minimizing on the feasible region a linearization of the objective function at a point in a neighborhood of a solution point gives an exact solution of the convex program. This leads to finite termination of convergent algorithms that periodically minimize such a linearization. Key words: Minimum principle, convex programs, linear complementarity.
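For a box-shaped feasible region the linearized subproblem has a closed form, which makes the MPS idea above easy to illustrate; the helper name and the box assumption are hypothetical, chosen only to keep the sketch short:

```python
import numpy as np

def minimize_linearization_box(grad, lo, hi):
    """Minimize the linear function grad^T x over the box [lo, hi]
    componentwise: take the lower bound where the gradient is positive
    and the upper bound otherwise (ties resolved to hi here).
    Under the MPS property, calling this with grad = grad f at any point
    near a solution returns an exact solution of the convex program."""
    return np.where(np.asarray(grad) > 0, lo, hi)
```

For example, for f(x) = (x1 + 1)^2 + (x2 - 2)^2 on [0, 1]^2 the solution is (0, 1); linearizing at the nearby point (0.1, 0.9) gives gradient (2.2, -2.2), and the subproblem returns (0, 1) exactly, illustrating the finite-termination mechanism.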
Identifying active manifolds
 Algorithmic Oper. Res
Abstract

Cited by 9 (1 self)
Determining the "active manifold" for a minimization problem is a large step towards solving the problem. Many researchers have studied under what conditions certain algorithms identify active manifolds in a finite number of iterations. We outline a unifying framework encompassing many earlier results on identification via the ...
Convergence Properties of Minimization Algorithms for Convex Constraints Using a Structured Trust Region
, 1992
Abstract

Cited by 8 (0 self)
We present in this paper a class of trust region algorithms in which the structure of the problem is explicitly used in the very definition of the trust region itself. This development is intended to reflect the possibility that some parts of the problem may be more "trusted" than others, a commonly occurring situation in large-scale nonlinear applications. After describing the structured trust region mechanism, we prove global convergence for all algorithms in our class. We also prove that, when convex constraints are present, the correct set of such constraints active at the problem's solution is identified by these algorithms after a finite number of iterations.
Orthogonal Invariance and Identifiability
Abstract

Cited by 8 (2 self)
Abstract. Orthogonally invariant functions of symmetric matrices often inherit properties from their diagonal restrictions: von Neumann's theorem on matrix norms is an early example. We discuss the example of "identifiability", a common property of nonsmooth functions associated with the existence of a smooth manifold of approximate critical points. Identifiability (or its synonym, "partial smoothness") is the key idea underlying active set methods in optimization. Polyhedral functions, in particular, are always partly smooth, and hence so are many standard examples from eigenvalue optimization. Key words: eigenvalues, symmetric matrix, partial smoothness, identifiable set, polyhedra, duality. AMS subject classifications: 15A18, 53B25, 15A23, 05A05.
Nondegenerate Solutions and Related Concepts in Affine Variational Inequalities
 SIAM Journal on Control and Optimization
, 1996
Abstract

Cited by 7 (1 self)
The notion of a strictly complementary solution for complementarity problems is extended to that of a nondegenerate solution of variational inequalities. Several equivalent formulations of nondegeneracy are given. In the affine case, an existence theorem for a nondegenerate solution is given in terms of several related concepts which are shown to be equivalent in this context. These include a weak sharp minimum, the minimum principle sufficiency, and error bounds. The gap function associated with the variational inequality plays a central role in this existence theorem.
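The gap function that this abstract says plays a central role is a standard construction: g(x) = max over feasible y of F(x)^T (x - y), which is nonnegative everywhere and zero exactly at solutions of the variational inequality. For an affine VI over a box it has a closed form; the function name below is illustrative:

```python
import numpy as np

def avi_gap_box(M, q, x, lo, hi):
    """Gap function g(x) = max_{y in [lo,hi]} F(x)^T (x - y) for the
    affine variational inequality with F(x) = M x + q over a box.
    The inner maximum is attained by minimizing F(x)^T y componentwise:
    y = lo where F(x) > 0, y = hi otherwise.
    g(x) >= 0 everywhere, and g(x) = 0 iff x solves the VI."""
    F = M @ x + q
    y_star = np.where(F > 0, lo, hi)   # minimizer of F^T y over the box
    return float(F @ (x - y_star))
```

For instance, with F(x) = x - (2, -1) over [0, 1]^2 the VI solution is the projection (1, 0), where the gap is zero, while infeasible-looking candidates such as (0, 0) have a strictly positive gap.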
Convergence of the Gradient Projection Method for Generalized Convex Minimization
, 1996
Abstract

Cited by 3 (1 self)
Abstract. This paper develops convergence theory for the gradient projection method of Calamai and Moré (Math. Programming, vol. 39, 93–116, 1987) which, for minimizing a continuously differentiable function over a nonempty closed convex set Ω, i.e. min { f(x) : x ∈ Ω }, generates a sequence x_{k+1} = P(x_k − α_k ∇f(x_k)), where the stepsize α_k > 0 is chosen suitably. It is shown that, when f is a pseudoconvex (quasiconvex) function, this method has strong convergence results: either x_k → x* and x* is a minimizer (stationary point); or ...
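The iteration in this abstract is simple to sketch; the version below uses a fixed stepsize alpha and a caller-supplied projection P onto Ω (the paper analyzes more sophisticated stepsize rules, so this is an illustration rather than the analyzed method):

```python
import numpy as np

def gradient_projection(grad, project, x0, alpha=0.1, iters=500):
    """Gradient projection iteration x_{k+1} = P(x_k - alpha * grad f(x_k))
    with a fixed stepsize, as a minimal sketch of the method from the
    abstract.  `project` is the projection P onto the closed convex set."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        x = project(x - alpha * grad(x))
    return x
```

For f(x) = ||x - (2, -1)||^2 over the box [0, 1]^2 (where the projection is a simple clip), the iterates converge to the constrained minimizer (1, 0).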