Results 1 – 8 of 8
LAGRANGE MULTIPLIERS AND OPTIMALITY
, 1993
Abstract

Cited by 114 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a free-standing exposition of basic nonsmooth analysis as motivated by and applied to this subject.
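The classical viewpoint described in the first sentence of the abstract can be made concrete with a worked equation (standard textbook material, stated here for context rather than taken from the paper): for equality constraints, the multipliers turn the optimality conditions into a square system of equations.

```latex
% Minimize f(x) over x in R^n subject to g_i(x) = 0, i = 1, ..., m.
% Introduce auxiliary variables (multipliers) \lambda_i via the Lagrangian:
L(x, \lambda) = f(x) + \sum_{i=1}^{m} \lambda_i \, g_i(x).
% First-order optimality conditions: n + m equations in n + m unknowns.
\nabla_x L(x, \lambda) = \nabla f(x) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x) = 0,
\qquad g_i(x) = 0, \quad i = 1, \dots, m.
```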
Majorizing functions and convergence of the Gauss–Newton method for convex composite optimization
 SIAM J. Optim.
Abstract

Cited by 9 (4 self)
Abstract. We introduce a notion of quasiregularity for points with respect to the inclusion F(x) ∈ C, where F is a nonlinear Fréchet differentiable function from R^v to R^m. When C is the set of minimum points of a convex real-valued function h on R^m and F′ satisfies the L-average Lipschitz condition of Wang, we use the majorizing function technique to establish the semilocal linear/quadratic convergence of sequences generated by the Gauss–Newton method (with quasiregular initial points) for the convex composite function h ∘ F. Results are new even when the initial point is regular and F′ is Lipschitz. Key words. Gauss–Newton method, convex composite optimization, majorizing function, convergence. AMS subject classifications. Primary, 47J15, 65H10; Secondary, 41A29.
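As an illustration of the algorithm analyzed in this abstract, here is a minimal numerical sketch of the Gauss–Newton iteration for the special case h(y) = ½‖y‖² (nonlinear least squares), i.e. the simplest instance of the convex composite function h ∘ F. The function names and toy problem are invented for the sketch; the paper's general setting (arbitrary convex h, quasiregular initial points) is not reproduced here.

```python
import numpy as np

def gauss_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Gauss-Newton for min 0.5 * ||F(x)||^2, the special case h = 0.5||.||^2
    of the composite h o F.  F: residuals R^n -> R^m, J: Jacobian of F."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        Jx = J(x)
        # Solve the linearized subproblem: min_d 0.5 * ||F(x) + J(x) d||^2
        d, *_ = np.linalg.lstsq(Jx, -r, rcond=None)
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

# Toy zero-residual example: recover a in y = exp(a*t) from exact samples
# generated with a = 0.5 (illustrative data, not from the paper).
t = np.array([0.0, 1.0, 2.0])
y = np.exp(0.5 * t)
F = lambda a: np.exp(a[0] * t) - y
J = lambda a: (t * np.exp(a[0] * t)).reshape(-1, 1)
a_star = gauss_newton(F, J, np.array([0.0]))
```

For zero-residual problems like this one, the iteration exhibits the fast local (quadratic-type) convergence that the majorizing-function analysis makes precise under weaker regularity assumptions.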
Amenable functions in optimization
 IN NONSMOOTH OPTIMIZATION METHODS AND APPLICATIONS
, 1992
First- And Second-Order Optimality Conditions For Convex Composite Multi-Objective Optimization
 J. Optim. Theory Appl. 95
, 1997
Abstract

Cited by 7 (0 self)
: Multiobjective optimization is known as a useful mathematical model for investigating real-world problems with conflicting objectives, arising from economics, engineering, and human decision making. In this paper, a convex composite multiobjective optimization problem subject to a closed convex set constraint is studied. New first-order optimality conditions of a weakly efficient solution for the convex composite multiobjective optimization problem are established via scalarization. These conditions are then extended to derive second-order optimality conditions. Key Words: Multiobjective optimization, nonsmooth analysis, convex analysis, sufficient optimality condition. 1. Introduction. This paper considers the following convex composite multiobjective optimization problem (P): V-Minimize (f_1(F_1(x)), …, f_p(F_p(x))) subject to x ∈ C.
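The scalarization route mentioned in the abstract can be illustrated by the standard weighted-sum scheme (a common textbook device; the paper's precise scalarization may differ), which reduces (P) to a scalar convex composite problem:

```latex
% Weighted-sum scalarization of (P): pick weights w_i >= 0, not all zero.
\min_{x \in C} \ \sum_{i=1}^{p} w_i \, f_i(F_i(x)),
\qquad w_i \ge 0, \quad \sum_{i=1}^{p} w_i = 1.
% Under convexity, a weakly efficient solution of (P) solves such a scalar
% problem for some choice of weights, so first-order conditions for the
% scalar problem yield optimality conditions for (P).
```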
Second-order global optimality conditions for convex composite optimization
, 1995
Abstract

Cited by 1 (0 self)
In recent years, second-order sufficient conditions for an isolated local minimizer of convex composite optimization problems have been established. In this paper, second-order optimality conditions are obtained for a global minimizer of convex composite problems with a non-finite valued convex function and a twice strictly differentiable function by introducing a generalized representation condition. This result is applied to a minimization problem with a closed convex set constraint which is shown to satisfy the basic constraint qualification. In particular, second-order necessary and sufficient conditions for a solution of a variational inequality problem with convex composite inequality constraints are obtained.
Convergence analysis of the Gauss–Newton method for convex inclusion
 J. Math. Anal. Appl. (www.elsevier.com/locate/jmaa)
SECOND ORDER OPTIMALITY CONDITIONS BASED ON PARABOLIC SECOND ORDER TANGENT SETS
Abstract
Abstract. In this paper we discuss second order optimality conditions in optimization problems subject to abstract constraints. Our analysis is based on various concepts of second order tangent sets and parametric duality. We introduce a condition, called second order regularity, under which there is no gap between the corresponding second order necessary and second order sufficient conditions. We show that the second order regularity condition always holds in the case of semidefinite programming.
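For orientation, the (inner) parabolic second order tangent set on which this analysis rests is commonly defined as follows (a standard definition from the tangent-set literature, stated here for context rather than quoted from the paper):

```latex
% Inner second order tangent set to C at x in direction d:
T_C^{2}(x, d) = \left\{ w :
  \operatorname{dist}\!\left(x + t d + \tfrac{1}{2} t^{2} w,\; C \right) = o(t^{2}),
  \ t \downarrow 0 \right\}.
% Second order necessary conditions bound the curvature term of the
% Lagrangian over such sets; the "second order regularity" condition of the
% paper is what closes the gap with the sufficient conditions.
```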