Results 1–8 of 8
LAGRANGE MULTIPLIERS AND OPTIMALITY
, 1993
Abstract

Cited by 88 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a freestanding exposition of basic nonsmooth analysis as motivated by and applied to this subject.
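The classical "system of equations" view mentioned in this abstract can be made explicit (standard textbook background, not taken from the paper itself): for minimizing f subject to equality constraints c_i(x) = 0, a stationary point and its multipliers jointly solve

```latex
\nabla f(x) + \sum_{i=1}^{m} \lambda_i \, \nabla c_i(x) = 0,
\qquad
c_i(x) = 0, \quad i = 1, \dots, m,
```

a system of n + m equations in the n + m unknowns (x, λ). The one-sided and nonsmooth generalizations surveyed in the paper relax exactly this picture.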
On the Boundedness of Penalty Parameters in an Augmented Lagrangian Method with Constrained Subproblems
, 2011
Abstract

Cited by 6 (1 self)
Augmented Lagrangian methods are effective tools for solving large-scale nonlinear programming problems. At each outer iteration a minimization subproblem with simple constraints, whose objective function depends on updated Lagrange multipliers and penalty parameters, is approximately solved. When the penalty parameter becomes very large the subproblem is difficult to solve; the effectiveness of this approach therefore hinges on the boundedness of the penalty parameters. In this paper it is proved that, under assumptions more natural than those employed up to now, the penalty parameters remain bounded. To prove the new boundedness result, the original algorithm has been slightly modified. Numerical consequences of the modifications are discussed and computational experiments are presented.
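The outer-iteration structure this abstract describes can be sketched on a toy problem. The following is a generic first-order augmented Lagrangian loop, not the paper's algorithm; the problem, constants, and closed-form inner solve are all illustrative assumptions:

```python
import numpy as np

# Toy problem (hypothetical): minimize f(x) = ||x||^2 subject to x0 + x1 = 1.
# Known solution: x* = (0.5, 0.5) with multiplier lambda* = -1.

def solve_subproblem(lam, rho):
    # The inner subproblem min_x L_A(x, lam, rho) is quadratic here, so its
    # stationarity system (2I + rho * e e^T) x = (rho - lam) e is solved exactly.
    e = np.ones(2)
    A = 2.0 * np.eye(2) + rho * np.outer(e, e)
    b = (rho - lam) * e
    return np.linalg.solve(A, b)

def augmented_lagrangian(rho=10.0, iters=20):
    lam = 0.0
    for _ in range(iters):
        x = solve_subproblem(lam, rho)   # outer iteration: approximate inner solve
        c = x.sum() - 1.0                # constraint violation
        lam = lam + rho * c              # first-order multiplier update
        # (a full method would also adapt rho when c decreases too slowly)
    return x, lam

x, lam = augmented_lagrangian()
print(x, lam)   # approximately [0.5 0.5] and -1.0
```

Each outer iteration minimizes the augmented Lagrangian for fixed λ and ρ, then applies the standard update λ ← λ + ρ c(x); the papers in this listing concern what happens when ρ must grow or the subproblem cannot be solved to tolerance.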
Some properties of the augmented Lagrangian in cone constrained optimization
 MATHEMATICS OF OPERATIONS RESEARCH
, 2004
Abstract

Cited by 4 (2 self)
A large class of optimization problems can be modeled as the minimization of an objective function subject to constraints given in the form of set inclusions. In this paper we discuss augmented Lagrangian duality for such optimization problems. We formulate the augmented Lagrangian dual problems and study conditions ensuring the existence of the corresponding augmented Lagrange multipliers. We also discuss the sensitivity of optimal solutions to small perturbations of the augmented Lagrange multipliers.
Augmented Lagrangian method with nonmonotone penalty parameters for constrained optimization
, 2010
Abstract

Cited by 2 (0 self)
At each outer iteration of standard Augmented Lagrangian methods one tries to solve a box-constrained optimization problem to some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable, so the possibility of ending the subproblem resolution without satisfying the theoretical stopping conditions is not contemplated in the usual convergence theories. In practice, however, one might not be able to solve the subproblem up to the required precision. This may happen for different reasons; one of them is that an excessively large penalty parameter can impair the performance of the box-constrained optimization solver. In this paper a practical strategy for decreasing the penalty parameter in situations like the one mentioned above is proposed. More generally, the different decisions that may be taken when, in practice, one is not able to solve the Augmented Lagrangian subproblem are discussed. As a result, an improved Augmented Lagrangian method is presented, which takes numerical difficulties into account in a satisfactory way while preserving a suitable convergence theory. Numerical experiments are presented.
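A nonmonotone penalty update of the kind this abstract motivates can be sketched as control logic. The function name, constants, and rules below are hypothetical illustrations, not the paper's exact strategy:

```python
def update_penalty(rho, subproblem_solved, c_norm, c_norm_prev,
                   increase=10.0, decrease=0.5, theta=0.5):
    """Illustrative nonmonotone penalty update (hypothetical constants)."""
    if not subproblem_solved:
        # The inner solver failed to reach its tolerance; an excessively
        # large rho is a common culprit, so shrink it and retry.
        return decrease * rho
    if c_norm > theta * c_norm_prev:
        # Feasibility did not improve enough: tighten the penalty.
        return increase * rho
    return rho
```

The nonmonotone aspect is the first branch: unlike the classical rule, ρ is allowed to decrease when the subproblem solver reports failure.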
Augmented Lagrangians in semi-infinite programming
, 2009
Abstract

Cited by 1 (0 self)
We consider the class of semi-infinite programming problems, which has in recent years become a powerful tool for the mathematical modeling of many real-life problems. In this paper we study an augmented Lagrangian approach to semi-infinite problems and present necessary and sufficient conditions for the existence of the corresponding augmented Lagrange multipliers. Furthermore, we discuss two particular cases for the augmenting function: the proximal Lagrangian and the sharp Lagrangian.
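For context, a semi-infinite program has finitely many variables but infinitely many constraints, one per point of an index set T (this generic form is standard background, not taken from the paper itself):

```latex
\min_{x \in \mathbb{R}^n} \; f(x)
\quad \text{subject to} \quad
g(x, t) \le 0 \;\; \text{for all } t \in T,
```

where T is an infinite (typically compact) index set; the augmented Lagrangian approach dualizes this infinite family of constraints.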
Convergence Analysis of the Augmented Lagrangian Method for Nonlinear Second-Order Cone Optimization Problems
, 2006
Abstract

Cited by 1 (0 self)
The paper focuses on the convergence rate of the augmented Lagrangian method for nonlinear second-order cone optimization problems. Under a set of sufficient conditions, including the componentwise strict complementarity condition, the constraint nondegeneracy condition and the second-order sufficient condition, we first study some properties of the augmented Lagrangian and then show that the rate of local convergence of the augmented Lagrangian method is proportional to 1/τ, where the penalty parameter τ is not less than a threshold τ̂ > 0.
Convergence Results on Proximal Method of Multipliers in Nonconvex Programming
, 1998
Abstract
We describe a primal-dual application of the proximal point algorithm to nonconvex minimization problems, motivated by the work of Spingarn and, more recently, by that of Kaplan and Tichatschke on proximal point methodology in nonconvex optimization. This paper discusses some local results in two directions. The first concerns the application of the proximal method of multipliers to a general nonconvex problem under second-order optimality conditions. Second, we show that, without the second-order assumptions, local convergence is obtained for a particular class of nonconvex programs.
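The proximal method of multipliers differs from the plain method of multipliers by an extra proximal term on the primal variable. In its commonly stated form (given here as background, not as the paper's exact scheme) one step reads:

```latex
x_{k+1} \in \arg\min_x \Big\{ L_{\rho}(x, \lambda_k)
  + \tfrac{1}{2\rho} \, \| x - x_k \|^2 \Big\},
\qquad
\lambda_{k+1} = \lambda_k + \rho \, c(x_{k+1}),
```

where L_ρ is the augmented Lagrangian and c collects the constraint functions; the proximal term regularizes the (possibly nonconvex) subproblem, which is what makes local analysis under weaker assumptions possible.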