Results 1–10 of 17
LAGRANGE MULTIPLIERS AND OPTIMALITY
, 1993
Abstract

Cited by 89 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a free-standing exposition of basic nonsmooth analysis as motivated by and applied to this subject.
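The classical view the abstract starts from — multipliers as auxiliary variables that turn first-order optimality into a system of equations — can be made concrete with a toy example (hypothetical, not taken from the paper): minimize f(x, y) = x^2 + y^2 subject to x + y = 1. Stationarity of the Lagrangian plus the constraint gives one linear system in (x, y, lambda):

```python
import numpy as np

# Toy example (not from the paper): minimize f(x, y) = x^2 + y^2
# subject to the single equality h(x, y) = x + y - 1 = 0.
# First-order conditions: grad f = lambda * grad h, and h = 0, i.e.
#   2x - lambda = 0
#   2y - lambda = 0
#   x + y       = 1
A = np.array([[2.0, 0.0, -1.0],
              [0.0, 2.0, -1.0],
              [1.0, 1.0,  0.0]])
b = np.array([0.0, 0.0, 1.0])
x, y, lam = np.linalg.solve(A, b)   # x = y = 0.5, lambda = 1
print(x, y, lam)
```

For equality constraints and smooth data this "formal system of equations" view is complete; the modern developments the paper surveys are what replace it when the side conditions are no longer equations.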
Steering Exact Penalty Methods for Nonlinear Programming
, 2007
Abstract

Cited by 11 (0 self)
This paper reviews, extends and analyzes a new class of penalty methods for nonlinear optimization. These methods adjust the penalty parameter dynamically; by controlling the degree of linear feasibility achieved at every iteration, they promote balanced progress toward optimality and feasibility. In contrast with classical approaches, the choice of the penalty parameter ceases to be a heuristic and is determined, instead, by a subproblem with clearly defined objectives. The new penalty update strategy is presented in the context of sequential quadratic programming (SQP) and sequential linear-quadratic programming (SLQP) methods that use trust regions to promote convergence. The paper concludes with a discussion of penalty parameters for merit functions used in line search methods.
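The core idea — steer the penalty parameter by monitoring feasibility rather than fixing it heuristically — can be sketched on a one-dimensional toy problem (an illustration of the idea only; a grid search stands in for the paper's SQP/SLQP subproblems, and all names are illustrative):

```python
import numpy as np

# Toy steering loop: minimize f(x) = (x - 3)^2 subject to x <= 1,
# using the exact l1 penalty f(x) + nu * max(0, x - 1).
f = lambda x: (x - 3.0) ** 2
viol = lambda x: np.maximum(0.0, x - 1.0)   # constraint violation

grid = np.linspace(-2.0, 4.0, 60001)        # stand-in for a subproblem solver
nu = 1.0                                    # penalty parameter
for _ in range(20):
    x = grid[np.argmin(f(grid) + nu * viol(grid))]
    if viol(x) <= 1e-8:                     # feasible: stop increasing nu
        break
    nu *= 10.0                              # infeasible: steer nu upward
print(nu, x)                                # ends at a feasible minimizer
```

With nu = 1 the penalty minimizer sits at x = 2.5 and violates the constraint; after one increase the minimizer lands on the constrained solution x = 1, which is the balanced progress toward feasibility that the dynamic update is meant to enforce.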
Pseudonormality and a Lagrange Multiplier Theory for Constrained Optimization
, 2000
Abstract

Cited by 9 (2 self)
We consider optimization problems with equality, inequality, and abstract set constraints, and we explore various characteristics of the constraint set that imply the existence of Lagrange multipliers. We prove a generalized version of the Fritz John theorem, and we introduce new and general conditions that extend and unify the major constraint qualifications. Among these conditions, two new properties, pseudonormality and quasinormality, emerge as central within the taxonomy of interesting constraint characteristics. In the case where there is no abstract set constraint, these properties provide the connecting link between the classical constraint qualifications and two distinct pathways to the existence of Lagrange multipliers: one involving the notion of quasiregularity and Farkas' Lemma, and the other involving the use of exact penalty functions. The second pathway also applies in the general case where there is an abstract set constraint.
Infeasibility Detection and SQP Methods for Nonlinear Optimization
, 2008
Abstract

Cited by 4 (2 self)
This paper addresses the need for nonlinear programming algorithms that provide fast local convergence guarantees whether a problem is feasible or infeasible. We present an active-set sequential quadratic programming method derived from an exact penalty approach that adjusts the penalty parameter appropriately to emphasize optimality over feasibility, or vice versa. Conditions are presented under which superlinear convergence is achieved in the infeasible case. Numerical experiments illustrate the practical behavior of the method.
Exact Penalty Methods
 In I. Ciocco (Ed.), Algorithms for Continuous Optimization
, 1994
Abstract

Cited by 3 (1 self)
Exact penalty methods for the solution of constrained optimization problems are based on the construction of a function whose unconstrained minimizing points are also solutions of the constrained problem. In the first part of this paper we recall some definitions concerning exactness properties of penalty functions, of barrier functions, and of augmented Lagrangian functions, and discuss under which assumptions on the constrained problem these properties can be ensured. In the second part of the paper we consider algorithmic aspects of exact penalty methods; in particular we show that, by making use of continuously differentiable functions that possess exactness properties, it is possible to define implementable algorithms that are globally convergent with a superlinear convergence rate towards KKT points of the constrained problem.
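The defining exactness property — unconstrained minimizers of the penalty coincide with constrained solutions once the penalty weight is large enough — can be checked numerically on a toy problem (an illustration of the general idea, using the classical nondifferentiable l1 penalty rather than the continuously differentiable functions the paper studies): for minimize x^2 subject to x >= 1, the optimal multiplier is lambda* = 2, so exactness holds precisely for penalty weights c > 2.

```python
import numpy as np

# Toy check of l1 exact penalty exactness (not from the paper):
# minimize f(x) = x^2 subject to x >= 1; the constrained minimizer
# is x* = 1 with multiplier lambda* = 2, so P_c(x) = x^2 + c*max(0, 1-x)
# has its unconstrained minimizer at x* exactly when c > 2.
grid = np.linspace(-1.0, 3.0, 40001)
penalty_min = lambda c: grid[np.argmin(grid**2 + c * np.maximum(0.0, 1.0 - grid))]

x_small = penalty_min(1.0)   # c below the threshold: minimizer x = 0.5, infeasible
x_large = penalty_min(4.0)   # c above the threshold: recovers x* = 1
print(x_small, x_large)
```

This finite threshold is what distinguishes exact penalties from quadratic ones, whose minimizers approach the constrained solution only as the penalty weight tends to infinity.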
Constrained LAV State Estimation Using Penalty Functions
 IEEE Transactions on Power Systems
, 1997
Abstract

Cited by 1 (0 self)
Inequality constraints are often needed in optimization problems in order to deal with uncertainty. This paper introduces a simple technique that allows enforcement of inequality constraints in l1 norm problems without any modifications to existing programs. The solution of l1 norm problems is required, for example, in implementing LAV (Least Absolute Value) state estimators in electric power systems. The paper shows how LAV state estimators with inequality constraints can be useful for estimating the state of external systems. This is important in a competitive environment where precise information about a utility's neighboring systems may not be available.
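The l1 norm problem underlying LAV estimation has a standard linear programming reformulation, sketched below on hypothetical data (this shows the generic slack-variable LP, not the paper's penalty technique for inequality constraints, which is layered on top of such a solver):

```python
import numpy as np
from scipy.optimize import linprog

# LAV estimation as an LP: minimize sum_i |b_i - a_i^T x| via slack
# variables t_i >= |b_i - a_i^T x|.  Hypothetical data: estimate one
# constant state from three measurements, one of them a gross outlier.
A = np.ones((3, 1))
b = np.array([1.0, 2.0, 10.0])
m, n = A.shape

c = np.concatenate([np.zeros(n), np.ones(m)])          # minimize sum t
A_ub = np.block([[-A, -np.eye(m)], [A, -np.eye(m)]])   # -t <= b - Ax <= t
b_ub = np.concatenate([-b, b])
bounds = [(None, None)] * n + [(0, None)] * m          # x free, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_lav = res.x[:n]
print(x_lav)   # the LAV estimate is the median of b, unmoved by the outlier
```

The estimate is 2.0 (the median) rather than the least-squares mean of 13/3, which is the outlier-rejection property that makes LAV attractive for state estimation.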
Exact Barrier Function Methods For Lipschitz Programs
 Applied Mathematics and Optimization
, 1995
Abstract

Cited by 1 (1 self)
The aim of this paper is twofold. First we consider a class of nondifferentiable penalty functions for constrained Lipschitz programs, and then we show how these penalty functions can be employed to actually solve a constrained Lipschitz program. The penalty functions considered incorporate a barrier term which makes their value go to infinity on the boundary of a perturbation of the feasible set. Exploiting this fact it is possible to prove, under mild compactness and regularity assumptions, a complete correspondence between the unconstrained minimization of the penalty functions and the solutions of the constrained program, thus showing that the penalty functions are exact according to the definition introduced in [17]. Motivated by these results, we then propose some algorithm models and study their convergence properties. We show that, even when the assumptions used to establish the exactness of the penalty functions are not satisfied, every limit point of the sequence produced by a basic algorithm model is an extended stationary point according to the definition given in [8]. Then, based on this analysis and on the one previously carried out on the penalty function, we study the consequences on the convergence properties of increasingly demanding assumptions. In particular we show that under the same assumptions used to establish the exactness properties of the penalty functions, it is possible to guarantee that a limit point at least exists, and that any such limit point is a KKT point for the constrained problem. Key words: Constrained optimization, Nonsmooth optimization, Penalty methods, Barrier functions, Extended stationary points. AMS subject classification: 90C30, 49M30, 65K05.
Exact Penalization Via Dini And Hadamard Conditional Derivatives
Abstract
Exact penalty functions for nonsmooth constrained optimization problems are analyzed by using the notion of (Dini) Hadamard directional derivative with respect to the constraint set. Weak conditions are given guaranteeing equivalence of the sets of stationary, global minimum, and local minimum points of the constrained problem and of the penalty function. Key words: Exact penalty function, Dini (conditional) derivative, Hadamard (conditional) derivative, stationary point, minimum point, nonsmooth analysis.
Lagrange Multipliers
Abstract
set constraints (in addition to equality and inequality constraints) have been considered along two different lines: (a) For convex programs (convex f, g_j, and X, and linear h_i), and in the context of the geometric multiplier theory to be developed in Chapter 3. Here the abstract set constraint does not cause significant complications, because for convex X, the tangent cone is conveniently defined in terms of feasible directions, and nonsmooth analysis issues of nonregularity do not arise. (b) For the nonconvex setting of this chapter, where the abstract set constraint causes significant difficulties because the classical approach that is based on quasiregularity is not fully satisfactory. This has motivated alternative approaches. The paper of Rockafellar [Roc93] and the book of Rockafellar and Wets [RoW98] develop in depth the nonsmooth analysis concepts, such as the normal cone of Mordukhovich [Mor76] and related work by Clarke (see, e.g., the book [Cla83]), in connection with Lagrange multiplier theory. Rockafellar, following the work of Clarke and other researchers, used the Lagrange multiplier definition given in Section 2.3 (what we have called R-multiplier), but he did not develop or use the main ideas of this chapter, i.e., the enhanced Fritz John conditions, informative Lagrange multipliers, and pseudonormality. Instead he assumed the constraint qualification CQ6, which, as discussed in Section 2.4, is restrictive because, when X is regular, it implies that the set of Lagrange multipliers is not only nonempty but also compact. The material of this chapter is based on the author's joint work with Asuman Ozdaglar [BeO00a], [BeO00b]. This work first gave the enhanced Fritz John conditions of Prop. 2.2.1, introduced the notion of an informative Lagrange multiplier...