Results 1 - 10 of 22
LAGRANGE MULTIPLIERS AND OPTIMALITY
, 1993
Abstract

Cited by 89 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a free-standing exposition of basic nonsmooth analysis as motivated by and applied to this subject.
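The "system of equations" view mentioned at the start of this abstract is the classical first-order (Karush-Kuhn-Tucker) system; a standard statement for a problem with equality and inequality constraints, given here for orientation rather than quoted from the paper:

```latex
\min_{x}\; f(x) \quad \text{s.t.}\quad h_j(x) = 0,\; j = 1,\dots,m, \qquad g_i(x) \le 0,\; i = 1,\dots,r,
```
```latex
\nabla f(x) + \sum_{j=1}^{m} \mu_j \nabla h_j(x) + \sum_{i=1}^{r} \lambda_i \nabla g_i(x) = 0,
\qquad \lambda_i \ge 0,\qquad \lambda_i\, g_i(x) = 0 \;\; (i = 1,\dots,r),
```

with multipliers \(\mu_j\) for the equalities and \(\lambda_i\) for the inequalities, plus feasibility of \(x\).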
A Newton Barrier Method for Minimizing a Sum of Euclidean Norms Subject to Linear Equality Constraints
, 1995
Abstract

Cited by 18 (2 self)
An algorithm for minimizing a sum of Euclidean norms subject to linear equality constraints is described. The algorithm is based on a recently developed Newton barrier method for the unconstrained minimization of a sum of Euclidean norms (MSN). The linear equality constraints are handled using an exact L1 penalty function which is made smooth in the same way as the Euclidean norms. It is shown that the dual problem is to maximize a linear objective function subject to homogeneous linear equality constraints and quadratic inequalities. Hence the suggested method also solves such problems efficiently. In fact such a problem from plastic collapse analysis motivated this work. Numerical results are presented for large sparse problems, demonstrating the extreme efficiency of the method.
Keywords: Sum of Norms, Nonsmooth Optimization, Duality, Newton Barrier Method.
AMS(MOS) subject classification: 65K05, 90C06, 90C25, 90C90.
Abbreviated title: A Newton barrier method. Supported by the ...
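The abstract says the exact L1 penalty is "made smooth in the same way as the Euclidean norms." One common smoothing in this family of methods replaces ||v|| by sqrt(||v||^2 + mu^2) for a small mu > 0, which also smooths |t| for a scalar residual t. The sketch below illustrates that idea only; the function and parameter names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def smoothed_norm(v, mu):
    # Smooth surrogate for the Euclidean norm ||v||.
    # As mu -> 0 it converges to ||v||; applied to a scalar
    # residual t it likewise smooths the L1 penalty term |t|.
    v = np.asarray(v, dtype=float)
    return np.sqrt(v @ v + mu * mu)

def smoothed_sum_of_norms(x, terms, mu):
    # Smoothed objective sum_i ||A_i x - b_i||, where `terms`
    # is a list of (A_i, b_i) pairs defining each norm term.
    return sum(smoothed_norm(A @ x - b, mu) for A, b in terms)
```

Because each smoothed term is differentiable for mu > 0, Newton-type steps become available where the raw sum of norms is nonsmooth at points where some residual A_i x - b_i vanishes.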
Steering Exact Penalty Methods for Nonlinear Programming
, 2007
Abstract

Cited by 11 (0 self)
This paper reviews, extends and analyzes a new class of penalty methods for nonlinear optimization. These methods adjust the penalty parameter dynamically; by controlling the degree of linear feasibility achieved at every iteration, they promote balanced progress toward optimality and feasibility. In contrast with classical approaches, the choice of the penalty parameter ceases to be a heuristic and is determined, instead, by a subproblem with clearly defined objectives. The new penalty update strategy is presented in the context of sequential quadratic programming (SQP) and sequential linear-quadratic programming (SLQP) methods that use trust regions to promote convergence. The paper concludes with a discussion of penalty parameters for merit functions used in line search methods.
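The abstract describes choosing the penalty parameter from a subproblem that measures how much linearized feasibility a step achieves. A minimal sketch of that kind of steering logic is shown below; the function name, arguments, and the specific fraction and factor are illustrative assumptions in the spirit of such methods, not the paper's actual update rule.

```python
def steer_penalty(rho, red_penalty_step, red_feasibility_step,
                  frac=0.9, factor=10.0):
    """Return an updated penalty parameter.

    red_feasibility_step: best achievable reduction in linearized
        infeasibility, from a feasibility-only subproblem.
    red_penalty_step: reduction in linearized infeasibility achieved
        by the step minimizing the penalty model with the current rho.

    If the penalty step recovers less than a fraction `frac` of the
    best achievable reduction, rho is increased by `factor` so that
    the next step puts more weight on feasibility.
    """
    if red_feasibility_step > 0.0 and red_penalty_step < frac * red_feasibility_step:
        return factor * rho
    return rho
```

The point of such a rule is exactly the one the abstract makes: the parameter update is driven by measurable progress in a subproblem rather than by an ad hoc schedule.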
Pseudonormality and a Lagrange Multiplier Theory for Constrained Optimization
, 2000
Abstract

Cited by 9 (2 self)
We consider optimization problems with equality, inequality, and abstract set constraints, and we explore various characteristics of the constraint set that imply the existence of Lagrange multipliers. We prove a generalized version of the Fritz John theorem, and we introduce new and general conditions that extend and unify the major constraint qualifications. Among these conditions, two new properties, pseudonormality and quasinormality, emerge as central within the taxonomy of interesting constraint characteristics. In the case where there is no abstract set constraint, these properties provide the connecting link between the classical constraint qualifications and two distinct pathways to the existence of Lagrange multipliers: one involving the notion of quasiregularity and Farkas' Lemma, and the other involving the use of exact penalty functions. The second pathway also applies in the general case where there is an abstract set constraint.
On the Convergence of Successive Linear Programming Algorithms
, 2003
Abstract

Cited by 4 (1 self)
We analyze the global convergence properties of a class of penalty methods for nonlinear programming. These methods include successive linear programming approaches, and more specifically the SLP-EQP approach presented in [1]. Every iteration requires the solution of two trust region subproblems involving linear and quadratic models, respectively. The interaction between the trust regions of these subproblems requires careful consideration. It is shown under mild assumptions that there exists an accumulation point which is a critical point for the penalty function.
Infeasibility Detection and SQP Methods for Nonlinear Optimization
, 2008
Abstract

Cited by 4 (2 self)
This paper addresses the need for nonlinear programming algorithms that provide fast local convergence guarantees regardless of whether a problem is feasible or infeasible. We present an active-set sequential quadratic programming method derived from an exact penalty approach that adjusts the penalty parameter appropriately to emphasize optimality over feasibility, or vice versa. Conditions are presented under which superlinear convergence is achieved in the infeasible case. Numerical experiments illustrate the practical behavior of the method.
Exact Penalty Methods
 In I. Ciocco (Ed.), Algorithms for Continuous Optimization
, 1994
Abstract

Cited by 3 (1 self)
Exact penalty methods for the solution of constrained optimization problems are based on the construction of a function whose unconstrained minimizing points are also solutions of the constrained problem. In the first part of this paper we recall some definitions concerning exactness properties of penalty functions, of barrier functions, of augmented Lagrangian functions, and discuss under which assumptions on the constrained problem these properties can be ensured. In the second part of the paper we consider algorithmic aspects of exact penalty methods; in particular we show that, by making use of continuously differentiable functions that possess exactness properties, it is possible to define implementable algorithms that are globally convergent with superlinear convergence rate towards KKT points of the constrained problem.
1 Introduction
"It would be a major theoretic breakthrough in nonlinear programming if a simple continuously differentiable function could be exhibited with th...
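A standard example of the construction this abstract describes (though nondifferentiable, unlike the continuously differentiable functions the paper's algorithms rely on) is the l1 exact penalty for a problem with equalities h_j and inequalities g_i:

```latex
P_\varepsilon(x) \;=\; f(x) \;+\; \frac{1}{\varepsilon}\Big(\sum_{j} |h_j(x)| \;+\; \sum_{i} \max\{0,\; g_i(x)\}\Big),
```

which is exact in the sense that, under suitable constraint qualifications, for all sufficiently small \(\varepsilon > 0\) its unconstrained minimizers coincide with solutions of the constrained problem.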
Constrained LAV State Estimation Using Penalty Functions
 IEEE Transactions on Power Systems
, 1997
Abstract

Cited by 1 (0 self)
Inequality constraints are often needed in optimization problems in order to deal with uncertainty. This paper introduces a simple technique that allows enforcement of inequality constraints in l1 norm problems without any modifications to existing programs. The solution of l1 norm problems is required, for example, in implementing LAV (Least Absolute Value) state estimators in electric power systems. The paper shows how LAV state estimators with inequality constraints can be useful for estimating the state of external systems. This is important in a competitive environment where precise information about a utility's neighboring systems may not be available.
Exact Barrier Function Methods For Lipschitz Programs
 Applied Mathematics and Optimization
, 1995
Abstract

Cited by 1 (1 self)
The aim of this paper is twofold. First we consider a class of nondifferentiable penalty functions for constrained Lipschitz programs, and then we show how these penalty functions can be employed to actually solve a constrained Lipschitz program. The penalty functions considered incorporate a barrier term which makes their value go to infinity on the boundary of a perturbation of the feasible set. Exploiting this fact it is possible to prove, under mild compactness and regularity assumptions, a complete correspondence between the unconstrained minimization of the penalty functions and the solutions of the constrained program, thus showing that the penalty functions are exact according to the definition introduced in [17]. Motivated by these results, we then propose some algorithm models and study their convergence properties. We show that, even when the assumptions used to establish the exactness of the penalty functions are not satisfied, every limit point of the sequence produced by a basic algorithm model is an extended stationary point according to the definition given in [8]. Then, based on this analysis and on the one previously carried out on the penalty function, we study the consequences on the convergence properties of increasingly demanding assumptions. In particular we show that under the same assumptions used to establish the exactness properties of the penalty functions, it is possible to guarantee that a limit point at least exists, and that any such limit point is a KKT point for the constrained problem.
KEY WORDS: Constrained optimization, Nonsmooth optimization, Penalty methods, Barrier functions, Extended stationary points.
AMS SUBJECT CLASSIFICATION: 90C30, 49M30, 65K05
1 INTRODUCTION
Nondifferentiable penalty functions for smooth nonlinear programming problems h...
Exact Penalization via Dini and Hadamard Conditional Derivatives
Abstract
Exact penalty functions for nonsmooth constrained optimization problems are analyzed by using the notion of (Dini) Hadamard directional derivative with respect to the constraint set. Weak conditions are given guaranteeing equivalence of the sets of stationary, global minimum, and local minimum points of the constrained problem and of the penalty function.
Key Words: Exact penalty function, Dini (conditional) derivative, Hadamard (conditional) derivative, stationary point, minimum point, nonsmooth analysis.
1 Introduction
We consider exact penalty functions for finite dimensional nonsmooth constrained optimization problems. Since the seminal papers [7, 10, 14], published almost thirty years ago, exact penalty functions have been the object of intense and increasingly deeper analysis and they have proved a valuable tool both in the theoretical study of optimization problems and in the development of algorithms for their numerical solution. We refer the interested reader to [1, 5, 6] and refe...
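For orientation, the Hadamard directional derivative with respect to a constraint set C mentioned in this abstract is typically defined along the following lines (one common form of the lower conditional derivative; the paper's precise definition may differ):

```latex
f'_H(x; d) \;=\; \liminf_{\substack{t \downarrow 0,\; d' \to d \\ x + t d' \in C}} \frac{f(x + t d') - f(x)}{t},
```

i.e., directions and step sizes are restricted so that the perturbed point stays in C, which is what makes the derivative "conditional" on the constraint set.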