Results 1–10 of 31
LAGRANGE MULTIPLIERS AND OPTIMALITY, 1993
Abstract
Cited by 120 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and side conditions more complicated than equations, have demanded a deeper understanding of the concept and of how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function, a rule that allows black-and-white constraints to be replaced by penalty expressions. This paper traces these themes in the current theory of Lagrange multipliers, providing along the way a freestanding exposition of basic nonsmooth analysis as motivated by and applied to this subject.
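The classical view described in this abstract, multipliers as auxiliary variables that turn first-order optimality into a system of equations, can be made concrete on a toy equality-constrained quadratic (a minimal illustration of my own, not taken from the paper): minimize x1^2 + x2^2 subject to x1 + x2 = 1. Stationarity of the Lagrangian plus the constraint is a 3x3 linear system in (x1, x2, lambda).

```python
import numpy as np

# min x1^2 + x2^2  s.t.  x1 + x2 = 1.
# Lagrangian L(x, lam) = x1^2 + x2^2 + lam * (x1 + x2 - 1).
# First-order conditions, written formally as equations:
#   2*x1 + lam = 0
#   2*x2 + lam = 0
#   x1  + x2   = 1
K = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
rhs = np.array([0.0, 0.0, 1.0])
x1, x2, lam = np.linalg.solve(K, rhs)
print(x1, x2, lam)  # optimal point (0.5, 0.5) with multiplier -1.0
```

By symmetry the minimizer splits the budget evenly, and the multiplier reports the sensitivity of the optimal value to the right-hand side of the constraint.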
Steering Exact Penalty Methods for Nonlinear Programming, 2007
Abstract
Cited by 17 (0 self)
This paper reviews, extends, and analyzes a new class of penalty methods for nonlinear optimization. These methods adjust the penalty parameter dynamically; by controlling the degree of linear feasibility achieved at every iteration, they promote balanced progress toward optimality and feasibility. In contrast with classical approaches, the choice of the penalty parameter ceases to be a heuristic and is determined, instead, by a subproblem with clearly defined objectives. The new penalty update strategy is presented in the context of sequential quadratic programming (SQP) and sequential linear-quadratic programming (SLQP) methods that use trust regions to promote convergence. The paper concludes with a discussion of penalty parameters for merit functions used in line search methods.
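The dynamic penalty-update idea can be sketched on a one-dimensional toy problem (my own illustration, not the paper's SQP/SLQP subproblem machinery): minimize x^2 subject to x >= 1 via the l1 penalty, increasing the penalty parameter whenever the current penalty minimizer is not sufficiently feasible.

```python
import numpy as np

# Toy l1 exact penalty for  min x^2  s.t.  x >= 1:
#   phi(x; nu) = x^2 + nu * max(0, 1 - x)
# The Lagrange multiplier at the solution x* = 1 is 2, so the
# penalty is exact once nu exceeds 2.
def penalty_min(nu):
    grid = np.linspace(-1.0, 2.0, 3001)  # crude global grid minimization
    phi = grid**2 + nu * np.maximum(0.0, 1.0 - grid)
    return grid[np.argmin(phi)]

nu, tol = 1.0, 1e-8
for _ in range(20):
    x = penalty_min(nu)
    if max(0.0, 1.0 - x) <= tol:  # feasible enough: accept nu
        break
    nu *= 4.0                     # still infeasible: steer nu upward
print(round(nu, 3), round(x, 6))  # -> 4.0 1.0
```

With nu = 1 the penalty minimizer sits at x = 0.5, infeasible, so the parameter is increased; at nu = 4 the minimizer lands on the constraint boundary and the loop stops.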
Pseudonormality and a Lagrange Multiplier Theory for Constrained Optimization, 2000
Abstract
Cited by 14 (2 self)
We consider optimization problems with equality, inequality, and abstract set constraints, and we explore various characteristics of the constraint set that imply the existence of Lagrange multipliers. We prove a generalized version of the Fritz John theorem, and we introduce new and general conditions that extend and unify the major constraint qualifications. Among these conditions, two new properties, pseudonormality and quasinormality, emerge as central within the taxonomy of interesting constraint characteristics. In the case where there is no abstract set constraint, these properties provide the connecting link between the classical constraint qualifications and two distinct pathways to the existence of Lagrange multipliers: one involving the notion of quasiregularity and Farkas' Lemma, and the other involving the use of exact penalty functions. The second pathway also applies in the general case where there is an abstract set constraint.
Restricted-Recourse Bounds for Stochastic Linear Programming. Oper. Res., 1999
Abstract
Cited by 12 (5 self)
We consider the problem of bounding the expected value of a linear program (LP) containing random coefficients, with applications to solving two-stage stochastic programs. An upper bound for minimizations is derived from a restriction of an equivalent, penalty-based formulation of the primal stochastic LP, and a lower bound is obtained from a restriction of a reformulation of the dual. Our “restricted-recourse bounds” are more general and more easily computed than most other bounds because random coefficients may appear anywhere in the LP, neither independence nor boundedness of the coefficients is needed, and the bound is computed by solving a single LP or nonlinear program. Analytical examples demonstrate that the new bounds can be stronger than complementary Jensen bounds. (An upper bound is “complementary” to a lower bound, and vice versa.) In computational work, we apply the bounds to a two-stage stochastic program for semiconductor manufacturing with uncertain demand and production rates. This paper develops new techniques for bounding the expected value of a stochastic linear program, that is, a linear program (LP) some or all of whose coefficients are random. The random coefficients may be discretely or continuously distributed, may be independent or contain dependencies, and may occur anywhere in the objective function, right-hand side, or constraint matrix. Calculating (or estimating) the expected value of a stochastic LP is key to ...
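The complementary Jensen bound mentioned above can be illustrated with a tiny recourse function in closed form (a toy example of my own, not the paper's restricted-recourse construction): the second-stage optimal value of an LP is convex in the random right-hand side, so evaluating it at the mean of the data gives a lower bound on its expected value for a minimization.

```python
import numpy as np

# Second-stage LP:  Q(d) = min { y1 + 3*y2 : y1 + y2 = d, 0 <= y1 <= 1, y2 >= 0 }.
# Its optimal value is piecewise linear and convex in the random demand d:
#   Q(d) = d            for d <= 1   (cheap resource suffices)
#   Q(d) = 1 + 3*(d-1)  for d >  1   (overflow served at cost 3)
def Q(d):
    return d if d <= 1.0 else 1.0 + 3.0 * (d - 1.0)

demands = np.array([0.5, 2.5])             # two equally likely scenarios
jensen_lb = Q(demands.mean())              # Q(E[d]): one deterministic solve
exact = np.mean([Q(d) for d in demands])   # E[Q(d)]: solve every scenario
print(jensen_lb, exact)  # 2.5 3.0 -- Jensen lower-bounds the true expectation
```

The gap between the two numbers (here 0.5) is exactly what sharper bounds, such as the paper's restricted-recourse bounds, aim to close without solving all scenarios.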
Infeasibility Detection and SQP Methods for Nonlinear Optimization, 2008
Abstract
Cited by 7 (2 self)
This paper addresses the need for nonlinear programming algorithms that provide fast local convergence guarantees regardless of whether a problem is feasible or infeasible. We present an active-set sequential quadratic programming method, derived from an exact penalty approach, that adjusts the penalty parameter appropriately to emphasize optimality over feasibility, or vice versa. Conditions are presented under which superlinear convergence is achieved in the infeasible case. Numerical experiments illustrate the practical behavior of the method.
Exact Barrier Function Methods for Lipschitz Programs. Applied Mathematics and Optimization, 1995
Abstract
Cited by 5 (1 self)
The aim of this paper is twofold. First we consider a class of nondifferentiable penalty functions for constrained Lipschitz programs, and then we show how these penalty functions can be employed to actually solve a constrained Lipschitz program. The penalty functions considered incorporate a barrier term which makes their value go to infinity on the boundary of a perturbation of the feasible set. Exploiting this fact, it is possible to prove, under mild compactness and regularity assumptions, a complete correspondence between the unconstrained minimization of the penalty functions and the solutions of the constrained program, thus showing that the penalty functions are exact according to the definition introduced in [17]. Motivated by these results, we then propose some algorithm models and study their convergence properties. We show that, even when the assumptions used to establish the exactness of the penalty functions are not satisfied, every limit point of the sequence produced by a basic algorithm model is an extended stationary point according to the definition given in [8]. Then, based on this analysis and on the one previously carried out on the penalty functions, we study the consequences of increasingly demanding assumptions on the convergence properties. In particular, we show that under the same assumptions used to establish the exactness properties of the penalty functions, it is possible to guarantee that a limit point at least exists, and that any such limit point is a KKT point for the constrained problem.
Key words: constrained optimization, nonsmooth optimization, penalty methods, barrier functions, extended stationary points.
AMS subject classification: 90C30, 49M30, 65K05.
Constrained LAV State Estimation Using Penalty Functions. IEEE Transactions on Power Systems, 1997
Abstract
Cited by 4 (0 self)
Inequality constraints are often needed in optimization problems in order to deal with uncertainty. This paper introduces a simple technique that allows enforcement of inequality constraints in l1-norm problems without any modifications to existing programs. The solution of l1-norm problems is required, for example, in implementing LAV (Least Absolute Value) state estimators in electric power systems. The paper shows how LAV state estimators with inequality constraints can be useful for estimating the state of external systems. This is important in a competitive environment where precise information about a utility's neighboring systems may not be available.
Guaranteed matrix completion via nonconvex factorization. arXiv preprint arXiv:1411.8003, 2014
Abstract
Cited by 4 (0 self)
Matrix factorization is a popular approach to large-scale matrix completion and constitutes a basic component of many solutions to the Netflix Prize competition. In this approach, the unknown low-rank matrix is expressed as the product of two much smaller matrices, so that the low-rank property is automatically fulfilled. The resulting optimization problem, even at huge sizes, can be solved (to stationary points) very efficiently by standard optimization algorithms such as alternating minimization and stochastic gradient descent (SGD). However, due to the nonconvexity caused by the factorization model, there is limited theoretical understanding of whether these algorithms will generate a good solution. In this paper, we establish a theoretical guarantee that the factorization-based formulation correctly recovers the underlying low-rank matrix. In particular, we show that under conditions similar to those in previous works, many standard optimization algorithms converge to the global optima of the factorization-based formulation, thus recovering the true low-rank matrix. To the best of our knowledge, our result is the first to provide a recovery guarantee for many standard algorithms such as gradient descent, SGD, and block coordinate gradient descent. Our result also applies to alternating minimization; a notable difference from previous studies of alternating minimization is that we do not need the resampling scheme (i.e., using independent samples in each iteration).
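The alternating-minimization approach this abstract refers to can be sketched for the rank-one case with plain NumPy (a toy instance with made-up data; the paper's guarantees concern general rank and sampling conditions). Each half-step fixes one factor and solves the resulting least-squares problems in the other factor exactly, using only the observed entries.

```python
import numpy as np

rng = np.random.default_rng(0)
u_true = np.array([1.0, 2.0, 1.0, 0.5])
v_true = np.array([1.0, -1.0, 0.5, 2.0])
M = np.outer(u_true, v_true)        # hidden rank-1 matrix to be completed

mask = np.ones_like(M, dtype=bool)
mask[0, 0] = mask[2, 3] = False     # two entries are never observed

u = rng.normal(size=4)              # random factor initialization
v = rng.normal(size=4)
for _ in range(50):                 # alternating minimization (rank 1)
    for i in range(4):              # fix v: each u[i] is a 1-D least squares
        w = mask[i]
        u[i] = M[i, w] @ v[w] / (v[w] @ v[w])
    for j in range(4):              # fix u: each v[j] likewise
        w = mask[:, j]
        v[j] = M[w, j] @ u[w] / (u[w] @ u[w])

err = np.abs(np.outer(u, v) - M).max()
print(err)  # residual over all 16 entries, including the 2 unobserved ones
```

The individual factors are only recovered up to a scaling (u*c, v/c), but their outer product matches M, missing entries included, which is the completion guarantee at stake.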
Exact Penalty Methods. In I. Ciocco (Ed.), Algorithms for Continuous Optimization, 1994
Abstract
Cited by 3 (1 self)
Exact penalty methods for the solution of constrained optimization problems are based on the construction of a function whose unconstrained minimizing points are also solutions of the constrained problem. In the first part of this paper we recall some definitions concerning exactness properties of penalty functions, barrier functions, and augmented Lagrangian functions, and discuss under which assumptions on the constrained problem these properties can be ensured. In the second part of the paper we consider algorithmic aspects of exact penalty methods; in particular, we show that, by making use of continuously differentiable functions that possess exactness properties, it is possible to define implementable algorithms that are globally convergent with a superlinear convergence rate towards KKT points of the constrained problem.
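The defining property, that the unconstrained minimizers of the penalty function coincide with the constrained solutions, has a sharp threshold that is easy to exhibit on a toy inequality-constrained problem (an illustrative sketch of my own, not from this survey): below the optimal multiplier the l1 penalty minimizer is infeasible, above it the penalty is exact.

```python
import numpy as np

# min (x - 2)^2  s.t.  x <= 1.   Solution x* = 1, multiplier mu* = 2.
# l1 penalty:  phi(x; nu) = (x - 2)^2 + nu * max(0, x - 1)
def argmin_phi(nu):
    x = np.linspace(0.0, 3.0, 3001)  # crude grid minimization
    return x[np.argmin((x - 2.0)**2 + nu * np.maximum(0.0, x - 1.0))]

print(round(argmin_phi(1.0), 6))  # 1.5 -- nu < mu*: minimizer is infeasible
print(round(argmin_phi(3.0), 6))  # 1.0 -- nu > mu*: penalty is exact
```

This is the finite-parameter behavior that distinguishes exact (nondifferentiable) penalties from quadratic ones, which only recover the solution as the parameter tends to infinity.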
Enhanced Optimality Conditions and Exact Penalty Functions. PROCEEDINGS OF ALLERTON CONFERENCE, ALLERTON PARK, 2000
Abstract
Cited by 3 (0 self)
We consider optimization problems with equality, inequality, and abstract set constraints, and we explore various characteristics of the constraint set that imply the existence of Lagrange multipliers. We prove a generalized version of the Fritz John theorem, and we introduce new and general conditions that extend and unify the major constraint qualifications. Among these conditions, a new property, pseudonormality, provides the connecting link between the classical constraint qualifications and the use of exact penalty functions.