Results 1-10 of 57
LAGRANGE MULTIPLIERS AND OPTIMALITY
1993
Cited by 89 (7 self)

Abstract
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a freestanding exposition of basic nonsmooth analysis as motivated by and applied to this subject.
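To make the classical viewpoint in this abstract concrete, here is a minimal sketch (my own illustrative example, not taken from the paper) of first-order optimality conditions written "formally as a system of equations": for an equality-constrained quadratic program, minimizing (1/2)x'Qx subject to Ax = b, stationarity of the Lagrangian plus feasibility form one linear KKT system in (x, lambda).

```python
def solve_kkt(Q, A, b):
    """Assemble and solve the KKT system
        [ Q  A^T ] [ x   ]   [ 0 ]
        [ A   0  ] [ lam ] = [ b ]
    for min 0.5 x'Qx s.t. Ax = b, using plain Gaussian elimination."""
    n, m = len(Q), len(A)
    # Build the (n+m) x (n+m) KKT matrix with the right-hand side appended.
    M = [[0.0] * (n + m + 1) for _ in range(n + m)]
    for i in range(n):
        for j in range(n):
            M[i][j] = float(Q[i][j])
        for k in range(m):
            M[i][n + k] = float(A[k][i])      # A^T block
    for k in range(m):
        for j in range(n):
            M[n + k][j] = float(A[k][j])      # A block
        M[n + k][n + m] = float(b[k])         # right-hand side
    size = n + m
    # Gaussian elimination with partial pivoting.
    for col in range(size):
        piv = max(range(col, size), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, size):
            f = M[r][col] / M[col][col]
            for c in range(col, size + 1):
                M[r][c] -= f * M[col][c]
    # Back substitution.
    sol = [0.0] * size
    for r in range(size - 1, -1, -1):
        s = M[r][size] - sum(M[r][c] * sol[c] for c in range(r + 1, size))
        sol[r] = s / M[r][r]
    return sol[:n], sol[n:]   # primal point x, multipliers lam

# Example: minimize x^2 + y^2 subject to x + y = 1 (so Q = 2I).
# Stationarity gives 2x + lam = 0, 2y + lam = 0; with x + y = 1 the
# solution is x = y = 0.5, lam = -1.
x, lam = solve_kkt([[2.0, 0.0], [0.0, 2.0]], [[1.0, 1.0]], [1.0])
```

The multiplier here is exactly the "auxiliary variable" the abstract describes: it appears only through the system of equations, yet its value carries the sensitivity of the optimal value to the constraint.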
Stabilized Sequential Quadratic Programming
Computational Optimization and Applications, 1998
Cited by 38 (0 self)

Abstract
Recently, Wright proposed a stabilized sequential quadratic programming algorithm for inequality constrained optimization. Assuming the Mangasarian-Fromovitz constraint qualification and the existence of a strictly positive multiplier (but possibly dependent constraint gradients), he proved a local quadratic convergence result. In this paper, we establish quadratic convergence in cases where neither strict complementarity nor the Mangasarian-Fromovitz constraint qualification holds. The constraints on the stabilization parameter are relaxed, and linear convergence is demonstrated when the parameter is kept fixed. We show that the analysis of this method can be carried out using recent results for the stability of variational problems. Key words. Sequential quadratic programming, quadratic convergence, superlinear convergence, degenerate optimization, stabilized SQP, error estimation. To appear in Computational Optimization and Applications. This paper is dedicated to Olvi L. Manga...
A Survey of Subdifferential Calculus with Applications
TMA, 1998
Cited by 14 (6 self)

Abstract
This survey is an account of the current status of subdifferential research. It is intended to serve as an entry point for researchers and graduate students in a wide variety of pure and applied analysis areas who might profitably use subdifferentials as tools.
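As a pointer to the central object of this survey (standard convex-analysis background, not material specific to the paper): for a convex function the subdifferential collects the slopes of all affine minorants that are exact at the point.

```latex
% Subdifferential of a convex f at x (textbook definition):
\[
  \partial f(x) \;=\;
  \bigl\{\, g \in \mathbb{R}^n \;:\;
  f(y) \,\ge\, f(x) + \langle g,\, y - x \rangle
  \ \text{for all } y \in \mathbb{R}^n \,\bigr\}.
\]
% Example: for f(x) = |x| on the real line,
%   \partial f(0) = [-1, 1],   \partial f(x) = \{\operatorname{sign}(x)\}
% for x \neq 0, so the subdifferential is set-valued exactly at the kink.
```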
The LP Dual Active Set Algorithm
1998
Cited by 12 (8 self)

Abstract
An overview is given of a new algorithm, the LP Dual Active Set Algorithm, for solving linear programming problems. In its pure form, the algorithm uses a series of projections to ascend the dual function. These projections can be approximated using proximal techniques, and both iterative and direct methods can be applied to obtain highly accurate, small-norm solutions to both the primal and the dual problem. High Performance Algorithms and Software in Nonlinear Optimization, R. De Leone, A. Murli, P. M. Pardalos, and G. Toraldo, eds., Kluwer, Dordrecht, 1998, pp. 243-254. This research was supported by the National Science Foundation. 1. Introduction. In this paper we give an overview of the LP Dual Active Set Algorithm (LP DASA) for solving a linear programming problem of the form: minimize c^T x subject to Ax = b, l <= x <= u. (1) Here A is an m x n matrix and x is in R^n. The Dual Active Set Algorithm originates from an algorithm to solve dual control problems presented i...
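The dual function that the algorithm ascends can be evaluated cheaply because the inner minimization over the box l <= x <= u separates by coordinate. The sketch below (my own illustrative helper `lp_dual`, not code from the paper) evaluates the Lagrangian dual of problem (1) at a given multiplier vector.

```python
def lp_dual(c, A, b, l, u, lam):
    """Evaluate the LP dual function
        L(lam) = min_{l <= x <= u}  c'x + lam'(b - Ax)
    for problem (1).  With reduced cost r = c - A'lam the inner
    minimization separates: x_j sits at its lower bound when r_j > 0
    and at its upper bound when r_j < 0 (either bound ties at r_j = 0)."""
    n, m = len(c), len(A)
    r = [c[j] - sum(A[i][j] * lam[i] for i in range(m)) for j in range(n)]
    x = [l[j] if r[j] > 0 else u[j] for j in range(n)]
    val = (sum(c[j] * x[j] for j in range(n))
           + sum(lam[i] * (b[i] - sum(A[i][j] * x[j] for j in range(n)))
                 for i in range(m)))
    return val, x

# Tiny instance: minimize x1 + 2*x2 subject to x1 + x2 = 1, 0 <= x <= 1.
# The primal optimum is x = [1, 0] with value 1.  At lam = [1.5] the
# reduced costs are [-0.5, 0.5], the inner minimizer is x = [1, 0], and
# L(lam) = 1 matches the primal value, so lam = [1.5] maximizes the dual.
val, x = lp_dual([1.0, 2.0], [[1.0, 1.0]], [1.0],
                 [0.0, 0.0], [1.0, 1.0], [1.5])
```

The minimizing x read off from the signs of the reduced costs is what an active set strategy tracks: the coordinates pinned at a bound form the active set at the current multiplier.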
Generalized Hessian properties of regularized nonsmooth functions
SIAM Journal on Optimization, 1996
Cited by 12 (4 self)

Abstract
The question of second-order expansions is taken up for a class of functions of importance in optimization, namely Moreau envelope regularizations of nonsmooth functions f. It is shown that when f is prox-regular, which includes convex functions and the extended-real-valued functions representing problems of nonlinear programming, the many second-order properties that can be formulated around the existence and stability of expansions of the envelopes of f or of their gradient mappings are linked by surprisingly extensive lists of equivalences with each other and with generalized differentiation properties of f itself. This clarifies the circumstances conducive to developing computational methods based on envelope functions, such as second-order approximations in nonsmooth optimization and variants of the proximal point algorithm. The results establish that generalized second-order expansions of Moreau envelopes, at least, can be counted on in most situations of interest in finite-dimensional optimization. Keywords. Prox-regularity, amenable functions, primal-lower-nice functions, Hessians, first- and second-order expansions, strict proto-derivatives, proximal mappings, Moreau envelopes, regularization, subgradient mappings, nonsmooth analysis, variational analysis, proto-derivatives, second-order epi-derivatives, Attouch's theorem.
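A small numeric sketch of the regularization being studied (my own toy example, not from the paper): the Moreau envelope e_lam f(x) = min_y f(y) + (y - x)^2 / (2 lam) smooths a nonsmooth f. For f = |.| it is the classical Huber function, which is continuously differentiable even though f has a kink at 0.

```python
def moreau_envelope(f, x, lam, grid):
    """Grid-search approximation of the one-dimensional Moreau envelope
        e_lam f(x) = min_y  f(y) + (y - x)**2 / (2 * lam)."""
    return min(f(y) + (y - x) ** 2 / (2.0 * lam) for y in grid)

# For f = |.| the envelope has the closed Huber form:
#   e_lam f(x) = x**2 / (2*lam)   if |x| <= lam,
#              = |x| - lam / 2    otherwise.
grid = [i / 1000.0 for i in range(-3000, 3001)]
smooth_val = moreau_envelope(abs, 2.0, 1.0, grid)   # Huber form: 2 - 0.5
kink_val = moreau_envelope(abs, 0.25, 1.0, grid)    # Huber form: 0.25**2 / 2
```

The minimizer of the inner problem is the proximal mapping of f, which for the absolute value is soft-thresholding; this is the connection to the proximal point algorithm mentioned in the abstract.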
Partial Linearization Methods in Nonlinear Programming
1993
Cited by 11 (10 self)

Abstract
In this paper, we characterize a class of feasible direction methods in nonlinear programming through the concept of partial linearization of the objective function. Based on a feasible point, the objective is replaced by an arbitrary convex and continuously differentiable function, and the error is taken into account by a first-order approximation of it. A new feasible point is defined through a line search with respect to the original objective, towards the solution of the approximate problem. Global convergence results are given for exact and approximate line searches, and possible interpretations are made. We present some instances of the general algorithm, and discuss extensions to nondifferentiable programming. Key Words. Feasible direction methods, partial linearization, regularization, nondifferentiable programming. 1. Introduction. The purpose of this paper is to unify a number of feasible direction methods in nonlinear programming. Examples of these are the method of Fran...
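One member of the class described above, when the convex model degenerates to the plain linearization, is a conditional-gradient (Frank-Wolfe) style step: solve the linearized subproblem over the feasible set, then line-search toward its solution. A toy sketch over a box (my own illustration, not the paper's algorithm):

```python
def linearized_step(grad, l, u, x):
    """Minimize the linearized objective grad(x)' * y over the box [l, u];
    the minimizer sits at a bound in each coordinate."""
    g = grad(x)
    return [l[j] if g[j] > 0 else u[j] for j in range(len(x))]

def feasible_direction_method(f, grad, l, u, x, iters=50):
    """Repeat: solve the linearized subproblem for y, then line-search the
    ORIGINAL objective f along the feasible direction d = y - x."""
    for _ in range(iters):
        y = linearized_step(grad, l, u, x)
        best_t, best_v = 0.0, f(x)
        for i in range(1, 101):               # crude sampled line search
            t = i / 100.0
            z = [x[j] + t * (y[j] - x[j]) for j in range(len(x))]
            v = f(z)
            if v < best_v:
                best_t, best_v = t, v
        x = [x[j] + best_t * (y[j] - x[j]) for j in range(len(x))]
    return x

# Toy problem: minimize f(x) = (x - 0.7)^2 over [0, 1], starting at 0.
f = lambda x: (x[0] - 0.7) ** 2
grad = lambda x: [2.0 * (x[0] - 0.7)]
x = feasible_direction_method(f, grad, [0.0], [1.0], [0.0])
```

Every iterate stays feasible because the subproblem solution y is feasible and the step is a convex combination of two feasible points, which is the defining feature of feasible direction methods.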
A General Descent Framework For The Monotone Variational Inequality Problem
Mathematical Programming, 1993
Cited by 10 (0 self)

Abstract
We present a framework for descent algorithms that solve the monotone variational inequality problem VIP_v, which consists in finding a solution v in Omega_v which satisfies s(v)^T (u - v) >= 0 for all u in Omega_v. This unified framework includes, as special cases, some well-known iterative methods and equivalent optimization formulations. A descent method is developed for an equivalent general optimization formulation and a proof of its convergence is given. Based on this unified algorithmic framework, we show that a variant of the descent method where each subproblem is only solved approximately is globally convergent under certain conditions. Key words. Variational inequalities, descent methods, optimization. 1. Introduction. In this paper we consider the variational inequality problem (VIP) that consists in finding a vector v in Omega_v such that (VIP_v) s(v)^T (u - v) >= 0 for all u in Omega_v, (1) where u and v are vectors in R^n and s is a mapping from the clos...
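One of the classical iterative methods subsumed by such frameworks is the basic projection method: iterate v <- P_Omega(v - alpha * s(v)), whose fixed points are exactly the solutions of the VIP. A minimal sketch over a box (my own toy example, with an assumed strongly monotone map, not the paper's method):

```python
def vip_projection_method(s, l, u, v, alpha=0.5, iters=200):
    """Projection iteration v <- P_Omega(v - alpha * s(v)) for a VIP over
    the box Omega = [l, u].  A fixed point v* satisfies the variational
    inequality s(v*)'(u - v*) >= 0 for all u in Omega."""
    for _ in range(iters):
        sv = s(v)
        # Projection onto a box is a coordinatewise clip.
        v = [min(max(v[j] - alpha * sv[j], l[j]), u[j])
             for j in range(len(v))]
    return v

# Toy strongly monotone map s(v) = v - 0.3 on Omega = [0, 1]:
# the unique VIP solution is the interior point v* = 0.3, where s(v*) = 0.
v = vip_projection_method(lambda v: [v[0] - 0.3], [0.0], [1.0], [0.9])
```

For this map the iteration contracts with factor |1 - alpha| = 0.5 per step, so 200 iterations reach the solution to machine precision; for merely monotone maps a plain projection step need not converge, which is one motivation for the descent frameworks surveyed in the paper.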
Equivalent subgradient versions of Hamiltonian and Euler-Lagrange equations in variational analysis
SIAM J. Control and Optimization, 1996
Cited by 10 (3 self)

Abstract
Much effort in recent years has gone into generalizing the classical Hamiltonian and Euler-Lagrange equations of the calculus of variations so as to encompass problems in optimal control and a greater variety of integrands and constraints. These generalizations, in which nonsmoothness abounds and gradients are systematically replaced by subgradients, have succeeded in furnishing necessary conditions for optimality which reduce to the classical ones in the classical setting, but important issues have remained unsettled, especially concerning the exact relationship of the subgradient versions of the Hamiltonian equations versus those of the Euler-Lagrange equations. Here it is shown that new, tighter subgradient versions of these equations are actually equivalent to each other. The theory of epi-convergence of convex functions provides the technical basis for this development. Key words. Euler-Lagrange equations, Hamiltonian equations, variational analysis, nonsmooth analysis, subgradients, optimality.
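For reference, the classical equations being generalized here (standard calculus-of-variations material, not the paper's subgradient versions):

```latex
% Euler-Lagrange equation for minimizing J(x) = \int_a^b L(t, x(t), \dot{x}(t))\,dt :
\[
  \frac{d}{dt}\,\nabla_{\dot{x}} L\bigl(t, x(t), \dot{x}(t)\bigr)
  \;=\; \nabla_{x} L\bigl(t, x(t), \dot{x}(t)\bigr).
\]
% With the Hamiltonian defined through the Legendre-Fenchel transform,
%   H(t, x, p) = \sup_{v}\,\{\, \langle p, v \rangle - L(t, x, v) \,\},
% the corresponding Hamiltonian system is
\[
  \dot{x}(t) \;=\; \nabla_p H\bigl(t, x(t), p(t)\bigr),
  \qquad
  \dot{p}(t) \;=\; -\,\nabla_x H\bigl(t, x(t), p(t)\bigr).
\]
```

In the nonsmooth setting the paper studies, the gradients in both displays are replaced by subgradients, and the question is whether the two resulting conditions still determine each other as they do classically.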
Application of the dual active set algorithm to quadratic network optimization
Comput. Optim. Appl., 1993
Cited by 9 (1 self)

Abstract
A new algorithm, the dual active set algorithm, is presented for solving a minimization problem with equality constraints and bounds on the variables. The algorithm identifies the active bound constraints by maximizing an unconstrained dual function in a finite number of iterations. Convergence of the method is established, and it is applied to convex quadratic programming. In its implementable form, the algorithm is combined with the proximal point method. A computational study of large-scale quadratic network problems compares the algorithm to a coordinate ascent method and to conjugate gradient methods for the dual problem. This study shows that combining the new algorithm with the nonlinear conjugate gradient method is particularly effective on difficult network problems from the literature.
Active Set Strategies and the LP Dual Active Set Algorithm
1996
Cited by 9 (4 self)

Abstract
After a general treatment of primal and dual active set strategies, we present the Dual Active Set Algorithm for linear programming and prove its convergence. An efficient implementation is developed using proximal point approximations. This implementation involves a primal/dual proximal iteration similar to one introduced by Rockafellar, and a new iteration based on optimization of a proximal vector parameter. This proximal parameter optimization problem is well conditioned, leading to rapid convergence of the conjugate gradient method, while the original proximal function is terribly conditioned, leading to almost undetectable convergence of the conjugate gradient method. Limits as a proximal scalar parameter tends to zero are evaluated. Intriguing numerical results are presented for Netlib test problems. Key Words. Linear programming, quadratic programming, active sets, dual method, least squares, proximal point, extrapolation, conjugate gradients, successive overrelaxation ...