Results 1 - 5 of 5
LAGRANGE MULTIPLIERS AND OPTIMALITY
, 1993
Cited by 88 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a free-standing exposition of basic nonsmooth analysis as motivated by and applied to this subject.
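The classical view described above — multipliers as auxiliary variables that turn first-order optimality conditions into a system of equations — can be sketched on a toy problem. The instance below is purely illustrative: minimize x'x subject to a single linear equality constraint a'x = b, for which the KKT conditions form a linear system in (x, lambda).

```python
import numpy as np

# Toy equality-constrained problem (illustrative data, not from the paper):
#   min x'x  subject to  a'x = b
# First-order (KKT) conditions: 2x - lam*a = 0 and a'x = b,
# which is a linear system in the unknowns (x, lam).
a = np.array([1.0, 1.0])
b = 1.0

K = np.zeros((3, 3))
K[:2, :2] = 2 * np.eye(2)   # gradient of the objective term
K[:2, 2] = -a               # -lam * gradient of the constraint
K[2, :2] = a                # feasibility row a'x = b
rhs = np.array([0.0, 0.0, b])

sol = np.linalg.solve(K, rhs)
x_opt, lam = sol[:2], sol[2]   # minimizer and its Lagrange multiplier
```

For this instance the minimizer is x = (1/2, 1/2) with multiplier lam = 1; the multiplier is exactly the derivative of the optimal value with respect to b, the sensitivity interpretation the abstract mentions.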
Sensitivity Analysis in (Degenerate) Quadratic Programming
 DELFT UNIVERSITY OF TECHNOLOGY
, 1996
Cited by 7 (2 self)
In this paper we deal with sensitivity analysis in convex quadratic programming, without making assumptions on nondegeneracy, strict convexity of the objective function, or the existence of a strictly complementary solution. We show that the optimal value as a function of a right-hand side element (or an element of the linear part of the objective) is piecewise quadratic, where the pieces can be characterized by maximal complementary solutions and tripartitions. Further, we investigate differentiability of this function. A new algorithm to compute the optimal value function is proposed. Finally, we discuss the advantages of this approach when applied to mean-variance portfolio models.
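The piecewise quadratic behaviour of the optimal value function can be seen on a minimal hypothetical example (not from the paper): the one-dimensional QP v(b) = min { 0.5*x^2 : x >= b }, whose optimal value has two quadratic pieces joined at b = 0.

```python
import numpy as np

# Hypothetical one-dimensional QP: v(b) = min { 0.5*x**2 : x >= b }.
# The minimizer is x = max(b, 0), so
#   v(b) = 0        for b <= 0   (constraint inactive)
#   v(b) = 0.5*b**2 for b >  0   (constraint active)
# Two quadratic pieces; v is differentiable at the breakpoint b = 0
# but not twice differentiable there.
def v(b):
    x = max(b, 0.0)          # closed-form minimizer of this toy QP
    return 0.5 * x**2

bs = np.linspace(-2.0, 2.0, 9)
vals = [v(b) for b in bs]    # sample the piecewise quadratic value function
```

This is the simplest instance of the phenomenon the abstract describes: varying a right-hand side element changes which constraints are active, and each activity pattern contributes one quadratic piece.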
Strong Convexity and Directional Derivatives of Marginal Values in Two-Stage Stochastic Programming
, 1995
Cited by 2 (1 self)
Two-stage stochastic programs with random right-hand side are considered. Verifiable sufficient conditions for the existence of second-order directional derivatives of marginal values are presented. The central role of the strong convexity of the expected recourse function as well as of a Lipschitz stability result for optimal sets is emphasized.

Keywords. Two-stage stochastic programs, directional derivatives of marginal values, strong convexity, sensitivity analysis

1991 Mathematics Subject Classification: 90C15, 90C31

1 Introduction

Consider the following two-stage stochastic program
\[
\min \{ g(x) + Q_\mu(Ax) : x \in C \} \tag{1.1}
\]
\[
Q_\mu(\chi) = \int_{\mathbb{R}^s} \tilde{Q}(z - \chi)\, \mu(dz) \tag{1.2}
\]
\[
\tilde{Q}(t) = \min \{ q^\top y : Wy = t,\ y \ge 0 \} \tag{1.3}
\]
where $g : \mathbb{R}^m \to \mathbb{R}$ is a convex function, $C \subset \mathbb{R}^m$ is a nonempty closed convex set and $\mu$ is a Borel probability measure on $\mathbb{R}^s$. Furthermore, $q \in \mathbb{R}^m$ and $A \in L(\mathbb{R}^m, \mathbb{R}^s)$, $W \in L(\mathbb{R}^m, \mathbb{R}^s)$. To have (1.1)-(1.3) well-defined we assume This researc...
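The structure (1.1)-(1.3) can be sketched numerically: the recourse function (1.3) is a linear program in y, and the expected recourse (1.2) can be estimated by Monte Carlo. The instance below is entirely hypothetical (scalar right-hand side, two recourse variables); it only illustrates the shape of the model, not the paper's analysis.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: recourse variables y = (y1, y2) with
# W y = y1 - y2 = t and costs q = (1, 2).
q = np.array([1.0, 2.0])      # recourse cost vector
W = np.array([[1.0, -1.0]])   # recourse matrix

def recourse(t):
    """Qtilde(t) = min { q @ y : W y = t, y >= 0 }, solved as an LP."""
    res = linprog(q, A_eq=W, b_eq=[t], bounds=[(0, None)] * 2,
                  method="highs")
    return res.fun

# Monte Carlo estimate of the expected recourse Q_mu(chi) = E[Qtilde(z - chi)]
rng = np.random.default_rng(0)
z = rng.normal(size=200)      # samples of the random right-hand side
chi = 0.0
Q_mu = np.mean([recourse(zi - chi) for zi in z])
```

Here surplus (t > 0) costs 1 per unit and shortfall (t < 0) costs 2 per unit, so the expected recourse Q_mu is a smooth convex function of chi — convexity and smoothness of exactly this expectation are what the paper's strong-convexity analysis builds on.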
On Concepts of Directional Differentiability
Abstract. Various definitions of directional derivatives in topological vector spaces are compared. Directional derivatives in the sense of Gâteaux, Fréchet, and Hadamard are singled out from the general framework of σ-directional differentiability. It is pointed out that, in the case of finite-dimensional spaces and locally Lipschitz mappings, all these concepts of directional differentiability are equivalent. The chain rule for directional derivatives of a composite mapping is discussed. Key Words. Directional derivatives, positively homogeneous mapping, locally Lipschitz mapping, chain rule.
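The finite-dimensional, locally Lipschitz case mentioned in the abstract can be illustrated with f(x) = |x|, which is not differentiable at 0 but has directional derivatives f'(0; d) = |d| in every direction, and for which the Gâteaux, Fréchet, and Hadamard notions coincide. A minimal numerical sketch (all names illustrative):

```python
# Directional derivative of f at x in direction d:
#   f'(x; d) = lim_{t -> 0+} (f(x + t*d) - f(x)) / t
# approximated by a one-sided difference quotient with small t.
def dir_deriv(f, x, d, t=1e-8):
    return (f(x + t * d) - f(x)) / t

f = abs  # locally Lipschitz, nondifferentiable at 0

# At x = 0 the directional derivative in direction d is |d|,
# even though no (two-sided) derivative exists there.
approx = {d: dir_deriv(f, 0.0, d) for d in (1.0, -3.0, 0.5)}
```

Note that dir_deriv(f, 0.0, d) is positively homogeneous in d (doubling d doubles the value) but not linear — the hallmark of a directional derivative that is not a classical derivative.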
SPACE MAPPING FOR OPTIMAL CONTROL OF PARTIAL DIFFERENTIAL EQUATIONS
Abstract. Solving optimal control problems for nonlinear partial differential equations represents a significant numerical challenge due to the tremendous size and possible model difficulties (e.g., nonlinearities) of the discretized problems. In this paper, a novel space-mapping technique for solving the aforementioned problem class is introduced, analyzed, and tested. The advantage of the space-mapping approach compared to classical multigrid techniques lies in the flexibility of not only using grid coarsening as a model reduction but also employing (perhaps less nonlinear) surrogates. The space mapping is based on a regularization approach which, in contrast to other space-mapping techniques, results in a smooth mapping and, thus, avoids certain irregular situations at kinks. A new Broyden update formula for the sensitivities of the space map is also introduced. This quasi-Newton update is motivated by the usual secant condition combined with a secant condition resulting from differentiating the space-mapping surrogate. The overall algorithm employs a trust-region framework for global convergence. Issues involved in the computations are highlighted, and a report on a few illustrative numerical tests is given.
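The secant condition underlying the quasi-Newton update mentioned in the abstract can be sketched with the classical rank-one Broyden update (the paper's own formula additionally incorporates a second secant condition from the surrogate, which is not reproduced here).

```python
import numpy as np

# Classical "good" Broyden update for an approximate Jacobian B of a map F:
# given a step s and the observed difference y = F(x + s) - F(x),
#   B_new = B + ((y - B s) s^T) / (s^T s),
# a rank-one correction enforcing the secant condition B_new s = y.
def broyden_update(B, s, y):
    s = s.reshape(-1, 1)
    y = y.reshape(-1, 1)
    return B + (y - B @ s) @ s.T / float(s.T @ s)

# Toy check on a linear map F(x) = J x: after one update from the step s,
# the approximation reproduces the action of J along s exactly.
J = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.eye(2)                 # initial Jacobian guess
s = np.array([1.0, 1.0])      # step taken
y = J @ s                     # observed change F(x + s) - F(x)
B_new = broyden_update(B, s, y)
```

The update changes B only in the direction of s, which is why quasi-Newton sensitivity approximations of this kind are cheap: no extra evaluations of the (expensive, PDE-constrained) fine model are needed beyond the iterates themselves.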