Results 1–10 of 15
LAGRANGE MULTIPLIERS AND OPTIMALITY
, 1993
Cited by 88 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a free-standing exposition of basic nonsmooth analysis as motivated by and applied to this subject.
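The classical view described above, writing first-order optimality conditions as a system of equations, can be illustrated on a toy problem that is not from the paper: the equality-constrained program min x² + y² subject to x + y = 1. The sketch below assembles the stationarity and feasibility conditions into a linear system and solves it with a small hand-rolled elimination routine; all data are illustrative.

```python
# Toy illustration (not from the paper): for
#   minimize f(x, y) = x^2 + y^2   subject to  g(x, y) = x + y - 1 = 0,
# the first-order (Lagrange) conditions grad f = lam * grad g, g = 0 give
#   2x     - lam = 0
#   2y     - lam = 0
#   x + y        = 1
# which we solve with a tiny Gaussian elimination.

def solve_linear(A, b):
    """Solve A v = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        v[r] = (M[r][n] - sum(M[r][c] * v[c] for c in range(r + 1, n))) / M[r][r]
    return v

A = [[2.0, 0.0, -1.0],
     [0.0, 2.0, -1.0],
     [1.0, 1.0,  0.0]]
b = [0.0, 0.0, 1.0]
x, y, lam = solve_linear(A, b)
print(x, y, lam)   # x = y = 0.5, lam = 1.0
```

The point of the surveyed theory is precisely that this clean equation-solving picture breaks down once constraints are replaced by nonsmooth penalty expressions.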
Primal-dual projected gradient algorithms for extended linear-quadratic programming
 SIAM J. Optimization
Cited by 16 (2 self)
Abstract. Many large-scale problems in dynamic and stochastic optimization can be modeled with extended linear-quadratic programming, which admits penalty terms and treats them through duality. In general the objective functions in such problems are only piecewise smooth and must be minimized or maximized relative to polyhedral sets of high dimensionality. This paper proposes a new class of numerical methods for “fully quadratic” problems within this framework, which exhibit second-order nonsmoothness. These methods, combining the idea of finite-envelope representation with that of modified gradient projection, work with local structure in the primal and dual problems simultaneously, feeding information back and forth to trigger advantageous restarts. Versions resembling steepest descent methods and conjugate gradient methods are presented. When a positive threshold of ε-optimality is specified, both methods converge in a finite number of iterations. With threshold 0, it is shown under mild assumptions that the steepest descent version converges linearly, while the conjugate gradient version still has a finite termination property. The algorithms are designed to exploit features of primal and dual decomposability of the Lagrangian, which are typically available in a large-scale setting, and they are open to considerable parallelization.
Key words: extended linear-quadratic programming, large-scale numerical optimization, finite-envelope representation, gradient projection, primal-dual methods, steepest descent methods, conjugate gradient methods.
AMS(MOS) subject classifications: 65K05, 65K10, 90C20
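The gradient projection ingredient mentioned in the abstract can be sketched on a far simpler problem than the extended linear-quadratic setting: a box-constrained quadratic program. The example below is illustrative only (its data and step size are not from the paper); each step moves along the negative gradient and projects back onto the feasible box.

```python
# Gradient projection on a box-constrained QP (an illustrative toy):
#   minimize q(x) = 1/2 x^T Q x - b^T x   subject to  0 <= x_i <= 1.

def clip(v, lo, hi):
    return [min(max(vi, lo), hi) for vi in v]

def grad(Q, b, x):
    return [sum(Q[i][j] * x[j] for j in range(len(x))) - b[i]
            for i in range(len(x))]

def projected_gradient(Q, b, x, step, iters=100, tol=1e-10):
    for _ in range(iters):
        g = grad(Q, b, x)
        x_new = clip([x[i] - step * g[i] for i in range(len(x))], 0.0, 1.0)
        if max(abs(x_new[i] - x[i]) for i in range(len(x))) < tol:
            return x_new
        x = x_new
    return x

Q = [[2.0, 0.0], [0.0, 2.0]]
b = [2.0, -1.0]                       # unconstrained minimizer is (1, -0.5)
x = projected_gradient(Q, b, [0.0, 0.0], step=0.5)
print(x)                              # projection pushes it to (1.0, 0.0)
```

The paper's methods are considerably richer, coupling such projections in the primal and dual problems simultaneously through finite-envelope representation.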
Newton's Method for Quadratic Stochastic Programs with Recourse
 Journal of Computational and Applied Mathematics
, 1995
Cited by 10 (8 self)
Quadratic stochastic programs (QSP) with recourse can be formulated as nonlinear convex programming problems. By attaching a Lagrange multiplier vector to the nonlinear convex program, a QSP is written as a system of nonsmooth equations. A Newton-like method for solving the QSP is proposed and global convergence and local superlinear convergence of the method are established. The current method is more general than previous methods which were developed for box-diagonal and fully quadratic QSP. Numerical experiments are given to demonstrate the efficiency of the algorithm, and to compare the use of Monte Carlo rules and lattice rules for multiple integration in the algorithm.
Keywords: Newton's method, quadratic stochastic programs, nonsmooth equations.
Short title: Newton's method for stochastic programs.
This work is supported by the Australian Research Council.
1. Introduction. Let P ∈ R^{n×n} be symmetric positive semidefinite and H ∈ R^{m×m} be symmetric positive ...
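The idea of applying a Newton-like iteration to a nonsmooth equation can be shown on a toy scalar example (illustrative only; the paper's setting is a system arising from a QSP). Where the residual has a kink, the iteration uses one element of the generalized derivative in place of the classical derivative.

```python
# Newton-like iteration on a toy nonsmooth equation:
#   F(x) = x + max(0, x) - 1 = 0,  with root x* = 0.5.
# At the kink x = 0 we pick one element of the generalized derivative.

def F(x):
    return x + max(0.0, x) - 1.0

def dF(x):
    # one element of the generalized (Clarke) derivative of F
    return 1.0 + (1.0 if x > 0 else 0.0)

def newton_nonsmooth(x, iters=50, tol=1e-12):
    for _ in range(iters):
        fx = F(x)
        if abs(fx) < tol:
            break
        x -= fx / dF(x)
    return x

root = newton_nonsmooth(-1.0)
print(root)   # 0.5
```

Starting from x = -1 the iteration reaches the root in two steps; the convergence theory for such schemes in the QSP setting is what the paper establishes.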
Computational schemes for large-scale problems in extended linear-quadratic programming
 Mathematical Programming
, 1990
Cited by 8 (1 self)
Abstract. Numerical approaches are developed for solving large-scale problems of extended linear-quadratic programming that exhibit Lagrangian separability in both primal and dual variables simultaneously. Such problems are kin to large-scale linear complementarity models as derived from applications of variational inequalities, and they arise from general models in multistage stochastic programming and discrete-time optimal control. Because their objective functions are merely piecewise linear-quadratic, due to the presence of penalty terms, they do not fit a conventional quadratic programming framework. They have potentially advantageous features, however, which so far have not been exploited in solution procedures. These features are laid out and analyzed for their computational potential. In particular, a new class of algorithms, called finite-envelope methods, is described that does take advantage of the structure. Such methods reduce the solution of a high-dimensional extended linear-quadratic program to that of a sequence of low-dimensional ordinary quadratic programs.
Global and Superlinear Convergence of Inexact Uzawa Methods for Saddle Point Problems with Nondifferentiable Mappings
 SIAM J. Numer. Anal
, 1996
Cited by 6 (2 self)
This paper investigates inexact Uzawa methods for nonlinear saddle point problems. We prove that the inexact Uzawa method converges globally and superlinearly even if the derivative of the nonlinear mapping does not exist. We show that the Newton-type decomposition method for saddle point problems is a special case of a Newton-Uzawa method. We discuss applications of inexact Uzawa methods to separable convex programming problems and coupling of finite elements/boundary elements for nonlinear interface problems.
Key words: saddle point, nonsmooth, Uzawa, Newton, inexact, inner/outer, convergence.
AMS subject classifications: 65H10.
Abbreviated title: Inexact Uzawa Method.
This work is supported by the Australian Research Council.
1. Introduction. We consider the nonlinear saddle point problem

    H(x, y) = ( F(x) + B^T y − p,  Bx − Cy − q ) = 0,    (1.1)

where B is an m × n matrix, C is an m × m symmetric positive semidefinite matrix, p is a vector in R^n ...
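For orientation, the classical (exact) Uzawa iteration can be sketched on a tiny linear instance of the saddle point system above, taking F(x) = Ax. All data and the dual step size below are illustrative, and the sketch is the textbook scheme rather than the inexact variants the paper analyzes.

```python
# Classical Uzawa iteration on a tiny linear saddle point system:
#   A x + B^T y = p
#   B x - C y   = q
# Each sweep solves the x-block exactly, then takes a dual ascent step in y.

A_inv = 0.5            # A = 2*I, so A^{-1} = 0.5*I
B = [1.0, 1.0]         # 1 x 2 coupling matrix
C = 1.0                # 1 x 1, positive semidefinite
p = [2.0, 2.0]
q = 0.0
tau = 0.5              # dual step size (must be small enough to converge)

y = 0.0
for _ in range(100):
    # x-update: x = A^{-1} (p - B^T y)
    x = [A_inv * (p[i] - B[i] * y) for i in range(2)]
    # y-update: y += tau * (B x - C y - q)
    r = sum(B[i] * x[i] for i in range(2)) - C * y - q
    if abs(r) < 1e-12:
        break
    y += tau * r

print(x, y)   # converges to x = [0.5, 0.5], y = 1.0
```

In the inexact methods studied in the paper, the x-block solve is replaced by an approximate (inner) iteration, and F may be nonsmooth.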
On Preconditioned Uzawa Methods and SOR Methods for Saddle Point Problems
 J. Comput. Appl. Math
, 1998
Cited by 6 (0 self)
This paper studies the convergence of a preconditioned inexact Uzawa method for nondifferentiable saddle point problems. The SOR-Newton method and the SOR-BFGS method are special cases of this method. We relax the Bramble-Pasciak-Vassilev condition on preconditioners for convergence of the inexact Uzawa method for linear saddle point problems. The relaxed condition is used to determine the relaxation parameters in the SOR-Newton method and the SOR-BFGS method. Furthermore, we study global convergence of the multistep inexact Uzawa method for nondifferentiable saddle point problems.
Key words: saddle point problem, nonsmooth equation, Uzawa method, preconditioner, SOR method.
Abbreviated title: Uzawa method and SOR method.
AMS Subject Classification: 65H10.
1. Introduction. Saddle point problems arise, for example, in the mixed finite element discretization of the Stokes equations, coupled finite element/boundary element computations for interface problems, and the minimization of a ...
A Stochastic Newton Method for Stochastic Quadratic Programs with Recourse
 Applied Mathematics Preprint AM94/9, School of Mathematics, the University of New South
, 1995
Cited by 5 (2 self)
In this paper, we combine the inexact Newton method with the stochastic decomposition method and present a stochastic Newton method for solving the two-stage stochastic program. We prove that the new method is superlinearly convergent with probability one, with a probabilistic error bound h(N_k). The error bound h(N_k) at least has the same order as ||y^k − y^*|| as k → ∞. In the algorithm, we can control the error bound h(N_k) such that h(N_k) = o(||y^k − y^*||).
Keywords: stochastic Newton method, stochastic quadratic programming.
1. Introduction. Let P ∈ R^{n×n} be symmetric positive semidefinite and H ∈ R^{m×m} be symmetric positive definite. We consider the two-stage stochastic quadratic program with fixed recourse:

    minimize  ℓ(x) = (1/2) x^T P x + c^T x + φ(x),   x ∈ R^n,
    subject to  Ax ≤ b,    (1.1)

where

    φ(x) = ∫_{R^m} ψ(ω − Tx) ρ(ω) dω,

and

    ψ(ω − Tx) = maximize  −(1/2) z^T H z + z^T (ω − Tx),   z ∈ R^m,
                subject to  Wz ≤ q, ...
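The structure of the recourse term φ(x) can be sketched in one dimension with a sample-average (Monte Carlo) estimate. This is an illustrative toy, not the paper's algorithm: with H reduced to a scalar h > 0 and the second-stage constraint reduced to a box 0 ≤ z ≤ z_max, the inner maximization has a closed-form solution, so ψ can be evaluated directly; the sample values below are stand-ins for draws of ω.

```python
# Sample-average sketch of the recourse term phi(x) in one dimension.
# With H = h > 0 and second-stage constraint 0 <= z <= z_max, the maximizer of
#   -1/2 h z^2 + z t,   where t = omega - T x,
# is z* = clip(t / h, 0, z_max), so psi(t) is available in closed form.

def psi(t, h=1.0, z_max=2.0):
    z = min(max(t / h, 0.0), z_max)   # closed-form second-stage solution
    return -0.5 * h * z * z + z * t

def phi_estimate(x, T, samples):
    # sample-average approximation of E[psi(omega - T x)]
    return sum(psi(w - T * x) for w in samples) / len(samples)

samples = [1.0, 3.0]                  # illustrative stand-ins for draws of omega
val = phi_estimate(0.0, T=1.0, samples=samples)
print(val)   # (psi(1) + psi(3)) / 2 = (0.5 + 4.0) / 2 = 2.25
```

The paper's contribution lies in coupling such sampled approximations with an inexact Newton iteration and controlling the resulting probabilistic error bound h(N_k).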
Large-scale extended linear-quadratic programming and multistage optimization
 Advances in Numerical Partial Differential Equations and Optimization, chapter 15
, 1991
Cited by 4 (1 self)
Abstract. Optimization problems in discrete time can be modeled more flexibly by extended linear-quadratic programming than by traditional linear or quadratic programming, because penalties and other expressions that may substitute for constraints can readily be incorporated and dualized. At the same time, dynamics can be written with state vectors as in dynamic programming and optimal control. This suggests new primal-dual approaches to solving multistage problems. The special setting for such numerical methods is described. New results are presented on the calculation of gradients of the primal and dual objective functions and on the convergence effects of strict quadratic regularization.
A Variant of the Topkis-Veinott Method for Solving Inequality Constrained Optimization Problems
 J. Appl. Math. Optim
, 1997
Cited by 4 (0 self)
In this paper, we give a variant of the Topkis-Veinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm is shown to be globally convergent in the sense that every accumulation point of the sequence generated by the algorithm is a Fritz John point of the problem. We introduce a Fritz John (FJ) function, an FJ1 strong second-order sufficiency condition (FJ1-SSOSC) and an FJ2 strong second-order sufficiency condition (FJ2-SSOSC), and then show, without any constraint qualification (CQ), that (i) if an FJ point z satisfies the FJ1-SSOSC, then there exists a neighborhood N(z) of z such that for any FJ point y ∈ N(z) \ {z}, f_0(y) ≠ f_0(z), where f_0 is the objective function of the problem; (ii) if an FJ point z satisfies the FJ2-SSOSC, then z is a strict local minimum of the problem. The resu...