Results 1–10 of 28
Engineering and economic applications of complementarity problems
SIAM Review, 1997
Cited by 127 (24 self)
This paper gives an extensive documentation of applications of finite-dimensional nonlinear complementarity problems in engineering and equilibrium modeling. For most applications, we describe the problem briefly, state the defining equations of the model, and give functional expressions for the complementarity formulations. The goal of this documentation is threefold: (i) to summarize the essential applications of the nonlinear complementarity problem known to date, (ii) to provide a basis for the continued research on the nonlinear complementarity problem, and (iii) to supply a broad collection of realistic complementarity problems for use in algorithmic experimentation and other studies.
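To fix ideas about the problem class the survey documents, here is a minimal sketch (mine, not the paper's): the nonlinear complementarity problem asks for x ≥ 0 with F(x) ≥ 0 and x·F(x) = 0, and a projected fixed-point iteration solves a small monotone instance. The step size and the example mapping are illustrative choices.

```python
# Sketch (not from the paper): solve the NCP  x >= 0, F(x) >= 0, x.F(x) = 0
# by the projected fixed-point iteration  x <- max(0, x - a*F(x)),
# which converges for this well-behaved (monotone affine) example.

def solve_ncp(F, x0, a=0.1, iters=5000):
    """Projected fixed-point iteration for the NCP defined by F."""
    x = list(x0)
    for _ in range(iters):
        fx = F(x)
        x = [max(0.0, xi - a * fi) for xi, fi in zip(x, fx)]
    return x

# Toy monotone instance: F(x) = M x + q with M symmetric positive definite.
def F(x):
    M = [[2.0, 1.0], [1.0, 2.0]]
    q = [-1.0, 1.0]
    return [sum(M[i][j] * x[j] for j in range(2)) + q[i] for i in range(2)]

x = solve_ncp(F, [0.0, 0.0])
# Complementarity at the result: x = (0.5, 0), F(x) = (0, 1.5), x.F(x) = 0
```

The iterate with a zero component illustrates the defining feature of complementarity: wherever x_i > 0 the residual F_i vanishes, and wherever F_i > 0 the variable x_i is pinned at zero.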
Lagrange Multipliers and Optimality
1993
Cited by 89 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a freestanding exposition of basic nonsmooth analysis as motivated by and applied to this subject.
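For reference, the first-order optimality system the abstract alludes to is the textbook Karush-Kuhn-Tucker system (standard notation, not the paper's):

```latex
% KKT conditions for  min f(x)  subject to  g_i(x) <= 0,  i = 1, ..., m
\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) = 0
  \qquad \text{(stationarity)}
\lambda_i \ge 0, \quad g_i(x^*) \le 0, \quad \lambda_i\, g_i(x^*) = 0
  \qquad \text{(feasibility and complementary slackness)}
```

The complementary slackness conditions are exactly the "black-and-white" structure the abstract describes: each constraint is either inactive with zero multiplier or active with a possibly positive one.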
A Modified Forward-Backward Splitting Method for Maximal Monotone Mappings
SIAM J. Control Optim., 1998
Cited by 49 (0 self)
We consider the forward-backward splitting method for finding a zero of the sum of two maximal monotone mappings. This method is known to converge when the inverse of the forward mapping is strongly monotone. We propose a modification to this method, in the spirit of the extragradient method for monotone variational inequalities, under which the method converges assuming only that the forward mapping is monotone and (Lipschitz) continuous on some closed convex subset of its domain. The modification entails an additional forward step and a projection step at each iteration. Applications of the modified method to decomposition in convex programming and monotone variational inequalities are discussed.
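To make the modification concrete, here is a hedged sketch in the special case of a monotone variational inequality over the nonnegative orthant: a forward-backward (projected gradient) step, followed by the extra forward step and projection the abstract describes. The test mapping, step size, and iteration count are illustrative assumptions, not taken from the paper.

```python
# Sketch of the modified forward-backward iteration for a variational
# inequality over X = nonnegative orthant.  Step size alpha must be below
# 1/L for L-Lipschitz F; alpha=0.3 is an assumed value for this example.

def project(x):
    """Projection onto the nonnegative orthant."""
    return [max(0.0, xi) for xi in x]

def modified_forward_backward(F, x0, alpha=0.3, iters=5000):
    x = list(x0)
    for _ in range(iters):
        fx = F(x)
        # forward-backward step
        y = project([xi - alpha * fi for xi, fi in zip(x, fx)])
        fy = F(y)
        # additional forward step plus projection (the modification)
        x = project([yi - alpha * (fyi - fxi)
                     for yi, fyi, fxi in zip(y, fy, fx)])
    return x

# Monotone but NOT strongly monotone test mapping F(x) = Mx + q with
# skew-symmetric M; the plain projected-gradient iteration circles here.
def F(x):
    return [x[1] - 1.0, -x[0] + 1.0]

x = modified_forward_backward(F, [2.0, 2.0])   # converges to (1, 1)
```

The skew-symmetric example is the standard stress test: monotonicity holds but strong monotonicity fails, which is exactly the regime the modification is designed for.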
Interior Point Methods for Optimal Control of Discrete-Time Systems
Journal of Optimization Theory and Applications, 1993
Cited by 31 (5 self)
We show that recently developed interior point methods for quadratic programming and linear complementarity problems can be put to use in solving discrete-time optimal control problems with general pointwise constraints on states and controls. We describe interior point algorithms for a discrete-time linear-quadratic regulator problem with mixed state/control constraints, and show how they can be efficiently incorporated into an inexact sequential quadratic programming algorithm for nonlinear problems. The key to the efficiency of the interior-point method is the narrow-banded structure of the coefficient matrix that is factorized at each iteration. Key words: interior point algorithms, optimal control, banded linear systems. The introduction poses the optimal control problem of an initial value ordinary differential equation, with Bolza objectives and mixed constraints:

min_{x,u} ∫_0^T L(x(t), u(t), t) dt + φ_f(x(T))   subject to   ẋ(t) = f(x(t), u(t), t),   x(0) = x_init,   (1.1)   g(x(t), u(...
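The efficiency claim rests on factorizing a narrow-banded matrix at each iteration. As a simple stand-in (the paper's KKT matrices have a wider band; this shows only the tridiagonal special case), the Thomas algorithm below solves a banded system in O(n) rather than O(n^3) time:

```python
# Illustrative only: banded Gaussian elimination in its simplest
# (tridiagonal) form.  The same "narrow band => linear-time factorization"
# idea underlies the abstract's efficiency argument.

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d (lists of equal length n;
    a[0] and c[-1] are unused)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                 # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Classic -u'' discretization: [[2,-1,0],[-1,2,-1],[0,-1,2]] x = [1,0,1]
x = thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0],
           [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0])   # x ~ [1, 1, 1]
```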
Modified Projection-Type Methods for Monotone Variational Inequalities
SIAM Journal on Control and Optimization, 1996
Cited by 25 (9 self)
We propose new methods for solving the variational inequality problem where the underlying function F is monotone. These methods may be viewed as projection-type methods in which the projection direction is modified by a strongly monotone mapping of the form I − αF or, if F is affine with underlying matrix M, of the form I + αM^T, with α ∈ (0, 1). We show that these methods are globally convergent and, if in addition a certain error bound based on the natural residual holds locally, the convergence is linear. Computational experience with the new methods is also reported. Key words: monotone variational inequalities, projection-type methods, error bound, linear convergence. AMS subject classifications: 49M45, 90C25, 90C33. The introduction states the monotone variational inequality problem of finding an x* ∈ X satisfying

F(x*)^T (x − x*) ≥ 0   for all x ∈ X,   (1)

where X is a closed convex set in R^n and F is a monotone and continuous function from R^n to ...
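One plausible instantiation of the affine case, sketched under assumptions: premultiply the residual F(x) = Mx + q by the strongly monotone map I + αM^T before projecting. The outer step size rho, the toy data, and the exact update rule are my illustrative choices, not the paper's.

```python
# Hedged sketch of a modified projection iteration for the affine VI
# F(x) = Mx + q over X = nonnegative orthant: the projection direction
# is (I + a*M^T) F(x) with a in (0, 1), as the abstract describes.

def project(x):
    return [max(0.0, xi) for xi in x]   # projection onto the orthant

def modified_projection(M, q, x0, a=0.5, rho=0.3, iters=5000):
    n = len(q)
    x = list(x0)
    for _ in range(iters):
        f = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
        # modified direction d = (I + a*M^T) f
        d = [f[i] + a * sum(M[j][i] * f[j] for j in range(n))
             for i in range(n)]
        x = project([x[i] - rho * d[i] for i in range(n)])
    return x

# Skew-symmetric M: monotone but not strongly monotone, so the plain
# projection method x <- P(x - rho*F(x)) fails to converge here.
M = [[0.0, 1.0], [-1.0, 0.0]]
q = [-2.0, 1.0]
x = modified_projection(M, q, [0.0, 0.0])   # VI solution is (1, 2)
```

For this skew M, premultiplying by I + αM^T adds the strongly monotone term αI to the effective operator, which is what restores contraction toward the solution.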
Markowitz revisited: mean-variance models in financial portfolio analysis
SIAM Rev., 2001
Cited by 21 (1 self)
Mean-variance portfolio analysis provided the first quantitative treatment of the trade-off between profit and risk. We describe in detail the interplay between objective and constraints in a number of single-period variants, including semivariance models. Particular emphasis is laid on avoiding the penalization of overperformance. The results are then used as building blocks in the development and theoretical analysis of multi-period models based on scenario trees. A key property is the possibility of removing surplus money in future decisions, yielding approximate downside risk minimization.
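A minimal single-period illustration of the mean-variance trade-off (the numbers and the brute-force solver are mine, not the paper's): minimize portfolio variance w·Σw less a risk-tolerance multiple t of the mean w·μ, subject to the budget constraint that weights sum to one.

```python
# Toy two-asset mean-variance model with assumed data; the trade-off
# objective  variance(w) - t * mean(w)  is minimized by grid search
# over the budget line w = (a, 1 - a), 0 <= a <= 1 (no short selling).

mu = [0.08, 0.12]                      # expected returns (assumed)
sigma = [[0.04, 0.01], [0.01, 0.09]]   # covariance matrix (assumed)

def mean(w):
    return sum(wi * mi for wi, mi in zip(w, mu))

def variance(w):
    return sum(w[i] * sigma[i][j] * w[j]
               for i in range(2) for j in range(2))

def frontier_weight(t, steps=100000):
    """Grid-search the budget line for the best risk/return trade-off."""
    best, best_val = None, float("inf")
    for k in range(steps + 1):
        a = k / steps
        w = [a, 1.0 - a]
        val = variance(w) - t * mean(w)
        if val < best_val:
            best, best_val = w, val
    return best

w = frontier_weight(t=0.0)   # t = 0: pure minimum-variance portfolio
```

With t = 0 the minimizer is the closed-form minimum-variance weight a = 8/11 for this data; raising t traces out the efficient frontier toward the higher-return asset.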
Primal-dual projected gradient algorithms for extended linear-quadratic programming
 SIAM J. Optimization
Cited by 16 (2 self)
Many large-scale problems in dynamic and stochastic optimization can be modeled with extended linear-quadratic programming, which admits penalty terms and treats them through duality. In general the objective functions in such problems are only piecewise smooth and must be minimized or maximized relative to polyhedral sets of high dimensionality. This paper proposes a new class of numerical methods for "fully quadratic" problems within this framework, which exhibit second-order nonsmoothness. These methods, combining the idea of finite-envelope representation with that of modified gradient projection, work with local structure in the primal and dual problems simultaneously, feeding information back and forth to trigger advantageous restarts. Versions resembling steepest descent methods and conjugate gradient methods are presented. When a positive threshold of ε-optimality is specified, both methods converge in a finite number of iterations. With threshold 0, it is shown under mild assumptions that the steepest descent version converges linearly, while the conjugate gradient version still has a finite termination property. The algorithms are designed to exploit features of primal and dual decomposability of the Lagrangian, which are typically available in a large-scale setting, and they are open to considerable parallelization. Key words: extended linear-quadratic programming, large-scale numerical optimization, finite-envelope representation, gradient projection, primal-dual methods, steepest descent methods, conjugate gradient methods. AMS(MOS) subject classifications: 65K05, 65K10, 90C20.
Hierarchical sparsity in multistage convex stochastic programs
in Uryasev & P.M. Pardalos, Stochastic Optimization: Algorithms and Applications, 2000
Cited by 11 (3 self)
Interior point methods for multistage stochastic programs involve KKT systems with a characteristic global block structure induced by dynamic equations on the scenario tree. We generalize the recursive solution algorithm proposed in an earlier paper so that its linear complexity extends to a refined tree-sparse KKT structure. Then we analyze how the block operations can be specialized to take advantage of problem-specific sparse substructures. Savings of memory and operations for a financial engineering application are discussed in detail.
Recursive Direct Algorithms for Multistage Stochastic Programs in Financial Engineering
1998
Cited by 8 (4 self)
Multistage stochastic programs can be seen as discrete optimal control problems with a characteristic dynamic structure induced by the scenario tree. To exploit that structure, we propose a highly efficient dynamic programming recursion for the computationally intensive task of KKT system solution within an interior point method. Test runs on a multistage portfolio selection problem demonstrate the performance of the algorithm. The introduction notes that multistage stochastic programs have become an important approach to modeling the process of decision making under uncertainty over a finite planning horizon. Important applications include, among others, financial engineering problems such as portfolio selection or asset and liability management. Multistage stochastic programs are considered a very hard class of optimization problems since their size can become excessively large even for coarse discretizations of the probability space and possibly the time horizon. Nevertheless, the characteristic ...
Operator Splitting Methods for Monotone Affine Variational Inequalities, with a Parallel Application to Optimal Control
INFORMS J. Comput., 1994
Cited by 8 (1 self)
This paper applies splitting techniques developed for set-valued maximal monotone operators to monotone affine variational inequalities, including as a special case the classical linear complementarity problem. We give a unified presentation of several splitting algorithms for monotone operators, and then apply these results to obtain two classes of algorithms for affine variational inequalities. The second class resembles classical matrix splitting, but has a novel "under-relaxation" step, and converges under more general conditions. In particular, the convergence proofs do not require the affine operator to be symmetric. We specialize our matrix-splitting-like method to discrete-time optimal control problems formulated as extended linear-quadratic programs in the manner advocated by Rockafellar and Wets. The result is a highly parallel algorithm, which we implement and test on the Connection Machine CM-5 computer family. The affine variational inequality problem is to find a vector x...
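For context, here is a sketch of classical projected Gauss-Seidel matrix splitting for the linear complementarity problem, the baseline that the paper's under-relaxed variant generalizes. This baseline needs extra assumptions such as a symmetric positive definite matrix, which is exactly the requirement the paper's convergence proofs remove; the data below are illustrative.

```python
# Baseline sketch (not the paper's method): projected Gauss-Seidel
# matrix splitting for the LCP  x >= 0,  Mx + q >= 0,  x.(Mx + q) = 0.
# Convergence of this classical scheme relies on, e.g., symmetric
# positive definite M.

def projected_gauss_seidel(M, q, iters=200):
    n = len(q)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # residual of row i using the latest values of the other x_j
            r = q[i] + sum(M[i][j] * x[j] for j in range(n) if j != i)
            x[i] = max(0.0, -r / M[i][i])
        # (the paper's variant would insert an under-relaxation step here)
    return x

M = [[2.0, 1.0], [1.0, 2.0]]   # symmetric positive definite (assumed data)
q = [-4.0, -3.0]
x = projected_gauss_seidel(M, q)   # solution: x = (5/3, 2/3), Mx + q = 0
```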