Results 1 - 2 of 2
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
, 2002
Abstract

Cited by 597 (24 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse. We discuss ...
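SNOPT itself is a commercial solver, but as a minimal sketch of the SQP approach this abstract describes, SciPy's SLSQP routine (a different SQP implementation) can solve a small problem with a smooth objective and an inequality constraint, supplying first derivatives as the abstract assumes. The toy problem below is illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize ||x||^2 subject to x0 + x1 >= 1.
# SLSQP is SciPy's SQP implementation (not SNOPT itself).
res = minimize(
    fun=lambda x: x[0] ** 2 + x[1] ** 2,
    x0=np.array([2.0, 0.0]),
    jac=lambda x: 2 * x,  # exact first derivatives, as SQP methods expect
    constraints=[{
        "type": "ineq",                              # g(x) >= 0 form
        "fun": lambda x: x[0] + x[1] - 1.0,
        "jac": lambda x: np.array([1.0, 1.0]),
    }],
    method="SLSQP",
)
print(res.x)  # both coordinates converge to 0.5
```

Each SQP iteration solves a quadratic programming subproblem built from a quadratic model of the Lagrangian and linearized constraints; SNOPT's contribution is doing this efficiently when constraint gradients are large and sparse.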
STABILIZING POLICY IMPROVEMENT FOR LARGE-SCALE INFINITE-HORIZON DYNAMIC PROGRAMMING ∗
Abstract
Today’s focus on sustainability within industry presents a modeling challenge that may be dealt with using dynamic programming over an infinite time horizon. However, the curse of dimensionality often results in a large number of states in these models. These large-scale models require numerically stable solution methods. The best method for infinite-horizon dynamic programming depends on both the optimality concept considered and the nature of transitions in the system. Previous research uses policy improvement to find strong present-value optimal policies within normalized systems. A critical step in policy improvement is the calculation of coefficients for the Laurent expansion of the present value for a given policy. Policy improvement uses these coefficients to search for improvements of that policy. The system of linear equations that yields the coefficients will often be rank-deficient, so a specialized solution method for large singular systems is essential. We focus on implementing policy improvement for systems with substochastic classes (a subset of normalized systems). We present methods for calculating the present-value Laurent expansion coefficients of a policy with substochastic classes. Classifying the states allows for a decomposition of the linear system into a number of smaller linear systems. Each smaller linear system has full rank or is rank-deficient by one. We show how to make repeated use of a rank-revealing LU factorization to solve the smaller systems. In the rank-deficient case, excellent numerical properties are obtained with an extension of Veinott’s method [Ann. Math. Statist., 40 (1969), pp. 1635–1660] for substochastic systems.
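The paper's Laurent-coefficient systems are more involved, but a minimal sketch of how a rank-deficient-by-one system arises in a stochastic setting, and how one extra equation restores full rank, is computing the stationary distribution of a transition matrix. The matrix below is a made-up example, not one from the paper:

```python
import numpy as np

# Hypothetical 3-state stochastic transition matrix P (rows sum to 1).
P = np.array([
    [0.5, 0.5, 0.0],
    [0.2, 0.6, 0.2],
    [0.0, 0.3, 0.7],
])

# The balance equations (P.T - I) @ pi = 0 are rank-deficient by one
# (the rows of P.T - I sum to zero), so we replace the last equation
# with the normalization sum(pi) = 1 to obtain a full-rank system.
A = P.T - np.eye(3)
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 1.0])

pi = np.linalg.solve(A, b)  # dense LU factorization internally
print(pi)  # [6/31, 15/31, 10/31] ≈ [0.1935, 0.4839, 0.3226]
```

The paper's approach is in the same spirit at scale: classify states to split one large singular system into smaller blocks, then use a rank-revealing LU factorization to detect and handle each block that is rank-deficient by one.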