Results 1–10 of 90
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization, 2002
Cited by 597 (24 self)
Abstract:
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse. We discuss
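As a rough illustration of the problem class this abstract targets, a smooth objective with one linear and one nonlinear inequality constraint can be solved with SciPy's SLSQP, a generic SQP implementation used here as a stand-in for SNOPT; the instance itself is invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

def grad_f(x):                   # first derivatives are assumed available
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.5)])

constraints = [
    # linear inequality: x0 - 2*x1 + 2 >= 0
    {"type": "ineq", "fun": lambda x: x[0] - 2.0 * x[1] + 2.0},
    # nonlinear inequality: x0^2 + x1^2 <= 4
    {"type": "ineq", "fun": lambda x: 4.0 - x[0] ** 2 - x[1] ** 2},
]

res = minimize(f, np.array([2.0, 0.0]), jac=grad_f,
               method="SLSQP", constraints=constraints)
```

An SQP method iterates by solving quadratic subproblems built from the supplied gradients, which is why `jac` is passed explicitly here.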
Constrained Optimization Approaches to Estimation of Structural Models, 2008
Cited by 76 (7 self)
Abstract:
Maximum likelihood estimation of structural models is often viewed as computationally difficult. This impression is due to a focus on the Nested Fixed-Point (NFXP) approach. We present a direct optimization approach to the general problem and show that it is significantly faster than the NFXP approach when applied to the canonical Zurcher bus-repair model. The NFXP approach is inappropriate for estimating games since it requires finding all Nash equilibria of a game for each parameter vector considered, a generally intractable computational problem. We formulate the problem of maximum likelihood estimation of games as a constrained optimization problem that is qualitatively no more difficult to solve than standard maximum likelihood problems. The direct optimization approach is also applicable to other structural estimation methods, such as methods of moments, and allows one to use computationally intensive bootstrap methods to calculate inference. The MPEC (mathematical program with equilibrium constraints) approach is also easily implemented in software with high-level interfaces. Furthermore, all the examples in this paper were computed using only free resources available on the web.
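The constrained (MPEC-style) formulation can be sketched on a toy model: instead of nesting a fixed-point computation inside the likelihood, the model quantity `p` is promoted to a decision variable and tied to the structural parameter `theta` by an explicit equality constraint. The binary-choice model below is an assumed stand-in, not the paper's bus-repair specification.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.random(500) < 0.7        # simulated binary outcomes

def neg_loglik(z):
    _, p = z
    return -(y.sum() * np.log(p) + (len(y) - y.sum()) * np.log(1.0 - p))

def consistency(z):              # model consistency: p must equal sigmoid(theta)
    theta, p = z
    return p - 1.0 / (1.0 + np.exp(-theta))

res = minimize(neg_loglik, np.array([0.0, 0.5]), method="SLSQP",
               constraints=[{"type": "eq", "fun": consistency}],
               bounds=[(None, None), (1e-6, 1.0 - 1e-6)])
theta_hat, p_hat = res.x
```

The solver maximizes the likelihood over `(theta, p)` jointly; the equality constraint plays the role the nested fixed-point solve would otherwise play.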
Scatter search and local NLP solvers: A multistart framework for global optimization, INFORMS Journal on Computing
doi:10.1287/ijoc.1060.0175
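A bare-bones multistart sketch in the spirit of this paper's framework: scatter starting points over the box and run a local NLP solver from each, keeping the best local optimum found. The multimodal Rastrigin function is an assumed test objective, and plain uniform sampling stands in for the paper's scatter-search point generation.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    return 10.0 * len(x) + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

rng = np.random.default_rng(42)
bounds = [(-5.12, 5.12)] * 2
starts = rng.uniform(-5.12, 5.12, size=(30, 2))

best = None
for x0 in starts:
    # local solve from each scattered start; keep the best result
    res = minimize(rastrigin, x0, method="L-BFGS-B", bounds=bounds)
    if best is None or res.fun < best.fun:
        best = res
```

Scatter search improves on this by spreading and combining start points adaptively, but the local-solver-per-start structure is the same.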
A Newton Barrier Method for Minimizing a Sum of Euclidean Norms Subject to Linear Equality Constraints, 1995
Cited by 27 (2 self)
Abstract:
An algorithm for minimizing a sum of Euclidean norms subject to linear equality constraints is described. The algorithm is based on a recently developed Newton barrier method for the unconstrained minimization of a sum of Euclidean norms (MSN). The linear equality constraints are handled using an exact L1 penalty function which is made smooth in the same way as the Euclidean norms. It is shown that the dual problem is to maximize a linear objective function subject to homogeneous linear equality constraints and quadratic inequalities. Hence the suggested method also solves such problems efficiently. In fact, such a problem from plastic collapse analysis motivated this work. Numerical results are presented for large sparse problems, demonstrating the extreme efficiency of the method.
Keywords: sum of norms, nonsmooth optimization, duality, Newton barrier method. AMS(MOS) subject classification: 65K05, 90C06, 90C25, 90C90.
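The smoothing idea the abstract mentions can be sketched on a small random instance (assumed data, not from the paper): each norm ||x_j|| is replaced by sqrt(||x_j||^2 + mu^2) so the objective is differentiable, and the equality constraints A x = b are then handled by a generic SQP solver, which stands in for the paper's specialized Newton barrier method.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
m, terms, d = 3, 5, 2            # equality constraints, norm terms, block size
A = rng.standard_normal((m, terms * d))
b = rng.standard_normal(m)
mu = 1e-3                        # smoothing parameter

def smoothed_sum_of_norms(x):
    blocks = x.reshape(terms, d)
    return np.sum(np.sqrt(np.sum(blocks**2, axis=1) + mu**2))

x0 = np.linalg.lstsq(A, b, rcond=None)[0]   # a feasible starting point
res = minimize(smoothed_sum_of_norms, x0, method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda x: A @ x - b}])
```

As mu shrinks, the smoothed solution approaches the minimizer of the true sum of norms; specialized methods like the paper's scale to much larger sparse instances than this generic setup.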
Computing Limit Loads by Minimizing a Sum of Norms, IFIP, 1994
Cited by 21 (3 self)
Abstract:
This paper treats the problem of computing the collapse state in limit analysis for a solid with a quadratic yield condition, such as, for example, the Mises condition. After discretization with the finite element method, using divergence-free elements for the plastic flow, the kinematic formulation turns into the problem of minimizing a sum of Euclidean vector norms, subject to a single linear constraint. This is a nonsmooth minimization problem, since many of the norms in the sum may vanish at the optimal point. However, efficient solution algorithms for this particular convex optimization problem have recently been developed. The method is applied to test problems in limit analysis in two different plane models: plane strain and plates. In the first case more than 80 percent of the terms in the sum are zero in the optimal solution, causing severe ill-conditioning. In the last case all terms are nonzero. In both cases the algorithm works very well, and problems are solved which are l...
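A tiny analogue (assumed data, not the finite element model) shows the vanishing-norms effect the abstract highlights: minimizing a sum of Euclidean norms of 2-vectors subject to a single linear constraint puts all weight on the block whose constraint coefficient has the largest norm, so the remaining norm terms collapse to zero at the optimum.

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([[3.0, 0.0],        # this block's coefficient has the largest norm
              [1.0, 1.0],
              [0.5, -0.5]])
terms, d = a.shape
mu = 1e-4                        # small smoothing so the objective is smooth

def sum_of_norms(x):
    blocks = x.reshape(terms, d)
    return np.sum(np.sqrt(np.sum(blocks**2, axis=1) + mu**2))

res = minimize(sum_of_norms, np.full(terms * d, 0.1), method="SLSQP",
               constraints=[{"type": "eq",
                             "fun": lambda x: a.flatten() @ x - 1.0}])
norms = np.linalg.norm(res.x.reshape(terms, d), axis=1)
```

Only the first block stays active; the optimal value is 1 divided by the largest coefficient norm, here 1/3.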
Analysis and implementation of a dual algorithm for constrained optimization, Journal of Optimization Theory and Applications, 1993
Cited by 19 (3 self)
Abstract:
This paper analyzes a constrained optimization algorithm that combines an unconstrained minimization scheme such as the conjugate gradient method, an augmented Lagrangian, and multiplier updates to obtain global quadratic convergence. Some of the issues that we focus on are the treatment of rigid constraints that must be satisfied during the iterations and techniques for balancing the error associated with constraint violation against the error associated with optimality. A preconditioner is constructed with the property that the rigid constraints are satisfied while ill-conditioning due to penalty terms is alleviated. Various numerical linear algebra techniques required for the efficient implementation of the algorithm are presented, and convergence behavior is illustrated in a series of numerical experiments.
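The combination the abstract describes can be sketched minimally on an assumed quadratic test problem (this is not the paper's implementation): an inner unconstrained minimization by a nonlinear conjugate gradient method, followed by a first-order multiplier update between outer iterations.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return 0.5 * np.sum(x**2) - x[0]

def h(x):                        # one equality constraint: x0 + x1 = 1
    return np.array([x[0] + x[1] - 1.0])

x, lam, rho = np.zeros(2), np.zeros(1), 10.0
for _ in range(20):
    def aug_lag(z, lam=lam):     # bind the current multiplier estimate
        c = h(z)
        return f(z) + lam @ c + 0.5 * rho * (c @ c)
    x = minimize(aug_lag, x, method="CG").x   # inner conjugate gradient solve
    lam = lam + rho * h(x)                    # multiplier update
```

The paper's contribution lies in the preconditioning and error-balancing around this basic loop, which the sketch omits.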
Simultaneous Optimization and Heat Integration of Chemical Processes, AIChE Journal, 1986
On attraction of linearly constrained Lagrangian methods and of stabilized and quasi-Newton SQP methods to critical multipliers, Mathematical Programming, 2009
Local Convergence of Exact and Inexact Augmented Lagrangian Methods under the Second-Order Sufficient Optimality Condition, 2012
Cited by 16 (5 self)
Abstract:
We establish local convergence and rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the second-order sufficient optimality condition. In particular, no constraint qualifications of any kind are needed. Previous literature on the subject required, in addition, the linear independence constraint qualification and either the strict complementarity assumption or a stronger version of the second-order sufficient condition. That said, the classical results allow the initial multiplier estimate to be far from the optimal one, at the expense of proportionally increasing the threshold value for the penalty parameters. Although our primary goal is to avoid constraint qualifications, if the stronger assumptions are introduced, then starting points far from the optimal multiplier are allowed within our analysis as well. Using only the second-order sufficient optimality condition, for penalty parameters large enough we prove primal-dual Q-linear convergence rate, which becomes superlinear if the parameters are allowed to go to infinity. Both exact and inexact solutions of subproblems are considered. In the exact case, we further show that the primal convergence rate is of the same Q-order as the primal-dual rate. Previous assertions for the primal sequence all had to do with the weaker R-rate of convergence and required the stronger assumptions cited above. Finally, we show that under our assumptions one of the popular rules of controlling the penalty parameters ensures their boundedness.
Experience with a Primal Presolve Algorithm, in Large Scale Optimization: State of the ..., 1994
Cited by 16 (5 self)
Abstract:
Sometimes an optimization problem can be simplified to a form that is faster to solve. Indeed, sometimes it is convenient to state a problem in a way that admits some obvious simplifications, such as eliminating fixed variables and removing constraints that become redundant after simple bounds on the variables have been updated appropriately. Because of this convenience, the AMPL modeling system includes a "presolver" that attempts to simplify a problem before passing it to a solver. The current AMPL presolver carries out all the primal simplifications described by Brearley et al. in 1975. This paper describes AMPL's presolver, discusses reconstruction of dual values for eliminated constraints, and presents some computational results.
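Two of the primal reductions mentioned above can be sketched as follows; the data layout is assumed for illustration and is not AMPL's internal representation.

```python
import numpy as np

def presolve(A, b_lo, b_up, lo, up):
    """Reduce constraints b_lo <= A x <= b_up with bounds lo <= x <= up."""
    # 1. Eliminate fixed variables (lo == up): fold them into the rhs.
    fixed = lo == up
    shift = A[:, fixed] @ lo[fixed]
    b_lo, b_up = b_lo - shift, b_up - shift
    A, lo, up = A[:, ~fixed], lo[~fixed], up[~fixed]
    # 2. Drop rows made redundant by the bounds: the row's attainable
    #    activity range already lies inside [b_lo, b_up].
    row_max = np.where(A > 0, A * up, A * lo).sum(axis=1)
    row_min = np.where(A > 0, A * lo, A * up).sum(axis=1)
    keep = (row_min < b_lo) | (row_max > b_up)
    return A[keep], b_lo[keep], b_up[keep], lo, up
```

A real presolver iterates such reductions (tightened bounds can make more rows redundant, which can fix more variables) and, as the paper discusses, must also reconstruct dual values for the constraints it removed.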