Results 1–10 of 45
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
, 1997
Abstract

Cited by 328 (18 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse.
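An SQP method of this kind solves, at each iterate x_k, a quadratic programming subproblem assembled from the problem data; in standard notation (a generic sketch, not the abstract's own formulation):

```latex
\min_{p \in \mathbb{R}^n} \; \nabla f(x_k)^T p + \tfrac{1}{2}\, p^T H_k\, p
\qquad \text{subject to} \qquad c(x_k) + J(x_k)\, p \ge 0,
```

where H_k approximates the Hessian of the Lagrangian and J(x_k) is the constraint Jacobian, whose sparsity the abstract emphasizes.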
On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming
 Mathematical Programming
, 2006
Abstract

Cited by 109 (5 self)
We present a primal-dual interior-point algorithm with a filter line-search method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix. Heuristics are also considered that allow faster performance. This method has been implemented in the IPOPT code, which we demonstrate in a detailed numerical study based on 954 problems from the CUTEr test set. An evaluation is made of several line-search options, and a comparison is provided with two state-of-the-art interior-point codes for nonlinear programming.
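The filter mechanism at the heart of such a line search can be stated compactly: a trial point with constraint violation theta and objective value f is accepted only if it improves at least one of the two measures against every pair already stored in the filter. A minimal pure-Python sketch of that acceptance test (the margin constants and function names are illustrative, not IPOPT's actual internals):

```python
# Minimal filter line-search acceptance test (illustrative sketch).
# A filter is a list of (theta, f) pairs: theta = constraint violation,
# f = objective value. A trial point is acceptable if it improves
# either coordinate, by a small margin, against every filter entry.

GAMMA_THETA = 1e-5  # illustrative margin on constraint violation
GAMMA_F = 1e-5      # illustrative margin on the objective


def acceptable(theta_trial, f_trial, filter_pairs):
    """Return True if (theta_trial, f_trial) is not dominated by the filter."""
    for theta_j, f_j in filter_pairs:
        improves_theta = theta_trial <= (1 - GAMMA_THETA) * theta_j
        improves_f = f_trial <= f_j - GAMMA_F * theta_j
        if not (improves_theta or improves_f):
            return False  # dominated by entry (theta_j, f_j): reject
    return True


def augment(filter_pairs, theta, f):
    """Add a pair to the filter and drop entries it dominates."""
    kept = [(t, v) for (t, v) in filter_pairs if not (theta <= t and f <= v)]
    kept.append((theta, f))
    return kept
```

For example, with the single filter entry (1.0, 5.0), a less infeasible trial (0.5, 10.0) is accepted, while re-proposing (1.0, 5.0) itself is rejected as dominated.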
Interior methods for nonlinear optimization
 SIAM Review
, 2002
Abstract

Cited by 76 (4 self)
Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
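The classical barrier methods discussed here replace an inequality-constrained problem, minimize f(x) subject to c_i(x) >= 0, by a sequence of smooth subproblems; in standard notation the logarithmic-barrier subproblem is:

```latex
\min_{x} \; B(x, \mu) \;=\; f(x) \;-\; \mu \sum_{i} \ln c_i(x), \qquad \mu > 0,
```

whose minimizers x(mu) trace the central path and approach a solution of the original problem as mu decreases to 0.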
Failure of Global Convergence for a Class of Interior Point Methods for Nonlinear Programming
 Mathematical Programming
, 2000
Abstract

Cited by 34 (4 self)
Using a simple analytical example, we demonstrate that a class of interior point methods for general nonlinear programming, including some current methods, is not globally convergent. It is shown that those algorithms do produce limit points that are neither feasible nor stationary points of some measure of the constraint violation, when applied to a well-posed problem.

1 Introduction

Over the past decade a variety of interior point methods for nonconvex nonlinear programming (NLP) have been proposed and found to be efficient in practice (see e.g. [1]–[4], [6]–[8], [10]–[12]). Based on earlier work [5], these methods come in different varieties, such as primal or primal-dual methods, line search or trust region methods, with different merit functions, different strategies to update the barrier parameter, etc. For some algorithms, theoretical global convergence properties have been proved. It has been shown that under certain assumptions the considered method converges to a loca...
A Globally Convergent Primal-Dual Interior-Point Filter Method for Nonlinear Programming
, 2002
Abstract

Cited by 33 (4 self)
In this paper, the filter technique of Fletcher and Leyffer (1997) is used to globalize the primal-dual interior-point algorithm for nonlinear programming, avoiding the use of merit functions and the updating of penalty parameters. The new algorithm decomposes the primal-dual step obtained from the perturbed first-order necessary conditions into a normal and a tangential step, whose sizes are controlled by a trust-region type parameter. Each entry in the filter is a pair of coordinates: one resulting from feasibility and centrality, and associated with the normal step; the other resulting from optimality (complementarity and duality), and related to the tangential step. Global convergence to first-order critical points is proved for the new primal-dual interior-point filter algorithm.
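For reference, the perturbed first-order necessary conditions from which such a primal-dual step is computed are, in standard notation for minimizing f(x) subject to c(x) = 0, x >= 0 (a generic sketch; the paper's exact formulation may differ):

```latex
\nabla f(x) - J(x)^T y - z = 0, \qquad c(x) = 0, \qquad XZe = \mu e, \qquad x, z > 0,
```

where X = diag(x), Z = diag(z), and mu > 0 is the centrality parameter. The normal step addresses the feasibility/centrality blocks and the tangential step the optimality block, matching the two filter coordinates the abstract describes.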
A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties
 SIAM Journal on Optimization
, 2002
Abstract

Cited by 30 (5 self)
An exact-penalty-function-based scheme, inspired by an old idea due to Mayne and Polak (Math. Prog., vol. 11, 1976, pp. 67-80), is proposed for extending to general smooth constrained optimization problems any given feasible interior-point method for inequality constrained problems. It is shown that the primal-dual interior-point framework allows for a simpler penalty parameter update rule than that discussed and analyzed by the originators of the scheme in the context of first-order methods of feasible direction. Strong global and local convergence results are proved under mild assumptions. In particular, (i) the proposed algorithm does not suffer a common pitfall recently pointed out by Wächter and Biegler; and (ii) the positive definiteness assumption on the Hessian estimate, made in the original version of the algorithm, is relaxed, allowing for the use of exact Hessian information, resulting in local quadratic convergence. Promising numerical results are reported.
Numerical experience with solving MPECs as NLPs
 Department of Mathematics and Computer Science, University of Dundee, Dundee
, 2002
Abstract

Cited by 19 (1 self)
This paper describes numerical experience with solving MPECs as NLPs on a large collection of test problems. The key idea is to use off-the-shelf NLP solvers to tackle large instances of MPECs. It is shown that SQP methods are very well suited to solving MPECs and at present outperform Interior Point solvers both in terms of speed and reliability. All NLP solvers also compare very favourably to special MPEC solvers on tests published in the literature.
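Concretely, solving an MPEC "as an NLP" rests on rewriting each complementarity condition 0 <= G(x) ⊥ H(x) >= 0 as smooth constraints; a common reformulation (one of several in the literature, not necessarily the one used in this paper) is:

```latex
G(x) \ge 0, \qquad H(x) \ge 0, \qquad G(x)^T H(x) \le 0,
```

which any off-the-shelf NLP solver accepts, even though these constraints violate standard constraint qualifications at every feasible point.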
Wedge Trust Region Methods for Derivative Free Optimization
 Mathematical Programming
, 2000
Abstract

Cited by 18 (1 self)
A new method for derivative-free optimization is presented. It is designed for solving problems in which the objective function is smooth and the number of variables is moderate, but the gradient is not available. The method generates a model that interpolates the objective function at a set of sample points, and uses trust regions to promote convergence. The step-generation subproblem ensures that all the iterates satisfy a geometric condition and are therefore adequate for updating the model. The sample points are updated using a scheme that improves the accuracy of the interpolation model when needed. Two versions of the method are presented: one using linear models and the other using quadratic models. Numerical tests comparing the new approach with established methods for derivative-free optimization are reported. This work was supported by National Science Foundation grant CCR-9987818, and by Department of Energy grant DE-FG02-87ER25047-A004.
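The interpolation step of the linear-model variant can be sketched directly: with n+1 affinely independent sample points in R^n, the conditions m(y_i) = f(y_i) determine the model m(x) = c + g.x uniquely. A stdlib-only Python sketch for n = 2 (the sample set and helper names are illustrative, not the paper's code):

```python
# Build a linear interpolation model m(x) = c + g . x from n+1 sample
# points in R^n, as in model-based derivative-free methods (n = 2 here).
# Pure stdlib: the (n+1)x(n+1) system is solved by Gaussian elimination.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            t = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= t * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

def linear_model(points, fvals):
    """Return (c, g) with m(x) = c + g . x interpolating f at the points."""
    A = [[1.0] + list(p) for p in points]  # each row: [1, p1, p2]
    coeffs = solve(A, fvals)
    return coeffs[0], coeffs[1:]

# Illustrative usage: interpolate f(x, y) = 3 + 2x - y at three points.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
f = lambda x, y: 3.0 + 2.0 * x - 1.0 * y
c, g = linear_model(pts, [f(*p) for p in pts])
# A linear f is reproduced exactly: c = 3.0, g = [2.0, -1.0].
```

In the full method, the trust-region subproblem then minimizes this model subject to the geometric ("wedge") condition the abstract mentions, which this sketch does not attempt to reproduce.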
Generalized stationary points and an interior-point method for mathematical programs with equilibrium constraints
 Industrial Engineering & Management Sciences, Northwestern University
, 2005
Abstract

Cited by 15 (0 self)
Generalized stationary points of the mathematical program with equilibrium constraints (MPEC) are studied to better describe the limit points produced by interior point methods for MPEC. A primal-dual interior-point method is then proposed, which solves a sequence of relaxed barrier problems derived from MPEC. Global convergence results are deduced without assuming strict complementarity or the linear independence constraint qualification for MPEC (MPEC-LICQ). Under certain general assumptions, the algorithm can always find some point with strong or weak stationarity. In particular, it is shown that every limit point of the generated sequence is a strong stationary point of MPEC if the penalty parameter of the merit function is bounded. Otherwise, a certain point with weak stationarity can be obtained. Preliminary numerical results are reported, which include a case analyzed by Leyffer for which the penalty interior-point algorithm failed to find a stationary point.

Key words: Global convergence, interior-point methods, mathematical programming with equilibrium constraints, stationary point
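A typical relaxed barrier problem for an MPEC with complementarity pairs 0 <= G_i(x) ⊥ H_i(x) >= 0 couples a log-barrier on the nonnegativity conditions with a relaxation of the products (a generic sketch under these assumptions; the paper's exact relaxation scheme may differ):

```latex
\min_{x} \; f(x) \;-\; \mu \sum_{i} \bigl( \ln G_i(x) + \ln H_i(x) \bigr)
\qquad \text{subject to} \qquad G_i(x)\, H_i(x) \le \theta,
```

with mu and theta driven to 0 over the sequence of subproblems.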
On the Solution of Mathematical Programming Problems With Equilibrium Constraints
, 2001
Abstract

Cited by 13 (4 self)
Mathematical programming problems with equilibrium constraints (MPEC) are nonlinear programming problems where the constraints have a form that is analogous to first-order optimality conditions of constrained optimization. We prove that, under reasonable sufficient conditions, stationary points of the sum of squares of the constraints are feasible points of the MPEC. In usual formulations of MPEC all the feasible points are nonregular in the sense that they do not satisfy the Mangasarian-Fromovitz constraint qualification of nonlinear programming. Therefore, all the feasible points satisfy the classical Fritz John necessary optimality conditions. In principle, this can cause serious difficulties for nonlinear programming algorithms applied to MPEC. However, we show that most feasible points do not satisfy a recently introduced stronger optimality condition for nonlinear programming. This is the reason why, in general, nonlinear programming algorithms are successful when applied to MPEC.

Keywords: Mathematical programming with equilibrium constraints, optimality conditions, minimization algorithms, reformulation. AMS: 90C33, 90C30
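The feasibility result can be illustrated on the scalar complementarity constraint 0 <= x ⊥ y >= 0: minimizing a sum of squares of the constraint residuals with plain gradient descent (a deliberately simple stand-in for the minimization algorithms the paper has in mind) drives the iterate to a feasible point of the MPEC:

```python
# Illustrative only: reach a feasible point of 0 <= x ⊥ y >= 0 by
# minimizing the squared constraint residuals
#     P(x, y) = max(0, -x)**2 + max(0, -y)**2 + (x * y)**2
# with fixed-step gradient descent (a simple stand-in solver).

def grad(x, y):
    """Gradient of P; the max(0, .) terms vanish when x, y >= 0."""
    gx = -2.0 * max(0.0, -x) + 2.0 * x * y * y
    gy = -2.0 * max(0.0, -y) + 2.0 * x * x * y
    return gx, gy

x, y = 0.5, 0.2   # illustrative starting point
step = 0.1        # illustrative fixed step length
for _ in range(5000):
    gx, gy = grad(x, y)
    x, y = x - step * gx, y - step * gy

# The iterate approaches the feasible set {x >= 0, y >= 0, x*y = 0}:
# x settles near 0.5 while y, and hence the product x*y, decays to ~0.
```

The stationary point reached is merely feasible, not necessarily optimal for any objective, which is exactly the scope of the feasibility result stated above.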