Results 1 - 10 of 96
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
, 2002
"... Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first deriv ..."
Abstract

Cited by 597 (24 self)
 Add to MetaCart
(Show Context)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse. We discuss ...
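To make the SQP idea in this abstract concrete, the minimal sketch below takes Newton-like SQP steps on a tiny equality-constrained toy problem by solving the QP subproblem's KKT system directly. The toy problem, the dense linear algebra, and all function names are illustrative assumptions; SNOPT itself handles general inequalities, sparse Jacobians, and quasi-Newton Hessians, none of which is shown here.

```python
# Minimal SQP sketch on a toy equality-constrained problem (illustrative only).
import numpy as np

def grad_f(x):
    # gradient of the toy objective f(x) = x1^2 + x2^2
    return np.array([2.0 * x[0], 2.0 * x[1]])

def c(x):
    # single equality constraint: x1 + x2 - 1 = 0
    return np.array([x[0] + x[1] - 1.0])

def jac_c(x):
    return np.array([[1.0, 1.0]])

def sqp_step(x):
    """Solve the QP subproblem's KKT system for the step dx and a multiplier estimate."""
    H = np.array([[2.0, 0.0], [0.0, 2.0]])       # Lagrangian Hessian (exact for this toy problem)
    A = jac_c(x)
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([-grad_f(x), -c(x)])
    sol = np.linalg.solve(K, rhs)
    return sol[:2], sol[2:]                       # primal step dx, multiplier estimate

x = np.array([2.0, -1.0])
for _ in range(5):
    dx, lam = sqp_step(x)
    x = x + dx
print(x)   # converges to [0.5, 0.5], the constrained minimizer
```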
On the Implementation of an Interior-Point Filter Line-Search Algorithm for Large-Scale Nonlinear Programming
, 2004
"... We present a primaldual interiorpoint algorithm with a filter linesearch method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration ph ..."
Abstract

Cited by 294 (6 self)
 Add to MetaCart
(Show Context)
We present a primal-dual interior-point algorithm with a filter line-search method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix. Heuristics are also considered that allow faster performance. This method has been implemented in the IPOPT code, which we demonstrate in a detailed numerical study based on 954 problems from the CUTEr test set. An evaluation is made of several line-search options, and a comparison is provided with two state-of-the-art interior-point codes for nonlinear programming.
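As a rough illustration of the filter line-search idea referred to in this abstract (not IPOPT's actual implementation), the sketch below shows a filter acceptance test: a trial point is accepted only if, against every stored pair of constraint violation and barrier objective, it sufficiently improves at least one of the two measures. The margin constants and function names are assumptions made for illustration; switching conditions, second-order corrections, and the restoration phase are omitted.

```python
# Minimal filter acceptance test (illustrative sketch, not IPOPT's rules).

def acceptable_to_filter(theta_trial, phi_trial, filter_pairs,
                         gamma_theta=1e-5, gamma_phi=1e-5):
    """Accept a trial point with constraint violation theta_trial and barrier
    objective phi_trial if it is not dominated by any stored filter pair."""
    for theta_k, phi_k in filter_pairs:
        improves_feasibility = theta_trial <= (1.0 - gamma_theta) * theta_k
        improves_objective = phi_trial <= phi_k - gamma_phi * theta_k
        if not (improves_feasibility or improves_objective):
            return False    # dominated by this filter entry: reject the trial point
    return True

# Usage: a backtracking line search would keep shrinking the step length
# until the resulting trial point passes this test (or restoration is triggered).
filter_pairs = [(1.0, 10.0), (0.5, 12.0)]
print(acceptable_to_filter(0.3, 11.0, filter_pairs))   # True: much more feasible
print(acceptable_to_filter(1.2, 10.5, filter_pairs))   # False: dominated by (1.0, 10.0)
```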
Interior methods for nonlinear optimization
 SIAM REVIEW
, 2002
"... Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interiorpoint techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for ..."
Abstract

Cited by 127 (6 self)
 Add to MetaCart
(Show Context)
Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
An interior point algorithm for large-scale nonlinear . . .
, 2002
"... Nonlinear programming (NLP) has become an essential tool in process engineering, leading to prot gains through improved plant designs and better control strategies. The rapid advance in computer technology enables engineers to consider increasingly complex systems, where existing optimization codes ..."
Abstract

Cited by 64 (3 self)
 Add to MetaCart
Nonlinear programming (NLP) has become an essential tool in process engineering, leading to profit gains through improved plant designs and better control strategies. The rapid advance in computer technology enables engineers to consider increasingly complex systems, where existing optimization codes reach their practical limits. The objective of this dissertation is the design, analysis, implementation, and evaluation of a new NLP algorithm that is able to overcome the current bottlenecks, particularly in the area of process engineering. The proposed algorithm follows an interior point approach, thereby avoiding the combinatorial complexity of identifying the active constraints. Emphasis is laid on flexibility in the computation of search directions, which allows the tailoring of the method to individual applications and is mandatory for the solution of very large problems. In a full-space version the method can be used as a general-purpose NLP solver, for example in modeling environments such as AMPL. The reduced-space version, based on coordinate decomposition, makes it possible to tailor the linear algebra ...
A Globally Convergent Primal-Dual Interior-Point Filter Method for Nonlinear Programming
, 2002
"... In this paper, the filter technique of Fletcher and Leyffer (1997) is used to globalize the primaldual interiorpoint algorithm for nonlinear programming, avoiding the use of merit functions and the updating of penalty parameters. The new algorithm decomposes the primaldual step obtained from the p ..."
Abstract

Cited by 52 (4 self)
 Add to MetaCart
In this paper, the filter technique of Fletcher and Leyffer (1997) is used to globalize the primal-dual interior-point algorithm for nonlinear programming, avoiding the use of merit functions and the updating of penalty parameters. The new algorithm decomposes the primal-dual step obtained from the perturbed first-order necessary conditions into a normal and a tangential step, whose sizes are controlled by a trust-region-type parameter. Each entry in the filter is a pair of coordinates: one resulting from feasibility and centrality, and associated with the normal step; the other resulting from optimality (complementarity and duality), and related to the tangential step. Global convergence to first-order critical points is proved for the new primal-dual interior-point filter algorithm.
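One simple way to picture the normal/tangential decomposition described in this abstract is to split a candidate step into its component in the range of the constraint Jacobian transpose (driving feasibility) and its component in the Jacobian's null space (driving optimality). The dense projection below is only an illustrative assumption, not the paper's trust-region-controlled decomposition.

```python
# Sketch of a normal/tangential step split with respect to a constraint Jacobian.
import numpy as np

def split_step(step, A):
    """Split `step` into a normal part in range(A^T) and a tangential part in
    the null space of A (dense pseudo-inverse; small problems only)."""
    P_range = A.T @ np.linalg.pinv(A @ A.T) @ A   # orthogonal projector onto range(A^T)
    normal = P_range @ step                       # moves toward satisfying the constraints
    tangential = step - normal                    # satisfies A @ tangential ≈ 0
    return normal, tangential

A = np.array([[1.0, 1.0, 0.0]])        # Jacobian of a single constraint in R^3
step = np.array([0.3, -0.1, 0.5])      # some candidate primal step
normal, tangential = split_step(step, A)
print(A @ tangential)                  # ≈ [0.], i.e. the tangential part keeps the linearized constraint
```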
Failure of global convergence for a class of interior point methods for nonlinear programming
 Math. Program
"... ..."
(Show Context)
A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties
 SIAM Journal on Optimization
, 2002
"... An exactpenaltyfunctionbased schemeinspired from an old idea due to Mayne and Polak (Math. Prog., vol. 11, 1976, pp. 6780)is proposed for extending to general smooth constrained optimization problems any given feasible interiorpoint method for inequality constrained problems. It is s ..."
Abstract

Cited by 39 (5 self)
 Add to MetaCart
An exact-penalty-function-based scheme, inspired by an old idea due to Mayne and Polak (Math. Prog., vol. 11, 1976, pp. 67-80), is proposed for extending to general smooth constrained optimization problems any given feasible interior-point method for inequality constrained problems. It is shown that the primal-dual interior-point framework allows for a simpler penalty parameter update rule than that discussed and analyzed by the originators of the scheme in the context of first-order methods of feasible direction. Strong global and local convergence results are proved under mild assumptions. In particular, (i) the proposed algorithm does not suffer a common pitfall recently pointed out by Wächter and Biegler; and (ii) the positive definiteness assumption on the Hessian estimate, made in the original version of the algorithm, is relaxed, allowing for the use of exact Hessian information, resulting in local quadratic convergence. Promising numerical results are reported.
Wedge Trust Region Methods for Derivative Free Optimization
 Mathematical Programming
, 2000
"... A new method for derivativefree optimization is presented. It is designed for solving problems in which the objective function is smooth and the number of variables is moderate, but the gradient is not available. The method generates a model that interpolates the objective function at a set of s ..."
Abstract

Cited by 29 (1 self)
 Add to MetaCart
(Show Context)
A new method for derivative-free optimization is presented. It is designed for solving problems in which the objective function is smooth and the number of variables is moderate, but the gradient is not available. The method generates a model that interpolates the objective function at a set of sample points, and uses trust regions to promote convergence. The step-generation subproblem ensures that all the iterates satisfy a geometric condition and are therefore adequate for updating the model. The sample points are updated using a scheme that improves the accuracy of the interpolation model when needed. Two versions of the method are presented: one using linear models and the other using quadratic models. Numerical tests comparing the new approach with established methods for derivative-free optimization are reported. This work was supported by National Science Foundation grant CCR9987818, and by Department of Energy grant DEFG0287ER25047A004.
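The model-based idea summarized in this abstract can be sketched in a few lines: interpolate the objective at n+1 sample points with a linear model, then minimize that model inside a trust region. The toy objective, sample set, and trust-region radius below are illustrative assumptions; the wedge method's geometry condition and quadratic models are omitted.

```python
# Sketch of one model-based derivative-free step: linear interpolation model
# plus a trust-region step (illustrative only).
import numpy as np

def linear_interpolation_model(points, values):
    """Fit m(x) = c + g^T x through n+1 sample points in R^n."""
    P = np.asarray(points, dtype=float)
    Y = np.hstack([np.ones((len(P), 1)), P])      # interpolation rows [1, x_i]
    coeffs = np.linalg.solve(Y, np.asarray(values, dtype=float))
    return coeffs[0], coeffs[1:]                  # intercept c, gradient estimate g

def trust_region_step(g, delta):
    """Minimizer of the linear model on the ball ||s|| <= delta."""
    norm_g = np.linalg.norm(g)
    return np.zeros_like(g) if norm_g == 0.0 else -delta * g / norm_g

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2   # objective queried by value only
points = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
values = [f(p) for p in points]
c0, g = linear_interpolation_model(points, values)
print(trust_region_step(g, delta=0.5))   # step pointing toward the minimizer (1, -2)
```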
Entropy Search for Information-Efficient Global Optimization
"... Contemporary global optimization algorithms are based on local measures of utility, rather than a probability measure over location and value of the optimum. They thus attempt to collect low function values, not to learn about the optimum. The reason for the absence of probabilistic global optimizer ..."
Abstract

Cited by 28 (0 self)
 Add to MetaCart
(Show Context)
Contemporary global optimization algorithms are based on local measures of utility, rather than a probability measure over location and value of the optimum. They thus attempt to collect low function values, not to learn about the optimum. The reason for the absence of probabilistic global optimizers is that the corresponding inference problem is intractable in several ways. This paper develops desiderata for probabilistic optimization algorithms, then presents a concrete algorithm which addresses each of the computational intractabilities with a sequence of approximations and explicitly addresses the decision problem of maximizing information gain from each evaluation.
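The central object behind the probabilistic view described in this abstract is p_min, a probability distribution over where the optimum lies. A crude Monte Carlo approximation of p_min under a Gaussian-process surrogate is sketched below; the kernel, grid, and sample counts are illustrative assumptions, and the actual Entropy Search acquisition step (choosing evaluations that maximally reduce the entropy of p_min) is not shown.

```python
# Monte Carlo estimate of p_min (probability of each candidate being the minimizer)
# under a GP surrogate; illustrative sketch only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(3.0 * x) + 0.5 * x          # "unknown" objective, used only to create demo data

X_obs = rng.uniform(-2.0, 2.0, size=(5, 1))      # a handful of observed evaluations
y_obs = f(X_obs).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.7), alpha=1e-6)
gp.fit(X_obs, y_obs)

X_cand = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)           # discretized candidate locations
samples = gp.sample_y(X_cand, n_samples=500, random_state=0)  # posterior draws, shape (200, 500)

# Each posterior draw has some minimizer on the grid; the histogram of those
# minimizers is a Monte Carlo estimate of p_min.
argmins = samples.argmin(axis=0)
p_min = np.bincount(argmins, minlength=len(X_cand)) / samples.shape[1]
entropy = -np.sum(p_min[p_min > 0] * np.log(p_min[p_min > 0]))   # uncertainty about the minimizer
print(X_cand[p_min.argmax()], entropy)
```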