Results 1–10 of 36
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
1997. Cited by 327 (18 self).
Abstract: Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse.
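The core of an SQP iteration is solving a QP subproblem; for the equality-constrained case this reduces to one linear KKT system. The sketch below is a minimal dense illustration, not SNOPT's algorithm (which handles inequalities and exploits sparsity); all names are ours.

```python
import numpy as np

# Illustrative sketch of one SQP step for the equality-constrained
# case, minimize f(x) subject to c(x) = 0: solve the KKT system of
# the QP subproblem with a dense factorization. SNOPT itself handles
# inequalities and exploits sparse constraint gradients.
def sqp_step(g, H, A, c):
    """g: gradient of f (n,); H: Lagrangian Hessian (n, n);
    A: constraint Jacobian (m, n); c: constraint values (m,).
    Returns the step p and the multiplier estimate lam."""
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -c]))
    return sol[:n], sol[n:]

# Example: minimize 0.5*(x0^2 + x1^2) subject to x0 + x1 - 2 = 0,
# starting from x = (0, 0). A single step reaches (1, 1) because the
# problem is itself a quadratic program.
x = np.zeros(2)
p, lam = sqp_step(x.copy(), np.eye(2),
                  np.array([[1.0, 1.0]]), np.array([x[0] + x[1] - 2.0]))
print(x + p)  # -> [1. 1.]
```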
On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming
Mathematical Programming, 2006. Cited by 109 (5 self).
Abstract: We present a primal-dual interior-point algorithm with a filter line-search method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix. Heuristics are also considered that allow faster performance. This method has been implemented in the IPOPT code, which we demonstrate in a detailed numerical study based on 954 problems from the CUTEr test set. An evaluation is made of several line-search options, and a comparison is provided with two state-of-the-art interior-point codes for nonlinear programming.
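A filter accepts a trial point if it improves either the constraint violation or the objective relative to every stored filter entry. The sketch below shows only this basic dominance test; the margin names are ours, and IPOPT's actual test adds sufficient-decrease conditions and invokes feasibility restoration when no acceptable step size is found.

```python
# Minimal sketch of the acceptance test behind a filter line-search
# (names and margin parameters are illustrative, not IPOPT internals).
def acceptable(theta, f, filter_entries, gamma_theta=1e-5, gamma_f=1e-5):
    """A trial point with constraint violation theta and objective f is
    acceptable if, against every filter entry (theta_j, f_j), it
    sufficiently reduces either theta or f."""
    return all(
        theta <= (1.0 - gamma_theta) * theta_j or f <= f_j - gamma_f * theta_j
        for theta_j, f_j in filter_entries
    )

filt = [(1.0, 5.0), (0.5, 6.0)]
print(acceptable(0.4, 7.0, filt))  # improves theta against both entries -> True
print(acceptable(0.9, 6.5, filt))  # dominated by the entry (0.5, 6.0) -> False
```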
An Algorithm for Nonlinear Optimization Using Linear Programming and Equality Constrained Subproblems
2003. Cited by 41 (12 self).
Abstract: This paper describes an active-set algorithm for large-scale nonlinear programming based on the successive linear programming method proposed by Fletcher and Sainz de la Maza [10]. The step computation is performed in two stages. In the first stage a linear program is solved to estimate the active set at the solution. The linear program is obtained by making a linear approximation to the ℓ1 penalty function inside a trust region. In the second stage, an equality constrained quadratic program (EQP) is solved involving only those constraints that are active at the solution of the linear program.
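The object minimized in the LP stage is the piecewise-linear model of the ℓ1 penalty function. A small sketch of the penalty and its linearization, with names of our choosing (the paper's LP additionally restricts the step to a trust region and uses the LP solution to guess the active set for the EQP stage):

```python
import numpy as np

# Illustrative: the l1 penalty function and its piecewise-linear
# model at a candidate step d, for equality constraints c(x) = 0.
def l1_penalty(f_x, c_x, nu):
    """phi = f(x) + nu * ||c(x)||_1."""
    return f_x + nu * np.abs(c_x).sum()

def l1_linear_model(f_x, g, c_x, J, d, nu):
    """Linearization of phi at x, evaluated at step d."""
    return f_x + g @ d + nu * np.abs(c_x + J @ d).sum()

# Toy data: f(x) = 1, grad f = (1, 0), one constraint with value 0.5
# and Jacobian row (0, 1). The step d = (0, -0.5) removes the
# linearized infeasibility without increasing the objective model.
f_x, g = 1.0, np.array([1.0, 0.0])
c_x, J = np.array([0.5]), np.array([[0.0, 1.0]])
print(l1_penalty(f_x, c_x, 10.0))                                       # -> 6.0
print(l1_linear_model(f_x, g, c_x, J, np.array([0.0, -0.5]), 10.0))     # -> 1.0
```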
An interior algorithm for nonlinear optimization that combines line search and trust region steps
Mathematical Programming 107, 2006. Cited by 31 (11 self).
Abstract: An interior-point method for nonlinear programming is presented. It enjoys the flexibility of switching between a line search method that computes steps by factoring the primal-dual equations and a trust region method that uses a conjugate gradient iteration. Steps computed by direct factorization are always tried first, but if they are deemed ineffective, a trust region iteration that guarantees progress toward stationarity is invoked. To demonstrate its effectiveness, the algorithm is implemented in the Knitro [6, 28] software package and is extensively tested on a wide selection of test problems.
A globally convergent linearly constrained Lagrangian method for nonlinear optimization
SIAM J. Optim., 2002. Cited by 22 (5 self).
Abstract: For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form "minimize an augmented Lagrangian function subject to linearized constraints." Such methods converge rapidly near a solution but may not be reliable from arbitrary starting points. Nevertheless, the well-known software package MINOS has proved effective on many large problems. Its success motivates us to derive a related LCL algorithm that possesses three important properties: it is globally convergent, the subproblem constraints are always feasible, and the subproblems may be solved inexactly. The new algorithm has been implemented in MATLAB, with an option to use either MINOS or SNOPT (Fortran codes) to solve the linearly constrained subproblems. Only first derivatives are required. We present numerical results on a subset of the COPS, HS, and CUTE test problems, which include many large examples. The results demonstrate the robustness and efficiency of the stabilized LCL procedure.
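The subproblem objective in an LCL method is the augmented Lagrangian, minimized subject to the constraints linearized at the current iterate, c(x_k) + J(x_k)(x - x_k) = 0. A toy evaluation of that objective (names are ours, not MINOS or SNOPT internals):

```python
import numpy as np

# Illustrative: the augmented Lagrangian
#   L_A(x) = f(x) - y^T c(x) + (rho/2) * ||c(x)||^2
# for equality constraints c(x) = 0, multiplier estimate y, and
# penalty parameter rho.
def augmented_lagrangian(f, c, x, y, rho):
    cx = c(x)
    return f(x) - y @ cx + 0.5 * rho * (cx @ cx)

# Toy problem: f(x) = x0^2 + x1^2 with one constraint x0*x1 - 1 = 0.
# At the feasible point (1, 1) the penalty and multiplier terms
# vanish, so L_A equals f.
f = lambda x: x[0]**2 + x[1]**2
c = lambda x: np.array([x[0]*x[1] - 1.0])
print(augmented_lagrangian(f, c, np.array([1.0, 1.0]), np.array([1.0]), 10.0))  # -> 2.0
```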
Approximate factorization constraint preconditioners for saddle-point matrices
SIAM J. Sci. Comput. Cited by 13 (2 self).
Abstract: We consider the application of the conjugate gradient method to the solution of large, symmetric indefinite linear systems. Special emphasis is put on the use of constraint preconditioners and a new factorization that can reduce the number of flops required by the preconditioning step. Results concerning the eigenvalues of the preconditioned matrix and its minimum polynomial are given. Numerical experiments validate these conclusions.
GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization
2002. Cited by 12 (2 self).
Abstract: In this paper, we describe the design of version 1.0 of GALAHAD, a library of Fortran 90 packages for large-scale nonlinear optimization. The library particularly addresses quadratic programming problems, containing both interior-point and active-set variants, as well as tools for preprocessing such problems prior to solution. It also contains an updated version of the venerable nonlinear programming package LANCELOT.
Asynchronous parallel generating set search for linearly constrained optimization
SIAM Journal on Scientific Computing, 2008. Cited by 11 (4 self).
Abstract: We describe an asynchronous parallel derivative-free algorithm for linearly constrained optimization. Generating set search (GSS) is the basis of our method. At each iteration, a GSS algorithm computes a set of search directions and corresponding trial points and then evaluates the objective function value at each trial point. Asynchronous versions of the algorithm have been developed in the unconstrained and bound-constrained cases which allow the iterations to continue (and new trial points to be generated and evaluated) as soon as the evaluation of any trial point completes. This enables better utilization of parallel resources and a reduction in overall run time, especially for problems where the objective function takes minutes or hours to compute. For linearly constrained GSS, the convergence theory requires that the set of search directions conforms to the nearby boundary. This creates an immediate obstacle for asynchronous methods, where the notion of "nearby" is not well defined. In this paper, we develop an asynchronous linearly constrained GSS method that overcomes this difficulty and maintains the original convergence theory. We describe our implementation in detail, including how to avoid function evaluations by caching function values and using approximate lookups. We test our implementation on every CUTEr test problem with general linear constraints and up to 1000 variables. Without tuning to individual problems, our implementation was able to solve 95% of the test problems with 10 or fewer variables, 73% of the problems with 11–100 variables, and nearly half of the problems with 101–1000 variables. To the best of our knowledge, these are the best results that have ever been achieved with a derivative-free method for linearly constrained optimization. Our asynchronous parallel implementation is freely available as part of the APPSPACK software.
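In its simplest serial form, a GSS iteration polls trial points along a generating set of directions, accepts an improving point, and contracts the step when none improves. The sketch below shows only that skeleton; the paper's method is asynchronous and parallel and conforms the search directions to nearby linear constraints, none of which appears here.

```python
import numpy as np

# Minimal serial sketch of generating set search (GSS) for an
# unconstrained problem: poll along +/- coordinate directions,
# accept the first improving trial point, otherwise contract the
# step until it falls below a tolerance.
def gss(f, x, step=1.0, tol=1e-6, max_iter=1000):
    directions = [s * e for e in np.eye(len(x)) for s in (1.0, -1.0)]
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        for d in directions:
            trial = x + step * d
            ft = f(trial)
            if ft < fx:              # accept the first improving point
                x, fx = trial, ft
                break
        else:
            step *= 0.5              # no improvement: contract the step
    return x

x_min = gss(lambda x: (x[0] - 3.0)**2 + (x[1] + 1.0)**2, np.zeros(2))
print(np.round(x_min, 3))            # converges to (3, -1)
```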
Inexact SQP methods for equality constrained optimization
SIAM J. Optim. Cited by 9 (5 self).
Abstract: We present an algorithm for large-scale equality constrained optimization. The method is based on a characterization of inexact sequential quadratic programming (SQP) steps that can ensure global convergence. Inexact SQP methods are needed for large-scale applications for which the iteration matrix cannot be explicitly formed or factored and the arising linear systems must be solved using iterative linear algebra techniques. We address how to determine when a given inexact step makes sufficient progress toward a solution of the nonlinear program, as measured by an exact penalty function. The method is globalized by a line search. An analysis of the global convergence properties of the algorithm and numerical results are presented.
Key words: large-scale optimization, constrained optimization, sequential quadratic programming, inexact linear system solvers, Krylov subspace methods. AMS subject classifications: 49M37, 65K05, 90C06, 90C30, 90C55.
Adaptive Barrier Update Strategies for Nonlinear Interior Methods
2005. Cited by 8 (0 self).
Abstract: This paper considers strategies for selecting the barrier parameter at every iteration of an interior-point method for nonlinear programming. Numerical experiments suggest that adaptive choices, such as Mehrotra's probing procedure, outperform static strategies that hold the barrier parameter fixed until a barrier optimality test is satisfied. A new adaptive strategy is proposed based on the minimization of a quality function. The paper also proposes a globalization framework that ensures the convergence of adaptive interior methods. The barrier update strategies proposed in this paper are applicable to a wide class of interior methods and are tested in the two distinct algorithmic frameworks provided by the IPOPT and KNITRO software packages.
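For concreteness, a Mehrotra-style probing update sets the new barrier parameter from the current average complementarity, scaled by a centering factor derived from how much an affine-scaling (predictor) step would reduce it. This is a hedged sketch of that classic heuristic only; the paper's quality-function strategy is more elaborate, and all names here are ours.

```python
# Illustrative Mehrotra-style adaptive barrier update: mu_new =
# sigma * mu, where mu = x^T s / n is the average complementarity
# and sigma is a centering factor computed from the complementarity
# mu_aff that the affine-scaling (predictor) step would achieve.
def adaptive_mu(x, s, x_aff, s_aff, n):
    mu = sum(xi * si for xi, si in zip(x, s)) / n
    mu_aff = sum(xi * si for xi, si in zip(x_aff, s_aff)) / n
    sigma = (mu_aff / mu) ** 3      # Mehrotra's centering heuristic
    return sigma * mu

# If the predictor step halves the complementarity, mu shrinks by 1/8:
print(adaptive_mu([1.0, 1.0], [1.0, 1.0], [0.5, 0.5], [1.0, 1.0], n=2))  # -> 0.125
```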