Results 1-10 of 19
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
, 2002
Abstract

Cited by 597 (24 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse. We discuss
Benchmarking Optimization Software with Performance Profiles
, 2001
Abstract

Cited by 386 (8 self)
We propose performance profiles (distribution functions for a performance metric) as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.

1 Introduction

The benchmarking of optimization software has recently gained considerable visibility. Hans Mittelmann's [13] work on a variety of optimization software has frequently uncovered deficiencies in the software and has generally led to software improvements. Although Mittelmann's efforts have gained the most notice, other researchers have been concerned with the evaluation and performance of optimization codes. As recent examples, we cite [1, 2, 3, 4, 6, 12, 17]. The interpretation and analysis of the data generated by the benchmarking process are the main technical issues addressed in this paper. Most benchmarking efforts involve tables displaying the performance of each solver on each problem for a set of metrics such...
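The construction the abstract describes can be sketched numerically: for each problem, a solver's performance ratio is its metric divided by the best metric any solver achieved on that problem, and the profile is the cumulative distribution of those ratios. A minimal sketch (function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def performance_profile(T, taus):
    """Sketch of a performance profile computation.

    T: (n_problems, n_solvers) array of a performance metric
       (e.g. run time); np.inf marks a solver failure.
    taus: iterable of ratio thresholds tau >= 1.
    Returns rho with rho[i, s] = fraction of problems on which
    solver s is within a factor taus[i] of the best solver.
    """
    T = np.asarray(T, dtype=float)
    best = T.min(axis=1, keepdims=True)   # best metric on each problem
    ratios = T / best                     # performance ratios r_{p,s}
    n_problems = T.shape[0]
    return np.array([(ratios <= tau).sum(axis=0) / n_problems
                     for tau in taus])

# Toy example: 3 problems, 2 solvers; solver B fails on problem 2.
T = np.array([[1.0, 2.0],
              [3.0, np.inf],
              [2.0, 2.0]])
rho = performance_profile(T, taus=[1.0, 2.0])
```

A failed run has an infinite ratio and so never counts toward any threshold, which is how profiles fold robustness and efficiency into one curve.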
Optimality measures for performance profiles
 Preprint ANL/MCS-P1155-0504, Mathematics and Computer Science Division, Argonne National Lab
, 2004
Abstract

Cited by 27 (0 self)
We examine the influence of optimality measures on the benchmarking process, and show that scaling requirements lead to a convergence test for nonlinearly constrained solvers that uses a mixture of absolute and relative error measures. We show that this convergence test is well behaved at any point where the constraints satisfy the Mangasarian-Fromovitz constraint qualification and also avoids the explicit use of a complementarity measure. Our computational experiments explore the impact of this convergence test on the benchmarking process with performance profiles.
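As a rough illustration of what a mixed absolute/relative error measure looks like (the paper's actual test is more involved and also accounts for constraint violation; this predicate is only a sketch with illustrative names):

```python
def converged(fval, fstar, tol=1e-6):
    """Mixed absolute/relative convergence test (illustrative sketch).

    Scales the error by max(1, |fstar|): the test behaves like an
    absolute test when |fstar| is small and like a relative test
    when |fstar| is large, avoiding a pure relative test's failure
    at fstar = 0.
    """
    return abs(fval - fstar) <= tol * max(1.0, abs(fstar))
```

The scaling is what makes the test usable across problems whose optimal values differ by many orders of magnitude.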
An interior point method for mathematical programs with complementarity constraints (MPCCs)
 SIAM JOURNAL ON OPTIMIZATION
, 2003
Abstract

Cited by 24 (1 self)
Interior point methods for nonlinear programs (NLP) are adapted for the solution of mathematical programs with complementarity constraints (MPCCs). The constraints of the MPCC are suitably relaxed so as to guarantee a strictly feasible interior for the inequality constraints. The standard primal-dual algorithm has been adapted with a modified step calculation. The algorithm is shown to be superlinearly convergent in the neighborhood of the solution set under assumptions of MPCC-LICQ, strong stationarity, and upper-level strict complementarity. The modification can be easily accommodated within most nonlinear programming interior point algorithms with identical local behavior. Numerical experience is also presented and holds promise for the proposed method.
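A common relaxation scheme of the kind the abstract describes (the notation here is illustrative; the paper's exact formulation may differ) replaces each complementarity condition 0 <= x_1 perp x_2 >= 0 by the relaxed system

```latex
x_1 \ge 0, \qquad x_2 \ge 0, \qquad x_{1,i}\, x_{2,i} \le \theta, \quad i = 1,\dots,m,
```

where the relaxation parameter \theta > 0 gives the inequality constraints a strictly feasible interior, and \theta is driven toward zero as the interior method converges.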
A PRIMAL-DUAL TRUST REGION ALGORITHM FOR NONLINEAR OPTIMIZATION
, 2003
Abstract

Cited by 21 (3 self)
This paper concerns general (nonconvex) nonlinear optimization when first and second derivatives of the objective and constraint functions are available. The proposed method is based on finding an approximate solution of a sequence of unconstrained subproblems parameterized by a scalar parameter. The objective function of each unconstrained subproblem is an augmented penalty-barrier function that involves both primal and dual variables. Each subproblem is solved using a second-derivative Newton-type method that employs a combined trust-region and line-search strategy to ensure global convergence. It is shown that the trust-region step can be computed by factorizing a sequence of systems with diagonally-modified primal-dual structure, where the inertia of these systems can be determined without recourse to a special factorization method. This has the benefit that off-the-shelf linear system software can be used at all times, allowing the straightforward extension to large-scale problems. Numerical results are given for problems in the COPS test collection.
Iterative solution of augmented systems arising in interior methods
 SIAM JOURNAL ON OPTIMIZATION
, 2007
Abstract

Cited by 20 (1 self)
Iterative methods are proposed for certain augmented systems of linear equations that arise in interior methods for general nonlinear optimization. Interior methods define a sequence of KKT equations that represent the symmetrized (but indefinite) equations associated with Newton's method for a point satisfying the perturbed optimality conditions. These equations involve both the primal and dual variables and become increasingly ill-conditioned as the optimization proceeds. In this context, an iterative linear solver must not only handle the ill-conditioning but also detect the occurrence of KKT matrices with the wrong matrix inertia. A one-parameter family of equivalent linear equations is formulated that includes the KKT system as a special case. The discussion focuses on a particular system from this family, known as the “doubly augmented system,” that is positive definite with respect to both the primal and dual variables. This property means that a standard preconditioned conjugate-gradient method involving both primal and dual variables will either terminate successfully or detect if the KKT matrix has the wrong inertia. Constraint preconditioning is a well-known technique for preconditioning the conjugate-gradient method on augmented systems. A family of constraint preconditioners is proposed that provably eliminates the inherent ill-conditioning in the augmented system. A considerable benefit of combining constraint preconditioning with the doubly augmented system is that the preconditioner need not be applied exactly. Two particular “active-set” constraint preconditioners are formulated that involve only a subset of the rows of the augmented system and thereby may be applied with considerably less work. Finally, some numerical experiments illustrate the numerical performance of the proposed preconditioners and highlight some theoretical properties of the preconditioned matrices.
Iterative methods for finding a trust-region step
 SIAM J. Optim
Abstract

Cited by 14 (3 self)
We consider the problem of finding an approximate minimizer of a general quadratic function subject to a two-norm constraint. The Steihaug-Toint method minimizes the quadratic over a sequence of expanding subspaces until the iterates either converge to an interior point or cross the constraint boundary. The benefit of this approach is that an approximate solution may be obtained with minimal work and storage. However, the method does not allow the accuracy of a constrained solution to be specified. We propose an extension of the Steihaug-Toint method that allows a solution to be calculated to any prescribed accuracy. If the Steihaug-Toint point lies on the boundary, the constrained problem is solved on a sequence of evolving low-dimensional subspaces. Each subspace includes an accelerator direction obtained from a regularized Newton method applied to the constrained problem. A crucial property of this direction is that it can be computed by applying the conjugate-gradient method to a positive-definite system in both the primal and dual variables of the constrained problem. The method includes a parameter that allows the user to take advantage of the trade-off between the overall number of function evaluations and matrix-vector products associated with the underlying trust-region method. At one extreme, a low-accuracy solution is obtained that is comparable to the Steihaug-Toint point. At the other extreme, a high-accuracy solution can be specified that minimizes the overall number of function evaluations at the expense of more matrix-vector products.
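For reference, the baseline Steihaug-Toint iteration that this abstract extends can be sketched as a truncated conjugate-gradient loop that stops at the trust-region boundary (a minimal dense-matrix sketch; the function names and tolerances are illustrative, not from the paper):

```python
import numpy as np

def steihaug_toint(H, g, delta, tol=1e-8, max_iter=100):
    """Truncated-CG (Steihaug-Toint style) step for
    min g^T p + 0.5 p^T H p  subject to  ||p|| <= delta,
    with H a symmetric matrix. Returns an approximate minimizer p."""
    p = np.zeros_like(g)
    r = g.copy()                    # model gradient at p
    d = -r                          # CG search direction
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Hd = H @ d
        dHd = d @ Hd
        if dHd <= 0:
            # negative curvature: follow d to the boundary
            return p + _boundary_tau(p, d, delta) * d
        alpha = (r @ r) / dHd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:
            # CG iterate leaves the trust region: stop on the boundary
            return p + _boundary_tau(p, d, delta) * d
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return p_next           # interior solution found
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        p, r = p_next, r_next
    return p

def _boundary_tau(p, d, delta):
    # positive root of ||p + tau d||^2 = delta^2
    a, b, c = d @ d, 2 * (p @ d), p @ p - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
```

Note the two exits onto the boundary (negative curvature, or an iterate crossing the constraint); it is precisely the inability to refine such a boundary point to a prescribed accuracy that motivates the subspace extension proposed in the paper.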
Quality assurance and global optimization
 Global Optimization and Constraint Satisfaction, Lecture Notes in Computer Science
, 2003
A SUBSPACE MINIMIZATION METHOD FOR THE TRUST-REGION STEP
Abstract

Cited by 8 (1 self)
We consider methods for large-scale unconstrained minimization based on finding an approximate minimizer of a quadratic function subject to a two-norm trust-region constraint. The Steihaug-Toint method uses the conjugate-gradient (CG) algorithm to minimize the quadratic over a sequence of expanding subspaces until the iterates either converge to an interior point or cross the constraint boundary. However, if the CG method is used with a preconditioner, the Steihaug-Toint method requires that the trust-region norm be defined in terms of the preconditioning matrix. If a different preconditioner is used for each subproblem, the shape of the trust region can change substantially from one subproblem to the next, which invalidates many of the assumptions on which standard methods for adjusting the trust-region radius are based. In this paper we propose a method that allows the trust-region norm to be defined independently of the preconditioner. The method solves the inequality-constrained trust-region subproblem over a sequence of evolving low-dimensional subspaces. Each subspace includes an accelerator direction defined by a regularized Newton method for satisfying the optimality conditions of a primal-dual interior method. A crucial property of this direction is that it can be computed by applying the preconditioned CG method to a positive-definite system in both the primal and dual variables of the trust-region subproblem. Numerical experiments on problems from the CUTEr test collection indicate that the method can require significantly fewer function evaluations than other methods. In addition, experiments with general-purpose preconditioners show that it is possible to significantly reduce the number of matrix-vector products relative to those required without preconditioning.