SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
, 2002
Abstract
Cited by 597 (24 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse. We discuss ...
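The basic SQP idea this abstract refers to can be shown in a minimal sketch (not SNOPT itself: this handles only equality constraints with a dense KKT solve, and the function names and toy problem are invented for illustration). Each iteration solves the KKT system of a local quadratic model to obtain the step.

```python
import numpy as np

def sqp_equality(x, f_grad, f_hess, c_fun, c_jac, iters=20, tol=1e-10):
    """Basic SQP for min f(x) s.t. c(x) = 0: at each iterate, solve the
    KKT system of the local QP model for the step p (and multipliers)."""
    for _ in range(iters):
        g, H = f_grad(x), f_hess(x)
        c, A = c_fun(x), c_jac(x)
        m = len(c)
        # KKT system of the QP subproblem: [H A^T; A 0] [p; lam] = [-g; -c]
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        sol = np.linalg.solve(K, np.concatenate([-g, -c]))
        p = sol[:len(x)]
        x = x + p
        if np.linalg.norm(p) < tol and np.linalg.norm(c_fun(x)) < tol:
            break
    return x

# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1  (solution (0.5, 0.5)).
x_star = sqp_equality(
    x=np.array([2.0, -1.0]),
    f_grad=lambda x: 2.0 * x,
    f_hess=lambda x: 2.0 * np.eye(2),
    c_fun=lambda x: np.array([x[0] + x[1] - 1.0]),
    c_jac=lambda x: np.array([[1.0, 1.0]]),
)
```

Since the toy problem is itself a QP, this sketch converges in a single step; practical SQP codes add line searches or trust regions and sparse factorizations.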
CUTEr (and SifDec), a constrained and unconstrained testing environment, revisited
 ACM TRANSACTIONS ON MATHEMATICAL SOFTWARE
, 2001
Abstract
Cited by 89 (7 self)
The initial release of CUTE, a widely used testing environment for optimization software, was described in [2]. The latest version, now known as CUTEr, is presented. New features include reorganisation of the environment to allow simultaneous multi-platform installation, new tools for, and interfaces to, optimization packages, and a considerably simplified and entirely automated installation procedure for Unix systems. The SIF decoder, which used to be a part of CUTE, has become a separate tool, easily callable by various packages. It features simple extensions to the SIF test problem format and the generation of files suited to automatic differentiation packages.
On Augmented Lagrangian methods with general lower-level constraints
, 2005
Abstract
Cited by 84 (7 self)
Augmented Lagrangian methods with general lower-level constraints are considered in the present research. These methods are useful when efficient algorithms exist for solving subproblems where the constraints are only of the lower-level type. Two methods of this class are introduced and analyzed. Inexact resolution of the lower-level constrained subproblems is considered. Global convergence is proved using the Constant Positive Linear Dependence constraint qualification. Conditions for boundedness of the penalty parameters are discussed. The reliability of the approach is tested by means of an exhaustive comparison against Lancelot. All the problems of the CUTE collection are used in this comparison. Moreover, the resolution of location problems in which many constraints of the lower-level set are nonlinear is addressed, employing the Spectral Projected Gradient method for solving the subproblems. Problems of this type with more than 3 × 10^6 variables and 14 × 10^6 constraints are solved in this way, using moderate computer time.
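As a hedged sketch of the general scheme (not the authors' implementation: the function names, step sizes, and the crude projected-gradient inner solver are all invented for illustration), a Powell-Hestenes-Rockafellar Augmented Lagrangian penalizes the "upper-level" equality while the lower-level box constraint stays inside the subproblem and is handled by projection:

```python
import numpy as np

def phr_augmented_lagrangian(f_grad, h_fun, x, lo, hi,
                             rho=10.0, outer=30, inner=200, step=0.05):
    """PHR Augmented Lagrangian sketch: the equality h(x) = 0 is penalized,
    while the lower-level box lo <= x <= hi is kept in the subproblem and
    handled by projection (a crude stand-in for an efficient subproblem
    solver such as spectral projected gradients)."""
    lam = 0.0
    for _ in range(outer):
        for _ in range(inner):
            h = h_fun(x)
            # gradient of L(x) = f(x) + lam*h(x) + (rho/2)*h(x)^2,
            # using dh/dx = (1, 1) for this particular toy constraint
            grad = f_grad(x) + (lam + rho * h) * np.ones_like(x)
            x = np.clip(x - step * grad, lo, hi)  # projected gradient step
        lam += rho * h_fun(x)  # first-order multiplier update
    return x

# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1,  x >= 0  (solution (0.5, 0.5)).
x_star = phr_augmented_lagrangian(
    f_grad=lambda x: 2.0 * x,
    h_fun=lambda x: x[0] + x[1] - 1.0,
    x=np.array([0.9, 0.1]),
    lo=0.0, hi=np.inf,
)
```

The multiplier iterates converge geometrically here; keeping the box out of the penalty is exactly what makes the subproblems cheap when a good lower-level solver is available.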
An interior algorithm for nonlinear optimization that combines line search and trust region steps
 Mathematical Programming 107
, 2006
Abstract
Cited by 59 (12 self)
An interior-point method for nonlinear programming is presented. It enjoys the flexibility of switching between a line search method that computes steps by factoring the primal-dual equations and a trust region method that uses a conjugate gradient iteration. Steps computed by direct factorization are always tried first, but if they are deemed ineffective, a trust region iteration that guarantees progress toward stationarity is invoked. To demonstrate its effectiveness, the algorithm is implemented in the Knitro [6, 28] software package and is extensively tested on a wide selection of test problems.
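The switching idea can be caricatured as follows (a toy sketch, not Knitro's actual strategy: the "trust region" fallback here is just a norm-capped steepest-descent step, and all function names are invented). The factorized Newton step is tried first; if it is unusable, a safeguarded step that guarantees descent is taken instead.

```python
import numpy as np

def rosen(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosen_grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                     200 * (x[1] - x[0]**2)])

def rosen_hess(x):
    return np.array([[2 - 400 * x[1] + 1200 * x[0]**2, -400 * x[0]],
                     [-400 * x[0], 200.0]])

def hybrid_minimize(x, iters=200, delta=1.0):
    """Try the factorized Newton step first; if it is unusable (singular
    Hessian or not a descent direction), fall back to a norm-capped
    steepest-descent step, which always makes progress toward stationarity."""
    for _ in range(iters):
        g = rosen_grad(x)
        if np.linalg.norm(g) < 1e-10:
            break
        try:
            p = np.linalg.solve(rosen_hess(x), -g)
            ok = np.isfinite(p).all() and g @ p < 0
        except np.linalg.LinAlgError:
            ok = False
        if not ok:
            p = -delta * g / np.linalg.norm(g)  # fallback direction
        t = 1.0  # Armijo backtracking on the chosen direction
        while rosen(x + t * p) > rosen(x) + 1e-4 * t * (g @ p) and t > 1e-12:
            t *= 0.5
        x = x + t * p
    return x

x_star = hybrid_minimize(np.array([-1.2, 1.0]))  # Rosenbrock minimum is (1, 1)
```

The real method replaces the crude fallback with a conjugate-gradient trust-region iteration, but the control flow — cheap step first, safeguarded step on failure — is the same.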
Preprocessing for quadratic programming
, 2002
Abstract
Cited by 14 (1 self)
Techniques for the preprocessing of (not necessarily convex) quadratic programs are discussed. Most of the procedures extend known ones from the linear to quadratic cases, but a few new preprocessing techniques are introduced. The implementation aspects are also discussed. Numerical results are finally presented to indicate the potential of the resulting code, both for linear and quadratic problems. The impact of insisting that bounds of the variables in the reduced problem be as tight as possible rather than allowing some slack in these bounds is also shown to be numerically significant.
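One standard presolve operation of the kind discussed here is bound tightening from a single linear constraint. The sketch below (illustrative only, not the authors' code; function and parameter names are invented) derives the bounds on each variable implied by the constraint and the other variables' boxes, and intersects them with the existing ones:

```python
import math

def tighten_bounds(a, lo, hi, c_lo, c_hi):
    """One-pass presolve bound tightening: from c_lo <= a.x <= c_hi and
    current bounds lo <= x <= hi, deduce implied bounds on each x_j."""
    n = len(a)
    # smallest/largest contribution of each term a_j * x_j over the box
    def term_min(j): return a[j] * lo[j] if a[j] >= 0 else a[j] * hi[j]
    def term_max(j): return a[j] * hi[j] if a[j] >= 0 else a[j] * lo[j]
    s_min = sum(term_min(j) for j in range(n))
    s_max = sum(term_max(j) for j in range(n))
    new_lo, new_hi = list(lo), list(hi)
    for j in range(n):
        if a[j] == 0:
            continue
        # range of a.x contributed by all the other variables
        rest_min = s_min - term_min(j)
        rest_max = s_max - term_max(j)
        if a[j] > 0:
            new_hi[j] = min(hi[j], (c_hi - rest_min) / a[j])
            new_lo[j] = max(lo[j], (c_lo - rest_max) / a[j])
        else:
            new_lo[j] = max(lo[j], (c_hi - rest_min) / a[j])
            new_hi[j] = min(hi[j], (c_lo - rest_max) / a[j])
    return new_lo, new_hi

# Example: x1 + x2 <= 1 with 0 <= x1, x2 <= 10 tightens both upper bounds to 1.
lo2, hi2 = tighten_bounds(a=[1.0, 1.0], lo=[0.0, 0.0], hi=[10.0, 10.0],
                          c_lo=-math.inf, c_hi=1.0)
```

Whether to propagate such implied bounds aggressively or to leave slack is precisely the trade-off the abstract reports as numerically significant.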
A Multidimensional Filter Algorithm for Nonlinear Equations and Nonlinear Least Squares
 SIAM J. Optim
, 2003
Abstract
Cited by 14 (7 self)
We introduce a new algorithm for the solution of systems of nonlinear equations and nonlinear least-squares problems that attempts to combine the efficiency of filter techniques and the robustness of trust-region methods. The algorithm is shown, under reasonable assumptions, to globally converge to zeros of the system, or to first-order stationary points of the Euclidean norm of its residual. Preliminary numerical experience is presented that shows substantial gains in efficiency over the traditional monotone trust-region approach.
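The filter mechanism can be sketched in a few lines (a simplified caricature, not the authors' exact acceptance rule; the margin rule and function names are invented): a trial point, viewed as a vector of residual magnitudes, is acceptable if it sufficiently improves at least one component against every stored filter entry, and dominated entries are pruned on insertion.

```python
def acceptable(point, filter_pts, gamma=1e-3):
    """A residual vector is acceptable if, against every filter entry,
    it improves at least one component by a sufficient margin."""
    return all(
        any(t <= f - gamma * max(f, 1e-12) for t, f in zip(point, entry))
        for entry in filter_pts
    )

def add_to_filter(point, filter_pts):
    """Insert point and drop entries it dominates (<= in every component)."""
    kept = [e for e in filter_pts
            if not all(p <= x for p, x in zip(point, e))]
    kept.append(point)
    return kept

F = []
F = add_to_filter((1.0, 2.0), F)
ok = acceptable((0.5, 3.0), F)   # improves the first component enough
bad = acceptable((1.0, 2.0), F)  # improves nothing -> rejected
```

This non-monotone acceptance is what lets the method take steps a monotone trust-region test would reject, which is where the reported efficiency gains come from.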
Improving ultimate convergence of an Augmented Lagrangian method
, 2007
Abstract
Cited by 14 (0 self)
Optimization methods that employ the classical Powell-Hestenes-Rockafellar Augmented Lagrangian are useful tools for solving Nonlinear Programming problems. Their reputation decreased in the last ten years due to the comparative success of Interior-Point Newtonian algorithms, which are asymptotically faster. In the present research a combination of both approaches is evaluated. The idea is to produce a competitive method, being more robust and efficient than its “pure” counterparts for critical problems. Moreover, an additional hybrid algorithm is defined, in which the Interior-Point method is replaced by the Newtonian resolution of a KKT system identified by the Augmented Lagrangian algorithm. The software used in this work is freely available through the Tango Project web page:
Finding a point in the relative interior of a polyhedron
, 2007
Abstract
Cited by 9 (3 self)
A new initialization or ‘Phase I’ strategy for feasible interior point methods for linear programming is proposed that computes a point on the primal-dual central path associated with the linear program. Provided there exist primal-dual strictly feasible points — an all-pervasive assumption in interior point method theory that implies the existence of the central path — our initial method (Algorithm 1) is globally Q-linearly and asymptotically Q-quadratically convergent, with a provable worst-case iteration complexity bound. When this assumption is not met, the numerical behaviour of Algorithm 1 is highly disappointing, even when the problem is primal-dual feasible. This is due to the presence of implicit equalities, inequality constraints that hold as equalities at all the feasible points. Controlled perturbations of the inequality constraints of the primal-dual problems are introduced — geometrically equivalent to enlarging the primal-dual feasible region and then systematically contracting it back to its initial shape — in order for the perturbed problems to satisfy the assumption. Thus Algorithm 1 can successfully be employed to solve each of the perturbed problems. We show that, when there exist primal-dual strictly feasible points of the original problems, the resulting method, Algorithm 2, finds such a point in a finite number of changes to the perturbation parameters. When implicit equalities are present, but the original problem and its dual are feasible, Algorithm 2 asymptotically detects all the primal-dual implicit equalities and generates a point in the relative interior of the primal-dual feasible set. Algorithm 2 can also asymptotically detect primal-dual infeasibility.
Successful numerical experience with Algorithm 2 on linear programs from NETLIB and CUTEr, both with and without any significant preprocessing of the problems, indicates that Algorithm 2 may be used as an algorithmic preprocessor for removing implicit equalities, with theoretical guarantees of convergence.
On second-order optimality conditions for nonlinear programming
 Optimization
Abstract
Cited by 5 (0 self)
A new second-order condition is given, which depends on a weak constant rank constraint requirement. We show that practical and publicly available algorithms (www.ime.usp.br/~egbirgin/tango) of Augmented Lagrangian type converge, after slight modifications, to stationary points defined by the new condition.