Results 1–10 of 22
CUTEr (and SifDec), a constrained and unconstrained testing environment, revisited
ACM Transactions on Mathematical Software, 2001
"... The initial release of CUTE, a widely used testing environment for optimization software was described in [2]. The latest version, now known as CUTEr is presented. New features include reorganisation of the environment to allow simultaneous multiplatform installation, new tools for, and interface ..."
Abstract

Cited by 86 (8 self)
 Add to MetaCart
(Show Context)
The initial release of CUTE, a widely used testing environment for optimization software, was described in [2]. The latest version, now known as CUTEr, is presented. New features include reorganisation of the environment to allow simultaneous multiplatform installation, new tools for, and interfaces to, optimization packages, and a considerably simplified and entirely automated installation procedure for UNIX systems. The SIF decoder, which used to be a part of CUTE, has become a separate tool, easily callable by various packages. It features simple extensions to the SIF test-problem format and the generation of files suited to automatic differentiation packages.
A Box-Constrained Optimization Algorithm With Negative Curvature Directions and Spectral Projected Gradients
2001
"... A practical algorithm for boxconstrained optimization is introduced. The algorithm combines an activeset strategy with spectral projected gradient iterations. In the interior of each face a strategy that deals eciently with negative curvature is employed. Global convergence results are given. ..."
Abstract

Cited by 27 (5 self)
 Add to MetaCart
A practical algorithm for box-constrained optimization is introduced. The algorithm combines an active-set strategy with spectral projected gradient iterations. In the interior of each face, a strategy that deals efficiently with negative curvature is employed. Global convergence results are given. Numerical results are presented.
Keywords: box-constrained minimization, active-set methods, spectral projected gradients, dogleg path methods. AMS Subject Classification: 49M07, 49M10, 65K, 90C06, 90C20.
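The spectral projected gradient iteration at the heart of this abstract can be sketched in a few lines. This is an illustrative skeleton only, not the authors' implementation: the published algorithm adds a nonmonotone line search and the active-set strategy with negative-curvature directions; the Barzilai-Borwein steplength and box projection below are the standard ingredients.

```python
import numpy as np

def project_box(x, lo, hi):
    """Project x onto the box {lo <= x <= hi}."""
    return np.minimum(np.maximum(x, lo), hi)

def spg_box(grad, x0, lo, hi, max_iter=200, tol=1e-8):
    """Minimal spectral projected gradient sketch for box constraints.

    `grad` returns the gradient of the objective; the spectral
    (Barzilai-Borwein) steplength reuses curvature information from
    the previous step.
    """
    x = project_box(x0, lo, hi)
    g = grad(x)
    alpha = 1.0
    for _ in range(max_iter):
        # The projected gradient residual measures stationarity on the box.
        if np.linalg.norm(project_box(x - g, lo, hi) - x) <= tol:
            break
        x_new = project_box(x - alpha * g, lo, hi)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # Spectral steplength, safeguarded into [1e-10, 1e10].
        alpha = (s @ s) / sy if sy > 0 else 1.0
        alpha = min(max(alpha, 1e-10), 1e10)
        x, g = x_new, g_new
    return x

# Example: minimize ||x - c||^2 over the box [0, 1]^3 with c outside the box.
c = np.array([2.0, -1.0, 0.5])
x_star = spg_box(lambda x: 2 * (x - c), np.zeros(3), 0.0, 1.0)
# x_star is the projection of c onto the box: [1.0, 0.0, 0.5]
```

On this convex quadratic the iteration converges after the first projected step; the value of the method shows on large, ill-conditioned problems, where the spectral steplength avoids the slow crawl of fixed-step projected gradients.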
Inexact-Restoration Algorithm for Constrained Optimization
Journal of Optimization Theory and Applications, 1999
"... We introduce a new model algorithm for solving nonlinear programming problems. No slack variables are introduced for dealing with inequality constraints. Each iteration of the method proceeds in two phases. In the first phase, feasibility of the current iterate is improved and in second phase the ob ..."
Abstract

Cited by 27 (6 self)
 Add to MetaCart
We introduce a new model algorithm for solving nonlinear programming problems. No slack variables are introduced for dealing with inequality constraints. Each iteration of the method proceeds in two phases. In the first phase, feasibility of the current iterate is improved; in the second phase, the objective function value is reduced in an approximate feasible set. The point that results from the second phase is compared with the current point using a nonsmooth merit function that combines feasibility and optimality. This merit function includes a penalty parameter that changes between different iterations. A suitable updating procedure for this penalty parameter is included, by means of which it can be increased or decreased along different iterations. The conditions for feasibility improvement in the first phase and for optimality improvement in the second phase are mild, and large-scale implementations of the resulting method are possible. We prove that, under suitable conditions, ...
Inexact-Restoration Method with Lagrangian Tangent Decrease and New Merit Function for Nonlinear Programming
1999
"... . A new InexactRestoration method for Nonlinear Programming is introduced. The iteration of the main algorithm has two phases. In Phase 1, feasibility is explicitly improved and in Phase 2 optimality is improved on a tangent approximation of the constraints. Trust regions are used for reducing the ..."
Abstract

Cited by 27 (5 self)
 Add to MetaCart
A new Inexact-Restoration method for Nonlinear Programming is introduced. The iteration of the main algorithm has two phases. In Phase 1, feasibility is explicitly improved; in Phase 2, optimality is improved on a tangent approximation of the constraints. Trust regions are used for reducing the step when the trial point is not good enough. The trust region is not centered at the current point, as in many Nonlinear Programming algorithms, but at the intermediate "more feasible" point. Therefore, in this semifeasible approach, the more feasible intermediate point is considered to be essentially better than the current point. This is the first method in which intermediate-point-centered trust regions are combined with the decrease of the Lagrangian in the tangent approximation to the constraints. The merit function used in this paper is also new: it consists of a convex combination of the Lagrangian and the (non-squared) norm of the constraints. The Euclidean norm is used for simplicity, but other norms for measuring infeasibility are admissible. Global convergence theorems are proved, a theoretically justified algorithm for the first phase is introduced, and some numerical insight is given.
Key Words: Nonlinear Programming, trust regions, GRG methods, SGRA methods, restoration methods, global convergence.
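The merit function described in this abstract can be written out in symbols. The notation below is our reconstruction for illustration, not the paper's: $f$ the objective, $C(x)$ the constraints, $L$ the Lagrangian, and $\theta \in (0,1]$ the convex-combination parameter.

```latex
\Phi(x,\lambda,\theta) \;=\; \theta\, L(x,\lambda) \;+\; (1-\theta)\,\|C(x)\|_2,
\qquad
L(x,\lambda) \;=\; f(x) + \lambda^{T} C(x)
```

Note that the infeasibility term enters through the norm itself, not its square, exactly as the abstract emphasizes; the Euclidean norm can be replaced by another norm for measuring infeasibility.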
On the Solution of Mathematical Programming Problems With Equilibrium Constraints
2001
"... Mathematical programming problems with equilibrium constraints (MPEC) are nonlinear programming problems where the constraints have a form that is analogous to firstorder optimality conditions of constrained optimization. We prove that, under reasonable sufficient conditions, stationary points of t ..."
Abstract

Cited by 12 (3 self)
 Add to MetaCart
Mathematical programming problems with equilibrium constraints (MPEC) are nonlinear programming problems where the constraints have a form that is analogous to first-order optimality conditions of constrained optimization. We prove that, under reasonable sufficient conditions, stationary points of the sum of squares of the constraints are feasible points of the MPEC. In usual formulations of MPEC, all the feasible points are nonregular in the sense that they do not satisfy the Mangasarian-Fromovitz constraint qualification of nonlinear programming. Therefore, all the feasible points satisfy the classical Fritz-John necessary optimality conditions. In principle, this can cause serious difficulties for nonlinear programming algorithms applied to MPEC. However, we show that most feasible points do not satisfy a recently introduced stronger optimality condition for nonlinear programming. This is the reason why, in general, nonlinear programming algorithms are successful when applied to MPEC.
Keywords: mathematical programming with equilibrium constraints, optimality conditions, minimization algorithms, reformulation. AMS: 90C33, 90C30
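The feasibility result stated in this abstract can be sketched compactly. The notation is generic and ours: collect the MPEC constraints (including the complementarity conditions) into one system $C(z)=0$ in the variables $z$, and minimize squared infeasibility.

```latex
\Phi(z) \;=\; \tfrac{1}{2}\,\|C(z)\|_2^{2},
\qquad
\nabla \Phi(z) \;=\; C'(z)^{T} C(z) \;=\; 0 \;\;\text{at a stationary point.}
```

Since $C'(z)$ is rank-deficient at feasible points of an MPEC, $\nabla\Phi(z)=0$ does not by itself force $C(z)=0$; the paper's contribution is a set of sufficient conditions under which it does, so that stationary points of $\Phi$ are feasible points of the MPEC.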
Feasibility Control in Nonlinear Optimization
2000
"... We analyze the properties that optimization algorithms must possess in order to prevent convergence to nonstationary points for the merit function. We show that demanding the exact satisfaction of constraint linearizations results in difficulties in a wide range of optimization algorithms. Feasibil ..."
Abstract

Cited by 9 (1 self)
 Add to MetaCart
We analyze the properties that optimization algorithms must possess in order to prevent convergence to nonstationary points of the merit function. We show that demanding the exact satisfaction of constraint linearizations results in difficulties in a wide range of optimization algorithms. Feasibility control is a mechanism that prevents convergence to spurious solutions by ensuring that sufficient progress towards feasibility is made, even in the presence of certain rank deficiencies. The concept of feasibility control is studied in this paper in the context of Newton methods for nonlinear systems of equations and equality-constrained optimization, as well as in interior methods for nonlinear programming.
Nonlinear-programming reformulation of the Order-value optimization problem
Mathematical Methods of Operations Research 61, 2005
"... Ordervalue optimization (OVO) is a generalization of the minimax problem motivated by decisionmaking problems under uncertainty and by robust estimation. New optimality conditions for this nonsmooth optimization problem are derived. An equivalent mathematical programming problem with equilibrium c ..."
Abstract

Cited by 9 (6 self)
 Add to MetaCart
(Show Context)
Order-value optimization (OVO) is a generalization of the minimax problem motivated by decision-making problems under uncertainty and by robust estimation. New optimality conditions for this nonsmooth optimization problem are derived. An equivalent mathematical programming problem with equilibrium constraints is deduced. The relation between OVO and this nonlinear-programming reformulation is studied. Particular attention is given to the relation between local minimizers and stationary points of both problems.
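The order-value function behind OVO is easy to state: given $m$ functions and an order $p$, it returns the $p$-th smallest of the $m$ values at $x$. The sketch below illustrates this with made-up residual data; the names and the toy model are ours, not the paper's.

```python
import numpy as np

def order_value(fs, p, x):
    """Order-value function: the p-th smallest of f_1(x), ..., f_m(x).

    With p = m this is the minimax (worst-case) function; smaller p
    discards the largest values, which is what makes OVO useful for
    robust estimation in the presence of outliers.
    """
    values = np.sort([f(x) for f in fs])
    return values[p - 1]  # 1-based order, as in the OVO literature

# Toy example: squared residuals of the model y = x*t + 1 at four data
# points, the last of which is an outlier.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 100.0)]
fs = [lambda x, t=t, y=y: (x * t + 1.0 - y) ** 2 for t, y in data]

# At x = 2 the first three residuals vanish. Choosing p = 3 ignores the
# single worst residual, so the outlier does not dominate the objective.
print(order_value(fs, 3, 2.0))  # prints 0.0
print(order_value(fs, 4, 2.0))  # prints 8649.0 (the outlier's residual)
```

Minimizing `order_value(fs, p, x)` over `x` is the nonsmooth OVO problem the abstract refers to; the paper's contribution is its reformulation as a mathematical program with equilibrium constraints.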
A Two-Phase Model Algorithm with Global Convergence for Nonlinear Programming
Journal of Optimization Theory and Applications, 1998
"... . The family of feasible methods for minimization with nonlinear constraints includes Rosen's Nonlinear Projected Gradient Method, the Generalized Reduced Gradient Method (GRG) and many variants of the Sequential Gradient Restoration Algorithm (SGRA). Generally speaking, a particular iteration ..."
Abstract

Cited by 9 (4 self)
 Add to MetaCart
The family of feasible methods for minimization with nonlinear constraints includes Rosen's Nonlinear Projected Gradient Method, the Generalized Reduced Gradient Method (GRG), and many variants of the Sequential Gradient Restoration Algorithm (SGRA). Generally speaking, a particular iteration of any of these methods proceeds in two phases. In the Restoration Phase, feasibility is restored by solving an auxiliary nonlinear problem, generally a nonlinear system of equations. In the Minimization Phase, optimality is improved by considering the objective function, or its Lagrangian, on the tangent subspace to the constraints. In this paper, minimal assumptions are stated on the Restoration Phase and the Minimization Phase that ensure that the resulting algorithm is globally convergent. The key point is the possibility of comparing two successive nonfeasible iterates by means of a suitable merit function that combines feasibility and optimality. The mer...
Inexact Restoration methods for nonlinear programming: advances and perspectives
2004
"... Inexact Restoration methods have been introduced in the last few years for solving nonlinear programming problems. These methods are related to classical restoration algorithms but also have some remarkable dierences. They generate a sequence of generally infeasible iterates with intermediate it ..."
Abstract

Cited by 8 (2 self)
 Add to MetaCart
Inexact Restoration methods have been introduced in the last few years for solving nonlinear programming problems. These methods are related to classical restoration algorithms but also have some remarkable differences. They generate a sequence of generally infeasible iterates with intermediate iterations that consist of inexactly restored points. The convergence theory allows one to use arbitrary algorithms for performing the restoration. This feature is appealing because it allows one to use the structure of the problem in quite opportunistic ways. Different Inexact Restoration algorithms are available. The most recent ones use the trust-region approach. However, unlike the algorithms based on sequential quadratic programming, the trust regions are centered not at the current point but at the inexactly restored intermediate one. Global convergence has been proved, based on merit functions of augmented Lagrangian type. In this survey we point out some applications and relate recent advances in the theory.
Local Convergence of an Inexact-Restoration Method and Numerical Experiments
2007
"... Local convergence of an inexactrestoration method for nonlinear programming is proved. Numerical experiments are performed with the objective of evaluating the behavior of the purely local method against a globally convergent nonlinearprogramming algorithm. ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
Local convergence of an inexact-restoration method for nonlinear programming is proved. Numerical experiments are performed with the objective of evaluating the behavior of the purely local method against a globally convergent nonlinear-programming algorithm.