Results 1-10 of 14
A Box-Constrained Optimization Algorithm With Negative Curvature Directions and Spectral Projected Gradients
, 2001
Abstract

Cited by 28 (5 self)
A practical algorithm for box-constrained optimization is introduced. The algorithm combines an active-set strategy with spectral projected gradient iterations. In the interior of each face, a strategy that deals efficiently with negative curvature is employed. Global convergence results are given. Numerical results are presented. Keywords: box-constrained minimization, active-set methods, spectral projected gradients, dogleg path methods. AMS Subject Classification: 49M07, 49M10, 65K, 90C06, 90C20.
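The spectral projected gradient iteration in this abstract can be pictured with a minimal sketch: take a safeguarded Barzilai-Borwein (spectral) steplength along the negative gradient and project back onto the box. The objective, box, and safeguard constants below are illustrative assumptions, not the paper's algorithm (in particular, the active-set and negative-curvature machinery is omitted).

```python
def project(x, lo, hi):
    # componentwise projection onto the box [lo, hi]
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def grad(x):
    # gradient of the illustrative objective f(x) = (x0 - 2)^2 + (x1 + 1)^2
    return [2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)]

def spg(x, lo, hi, iters=100):
    lam = 1.0                                   # initial spectral steplength
    g = grad(x)
    for _ in range(iters):
        x_new = project([xi - lam * gi for xi, gi in zip(x, g)], lo, hi)
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]   # step
        y = [a - b for a, b in zip(g_new, g)]   # gradient change
        sy = sum(si * yi for si, yi in zip(s, y))
        ss = sum(si * si for si in s)
        # Barzilai-Borwein (spectral) steplength, safeguarded away from 0 and infinity
        lam = min(max(ss / sy, 1e-10), 1e10) if sy > 1e-16 else 1.0
        x, g = x_new, g_new
    return x

# unconstrained minimizer (2, -1) lies outside [0,1]^2; the box minimizer is (1, 0)
x_star = spg([0.5, 0.5], [0.0, 0.0], [1.0, 1.0])
```

On this strictly convex quadratic the iteration reaches the boundary minimizer (1, 0) and stays there, which is the behavior the projection is meant to enforce.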
Inexact-Restoration Method with Lagrangian Tangent Decrease and New Merit Function for Nonlinear Programming
, 1999
Abstract

Cited by 22 (6 self)
A new Inexact-Restoration method for Nonlinear Programming is introduced. The iteration of the main algorithm has two phases. In Phase 1, feasibility is explicitly improved, and in Phase 2, optimality is improved on a tangent approximation of the constraints. Trust regions are used for reducing the step when the trial point is not good enough. The trust region is centered not in the current point, as in many Nonlinear Programming algorithms, but in the intermediate "more feasible" point. Therefore, in this semi-feasible approach, the more feasible intermediate point is considered to be essentially better than the current point. This is the first method in which intermediate-point-centered trust regions are combined with the decrease of the Lagrangian in the tangent approximation to the constraints. The merit function used in this paper is also new: it consists of a convex combination of the Lagrangian and the (non-squared) norm of the constraints. The Euclidean norm is used for simplicity, but other norms for measuring infeasibility are admissible. Global convergence theorems are proved, a theoretically justified algorithm for the first phase is introduced, and some numerical insight is given. Key Words: Nonlinear Programming, trust regions, GRG methods, SGRA methods, restoration methods, global convergence.
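The two-phase scheme and the convex-combination merit function can be sketched on a toy equality-constrained problem. Everything below is an illustrative assumption: the problem, the fixed weight `theta`, the use of the objective instead of the Lagrangian, and the absence of trust regions, which the paper's algorithm does use.

```python
def f(x):
    return x[0] ** 2 + x[1] ** 2        # objective to minimize

def h(x):
    return x[0] + x[1] - 1.0            # equality constraint h(x) = 0

def restore(x):
    # Phase 1: one Newton step on h toward feasibility (grad h = (1, 1))
    v = h(x) / 2.0
    return [x[0] - v, x[1] - v]

def tangent_step(y, t):
    # Phase 2: decrease f along the tangent direction (1, -1) of the constraint
    gf = [2.0 * y[0], 2.0 * y[1]]
    d = (gf[0] - gf[1]) / 2.0           # gradient component along the tangent
    return [y[0] - t * d, y[1] + t * d]

def merit(x, theta):
    # convex combination of optimality and (non-squared) infeasibility
    return theta * f(x) + (1.0 - theta) * abs(h(x))

x, theta = [2.0, 1.0], 0.8
for _ in range(50):
    y = restore(x)                      # inexactly restored, "more feasible" point
    trial = tangent_step(y, 0.5)
    # accept the trial only if the merit function does not increase
    x = trial if merit(trial, theta) <= merit(x, theta) else y
```

The iterates settle at (0.5, 0.5), the minimizer of f on the line x0 + x1 = 1, which illustrates why comparing points through a merit function that mixes both criteria is enough to drive the two phases jointly.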
Inexact-Restoration Algorithm for Constrained Optimization
 Journal of Optimization Theory and Applications
, 1999
Abstract

Cited by 19 (6 self)
We introduce a new model algorithm for solving nonlinear programming problems. No slack variables are introduced for dealing with inequality constraints. Each iteration of the method proceeds in two phases. In the first phase, feasibility of the current iterate is improved; in the second phase, the objective function value is reduced in an approximate feasible set. The point that results from the second phase is compared with the current point using a nonsmooth merit function that combines feasibility and optimality. This merit function includes a penalty parameter that changes between different iterations. A suitable updating procedure for this penalty parameter is included, by means of which it can be increased or decreased along different iterations. The conditions for feasibility improvement at the first phase and for optimality improvement at the second phase are mild, and large-scale implementations of the resulting method are possible. We prove that, under suitable conditions, ...
On the Solution of Mathematical Programming Problems With Equilibrium Constraints
, 2001
Abstract

Cited by 13 (4 self)
Mathematical programming problems with equilibrium constraints (MPEC) are nonlinear programming problems where the constraints have a form that is analogous to first-order optimality conditions of constrained optimization. We prove that, under reasonable sufficient conditions, stationary points of the sum of squares of the constraints are feasible points of the MPEC. In usual formulations of MPEC, all the feasible points are nonregular, in the sense that they do not satisfy the Mangasarian-Fromovitz constraint qualification of nonlinear programming. Therefore, all the feasible points satisfy the classical Fritz-John necessary optimality conditions. In principle, this can cause serious difficulties for nonlinear programming algorithms applied to MPEC. However, we show that most feasible points do not satisfy a recently introduced stronger optimality condition for nonlinear programming. This is the reason why, in general, nonlinear programming algorithms are successful when applied to MPEC. Keywords: Mathematical programming with equilibrium constraints, optimality conditions, minimization algorithms, reformulation. AMS: 90C33, 90C30
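The claim that stationary points of the sum of squares of the constraints are feasible can be pictured on a tiny complementarity system. The system, the starting point, and plain gradient descent below are illustrative assumptions chosen for the sketch, not the paper's setting.

```python
def residuals(x):
    # complementarity system: x0 >= 0, x1 >= 0, x0 * x1 = 0
    # (min(xi, 0) is zero exactly when the bound xi >= 0 holds)
    return [min(x[0], 0.0), min(x[1], 0.0), x[0] * x[1]]

def phi(x):
    # sum of squares of the constraint residuals
    return sum(r * r for r in residuals(x))

def grad_phi(x):
    r0, r1, r2 = residuals(x)
    g0 = (2.0 * r0 if x[0] < 0 else 0.0) + 2.0 * r2 * x[1]
    g1 = (2.0 * r1 if x[1] < 0 else 0.0) + 2.0 * r2 * x[0]
    return [g0, g1]

x = [0.7, 0.4]                          # infeasible: x0 * x1 != 0
for _ in range(2000):
    g = grad_phi(x)
    x = [xi - 0.1 * gi for xi, gi in zip(x, g)]
```

Descent on phi drives one coordinate to zero while keeping both nonnegative, so the stationary point reached is a feasible point of the complementarity system, matching the abstract's feasibility result in this easy case.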
Inexact Restoration methods for nonlinear programming: advances and perspectives
, 2004
Abstract

Cited by 10 (1 self)
Inexact Restoration methods have been introduced in the last few years for solving nonlinear programming problems. These methods are related to classical restoration algorithms but also have some remarkable differences. They generate a sequence of generally infeasible iterates, with intermediate iterations that consist of inexactly restored points. The convergence theory allows one to use arbitrary algorithms for performing the restoration. This feature is appealing because it allows one to use the structure of the problem in quite opportunistic ways. Different Inexact Restoration algorithms are available. The most recent ones use the trust-region approach. However, unlike the algorithms based on sequential quadratic programming, the trust regions are centered not in the current point but in the inexactly restored intermediate one. Global convergence has been proved, based on merit functions of augmented Lagrangian type. In this survey we point out some applications and we relate recent advances in the theory.
A Two-Phase Model Algorithm with Global Convergence for Nonlinear Programming
 Journal of Optimization Theory and Applications
, 1998
Abstract

Cited by 7 (4 self)
The family of feasible methods for minimization with nonlinear constraints includes Rosen's Nonlinear Projected Gradient Method, the Generalized Reduced Gradient Method (GRG), and many variants of the Sequential Gradient Restoration Algorithm (SGRA). Generally speaking, a particular iteration of any of these methods proceeds in two phases. In the Restoration Phase, feasibility is restored by means of the resolution of an auxiliary nonlinear problem, generally a nonlinear system of equations. In the Minimization Phase, optimality is improved by means of the consideration of the objective function, or its Lagrangian, on the tangent subspace to the constraints. In this paper, minimal assumptions are stated on the Restoration Phase and the Minimization Phase that ensure that the resulting algorithm is globally convergent. The key point is the possibility of comparing two successive nonfeasible iterates by means of a suitable merit function that combines feasibility and optimality. The mer...
Feasibility Control in Nonlinear Optimization
 in Foundations of Computational Mathematics
, 2000
Abstract

Cited by 5 (1 self)
We analyze the properties that optimization algorithms must possess in order to prevent convergence to nonstationary points for the merit function. We show that demanding the exact satisfaction of constraint linearizations results in difficulties in a wide range of optimization algorithms. Feasibility control is a mechanism that prevents convergence to spurious solutions by ensuring that sufficient progress towards feasibility is made, even in the presence of certain rank deficiencies. The concept of feasibility control is studied in this paper in the context of Newton methods for nonlinear systems of equations and equality-constrained optimization, as well as in interior methods for nonlinear programming. This work was supported by National Science Foundation grant CDA-9726385 and by Department of Energy grant DE-FG02-87ER25047-A004. To appear in the proceedings of the Foundations of Computational Mathematics Meeting held in Oxford, England, in July 1999.
Nonlinear-programming reformulation of the Order-value optimization problem
 Mathematical Methods of Operations Research 61
, 2005
Abstract

Cited by 4 (2 self)
Order-value optimization (OVO) is a generalization of the minimax problem motivated by decision-making problems under uncertainty and by robust estimation. New optimality conditions for this nonsmooth optimization problem are derived. An equivalent mathematical programming problem with equilibrium constraints is deduced. The relation between OVO and this nonlinear-programming reformulation is studied. Particular attention is given to the relation between local minimizers and stationary points of both problems.
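The order-value objective itself is simple to state in code. A minimal sketch (the residual functions and the choice p = 2 are illustrative assumptions) shows how taking the p-th smallest value discards the worst terms, which is the robust-estimation motivation mentioned above; p = m recovers the classical minimax objective.

```python
def order_value(fs, x, p):
    # OVO objective: the p-th smallest (1-indexed) of f_1(x), ..., f_m(x)
    vals = sorted(f(x) for f in fs)
    return vals[p - 1]

# squared residuals against three "observations", one of them an outlier
fs = [lambda x, c=c: (x - c) ** 2 for c in (0.0, 1.0, 10.0)]

# with p = 2 the largest residual (the outlier at c = 10) never enters the value
v = order_value(fs, 0.5, 2)     # residuals at x=0.5: 0.25, 0.25, 90.25 -> 0.25
```

Because the objective depends on a sorted order that changes with x, it is nonsmooth even when every f_i is smooth, which is why the abstract derives special optimality conditions and an MPEC reformulation.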
Local Convergence of an Inexact-Restoration Method and Numerical Experiments
Abstract

Cited by 4 (1 self)
Communicated by C. T. Leondes. This work was supported by PRONEX-CNPq/FAPERJ Grant E-26/171.164/2003-APQ1, FAPESP Grants 03/09169-6 and 01/04597-4, and CNPq. The authors are indebted to Juliano B. Francisco and Yalcin Kaya for their careful reading of the first draft of this paper.
Solution of Bounded Nonlinear Systems of Equations Using Homotopies With Inexact Restoration
, 2001
Abstract

Cited by 1 (0 self)
Nonlinear systems of equations often represent mathematical models of chemical production processes and other engineering problems. Homotopic techniques (in particular, the bounded homotopies introduced by Paloschi) are used for enhancing convergence to solutions, especially when a good initial estimate is not available. In this paper, the homotopy curve is considered as the feasible set of a mathematical programming problem, where the objective is to find the optimal value of the homotopic parameter. Inexact restoration techniques can then be used to generate approximations in a neighborhood of the homotopy, the size of which is theoretically justified. Numerical examples are given. Key words: Nonlinear programming, homotopies, bounded homotopies, inexact restoration.
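The homotopy idea can be sketched for a single equation. The equation, the convex (Newton) homotopy, and the stepping and correction counts below are illustrative assumptions; the paper works with Paloschi's bounded homotopies and replaces plain Newton corrections with inexact restoration steps toward the curve.

```python
def F(x):
    # target nonlinear equation (illustrative): x^3 - x - 2 = 0
    return x ** 3 - x - 2.0

def H(x, t, x0):
    # convex homotopy linking the trivial problem x - x0 = 0 (t = 0)
    # to the target F(x) = 0 (t = 1)
    return t * F(x) + (1.0 - t) * (x - x0)

def dH_dx(x, t):
    return t * (3.0 * x ** 2 - 1.0) + (1.0 - t)

x0 = 1.0
x = x0
steps = 50
for k in range(1, steps + 1):
    t = k / steps                     # advance the homotopic parameter
    for _ in range(5):
        # a few Newton corrections keep x in a neighborhood of the curve,
        # the role played by inexact restoration in the paper
        x -= H(x, t, x0) / dH_dx(x, t)
```

Tracking the curve from the trivial solution delivers the real root of x^3 - x - 2 (near 1.5214) without needing a good starting guess for the target equation, which is the motivation the abstract gives for homotopic techniques.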