Results 1–10 of 17
LOCAL CONVERGENCE OF EXACT AND INEXACT AUGMENTED LAGRANGIAN METHODS UNDER THE SECOND-ORDER SUFFICIENT OPTIMALITY CONDITION
, 2012
Abstract

Cited by 8 (4 self)
We establish local convergence and rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the second-order sufficient optimality condition. In particular, no constraint qualifications of any kind are needed. Previous literature on the subject required, in addition, the linear independence constraint qualification and either the strict complementarity assumption or a stronger version of the second-order sufficient condition. That said, the classical results allow the initial multiplier estimate to be far from the optimal one, at the expense of proportionally increasing the threshold value for the penalty parameters. Although our primary goal is to avoid constraint qualifications, if the stronger assumptions are introduced, then starting points far from the optimal multiplier are allowed within our analysis as well. Using only the second-order sufficient optimality condition, for penalty parameters large enough we prove a primal-dual Q-linear convergence rate, which becomes superlinear if the parameters are allowed to go to infinity. Both exact and inexact solutions of subproblems are considered. In the exact case, we further show that the primal convergence rate is of the same Q-order as the primal-dual rate. Previous assertions for the primal sequence all concerned the weaker R-rate of convergence and required the stronger assumptions cited above. Finally, we show that under our assumptions one of the popular rules for controlling the penalty parameters ensures their boundedness.
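The classical iteration described in this abstract can be illustrated on a toy equality-constrained problem (the problem data, the closed-form inner solve, and the fixed penalty value below are illustrative assumptions, not taken from the paper):

```python
# Toy problem: minimize f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0.
# Classical augmented Lagrangian: L(x, lam, rho) = f(x) + lam*h(x) + (rho/2)*h(x)^2.

def augmented_lagrangian(lam=0.0, rho=10.0, iters=20):
    for _ in range(iters):
        # Exact inner solve: by symmetry x1 = x2 = t, and dL/dt = 0 gives
        # 4t + 2*lam + 2*rho*(2t - 1) = 0, i.e. t = (rho - lam) / (2 + 2*rho).
        t = (rho - lam) / (2.0 + 2.0 * rho)
        x = (t, t)
        h = x[0] + x[1] - 1.0   # constraint violation at the inner solution
        lam = lam + rho * h     # first-order multiplier update
    return x, lam

x, lam = augmented_lagrangian()
# x tends to the solution (0.5, 0.5) and lam to the optimal multiplier -1.
```

On this example the multiplier error contracts by the factor 1/(1 + rho) per outer iteration: a concrete instance of a Q-linear rate for fixed rho, becoming superlinear as rho grows.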
On the Boundedness of Penalty Parameters in an Augmented Lagrangian Method with Constrained Subproblems
, 2011
Abstract

Cited by 6 (1 self)
Augmented Lagrangian methods are effective tools for solving large-scale nonlinear programming problems. At each outer iteration, a minimization subproblem with simple constraints, whose objective function depends on the updated Lagrange multipliers and penalty parameters, is approximately solved. When the penalty parameter becomes very large the subproblem is difficult; the effectiveness of this approach is therefore associated with the boundedness of the penalty parameters. In this paper it is proved that, under assumptions more natural than those employed up to now, the penalty parameters are bounded. To prove the new boundedness result, the original algorithm has been slightly modified. Numerical consequences of the modifications are discussed and computational experiments are presented.
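The penalty-update rule whose boundedness is at issue is typically of the following form (the test ratio and increase factor below are common illustrative choices, not the paper's specific rule):

```python
def update_penalty(rho, h_norm, h_norm_prev, tau=0.5, gamma=10.0):
    # Keep the penalty parameter when the infeasibility measure improved by
    # at least the factor tau; otherwise multiply it by gamma > 1.
    # Boundedness of rho means the first branch is eventually always taken.
    if h_norm <= tau * h_norm_prev:
        return rho
    return gamma * rho

rho = 10.0
rho = update_penalty(rho, h_norm=0.4, h_norm_prev=1.0)  # improved: rho stays 10.0
rho = update_penalty(rho, h_norm=0.9, h_norm_prev=1.0)  # stalled: rho becomes 100.0
```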
Low Order-Value Optimization and Applications
, 2005
Abstract

Cited by 6 (4 self)
Given r real functions F1(x),..., Fr(x) and an integer p between 1 and r, the Low Order-Value Optimization problem (LOVO) consists of minimizing the sum of the p smallest of these values. If (y1,..., yr) is a vector of data and T(x, ti) is the predicted value of observation i with parameters x ∈ ℝⁿ, it is natural to define Fi(x) = (T(x, ti) − yi)² (the quadratic error at observation i under the parameters x). When p = r, this LOVO problem coincides with the classical nonlinear least-squares problem. However, the interesting situation is when p is smaller than r. In that case, the solution of LOVO allows one to discard the influence of an estimated number of outliers. Thus, the LOVO problem is an interesting tool for robust estimation of parameters of nonlinear models. When p ≪ r, the LOVO problem may be used to find hidden structures in data sets. One of the most successful applications is the Protein Alignment problem. Fully documented algorithms for this application are available at www.ime.unicamp.br/∼martinez/lovoalign. In this paper, optimality conditions are discussed, algorithms for solving the LOVO problem are introduced, and convergence theorems are proved. Finally, numerical experiments are presented.
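The LOVO objective itself is easy to state in code. The sketch below (with hypothetical data containing one outlier) shows how choosing p < r discards the outlier's influence in a least-squares fit:

```python
def lovo_value(fs, x, p):
    # Sum of the p smallest values F_i(x) among the r functions in fs.
    return sum(sorted(f(x) for f in fs)[:p])

# Model T(x, t) = x * t with data (t_i, y_i); the last point is an outlier.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 100.0)]
fs = [lambda x, t=t, y=y: (x * t - y) ** 2 for (t, y) in data]

full = lovo_value(fs, 2.0, 4)    # p = r: ordinary least squares, outlier dominates
robust = lovo_value(fs, 2.0, 3)  # p = 3: the outlier term is discarded
# robust == 0.0, since x = 2 fits the three clean points exactly.
```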
Orthogonal packing of rectangles within isotropic convex regions
, 2009
Abstract

Cited by 2 (0 self)
A mixed-integer continuous nonlinear model and a solution method for the problem of orthogonally packing identical rectangles within an arbitrary convex region are introduced in the present work. The convex region is assumed to be made of an isotropic material, so that arbitrary rotations of the items, preserving the orthogonality constraint, are allowed. The solution method is based on a combination of branch-and-bound and active-set strategies for bound-constrained minimization of smooth functions. Numerical results show the reliability of the presented approach.
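For a disk, one of the simplest isotropic convex regions, the containment constraint for a single rotated rectangle reduces to checking its four corners, because the region is convex (a minimal sketch; the paper's model handles general convex regions and many items simultaneously):

```python
import math

def rectangle_in_circle(cx, cy, w, h, theta, R):
    # A rectangle lies inside a convex region iff all four corners do;
    # here the region is the disk of radius R centered at the origin.
    # (cx, cy) is the rectangle center, theta its rotation angle.
    for sx in (-0.5, 0.5):
        for sy in (-0.5, 0.5):
            px = cx + sx * w * math.cos(theta) - sy * h * math.sin(theta)
            py = cy + sx * w * math.sin(theta) + sy * h * math.cos(theta)
            if math.hypot(px, py) > R:
                return False
    return True

# A 2x2 square centered at the origin needs R >= sqrt(2), at any rotation.
inside = rectangle_in_circle(0.0, 0.0, 2.0, 2.0, 0.0, 1.5)   # True
outside = rectangle_in_circle(0.0, 0.0, 2.0, 2.0, 0.0, 1.0)  # False
```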
Global Nonlinear Programming with possible infeasibility and finite termination
, 2012
Abstract

Cited by 1 (0 self)
In a recent paper, Birgin, Floudas and Martínez introduced an augmented Lagrangian method for global optimization. In their approach, augmented Lagrangian subproblems are solved using the αBB method, and convergence to global minimizers was obtained assuming feasibility of the original problem. In the present research, the algorithm mentioned above will be improved in several crucial aspects. On the one hand, feasibility of the problem will not be required: possible infeasibility will be detected in finite time by the new algorithms, and optimal infeasibility results will be proved. On the other hand, finite termination results that guarantee optimality and/or feasibility up to any required precision will be provided. An adaptive modification in which the subproblem tolerances depend on current feasibility and complementarity will also be given. The adaptive algorithm allows the augmented Lagrangian subproblems to be solved without requiring unnecessarily high precision in the intermediate steps of the method, which improves the overall efficiency. Experiments showing how the new algorithms and results relate to practical computations will be given.
An Inexact Modified Subgradient Algorithm for Nonconvex Optimization
, 2008
Abstract

Cited by 1 (0 self)
We propose and analyze an inexact version of the modified subgradient (MSG) algorithm, which we call the IMSG algorithm, for nonsmooth and nonconvex optimization over a compact set. We prove that under approximate, i.e. inexact, minimization of the sharp augmented Lagrangian, the main convergence properties of the MSG algorithm are preserved for the IMSG algorithm. Inexact minimization may make it possible to solve problems with less computational effort. We illustrate this through test problems, including an optimal bang–bang control problem, under several different inexactness schemes.
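One MSG-type step under inexact inner minimization can be sketched as follows. The sketch assumes a sharp augmented Lagrangian of the form f(x) + c|h(x)| − u·h(x) for a single equality constraint, a finite candidate grid standing in for the compact set, and a fixed dual step size; all of these are simplifying assumptions, not the paper's IMSG rules:

```python
def sharp_L(f, h, x, u, c):
    # Sharp (nonsmooth) augmented Lagrangian for one equality constraint.
    return f(x) + c * abs(h(x)) - u * h(x)

def imsg_step(f, h, X, u, c, s, eps, tol):
    # Inexact inner solve: accept any candidate within tol of the best value.
    best = min(sharp_L(f, h, x, u, c) for x in X)
    xk = next(x for x in X if sharp_L(f, h, x, u, c) <= best + tol)
    # Dual updates in the spirit of the MSG method (simplified step rule).
    return xk, u - s * h(xk), c + (s + eps) * abs(h(xk))

# Minimize f(x) = x^2 over [-2, 2] subject to h(x) = x - 1 = 0.
f = lambda x: x * x
h = lambda x: x - 1.0
X = [i / 10.0 for i in range(-20, 21)]  # grid standing in for the compact set
u, c = 0.0, 0.0
for _ in range(3):
    xk, u, c = imsg_step(f, h, X, u, c, s=1.0, eps=0.1, tol=1e-9)
# xk reaches the feasible minimizer x = 1.0 after a few steps.
```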
Low Order-Value approach for solving VaR-constrained optimization problems
, 2009
Abstract

Cited by 1 (0 self)
In low order-value optimization (LOVO) problems, the sum of the r smallest values of a finite sequence of q functions appears either as the objective to be minimized or as a constraint. The latter case is considered in the present paper. Portfolio optimization problems with a constraint on the admissible Value-at-Risk (VaR) can be modeled in terms of LOVO-constrained minimization. Different algorithms for the practical solution of this problem will be presented. Using these techniques, portfolio optimization problems with transaction costs will be solved.
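A VaR-style low order-value constraint can be evaluated directly from scenario data. The sketch below checks whether the sum of the r worst scenario returns of a portfolio stays above a prescribed level; the linear scenario model, the numbers, and the constraint orientation are illustrative assumptions:

```python
def lovo_constraint_ok(scenario_returns, weights, r, level):
    # Portfolio return in each scenario under a linear model.
    port = [sum(w * a for w, a in zip(weights, scen)) for scen in scenario_returns]
    worst = sorted(port)[:r]    # the r smallest scenario returns
    return sum(worst) >= level  # low order-value constraint

scenarios = [[0.01, 0.02], [-0.05, 0.01], [0.03, -0.02]]
ok = lovo_constraint_ok(scenarios, [0.5, 0.5], r=2, level=-0.02)  # True
bad = lovo_constraint_ok(scenarios, [0.5, 0.5], r=2, level=0.0)   # False
```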
Outer Trust-Region method for Constrained Optimization
, 2009
Abstract
Given an algorithm A for solving some mathematical problem based on the iterative solution of simpler subproblems, an Outer Trust-Region (OTR) modification of A is the result of adding a trust-region constraint to each subproblem. The trust-region size is adaptively updated according to the behavior of crucial variables. The new subproblems should not be more complex than the original ones, and the convergence properties of the Outer Trust-Region algorithm should be the same as those of algorithm A. Some reasons for introducing OTR modifications are given in the present work. Convergence results for an OTR version of an augmented Lagrangian method for nonconvex constrained optimization are proved, and numerical experiments are presented.
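The OTR idea of wrapping an existing solver can be sketched in a few lines (one-dimensional, with a simple grow-on-boundary radius rule; the paper's update monitoring "crucial variables" is more elaborate):

```python
def otr_solve(subproblem_solver, x0, radius=1.0, grow=2.0, iters=10):
    # Outer trust-region wrapper: each subproblem is solved inside the extra
    # box [x - radius, x + radius]; when the step hits the box boundary the
    # region was too small, so it is enlarged for the next iteration.
    x = x0
    for _ in range(iters):
        x_new = subproblem_solver(x - radius, x + radius)
        if abs(x_new - x) >= radius - 1e-12:
            radius *= grow
        x = x_new
    return x

# Hypothetical subproblem: minimize (x - 5)^2 subject to the OTR bounds,
# which a bound-respecting solver handles by clipping the minimizer.
solver = lambda lo, hi: min(max(5.0, lo), hi)
x_star = otr_solve(solver, x0=0.0)
# x_star reaches the unconstrained minimizer 5.0 once the region is large enough.
```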