Results 1–10 of 48
On Augmented Lagrangian methods with general lower-level constraints
, 2005
"... Augmented Lagrangian methods with general lowerlevel constraints are considered in the present research. These methods are useful when efficient algorithms exist for solving subproblems where the constraints are only of the lowerlevel type. Two methods of this class are introduced and analyzed. In ..."
Abstract

Cited by 80 (7 self)
 Add to MetaCart
(Show Context)
Augmented Lagrangian methods with general lower-level constraints are considered in the present research. These methods are useful when efficient algorithms exist for solving subproblems in which the constraints are only of the lower-level type. Two methods of this class are introduced and analyzed. Inexact resolution of the lower-level constrained subproblems is considered. Global convergence is proved using the Constant Positive Linear Dependence constraint qualification. Conditions for boundedness of the penalty parameters are discussed. The reliability of the approach is tested by means of an exhaustive comparison against Lancelot. All the problems of the Cute collection are used in this comparison. Moreover, the resolution of location problems in which many constraints of the lower-level set are nonlinear is addressed, employing the Spectral Projected Gradient method for solving the subproblems. Problems of this type with more than 3 × 10^6 variables and 14 × 10^6 constraints are solved in this way, using moderate computer time.
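For reference, the Powell-Hestenes-Rockafellar (PHR) Augmented Lagrangian that underlies this line of work has the following textbook form for a problem min f(x) subject to h(x) = 0, g(x) ≤ 0, and x in the lower-level set Ω; this is a standard statement added as a reading aid, not a formula quoted from the paper:

```latex
\min_{x \in \Omega} \; L_{\rho}(x,\lambda,\mu)
  = f(x)
  + \frac{\rho}{2} \sum_{i=1}^{m} \Bigl( h_i(x) + \frac{\lambda_i}{\rho} \Bigr)^{2}
  + \frac{\rho}{2} \sum_{j=1}^{p} \max\Bigl( 0,\; g_j(x) + \frac{\mu_j}{\rho} \Bigr)^{2}
```

Only the upper-level constraints h and g are penalized; the lower-level set Ω is kept explicitly in the subproblem, which is what makes the approach attractive when Ω admits an efficient specialized solver.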
Global minimization using an Augmented Lagrangian method with variable lower-level constraints
, 2007
"... A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εkglobal minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global c ..."
Abstract

Cited by 39 (1 self)
 Add to MetaCart
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εkglobal minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global convergence to an εglobal minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.
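As a reading aid, the εk-global minimization requirement on the subproblems can be written as follows (a standard formalization, with Ω denoting the simple-constraint set; not quoted verbatim from the paper): at outer iteration k, find x^k ∈ Ω such that

```latex
L_{\rho_k}\bigl(x^k,\lambda^k\bigr) \;\le\; L_{\rho_k}\bigl(z,\lambda^k\bigr) + \varepsilon_k
\quad \text{for all } z \in \Omega,
\qquad \varepsilon_k \to \varepsilon .
```

Since the tolerances εk only have to converge to ε rather than to zero, each subproblem can be solved by a deterministic global method such as αBB at finite cost.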
Minimizing the object dimensions in circle and sphere packing problems
, 2006
"... Given a fixed set of identical or differentsized circular items, the problem we deal withconsists on finding the smallest object within which the items can be packed. Circular, triangular, squared, rectangular and also strip objects are considered. Moreover, 2D and3D problems are treated. Twiced ..."
Abstract

Cited by 16 (1 self)
 Add to MetaCart
Given a fixed set of identical or different-sized circular items, the problem we deal with consists of finding the smallest object within which the items can be packed. Circular, triangular, square, rectangular, and also strip objects are considered. Moreover, 2D and 3D problems are treated. Twice-differentiable models for all these problems are presented. A strategy to reduce the complexity of evaluating the models is employed and, as a consequence, instances with a large number of items can be considered. Numerical experiments show the flexibility and reliability of the new unified approach.
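To make the kind of twice-differentiable model concrete, the case of packing circles with given radii ri inside the smallest enclosing circle can be posed as below (an illustrative formulation using the usual non-overlap and containment constraints; the paper's unified models cover the other container shapes as well):

```latex
\min_{R,\;c_1,\dots,c_n} \; R
\quad \text{s.t.} \quad
\|c_i - c_j\|^2 \ge (r_i + r_j)^2 \;\; (i < j),
\qquad
\|c_i\|^2 \le (R - r_i)^2, \;\; R \ge r_i \;\; (i = 1,\dots,n).
```

All constraint functions are polynomial, hence twice continuously differentiable, but the number of non-overlap constraints grows quadratically with n, which motivates the complexity-reduction strategy mentioned in the abstract.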
LOCAL CONVERGENCE OF EXACT AND INEXACT AUGMENTED LAGRANGIAN METHODS UNDER THE SECOND-ORDER SUFFICIENT OPTIMALITY CONDITION
, 2012
"... We establish local convergence and rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the secondorder sufficient optimality condition. In particular, no constraint qualifications of any kind ..."
Abstract

Cited by 15 (5 self)
 Add to MetaCart
We establish local convergence and rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the second-order sufficient optimality condition. In particular, no constraint qualifications of any kind are needed. Previous literature on the subject required, in addition, the linear independence constraint qualification and either the strict complementarity assumption or a stronger version of the second-order sufficient condition. That said, the classical results allow the initial multiplier estimate to be far from the optimal one, at the expense of proportionally increasing the threshold value for the penalty parameters. Although our primary goal is to avoid constraint qualifications, if the stronger assumptions are introduced, then starting points far from the optimal multiplier are allowed within our analysis as well. Using only the second-order sufficient optimality condition, for penalty parameters large enough we prove a primal-dual Q-linear convergence rate, which becomes superlinear if the parameters are allowed to go to infinity. Both exact and inexact solutions of subproblems are considered. In the exact case, we further show that the primal convergence rate is of the same Q-order as the primal-dual rate. Previous assertions for the primal sequence all had to do with the weaker R-rate of convergence and required the stronger assumptions cited above. Finally, we show that under our assumptions one of the popular rules for controlling the penalty parameters ensures their boundedness.
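For readers comparing the rate claims: a sequence zk → z* converges Q-linearly if the error contracts by a fixed factor at every step, Q-superlinearly if that factor vanishes, and only R-linearly if the error is merely dominated by a Q-linearly convergent scalar sequence (standard definitions, not specific to this paper):

```latex
\begin{aligned}
&\|z_{k+1} - z^*\| \le q\,\|z_k - z^*\|,\;\; q \in (0,1) &&\text{(Q-linear)},\\
&\|z_{k+1} - z^*\| \,/\, \|z_k - z^*\| \to 0 &&\text{(Q-superlinear)},\\
&\|z_k - z^*\| \le t_k,\;\; t_k \to 0 \text{ Q-linearly} &&\text{(R-linear)}.
\end{aligned}
```

A Q-rate for the primal-dual pair (x^k, λ^k) therefore says strictly more than the R-rates for the primal sequence established in earlier work.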
Improving ultimate convergence of an Augmented Lagrangian method
, 2007
"... Optimization methods that employ the classical PowellHestenesRockafellar Augmented Lagrangian are useful tools for solving Nonlinear Programming problems. Their reputation decreased in the last ten years due to the comparative success of InteriorPoint Newtonian algorithms, which are asymptoticall ..."
Abstract

Cited by 14 (0 self)
 Add to MetaCart
Optimization methods that employ the classical Powell-Hestenes-Rockafellar Augmented Lagrangian are useful tools for solving Nonlinear Programming problems. Their reputation decreased in the last ten years due to the comparative success of Interior-Point Newtonian algorithms, which are asymptotically faster. In the present research a combination of both approaches is evaluated. The idea is to produce a competitive method that is more robust and efficient than its “pure” counterparts on critical problems. Moreover, an additional hybrid algorithm is defined, in which the Interior-Point method is replaced by the Newtonian resolution of a KKT system identified by the Augmented Lagrangian algorithm. The software used in this work is freely available through the Tango Project web page:
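The "Newtonian resolution of a KKT system" admits a compact generic statement: once the Augmented Lagrangian phase has identified the active constraints, one applies Newton's method to the resulting nonlinear system (written here for the equality-constrained case, with h collecting the equalities and the identified active inequalities; the paper's specific switching rules are not reproduced):

```latex
F(x,\lambda) =
\begin{pmatrix}
\nabla f(x) + \nabla h(x)^{\mathsf{T}} \lambda \\
h(x)
\end{pmatrix}
= 0,
\qquad
F'(x_k,\lambda_k)
\begin{pmatrix} \Delta x \\ \Delta \lambda \end{pmatrix}
= -F(x_k,\lambda_k).
```

Newton steps on this system are locally quadratically convergent under standard regularity assumptions, which is the asymptotic speed the hybrid scheme tries to graft onto the robust Augmented Lagrangian outer loop.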
Structured minimal-memory inexact quasi-Newton method and secant preconditioners for Augmented Lagrangian Optimization
, 2006
"... Augmented Lagrangian methods for largescale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, activeset boxconstraint methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may ..."
Abstract

Cited by 13 (2 self)
 Add to MetaCart
Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, active-set box-constraint methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper a minimal-memory quasi-Newton approach with secant preconditioners is proposed, taking into account the structure of Augmented Lagrangians that come from the popular Powell-Hestenes-Rockafellar scheme. A combined algorithm that uses either the quasi-Newton formula or a truncated-Newton procedure, depending on the presence of active constraints in the penalty-Lagrangian function, is also suggested. Numerical experiments using the Cute collection are presented.
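The structure being exploited is visible in the Hessian of the PHR Augmented Lagrangian: for equality constraints it splits into a Lagrangian part plus a penalty term of known form built from the constraint Jacobian, and a structured secant preconditioner can target these pieces separately (a standard identity, written here for equalities only; the paper's specific preconditioners are not reproduced):

```latex
\nabla^2 L_{\rho}(x,\lambda)
= \nabla^2 f(x)
+ \sum_{i=1}^{m} \bigl(\lambda_i + \rho\,h_i(x)\bigr)\,\nabla^2 h_i(x)
+ \rho \,\nabla h(x)^{\mathsf{T}}\,\nabla h(x),
```

where ∇h(x) denotes the m × n Jacobian of the constraints. The last term is exactly ρ JᵀJ, so it can be applied or approximated without ever storing an n × n matrix, which is what makes a minimal-memory scheme feasible at large scale.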
Low Order-Value Optimization and Applications
, 2005
"... Given r real functions F1(x),..., Fr(x) and an integer p between 1 and r, the Low OrderValue Optimization problem (LOVO) consists of minimizing the sum of the functions that take the p smaller values. If (y1,..., yr) is a vector of data and T (x, ti) is the predicted value of the observation i with ..."
Abstract

Cited by 9 (5 self)
 Add to MetaCart
(Show Context)
Given r real functions F1(x), ..., Fr(x) and an integer p between 1 and r, the Low Order-Value Optimization problem (LOVO) consists of minimizing the sum of the functions that take the p smallest values. If (y1, ..., yr) is a vector of data and T(x, ti) is the predicted value of observation i under the parameters x ∈ ℝⁿ, it is natural to define Fi(x) = (T(x, ti) − yi)² (the quadratic error at observation i under the parameters x). When p = r this LOVO problem coincides with the classical nonlinear least-squares problem. However, the interesting situation is when p is smaller than r. In that case, the solution of LOVO allows one to discard the influence of an estimated number of outliers. Thus, the LOVO problem is an interesting tool for robust estimation of parameters of nonlinear models. When p ≪ r the LOVO problem may be used to find hidden structures in data sets. One of the most successful applications is the Protein Alignment problem. Fully documented algorithms for this application are available at www.ime.unicamp.br/~martinez/lovoalign. In this paper optimality conditions are discussed, algorithms for solving the LOVO problem are introduced, and convergence theorems are proved. Finally, numerical experiments are presented.
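Since the definition is purely combinatorial, a minimal sketch of the LOVO objective may help: sort the values F1(x), ..., Fr(x) and sum the p smallest. The exponential model T and the data below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the LOVO objective defined above: sort the r values
# F_1(x), ..., F_r(x) and sum the p smallest. The exponential model T and
# the data below are illustrative assumptions, not taken from the paper.
import numpy as np

def lovo_objective(x, t, y, p):
    # Quadratic errors F_i(x) = (T(x, t_i) - y_i)^2 with the assumed model
    # T(x, t) = x[0] * exp(x[1] * t).
    F = (x[0] * np.exp(x[1] * t) - y) ** 2
    # Summing only the p smallest values discards the r - p worst fits,
    # i.e., the presumed outliers.
    return np.sort(F)[:p].sum()

# Usage: 10 observations, 2 artificial outliers; p = 8 ignores both, so the
# true parameters attain objective value 0.
t = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(-1.0 * t)
y[3] += 5.0
y[7] -= 4.0
print(lovo_objective(np.array([2.0, -1.0]), t, y, p=8))  # -> 0.0
```

With p = 10 the injected outliers would contribute 25 + 16 to the objective at the true parameters, illustrating why the classical least-squares fit gets dragged toward them while the LOVO fit does not.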
Partial Spectral Projected Gradient Method with Active-Set Strategy for Linearly Constrained Optimization
, 2009
"... A method for linearly constrained optimization which modifies and generalizes recent boxconstraint optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted t ..."
Abstract

Cited by 8 (0 self)
 Add to MetaCart
(Show Context)
A method for linearly constrained optimization which modifies and generalizes recent box-constraint optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted to faces of the polytope are performed, which enhance the efficiency of the algorithm. Convergence proofs are given, and numerical experiments are included and discussed. Software supporting this paper is available through the Tango Project web page.
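For orientation, a minimal sketch of the projected-gradient building block behind SPG follows: a Barzilai-Borwein (spectral) steplength combined with projection onto the feasible set, shown here for a box. The paper's actual contributions (relaxed projections and iterations inside the faces of a general polytope) are not reproduced, and practical SPG also adds a nonmonotone line search, omitted for brevity.

```python
# One-loop sketch of spectral projected gradient steps on a box. This is the
# plain building block only; the relaxation and active-set machinery of the
# paper, and the nonmonotone line search of full SPG, are omitted.
import numpy as np

def project_box(x, lo, hi):
    return np.minimum(np.maximum(x, lo), hi)

def spg_box(grad, x, lo, hi, iters=50):
    lam = 1.0                                     # initial spectral steplength
    g = grad(x)
    for _ in range(iters):
        x_new = project_box(x - lam * g, lo, hi)  # projected gradient step
        g_new = grad(x_new)
        s, yv = x_new - x, g_new - g
        sty = s @ yv
        lam = (s @ s) / sty if sty > 0 else 1.0   # Barzilai-Borwein steplength
        x, g = x_new, g_new
    return x

# Usage: minimize ||x - c||^2 over the box [0, 1]^3 with c outside the box.
c = np.array([2.0, -1.0, 0.5])
print(spg_box(lambda x: 2.0 * (x - c), np.zeros(3), np.zeros(3), np.ones(3)))
# -> [1.  0.  0.5], the projection of c onto the box
```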
Two new weak constraint qualifications and applications
"... We present two new constraint qualifications (CQ) that are weaker than the recently introduced Relaxed Constant Positive Linear Dependence (RCPLD) constraint qualification. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependen ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
(Show Context)
We present two new constraint qualifications (CQs) that are weaker than the recently introduced Relaxed Constant Positive Linear Dependence (RCPLD) constraint qualification. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact set of gradients whose properties had to be preserved locally and that would still work as a CQ. This is done in the first new constraint qualification, which we call the Constant Rank of the Subspace Component (CRSC) CQ. This new CQ also preserves many of the good properties of RCPLD, such as local stability and the validity of an error bound. We also introduce an even weaker CQ, called Constant Positive Generator (CPG), that can replace RCPLD in the analysis of the global convergence of algorithms. We close this work by extending convergence results for algorithms belonging to all the main classes of nonlinear optimization methods: SQP, augmented Lagrangians, interior-point algorithms, and inexact restoration.
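For context, the positive linear dependence notion underlying CPLD and RCPLD can be stated as follows (a standard definition, included as a reading aid): gradients indexed by I (equalities) and J (active inequalities) are positively linearly dependent at x̄ if

```latex
\sum_{i \in I} \alpha_i \nabla h_i(\bar{x}) + \sum_{j \in J} \beta_j \nabla g_j(\bar{x}) = 0
\qquad \text{with } \beta_j \ge 0 \text{ and } (\alpha,\beta) \ne 0 .
```

CPLD requires that whenever such a relation holds at x̄, the same gradients remain linearly dependent for every x in a neighborhood of x̄; RCPLD, CRSC, and CPG progressively weaken which families of gradients must satisfy this kind of local-preservation property.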