Results 1-10 of 46
Augmented Lagrangian methods under the Constant Positive Linear Dependence constraint qualification
"... ..."
Global minimization using an Augmented Lagrangian method with variable lower-level constraints, 2007
"... A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εkglobal minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global c ..."
Cited by 21 (1 self)
Abstract
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the ε_k-global minimization of the Augmented Lagrangian with simple constraints, where ε_k → ε. Global convergence to an ε-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.
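The outer loop described in this abstract can be sketched as follows; this is a minimal Python sketch, assuming equality constraints only, where `eps_global_minimize` is a hypothetical stand-in for the ε_k-global subproblem solver (the paper uses the αBB method), and the tenfold penalty increase is a common convention rather than necessarily the paper's exact rule:

```python
import numpy as np

def al_global_sketch(f, h, x0, lam0, eps, rho=10.0, max_outer=50):
    """Outer loop of an eps-global Augmented Lagrangian method (a sketch).

    Solves min f(x) s.t. h(x) = 0 (simple bounds are assumed to be handled
    inside the subproblem solver). `eps_global_minimize` is a hypothetical
    stand-in for the eps_k-global subproblem solver.
    """
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    eps_k = 1.0                                   # driven down to eps
    for _ in range(max_outer):
        def L(z):                                 # PHR Augmented Lagrangian
            hz = h(z)
            return f(z) + lam @ hz + 0.5 * rho * hz @ hz
        x = eps_global_minimize(L, x, tol=eps_k)  # hypothetical global solver
        hx = h(x)
        lam = lam + rho * hx                      # first-order multiplier update
        if np.linalg.norm(hx) > eps:              # feasibility not yet achieved:
            rho *= 10.0                           # increase the penalty
        elif eps_k <= eps:
            return x, lam
        eps_k = max(eps, 0.5 * eps_k)             # eps_k -> eps
    return x, lam
```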
LOCAL CONVERGENCE OF EXACT AND INEXACT AUGMENTED LAGRANGIAN METHODS UNDER THE SECOND-ORDER SUFFICIENT OPTIMALITY CONDITION, 2012
"... We establish local convergence and rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the secondorder sufficient optimality condition. In particular, no constraint qualifications of any kind ..."
Cited by 8 (4 self)
Abstract
We establish local convergence and rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the second-order sufficient optimality condition. In particular, no constraint qualifications of any kind are needed. Previous literature on the subject required, in addition, the linear independence constraint qualification and either the strict complementarity assumption or a stronger version of the second-order sufficient condition. That said, the classical results allow the initial multiplier estimate to be far from the optimal one, at the expense of proportionally increasing the threshold value for the penalty parameters. Although our primary goal is to avoid constraint qualifications, if the stronger assumptions are introduced, then starting points far from the optimal multiplier are allowed within our analysis as well. Using only the second-order sufficient optimality condition, we prove, for penalty parameters large enough, a primal-dual Q-linear convergence rate, which becomes superlinear if the parameters are allowed to go to infinity. Both exact and inexact solutions of the subproblems are considered. In the exact case, we further show that the primal convergence rate is of the same Q-order as the primal-dual rate. Previous assertions for the primal sequence all concerned the weaker R-rate of convergence and required the stronger assumptions cited above. Finally, we show that under our assumptions one of the popular rules for controlling the penalty parameters ensures their boundedness.
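For reference, the classical augmented Lagrangian iteration whose local convergence is analyzed here can be written, in the equality-constrained case and in standard notation (not taken verbatim from the paper), as:

```latex
x^{k+1} \approx \operatorname*{arg\,min}_{x}\; f(x) + (\lambda^{k})^{\top} h(x)
        + \frac{\rho_{k}}{2}\,\lVert h(x)\rVert^{2},
\qquad
\lambda^{k+1} = \lambda^{k} + \rho_{k}\, h(x^{k+1}).
```

The "exact" variant minimizes the subproblem exactly; the "inexact" one only to a prescribed tolerance.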
On the Boundedness of Penalty Parameters in an Augmented Lagrangian Method with Constrained Subproblems, 2011
"... Augmented Lagrangian methods are effective tools for solving largescale nonlinear programming problems. At each outer iteration a minimization subproblem with simple constraints, whose objective function depends on updated Lagrange multipliers and penalty parameters, is approximately solved. When t ..."
Cited by 6 (1 self)
Abstract
Augmented Lagrangian methods are effective tools for solving large-scale nonlinear programming problems. At each outer iteration a minimization subproblem with simple constraints, whose objective function depends on the updated Lagrange multipliers and penalty parameters, is approximately solved. When the penalty parameter becomes very large the subproblem is difficult to solve; the effectiveness of this approach is therefore associated with the boundedness of the penalty parameters. In this paper it is proved that, under assumptions more natural than those employed up to now, the penalty parameters are bounded. To prove the new boundedness result, the original algorithm has been slightly modified. Numerical consequences of the modifications are discussed and computational experiments are presented.
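A typical penalty update of the kind whose boundedness is studied here (a generic textbook version; the paper's slightly modified algorithm may use a different test) increases ρ_k only when the feasibility measure fails to improve by a fixed factor:

```latex
\rho_{k+1} =
\begin{cases}
\rho_{k},         & \text{if } \lVert h(x^{k+1}) \rVert \le \tau \,\lVert h(x^{k}) \rVert,\\[2pt]
\gamma\,\rho_{k}, & \text{otherwise,}
\end{cases}
\qquad \tau \in (0,1),\ \gamma > 1.
```

Boundedness of the sequence {ρ_k} then amounts to showing that the test eventually always passes.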
Low Order-Value Optimization and Applications, 2005
"... Given r real functions F1(x),..., Fr(x) and an integer p between 1 and r, the Low OrderValue Optimization problem (LOVO) consists of minimizing the sum of the functions that take the p smaller values. If (y1,..., yr) is a vector of data and T (x, ti) is the predicted value of the observation i with ..."
Cited by 6 (4 self)
Abstract
Given r real functions F_1(x), ..., F_r(x) and an integer p between 1 and r, the Low Order-Value Optimization problem (LOVO) consists of minimizing the sum of the functions that take the p smallest values. If (y_1, ..., y_r) is a vector of data and T(x, t_i) is the predicted value of observation i with the parameters x ∈ R^n, it is natural to define F_i(x) = (T(x, t_i) − y_i)^2 (the quadratic error at observation i under the parameters x). When p = r this LOVO problem coincides with the classical nonlinear least-squares problem. However, the interesting situation is when p is smaller than r. In that case, the solution of LOVO allows one to discard the influence of an estimated number of outliers. Thus, the LOVO problem is an interesting tool for robust estimation of parameters of nonlinear models. When p ≪ r the LOVO problem may be used to find hidden structures in data sets. One of the most successful applications is the Protein Alignment problem. Fully documented algorithms for this application are available at www.ime.unicamp.br/~martinez/lovoalign. In this paper optimality conditions are discussed, algorithms for solving the LOVO problem are introduced, and convergence theorems are proved. Finally, numerical experiments are presented.
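The least-squares instance of LOVO described above is easy to state in code; a minimal Python sketch, where `model(x, t)` is a hypothetical vectorized predictor playing the role of T(x, t_i):

```python
import numpy as np

def lovo_objective(x, model, t, y, p):
    """Sum of the p smallest squared residuals F_i(x) = (T(x, t_i) - y_i)^2.

    With p = len(y) this is ordinary nonlinear least squares; with p < len(y)
    the r - p largest residuals (candidate outliers) are discarded.
    `model(x, t)` is a hypothetical vectorized predictor.
    """
    residuals_sq = (model(x, t) - y) ** 2      # F_i(x), i = 1..r
    return np.sort(residuals_sq)[:p].sum()     # keep the p smallest values

# Example: robust fit of a line y = a*t + b with 20% injected outliers
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * t + 1.0 + 0.01 * rng.standard_normal(50)
y[:10] += 5.0                                  # outliers
line = lambda x, t: x[0] * t + x[1]
print(lovo_objective(np.array([2.0, 1.0]), line, t, y, p=40))
```

At the true parameters the objective stays small because the ten corrupted residuals fall among the discarded r − p values.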
Second-order negative-curvature methods for box-constrained and general constrained optimization, 2009
"... A Nonlinear Programming algorithm that converges to secondorder stationary points is introduced in this paper. The main tool is a secondorder negativecurvature method for boxconstrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is ..."
Cited by 6 (0 self)
Abstract
A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples are exhibited in which the new method converges to second-order stationary points in situations where first-order methods fail.
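As a generic illustration of the central tool (not the paper's method, which also copes with box constraints and missing continuous second derivatives), a negative-curvature direction can be taken from the most negative eigenpair of the Hessian:

```python
import numpy as np

def negative_curvature_direction(H, tol=1e-8):
    """Generic illustration: return a direction d with d^T H d < 0 when the
    symmetric matrix H has a negative eigenvalue, else None. Second-order
    methods use such directions to escape first-order stationary points
    that are not second-order stationary."""
    eigvals, eigvecs = np.linalg.eigh(H)   # eigenvalues in ascending order
    if eigvals[0] < -tol:
        return eigvecs[:, 0]               # eigenvector of the most negative eigenvalue
    return None

# At x = 0 the function f(x1, x2) = x1^2 - x2^2 has zero gradient, so a
# first-order method stalls there, yet there is negative curvature along e2.
H = np.array([[2.0, 0.0], [0.0, -2.0]])
d = negative_curvature_direction(H)
print(d, d @ H @ d)                        # d^T H d = -2 < 0
```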
Improving ultimate convergence of an Augmented Lagrangian method, 2007
"... Optimization methods that employ the classical PowellHestenesRockafellar Augmented Lagrangian are useful tools for solving Nonlinear Programming problems. Their reputation decreased in the last ten years due to the comparative success of InteriorPoint Newtonian algorithms, which are asymptoticall ..."
Cited by 4 (0 self)
Abstract
Optimization methods that employ the classical Powell-Hestenes-Rockafellar Augmented Lagrangian are useful tools for solving Nonlinear Programming problems. Their reputation has decreased over the last ten years due to the comparative success of Interior-Point Newtonian algorithms, which are asymptotically faster. In the present research a combination of both approaches is evaluated. The idea is to produce a competitive method that is more robust and efficient than its “pure” counterparts for critical problems. Moreover, an additional hybrid algorithm is defined, in which the Interior-Point method is replaced by the Newtonian resolution of a KKT system identified by the Augmented Lagrangian algorithm. The software used in this work is freely available through the Tango Project web page.
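The second hybrid can be sketched schematically: once the Augmented Lagrangian phase identifies a good primal-dual pair, switch to Newton's method on the KKT system. A minimal equality-constrained Python sketch under these assumptions (generic, not the actual Tango code):

```python
import numpy as np

def newton_kkt_step(grad_f, hess_lag, h, jac_h, x, lam):
    """One Newton step on the KKT system of min f(x) s.t. h(x) = 0:
        F(x, lam) = [ grad_f(x) + jac_h(x)^T lam ; h(x) ] = 0.
    Generic sketch of the acceleration phase: near a solution, with a good
    (x, lam) produced by the Augmented Lagrangian iterations, Newton's
    method on this system converges quadratically."""
    n, m = x.size, lam.size
    J = jac_h(x)                                    # m-by-n Jacobian of h
    K = np.block([[hess_lag(x, lam), J.T],          # Hessian of the Lagrangian
                  [J, np.zeros((m, m))]])           # KKT matrix
    rhs = -np.concatenate([grad_f(x) + J.T @ lam, h(x)])
    step = np.linalg.solve(K, rhs)
    return x + step[:n], lam + step[n:]
```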
Partial Spectral Projected Gradient Method with Active-Set Strategy for Linearly Constrained Optimization, 2009
"... A method for linearly constrained optimization which modifies and generalizes recent boxconstraint optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted t ..."
Cited by 4 (0 self)
Abstract
A method for linearly constrained optimization which modifies and generalizes recent box-constrained optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted to faces of the polytope are performed, which enhance the efficiency of the algorithm. Convergence proofs are given, and numerical experiments are included and commented on. Software supporting this paper is available through the Tango Project web page.
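For context, a single full (unrelaxed) Spectral Projected Gradient step is shown below; the paper's contribution is a partial, relaxed form of this projection intercalated with iterations inside faces of the polytope. A Python sketch assuming a user-supplied `project` onto the feasible set, with SPG's nonmonotone line search omitted:

```python
import numpy as np

def spg_step(grad, project, x, x_prev, g_prev, alpha_max=1e10):
    """One Spectral Projected Gradient step (a generic textbook version).
    `project` maps a point onto the feasible polytope; the nonmonotone
    line search that makes SPG globally convergent is omitted here."""
    g = grad(x)
    s, yv = x - x_prev, g - g_prev
    sty = s @ yv
    # Barzilai-Borwein "spectral" steplength, safeguarded
    alpha = (s @ s) / sty if sty > 0 else alpha_max
    alpha = np.clip(alpha, 1e-10, alpha_max)
    return project(x - alpha * g), g
```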
On second-order optimality conditions for nonlinear programming, Optimization
"... A new SecondOrder condition is given, which depends on a weak constant rank constraint requirement. We show that practical and publicly available algorithms (www.ime.usp.br/∼egbirgin/tango) of Augmented Lagrangian type converge, after slight modifications, to stationary points defined by the new co ..."
Cited by 4 (0 self)
Abstract
A new second-order condition is given, which depends on a weak constant-rank constraint requirement. We show that practical and publicly available algorithms (www.ime.usp.br/~egbirgin/tango) of Augmented Lagrangian type converge, after slight modifications, to stationary points defined by the new condition.
On sequential optimality conditions for smooth constrained optimization, 2009
"... Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Approximate KKT and Approximate Gradient Projection conditions are analyzed in this work. These conditions are not necessarily equivalent. Implications between differen ..."
Cited by 4 (1 self)
Abstract
Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Approximate KKT and Approximate Gradient Projection conditions are analyzed in this work. These conditions are not necessarily equivalent. Implications between the different conditions, as well as counterexamples, are shown. Algorithmic consequences are discussed.
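For concreteness, the Approximate KKT (AKKT) condition for min f(x) subject to h(x) = 0 and g(x) ≤ 0 is commonly stated as follows: x* satisfies AKKT if there exist sequences x^k → x* and multipliers λ^k, μ^k ≥ 0 such that

```latex
\Bigl\lVert \nabla f(x^{k}) + \nabla h(x^{k})\,\lambda^{k} + \nabla g(x^{k})\,\mu^{k} \Bigr\rVert \to 0,
\qquad
\min\{-g_{i}(x^{k}),\, \mu_{i}^{k}\} \to 0 \ \text{ for every } i.
```

Every local minimizer satisfies AKKT with no constraint qualification whatsoever, which is what makes sequential conditions of this kind natural stopping criteria for solvers.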