Results 1-10 of 30
On Augmented Lagrangian methods with general lower-level constraints
2005
Cited by 62 (6 self)
Augmented Lagrangian methods with general lower-level constraints are considered in the present research. These methods are useful when efficient algorithms exist for solving subproblems in which the constraints are only of the lower-level type. Two methods of this class are introduced and analyzed. Inexact resolution of the lower-level constrained subproblems is considered. Global convergence is proved using the Constant Positive Linear Dependence constraint qualification. Conditions for boundedness of the penalty parameters are discussed. The reliability of the approach is tested by means of an exhaustive comparison against Lancelot, using all the problems of the CUTE collection. Moreover, the resolution of location problems in which many constraints of the lower-level set are nonlinear is addressed, employing the Spectral Projected Gradient method for solving the subproblems. Problems of this type with more than 3 × 10^6 variables and 14 × 10^6 constraints are solved in this way, using moderate computer time.
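The structure the abstract describes can be sketched as follows: equality constraints are penalized in the objective, while the "lower-level" box constraints stay inside the subproblem, which any efficient box-constrained solver can handle. This is a minimal illustrative sketch, not the paper's algorithm; the problem, step size, tolerances, and the fixed penalty parameter are all assumptions made for the example (in practice the penalty parameter is increased when feasibility does not improve).

```python
import numpy as np

def project(x, l, u):
    # Projection onto the box [l, u].
    return np.minimum(np.maximum(x, l), u)

def solve_box_subproblem(grad_L, x, l, u, step=0.05, iters=2000):
    # Subproblem: only box constraints remain, so a plain projected-gradient
    # loop x <- P_[l,u](x - step * grad L(x)) suffices for this small example.
    for _ in range(iters):
        x = project(x - step * grad_L(x), l, u)
    return x

def augmented_lagrangian(grad_f, h, Jh, l, u, x0, rho=10.0, n_outer=15):
    # Outer loop: minimize L(x) = f(x) + lam'h(x) + (rho/2)||h(x)||^2 over
    # the box only, then update the multipliers (first-order update).
    x = np.asarray(x0, float)
    lam = np.zeros(h(x).shape)
    l, u = np.asarray(l, float), np.asarray(u, float)
    for _ in range(n_outer):
        grad_L = lambda z: grad_f(z) + Jh(z).T @ (lam + rho * h(z))
        x = solve_box_subproblem(grad_L, x, l, u)
        lam = lam + rho * h(x)            # multiplier update
        if np.linalg.norm(h(x)) < 1e-10:  # feasible enough: stop
            break
    return x

# Made-up example: min (x0-1)^2 + (x1-1)^2  s.t.  x0 + x1 = 1,  0 <= x <= 1.
grad_f = lambda x: 2.0 * (x - 1.0)
h = lambda x: np.array([x[0] + x[1] - 1.0])
Jh = lambda x: np.array([[1.0, 1.0]])
x = augmented_lagrangian(grad_f, h, Jh, [0.0, 0.0], [1.0, 1.0], [0.0, 0.0])
# x should be close to (0.5, 0.5)
```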
Large-Scale Active-Set Box-Constrained Optimization Method with Spectral Projected Gradients
Computational Optimization and Applications, 2001
Cited by 59 (9 self)
A new active-set method for smooth box-constrained minimization is introduced. The algorithm combines an unconstrained method, including a new line search that aims to add many constraints to the working set at a single iteration, with a recently introduced technique (spectral projected gradient) for dropping constraints from the working set. Global convergence is proved. A computer implementation is fully described and a numerical comparison assesses the reliability of the new algorithm. Keywords: box-constrained minimization, numerical methods, active-set strategies, spectral projected gradient.
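The spectral projected gradient ingredient mentioned above can be sketched in a few lines: take a projected gradient step whose length is the safeguarded Barzilai-Borwein ("spectral") step. This is a bare illustrative sketch under made-up data; the nonmonotone line search and the active-set machinery of the full method are omitted.

```python
import numpy as np

def spg_box(grad, x, l, u, iters=500, lam=1.0, lam_min=1e-10, lam_max=1e10):
    # Spectral projected gradient iteration on the box [l, u].
    x = np.asarray(x, float)
    g = grad(x)
    for _ in range(iters):
        x_new = np.clip(x - lam * g, l, u)   # projected spectral step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # Barzilai-Borwein step lam = (s's)/(s'y), safeguarded into
        # [lam_min, lam_max]; fall back to lam_max when s'y <= 0.
        lam = min(lam_max, max(lam_min, (s @ s) / sy)) if sy > 0 else lam_max
        x, g = x_new, g_new
    return x

# Made-up example: min ||x - c||^2 over the box [0,1]^2 with c = (2, -1);
# the minimizer is the projection of c onto the box, i.e. (1, 0).
c = np.array([2.0, -1.0])
x = spg_box(lambda x: 2.0 * (x - c), np.array([0.5, 0.5]), 0.0, 1.0)
# x should be close to (1.0, 0.0)
```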
Augmented Lagrangian methods under the Constant Positive Linear Dependence constraint qualification
Numerical comparison of Augmented Lagrangian algorithms for nonconvex problems
Computational Optimization and Applications, 2004
Cited by 30 (1 self)
Augmented Lagrangian algorithms are very popular tools for solving nonlinear programming problems. At each outer iteration of these methods a simpler optimization problem is solved, for which efficient algorithms can be used, especially when the problems are large. The most famous Augmented Lagrangian algorithm for minimization with inequality constraints is the Powell-Hestenes-Rockafellar (PHR) method. The main drawback of PHR is that the objective function of the subproblems is not twice continuously differentiable. This is the main motivation for the introduction of many alternative Augmented Lagrangian methods.
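The smoothness drawback mentioned above can be checked numerically. The PHR approach handles an inequality g(x) ≤ 0 through a term of the form (rho/2) · max(0, g + lam/rho)^2; the values lam and rho below are illustrative. Finite differences show the term is C^1 but not C^2: the second derivative in g jumps at the switch point g = -lam/rho.

```python
# Illustrative multiplier and penalty parameter.
lam, rho = 1.0, 2.0

# PHR-style penalty term for one inequality constraint value g.
phr = lambda g: 0.5 * rho * max(0.0, g + lam / rho) ** 2

kink = -lam / rho   # switch point, here g = -0.5
eps = 1e-6
# Central finite differences for the first and second derivatives in g.
d1 = lambda g: (phr(g + eps) - phr(g - eps)) / (2 * eps)
d2 = lambda g: (phr(g + eps) - 2 * phr(g) + phr(g - eps)) / eps ** 2

# First derivative is continuous across the kink...
print(d1(kink - 1e-3), d1(kink + 1e-3))   # both near 0
# ...but the second derivative jumps from 0 to rho:
print(d2(kink - 1e-3), d2(kink + 1e-3))   # ~0.0 vs ~2.0
```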
A Box-Constrained Optimization Algorithm with Negative Curvature Directions and Spectral Projected Gradients
2001
Cited by 28 (5 self)
A practical algorithm for box-constrained optimization is introduced. The algorithm combines an active-set strategy with spectral projected gradient iterations. In the interior of each face, a strategy that deals efficiently with negative curvature is employed. Global convergence results are given. Numerical results are presented. Keywords: box-constrained minimization, active-set methods, spectral projected gradients, dogleg path methods. AMS Subject Classification: 49M07, 49M10, 65K, 90C06, 90C20.
BOXQUACAN and the implementation of Augmented Lagrangian algorithms for minimization with inequality constraints
1998
Cited by 7 (2 self)
BOXQUACAN is a trust-region box-constrained optimization software package developed at the Applied Mathematics Department of the University of Campinas. During the last five years, it has been used for solving many practical and academic problems with box constraints, and it has been incorporated as a sub-algorithm of Augmented Lagrangian methods for minimization with equality constraints and bounds. This paper describes its use in connection with Augmented Lagrangian algorithms in which inequality constraints are handled without the addition of slack variables. Numerical experiments comparing a modified exponential Lagrangian method and the most classical Augmented Lagrangian are presented. (Institute of Mathematics, University of Campinas, CP 6065, 13081970 Campinas SP, Brazil. This work was supported by PRONEX, FAPESP (grant 9037246), FINEP, CNPq, FAEP-UNICAMP.) 1 Introduction. Box-constrained optimization is a well-developed area of numerical analysis. It consists of the m...
Low Order-Value Optimization and Applications
2005
Cited by 6 (4 self)
Given r real functions F_1(x), ..., F_r(x) and an integer p between 1 and r, the Low Order-Value Optimization problem (LOVO) consists of minimizing the sum of the functions that take the p smallest values. If (y_1, ..., y_r) is a vector of data and T(x, t_i) is the predicted value of observation i with the parameters x ∈ R^n, it is natural to define F_i(x) = (T(x, t_i) − y_i)^2 (the quadratic error at observation i under the parameters x). When p = r this LOVO problem coincides with the classical nonlinear least-squares problem. However, the interesting situation is when p is smaller than r. In that case, the solution of LOVO allows one to discard the influence of an estimated number of outliers. Thus, the LOVO problem is an interesting tool for robust estimation of parameters of nonlinear models. When p ≪ r the LOVO problem may be used to find hidden structures in data sets. One of the most successful applications is the Protein Alignment problem. Fully documented algorithms for this application are available at www.ime.unicamp.br/~martinez/lovoalign. In this paper optimality conditions are discussed, algorithms for solving the LOVO problem are introduced, and convergence theorems are proved. Finally, numerical experiments are presented.
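The LOVO objective defined above is easy to state in code: evaluate all r residual functions at x, sort, and sum the p smallest. The small line-fitting data set below is made up for illustration; it shows how choosing p < r makes a single outlier drop out of the objective.

```python
def lovo_value(Fs, x, p):
    # LOVO objective: sum of the p smallest F_i(x) among the r functions.
    return sum(sorted(F(x) for F in Fs)[:p])

# Fit a line T(x, t) = x[0]*t + x[1] to five points, one of them an outlier.
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.0, 2.1, 9.0, 4.0]   # the point (3.0, 9.0) is an outlier
# Quadratic errors F_i(x) = (T(x, t_i) - y_i)^2, one function per data point.
Fs = [lambda x, t=t, y=y: (x[0] * t + x[1] - y) ** 2 for t, y in zip(ts, ys)]

x = [1.0, 0.0]                    # the "true" line y = t
full = lovo_value(Fs, x, p=5)     # p = r: classical least squares, outlier dominates
lovo = lovo_value(Fs, x, p=4)     # p = 4: the worst residual is discarded
# full - lovo equals the discarded residual (3.0 - 9.0)^2 = 36.0
```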
Local Convergence of an Inexact-Restoration Method and Numerical Experiments
2007
Cited by 4 (1 self)
Local convergence of an inexact-restoration method for nonlinear programming is proved. Numerical experiments are performed with the objective of evaluating the behavior of the purely local method against a globally convergent nonlinear-programming algorithm.
Augmented Lagrangians with possible infeasibility and finite termination for global nonlinear programming
2012
In a recent paper, Birgin, Floudas and Martínez introduced an augmented Lagrangian method for global optimization. In their approach, augmented Lagrangian subproblems are solved using the αBB method and convergence to global minimizers was obtained assuming feasibility of the original problem. In the present research, the algorithm mentioned above will be improved in several crucial aspects. On the one hand, feasibility of the problem will not be required. Possible infeasibility will be detected in finite time by the new algorithms and optimal infeasibility results will be proved. On the other hand, finite termination results that guarantee optimality and/or feasibility up to any required precision will be provided. An adaptive modification in which subproblem tolerances depend on current feasibility and complementarity will also be given. The adaptive algorithm allows the augmented Lagrangian subproblems to be solved without requiring unnecessary potentially high precisions in the intermediate steps of the method, which improves the overall efficiency. Experiments showing how the new algorithms and results are related to practical computations will be given.