## A globally convergent linearly constrained Lagrangian method for nonlinear optimization (2002)

Venue: SIAM J. Optim.

Citations: 21 (5 self)

### BibTeX

@ARTICLE{Friedlander02aglobally,
  author  = {Michael P. Friedlander and Michael A. Saunders},
  title   = {A globally convergent linearly constrained Lagrangian method for nonlinear optimization},
  journal = {SIAM J. Optim.},
  year    = {2002},
  volume  = {15},
  pages   = {863--897}
}

### Abstract

For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form “minimize an augmented Lagrangian function subject to linearized constraints.” Such methods converge rapidly near a solution but may not be reliable from arbitrary starting points. Nevertheless, the well-known software package MINOS has proved effective on many large problems. Its success motivates us to derive a related LCL algorithm that possesses three important properties: it is globally convergent, the subproblem constraints are always feasible, and the subproblems may be solved inexactly. The new algorithm has been implemented in Matlab, with an option to use either MINOS or SNOPT (Fortran codes) to solve the linearly constrained subproblems. Only first derivatives are required. We present numerical results on a subset of the COPS, HS, and CUTE test problems, which include many large examples. The results demonstrate the robustness and efficiency of the stabilized LCL procedure.
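The subproblem structure the abstract describes can be sketched on a toy problem. The code below is an illustrative outline only, not the paper's algorithm: it omits the stabilizing ℓ1 penalty and elastic variables, uses a made-up problem (minimize x1 + x2 subject to x1² + x2² = 2), and relies on SciPy's general-purpose SLSQP solver for the linearly constrained subproblems.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy problem (not from the paper's test sets):
#   minimize  x1 + x2   subject to  x1^2 + x2^2 = 2
# Solution: x* = (-1, -1) with multiplier y* = -0.5.
f = lambda x: x[0] + x[1]
g = lambda x: np.array([1.0, 1.0])              # gradient of f
c = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0])
J = lambda x: np.array([[2 * x[0], 2 * x[1]]])  # Jacobian of c

def lcl(x0, y0, rho=1.0, iters=10):
    """Basic (unstabilized) LCL outer loop: at each x_k, minimize the
    augmented Lagrangian subject to the constraints linearized at x_k."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for _ in range(iters):
        Jk, ck, yk = J(x), c(x), y.copy()
        rhs = Jk @ x - ck                       # linearization: Jk x = Jk x_k - c_k
        aug = lambda z: f(z) - yk @ c(z) + 0.5 * rho * c(z) @ c(z)
        res = minimize(aug, x, method='SLSQP',
                       constraints={'type': 'eq', 'fun': lambda z: Jk @ z - rhs})
        x = res.x
        # Least-squares multiplier estimate at the new point.
        y = np.linalg.lstsq(J(x).T, g(x), rcond=None)[0]
    return x, y

x, y = lcl([-1.2, -0.8], [0.0])
```

Started near the solution, the iterates contract rapidly, consistent with the local behavior the abstract describes; from poor starting points this unstabilized loop can fail, which is exactly the gap the paper's stabilized variant addresses.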

### Citations

1347 |
Practical Optimization
- Gill, Murray, et al.
- 1981
Citation Context ...e subproblems will be feasible, and the criteria for their early termination are heuristic. Our method may be regarded as a generalization of sequential augmented Lagrangian methods (see, for example, [26, 1, 18]). The theory we develop provides a framework that unifies Robinson’s LCL method [40] with the bound-constrained Lagrangian (BCL) method used, for example, by LANCELOT [11]. In the context of our theo...

1081 |
Practical Methods of Optimization
- Fletcher
- 1981
Citation Context ...e subproblems will be feasible, and the criteria for their early termination are heuristic. Our method may be regarded as a generalization of sequential augmented Lagrangian methods (see, for example, [26, 1, 18]). The theory we develop provides a framework that unifies Robinson’s LCL method [40] with the bound-constrained Lagrangian (BCL) method used, for example, by LANCELOT [11]. In the context of our theo...

496 |
Constrained Optimization and Lagrange Multiplier Methods
- Bertsekas
- 1982
Citation Context ...e subproblems will be feasible, and the criteria for their early termination are heuristic. Our method may be regarded as a generalization of sequential augmented Lagrangian methods (see, for example, [26, 1, 18]). The theory we develop provides a framework that unifies Robinson’s LCL method [40] with the bound-constrained Lagrangian (BCL) method used, for example, by LANCELOT [11]. In the context of our theo...

363 |
AMPL: A Modeling Language for
- Fourer, Gay, et al.
- 2002
Citation Context ...obally convergent.) Nevertheless, the well-known software package MINOS [36] employs an LCL method and has proved effective on many problems (large and small), especially within the GAMS [8] and AMPL [19] environments. It is widely used in industry and academia. Its success motivates us to propose an LCL-like method for which global convergence to a local minimizer or a stationary point can be proved ...

328 | SNOPT: An SQP algorithm for Large-Scale Constrained Optimization
- Gill, Murray, et al.
- 2001
Citation Context ...lems and is also based on sound theory. We implemented the sLCL method as a Matlab program that calls either the reduced-gradient part of MINOS [35] or the sequential quadratic programming code SNOPT [25] to solve the linearly constrained subproblems. These solvers are most efficient on problems with few degrees of freedom. Also, they use only first derivatives, and consequently our implementation req...

243 | Benchmarking optimization software with performance profiles
- Dolan, Moré
Citation Context ...e conducted on an AMD Athlon 1700XP using 384 MB of RAM, running Linux 2.4.18. Figure 8.1 shows performance profiles, as described by Dolan and Moré [16], for LCLOPT/MINOS (dotted line), LCLOPT/SNOPT (dashed line), and MINOS (solid line). The statistic profiled in the top chart is the total number of function and gradient evaluations. In the bottom ch...
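The Dolan–Moré performance profiles cited in this excerpt can be reproduced in a few lines. A minimal sketch with made-up timing data (not the paper's results): for each problem p and solver s, form the ratio r_ps = t_ps / min_s t_ps, and plot ρ_s(τ), the fraction of problems on which solver s is within a factor τ of the best solver.

```python
import numpy as np

# Hypothetical timing data: rows = problems, columns = solvers.
# np.nan marks a failure on that problem.
T = np.array([[1.0, 2.0],
              [3.0, 1.5],
              [2.0, np.nan]])

def performance_profile(T, taus):
    """Dolan-More profile: rho_s(tau) = fraction of problems whose
    performance ratio t_ps / min_s t_ps is at most tau."""
    best = np.nanmin(T, axis=1, keepdims=True)   # best solver per problem
    ratios = T / best
    ratios = np.where(np.isnan(ratios), np.inf, ratios)  # failures never qualify
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

profile = performance_profile(T, taus=[1.0, 2.0])
```

At τ = 1 the profile reads off the fraction of problems each solver wins outright; as τ grows it approaches each solver's overall success rate.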

234 |
Inexact Newton methods
- Dembo, Eisenstat, et al.
- 1982
Citation Context ...linear equations that would be derived from applying Newton’s method to (3.1) (again, ignoring bound constraints). In that case, the theory from inexact Newton methods (Dembo, Eisenstat, and Steihaug [14]) predicts that the quadratic convergence rate is recovered when the residual error is reduced at the rate O(‖F(xk, yk, zk)‖). The similarity between (8.1) and the Newton equations hints at the possibil...
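The forcing-term idea from inexact Newton methods can be illustrated directly. This sketch is my own construction, not code from the paper: it runs Newton's method on a small hypothetical system but deliberately perturbs each step so the inner residual equals η_k‖F‖, with η_k shrinking like ‖F‖ as the cited theory prescribes, so fast local convergence survives the inexact solves.

```python
import numpy as np

# Inexact Newton (Dembo-Eisenstat-Steihaug style): solve F(x) = 0,
# accepting any step s with residual ||F(x) + J(x) s|| <= eta * ||F(x)||.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
Jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

x = np.array([2.0, 0.5])                        # hypothetical starting point
for _ in range(12):
    Fx = F(x)
    eta = min(0.5, np.linalg.norm(Fx))          # forcing term eta_k = O(||F||)
    s = np.linalg.solve(Jac(x), -Fx)            # exact Newton step
    # Perturb the step so the inner residual is exactly eta*||Fx||,
    # standing in for a truncated iterative inner solver.
    s += eta * np.linalg.norm(Fx) * np.linalg.solve(Jac(x), np.array([1.0, 0.0]))
    x = x + s
```

Despite the deliberately sloppy inner solves, the residual still collapses super-linearly once ‖F‖ (and hence η_k) becomes small.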

175 |
Multiplier and gradient methods
- HESTENES
- 1969
Citation Context ...e equivalent bound-constrained minimization problem (BCk): minimize Lk(x) subject to x ≥ 0. Subproblem (BCk) is used by BCL methods (e.g., Hestenes [29], Powell [39], Bertsekas [1], Conn, Gould, and Toint [10]) and in particular by LANCELOT [11]. Recovering the LCL subproblem: the ℓ1 penalty function is exact. If the linearization is feasible and σk ...

123 |
CUTEr, a constrained and unconstrained testing environment, revisited
- Toint
- 2003
Citation Context ...Numerical results. This section summarizes the performance of the two LCLOPT implementations on a set of nonlinearly constrained test problems from the COPS 2.0 [15], Hock–Schittkowski [30], and CUTE [5] test suites. As a benchmark, we also give results for AMPL/MINOS 19981015 on all the test problems. We used the AMPL versions of all problems, as formulated by Vanderbei [45]. A MEX interface to the ...

123 |
GAMS: A User's Guide, The Scientific
- Brooke, Kendrick, et al.
- 1988
Citation Context ...ght not be globally convergent.) Nevertheless, the well-known software package MINOS [36] employs an LCL method and has proved effective on many problems (large and small), especially within the GAMS [8] and AMPL [19] environments. It is widely used in industry and academia. Its success motivates us to propose an LCL-like method for which global convergence to a local minimizer or a stationary point ...

96 |
LANCELOT: A Fortran Package for Large-Scale Nonlinear Optimization (Release A)
- Toint
- 1991
Citation Context ...s (see, for example, [26, 1, 18]). The theory we develop provides a framework that unifies Robinson’s LCL method [40] with the bound-constrained Lagrangian (BCL) method used, for example, by LANCELOT [11]. In the context of our theory, the proposed algorithm is actually a continuum of methods, with Robinson’s LCL method and the BCL method at opposite ends of a spectrum. The sLCL method exploits this r...

ADIC: An extensible automatic differentiation tool for ANSI-C
- Bischof, Roh, et al.
- 1997
Citation Context ...gly available for certain problem classes, e.g., within recent versions of GAMS and AMPL, and for more general functions defined by Fortran or C code, notably ADIFOR and ADIC (Bischof, Roh, and Mauer [4] and Bischof et al. [3]). These may be used by SQP and interior methods for nonlinearly constrained (NC) problems. Certain theoretical hurdles might be avoided, however, by developing specialized seco...

75 | User’s guide for SNOPT 5.3: A Fortran package for large-scale nonlinear programming
- Gill, Murray, et al.
- 1998
Citation Context ...d in Matlab 6 [33] and is called LCLOPT. There are two versions. LCLOPT/MINOS uses MINOS 5.5 [35, 36] to solve the sequence of linearly constrained subproblems, while LCLOPT/SNOPT uses SNOPT 6.1-1(5) [24]. We now turn our attention to an optimization problem with more general constraints and leave (GNP) behind. 7.1. Problem formulation. LCLOPT solves problems of th...

56 |
A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds
- Conn, Gould, et al.
Citation Context ...property. These results rely on a relationship between ‖c(x∗k)‖ and ρk, namely, (5.3). We know from BCL convergence theory that the convergence rate is superlinear if ρk → ∞ and linear otherwise (cf. [1, 9, 10]). Because ηk is reduced at a sublinear rate, ‖c(x∗k)‖ will eventually go to zero faster than ηk, at which point it is no longer necessary to increase ρk. Thus, we can be assured that Algorithm 2 do...

53 |
A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds
- Toint
- 1991
Citation Context ...property. These results rely on a relationship between ‖c(x∗k)‖ and ρk, namely, (5.3). We know from BCL convergence theory that the convergence rate is superlinear if ρk → ∞ and linear otherwise (cf. [1, 9, 10]). Because ηk is reduced at a sublinear rate, ‖c(x∗k)‖ will eventually go to zero faster than ηk, at which point it is no longer necessary to increase ρk. Thus, we can be assured that Algor...

42 |
Test Examples for Nonlinear Programming
- Hock, Schittkowski
- 1981
Citation Context ...ussed next. 8. Numerical results. This section summarizes the performance of the two LCLOPT implementations on a set of nonlinearly constrained test problems from the COPS 2.0 [15], Hock–Schittkowski [30], and CUTE [5] test suites. As a benchmark, we also give results for AMPL/MINOS 19981015 on all the test problems. We used the AMPL versions of all problems, as formulated by Vanderbei [45]. A MEX int...

39 |
CUTEr and SifDec: A constrained and unconstrained testing environment, revisited
- Toint
Citation Context ...cavty2, drcavty3, flosp2hh, flosp2hl, flosp2hm, flosp2th, flosp2tl, flosp2tm, methanb8, methanl8, res. (To avoid such an exclusion rate, future experiments will work directly with the CUTEr interface [27].) Of the remaining 42 problems, 17 can be adjusted in size. The solvers were again applied to the largest versions that would not cause memory paging. Table 8.5 gives the dimensions.

28 | Hooking your solver to AMPL
- Gay
- 1997
Citation Context ...15 on all the test problems. We used the AMPL versions of all problems, as formulated by Vanderbei [45]. A MEX interface to the AMPL library makes functions and gradients available in Matlab; see Gay [22] for details. (The CUTE versions of the problems could also have been used from Matlab.) All runs were conducted on an AMD Athlon 1700XP using 384 MB of RAM, running Linux 2.4.18.

26 |
An ℓ1 penalty method for nonlinear constraints
- Fletcher
- 1985
Citation Context ...hreshold, v and w are likely to be zero and the minimizers of the elastic problem (ELCk) will coincide with the minimizers of the inelastic problem (LCk). Exact penalty functions have been studied by [28, 1, 17] among others. See Conn, Gould, and Toint [13] for a more recent discussion. We are particularly interested in this feature when the iterates generated by the sLCL algorithm are approaching a solution...

26 |
Exact penalty functions in nonlinear programming
- Han, Mangasarian
- 1979
Citation Context ...t of a large ‖x∗k − xk‖ (see (5.2)). 3.1. The ℓ1 penalty function. The term σk eᵀ(v + w) is the ℓ1 penalty function. Together with the constraints v, w ≥ 0, it is equivalent to a penalty on ‖v − w‖1 (see [28]). Eliminating v − w, we see that the elastic subproblem (ELCk) can be stated as (ELC′k): minimize Lk(x) + σk‖ck(x)‖1 subject to x ≥ 0, with solution (x∗k, z∗k). This immediately reveals the stab...
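The equivalence noted in this excerpt, that the elastic term σ eᵀ(v + w) with v, w ≥ 0 reproduces an ℓ1 penalty, is easy to verify numerically: for a fixed residual r, minimizing eᵀ(v + w) subject to v − w = r and v, w ≥ 0 yields exactly ‖r‖₁, with v and w picking up the positive and negative parts of r. A small check with a made-up r (my own illustration, not the paper's formulation):

```python
import numpy as np
from scipy.optimize import linprog

# For fixed r:  min_{v,w >= 0} e^T(v + w)  s.t.  v - w = r   equals ||r||_1.
r = np.array([1.5, -0.5, 0.0])               # arbitrary residual vector
n = len(r)
cost = np.ones(2 * n)                        # objective e^T v + e^T w
A_eq = np.hstack([np.eye(n), -np.eye(n)])    # equality constraint v - w = r
res = linprog(cost, A_eq=A_eq, b_eq=r, bounds=[(0, None)] * (2 * n))
v, w = res.x[:n], res.x[n:]
```

At the optimum, v holds max(r, 0) and w holds max(−r, 0), so the objective is ‖r‖₁; this is why the elastic subproblem collapses to the ℓ1-penalized form once v − w is eliminated.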

23 | Benchmarking Optimization Software with COPS
- Dolan, Moré
- 2001
Citation Context ...s used for the runs discussed next. 8. Numerical results. This section summarizes the performance of the two LCLOPT implementations on a set of nonlinearly constrained test problems from the COPS 2.0 [15], Hock–Schittkowski [30], and CUTE [5] test suites. As a benchmark, we also give results for AMPL/MINOS 19981015 on all the test problems. We used the AMPL versions of all problems, as formulated by V...

17 |
Convergence properties of an augmented lagrangian algorithm for optimization with a combination of general equality and linear constraints
- Toint
- 1996
Citation Context ...naries. We need the following lemma to bound the errors in the least-squares multiplier estimates relative to ‖x∗k − x∗‖, the error in x∗k. The lemma follows from Lemmas 2.1 and 4.4 of Conn et al. [9]. It simply demonstrates that ŷ(x) is Lipschitz continuous in a neighborhood of x∗. Algorithm 2: Stabilized LCL. Input: x0, y0, z0. Output: x∗, y∗, z∗. Se...

14 |
On the number of inner iterations per outer iteration of a globally convergent algorithm for optimization with general nonlinear equality constraints and simple bounds
- Toint
- 1992
Citation Context ...and the Newton equations hints at the possibility of recovering the quadratic convergence rate of the LCL and sLCL methods by reducing ωk at the rate O(‖F(xk, yk, zk)‖). See also Conn, Gould, and Toint [12]. We note, however, that stronger assumptions may be needed on the smoothness of the nonlinear functions. This issue deserves more study. 9. Conclusions. The stabilized LCL method developed in this pa...

9 |
Private communication
- Gill
- 2002
Citation Context ...ty tolerance within the 5000 iteration limit. Rather than using a higher limit, LCLOPT restarts SNOPT several times on the same subproblem—a strategy that sometimes reduces the total minor iterations [23]. 8.3. The Hock–Schittkowski (HS) test problems. The HS test suite contains 86 nonlinearly constrained problems [30]. These are generally small and dense problems. We exclude five problems from this s...

2 |
A globally and quadratically convergent algorithm for general nonlinear programming problems
- Best, Bräuninger, et al.
- 1981

2 |
A modification of Robinson’s algorithm for general nonlinear programming problems requiring only approximate solutions of subproblems with linear equality constraints
- Bräuninger
- 1977
Citation Context ...e also shows that near a solution, the solutions to the linearly constrained subproblems, if parameterized appropriately, form a continuous path converging to (x∗, y∗, z∗). In a later paper, Bräuninger [6] shows how the fast local convergence rate can be preserved with only approximate solutions of the subproblems (again, with ρk ≡ 0). The subproblems are solved to a tolerance that is tightened at a ra...

1 |
A globally convergent version of Robinson’s algorithm for general nonlinear programming problems without using derivatives
- Bräuninger
- 1981
Citation Context ...ions within the prescribed tolerance ωk. On the other end of the performance spectrum lies the issue of recovering LCL’s fast local convergence rate under inexact solves (cf. section 6.1). Bräuninger [7] proves that the quadratic convergence rate of Robinson’s method is retained when ωk is reduced at a rate O(‖F(xk, yk, zk)‖²) (cf. Theorem 6.2). The first-order KKT conditions (3.1) for the LCL subprob...

1 |
An LCL implementation for nonlinear optimization, presented at
- Friedlander, Saunders
- 2003
Citation Context ...eral NC problems by incorporating them into the sLCL algorithm. 9.2. Looking ahead (and behind). A Fortran 90 implementation of the sLCL algorithm, to be named Knossos, is currently under development [21]. As in Robinson [40] and the MINOS implementation [36], a key concept is departure from linearity (meaning the difference between the constraint functions and their linearization at the current solut...