## Methods for nonlinear constraints in optimization calculations (1996)

Venue: | THE STATE OF THE ART IN NUMERICAL ANALYSIS |

Citations: | 9 - 2 self |

### BibTeX

@INPROCEEDINGS{Conn96methodsfor,
  author    = {Andrew R. Conn and Nicholas I. M. Gould and Philippe L. Toint},
  title     = {Methods for nonlinear constraints in optimization calculations},
  booktitle = {THE STATE OF THE ART IN NUMERICAL ANALYSIS},
  year      = {1996},
  pages     = {363--390},
  publisher = {Clarendon Press}
}

### Citations

341 | Interior point polynomial algorithms in convex programming: Theory and Algorithms - Nesterov, Nemirovskii - 1994 |

269 | A limited memory algorithm for bound constrained optimization - Byrd, Lu, et al. - 1995 |

235 | Inexact Newton methods - Dembo, Eisenstat, et al. - 1982 |

193 | Nonlinear Programming - Mangasarian - 1969 |

177 | Multiplier and gradient methods - Hestenes - 1969 |
Citation Context: ...ty constrained problem by minimizing a sequence of problems of the form $\Phi(x, y, p) = f(x) - y^T c(x) + p\,\|c(x)\|_2^2$, (4.2), where $y$ are estimates of the Lagrange multipliers and $p$ a positive penalty parameter (see Hestenes, 1969, and Powell, 1970). Convergence is assured by adjusting $y$ and $p$, and it is not necessary for $p$ to approach infinity. When first-order multiplier updates are used, the minimizers of (4.2) converge lin... |
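The augmented-Lagrangian scheme described in this context can be sketched in a few lines. The sketch below is illustrative only: the test problem, step size, and iteration counts are my own assumptions, not from the paper. It approximately minimizes (4.2) by gradient descent for fixed $(y, p)$, then applies the first-order multiplier update $y \leftarrow y - 2p\,c(x)$; note that $p$ stays fixed throughout, matching the remark that $p$ need not approach infinity.

```python
import numpy as np

# Illustrative problem (an assumption, not from the paper):
#   minimize f(x) = x1^2 + x2^2  subject to  c(x) = x1 + x2 - 1 = 0,
# whose solution is x = (0.5, 0.5) with Lagrange multiplier y = 1.
f_grad = lambda x: 2.0 * x                       # gradient of f
c = lambda x: np.array([x[0] + x[1] - 1.0])      # equality constraint
A = lambda x: np.array([[1.0, 1.0]])             # Jacobian of c

def augmented_lagrangian(x0, y0, p=10.0, outer=25, inner=200, step=0.02):
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for _ in range(outer):
        # Inner loop: approximately minimize (4.2),
        #   Phi(x) = f(x) - y^T c(x) + p * ||c(x)||_2^2,
        # for fixed (y, p), here by plain gradient descent.
        for _ in range(inner):
            g = f_grad(x) - A(x).T @ y + 2.0 * p * (A(x).T @ c(x))
            x = x - step * g
        y = y - 2.0 * p * c(x)   # first-order multiplier update
    return x, y

x, y = augmented_lagrangian([0.0, 0.0], [0.0])
```

With the penalty parameter held at $p = 10$, the multiplier updates alone drive $c(x) \to 0$, illustrating the linear convergence of the minimizers mentioned above.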

149 | A nonmonotone line search technique for Newton’s method - Grippo, Lampariello, et al. - 1986 |

139 | On automatic differentiation - Griewank - 1989 |
Citation Context: ...ore attention from the numerical analysis community. The promise of automatic differentiation, that is the automatic accumulation of derivatives directly from codes which provide function values (see Griewank, 1989), has been a long time coming. Dixon (1991) argues that automatic differentiation will revitalize second derivative methods, as there is then little reason to rely on secant approximations. This has ... |

115 | Sequential quadratic programming - Boggs, Tolle - 1995 |

96 | LANCELOT: A Fortran Package for Large-Scale Nonlinear Optimization (Release A) - Toint - 1991 |
Citation Context: ...es capable of handling such problems, CONOPT (Drud, 1985) and LSGRG2 (Smith and Lasdon, 1992) are generalized reduced gradient methods, MINOS (Murtagh and Saunders, 1982) and LANCELOT (Conn, Gould and Toint, 1992e) are based on augmented Lagrangian functions, ETR (Lalee, Nocedal and Plantega, 1993) is an SQP method for equality constraints, while that by Boggs et al. (1994) is a general SQP method. 8 Conclusi... |

90 | Linear Network Optimization: Algorithms and Codes - Bertsekas - 1991 |

82 | A globally convergent method for nonlinear programming - Han - 1977 |
Citation Context: ... preclude a natural choice, and most merit functions are attempts to balance these goals. Early globally convergent SQP methods were based upon the $\ell_1$ exact penalty function (see Pshenichny, 1970, Han, 1977, and Powell, 1978). So long as the penalty parameter $p$ is sufficiently large, the iteration (2.2) converges globally with many of the Hessian approximations discussed in Section 2.1.1. However, despi... |

74 | Large-Scale Linearly Constrained Optimization - Murtagh, Saunders - 1978 |

72 | On the Goldstein-Levitin-Polyak gradient projection method - Bertsekas - 1976 |
Citation Context: ...yak, 1966) simply chooses iterates according to $x^+ = P_\Omega[x - \alpha_k \nabla_x f(x)]$, (5.4), where $\Omega$ is the set of feasible points, $P_\Omega[v]$ is the projection of $v$ into $\Omega$ and $\alpha_k$ is a suitable stepsize (see, for instance, Bertsekas, 1976, or Dunn, 1981). When the constraints are simple bounds, $\Omega = \{x : l \le x \le u\}$, and the projection is easily computed as $P_\Omega[v] = \operatorname{mid}(l, v, u)$, where mid denotes the vector whose components are the medi... |
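For simple bounds, the median projection makes the gradient-projection iteration described here a one-liner per step. A minimal sketch follows; the quadratic test problem, fixed stepsize, and iteration count are my own assumptions for illustration.

```python
import numpy as np

def project_box(v, l, u):
    # P_Omega[v] = mid(l, v, u): the componentwise median of (l, v, u),
    # equivalently clipping v onto the box {x : l <= x <= u}.
    return np.minimum(np.maximum(v, l), u)

def gradient_projection(grad_f, x0, l, u, alpha=0.1, iters=100):
    x = np.asarray(x0, float)
    for _ in range(iters):
        # x+ = P_Omega[x - alpha * grad f(x)], with a fixed stepsize
        x = project_box(x - alpha * grad_f(x), l, u)
    return x

# Illustrative problem: min ||x - (2, -3)||^2 over the box [0, 1]^2.
# The solution is the projection of (2, -3) onto the box, namely (1, 0);
# note the active set {both bounds} is identified after a few iterations.
grad_f = lambda x: 2.0 * (x - np.array([2.0, -3.0]))
x = gradient_projection(grad_f, [0.5, 0.5], np.zeros(2), np.ones(2))
```

A fixed stepsize is used here only to keep the sketch short; practical codes choose $\alpha_k$ by a line search or trust-region rule.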

71 | On the Solution of Large Quadratic Programming Problems with Bound Constraints - Moré, Toraldo - 1991 |

67 | On projected Newton barrier methods for linear programming and an equivalence to Karmarkar’s projective method - Gill, Murray, et al. - 1986 |

64 | A Trust Region Strategy for Nonlinear Equality Constrained Optimization, Numerical Optimization - Celis, Dennis, et al. - 1985 |

62 | Global convergence of a class of trust region algorithms for optimization with simple bounds - Toint - 1988 |

58 | A projected Lagrangian algorithm and its implementation for sparse nonlinear constraints - Murtagh, Saunders - 1982 |
Citation Context: ...sonable times on current desktop computers. Of current codes capable of handling such problems, CONOPT (Drud, 1985) and LSGRG2 (Smith and Lasdon, 1992) are generalized reduced gradient methods, MINOS (Murtagh and Saunders, 1982) and LANCELOT (Conn, Gould and Toint, 1992e) are based on augmented Lagrangian functions, ETR (Lalee, Nocedal and Plantega, 1993) is an SQP method for equality constraints, while that by Boggs et al.... |

54 | On the identification of active constraints - Burke, Moré - 1988 |

45 | The watchdog technique for forcing convergence in algorithms for constrained optimization - Chamberlain, Powell, et al. |

44 | A GRG Code for Large Sparse Dynamic Nonlinear Optimization - Drud - 1985 |
Citation Context: ... problems involving, say, 20,000 unknowns and similar numbers of constraints can be solved in reasonable times on current desktop computers. Of current codes capable of handling such problems, CONOPT (Drud, 1985) and LSGRG2 (Smith and Lasdon, 1992) are generalized reduced gradient methods, MINOS (Murtagh and Saunders, 1982) and LANCELOT (Conn, Gould and Toint, 1992e) are based on augmented Lagrangian functio... |

42 | A global convergence theory for general trust-region-based algorithms for equality constrained optimization - Dennis, El-Alem, et al. - 1992 |

40 | A trust-region algorithm for nonlinearly constrained optimization - Byrd, Schnabel, et al. - 1985 |

39 | On the implementation of an algorithm for large-scale equality constrained optimization - Lalee, Nocedal, et al. |

36 | Global and asymptotic convergence rate estimates for a class of projected gradient processes - Dunn - 1981 |

34 | Projected Hessian updating algorithms for nonlinearly constrained optimization - Nocedal, Overton - 1985 |
Citation Context: ... should be set to zero. This gives what is known as a reduced Hessian method. With an appropriate secant update formula, such a scheme is a two-step superlinearly convergent method so long as $\alpha_k = 1$ (Nocedal and Overton, 1985). A related reduced Hessian method, due to Coleman and Conn (1982a), replaces (2.5) by $A(x) Y(x) \Delta x_Y = -c(x + Z(x) \Delta x_Z)$, (2.7), in the vicinity of a stationary point, or $\Delta x_Y = 0$ elsewhere. This method is als... |

33 | A reduced Hessian method for large-scale constrained optimization - Biegler, Nocedal, et al. - 1995 |

31 | On the maximization of a concave quadratic function with box constraints - Friedlander, Martínez - 1994 |

31 | Inertiacontrolling methods for general quadratic programming - Gill, Murray, et al. - 1991 |

30 | Nonlinear programming and nonsmooth optimization by successive linear programming - Fletcher, Sainz de la Maza |

29 | On the accurate determination of search directions for simple differentiable penalty functions - Gould - 1986 |
Citation Context: ...970s, it has been seen in a more favourable light since then. Firstly, perceived difficulties with ill-conditioning were shown to be benign provided sufficient care is taken (Broyden and Attia, 1984, Gould, 1986, Coleman and Hempel, 1990). Secondly, the requirement that (4.1) be minimized is easily relaxed. Moreover, Gould (1989) shows that asymptotically at most two Newton-like steps are required for each v... |

29 | Exact Penalty Function Algorithms for Finite Dimensional and Control Optimization Problems - Maratos |

29 | On combining feasibility, descent and superlinear convergence in inequality constrained optimization - Panier, Tits - 1993 |

28 | On the global convergence of trust region algorithms using inexact gradient information - Carter - 1991 |

26 | A practical anti-cycling procedure for linearly constrained optimization - Gill, Murray, et al. - 1989 |

26 | Convex Analysis and Minimization Algorithms: Part I: Fundamentals - Hiriart-Urruty, Lemaréchal - 1993 |

25 | Exposing constraints - Burke, Moré - 1994 |

25 | Convergence properties of trust region methods for linear and convex constraints - Burke, Moré, et al. - 1990 |

25 | Constrained minimization problems - Levitin, Poljak - 1966 |
Citation Context: ...unds, however, it is far easier to add or delete many constraints at each iteration, and the best mechanism for achieving this is the gradient projection algorithm. The gradient projection algorithm (Levitin and Polyak, 1966) simply chooses iterates according to $x^+ = P_\Omega[x - \alpha_k \nabla_x f(x)]$, (5.4), where $\Omega$ is the set of feasible points, $P_\Omega[v]$ is the projection of $v$ into $\Omega$ and $\alpha_k$ is a suitable stepsize (see, for instance, Bertsekas... |

22 | A Practical Algorithm for General Large Scale Nonlinear Optimization Problems - Boggs, Kearsley, et al. - 1994 |

22 | A superlinearly convergent algorithm for constrained optimization problems - Mayne, Polak - 1982 |

21 | Nonlinear programming via an exact penalty function: asymptotic analysis - Coleman, Conn - 1982 |
Citation Context: ...problem of minimizing (2.18) may be reformulated as a quadratic programming problem, and has the desirable property that the subproblem is always consistent (this is also implicit in the algorithm of Coleman and Conn, 1982b). Nonetheless, Fletcher (1982) and Yuan (1985) observe that the Maratos "effect" may still occur if the search direction is computed by minimizing (2.18) but can be prevented if a second-order corr... |

21 | Robust trust-region algorithm with non-monotonic penalty parameter scheme for constrained optimization - El-Alem - 1992 |

21 | On the identification of active constraints. II. The nonconvex case - Burke - 1990 |

20 | An analysis of reduced Hessian methods for constrained optimization - Byrd, Nocedal - 1991 |

20 | Numerical Stability and Efficiency of Penalty Algorithms - Dussault - 1995 |

20 | A class of methods for nonlinear programming with termination and convergence properties - Fletcher - 1970 |

20 | A sequential quadratic programming algorithm using an incomplete solution of the subproblem - Murray, Prieto - 1995 |

19 | Avoiding the Maratos effect by means of a nonmonotone line search II: Inequality problems - feasible iterates - Bonnans, Panier, et al. - 1992 |

19 | A B-differentiable equation-based, globally and locally quadratically convergent algorithm for nonlinear programs, complementarity and variational inequality problems - Pang - 1991 |