## Methods for nonlinear constraints in optimization calculations (1996)

Venue: The State of the Art in Numerical Analysis

Citations: 9 (2 self)

### BibTeX

```bibtex
@INPROCEEDINGS{Conn96methodsfor,
  author    = {Andrew R. Conn and Nicholas I. M. Gould and Philippe L. Toint},
  title     = {Methods for nonlinear constraints in optimization calculations},
  booktitle = {The State of the Art in Numerical Analysis},
  year      = {1996},
  pages     = {363--390},
  publisher = {Clarendon Press}
}
```

### Citations

374 | Interior-point polynomial algorithms in convex programming, volume 13 - Nesterov, Nemirovskii - 1994 |

307 | A limited memory algorithm for bound constrained optimization - Byrd, Lu, et al. - 1995 |

254 | Inexact-Newton methods - Dembo, Eisenstat, et al. - 1982 |

218 | Nonlinear Programming - Mangasarian - 1994 |

188 | Multiplier and gradient methods - Hestenes - 1969

Citation Context: ...ity constrained problem by minimizing a sequence of problems of the form $\Phi(x, y, \rho) = f(x) - y^{T}c(x) + \rho\,\|c(x)\|_2^2$, (4.2) where $y$ are estimates of the Lagrange multipliers and $\rho$ is a positive penalty parameter (see Hestenes, 1969, and Powell, 1970). Convergence is assured by adjusting $y$ and $\rho$, and it is not necessary for $\rho$ to approach infinity. When first-order multiplier updates are used, the minimizers of (4.2) converge lin... |
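The augmented Lagrangian iteration sketched in this context can be illustrated in a few lines. The test problem, step size, and iteration counts below are assumptions chosen for demonstration, not taken from the paper:

```python
# Illustrative sketch of the augmented Lagrangian iteration behind (4.2):
#   Phi(x, y, rho) = f(x) - y*c(x) + rho*||c(x)||^2
# Assumed demo problem: min x1^2 + x2^2  s.t.  c(x) = x1 + x2 - 1 = 0.

def c(x):
    return x[0] + x[1] - 1.0

def grad_phi(x, y, rho):
    # grad Phi = grad f - (y - 2*rho*c(x)) * grad c, with f = x1^2 + x2^2
    t = y - 2.0 * rho * c(x)
    return [2.0 * x[0] - t, 2.0 * x[1] - t]

def minimize_phi(x, y, rho, step=0.05, iters=2000):
    # crude inner minimization of Phi by gradient descent
    for _ in range(iters):
        g = grad_phi(x, y, rho)
        x = [x[0] - step * g[0], x[1] - step * g[1]]
    return x

x, y, rho = [0.0, 0.0], 0.0, 1.0   # rho held fixed: it need not go to infinity
for _ in range(20):
    x = minimize_phi(x, y, rho)
    y = y - 2.0 * rho * c(x)       # first-order multiplier update
print(x, y)                        # x close to [0.5, 0.5], y close to 1.0
```

With $\rho$ fixed at 1, the multiplier estimates converge linearly on this particular problem (with rate 1/3), matching the linear convergence of first-order multiplier updates mentioned in the context.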

172 | A nonmonotone line search technique for Newton’s method - Grippo, Lampariello, et al. - 1986 |

141 | On automatic differentiation - Griewank - 1988

Citation Context: ...ore attention from the numerical analysis community. The promise of automatic differentiation, that is, the automatic accumulation of derivatives directly from codes which provide function values (see Griewank, 1989), has been a long time coming. Dixon (1991) argues that automatic differentiation will revitalize second derivative methods, as there is then little reason to rely on secant approximations. This has ... |

121 | Sequential quadratic programming - Boggs, Tolle - 1995 |

103 | LANCELOT: a Fortran package for large-scale nonlinear optimization (Release A). Number 17 - Conn, Gould, Toint - 1992

Citation Context: ...es capable of handling such problems CONOPT (Drud, 1985) and LSGRG2 (Smith and Lasdon, 1992) are generalized reduced gradient methods, MINOS (Murtagh and Saunders, 1982) and LANCELOT (Conn, Gould and Toint, 1992) are based on augmented Lagrangian functions, ETR (Lalee, Nocedal and Plantenga, 1993) is an SQP method for equality constraints, while that by Boggs et al. (1994) is a general SQP method. ... |

99 | Linear network optimization: algorithms and codes - Bertsekas - 1991 |

94 | A Globally Convergent Method for Nonlinear Programming - Han - 1977

Citation Context: ...preclude a natural choice, and most merit functions are attempts to balance these goals. Early globally convergent SQP methods were based upon the $\ell_1$ exact penalty function (see Pshenichny, 1970, Han, 1977, and Powell, 1978). So long as the penalty parameter $\rho$ is sufficiently large, the iteration (2.2) converges globally with many of the Hessian approximations discussed in Section 2.1.1. However, despi... |

89 | Large-scale linearly constrained optimisation - Murtagh, Saunders - 1978 |

81 | On the Solution of Large Quadratic Programming Problems with Bound Constraints - Moré, Toraldo - 1991 |

79 | On the Goldstein-Levitin-Polyak gradient projection method - Bertsekas - 1976

Citation Context: ...yak, 1966) simply chooses iterates according to $x^{+} = P_{\Omega}[x - \alpha \nabla f(x)]$, (5.4) where $\Omega$ is the set of feasible points, $P_{\Omega}[v]$ is the projection of $v$ into $\Omega$ and $\alpha$ is a suitable stepsize (see, for instance, Bertsekas, 1976, or Dunn, 1981). When the constraints are simple bounds, $\Omega = \{x : l \le x \le u\}$, and the projection is easily computed as $P_{\Omega}[v] = \operatorname{mid}(l, v, u)$, where mid denotes the vector whose components are the medi... |
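The bound-constrained projection described here is cheap enough to show in full. A minimal sketch, with an assumed objective, bounds, and step size chosen purely for illustration:

```python
# Illustrative sketch of the gradient projection iteration
#   x+ = P_Omega[x - alpha * grad f(x)],  with P_Omega[v] = mid(l, v, u)
# for simple bounds. The demo objective, bounds, and step size are assumptions.

def mid(l, v, u):
    # componentwise median of (l, v, u): the projection onto the box [l, u]
    return [min(max(li, vi), ui) for li, vi, ui in zip(l, v, u)]

def gradient_projection(grad, x, l, u, alpha=0.1, iters=200):
    for _ in range(iters):
        trial = [xi - alpha * gi for xi, gi in zip(x, grad(x))]
        x = mid(l, trial, u)
    return x

# min (x1 - 2)^2 + (x2 + 1)^2 over the box 0 <= x <= 1: solution (1, 0)
grad = lambda x: [2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)]
x = gradient_projection(grad, [0.5, 0.5], [0.0, 0.0], [1.0, 1.0])
print(x)   # -> [1.0, 0.0], both bounds active at the solution
```

Note how the iteration can add or drop several active bounds in a single projection, which is the advantage over one-at-a-time active-set updates that the context alludes to.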

73 | On projected Newton barrier methods for linear programming and an equivalence to Karmarkar’s projective method - Gill, Murray, et al. - 1986 |

68 | A projected Lagrangian algorithm and its implementation for sparse nonlinear constraints - Murtagh, Saunders - 1982

Citation Context: ...sonable times on current desktop computers. Of current codes capable of handling such problems CONOPT (Drud, 1985) and LSGRG2 (Smith and Lasdon, 1992) are generalized reduced gradient methods, MINOS (Murtagh and Saunders, 1982) and LANCELOT (Conn, Gould and Toint, 1992) are based on augmented Lagrangian functions, ETR (Lalee, Nocedal and Plantenga, 1993) is an SQP method for equality constraints, while that by Boggs et al.... |

67 | Global convergence of a class of trust region algorithms for optimization with simple bounds - Toint |

65 | A trust region strategy for nonlinear equality constrained optimization - Celis, Dennis, et al. - 1984 |

54 | On the identification of active constraints - Burke, Moré - 1988 |

51 | The watchdog technique for forcing convergence in algorithms for constrained optimization - Chamberlain, Lemarechal, et al. - 1982 |

49 | CONOPT: a GRG code for large sparse dynamic nonlinear optimization problems - Drud - 1985

Citation Context: ...problems involving, say, 20,000 unknowns and similar numbers of constraints can be solved in reasonable times on current desktop computers. Of current codes capable of handling such problems CONOPT (Drud, 1985) and LSGRG2 (Smith and Lasdon, 1992) are generalized reduced gradient methods, MINOS (Murtagh and Saunders, 1982) and LANCELOT (Conn, Gould and Toint, 1992) are based on augmented Lagrangian functio... |

47 | A global convergence theory for general trust-region-based algorithms for equality constrained optimization - Dennis, El-Alem, et al. - 1997 |

41 | A trust region algorithm for nonlinearly constrained optimization - Byrd, Schnabel, et al. - 1987 |

41 | On the implementation of an algorithm for large-scale equality constrained optimization - Lalee, Nocedal, et al. - 1998 |

38 | A reduced Hessian method for large-scale constrained optimization - Biegler, Nocedal, et al. - 1995 |

36 | Global and asymptotic convergence rate estimates for a class of projected gradient processes - Dunn - 1981 |

36 | Projected Hessian updating algorithms for nonlinearly constrained optimization - Nocedal, Overton - 1985

Citation Context: ...should be set to zero. This gives what is known as a reduced Hessian method. With an appropriate secant update formula, such a scheme is a two-step superlinearly convergent method so long as $\alpha_k = 1$ (Nocedal and Overton, 1985). A related reduced Hessian method, due to Coleman and Conn (1982a), replaces (2.5) by $A(x)Y(x)\Delta x_{Y} = -c(x + Z \Delta x_{Z})$, (2.7) in the vicinity of a stationary point, or $\Delta x_{Y} = 0$ elsewhere. This method is als... |

34 | On the maximization of a concave quadratic function with box constraints - Friedlander, Martínez - 1994 |

33 | Inertia-controlling methods for general quadratic programming - Gill, Murray, et al. - 1991 |

32 | On combining feasibility, descent and superlinear convergence in inequality constrained optimization - Panier, Tits - 1993 |

31 | Nonlinear programming and nonsmooth optimization by successive linear programming - Fletcher, Sainz de la Maza - 1989 |

31 | On the accurate determination of search directions for simple differentiable penalty functions - Gould - 1986

Citation Context: ...970s, it has been seen in a more favourable light since then. Firstly, perceived difficulties with ill-conditioning were shown to be benign provided sufficient care is taken (Broyden and Attia, 1984, Gould, 1986, Coleman and Hempel, 1990). Secondly, the requirement that (4.1) be minimized is easily relaxed. Moreover, Gould (1989) shows that asymptotically at most two Newton-like steps are required for each v... |

31 | Constrained minimization problems - Levitin, Polyak - 1966

Citation Context: ...unds, however, it is far easier to add or delete many constraints at each iteration, and the best mechanism for achieving this is the gradient projection algorithm. The gradient projection algorithm (Levitin and Polyak, 1966) simply chooses iterates according to $x^{+} = P_{\Omega}[x - \alpha \nabla f(x)]$, (5.4) where $\Omega$ is the set of feasible points, $P_{\Omega}[v]$ is the projection of $v$ into $\Omega$ and $\alpha$ is a suitable stepsize (see, for instance, Bertsekas... |

30 | A practical anti-cycling procedure for linearly constrained optimization - Gill, Murray, et al. - 1989 |

30 | Exact penalty function algorithms for finite dimensional and control optimization problems - Maratos - 1978 |

29 | On the global convergence of trust region algorithms using inexact gradient information - Carter - 1991 |

27 | Convergence properties of trust region methods for linear and convex constraints - Burke, Moré, et al. - 1990 |

27 | Convex Analysis and Minimization Algorithms. Part 1: Fundamentals - Hiriart-Urruty, Lemaréchal - 1993 |

25 | A sequental quadratic programming algorithm using an incomplete solution of the subproblem - Murray, Prieto - 1995 |

24 | Exposing constraints - Burke, Moré - 1994 |

24 | A superlinearly convergent algorithm for constrained optimization problems - Mayne, Polak - 1982 |

23 | A practical algorithm for general large scale nonlinear optimization problems - Boggs, Kearsley, et al. - 1999 |

22 | Avoiding the maratos effect by means of a nonmonotone line search i. general constrained problems - Panier, Tits - 1991 |

22 | Numerical stability and efficiency of penalty algorithms - Dussault - 1995 |

22 | A general quadratic programming algorithm - Fletcher - 1971 |

21 | Nonlinear programming via an exact penalty function: asymptotic analysis - Coleman, Conn - 1982

Citation Context: ...problem of minimizing (2.18) may be reformulated as a quadratic programming problem, and has the desirable property that the subproblem is always consistent (this is also implicit in the algorithm of Coleman and Conn, 1982). Nonetheless, Fletcher (1982) and Yuan (1985) observe that the Maratos “effect” may still occur if the search direction is computed by minimizing (2.18) but can be prevented if a second-order corr... |

21 | Robust trust-region algorithm with non-monotonic penalty parameter scheme for constrained optimization - El-Alem - 1992 |

21 | A class of methods for nonlinear programming with termination and convergence properties - Fletcher - 1970 |

21 | An algorithm for large-scale quadratic programming - Gould - 1991 |

21 | On the identification of active constraints. II. The nonconvex case - Burke - 1990 |