Results 1–4 of 4
An Algorithm for Nonlinear Optimization Using Linear Programming and Equality Constrained Subproblems
, 2003
Abstract

Cited by 39 (12 self)
This paper describes an active-set algorithm for large-scale nonlinear programming based on the successive linear programming method proposed by Fletcher and Sainz de la Maza [10]. The step computation is performed in two stages. In the first stage a linear program is solved to estimate the active set at the solution. The linear program is obtained by making a linear approximation to the ℓ1 penalty function inside a trust region. In the second stage, an equality constrained quadratic program (EQP) is solved involving only those constraints that are active at the solution of the linear program.
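The two-stage LP/EQP step described above can be sketched on a toy problem. This is an illustrative reconstruction under assumptions, not the paper's implementation: the objective, constraint, penalty weight ν, and trust-region radius Δ below are all invented for the example, and the LP is solved with SciPy's `linprog`.

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem (invented for illustration):
#   minimize f(x) = x1^2 + x2^2   subject to   c(x) = x1 + x2 - 1 = 0
x = np.array([0.0, 0.0])
grad = 2 * x                       # ∇f(x)
H = 2 * np.eye(2)                  # ∇²f(x) (exact Hessian for this f)
cval = x[0] + x[1] - 1.0           # constraint value c(x)
a = np.array([1.0, 1.0])           # ∇c(x)
Delta, nu = 1.0, 10.0              # trust-region radius, ℓ1 penalty weight

# Stage 1: LP minimizing the linearized ℓ1 penalty inside an ∞-norm
# trust region. Variables z = (d1, d2, p, q) with c + aᵀd = p - q, p, q ≥ 0,
# so p + q measures the remaining linearized infeasibility.
res = linprog(
    c=np.r_[grad, nu, nu],
    A_eq=np.r_[a, -1.0, 1.0].reshape(1, -1),
    b_eq=[-cval],
    bounds=[(-Delta, Delta)] * 2 + [(0, None)] * 2,
    method="highs",
)
p, q = res.x[2], res.x[3]
active = p + q < 1e-9              # constraint satisfiable linearly → treat as active

# Stage 2: EQP on the constraints deemed active, via the KKT system
#   [H  a] [d]   [-grad]
#   [aᵀ 0] [λ] = [ -c  ]
if active:
    K = np.block([[H, a.reshape(-1, 1)], [a.reshape(1, -1), np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.r_[-grad, -cval])
    d = sol[:2]
    x_new = x + d                  # → (0.5, 0.5), the minimizer on the line
```

Here the LP identifies the single equality constraint as active (it can be satisfied within the trust region, so the elastic variables p, q are driven to zero), and the EQP step then lands exactly on the constrained minimizer.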
An active-set algorithm for nonlinear programming using linear programming and equality constrained subproblems
, 2002
Abstract

Cited by 5 (1 self)
This paper describes an active-set algorithm for large-scale nonlinear programming based on the successive linear programming method proposed by Fletcher and Sainz de la Maza [9]. The step computation is performed in two stages. In the first stage a linear program is solved to estimate the active set at the solution. The linear program is obtained by making a linear approximation to the ℓ1 penalty function inside a trust region. In the second stage, an equality constrained quadratic program (EQP) is solved involving only those constraints that are active at the solution of the linear program. The EQP incorporates a trust-region constraint and is solved (inexactly) by means of a projected conjugate gradient method. Numerical experiments are presented illustrating the performance of the algorithm on the CUTEr [1] test set.
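The projected conjugate gradient solve mentioned for the EQP can be sketched generically. This is a standard projected CG iteration in the style of Gould, Hribar, and Nocedal on a small invented problem, not the paper's code; the trust-region safeguard is omitted for brevity.

```python
import numpy as np

# Illustrative EQP (all data invented):  min ½ dᵀH d + gᵀd   s.t.   A d = b
H = np.diag([2.0, 3.0, 4.0])
A = np.array([[1.0, 1.0, 1.0]])
g = np.array([1.0, 0.0, -1.0])
b = np.array([1.0])

def project(v):
    """Orthogonal projection of v onto the null space of A."""
    return v - A.T @ np.linalg.solve(A @ A.T, A @ v)

# Start from the minimum-norm feasible point; every iterate then stays
# feasible because search directions lie in the null space of A.
d = A.T @ np.linalg.solve(A @ A.T, b)
r = H @ d + g                      # gradient of the quadratic at d
z = project(r)                     # projected residual
p = -z
for _ in range(len(g)):
    if np.linalg.norm(z) < 1e-12:  # projected residual ≈ 0 → optimal
        break
    Hp = H @ p
    alpha = (r @ z) / (p @ Hp)     # exact minimizer along p
    d = d + alpha * p
    r_new = r + alpha * Hp
    z_new = project(r_new)
    beta = (r_new @ z_new) / (r @ z)
    p = -z_new + beta * p
    r, z = r_new, z_new
```

In exact arithmetic this terminates in at most dim(null A) = 2 iterations here; the point of the projected form is that only products with H and solves with the small matrix AAᵀ are needed, which is what makes an inexact large-scale EQP solve practical.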
Recursive trust-region methods for multiscale nonlinear optimization
 SIAM J. Optim
Abstract

Cited by 5 (1 self)
Abstract. A class of trust-region methods is presented for solving unconstrained nonlinear and possibly nonconvex discretized optimization problems, like those arising in systems governed by partial differential equations. The algorithms in this class make use of the discretization level as a means of speeding up the computation of the step. This use is recursive, leading to true multilevel/multiscale optimization methods reminiscent of multigrid methods in linear algebra and the solution of partial differential equations. A simple algorithm of the class is then described and its numerical performance is shown to be promising. This observation then motivates a proof of global convergence to first-order stationary points on the fine grid that is valid for all algorithms in the class.
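The recursive use of coarser levels can be illustrated with a minimal sketch in the spirit of such methods, not the paper's algorithm: minimizing a discretized 1-D quadratic, where part of each step is computed on coarser grids via restriction and prolongation, with a Cauchy (steepest-descent) step at the coarsest level and an ∞-norm trust region. The grid sizes, trust-region radius, and right-hand side are invented for the example.

```python
import numpy as np

def laplacian(n):
    """1-D Dirichlet Laplacian on n interior points of (0, 1)."""
    h = 1.0 / (n + 1)
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def restrict(v):
    """Full-weighting restriction: 2m+1 fine points -> m coarse points."""
    return 0.25 * v[:-2:2] + 0.5 * v[1:-1:2] + 0.25 * v[2::2]

def prolong(v):
    """Linear interpolation: m coarse points -> 2m+1 fine points."""
    w = np.zeros(2 * len(v) + 1)
    w[1:-1:2] = v
    w[:-2:2] += 0.5 * v
    w[2::2] += 0.5 * v
    return w

def rtr_step(g, Delta, level):
    """One recursive step for the model m(d) = gᵀd + ½ dᵀA d,
    with A the rediscretized Laplacian at each level."""
    A = laplacian(len(g))
    if level == 0:
        d = -(g @ g) / (g @ (A @ g)) * g          # Cauchy step on coarsest grid
    else:
        # Coarse phase: restrict the gradient, recurse, prolongate the step,
        # then line-search along the prolonged direction (always a descent move).
        p = prolong(rtr_step(restrict(g), Delta, level - 1))
        d = -(g @ p) / (p @ (A @ p)) * p
        # Fine "smoothing" phase: one Cauchy step on the corrected gradient.
        g2 = g + A @ d
        d = d - (g2 @ g2) / (g2 @ (A @ g2)) * g2
    # Enforce the ∞-norm trust region by scaling the step back if needed.
    nrm = np.linalg.norm(d, np.inf)
    return d if nrm <= Delta else d * (Delta / nrm)

# Drive the recursion on  min ½ xᵀA x − bᵀx  over a 15 → 7 → 3 point hierarchy.
n = 15
A, b = laplacian(n), np.ones(n)
x = np.zeros(n)
for _ in range(20):
    x = x + rtr_step(A @ x - b, Delta=10.0, level=2)
```

Because the right-hand side is smooth, the coarse correction captures most of the error cheaply, which is exactly the speed-up the abstract attributes to exploiting the discretization hierarchy; the ratio test for accepting or rejecting steps is omitted here since the quadratic model is exact for this objective.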
Relaxing Convergence Conditions To Improve The Convergence Rate
, 1999
Abstract

Cited by 3 (0 self)
Standard global convergence proofs are examined to determine why some algorithms perform better than other algorithms. We show that relaxing the conditions required to prove global convergence can improve an algorithm's performance. Further analysis indicates that minimizing an estimate of the distance to the minimum relaxes the convergence conditions in such a way as to improve an algorithm's convergence rate. A new line-search algorithm based on these ideas is presented that does not force a reduction in the objective function at each iteration, yet it allows the objective function to increase during an iteration only if this will result in faster convergence. Unlike the nonmonotone algorithms in the literature, these new conditions dynamically adjust to account for the shifting balance between curvature and descent. The result is an optimal algorithm in the sense that an estimate of the distance to the minimum is minimized at each iteration. The algorithm is shown to be well defi...
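A line search that tolerates objective increases can be sketched with the classical Grippo–Lampariello–Lucidi nonmonotone Armijo rule. Note this is the fixed-window variant from the earlier literature that the abstract contrasts itself with, not the paper's dynamically adjusting conditions; the test function and parameters are invented for the example.

```python
import numpy as np

def f(x):
    """Ill-conditioned quadratic test function (invented for the example)."""
    return x[0]**2 + 100.0 * x[1]**2

def grad(x):
    return np.array([2.0 * x[0], 200.0 * x[1]])

def nonmonotone_descent(x, M=10, c=1e-4, iters=2000):
    """Steepest descent with a nonmonotone Armijo test (GLL style):
    a trial step is accepted against the MAXIMUM of the last M function
    values, so the objective is allowed to increase on a given iteration."""
    hist = [f(x)]
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:
            break
        t, d = 1.0, -g
        fref = max(hist[-M:])              # reference value, not f(x) itself
        while f(x + t * d) > fref + c * t * (g @ d):
            t *= 0.5                       # backtrack until acceptable
        x = x + t * d
        hist.append(f(x))
    return x

x_star = nonmonotone_descent(np.array([1.0, 1.0]))
```

Setting M = 1 recovers the ordinary monotone Armijo line search; with M > 1 the method can accept longer steps that temporarily raise f, which is the behavior the abstract argues should be permitted only when it buys faster convergence.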