Results 11–20 of 62
Inexact-Restoration Algorithm for Constrained Optimization
 Journal of Optimization Theory and Applications
, 1999
Abstract

Cited by 19 (6 self)
We introduce a new model algorithm for solving nonlinear programming problems. No slack variables are introduced for dealing with inequality constraints. Each iteration of the method proceeds in two phases. In the first phase, feasibility of the current iterate is improved, and in the second phase the objective function value is reduced in an approximate feasible set. The point that results from the second phase is compared with the current point using a nonsmooth merit function that combines feasibility and optimality. This merit function includes a penalty parameter that changes between different iterations. A suitable updating procedure for this penalty parameter is included, by means of which it can be increased or decreased along different iterations. The conditions for feasibility improvement at the first phase and for optimality improvement at the second phase are mild, and large-scale implementations of the resulting method are possible. We prove, under suitable conditions, that ...
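The two-phase iteration described in the abstract can be sketched on a toy problem. This is only an illustrative skeleton, not the paper's algorithm: the routine names (`restore`, `tangent_step`, `merit`), the exact projection used for restoration, and the crude penalty update are all assumptions for the example.

```python
import numpy as np

# Toy two-phase inexact-restoration sketch for
#   min f(x) = x0^2 + x1^2   s.t.   h(x) = x0 + x1 - 1 = 0.
f = lambda x: x[0]**2 + x[1]**2
h = lambda x: x[0] + x[1] - 1.0
a = np.array([1.0, 1.0])               # gradient of the linear constraint

def restore(x):
    """Phase 1: improve feasibility (here: exact projection onto h = 0)."""
    return x - (h(x) / a.dot(a)) * a

def tangent_step(y, alpha=0.5):
    """Phase 2: reduce f within the (approximately) feasible set."""
    g = 2.0 * y                        # gradient of f
    g_t = g - (g.dot(a) / a.dot(a)) * a   # project gradient onto tangent space
    return y - alpha * g_t

def merit(x, theta):
    """Nonsmooth merit combining optimality and feasibility."""
    return theta * f(x) + (1.0 - theta) * abs(h(x))

x, theta = np.array([3.0, -1.0]), 0.5
for _ in range(50):
    z = tangent_step(restore(x))
    if merit(z, theta) > merit(x, theta) - 1e-12:
        theta *= 0.5                   # crude penalty-parameter update
    x = z

print(np.round(x, 6))  # constrained minimizer is (0.5, 0.5)
```

For this linearly constrained quadratic the restoration phase is exact, so the iterates reach the solution almost immediately; the merit test and penalty update only matter for harder, nonlinear constraints.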
Convergence Properties of an Augmented Lagrangian Algorithm for Optimization with a Combination of General Equality and Linear Constraints
 SIAM Journal on Optimization
, 1996
Abstract

Cited by 17 (0 self)
We consider the global and local convergence properties of a class of augmented Lagrangian methods for solving nonlinear programming problems. In these methods, linear and more general constraints are handled in different ways. The general constraints are combined with the objective function in an augmented Lagrangian. The iteration consists of solving a sequence of subproblems; in each subproblem the augmented Lagrangian is approximately minimized in the region defined by the linear constraints. A subproblem is terminated as soon as a stopping condition is satisfied. The stopping rules that we consider here encompass practical tests used in several existing packages for linearly constrained optimization. Our algorithm also allows different penalty parameters to be associated with disjoint subsets of the general constraints. In this paper, we analyze the convergence of the sequence of iterates generated by such an algorithm and prove global and fast linear convergence as well as showin...
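The subproblem structure described above can be sketched minimally. This is an illustrative stand-in, not the paper's method: the general constraint is folded into an augmented Lagrangian, each subproblem is solved only approximately (here by plain gradient descent), and a first-order multiplier update links the subproblems.

```python
import numpy as np

# Minimal augmented-Lagrangian sketch for
#   min x0^2 + x1^2   s.t.   c(x) = x0 + 2*x1 - 3 = 0.
grad_f = lambda x: 2.0 * x
c = lambda x: x[0] + 2.0 * x[1] - 3.0
grad_c = np.array([1.0, 2.0])

def solve_subproblem(x, lam, rho, iters=200, step=0.05):
    """Approximately minimize f(x) + lam*c(x) + (rho/2)*c(x)**2."""
    for _ in range(iters):
        x = x - step * (grad_f(x) + (lam + rho * c(x)) * grad_c)
    return x

x, lam, rho = np.zeros(2), 0.0, 1.0
for _ in range(20):                    # outer iterations
    x = solve_subproblem(x, lam, rho)  # inexact inner solve
    lam += rho * c(x)                  # first-order multiplier update

print(np.round(x, 4))  # constrained minimizer is (0.6, 1.2)
```

Note that the penalty parameter stays modest here; the multiplier updates, not an exploding penalty, drive the iterates to feasibility, which mirrors the practical appeal of this class of methods.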
A Primal-Dual Algorithm for Minimizing a Non-Convex Function Subject to Bound and Linear Equality Constraints
, 1996
Abstract

Cited by 16 (0 self)
A new primal-dual algorithm is proposed for the minimization of non-convex objective functions subject to simple bounds and linear equality constraints. The method alternates between a classical primal-dual step and a Newton-like step in order to ensure descent on a suitable merit function. Convergence of a well-defined subsequence of iterates is proved from arbitrary starting points. Algorithmic variants are discussed and preliminary numerical results presented.
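The "alternate between two steps to ensure merit-function descent" pattern can be illustrated generically. The paper's actual primal-dual and Newton-like steps are not reproduced here; `fast_step` and `safe_step` are hypothetical stand-ins showing only the control structure: accept the aggressive step when it decreases the merit function, otherwise fall back to a safeguarded descent step.

```python
import numpy as np

def merit(x):                  # toy merit function with minimizer (0, 0)
    return 0.5 * (4.0 * x[0]**2 + x[1]**2)

def grad(x):
    return np.array([4.0 * x[0], x[1]])

def fast_step(x):              # aggressive step: may increase the merit
    return x - 0.6 * grad(x)

def safe_step(x):              # short gradient step: always decreases it here
    return x - 0.1 * grad(x)

x = np.array([0.5, 1.0])
for _ in range(100):
    trial = fast_step(x)
    x = trial if merit(trial) < merit(x) else safe_step(x)

print(np.round(x, 6))          # iterates approach the minimizer (0, 0)
```

Because every accepted step strictly decreases the merit function, the scheme inherits the descent guarantee of the safeguarded step while still exploiting the fast step whenever it helps.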
User’s Guide for SNOPT Version 7: Software for Large-Scale Nonlinear Programming
Abstract

Cited by 13 (0 self)
SNOPT is a general-purpose system for constrained optimization. It minimizes a linear or nonlinear function subject to bounds on the variables and sparse linear or nonlinear constraints. It is suitable for large-scale linear and quadratic programming and for linearly constrained optimization, as well as for general nonlinear programs. SNOPT finds solutions that are locally optimal, and ideally any nonlinear functions should be smooth and users should provide gradients. It is often more widely useful. For example, local optima are often global solutions, and discontinuities in the function gradients can often be tolerated if they are not too close to an optimum. Unknown gradients are estimated by finite differences. SNOPT uses a sequential quadratic programming (SQP) algorithm. Search directions are obtained from QP subproblems that minimize a quadratic model of the Lagrangian function subject to linearized constraints. An augmented Lagrangian merit function is reduced along each search direction to ensure convergence from any starting point.
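The SQP idea in the abstract can be shown in skeletal form. This is a generic textbook sketch, not SNOPT's implementation (no line search, no merit function, dense linear algebra): each step solves the KKT system of a QP that minimizes a quadratic model of the Lagrangian subject to the linearized constraint.

```python
import numpy as np

# Equality-constrained toy problem:  min x0^2 + exp(x1)  s.t.  x0 + x1 = 1.
def grad_f(x):  return np.array([2.0 * x[0], np.exp(x[1])])
def hess_L(x):  return np.diag([2.0, np.exp(x[1])])  # Hessian of f (c is linear)
A = np.array([[1.0, 1.0]])                           # Jacobian of the constraint
def c(x):       return np.array([x[0] + x[1] - 1.0])

x, lam = np.zeros(2), np.zeros(1)
for _ in range(20):
    H, g = hess_L(x), grad_f(x) + A.T @ lam          # Lagrangian gradient
    # KKT system of the QP subproblem: [H A'; A 0] [d; dlam] = [-g; -c]
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, -np.concatenate([g, c(x)]))
    x, lam = x + sol[:2], lam + sol[2:]

print(np.round(x, 4))  # a feasible stationary point of the problem
```

At the solution the constraint residual and the Lagrangian gradient both vanish, which is exactly the optimality condition the QP subproblems are driving toward.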
Computing a Search Direction for Large-Scale Linearly-Constrained Nonlinear Optimization Calculations
, 1993
Abstract

Cited by 12 (8 self)
We consider the computation of Newton-like search directions that are appropriate when solving large-scale linearly-constrained nonlinear optimization problems. We investigate the use of both direct and iterative methods and consider efficient ways of modifying the Newton equations in order to ensure global convergence of the underlying optimization methods. Keywords: large-scale problems, unconstrained optimization, linearly constrained optimization, direct methods, iterative...
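One standard way of "modifying the Newton equations" for global convergence can be sketched as follows. This is a generic regularization (adding a multiple of the identity until a Cholesky factorization succeeds), shown only to illustrate the idea; the paper's own factorization-based modification schemes are not reproduced.

```python
import numpy as np

def modified_newton_direction(H, g, beta=1e-3):
    """Return d solving (H + tau*I) d = -g, with tau increased until
    H + tau*I is positive definite, so d is a descent direction."""
    tau = 0.0
    while True:
        try:
            L = np.linalg.cholesky(H + tau * np.eye(len(g)))
            break
        except np.linalg.LinAlgError:      # factorization failed: not PD
            tau = beta if tau == 0.0 else 10.0 * tau
    # solve via the Cholesky factor: L L' d = -g
    return np.linalg.solve(L.T, np.linalg.solve(L, -g))

H = np.array([[1.0, 0.0], [0.0, -2.0]])    # indefinite Hessian
g = np.array([1.0, 1.0])
d = modified_newton_direction(H, g)
print(d, g @ d < 0)                        # descent direction: g'd < 0
```

The same pattern works with sparse or iterative factorizations; only the test for positive definiteness changes.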
Methods for nonlinear constraints in optimization calculations
 THE STATE OF THE ART IN NUMERICAL ANALYSIS
, 1996
Automatic Differentiation of Nonlinear AMPL Models
 IN AUTOMATIC DIFFERENTIATION OF ALGORITHMS: THEORY, IMPLEMENTATION, AND APPLICATION
, 1991
Abstract

Cited by 10 (9 self)
We describe favorable experience with automatic differentiation of mathematical programming problems expressed in AMPL, a modeling language for mathematical programming. Nonlinear expressions are translated to loop-free code, which makes analytically correct gradients and Jacobians particularly easy to compute: static storage allocation suffices. The nonlinear expressions may either be interpreted or, to gain some execution speed, converted to Fortran or C.
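The core idea of deriving analytically correct derivatives from a recorded expression can be shown with a tiny reverse-mode sketch. AMPL's actual translation to loop-free code is far more elaborate; the `Var` class and `backward` routine here are illustrative only, and the recursive sweep is exponential on shared subexpressions (fine for expression trees like this one).

```python
import math

class Var:
    """A value plus the local partial derivatives to its parents."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0
    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])
    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def sin(v):
    return Var(math.sin(v.value), [(v, math.cos(v.value))])

def backward(node, adjoint=1.0):
    """Propagate adjoints along every path of the expression tree."""
    node.grad += adjoint
    for parent, local in node.parents:
        backward(parent, adjoint * local)

x, y = Var(2.0), Var(3.0)
z = x * y + sin(x)          # record z = x*y + sin(x)
backward(z)
print(x.grad, y.grad)       # dz/dx = y + cos(x),  dz/dy = x
```

Because every local partial is an exact analytic expression, the accumulated gradients are exact up to floating-point rounding, unlike finite differences.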
Active Set Strategies and the LP Dual Active Set Algorithm
, 1996
Abstract

Cited by 9 (4 self)
After a general treatment of primal and dual active set strategies, we present the Dual Active Set Algorithm for linear programming and prove its convergence. An efficient implementation is developed using proximal point approximations. This implementation involves a primal/dual proximal iteration similar to one introduced by Rockafellar, and a new iteration based on optimization of a proximal vector parameter. This proximal parameter optimization problem is well conditioned, leading to rapid convergence of the conjugate gradient method, while the original proximal function is terribly conditioned, leading to almost undetectable convergence of the conjugate gradient method. Limits as a proximal scalar parameter tends to zero are evaluated. Intriguing numerical results are presented for Netlib test problems. Key Words. Linear programming, quadratic programming, active sets, dual method, least squares, proximal point, extrapolation, conjugate gradients, successive overrelaxation ...
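The proximal-point iteration underlying such approximations can be sketched in its simplest form. This is a generic illustration, not the paper's LP-specific machinery: for a convex quadratic f(x) = ½ xᵀQx − bᵀx, the proximal step argmin_z f(z) + ‖z − x‖²/(2ε) has a closed form, and the iterates approach the minimizer of f.

```python
import numpy as np

Q = np.array([[2.0, 0.0],
              [0.0, 0.5]])
b = np.array([1.0, 1.0])
eps = 10.0                             # proximal parameter

x = np.zeros(2)
for _ in range(100):
    # closed-form proximal step: (Q + I/eps) x_new = b + x/eps
    x = np.linalg.solve(Q + np.eye(2) / eps, b + x / eps)

print(np.round(x, 6))  # minimizer solves Q x = b, i.e. (0.5, 2.0)
```

Each step solves a system that is better conditioned than Q itself (the eigenvalues are shifted by 1/ε), which is the conditioning benefit the abstract alludes to.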
Newton methods for large-scale linear inequality-constrained minimization
 SIAM Journal on Optimization
, 1997
Abstract

Cited by 7 (0 self)
Abstract. Newton methods of the line-search type for large-scale minimization subject to linear inequality constraints are discussed. The purpose of the paper is twofold: (i) to give an active-set-type method with the ability to delete multiple constraints simultaneously and (ii) to give a relatively short general convergence proof for such a method. It is also discussed how multiple constraints can be added simultaneously. The approach is an extension of a previous work by the same authors for equality-constrained problems. It is shown how the search directions can be computed without the need to compute the reduced Hessian of the objective function. The convergence analysis states that every limit point of a sequence of iterates satisfies the second-order necessary optimality conditions. Key words. linear inequality-constrained minimization, negative curvature, modified Newton method, symmetric indefinite factorization, large-scale minimization, line-search method
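A simpler relative of such active-set methods, projected gradient for bound constraints, shows how a single step can add or delete several active bounds at once (the projection may move many components onto or off their bounds simultaneously). This sketch is illustrative only and does not reproduce the paper's Newton method.

```python
import numpy as np

# Bound-constrained quadratic:  min 0.5*x@Q@x - b@x   s.t.  x >= 0.
Q = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, -2.0, 3.0])

x = np.ones(3)
step = 1.0 / np.linalg.norm(Q, 2)      # safe step length: 1 / ||Q||_2
for _ in range(500):
    # project the gradient step onto the feasible set x >= 0;
    # this can activate or free several bounds in one iteration
    x = np.maximum(0.0, x - step * (Q @ x - b))

active = x == 0.0                      # bounds identified as active
print(np.round(x, 4), active)          # solution (0.25, 0, 1.5), x1 active
```

Active-set Newton methods refine this idea by taking second-order steps on the free variables while still allowing the working set to change by more than one constraint per iteration.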
A Numerical Comparison Between the LANCELOT and MINOS packages for large-scale nonlinear optimization: the complete results
, 1997
Abstract

Cited by 6 (0 self)
This report complements another paper by the same authors, "A numerical comparison between the LANCELOT and MINOS packages for large-scale nonlinear optimization". It presents the complete numerical results on which the discussion of the MINOS/LANCELOT comparison is based. It is intended mostly for reference. One set of tables lists the dimensions of 913 test problems from the CUTE collection, and a second set reports the performance of both packages problem by problem.

1 Introduction

In Bongartz, Conn, Gould, Saunders and Toint (1997), the authors presented a comparison between the default versions of the LANCELOT and MINOS packages on a set of 913 test problems extracted from the CUTE collection (Bongartz, Conn, Gould and Toint, 1995). That contribution describes the algorithms used by the packages and discusses statistical summaries of the results. Since we believe that complete data should be accessible to interested readers, the present companion report provides...