Results 1–10 of 10
An interior point algorithm for large scale nonlinear programming
SIAM Journal on Optimization, 1999
Abstract

Cited by 74 (17 self)
The design and implementation of a new algorithm for solving large nonlinear programming problems is described. It follows a barrier approach that employs sequential quadratic programming and trust regions to solve the subproblems occurring in the iteration. Both primal and primal-dual versions of the algorithm are developed, and their performance is illustrated in a set of numerical tests. Key words: constrained optimization, interior point method, large-scale optimization, nonlinear programming, primal method, primal-dual method, successive quadratic programming, trust region method.
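The barrier approach summarized in this abstract can be illustrated with a minimal one-dimensional sketch. The toy problem, the damped-Newton inner solve, and the µ schedule below are illustrative assumptions, not the paper's algorithm: minimize (x − 2)² subject to 0 < x < 1 by minimizing the log-barrier function (x − 2)² − µ(log x + log(1 − x)) for a decreasing sequence of barrier parameters µ.

```python
def newton_barrier(mu, x, iters=50):
    # Minimize (x-2)^2 - mu*(log x + log(1-x)) over (0,1) by damped Newton.
    # Only the barrier's first and second derivatives are needed here.
    for _ in range(iters):
        g = 2 * (x - 2) - mu * (1 / x - 1 / (1 - x))       # gradient
        h = 2 + mu * (1 / x**2 + 1 / (1 - x)**2)           # Hessian (> 0)
        step = g / h
        # Backtrack so the iterate stays strictly inside (0, 1).
        while not (0 < x - step < 1):
            step *= 0.5
        x -= step
    return x

x = 0.5
for mu in [1.0, 0.1, 0.01, 1e-4, 1e-6]:
    x = newton_barrier(mu, x)   # warm-start from the previous minimizer
print(x)  # approaches the constrained minimizer x* = 1 as mu -> 0
```

Warm-starting each barrier subproblem from the previous minimizer is what makes path-following schemes of this kind efficient in practice.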
Trust-Region Interior-Point SQP Algorithms for a Class of Nonlinear Programming Problems
SIAM J. Control Optim., 1997
Abstract

Cited by 35 (8 self)
In this paper a family of trust-region interior-point SQP algorithms for the solution of a class of minimization problems with nonlinear equality constraints and simple bounds on some of the variables is described and analyzed. Such nonlinear programs arise e.g. from the discretization of optimal control problems. The algorithms treat states and controls as independent variables. They are designed to take advantage of the structure of the problem. In particular they do not rely on matrix factorizations of the linearized constraints, but use solutions of the linearized state equation and the adjoint equation. They are well suited for large scale problems arising from optimal control problems governed by partial differential equations. The algorithms keep strict feasibility with respect to the bound constraints by using an affine scaling method proposed for a different class of problems by Coleman and Li and they exploit trust-region techniques for equality-constrained optimization...
A Primal-Dual Algorithm for Minimizing a Non-Convex Function Subject to Bound and Linear Equality Constraints
1996
Abstract

Cited by 16 (0 self)
A new primal-dual algorithm is proposed for the minimization of nonconvex objective functions subject to simple bounds and linear equality constraints. The method alternates between a classical primal-dual step and a Newton-like step in order to ensure descent on a suitable merit function. Convergence of a well-defined subsequence of iterates is proved from arbitrary starting points. Algorithmic variants are discussed and preliminary numerical results presented. 1 IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA. Email: arconn@watson.ibm.com 2 Department for Computation and Information, Rutherford Appleton Laboratory, Chilton, Oxfordshire, OX11 0QX, England, EU. Email: nimg@letterbox.rl.ac.uk 3 Current reports available by anonymous ftp from joyousgard.cc.rl.ac.uk (internet 130.246.9.91) in the directory "pub/reports". 4 Department of Mathematics, Facultés Universitaires N.D. de la Paix, 61, rue de Bruxelles, B-5000 Namur, Belgium, EU. Email: pht@ma...
A feasible BFGS interior point algorithm for solving strongly convex minimization problems
SIAM J. Optim., 2000
Abstract

Cited by 13 (1 self)
We propose a BFGS primal-dual interior point method for minimizing a convex function on a convex set defined by equality and inequality constraints. The algorithm generates feasible iterates and consists of computing approximate solutions of the optimality conditions perturbed by a sequence of positive parameters µ converging to zero. We prove that it converges q-superlinearly for each fixed µ. We also show that it is globally convergent to the analytic center of the primal-dual optimal set when µ tends to 0 and strict complementarity holds.
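The perturbed optimality conditions with parameter µ can be sketched on a one-variable toy problem; the problem below and the plain Newton solver are illustrative assumptions, not the paper's BFGS method. For min ½(x + 1)² subject to x ≥ 0, the perturbed KKT system replaces complementarity x·z = 0 by x·z = µ, and its solutions trace the central path toward the optimum (x*, z*) = (0, 1) as µ → 0.

```python
def solve_perturbed_kkt(mu, x=1.0, z=1.0):
    """Newton's method on the mu-perturbed KKT system of
         min 0.5*(x+1)^2  s.t.  x >= 0   (toy problem; solution x*=0, z*=1):
       stationarity:              x + 1 - z = 0
       perturbed complementarity: x * z     = mu
    """
    for _ in range(100):
        r1, r2 = x + 1 - z, x * z - mu
        if abs(r1) + abs(r2) < 1e-12:
            break
        # Solve the 2x2 Newton system [[1, -1], [z, x]] [dx, dz] = -[r1, r2]
        # in closed form (its determinant is x + z > 0).
        dx = (-r2 - x * r1) / (x + z)
        dz = dx + r1
        # Damping keeps the iterates strictly positive (interior).
        a = 1.0
        while x + a * dx <= 0 or z + a * dz <= 0:
            a *= 0.5
        x, z = x + a * dx, z + a * dz
    return x, z

for mu in [1.0, 1e-2, 1e-4, 1e-8]:
    x, z = solve_perturbed_kkt(mu)
    print(mu, x, z)   # x shrinks toward 0, z toward 1, as mu -> 0
```

Each perturbed system has a strictly positive solution, so Newton iterates never need to touch the boundary; exact complementarity is recovered only in the limit µ → 0.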
Superlinear and Quadratic Convergence of Affine-Scaling Interior-Point Newton Methods for Problems with Simple Bounds without Strict Complementarity Assumption
, 1998
Abstract

Cited by 12 (3 self)
A class of affine-scaling interior-point methods for bound constrained optimization problems is introduced whose members are locally q-superlinearly or q-quadratically convergent. It is assumed that the strong...
Newton-KKT interior-point methods for indefinite quadratic programming
Comput. Optim. Appl.
Abstract

Cited by 7 (1 self)
Two interior-point algorithms are proposed and analyzed for the (local) solution of (possibly) indefinite quadratic programming problems. They are of the Newton-KKT variety in that (much like in the case of primal-dual algorithms for linear programming) search directions for the "primal" variables and the Karush-Kuhn-Tucker (KKT) multiplier estimates are components of the Newton (or quasi-Newton)...
Convergence properties of Dikin’s affine scaling algorithm for nonconvex quadratic minimization
J. Global Optim., 2001
Abstract

Cited by 4 (1 self)
We study convergence properties of Dikin's affine scaling algorithm applied to nonconvex quadratic minimization. First, we show that the objective function value either diverges or converges Q-linearly to a limit. Using this result, we show that, in the case of box constraints, the iterates converge to a unique point satisfying first-order and weak second-order optimality conditions, assuming the objective function Hessian Q is rank dominant with respect to the principal submatrices that are maximally positive semidefinite. Such Q include matrices that are positive semidefinite or negative semidefinite or nondegenerate or have negative diagonals. Preliminary numerical experience is reported. Key words: nonconvex quadratic minimization, affine-scaling algorithm, trust region subproblem, Hoffman's error bound, linear convergence.
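A minimal sketch of affine scaling on a box-constrained nonconvex QP; the first-order step rule with a fixed step size, and the specific Q, c, and box [0, 1]² below, are illustrative assumptions rather than the paper's exact algorithm. The scaling d_i = min(x_i, 1 − x_i) shrinks the step in coordinates close to a bound, so iterates remain strictly interior.

```python
def affine_scaling_qp(Q, c, x, alpha=0.2, iters=2000):
    # First-order affine-scaling iteration for min 0.5*x'Qx + c'x on [0,1]^n:
    #   x+ = x - alpha * D(x)^2 * grad f(x),  with d_i = min(x_i, 1 - x_i).
    # The quadratic damping d_i^2 keeps every iterate strictly inside the box.
    n = len(x)
    for _ in range(iters):
        g = [sum(Q[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        d = [min(xi, 1 - xi) for xi in x]
        x = [x[i] - alpha * d[i] ** 2 * g[i] for i in range(n)]
    return x

# Indefinite example: f is concave in x1 and convex in x2, so the minimizer
# over [0,1]^2 is (1, 0.5), with one active bound and one interior coordinate.
Q = [[-2.0, 0.0], [0.0, 1.0]]
c = [0.5, -0.5]
x = affine_scaling_qp(Q, c, [0.3, 0.9])
print(x)
```

On this example the interior coordinate x₂ converges linearly to 0.5, while the concave coordinate x₁ only approaches its active bound x₁ = 1 sublinearly, since the scaled step vanishes quadratically at the boundary.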
A first-order interior-point method for linearly constrained smooth optimization
Mathematical Programming (to appear), 2009
Abstract

Cited by 2 (1 self)
We propose a first-order interior-point method for linearly constrained smooth optimization that unifies and extends the first-order affine-scaling method and the replicator dynamics method for standard quadratic programming. Global convergence and, in the case of quadratic programs, (sub)linear convergence rate and iterate convergence results are derived. Numerical experience on simplex constrained problems with 1000 variables is reported. Key words: linearly constrained optimization, affine scaling, replicator dynamics, interior-point method, global convergence, sublinear convergence rate.
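The replicator dynamics method mentioned in this abstract can be sketched for a standard quadratic program, maximizing xᵀAx over the simplex. The triangle-graph instance below is an illustrative assumption; for that graph the Motzkin-Straus theorem places the maximizer at the barycenter (1/3, 1/3, 1/3).

```python
def replicator(A, x, iters=200):
    # Discrete replicator dynamics for the standard QP max x'Ax on the simplex:
    #   x_i+ = x_i * (Ax)_i / (x'Ax)
    # Coordinates stay nonnegative and sum to 1, so every iterate is feasible;
    # for symmetric nonnegative A the objective x'Ax is nondecreasing.
    n = len(x)
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        val = sum(x[i] * Ax[i] for i in range(n))
        x = [x[i] * Ax[i] / val for i in range(n)]
    return x

# Adjacency matrix of the triangle graph: the maximizer of x'Ax on the
# simplex is the barycenter, with objective value 2/3.
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
x = replicator(A, [0.5, 0.3, 0.2])
print([round(v, 4) for v in x])  # → [0.3333, 0.3333, 0.3333]
```

Like affine scaling, the multiplicative update is an interior method: a coordinate started at zero stays at zero, and positive coordinates stay positive, which is exactly the structure the unifying first-order framework exploits.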
Design Issues in Algorithms for Large Scale Nonlinear Programming
, 1999
Abstract
Design Issues in Algorithms for Large Scale Nonlinear Programming. Guanghui Liu. Ph.D. supervisor: Jorge Nocedal. This dissertation studies a wide range of issues in the design of interior point methods for large scale nonlinear programming. Strategies for computing high quality steps and for rapidly decreasing barrier parameters are introduced. Preconditioning or scaling techniques are developed. To extend the range of applicability of interior methods, feasible variants are proposed, quasi-Newton updating methods are introduced, and strategies for the nonmonotone decrease of the merit function are explored. A product of this dissertation is a software package, called NITRO, that implements an interior point method using the new algorithmic features presented in this dissertation. The performance of this code is assessed by comparing it with established software for large scale optimization (SNOPT, filterSQP and LOQO). Acknowledgment: I am grateful to Jorge Nocedal, my Ph.D. supervisor, for i...