Results 1-10 of 24
Interior methods for nonlinear optimization
SIAM Review, 2002
"... Abstract. Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interiorpoint techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their ..."
Abstract

Cited by 77 (4 self)
Abstract. Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
Primal-dual interior methods for nonconvex nonlinear programming
SIAM Journal on Optimization, 1998
"... Abstract. This paper concerns largescale general (nonconvex) nonlinear programming when first and second derivatives of the objective and constraint functions are available. A method is proposed that is based on finding an approximate solution of a sequence of unconstrained subproblems parameterize ..."
Abstract

Cited by 58 (5 self)
Abstract. This paper concerns large-scale general (nonconvex) nonlinear programming when first and second derivatives of the objective and constraint functions are available. A method is proposed that is based on finding an approximate solution of a sequence of unconstrained subproblems parameterized by a scalar parameter. The objective function of each unconstrained subproblem is an augmented penalty-barrier function that involves both primal and dual variables. Each subproblem is solved with a modified Newton method that generates search directions from a primal-dual system similar to that proposed for interior methods. The augmented penalty-barrier function may be interpreted as a merit function for values of the primal and dual variables. An inertia-controlling symmetric indefinite factorization is used to provide descent directions and directions of negative curvature for the augmented penalty-barrier merit function. A method suitable for large problems can be obtained by providing a version of this factorization that will treat large sparse indefinite systems.
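For context, a generic sketch (not taken from the paper) of the primal-dual system that such methods are built around: for $\min_x f(x)$ subject to $c(x) \ge 0$, the perturbed KKT conditions are

    \nabla f(x) - J(x)^T \lambda = 0, \qquad C(x)\lambda = \mu e, \qquad (c(x), \lambda) > 0,

where $J$ is the Jacobian of $c$, $C(x) = \mathrm{diag}(c(x))$, $\Lambda = \mathrm{diag}(\lambda)$, and $e = (1,\dots,1)^T$. One Newton step applied to the first two equations gives the primal-dual system

    \begin{pmatrix} \nabla^2_{xx} L(x,\lambda) & -J(x)^T \\ \Lambda J(x) & C(x) \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta \lambda \end{pmatrix} = - \begin{pmatrix} \nabla f(x) - J(x)^T \lambda \\ C(x)\lambda - \mu e \end{pmatrix},

with $L(x,\lambda) = f(x) - \lambda^T c(x)$ the Lagrangian. The paper's method generates search directions from a system of this general type, but combines it with an augmented penalty-barrier merit function rather than this generic formulation.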
A New Trust Region Algorithm For Equality Constrained Optimization
1995
"... . We present a new trust region algorithm for solving nonlinear equality constrained optimization problems. At each iterate a change of variables is performed to improve the ability of the algorithm to follow the constraint level sets. The algorithm employs L 2 penalty functions for obtaining global ..."
Abstract

Cited by 51 (7 self)
We present a new trust region algorithm for solving nonlinear equality constrained optimization problems. At each iterate a change of variables is performed to improve the ability of the algorithm to follow the constraint level sets. The algorithm employs $\ell_2$ penalty functions for obtaining global convergence. Under certain assumptions we prove that this algorithm globally converges to a point satisfying the second-order necessary optimality conditions; the local convergence rate is quadratic. Results of preliminary numerical experiments are presented. 1. Introduction. We consider the equality constrained optimization problem $\min f(x)$ subject to $c(x) = 0$ (1.1), where $x \in \mathbb{R}^n$, and $f : \mathbb{R}^n \to \mathbb{R}$ and $c : \mathbb{R}^n \to \mathbb{R}^m$ are smooth nonlinear functions. Problem (1.1) is often solved by successive quadratic programming (SQP) methods. At a current point $x_k \in \mathbb{R}^n$, SQP methods determine a search direction $d_k$ by solving a quadratic programming problem $\min \nabla f(x_k)^T d + \frac{1}{2}\ldots$
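The truncated formula above is the start of the standard equality-constrained SQP subproblem. For reference, its usual full form (with $B_k$ a symmetric approximation to the Hessian of the Lagrangian and $A_k$ the Jacobian of $c$ at $x_k$; these symbols are supplied here, not taken from the abstract) is

    \min_{d \in \mathbb{R}^n} \; \nabla f(x_k)^T d + \tfrac{1}{2} d^T B_k d \quad \text{subject to} \quad c(x_k) + A_k d = 0.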
Complete Orthogonal Decomposition for Weighted Least Squares
SIAM J. Matrix Anal. Appl., 1995
"... Consider a fullrank weighted leastsquares problem in which the weight matrix is highly illconditioned. Because of the illconditioning, standard methods for solving leastsquares problems, QR factorization and the nullspace method for example, break down. G. W. Stewart established a norm bound fo ..."
Abstract

Cited by 14 (4 self)
Consider a full-rank weighted least-squares problem in which the weight matrix is highly ill-conditioned. Because of the ill-conditioning, standard methods for solving least-squares problems, QR factorization and the null-space method for example, break down. G. W. Stewart established a norm bound for such a system of equations, indicating that it may be possible to find an algorithm that gives an accurate solution. S. A. Vavasis proposed a new definition of stability that is based on this result. He also defined the NSH algorithm for solving this least-squares problem and showed that it satisfies his definition of stability. In this paper, we propose a complete orthogonal decomposition algorithm to solve this problem and show that it is also stable. This new algorithm is simpler and more efficient than the NSH method. 1. Introduction. We consider solving the problem $\min_{y \in \mathbb{R}^n} \| D^{-1/2} (Ay - b) \|$ (1) for $y$, where $D$ is a symmetric positive definite $m \times m$ matrix, $A$ is an ...
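A minimal NumPy sketch of the problem setup and of the naive row-scaling approach whose loss of accuracy motivates the stable algorithms discussed above; the dimensions, data, and weights below are invented for illustration, and the snippet does not implement the NSH or complete orthogonal decomposition methods.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 8, 3
    A = rng.standard_normal((m, n))   # full-rank m x n matrix
    b = rng.standard_normal(m)
    d = np.logspace(0, 12, m)         # diagonal of D; condition number about 1e12

    # Naive approach: scale rows by D^{-1/2} and solve an ordinary least-squares
    # problem.  With highly ill-conditioned weights this scaling can lose accuracy,
    # which is the difficulty the NSH and complete orthogonal decomposition
    # algorithms are designed to avoid.
    scale = 1.0 / np.sqrt(d)
    y, *_ = np.linalg.lstsq(scale[:, None] * A, scale * b, rcond=None)
    print(y)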
On the convergence of the Newton/log-barrier method
Preprint ANL/MCS-P681-0897, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill., 1997
"... Abstract. In the Newton/logbarrier method, Newton steps are taken for the logbarrier function for a xed value of the barrier parameter until a certain convergence criterion is satis ed. The barrier parameter is then decreased and the Newton process is repeated. A naive analysis indicates that Newt ..."
Abstract

Cited by 12 (2 self)
Abstract. In the Newton/log-barrier method, Newton steps are taken for the log-barrier function for a fixed value of the barrier parameter until a certain convergence criterion is satisfied. The barrier parameter is then decreased and the Newton process is repeated. A naive analysis indicates that Newton's method does not exhibit superlinear convergence to the minimizer of each instance of the log-barrier function until it reaches a very small neighborhood of the minimizer. By partitioning according to the subspace of active constraint gradients, however, we show that this neighborhood is actually quite large, thus explaining why reasonably fast local convergence can be attained in practice. Moreover, we show that the overall convergence rate of the Newton/log-barrier algorithm is superlinear in the number of function/derivative evaluations, provided that the nonlinear program is formulated with a linear objective and that the schedule for decreasing the barrier parameter is related in a certain way to the convergence criterion for each Newton process.
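A toy Python sketch of the outer/inner structure described above (Newton steps on the log-barrier function for a fixed barrier parameter, which is then decreased); the test problem, parameter schedule, and inner convergence criterion are made up for illustration and are not the ones analyzed in the paper.

    import numpy as np

    def newton_log_barrier(c, x0, mu0=1.0, sigma=0.2, mu_min=1e-8):
        # Minimize the linear function c^T x over x >= 0 with a Newton/log-barrier loop.
        x, mu = x0.astype(float).copy(), mu0
        while mu > mu_min:
            # Inner loop: Newton steps on B(x; mu) = c^T x - mu * sum(log(x))
            # until a mu-dependent convergence criterion is met.
            while True:
                grad = c - mu / x                  # gradient of the barrier function
                if np.linalg.norm(grad) <= 0.1 * mu:
                    break
                step = -(x ** 2 / mu) * grad       # Newton step; the Hessian is diag(mu / x^2)
                # Fraction-to-boundary damping keeps the iterate strictly feasible.
                alpha = 1.0
                neg = step < 0
                if neg.any():
                    alpha = min(1.0, 0.995 * np.min(-x[neg] / step[neg]))
                x = x + alpha * step
            mu *= sigma                            # decrease the barrier parameter
        return x

    x = newton_log_barrier(np.array([2.0, 1.0, 3.0]), np.ones(3))
    print(x)   # approaches x = 0, the solution of min c^T x subject to x >= 0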
Properties of the Log-Barrier Function on Degenerate Nonlinear Programs
Math. Oper. Res., 1999
"... We examine the sequence of local minimizers of the logbarrier function for a nonlinear program near a solution at which secondordersufficient conditions and the MangasarianFromovitz constraint qualifications are satisfied, but the active constraint gradients are not necessarily linearly independ ..."
Abstract

Cited by 10 (0 self)
We examine the sequence of local minimizers of the log-barrier function for a nonlinear program near a solution at which second-order sufficient conditions and the Mangasarian-Fromovitz constraint qualification are satisfied, but the active constraint gradients are not necessarily linearly independent. When a strict complementarity condition is satisfied, we show uniqueness of the local minimizer of the barrier function in the vicinity of the nonlinear program solution, and obtain a semi-explicit characterization of this point. When strict complementarity does not hold, we obtain several other interesting characterizations, in particular, an estimate of the distance between the minimizers of the barrier function and the nonlinear program in terms of the barrier parameter, and a result about the direction of approach of the sequence of minimizers of the barrier function to the nonlinear programming solution.
Methods for nonlinear constraints in optimization calculations
The State of the Art in Numerical Analysis, 1996
"... ..."
Iterative solution of augmented systems arising in interior methods
SIAM Journal on Optimization, 2007
"... Iterative methods are proposed for certain augmented systems of linear equations that arise in interior methods for general nonlinear optimization. Interior methods define a sequence of KKT equations that represent the symmetrized (but indefinite) equations associated with Newton’s method for a po ..."
Abstract

Cited by 9 (1 self)
Iterative methods are proposed for certain augmented systems of linear equations that arise in interior methods for general nonlinear optimization. Interior methods define a sequence of KKT equations that represent the symmetrized (but indefinite) equations associated with Newton’s method for a point satisfying the perturbed optimality conditions. These equations involve both the primal and dual variables and become increasingly ill-conditioned as the optimization proceeds. In this context, an iterative linear solver must not only handle the ill-conditioning but also detect the occurrence of KKT matrices with the wrong matrix inertia. A one-parameter family of equivalent linear equations is formulated that includes the KKT system as a special case. The discussion focuses on a particular system from this family, known as the “doubly augmented system,” that is positive definite with respect to both the primal and dual variables. This property means that a standard preconditioned conjugate-gradient method involving both primal and dual variables will either terminate successfully or detect if the KKT matrix has the wrong inertia. Constraint preconditioning is a well-known technique for preconditioning the conjugate-gradient method on augmented systems. A family of constraint preconditioners is proposed that provably eliminates the inherent ill-conditioning in the augmented system. A considerable benefit of combining constraint preconditioning with the doubly augmented system is that the preconditioner need not be applied exactly. Two particular “active-set” constraint preconditioners are formulated that involve only a subset of the rows of the augmented system and thereby may be applied with considerably less work. Finally, some numerical experiments illustrate the numerical performance of the proposed preconditioners and highlight some theoretical properties of the preconditioned matrices.
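For orientation, a generic sketch of the objects the abstract refers to (standard forms, not the paper's specific doubly augmented or active-set formulations): a symmetric indefinite augmented (KKT) system and a constraint preconditioner have the shape

    K = \begin{pmatrix} H & A^T \\ A & -D \end{pmatrix}, \qquad P = \begin{pmatrix} G & A^T \\ A & -D \end{pmatrix},

where $H$ is the Hessian (or barrier Hessian) block, $A$ is the constraint Jacobian, $D \succeq 0$ collects the regularization and barrier terms, and $G \approx H$ is a simpler matrix (for example, a diagonal) chosen so that systems with $P$ are cheap to solve while the constraint blocks are reproduced exactly.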
A Note on Using Alternative Second-Order Models for the Subproblems Arising in Barrier Function Methods for Minimization
1993
"... . Inequality constrained minimization problems are often solved by considering a sequence of parameterized barrier functions. Each barrier function is approximately minimized and the relevant parameters subsequently adjusted. It is common for the estimated solution to one barrier function problem to ..."
Abstract

Cited by 7 (0 self)
Inequality constrained minimization problems are often solved by considering a sequence of parameterized barrier functions. Each barrier function is approximately minimized and the relevant parameters subsequently adjusted. It is common for the estimated solution to one barrier function problem to be used as a starting estimate for the next. However, this has unfortunate repercussions for the standard Newton-like methods applied to the barrier subproblem. In this note, we consider a class of alternative Newton methods which attempt to avoid such difficulties. Such schemes have already proved of use in the Harwell Subroutine Library quadratic programming codes VE14 and VE19.
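For context, a standard identity (supplied here, not text from the note) that underlies the difficulty: for the logarithmic barrier function associated with constraints $c_i(x) \ge 0$,

    B(x;\mu) = f(x) - \mu \sum_i \ln c_i(x), \qquad \nabla B = \nabla f(x) - \mu \sum_i \frac{\nabla c_i(x)}{c_i(x)}, \qquad \nabla^2 B = \nabla^2 f(x) - \mu \sum_i \frac{\nabla^2 c_i(x)}{c_i(x)} + \mu \sum_i \frac{\nabla c_i(x) \nabla c_i(x)^T}{c_i(x)^2},

so the Hessian becomes severely ill-conditioned as the active $c_i(x)$ approach zero, which is why a warm start taken from the previous subproblem's solution can interact badly with a standard Newton model.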
On the Boundedness of Penalty Parameters in an Augmented Lagrangian Method with Constrained Subproblems
2011
"... Augmented Lagrangian methods are effective tools for solving largescale nonlinear programming problems. At each outer iteration a minimization subproblem with simple constraints, whose objective function depends on updated Lagrange multipliers and penalty parameters, is approximately solved. When t ..."
Abstract

Cited by 6 (1 self)
Augmented Lagrangian methods are effective tools for solving large-scale nonlinear programming problems. At each outer iteration a minimization subproblem with simple constraints, whose objective function depends on updated Lagrange multipliers and penalty parameters, is approximately solved. When the penalty parameter becomes very large the subproblem is difficult, so the effectiveness of this approach depends on the penalty parameters remaining bounded. In this paper it is proved that, under assumptions more natural than those employed up to now, penalty parameters are bounded. For proving the new boundedness result, the original algorithm has been slightly modified. Numerical consequences of the modifications are discussed and computational experiments are presented.
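A sketch of the standard augmented Lagrangian outer iteration the abstract refers to, written here for equality constraints $h(x) = 0$ only (the paper's setting, with simple lower-level constraints kept inside the subproblem, is more general, and the update rules below are the usual textbook ones rather than the paper's exact algorithm):

    L_\rho(x, \lambda) = f(x) + \lambda^T h(x) + \frac{\rho}{2} \| h(x) \|^2, \qquad x_{k+1} \approx \arg\min_x L_{\rho_k}(x, \lambda_k), \qquad \lambda_{k+1} = \lambda_k + \rho_k h(x_{k+1}),

with $\rho_k$ typically increased only when the constraint violation fails to decrease sufficiently. Keeping the sequence $\{\rho_k\}$ bounded is what keeps the subproblems tractable, which is the property established in the paper.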