Results 1–10 of 28
Interior methods for nonlinear optimization
 SIAM Review
, 2002
Cited by 76 (4 self)
Abstract. Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
The Interior-Point Revolution in Constrained Optimization
 Appl. Optim.
, 1998
Cited by 20 (0 self)
Interior methods are a central, striking feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were widely used during the 1960s to solve nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. During the 1970s, barrier methods were superseded by newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost universally regarded as a closed chapter in the history of optimization. This picture changed dramatically in the mid-1980s. In 1984, Karmarkar announced a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, the new incarnations of interior methods ha...
Constraint identification and algorithm stabilization for degenerate nonlinear programs
 Mathematical Programming
, 2003
Cited by 18 (1 self)
Abstract. In the vicinity of a solution of a nonlinear programming problem at which both strict complementarity and linear independence of the active constraints may fail to hold, we describe a technique for distinguishing weakly active from strongly active constraints. We show that this information can be used to modify the sequential quadratic programming algorithm so that it exhibits superlinear convergence to the solution under assumptions weaker than those made in previous analyses.
The interior-point revolution in optimization: history, recent developments, and lasting consequences
 Bull. Amer. Math. Soc. (N.S.)
, 2005
Cited by 17 (1 self)
Abstract. Interior methods are a pervasive feature of the optimization landscape today, but it was not always so. Although interior-point techniques, primarily in the form of barrier methods, were widely used during the 1960s for problems with nonlinear constraints, their use for the fundamental problem of linear programming was unthinkable because of the total dominance of the simplex method. During the 1970s, barrier methods were superseded, nearly to the point of oblivion, by newly emerging and seemingly more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost universally regarded as a closed chapter in the history of optimization. This picture changed dramatically in 1984, when Narendra Karmarkar announced a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have continued to transform both the theory and practice of constrained optimization. We present a condensed, ...
On the convergence of the Newton/log-barrier method
 Preprint ANL/MCS-P681-0897, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill.
, 1997
Cited by 12 (2 self)
Abstract. In the Newton/log-barrier method, Newton steps are taken for the log-barrier function for a fixed value of the barrier parameter until a certain convergence criterion is satisfied. The barrier parameter is then decreased and the Newton process is repeated. A naive analysis indicates that Newton's method does not exhibit superlinear convergence to the minimizer of each instance of the log-barrier function until it reaches a very small neighborhood of the minimizer. By partitioning according to the subspace of active constraint gradients, however, we show that this neighborhood is actually quite large, thus explaining why reasonably fast local convergence can be attained in practice. Moreover, we show that the overall convergence rate of the Newton/log-barrier algorithm is superlinear in the number of function/derivative evaluations, provided that the nonlinear program is formulated with a linear objective and that the schedule for decreasing the barrier parameter is related in a certain way to the convergence criterion for each Newton process.
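The two-level structure the abstract describes (inner Newton loop for fixed barrier parameter, outer loop decreasing it) can be sketched on a toy problem. Everything below — the example problem, the feasibility-preserving damping rule, the tolerance, and the mu-schedule — is an illustrative choice, not the specific schedule analyzed in the paper:

```python
# Toy Newton/log-barrier loop for:
#   minimize x  subject to  x >= 1      (solution x* = 1),
# with barrier function B(x; mu) = x - mu*log(x - 1).

def newton_log_barrier(x=2.0, mu=1.0, outer_iters=10):
    for _ in range(outer_iters):
        # Inner loop: Newton's method on B(.; mu) with mu held fixed.
        while True:
            g = 1.0 - mu / (x - 1.0)        # B'(x)
            h = mu / (x - 1.0) ** 2         # B''(x) > 0 on the interior
            step = -g / h                   # Newton step
            while x + step <= 1.0:          # damp to stay strictly feasible
                step *= 0.5
            x += step
            if abs(g) <= 0.1 * mu:          # mu-linked convergence criterion
                break
        mu *= 0.2                           # then decrease the barrier parameter
    return x

x = newton_log_barrier()
# x approaches the constrained minimizer x* = 1 as mu shrinks
```

The minimizer of B(.; mu) for this problem is x = 1 + mu, so each inner loop chases a moving target that slides toward the constrained solution as mu decreases — the situation whose local convergence rate the paper analyzes.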
A Comparison of Interior Point Methods and a Moreau-Yosida Based Active Set Strategy for Constrained Optimal Control Problems
 SIAM Journal on Optimization
, 1998
Cited by 11 (2 self)
In this note we focus on a comparison of two efficient methods to solve quadratic constrained optimal control problems governed by elliptic partial differential equations. One of them is based on a generalized Moreau-Yosida formulation of the constrained optimal control problem which results in an active set strategy involving primal and dual variables. The second approach is based on interior point methods. Keywords. Optimal Control, Augmented Lagrangian, Interior Point Methods, Moreau-Yosida approximation, Active Sets. AMS subject classification. 49J20, 65K, 90C20. 1 Introduction. In recent years significant research efforts were focused on developing numerical techniques to solve optimal control problems governed by partial differential equations. For unconstrained problems a high level of sophistication was reached. We refer to the contributions in [AM, GT, KS] and many further references given there. For constrained optimal control problems the level of research is less complet...
Properties of the Log-Barrier Function on Degenerate Nonlinear Programs
 Math. Oper. Res
, 1999
Cited by 11 (0 self)
We examine the sequence of local minimizers of the log-barrier function for a nonlinear program near a solution at which second-order sufficient conditions and the Mangasarian-Fromovitz constraint qualification are satisfied, but the active constraint gradients are not necessarily linearly independent. When a strict complementarity condition is satisfied, we show uniqueness of the local minimizer of the barrier function in the vicinity of the nonlinear program solution, and obtain a semi-explicit characterization of this point. When strict complementarity does not hold, we obtain several other interesting characterizations, in particular, an estimate of the distance between the minimizers of the barrier function and the nonlinear program in terms of the barrier parameter, and a result about the direction of approach of the sequence of minimizers of the barrier function to the nonlinear programming solution. 1. Introduction. We consider the nonlinear programming problem min f(x) subject to ...
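For reference, the log-barrier setup studied in this line of work pairs the nonlinear program with a parameterized unconstrained subproblem; the following is the standard textbook form, not a quotation from the paper:

```latex
\min_{x}\; f(x) \quad \text{subject to} \quad c_i(x) \ge 0,\quad i = 1,\dots,m,
\qquad\qquad
P(x;\mu) \;=\; f(x) \;-\; \mu \sum_{i=1}^{m} \log c_i(x), \quad \mu > 0.
```

As $\mu \downarrow 0$, the local minimizers of $P(\cdot;\mu)$ trace a path converging to a solution of the nonlinear program; the results above concern the behavior of this path when the active constraint gradients are linearly dependent.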
Topics in Sparse Least Squares Problems
 Linköping University, Linköping, Sweden, Dept. of Mathematics
, 2000
Cited by 8 (0 self)
This thesis addresses topics in sparse least squares computation. A stable method for solving the least squares problem min ||Ax - b||_2 is based on the QR factorization. Here we have addressed the difficulty of storing the orthogonal matrix Q. Using traditional methods, the number of nonzero elements in Q makes it in many cases infeasible to store. Using the multifrontal technique when computing the QR factorization, Q may be stored and used more efficiently. A new user-friendly Matlab implementation is developed. When a row in A is dense, the factor R from the QR factorization may be completely dense. Therefore problems with dense rows must be treated by special techniques. The usual way to handle dense rows is to partition the problem into one sparse and one dense subproblem. The drawback with this approach is that the sparse subproblem may be more ill-conditioned than the original problem, or may not even have a unique solution. Another method, useful for problems with few dense rows, is based on matrix stretching, where the dense rows are split into several less dense rows, then linked together with new artificial ...
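The QR route to min ||Ax - b||_2 mentioned above can be shown on a small dense example: factor A = QR with Q having orthonormal columns, then back-substitute on R x = Q^T b. The thesis concerns *sparse* A, where a multifrontal QR keeps Q storable; this dense sketch (with made-up data) shows only the underlying algebra:

```python
import numpy as np

# Fit y = x0 + x1*t to the points (1,1), (2,2), (3,2) in the
# least-squares sense: columns of A are [1, t].
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

Q, R = np.linalg.qr(A)            # "thin" QR: Q is 3x2, R is 2x2
x = np.linalg.solve(R, Q.T @ b)   # back-substitution on R x = Q^T b
# x == [2/3, 0.5]: intercept 2/3, slope 1/2
```

Unlike forming the normal equations A^T A x = A^T b, this route does not square the condition number of A, which is why QR is the stable method of choice here.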
Tits. Newton-KKT interior-point methods for indefinite quadratic programming
 Comput. Optim. Appl
Cited by 7 (1 self)
Two interior-point algorithms are proposed and analyzed, for the (local) solution of (possibly) indefinite quadratic programming problems. They are of the Newton-KKT variety in that (much like in the case of primal-dual algorithms for linear programming) search directions for the “primal” variables and the Karush-Kuhn-Tucker (KKT) multiplier estimates are components of the Newton (or quasi-Newton) ...
Primal-Dual Interior Point Methods For Semidefinite Programming In Finite Precision
 SIAM J. Optimization
, 1997
Cited by 6 (0 self)
Recently, a number of primal-dual interior-point methods for semidefinite programming have been developed. To reduce the number of floating point operations, each iteration of these methods typically performs block Gaussian elimination with block pivots that are close to singular near the optimal solution. As a result, these methods often exhibit complex numerical properties in practice. We consider numerical issues related to some of these methods. Our error analysis indicates that these methods could be numerically stable if certain coefficient matrices associated with the iterations are well-conditioned, but are unstable otherwise. With this result, we explain why one particular method, the one introduced by Alizadeh, Haeberly and Overton, is in general more stable than others. We also explain why the so-called least squares variation, introduced for some of these methods, does not yield more numerical accuracy in general. Finally, we present results from our numerical experiments ...