Results 1–10 of 22
Interior methods for nonlinear optimization
 SIAM Review
, 2002
Cited by 79 (4 self)
Abstract. Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
Primal-dual interior methods for nonconvex nonlinear programming
 SIAM Journal on Optimization
, 1998
Cited by 59 (5 self)
Abstract. This paper concerns large-scale general (nonconvex) nonlinear programming when first and second derivatives of the objective and constraint functions are available. A method is proposed that is based on finding an approximate solution of a sequence of unconstrained subproblems parameterized by a scalar parameter. The objective function of each unconstrained subproblem is an augmented penalty-barrier function that involves both primal and dual variables. Each subproblem is solved with a modified Newton method that generates search directions from a primal-dual system similar to that proposed for interior methods. The augmented penalty-barrier function may be interpreted as a merit function for values of the primal and dual variables. An inertia-controlling symmetric indefinite factorization is used to provide descent directions and directions of negative curvature for the augmented penalty-barrier merit function. A method suitable for large problems can be obtained by providing a version of this factorization that will treat large sparse indefinite systems.
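The primal-dual Newton idea this abstract describes can be illustrated on a toy problem. The sketch below is our own minimal illustration, not the paper's augmented penalty-barrier method: for a bound-constrained quadratic it takes one Newton step on the perturbed optimality conditions (gradient condition plus λᵢxᵢ = μ) per reduction of μ, with a fraction-to-boundary rule keeping the iterates strictly interior.

```python
import numpy as np

# Toy instance: min (x1-2)^2 + (x2-1)^2  s.t.  x >= 0,
# with solution x* = (2, 1) and multipliers lambda* = (0, 0).
t = np.array([2.0, 1.0])

def pd_step(x, lam, mu):
    """One primal-dual Newton step on: grad f(x) - lam = 0, lam_i * x_i = mu."""
    n = len(x)
    H = 2.0 * np.eye(n)                     # Hessian of f
    g = 2.0 * (x - t)                       # gradient of f
    # Primal-dual system; constraints are c(x) = x, so the Jacobian is I.
    K = np.block([[H, -np.eye(n)],
                  [np.diag(lam), np.diag(x)]])
    rhs = -np.concatenate([g - lam, lam * x - mu])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:]

x, lam, mu = np.array([1.0, 1.0]), np.array([1.0, 1.0]), 1.0
for _ in range(30):
    dx, dlam = pd_step(x, lam, mu)
    # Fraction-to-boundary rule: keep x and lam strictly positive.
    alpha = 1.0
    for v, dv in ((x, dx), (lam, dlam)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.99 * float(np.min(-v[neg] / dv[neg])))
    x, lam = x + alpha * dx, lam + alpha * dlam
    mu *= 0.5                               # barrier-parameter schedule
```

As μ shrinks, the iterates track the central path toward (2, 1) with the multipliers of the inactive bounds vanishing; the paper's contribution lies in handling nonconvexity (inertia control, negative curvature), which this convex toy does not need.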
Why a pure primal Newton barrier step may be infeasible, Numerical Analysis Manuscript 93-02
, 1993
The Interior-Point Revolution in Constrained Optimization
 High-Performance Algorithms and Software in Nonlinear Optimization
The interior-point revolution in optimization: history, recent developments, and lasting consequences
 Bull. Amer. Math. Soc. (N.S.)
, 2005
Cited by 18 (1 self)
Abstract. Interior methods are a pervasive feature of the optimization landscape today, but it was not always so. Although interior-point techniques, primarily in the form of barrier methods, were widely used during the 1960s for problems with nonlinear constraints, their use for the fundamental problem of linear programming was unthinkable because of the total dominance of the simplex method. During the 1970s, barrier methods were superseded, nearly to the point of oblivion, by newly emerging and seemingly more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost universally regarded as a closed chapter in the history of optimization. This picture changed dramatically in 1984, when Narendra Karmarkar announced a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have continued to transform both the theory and practice of constrained optimization. We present a condensed,
Complete Orthogonal Decomposition for Weighted Least Squares
 SIAM J. Matrix Anal. Appl
, 1995
Cited by 14 (4 self)
Consider a full-rank weighted least-squares problem in which the weight matrix is highly ill-conditioned. Because of the ill-conditioning, standard methods for solving least-squares problems, QR factorization and the null-space method for example, break down. G. W. Stewart established a norm bound for such a system of equations, indicating that it may be possible to find an algorithm that gives an accurate solution. S. A. Vavasis proposed a new definition of stability that is based on this result. He also defined the NSH algorithm for solving this least-squares problem and showed that it satisfies his definition of stability. In this paper, we propose a complete orthogonal decomposition algorithm to solve this problem and show that it is also stable. This new algorithm is simpler and more efficient than the NSH method.

1 Introduction

We consider solving the problem

$$\min_{y \in \mathbb{R}^n} \| D^{-1/2} (Ay - b) \| \qquad (1)$$

for y, where D is a symmetric positive definite m × m matrix, A is an ...
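To make problem (1) concrete, here is a small numerical sketch of our own; it is not the paper's complete orthogonal decomposition algorithm. It contrasts the naive weighted normal equations, whose accuracy degrades as D becomes ill-conditioned, with a least-squares solve of the explicitly scaled system D^{-1/2} A y ≈ D^{-1/2} b.

```python
import numpy as np

# Problem (1): min_y || D^{-1/2} (A y - b) ||, with D symmetric positive
# definite. D is taken diagonal here for simplicity, with a wide spread of
# weights to mimic the ill-conditioning the paper is concerned with.
rng = np.random.default_rng(0)
m, n = 6, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
d = np.array([1e4, 1e2, 1.0, 1.0, 1e-2, 1e-4])   # diagonal of D

# Naive route: weighted normal equations A^T D^{-1} A y = A^T D^{-1} b.
# Forming the product squares the conditioning, which is what motivates
# the stable algorithms discussed in the paper.
W = np.diag(1.0 / d)
y_ne = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Sturdier route for this mild example: an orthogonal-factorization
# least-squares solve of the scaled system.
S = np.diag(1.0 / np.sqrt(d))                    # S = D^{-1/2}
y_qr, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
```

For extreme weight spreads even the scaled QR route loses accuracy, which is exactly the regime where the paper's complete orthogonal decomposition approach (and Vavasis's NSH algorithm) are designed to remain stable.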
Interior-Point Methodology for 3D PET Reconstruction
, 2000
Cited by 13 (0 self)
Interior-point methods have been successfully applied to a wide variety of linear and nonlinear programming applications. This paper presents a class of algorithms, based on path-following interior-point methodology, for performing regularized maximum-likelihood (ML) reconstructions on three-dimensional (3D) emission tomography data. The algorithms solve a sequence of subproblems that converge to the regularized maximum-likelihood solution from the interior of the feasible region (the nonnegative orthant). We propose two methods, a primal method which updates only the primal image variables and a primal-dual method which simultaneously updates the primal variables and the Lagrange multipliers. A parallel implementation permits the interior-point methods to scale to very large reconstruction problems. Termination is based on well-defined convergence measures, namely, the Karush-Kuhn-Tucker first-order necessary conditions for optimality. We demonstrate the rapid convergence of the path-following interior-point methods using both data from a small animal scanner and Monte Carlo simulated data. The proposed methods can readily be applied to solve the regularized, weighted least-squares reconstruction problem.
On the convergence of the Newton/log-barrier method
 Preprint ANL/MCS-P681-0897, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill.
, 1997
Cited by 13 (2 self)
Abstract. In the Newton/log-barrier method, Newton steps are taken for the log-barrier function for a fixed value of the barrier parameter until a certain convergence criterion is satisfied. The barrier parameter is then decreased and the Newton process is repeated. A naive analysis indicates that Newton's method does not exhibit superlinear convergence to the minimizer of each instance of the log-barrier function until it reaches a very small neighborhood of the minimizer. By partitioning according to the subspace of active constraint gradients, however, we show that this neighborhood is actually quite large, thus explaining why reasonably fast local convergence can be attained in practice. Moreover, we show that the overall convergence rate of the Newton/log-barrier algorithm is superlinear in the number of function/derivative evaluations, provided that the nonlinear program is formulated with a linear objective and that the schedule for decreasing the barrier parameter is related in a certain way to the convergence criterion for each Newton process.
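The outer/inner structure the abstract describes, damped Newton on the log-barrier function for a fixed μ followed by a reduction of μ, can be sketched on a one-variable problem with a linear objective, the setting its superlinear-rate result assumes. The instance, tolerances, and schedule below are our own illustrative choices.

```python
import math

# Toy instance: minimize x subject to x >= 1. The log-barrier function is
# phi(x; mu) = x - mu * log(x - 1), with exact minimizer x(mu) = 1 + mu.
def newton_log_barrier(x=2.0, mu=1.0, mu_final=1e-8, tol=1e-10):
    while mu > mu_final:
        phi = lambda z: z - mu * math.log(z - 1.0)
        # Inner loop: damped Newton on phi(.; mu) until near-stationarity.
        for _ in range(50):
            g = 1.0 - mu / (x - 1.0)            # phi'
            if abs(g) < tol:
                break
            h = mu / (x - 1.0) ** 2             # phi'' > 0 on the interior
            d = -g / h
            a = 1.0
            # Backtrack: stay strictly feasible (x > 1) and decrease phi.
            while x + a * d <= 1.0 or phi(x + a * d) > phi(x):
                a *= 0.5
            x += a * d
        mu *= 0.1                               # barrier-parameter schedule
    return x

x = newton_log_barrier()
```

The backtracking is essential: a full Newton step from x = 2 at μ = 0.1 would land at x = -7, far outside the feasible region, which is precisely the behavior the naive analysis mentioned in the abstract has to contend with.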
Properties of the Log-Barrier Function on Degenerate Nonlinear Programs
 MATH. OPER. RES
, 1999
Cited by 10 (0 self)
We examine the sequence of local minimizers of the log-barrier function for a nonlinear program near a solution at which second-order sufficient conditions and the Mangasarian-Fromovitz constraint qualifications are satisfied, but the active constraint gradients are not necessarily linearly independent. When a strict complementarity condition is satisfied, we show uniqueness of the local minimizer of the barrier function in the vicinity of the nonlinear program solution, and obtain a semi-explicit characterization of this point. When strict complementarity does not hold, we obtain several other interesting characterizations, in particular, an estimate of the distance between the minimizers of the barrier function and the nonlinear program in terms of the barrier parameter, and a result about the direction of approach of the sequence of minimizers of the barrier function to the nonlinear programming solution.
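The flavor of the distance estimate can be seen in closed form on a one-variable degenerate example of our own choosing: for min x² subject to x ≥ 0, the multiplier at the solution x* = 0 is zero, so strict complementarity fails, and the barrier minimizer sits at distance proportional to √μ rather than μ.

```python
import math

# min x^2 s.t. x >= 0: the unconstrained minimum is at the boundary, so the
# multiplier is 0 and strict complementarity fails. The barrier function
# x^2 - mu*log(x) is stationary where 2x - mu/x = 0, i.e. x(mu) = sqrt(mu/2),
# so the distance to x* = 0 shrinks like sqrt(mu), not like mu.
for mu in (1e-2, 1e-4, 1e-6):
    x = math.sqrt(mu / 2.0)
    assert abs(2.0 * x - mu / x) < 1e-12   # stationarity of the barrier function
```

Under strict complementarity the paper's analysis gives the faster O(μ) distance (compare x(μ) = 1 + μ for min x subject to x ≥ 1), so this tiny computation shows why degeneracy genuinely changes the asymptotics.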