Results 1–10 of 18
An Interior-Point Algorithm for Nonconvex Nonlinear Programming
 Computational Optimization and Applications
, 1997
Abstract

Cited by 193 (14 self)
The paper describes an interior-point algorithm for nonconvex nonlinear programming which is a direct extension of interior-point methods for linear and quadratic programming. Major modifications include a merit function and an altered search direction to ensure that a descent direction for the merit function is obtained. Preliminary numerical testing indicates that the method is robust. Further, numerical comparisons with MINOS and LANCELOT show that the method is efficient, and has the promise of greatly reducing solution times on at least some classes of models.
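Since the key modification in the abstract above is forcing the search direction to be a descent direction for a merit function, the following is a minimal sketch of the backtracking test such a method performs. It assumes a hypothetical l1 merit function phi(x) = f(x) + nu*|c(x)| on a scalar problem; the paper's actual merit function may differ.

```python
# Hypothetical merit function and line search (a sketch, not the
# paper's exact algorithm): phi(x) = f(x) + nu * |c(x)| for a
# scalar problem: minimize f(x) subject to c(x) = 0.
def merit(f, c, x, nu):
    """l1 penalty merit function."""
    return f(x) + nu * abs(c(x))

def armijo_step(f, c, x, d, nu, dphi, beta=0.5, sigma=1e-4):
    """Backtrack along d until the merit function decreases enough.
    dphi is the directional derivative of the merit function along d;
    the method requires dphi < 0, i.e. d is a descent direction."""
    assert dphi < 0, "search direction must be a descent direction"
    alpha = 1.0
    while merit(f, c, x + alpha * d, nu) > merit(f, c, x, nu) + sigma * alpha * dphi:
        alpha *= beta
    return alpha
```

For example, with f(x) = x^2, c(x) = x - 1, nu = 1 at x = 2, the direction d = -1 (directional derivative -5) is accepted at the full step alpha = 1, while the overlong direction d = -10 is cut back.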
Interior methods for nonlinear optimization
 SIAM Review
, 2002
Abstract

Cited by 125 (5 self)
Abstract. Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
An interior point algorithm for large scale nonlinear programming
, 1997
Abstract

Cited by 88 (18 self)
The design and implementation of a new algorithm for solving large nonlinear programming problems is described. It follows a barrier approach that employs sequential quadratic programming and trust regions to solve the subproblems occurring in the iteration. Both primal and primal-dual versions of the algorithm are developed, and their performance is illustrated in a set of numerical tests.
A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties
 SIAM Journal on Optimization
, 2002
Abstract

Cited by 37 (5 self)
An exact-penalty-function-based scheme, inspired by an old idea due to Mayne and Polak (Math. Prog., vol. 11, 1976, pp. 67–80), is proposed for extending to general smooth constrained optimization problems any given feasible interior-point method for inequality constrained problems. It is shown that the primal-dual interior-point framework allows for a simpler penalty parameter update rule than that discussed and analyzed by the originators of the scheme in the context of first-order methods of feasible direction. Strong global and local convergence results are proved under mild assumptions. In particular, (i) the proposed algorithm does not suffer from a common pitfall recently pointed out by Wächter and Biegler; and (ii) the positive definiteness assumption on the Hessian estimate, made in the original version of the algorithm, is relaxed, allowing for the use of exact Hessian information, resulting in local quadratic convergence. Promising numerical results are reported.
On the local behavior of an interior point method for nonlinear programming
 Numerical Analysis 1997
, 1997
Abstract

Cited by 26 (3 self)
Jorge Nocedal
We study the local convergence of a primal-dual interior point method for nonlinear programming. A linearly convergent version of this algorithm has been shown in [2] to be capable of solving large and difficult nonconvex problems. But for the algorithm to reach its full potential, it must converge rapidly to the solution. In this paper we describe how to design the algorithm so that it converges superlinearly on regular problems. Key words: constrained optimization, interior point method, large-scale optimization, nonlinear programming, primal method, primal-dual method, successive quadratic programming.
The interior-point revolution in constrained optimization, in High-Performance Algorithms and Software in Nonlinear Optimization
, 1998
A feasible BFGS interior point algorithm for solving strongly convex minimization problems
 SIAM J. Optim.
, 2000
Abstract

Cited by 18 (1 self)
We propose a BFGS primal-dual interior point method for minimizing a convex function on a convex set defined by equality and inequality constraints. The algorithm generates feasible iterates and consists of computing approximate solutions of the optimality conditions perturbed by a sequence of positive parameters µ converging to zero. We prove that it converges q-superlinearly for each fixed µ. We also show that it is globally convergent to the analytic center of the primal-dual optimal set when µ tends to 0 and strict complementarity holds.
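The µ-perturbed optimality conditions described in the abstract above can be illustrated on a toy one-variable problem. This is a sketch under simplifying assumptions, not the paper's feasible BFGS method (it uses exact derivatives and plain Newton steps): minimize (x + 1)^2 subject to x >= 0, whose solution is x* = 0 with multiplier z* = 2, via the perturbed system 2(x + 1) - z = 0, x·z = µ.

```python
# Toy illustration of solving perturbed optimality conditions for a
# sequence of barrier parameters mu -> 0 (not the paper's algorithm):
#     minimize (x + 1)^2  subject to  x >= 0
# Perturbed KKT system:  2(x + 1) - z = 0,  x * z = mu,  x, z > 0.
def solve_perturbed_kkt(mu, x, z, tol=1e-12):
    """Newton's method on the mu-perturbed KKT system."""
    for _ in range(100):
        r1 = 2.0 * (x + 1.0) - z          # stationarity residual
        r2 = x * z - mu                   # perturbed complementarity
        if abs(r1) + abs(r2) < tol:
            break
        det = 2.0 * x + z                 # det of Jacobian [[2, -1], [z, x]]
        dx = (-r1 * x - r2) / det
        dz = (z * r1 - 2.0 * r2) / det
        # fraction-to-boundary rule keeps the iterate strictly interior
        alpha = 1.0
        if dx < 0:
            alpha = min(alpha, -0.995 * x / dx)
        if dz < 0:
            alpha = min(alpha, -0.995 * z / dz)
        x, z = x + alpha * dx, z + alpha * dz
    return x, z

x, z, mu = 1.0, 1.0, 1.0
while mu > 1e-10:
    x, z = solve_perturbed_kkt(mu, x, z)  # warm-start from previous mu
    mu *= 0.2
```

After the loop, x is within roundoff of the constrained minimizer 0 and z is close to the multiplier 2, illustrating convergence along the central path as µ shrinks.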
A PRIMAL-DUAL TRUST-REGION ALGORITHM FOR NONLINEAR OPTIMIZATION
, 2003
Abstract

Cited by 18 (3 self)
This paper concerns general (nonconvex) nonlinear optimization when first and second derivatives of the objective and constraint functions are available. The proposed method is based on finding an approximate solution of a sequence of unconstrained subproblems parameterized by a scalar parameter. The objective function of each unconstrained subproblem is an augmented penalty-barrier function that involves both primal and dual variables. Each subproblem is solved using a second-derivative Newton-type method that employs a combined trust region and line search strategy to ensure global convergence. It is shown that the trust-region step can be computed by factorizing a sequence of systems with diagonally modified primal-dual structure, where the inertia of these systems can be determined without recourse to a special factorization method. This has the benefit that off-the-shelf linear system software can be used at all times, allowing the straightforward extension to large-scale problems. Numerical results are given for problems in the COPS test collection.
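The inertia-controlling idea mentioned above (modify the diagonal of a primal-dual system until it has the inertia (n, m, 0) required for a useful step) can be sketched as follows. This is illustrative only: the eigenvalue-based `inertia` here stands in for the symmetric indefinite (LDL^T) factorization a real implementation would read the inertia from, and `corrected_kkt` with its shift schedule is a hypothetical helper, not the paper's update rule.

```python
import numpy as np

def inertia(K, tol=1e-10):
    """Counts of (positive, negative, zero) eigenvalues of symmetric K.
    A production solver would get these from an LDL^T factorization."""
    w = np.linalg.eigvalsh(K)
    return (int((w > tol).sum()), int((w < -tol).sum()),
            int((abs(w) <= tol).sum()))

def corrected_kkt(H, A, delta=0.0, grow=10.0, max_tries=60):
    """Shift the Hessian block by delta*I until the KKT matrix
    [[H + delta*I, A^T], [A, 0]] has inertia (n, m, 0), the
    signature required for the step to be a descent direction."""
    n, m = H.shape[0], A.shape[0]
    for _ in range(max_tries):
        K = np.block([[H + delta * np.eye(n), A.T],
                      [A, np.zeros((m, m))]])
        if inertia(K) == (n, m, 0):
            return K, delta
        delta = max(delta * grow, 1e-4)   # increase the diagonal shift
    raise RuntimeError("inertia could not be corrected")
```

With an indefinite Hessian block such as H = diag(0, -1) and A = [1 0], the unshifted system has the wrong inertia, so a positive shift is returned.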
A Convergent Infeasible Interior-Point Trust-Region Method for Constrained Minimization
 SIAM Journal on Optimization
, 1999
Abstract

Cited by 16 (0 self)
We study an infeasible interior-point trust-region method for constrained minimization. This method uses a logarithmic-barrier function for the slack variables and updates the slack variables using a second-order correction. We show that if a certain set containing the iterates is bounded and the origin is not in the convex hull of the nearly active constraint gradients everywhere on this set, then any cluster point of the iterates is a first-order stationary point. If the cluster point satisfies an additional assumption (which holds when the constraints are linear or when the cluster point satisfies strict complementarity and a local error bound holds), then it is a second-order stationary point. Key words: nonlinear program, logarithmic-barrier function, interior-point method, trust-region strategy, first- and second-order stationary points, semidefinite programming. 1 Introduction. We consider the nonlinear program with inequality constraints: minimize f(x) subject to g(x) = [g1(x) ... gm(...
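One common way to set up the logarithmic-barrier treatment of slacks described above is to rewrite "minimize f(x) subject to g(x) >= 0" as "minimize f(x) - mu * sum(log s_i) subject to g(x) = s". Below is a minimal sketch of the resulting barrier merit value; `barrier_value` is a hypothetical helper, and it assumes the equality residual g(x) - s is handled separately (e.g. by the trust-region machinery), which may not match this paper's exact bookkeeping.

```python
import math

def barrier_value(f, x, s, mu):
    """Barrier objective f(x) - mu * sum(log s_i) for the slack
    formulation; the slacks s must stay strictly positive so that
    iterates remain in the interior of the feasible region."""
    if any(si <= 0.0 for si in s):
        return math.inf   # outside the interior: barrier is +infinity
    return f(x) - mu * sum(math.log(si) for si in s)
```

For f(x) = x0^2 at x = [2] with s = [1], the log term vanishes and the value is f alone; a nonpositive slack is rejected with +infinity, which is what forces iterates to stay interior.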
Interior Point Multigrid Methods For Topology Optimization
 Structural Optimization
, 1998
Abstract

Cited by 10 (0 self)
In this paper, a new multigrid interior point approach to topology optimization problems in the context of the homogenization method is presented. The key observation is that nonlinear interior point methods lead to linear-quadratic subproblems with structures that can be favorably exploited within multigrid methods. Primal as well as primal-dual formulations are discussed. The multigrid approach is based on the transformed smoother paradigm. Numerical results for an example problem are presented. 1 Introduction and Problem Formulation. The search for optimal shapes of elastic materials is a mathematical task of prevailing importance and still a challenge for numerical algorithms to be developed. For a general overview of the field of structural optimization, we refer to the monograph [2] and the survey article [19]. There are basically two alternative approaches to structural optimization: the discrete one, correlated with the term "truss topology design", and the continuous one...