Results 1 - 7 of 7
A robust primal-dual interior point algorithm for nonlinear programs
 SIAM Journal on Optimization
, 2004
"... Abstract. We present a primaldual interiorpoint algorithm for solving optimization problems with nonlinear inequality constraints. The algorithm has some of the theoretical properties of trust region methods, but works entirely by line search. Global convergence properties are derived without assu ..."
Abstract

Cited by 11 (2 self)
Abstract. We present a primal-dual interior-point algorithm for solving optimization problems with nonlinear inequality constraints. The algorithm has some of the theoretical properties of trust region methods, but works entirely by line search. Global convergence properties are derived without assuming regularity conditions. The penalty parameter ρ in the merit function is updated adaptively and plays two roles in the algorithm. First, it guarantees that the search directions are descent directions of the updated merit function. Second, it helps to determine a suitable search direction in a decomposed SQP step. It is shown that if ρ is bounded for each barrier parameter µ, then every limit point of the sequence generated by the algorithm is a Karush-Kuhn-Tucker point, whereas if ρ is unbounded for some µ, then the sequence has a limit point which is either a Fritz-John point or a stationary point of a function measuring the violation of the constraints. Numerical results confirm that the algorithm produces the correct results for some hard problems, including the example provided by Wächter and Biegler, for which many of the existing line-search-based interior-point methods have failed to find the right answers.
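The adaptive penalty update described in this abstract can be illustrated with a minimal sketch. The function below assumes a simplified l1 merit function f(x) + ρ‖c(x)‖₁ rather than the paper's actual merit function; the function name, the descent margin θ, and the closed-form update are all hypothetical simplifications of the rule the abstract alludes to.

```python
def penalty_for_descent(gd, norm_c, rho, theta=0.1):
    """Return a penalty parameter at least as large as rho that makes a
    candidate step a descent direction for the l1 merit function
    f(x) + rho * ||c(x)||_1, whose directional derivative along the step
    is bounded above by gd - rho * norm_c.

    gd     -- inner product of the objective gradient with the step
    norm_c -- l1 norm of the constraint violation at the current point
    """
    if norm_c == 0.0:
        # Feasible point: the penalty term cannot produce descent, keep rho.
        return rho
    # Enforce gd - rho*norm_c <= -theta * rho * norm_c, i.e. a margin of descent.
    needed = gd / ((1.0 - theta) * norm_c)
    return max(rho, needed)
```

With gd = 2 and norm_c = 1, the returned ρ ≈ 2.22 makes the model directional derivative 2 − ρ ≤ −θρ, so the step is a descent direction for the merit function.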
Convergence to Second Order Stationary Points in Inequality Constrained Optimization
 Mathematics of Operations Research
, 1998
"... : We propose a new algorithm for the nonlinear inequality constrained minimization problem, and prove that it generates a sequence converging to points satisfying the KKT second order necessary conditions for optimality. The algorithm is a line search algorithm using directions of negative curvature ..."
Abstract

Cited by 8 (1 self)
: We propose a new algorithm for the nonlinear inequality constrained minimization problem, and prove that it generates a sequence converging to points satisfying the KKT second order necessary conditions for optimality. The algorithm is a line search algorithm using directions of negative curvature, and it can be viewed as a nontrivial extension of corresponding known techniques from unconstrained to constrained problems. The main tools employed in the definition and in the analysis of the algorithm are a differentiable exact penalty function and results from the theory of LC¹ functions. Key Words: Inequality constrained optimization, KKT second order necessary conditions, penalty function, LC¹ function, negative curvature direction. 1 Introduction We are concerned with the inequality constrained minimization problem (P) min f(x) s.t. g(x) ≤ 0, where f : ℝⁿ → ℝ and g : ℝⁿ → ℝᵐ are three times continuously differentiable. Our aim is to develop an algorithm that g...
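The unconstrained building block behind negative-curvature methods is easy to sketch: find a direction d with dᵀHd < 0 when the Hessian H is indefinite. The eigendecomposition-based routine below is an illustration of that building block only, not the paper's constrained algorithm.

```python
import numpy as np

def negative_curvature_direction(H, tol=1e-12):
    """Return a unit direction d with d^T H d < 0 for a symmetric matrix H,
    or None when H is (numerically) positive semidefinite."""
    w, V = np.linalg.eigh(H)   # eigenvalues in ascending order
    if w[0] >= -tol:
        return None            # no usable negative curvature
    return V[:, 0]             # eigenvector of the most negative eigenvalue
```

In a line search method such a direction is typically combined with a descent direction so the step decreases the merit function while exploiting negative curvature.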
A Robust Trust Region Algorithm for Solving General Nonlinear Programming
, 1998
"... The trust region approach has been extended to solving nonlinear constrained optimization. Most of these extensions consider only equality constraints and require strong global regularity assumptions. In this paper, a trust region algorithm for solving general nonlinear programming is presented, ..."
Abstract

Cited by 6 (2 self)
The trust region approach has been extended to solving nonlinear constrained optimization. Most of these extensions consider only equality constraints and require strong global regularity assumptions. In this paper, a trust region algorithm for solving general nonlinear programs is presented, which solves an unconstrained piecewise quadratic trust region subproblem and a quadratic programming trust region subproblem at each iteration. A new technique for updating the penalty parameter is introduced. Under very mild conditions, global convergence results are proved.
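The outer loop common to the trust-region methods in this listing is the radius update driven by the ratio of actual to predicted reduction. The rule below uses conventional textbook constants, not this paper's; the paper additionally updates a penalty parameter alongside the radius.

```python
def update_radius(actual_red, pred_red, delta, step_norm,
                  eta1=0.25, eta2=0.75, shrink=0.5, grow=2.0):
    """Generic trust-region radius update from the ratio of actual to
    predicted reduction of the model."""
    r = actual_red / pred_red
    if r < eta1:
        return shrink * step_norm       # poor agreement: shrink around the step
    if r > eta2 and step_norm >= 0.99 * delta:
        return grow * delta             # good agreement on the boundary: expand
    return delta                        # acceptable agreement: keep the radius
```

A ratio near 1 means the quadratic model predicted the reduction well, so the region can safely grow when the step was limited by the boundary.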
Second-order negative-curvature methods for box-constrained and general constrained optimization
, 2009
"... A Nonlinear Programming algorithm that converges to secondorder stationary points is introduced in this paper. The main tool is a secondorder negativecurvature method for boxconstrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is ..."
Abstract

Cited by 6 (0 self)
A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples showing that the new method converges to second-order stationary points in situations in which first-order methods fail are exhibited.
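The PHR augmented Lagrangian mentioned in this abstract has a standard closed form for inequality constraints g(x) ≤ 0, sketched below together with the usual first-order multiplier update. This is the generic PHR construction for illustration; the paper's contribution is coupling it with a second-order box-constrained inner solver, which is not shown.

```python
import numpy as np

def phr_value(fx, gx, lam, rho):
    """PHR augmented Lagrangian for inequality constraints g(x) <= 0:
        L = f(x) + (1/(2*rho)) * sum( max(0, lam_i + rho*g_i)^2 - lam_i^2 ).
    fx is f(x), gx the vector g(x), lam the multiplier estimates."""
    t = np.maximum(0.0, lam + rho * gx)
    return fx + (t @ t - lam @ lam) / (2.0 * rho)

def phr_multiplier_update(gx, lam, rho):
    """First-order multiplier update used between outer iterations."""
    return np.maximum(0.0, lam + rho * gx)
```

Strictly satisfied constraints with zero multipliers contribute nothing, while violated constraints are penalized quadratically; this is what makes the function once but not twice continuously differentiable, matching the class of functions the abstract describes.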
Convergence to a Second-Order Point of a Trust-Region Algorithm with a Nonmonotonic Penalty Parameter for Constrained Optimization
 Rice University
, 1996
"... In a recent paper, the author (Ref. 1) proposed a trustregion algorithm for solving the problem of minimizing a nonlinear function subject to a set of equality constraints. The main feature of the algorithm is that the penalty parameter in the merit function can be decreased whenever it is warrant ..."
Abstract

Cited by 2 (0 self)
In a recent paper, the author (Ref. 1) proposed a trust-region algorithm for solving the problem of minimizing a nonlinear function subject to a set of equality constraints. The main feature of the algorithm is that the penalty parameter in the merit function can be decreased whenever it is warranted. He studied the behavior of the penalty parameter and proved several global and local convergence results. One of these results is that there exists a subsequence of the iterates generated by the algorithm that converges to a point that satisfies the first-order necessary conditions. In the current paper, we show that, for this algorithm, there exists a subsequence of iterates that converges to a point that satisfies both the first-order and the second-order necessary conditions. Key Words: Constrained optimization, equality constrained, penalty parameter, nonmonotonic penalty parameter, convergence, trust-region methods, first-order point, second-order point, necessary conditions.
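For equality-constrained problems, the second-order necessary condition this abstract refers to requires the Hessian of the Lagrangian to be positive semidefinite on the null space of the constraint Jacobian. A simple numerical check of that condition, for illustration only, can be written as:

```python
import numpy as np

def second_order_necessary(H, A, tol=1e-10):
    """Check whether Z^T H Z is positive semidefinite, where the columns
    of Z span the null space of the constraint Jacobian A.  H is the
    Hessian of the Lagrangian at the candidate point."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol * max(A.shape)))
    Z = Vt[rank:].T                    # orthonormal null-space basis
    if Z.shape[1] == 0:
        return True                    # trivial null space: condition holds
    w = np.linalg.eigvalsh(Z.T @ H @ Z)
    return bool(w[0] >= -tol)          # smallest reduced eigenvalue nonnegative
```

A first-order point that fails this check admits a feasible direction of negative curvature, which is exactly what distinguishes the second-order result proved in the paper from the earlier first-order one.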
Managing Approximation Models in Optimization
 Multidisciplinary Design Optimization: StateoftheArt
, 1996
"... It is standard engineering practice to use approximation models in place of expensive simulations to drive an optimal design process based on nonlinear programming algorithms. This paper uses wellestablished notions from the literature on trustregion methods and a powerful global convergence theor ..."
Abstract
It is standard engineering practice to use approximation models in place of expensive simulations to drive an optimal design process based on nonlinear programming algorithms. This paper uses well-established notions from the literature on trust-region methods and a powerful global convergence theory for pattern search methods to manage the interplay between optimization and the fidelity of the approximation models, to ensure that the process converges to a reasonable solution of the original design problem. We present a specific example from the class of algorithms outlined here, but many other interesting options exist that we will explore in later work. The algorithm we present as an example of the management strategies we propose is based on a family of pattern search algorithms developed by the authors. Pattern search methods can be successfully applied when only ranking (ordinal) information is available and when derivatives are either unavailable or unreliable. Since we are inter...
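The derivative-free behavior the abstract describes (only ordinal comparisons of function values, no gradients) is captured by the most basic member of the pattern search family: poll the 2n coordinate directions, move to any improving point, and halve the step when none improves. This is an illustrative sketch of that generic scheme, not the authors' specific model-management algorithm.

```python
import numpy as np

def pattern_search(f, x, step=1.0, tol=1e-6, max_iter=1000):
    """Minimal coordinate pattern search: uses only comparisons of f,
    no derivative information."""
    n = len(x)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(n):
            for s in (step, -step):
                y = x.copy()
                y[i] += s
                fy = f(y)
                if fy < fx:            # ordinal comparison is all we need
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                # refine the mesh when the poll fails
            if step < tol:
                break
    return x, fx
```

Because each accepted move only requires knowing that one value is smaller than another, the same loop works unchanged when f is a cheap approximation model whose absolute values are unreliable but whose ranking is trusted.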
AN EXACT PENALTY ALGORITHM FOR NONLINEAR EQUALITY CONSTRAINED OPTIMIZATION PROBLEMS
"... Abstract. In this paper we define a trustregion globalization strategies to solve a continuously differentiable nonlinear equality constrained minimization problem. The trustregion approach uses a penalty parameter that is proven to be uniformly bounded. Under rather weak hypotheses and without th ..."
Abstract
Abstract. In this paper we define a trust-region globalization strategy to solve a continuously differentiable nonlinear equality constrained minimization problem. The trust-region approach uses a penalty parameter that is proven to be uniformly bounded. Under rather weak hypotheses, and without the usual regularity assumption that the linearized constraint gradients are linearly independent, we prove that the hybrid algorithm is globally convergent. Moreover, under the standard hypotheses of the SQP method, we prove that the rate of convergence is q-quadratic.