Results 1–10 of 27
Nonlinear Programming without a penalty function
 Mathematical Programming
, 2000
Abstract

Cited by 181 (29 self)
In this paper the solution of nonlinear programming problems by a Sequential Quadratic Programming (SQP) trust-region algorithm is considered. The aim of the present work is to promote global convergence without the need to use a penalty function. Instead, a new concept of a "filter" is introduced which allows a step to be accepted if it reduces either the objective function or the constraint violation function. Numerical tests on a wide range of test problems are very encouraging and the new algorithm compares favourably with LANCELOT and an implementation of Sl1QP.
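The filter acceptance rule described in this abstract can be sketched in a few lines. The pairs, margin-free dominance test, and helper names below are illustrative only; Fletcher and Leyffer's actual rule adds a sufficient-decrease envelope around each filter entry.

```python
# Minimal sketch of a "filter": a list of (f, h) pairs, where f is the
# objective value and h is the constraint violation at an accepted iterate.

def acceptable(f_new, h_new, filter_pairs):
    """A trial point is acceptable if no stored pair dominates it,
    i.e. it improves either f or h against every entry in the filter."""
    return all(f_new < f_i or h_new < h_i for (f_i, h_i) in filter_pairs)

def add_to_filter(f_new, h_new, filter_pairs):
    """Add the accepted pair and drop any entries it now dominates."""
    kept = [(f, h) for (f, h) in filter_pairs if f < f_new or h < h_new]
    kept.append((f_new, h_new))
    return kept

filter_pairs = [(3.0, 0.5), (1.0, 2.0)]
print(acceptable(2.0, 1.0, filter_pairs))  # True: beats each entry in f or in h
print(acceptable(4.0, 3.0, filter_pairs))  # False: dominated by both entries
```

The point of the mechanism is that a step may increase the objective, as long as it reduces infeasibility enough to escape domination, which is what replaces the usual penalty-function merit test.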
The PATH Solver: A Non-Monotone Stabilization Scheme for Mixed Complementarity Problems
 OPTIMIZATION METHODS AND SOFTWARE
, 1995
Abstract

Cited by 179 (35 self)
The Path solver is an implementation of a stabilized Newton method for the solution of the Mixed Complementarity Problem. The stabilization scheme employs a path-generation procedure which is used to construct a piecewise-linear path from the current point to the Newton point; a step length acceptance criterion and a non-monotone path-search are then used to choose the next iterate. The algorithm is shown to be globally convergent under assumptions which generalize those required to obtain similar results in the smooth case. Several implementation issues are discussed, and extensive computational results obtained from problems commonly found in the literature are given.
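The non-monotone idea mentioned in this abstract can be illustrated with a simple acceptance test: the trial merit value need not beat the most recent iterate, only the worst of a sliding window of recent values. This is a generic sketch of the non-monotone principle, not PATH's actual path-search criterion or merit function.

```python
def nonmonotone_accept(merit_new, history, M=5):
    # Accept the trial point if its merit value does not exceed the worst
    # of the last M accepted values. The merit may rise relative to the
    # most recent iterate, which is what makes the scheme non-monotone.
    return merit_new < max(history[-M:])

history = [10.0, 4.0, 6.0]
print(nonmonotone_accept(5.0, history))   # True: 5.0 < 10.0, even though 5.0 > 4.0
print(nonmonotone_accept(12.0, history))  # False: worse than the whole window
```

Allowing such temporary increases lets the stabilized method take full Newton steps more often, instead of being forced to damp them whenever the merit function rises slightly.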
Sequential Quadratic Programming
, 1995
Abstract

Cited by 121 (3 self)
In this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ...
Integrating SQP and branch-and-bound for Mixed Integer Nonlinear Programming
 Computational Optimization and Applications
, 1998
Abstract

Cited by 27 (0 self)
This paper considers the solution of Mixed Integer Nonlinear Programming (MINLP) problems. Classical methods for the solution of MINLP problems decompose the problem by separating the nonlinear part from the integer part. This approach is largely due to the existence of packaged software for solving Nonlinear Programming (NLP) and Mixed Integer Linear Programming problems. In contrast, an integrated approach to solving MINLP problems is considered here. This new algorithm is based on branch-and-bound, but does not require the NLP problem at each node to be solved to optimality. Instead, branching is allowed after each iteration of the NLP solver. In this way, the nonlinear part of the MINLP problem is solved whilst searching the tree. The nonlinear solver that is considered in this paper is a Sequential Quadratic Programming solver. A numerical comparison of the new method with nonlinear branch-and-bound is presented and a factor of about 3 improvement over branch-and-bound is observed...
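The branching trigger in the integrated scheme can be sketched as a fractionality test applied at every NLP iterate rather than only at a node's NLP optimum. The function name, tolerance, and most-fractional selection rule below are illustrative assumptions, not the paper's implementation.

```python
def fractional_var(x, int_vars, tol=1e-4):
    """Return the index of the most fractional integer-restricted variable
    at the current NLP iterate x, or None if x is (near-)integer feasible.
    In the integrated method, a non-None result can trigger branching
    immediately, without first solving the node NLP to optimality."""
    best, best_frac = None, tol
    for i in int_vars:
        frac = abs(x[i] - round(x[i]))
        if frac > best_frac:
            best, best_frac = i, frac
    return best

x = [0.5, 2.0, 1.3]
print(fractional_var(x, int_vars=[1, 2]))  # 2: x[2] = 1.3 is fractional, x[1] is integral
```

Because the test is cheap, it can be evaluated after every SQP iteration, which is what allows the tree search and the nonlinear solve to interleave.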
Modified Wilson's Method For Nonlinear Programs With Nonunique Multipliers
, 1999
Abstract

Cited by 23 (1 self)
In this paper we deal with arbitrary nonlinear constraint functions. We first present a general framework for obtaining superlinear convergence of Newton-type methods for generalized equations with compact solution sets. Then our main aim is to show how this framework can be applied to the Karush-Kuhn-Tucker system and to derive conditions that imply local q-quadratic convergence of a Modified Wilson Method but not the uniqueness of the multiplier vector. This rate of convergence will be shown for the distances of the iterates to the set of KKT points. Josephy [8] proved that Newton's method for generalized equations converges locally ...
Quadratically And Superlinearly Convergent Algorithms For The Solution Of Inequality Constrained Minimization Problems
, 1995
Abstract

Cited by 21 (8 self)
In this paper some Newton and quasi-Newton algorithms for the solution of inequality constrained minimization problems are considered. All the algorithms described produce sequences {x^k} converging q-superlinearly to the solution. Furthermore, under mild assumptions, a q-quadratic convergence rate in x is also attained. Other features of these algorithms are that only the solution of linear systems of equations is required at each iteration and that the strict complementarity assumption is never invoked. First the superlinear or quadratic convergence rate of a Newton-like algorithm is proved. Then, a simpler version of this algorithm is studied and it is shown to be superlinearly convergent. Finally, quasi-Newton versions of the previous algorithms are considered and, provided the sequence defined by the algorithms converges, a characterization of superlinear convergence extending the result of Boggs, Tolle and Wang is given. Key Words: Inequality constrained optimization, New...
A Robust Algorithm for Optimization With General Equality and Inequality Constraints
Abstract

Cited by 6 (5 self)
An algorithm for general nonlinearly constrained optimization is presented, which solves an unconstrained piecewise quadratic subproblem and a quadratic programming subproblem at each iterate. The algorithm is robust since it can circumvent the difficulties associated with the possible inconsistency of the QP subproblem of the original SQP method. Moreover, the algorithm can converge to a point which satisfies a certain first-order necessary optimality condition even when the original problem is itself infeasible, which is a feature of Burke and Han's methods (1989). Unlike Burke and Han's methods (1989), however, we do not introduce additional bound constraints. The algorithm solves the same subproblems as the Han-Powell SQP algorithm at feasible points of the original problem. Under certain assumptions, it is shown that the algorithm coincides with the Han-Powell method when the iterates are sufficiently close to the solution. Some global convergence results are proved and local superlinear co...
On the realization of the Wolfe conditions in reduced quasi-Newton methods for equality constrained optimization
 SIAM Journal on Optimization
, 1997
Abstract

Cited by 5 (0 self)
This paper describes a reduced quasi-Newton method for solving equality constrained optimization problems. A major difficulty encountered by this type of algorithm is the design of a consistent technique for maintaining the positive definiteness of the matrices approximating the reduced Hessian of the Lagrangian. A new approach is proposed in this paper. The idea is to search for the next iterate along a piecewise linear path. The path is designed so that some generalized Wolfe conditions can be satisfied. These conditions allow the algorithm to sustain the positive definiteness of the matrices from iteration to iteration by a mechanism that has turned out to be efficient in unconstrained optimization.
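For context, the classical (weak) Wolfe conditions that the paper generalizes can be checked in one dimension as below. This is only the standard unconstrained form; the generalized conditions along the piecewise linear path in the constrained setting differ, and the function names here are illustrative.

```python
def wolfe_conditions(f, df, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the weak Wolfe conditions for a 1-D step of length alpha
    along direction d: sufficient decrease (Armijo) plus a curvature
    condition that rules out overly short steps."""
    armijo = f(x + alpha * d) <= f(x) + c1 * alpha * df(x) * d
    curvature = df(x + alpha * d) * d >= c2 * df(x) * d
    return armijo and curvature

# Quadratic f(x) = x^2, starting at x = 1 with descent direction d = -1.
f = lambda x: x * x
df = lambda x: 2 * x
print(wolfe_conditions(f, df, 1.0, -1.0, 0.5))   # True: good-sized step
print(wolfe_conditions(f, df, 1.0, -1.0, 0.01))  # False: curvature condition fails
```

The curvature condition is what connects such line searches to positive definiteness: it guarantees the quasi-Newton secant update sees positive curvature, which is the property the paper's generalized conditions preserve in the constrained case.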
Trust Region SQP Methods With Inexact Linear System Solves For Large-Scale Optimization
, 2006
Convergence to a Second-Order Point of a Trust-Region Algorithm with a Nonmonotonic Penalty Parameter for Constrained Optimization
 Rice University
, 1996
Abstract

Cited by 4 (0 self)
In a recent paper, the author (Ref. 1) proposed a trust-region algorithm for solving the problem of minimizing a nonlinear function subject to a set of equality constraints. The main feature of the algorithm is that the penalty parameter in the merit function can be decreased whenever it is warranted. He studied the behavior of the penalty parameter and proved several global and local convergence results. One of these results is that there exists a subsequence of the iterates generated by the algorithm that converges to a point satisfying the first-order necessary conditions. In the current paper, we show that, for this algorithm, there exists a subsequence of iterates that converges to a point that satisfies both the first-order and the second-order necessary conditions. Key Words: Constrained optimization, equality constrained, penalty parameter, nonmonotonic penalty parameter, convergence, trust-region methods, first-order point, second-order point, necessary conditions.