Results 1 – 9 of 9
Nonlinear Programming without a penalty function
Mathematical Programming, 2000
Abstract

Cited by 164 (27 self)
In this paper the solution of nonlinear programming problems by a Sequential Quadratic Programming (SQP) trust-region algorithm is considered. The aim of the present work is to promote global convergence without the need to use a penalty function. Instead, a new concept of a "filter" is introduced which allows a step to be accepted if it reduces either the objective function or the constraint violation function. Numerical tests on a wide range of test problems are very encouraging and the new algorithm compares favourably with LANCELOT and an implementation of Sl1QP.
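The filter acceptance rule described in this abstract can be sketched in a few lines. The sketch below is illustrative only, not the paper's implementation: a filter is held as a list of (constraint-violation, objective) pairs, and a trial point is accepted if no stored pair dominates it, i.e. it improves either the objective f or the violation h against every entry. All function and variable names are hypothetical.

```python
# Minimal sketch of a filter acceptance test (illustrative, assuming a filter
# stored as a list of (h, f) pairs: constraint violation h, objective f).

def acceptable(f_trial, h_trial, filter_pairs):
    """Accept the trial point if no filter entry dominates it, i.e. the trial
    improves either h or f relative to every stored pair."""
    return all(h_trial < h_k or f_trial < f_k for (h_k, f_k) in filter_pairs)

def update_filter(f_trial, h_trial, filter_pairs):
    """Add the accepted point and drop any entries it dominates."""
    kept = [(h_k, f_k) for (h_k, f_k) in filter_pairs
            if not (h_trial <= h_k and f_trial <= f_k)]
    kept.append((h_trial, f_trial))
    return kept

filt = [(0.5, 10.0), (0.1, 12.0)]
print(acceptable(11.0, 0.05, filt))  # True: violation better than both entries
print(acceptable(13.0, 0.6, filt))   # False: dominated by (0.5, 10.0)
```

A real filter method adds further safeguards (e.g. a sufficient-reduction margin and an upper bound on h); this sketch only shows the dominance test itself.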
Integrating SQP and branch-and-bound for Mixed Integer Nonlinear Programming
Computational Optimization and Applications, 1998
Abstract

Cited by 23 (0 self)
This paper considers the solution of Mixed Integer Nonlinear Programming (MINLP) problems. Classical methods for the solution of MINLP problems decompose the problem by separating the nonlinear part from the integer part. This approach is largely due to the existence of packaged software for solving Nonlinear Programming (NLP) and Mixed Integer Linear Programming problems. In contrast, an integrated approach to solving MINLP problems is considered here. This new algorithm is based on branch-and-bound, but does not require the NLP problem at each node to be solved to optimality. Instead, branching is allowed after each iteration of the NLP solver. In this way, the nonlinear part of the MINLP problem is solved whilst searching the tree. The nonlinear solver that is considered in this paper is a Sequential Quadratic Programming solver. A numerical comparison of the new method with nonlinear branch-and-bound is presented and a factor of about 3 improvement over branch-and-bound is observed...
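The key idea in this abstract, branching before the node NLP is solved to optimality, can be illustrated with a toy skeleton. This is not the paper's algorithm: the NLP solver is replaced by a single gradient step on a one-dimensional quadratic, the bound estimate is crude, and all names are hypothetical; it only shows the control flow of interleaving solver iterations with branching on a fractional variable.

```python
# Toy skeleton of early-branching branch-and-bound (illustrative only).
# The "NLP solver" is a stand-in: one gradient step on f(x) = (x - 2.3)**2.
import heapq

def toy_nlp_step(x, grad, lr=0.5):
    # one iteration of a stand-in NLP solver (projected gradient step)
    return x - lr * grad(x)

def integrated_bnb(grad, f, x0=0.0, iters_per_node=3):
    best_val, best_x = float("inf"), None
    # each node: (bound_estimate, x, lo, hi); x must be integer in [lo, hi]
    heap = [(f(x0), x0, -10, 10)]
    while heap:
        _, x, lo, hi = heapq.heappop(heap)
        # a few solver iterations only, then decide: fathom or branch early
        for _ in range(iters_per_node):
            x = min(max(toy_nlp_step(x, grad), lo), hi)
        xi = round(x)
        if abs(x - xi) < 1e-6:                 # integer-feasible point
            if f(xi) < best_val:
                best_val, best_x = f(xi), xi
        elif f(x) < best_val:                  # branch on the fractional x
            fl = int(x // 1)
            if lo <= fl:
                heapq.heappush(heap, (f(x), min(x, fl), lo, fl))
            if fl + 1 <= hi:
                heapq.heappush(heap, (f(x), max(x, fl + 1), fl + 1, hi))
    return best_x, best_val
```

For the toy objective the search branches around the continuous minimizer 2.3 and returns the integer x = 2. A real integrated method would carry the SQP working set between parent and child nodes, which this sketch omits.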
SQP methods for large-scale nonlinear programming
, 1999
Abstract

Cited by 9 (0 self)
We compare and contrast a number of recent sequential quadratic programming (SQP) methods that have been proposed for the solution of large-scale nonlinear programming problems. Both line-search and trust-region approaches are considered, as are the implications of interior-point and quadratic programming methods.
Relaxing Convergence Conditions To Improve The Convergence Rate
, 1999
Abstract

Cited by 3 (0 self)
Standard global convergence proofs are examined to determine why some algorithms perform better than others. We show that relaxing the conditions required to prove global convergence can improve an algorithm's performance. Further analysis indicates that minimizing an estimate of the distance to the minimum relaxes the convergence conditions in such a way as to improve an algorithm's convergence rate. A new line-search algorithm based on these ideas is presented that does not force a reduction in the objective function at each iteration, yet it allows the objective function to increase during an iteration only if this will result in faster convergence. Unlike the nonmonotone algorithms in the literature, these new functions dynamically adjust to account for the changing influence of curvature and descent. The result is an optimal algorithm in the sense that an estimate of the distance to the minimum is minimized at each iteration. The algorithm is shown to be well defi...
Symbiosis between Linear Algebra and Optimization
, 1999
Abstract

Cited by 2 (0 self)
The efficiency and effectiveness of most optimization algorithms hinges on the numerical linear algebra algorithms that they utilize. Effective linear algebra is crucial to their success, and because of this, optimization applications have motivated fundamental advances in numerical linear algebra. This essay will highlight contributions of numerical linear algebra to optimization, as well as some optimization problems encountered within linear algebra that contribute to a symbiotic relationship.

1 Introduction

The work in any continuous optimization algorithm neatly partitions into two pieces: the work in acquiring information through evaluation of the function and perhaps its derivatives, and the overhead involved in generating points approximating an optimal point. More often than not, this second part of the work is dominated by linear algebra, usually in the form of solution of a linear system or least squares problem and updating of matrix information. Thus, members of the optim...
Smooth Exact Penalty and Barrier Functions for Nonsmooth Optimization
Abstract
For constrained nonsmooth optimization problems, continuously differentiable penalty functions and barrier functions are given. They are proved exact in the sense that under some nondegeneracy assumption, local optimizers of a nonlinear program are also optimizers of the associated penalty or barrier function. This is achieved by augmenting the dimension of the program by a variable that controls the regularization of the nonsmooth terms and the weight of the penalty or barrier terms.
Math. Program., Ser. A 91: 239–269 (2002). DOI 10.1007/s101070100244
, 2001
Abstract
Abstract. In this paper the solution of nonlinear programming problems by a Sequential Quadratic Programming (SQP) trust-region algorithm is considered. The aim of the present work is to promote global convergence without the need to use a penalty function. Instead, a new concept of a “filter” is introduced which allows a step to be accepted if it reduces either the objective function or the constraint violation function. Numerical tests on a wide range of test problems are very encouraging and the new algorithm compares favourably with LANCELOT and an implementation of Sl1QP. Key words. nonlinear programming – SQP – filter – penalty function
Symbiosis between linear algebra and optimization (www.elsevier.nl/locate/cam)
, 1999
Abstract
The efficiency and effectiveness of most optimization algorithms hinges on the numerical linear algebra algorithms that they utilize. Effective linear algebra is crucial to their success, and because of this, optimization applications have motivated fundamental advances in numerical linear algebra. This essay will highlight contributions of numerical linear algebra to optimization, as well as some optimization problems encountered within linear algebra that contribute to a
Abstract
In recent years, cone-constrained optimization problems have attracted considerable attention, especially second-order cone programming, which has been studied in detail and for which corresponding theoretical results have been obtained. In this paper, building on existing work on cone-constrained optimization, we introduce the definition of a projected second-order cone, establish the related properties of this cone and the corresponding convexity of functions, and study the linear projected second-order cone programming problem, its dual problem, and its optimality conditions; the problem can be converted into a corresponding second-order cone programming problem.