Results 1-10 of 19
Sequential Quadratic Programming
, 1995
"... this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ..."
Abstract

Cited by 162 (4 self)
In this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ...
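As a quick illustration of the SQP idea this survey covers (a toy sketch of ours, not the paper's algorithm): for an equality-constrained problem, each iteration solves a quadratic subproblem, which for the QP below reduces to one KKT linear system. The problem data (f, c) and the name sqp_solve are our own assumptions.

```python
import numpy as np

# Toy problem:  min f(x) = x1^2 + x2^2   s.t.  c(x) = x1 + x2 - 1 = 0.
# Each SQP iteration solves the QP subproblem
#   min  g^T d + 0.5 d^T H d   s.t.  A d + c = 0
# via its KKT system  [H A^T; A 0][d; lam] = [-g; -c].

def f_grad(x):
    return 2.0 * x                         # gradient of x1^2 + x2^2

def c_val(x):
    return np.array([x[0] + x[1] - 1.0])   # constraint residual

def c_jac(x):
    return np.array([[1.0, 1.0]])          # constraint Jacobian A

def sqp_solve(x, iters=10):
    for _ in range(iters):
        g, c, A = f_grad(x), c_val(x), c_jac(x)
        H = 2.0 * np.eye(2)                # exact Hessian of the Lagrangian here
        K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
        sol = np.linalg.solve(K, -np.concatenate([g, c]))
        x = x + sol[:2]                    # full SQP (Newton) step
    return x

x_star = sqp_solve(np.array([3.0, -1.0]))
print(x_star)  # converges to (0.5, 0.5)
```

Because the toy objective is quadratic and the constraint linear, a single step already lands on the solution; on genuinely nonlinear problems the same loop is combined with a line search and a Hessian approximation.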
On the sequential quadratically constrained quadratic programming methods
 Math. Oper. Res
, 2004
"... doi 10.1287/moor.1030.0069 ..."
Sequential quadratically constrained quadratic programming norm-relaxed algorithm of strongly sub-feasible directions, manuscript
"... Abstract. This paper discusses optimization problems with nonlinear inequality constraints and presents a new sequential quadraticallyconstrained quadratic programming (NSQCQP) method of feasible directions for solving such problems. At each iteration, the NSQCQP method solves only one subproblem ..."
Abstract

Cited by 12 (4 self)
Abstract. This paper discusses optimization problems with nonlinear inequality constraints and presents a new sequential quadratically constrained quadratic programming (NSQCQP) method of feasible directions for solving such problems. At each iteration, the NSQCQP method solves only one subproblem, which consists of a convex quadratic objective function, convex quadratic equality constraints, and a perturbation variable, and yields a feasible direction of descent (improved direction). The following results on the NSQCQP are obtained: the subproblem solved at each iteration is feasible and solvable; the NSQCQP is globally convergent under the Mangasarian-Fromovitz constraint qualification (MFCQ); the improved direction can avoid the Maratos effect without the assumption of strict complementarity; the NSQCQP is superlinearly and quasi-quadratically convergent under some weak assumptions, without the strict complementarity assumption and the linear independence constraint qualification (LICQ). Key Words. Inequality constraints, optimization, quadratic constraints, quadratic programming, convergence rate.
A Variant of the Topkis-Veinott Method for Solving Inequality Constrained Optimization Problems
 J. Appl. Math. Optim
, 1997
"... . In this paper, we give a variant of the TopkisVeinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm ..."
Abstract

Cited by 5 (0 self)
In this paper, we give a variant of the Topkis-Veinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm is shown to be globally convergent in the sense that every accumulation point of the sequence generated by the algorithm is a Fritz-John point of the problem. We introduce a Fritz-John (FJ) function, an FJ1 strong second-order sufficiency condition (FJ1-SSOSC) and an FJ2 strong second-order sufficiency condition (FJ2-SSOSC), and then show, without any constraint qualification (CQ), that (i) if an FJ point z satisfies the FJ1-SSOSC, then there exists a neighborhood N(z) of z such that for any FJ point y ∈ N(z) \ {z}, f0(y) ≠ f0(z), where f0 is the objective function of the problem; (ii) if an FJ point z satisfies the FJ2-SSOSC, then z is a strict local minimum of the problem. The resu...
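The direction-finding subproblem behind the Topkis-Veinott method can be illustrated on a toy instance (our own data; the paper's variant replaces the classical LP below by a positive semidefinite QP). One minimizes z subject to ∇f(x)ᵀd ≤ z, g(x) + ∇g(x)ᵀd ≤ z, and ‖d‖∞ ≤ 1; a negative optimal value certifies a feasible descent direction. Here we simply brute-force z(d) = max(∇fᵀd, g + ∇gᵀd) over a grid on the box:

```python
import numpy as np

# Toy data: f(x) = x1^2 + x2^2, constraint g(x) = 1 - x1 <= 0,
# evaluated at the feasible point x = (2, 1).
grad_f = np.array([4.0, 2.0])                 # grad f at x = (2, 1)
g_val, grad_g = -1.0, np.array([-1.0, 0.0])   # g(x) and grad g at x

# z(d) = max( grad_f . d,  g + grad_g . d ), minimized over the box |d_i| <= 1.
d1, d2 = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
z = np.maximum(grad_f[0] * d1 + grad_f[1] * d2,
               g_val + grad_g[0] * d1 + grad_g[1] * d2)

k = np.unravel_index(np.argmin(z), z.shape)
# Optimal value is -1.2 at d = (0.2, -1): a feasible descent direction.
print(round(z[k], 6), round(d1[k], 6), round(d2[k], 6))
```

The brute-force grid stands in for an LP/QP solver purely for illustration; in the actual method this subproblem is solved exactly at every iterate.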
An SQP Feasible Descent Algorithm for Nonlinear Inequality Constrained Optimization Without Strict Complementarity
, 2003
"... Available online at www.sciencedirect.com computers & oc,,,.c. ~ , , ,., cT. mathematics with applications ..."
Abstract

Cited by 3 (1 self)
Available online at www.sciencedirect.com. Computers & Mathematics with Applications.
Exact Penalty Methods
 In I. Ciocco (Ed.), Algorithms for Continuous Optimization
, 1994
"... . Exact penalty methods for the solution of constrained optimization problems are based on the construction of a function whose unconstrained minimizing points are also solution of the constrained problem. In the first part of this paper we recall some definitions concerning exactness properties of ..."
Abstract

Cited by 3 (1 self)
Exact penalty methods for the solution of constrained optimization problems are based on the construction of a function whose unconstrained minimizing points are also solutions of the constrained problem. In the first part of this paper we recall some definitions concerning exactness properties of penalty functions, barrier functions, and augmented Lagrangian functions, and discuss under which assumptions on the constrained problem these properties can be ensured. In the second part of the paper we consider algorithmic aspects of exact penalty methods; in particular, we show that, by making use of continuously differentiable functions that possess exactness properties, it is possible to define implementable algorithms that are globally convergent with superlinear convergence rate towards KKT points of the constrained problem. 1 Introduction "It would be a major theoretic breakthrough in nonlinear programming if a simple continuously differentiable function could be exhibited with th...
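The exactness property is easy to see on a one-dimensional toy example of ours (the nonsmooth ℓ1 penalty, not the differentiable functions this paper constructs): for min x² s.t. 1 - x ≤ 0 the solution is x* = 1 with multiplier λ* = 2, and the penalized function P_σ(x) = x² + σ·max(0, 1 - x) has x* as its unconstrained minimizer exactly when σ exceeds λ*.

```python
import numpy as np

# l1 exact penalty for  min x^2  s.t.  g(x) = 1 - x <= 0  (x* = 1, lam* = 2):
#   P_sigma(x) = x^2 + sigma * max(0, 1 - x).
def penalty(x, sigma):
    return x**2 + sigma * np.maximum(0.0, 1.0 - x)

grid = np.linspace(-2.0, 3.0, 500001)           # fine grid, step 1e-5

x_small = grid[np.argmin(penalty(grid, 1.0))]   # sigma = 1 < lam*: not exact
x_large = grid[np.argmin(penalty(grid, 3.0))]   # sigma = 3 > lam*: exact

print(round(x_small, 3), round(x_large, 3))  # 0.5 (infeasible) and 1.0
```

With σ = 1 the penalized minimizer sits at the infeasible point x = 0.5; raising σ past the multiplier threshold makes the constrained solution the exact unconstrained minimizer, which is the phenomenon the paper's exactness definitions formalize.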
A Nonsmooth Equation Based BFGS Method for Solving KKT Systems in Mathematical Programming
 Journal of Optimization Theory and Applications
, 1998
"... In this paper, we present a BFGS method for solving a KKT system in mathematical programming, based on a nonsmooth equation reformulation of the KKT system. We successively split the nonsmooth equation into equivalent equations with particular structure. Based on the splitting, we develop a BFGS met ..."
Abstract

Cited by 3 (1 self)
In this paper, we present a BFGS method for solving a KKT system in mathematical programming, based on a nonsmooth equation reformulation of the KKT system. We successively split the nonsmooth equation into equivalent equations with a particular structure. Based on the splitting, we develop a BFGS method in which the subproblems are systems of linear equations with symmetric positive definite coefficient matrices. A suitable line search is introduced, under which the generated iterates exhibit an approximate norm descent property. The method is well defined and, under suitable conditions, converges to a KKT point globally and superlinearly without convexity assumptions on the problem.
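The underlying idea of a nonsmooth-equation reformulation of KKT conditions can be sketched on a toy problem of ours (using a plain semismooth Newton step rather than the paper's BFGS scheme): complementarity a ≥ 0, b ≥ 0, ab = 0 is equivalent to the Fischer-Burmeister equation φ(a, b) = √(a² + b²) - a - b = 0, turning the whole KKT system into a root-finding problem.

```python
import numpy as np

# KKT system of  min x^2  s.t.  x >= 1  (multiplier lam), rewritten as
#   F(x, lam) = [ 2x - lam,  phi(lam, x - 1) ] = 0
# with the Fischer-Burmeister function phi(a, b) = sqrt(a^2 + b^2) - a - b.

def F(v):
    x, lam = v
    r = np.hypot(lam, x - 1.0)
    return np.array([2.0 * x - lam, r - lam - (x - 1.0)])

def J(v):
    x, lam = v
    r = max(np.hypot(lam, x - 1.0), 1e-12)   # guard the nonsmooth point (0, 0)
    return np.array([[2.0, -1.0],
                     [(x - 1.0) / r - 1.0, lam / r - 1.0]])

z = np.array([2.0, 1.0])                     # arbitrary starting point
for _ in range(30):
    z = z - np.linalg.solve(J(z), F(z))      # (semismooth) Newton step

print(z)  # approaches the KKT pair (x, lam) = (1, 2)
```

The paper's contribution is to replace this Newton step by BFGS updates whose subproblems are symmetric positive definite linear systems, together with a line search ensuring global convergence.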
A decomposition method based on SQP for a class of multistage nonlinear stochastic programs
"... Multistage stochastic programming problems arise in many practical situations, such as production and manpower planning, portfolio selections and so on. Generally, the size of the deterministic equivalent of stochastic programs can be very large and not be solvable directly by optimization approach ..."
Abstract

Cited by 1 (0 self)
Multistage stochastic programming problems arise in many practical situations, such as production and manpower planning, portfolio selection, and so on. Generally, the deterministic equivalent of a stochastic program can be very large and not directly solvable by standard optimization approaches. Sequential quadratic programming methods are iterative and very effective for solving medium-size nonlinear programs. Based on scenario analysis, a decomposition method based on SQP for solving a class of multistage nonlinear stochastic programs is proposed, which generates the search direction by solving in parallel a set of quadratic programming subproblems, each much smaller than the original problem, at each iteration. Conjugate gradient methods can be introduced to derive estimates of the dual multipliers associated with the nonanticipativity constraints. By selecting the step size to reduce an exact penalty function sufficiently, the algorithm terminates finitely at an approxim...
Constructive Existence Conditions for Systems of Nonlinear Inequalities
"... . The aim of the present paper is that of deriving a few unifying principles at the basis of numerically implementable existence conditions for systems of nonlinear inequalities in IR n . We define different criteria in terms of suitable merit functions and we derive, as special cases, most of the ..."
Abstract
The aim of the present paper is that of deriving a few unifying principles at the basis of numerically implementable existence conditions for systems of nonlinear inequalities in ℝⁿ. We define different criteria in terms of suitable merit functions and we derive, as special cases, most of the known regularity conditions employed for ensuring the convergence of algorithms towards feasible solutions. We also establish new extensions and connections with fixed point theory for nonlinear operators. Key words. Solution of nonlinear inequalities, feasible set, nonlinear programming. 1 Introduction The problem of determining a solution to a system of nonlinear inequalities is a fundamental problem in nonlinear optimization, which plays a major role both in global optimization and in constrained local optimization. In the general case, it is equivalent to a global optimization problem [10], [8]. Indeed, the problem of determining x̄ ∈ ℝⁿ that satisfies a system of nonlinear inequa...
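The merit-function viewpoint mentioned in this abstract can be made concrete on a toy system of ours (the specific system and the gradient-descent solver are illustrative assumptions, not the paper's criteria): feasibility of g(x) ≤ 0 is recast as global minimization of M(x) = ½‖max(0, g(x))‖², which vanishes exactly at feasible points.

```python
import numpy as np

# Toy inequality system:  g1(x) = x1^2 + x2^2 - 1 <= 0,  g2(x) = -x1 <= 0.
# Merit function M(x) = 0.5 * || max(0, g(x)) ||^2 is zero iff x is feasible.

def g(x):
    return np.array([x[0]**2 + x[1]**2 - 1.0, -x[0]])

def merit(x):
    return 0.5 * np.sum(np.maximum(0.0, g(x))**2)

def merit_grad(x):
    gp = np.maximum(0.0, g(x))                       # active violations
    Jg = np.array([[2.0 * x[0], 2.0 * x[1]],         # Jacobian of g
                   [-1.0, 0.0]])
    return Jg.T @ gp

x = np.array([2.0, 2.0])        # infeasible start: g1(x) = 7 > 0
for _ in range(2000):
    x = x - 0.05 * merit_grad(x)

print(merit(x))  # ~0: x is (approximately) feasible
```

Regularity conditions of the kind the paper unifies are exactly what guarantees that stationary points of such merit functions are in fact feasible, rather than spurious local minimizers with positive residual.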