Results 1 - 9 of 9
Sequential Quadratic Programming
, 1995
"... this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ..."
Abstract

Cited by 114 (2 self)
this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can
On the sequential quadratically constrained quadratic programming methods
 Math. Oper. Res.
, 2004
"... doi 10.1287/moor.1030.0069 ..."
A Variant of the Topkis-Veinott Method for Solving Inequality Constrained Optimization Problems
 J. Appl. Math. Optim.
, 1997
"... . In this paper, we give a variant of the TopkisVeinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm ..."
Abstract

Cited by 4 (0 self)
In this paper, we give a variant of the Topkis-Veinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm is shown to be globally convergent in the sense that every accumulation point of the sequence generated by the algorithm is a Fritz-John point of the problem. We introduce a Fritz-John (FJ) function, an FJ1 strong second-order sufficiency condition (FJ1-SSOSC) and an FJ2 strong second-order sufficiency condition (FJ2-SSOSC), and then show, without any constraint qualification (CQ), that (i) if an FJ point z satisfies the FJ1-SSOSC, then there exists a neighborhood N(z) of z such that for any FJ point y ∈ N(z) \ {z}, f0(y) ≠ f0(z), where f0 is the objective function of the problem; (ii) if an FJ point z satisfies the FJ2-SSOSC, then z is a strict local minimum of the problem. The resu...
Exact Penalty Methods
 In I. Ciocco (Ed.), Algorithms for Continuous Optimization
, 1994
"... . Exact penalty methods for the solution of constrained optimization problems are based on the construction of a function whose unconstrained minimizing points are also solution of the constrained problem. In the first part of this paper we recall some definitions concerning exactness properties of ..."
Abstract

Cited by 3 (1 self)
Exact penalty methods for the solution of constrained optimization problems are based on the construction of a function whose unconstrained minimizing points are also solutions of the constrained problem. In the first part of this paper we recall some definitions concerning exactness properties of penalty functions, of barrier functions, and of augmented Lagrangian functions, and discuss under which assumptions on the constrained problem these properties can be ensured. In the second part of the paper we consider algorithmic aspects of exact penalty methods; in particular we show that, by making use of continuously differentiable functions that possess exactness properties, it is possible to define implementable algorithms that are globally convergent with a superlinear convergence rate towards KKT points of the constrained problem. 1 Introduction "It would be a major theoretic breakthrough in nonlinear programming if a simple continuously differentiable function could be exhibited with th...
A Nonsmooth Equation Based BFGS Method for Solving KKT Systems in Mathematical Programming
 Journal of Optimization Theory and Applications
, 1998
"... In this paper, we present a BFGS method for solving a KKT system in mathematical programming, based on a nonsmooth equation reformulation of the KKT system. We successively split the nonsmooth equation into equivalent equations with particular structure. Based on the splitting, we develop a BFGS met ..."
Abstract

Cited by 2 (1 self)
In this paper, we present a BFGS method for solving a KKT system in mathematical programming, based on a nonsmooth equation reformulation of the KKT system. We successively split the nonsmooth equation into equivalent equations with a particular structure. Based on the splitting, we develop a BFGS method in which the subproblems are systems of linear equations with symmetric positive definite coefficient matrices. A suitable line search is introduced under which the generated iterates exhibit an approximate norm descent property. The method is well defined and, under suitable conditions, converges to a KKT point globally and superlinearly without convexity assumptions on the problem.
SQP and PDIP algorithms for Nonlinear Programming
, 2007
"... Penalty and barrier methods are indirect ways of solving constrained optimization problems, using techniques developed in the unconstrained optimization realm. In what follows we shall give the foundation of two more direct ways of solving constrained optimization problems, namely Sequential Quadrat ..."
Abstract
Penalty and barrier methods are indirect ways of solving constrained optimization problems, using techniques developed in the unconstrained optimization realm. In what follows we shall give the foundation of two more direct ways of solving constrained optimization problems, namely Sequential Quadratic Programming (SQP) methods and Primal-Dual Interior Point (PDIP) methods. 1 Sequential Quadratic Programming For the derivation of the Sequential Quadratic Programming method we shall use the equality constrained problem

minimize_x f(x) subject to h(x) = 0, (ECP)

where f: R^n → R and h: R^n → R^m are smooth functions. An understanding of this problem is essential in the design of SQP methods for general nonlinear programming problems. The KKT conditions for this problem are given by

∇f(x) + Σ_{i=1}^m λ_i ∇h_i(x) = 0, (1a)
h(x) = 0, (1b)

where λ ∈ R^m are the Lagrange multipliers associated with the equality constraints. If we use the Lagrangian

L(x, λ) = f(x) + Σ_{i=1}^m λ_i h_i(x), (2)

we can write the KKT conditions (1) more compactly as

(∇_x L(x, λ), ∇_λ L(x, λ)) = 0. (EQKKT)

As with Newton's method for unconstrained optimization, the main idea behind SQP is to model problem (ECP) at a given point x^(k) by a quadratic programming subproblem and then use the solution to this subproblem to construct a more accurate approximation x^(k+1). If we perform a Taylor series expansion of system (EQKKT) about (x^(k), λ^(k)) we obtain ∇_x L(x^(k), λ^(k)) ...
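The Newton step on system (EQKKT) that this derivation leads to can be sketched as follows; the concrete problem data below (a quadratic objective with one linear constraint) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal Newton-KKT iteration for the equality constrained problem (ECP),
#   minimize f(x) subject to h(x) = 0,
# under an illustrative choice of data (not from the paper):
#   f(x) = x1^2 + x2^2,  h(x) = x1 + x2 - 1  (n = 2, m = 1).

def f_grad(x):
    return 2.0 * x                        # gradient of f

def lagr_hess(x, lam):
    return 2.0 * np.eye(2)                # Hessian of L in x (h is linear here)

def h(x):
    return np.array([x[0] + x[1] - 1.0])  # constraint residual h(x)

def h_jac(x):
    return np.array([[1.0, 1.0]])         # Jacobian of h (m x n)

def sqp_step(x, lam):
    """One Newton step on the KKT conditions (1a)-(1b)."""
    A, H = h_jac(x), lagr_hess(x, lam)
    n, m = x.size, lam.size
    # Linearized KKT system: [[H, A^T], [A, 0]] [dx; dlam] = -[grad_x L; h]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = -np.concatenate([f_grad(x) + A.T @ lam, h(x)])
    d = np.linalg.solve(K, rhs)
    return x + d[:n], lam + d[n:]

x, lam = np.array([2.0, -1.0]), np.array([0.0])
for _ in range(5):
    x, lam = sqp_step(x, lam)
print(x, lam)   # converges to x = (0.5, 0.5), lam = -1
```

Because the illustrative objective is quadratic and the constraint linear, a single Newton step is already exact; for genuinely nonlinear f and h the same step is repeated, and practical SQP methods add a line search or trust region on a merit function for global convergence.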
Constructive Existence Conditions for Systems of Nonlinear Inequalities
"... . The aim of the present paper is that of deriving a few unifying principles at the basis of numerically implementable existence conditions for systems of nonlinear inequalities in IR n . We define different criteria in terms of suitable merit functions and we derive, as special cases, most of the ..."
Abstract
The aim of the present paper is that of deriving a few unifying principles at the basis of numerically implementable existence conditions for systems of nonlinear inequalities in R^n. We define different criteria in terms of suitable merit functions and we derive, as special cases, most of the known regularity conditions employed for ensuring the convergence of algorithms towards feasible solutions. We also establish new extensions and connections with fixed point theory for nonlinear operators. Key words. Solution of nonlinear inequalities, feasible set, nonlinear programming. 1 Introduction The problem of determining a solution to a system of nonlinear inequalities is a fundamental problem in nonlinear optimization, which plays a major role both in global optimization and in constrained local optimization. In the general case, it is equivalent to a global optimization problem [10][8]. Indeed, the problem of determining x̄ ∈ R^n that satisfies a system of nonlinear inequa...
A decomposition method based on SQP for a class of multistage nonlinear stochastic programs
"... Multistage stochastic programming problems arise in many practical situations, such as production and manpower planning, portfolio selections and so on. Generally, the size of the deterministic equivalent of stochastic programs can be very large and not be solvable directly by optimization approach ..."
Abstract
Multistage stochastic programming problems arise in many practical situations, such as production and manpower planning, portfolio selection and so on. Generally, the size of the deterministic equivalent of stochastic programs can be very large and may not be solvable directly by optimization approaches. Sequential quadratic programming methods are iterative and very effective for solving medium-size nonlinear programming problems. Based on scenario analysis, a decomposition method based on SQP for solving a class of multistage nonlinear stochastic programs is proposed, which generates the search direction by solving in parallel, at each iteration, a set of quadratic programming subproblems much smaller than the original problem. Conjugate gradient methods can be introduced to derive estimates of the dual multipliers associated with the nonanticipativity constraints. By selecting the stepsize to reduce an exact penalty function sufficiently, the algorithm terminates finitely at an approxim...
Switching Stepsize Strategies for SQP
, 2010
"... An SQP algorithm is presented for solving constrained nonlinear programming problems. The algorithm uses three stepsize strategies in order to achieve global and superlinear convergence. Switching rules are implemented that combine the merits and avoid the drawbacks of the three stepsize strategies. ..."
Abstract
An SQP algorithm is presented for solving constrained nonlinear programming problems. The algorithm uses three stepsize strategies in order to achieve global and superlinear convergence. Switching rules are implemented that combine the merits and avoid the drawbacks of the three stepsize strategies. A penalty parameter is determined using an adaptive strategy that aims to achieve sufficient decrease of the activated merit function. Global convergence is established and it is also shown that, locally, unit stepsizes are accepted, so that superlinear convergence is not impeded under standard assumptions. Global convergence and convergence of the stepsizes are displayed on test problems from the Hock and Schittkowski collection.