Results 1-10 of 21
Sequential Quadratic Programming
, 1995
"... this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ..."
Abstract

Cited by 114 (2 self)
 Add to MetaCart
this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can
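The core SQP idea the abstract alludes to can be sketched on a toy equality-constrained problem (my own illustration, not from the paper): at each iterate, a local quadratic model of the Lagrangian is minimized subject to linearized constraints, which for equality constraints reduces to solving one KKT linear system per step.

```python
import numpy as np

# Toy problem (hypothetical, chosen for illustration):
#   minimize x1^2 + x2^2  subject to  x1 + x2 = 1.
# Each SQP step solves the KKT system of the local quadratic model
# for the primal step d and the multiplier update.
def sqp(x, lam, tol=1e-10, max_iter=20):
    for _ in range(max_iter):
        g = 2.0 * x                          # gradient of the objective
        c = np.array([x[0] + x[1] - 1.0])    # constraint residual
        A = np.array([[1.0, 1.0]])           # constraint Jacobian
        # Stop when the first-order optimality conditions are satisfied.
        if np.linalg.norm(np.concatenate([g + A.T @ lam, c])) < tol:
            break
        H = 2.0 * np.eye(2)                  # Hessian of the Lagrangian
        # KKT system: [H A^T; A 0] [d; dlam] = -[grad L; c]
        K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
        rhs = -np.concatenate([g + A.T @ lam, c])
        sol = np.linalg.solve(K, rhs)
        x = x + sol[:2]
        lam = lam + sol[2:]
    return x, lam

x_opt, lam_opt = sqp(np.array([2.0, 0.0]), np.zeros(1))
print(x_opt)  # converges to [0.5 0.5]
```

Because the toy objective is quadratic and the constraint linear, one SQP step lands exactly on the solution; on a genuinely nonlinear problem the same loop iterates, typically with a quasi-Newton H and a globalization strategy.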
Parallel Lagrange-Newton-Krylov-Schur methods for PDE-constrained optimization. Part I: The Krylov-Schur solver
 SIAM J. Sci. Comput
, 2000
"... Abstract. Large scale optimization of systems governed by partial differential equations (PDEs) is a frontier problem in scientific computation. The stateoftheart for such problems is reduced quasiNewton sequential quadratic programming (SQP) methods. These methods take full advantage of existin ..."
Abstract

Cited by 72 (11 self)
 Add to MetaCart
Abstract. Large scale optimization of systems governed by partial differential equations (PDEs) is a frontier problem in scientific computation. The stateoftheart for such problems is reduced quasiNewton sequential quadratic programming (SQP) methods. These methods take full advantage of existing PDE solver technology and parallelize well. However, their algorithmic scalability is questionable; for certain problem classes they can be very slow to converge. In this twopart article we propose a new method for steadystate PDEconstrained optimization, based on the idea of full space SQP with reduced space quasiNewton SQP preconditioning. The basic components of the method are: Newton solution of the firstorder optimality conditions that characterize stationarity of the Lagrangian function; Krylov solution of the KarushKuhnTucker (KKT) linear systems arising at each Newton iteration using a symmetric quasiminimum residual method; preconditioning of the KKT system using an approximate state/decision variable decomposition that replaces the forward PDE Jacobians by their own preconditioners, and the decision space Schur complement (the reduced Hessian) by a BFGS approximation or by a twostep stationary method. Accordingly, we term the new method LagrangeNewtonKrylov Schur (LNKS). It is fully parallelizable, exploits the structure of available parallel algorithms for the PDE forward problem, and is locally quadratically convergent. In the first part of the paper we investigate the effectiveness of the KKT linear system solver. We test the method on two optimal control problems in which the flow is described by the steadystate Stokes equations. The
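The state/decision decomposition the abstract describes can be seen on a tiny dense stand-in (my construction, far smaller than any PDE problem): eliminating the state and adjoint variables from the full-space KKT system yields the decision-space Schur complement, i.e. the reduced Hessian, and both routes give the same decision variables.

```python
import numpy as np

# Stand-in problem: minimize 0.5 u^T Q u + 0.5 a d^T d
# subject to A u + B d = f, with "state" u and "decision" d.
rng = np.random.default_rng(0)
n, m, a = 5, 2, 1e-2
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # forward "PDE" operator
B = rng.standard_normal((n, m))
Q = np.eye(n)
f = rng.standard_normal(n)

# Full-space KKT solve in (u, d, p), with adjoint p.
K = np.block([
    [Q,                np.zeros((n, m)), A.T],
    [np.zeros((m, n)), a * np.eye(m),    B.T],
    [A,                B,                np.zeros((n, n))],
])
rhs = np.concatenate([np.zeros(n), np.zeros(m), f])
u, d, p = np.split(np.linalg.solve(K, rhs), [n, n + m])

# Reduced-space solve: eliminate u = A^{-1}(f - B d); the decision-space
# Schur complement (reduced Hessian) is H_r = a I + B^T A^{-T} Q A^{-1} B.
Ainv_B = np.linalg.solve(A, B)
Ainv_f = np.linalg.solve(A, f)
H_r = a * np.eye(m) + Ainv_B.T @ Q @ Ainv_B
g_r = Ainv_B.T @ Q @ Ainv_f
d_red = np.linalg.solve(H_r, g_r)
print(np.allclose(d, d_red))  # → True
```

LNKS exploits exactly this equivalence in the other direction: it iterates in the full space but uses an approximate reduced-space factorization (with the PDE Jacobian replaced by its preconditioner and H_r by a BFGS or stationary approximation) as the preconditioner for a Krylov solver.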
An SQP Method For General Nonlinear Programs Using Only Equality Constrained Subproblems
 MATHEMATICAL PROGRAMMING
, 1993
"... In this paper we describe a new version of a sequential equality constrained quadratic programming method for general nonlinear programs with mixed equality and inequality constraints. Compared with an older version [34] it is much simpler to implement and allows any kind of changes of the working s ..."
Abstract

Cited by 46 (2 self)
 Add to MetaCart
In this paper we describe a new version of a sequential equality constrained quadratic programming method for general nonlinear programs with mixed equality and inequality constraints. Compared with an older version [34] it is much simpler to implement and allows any kind of changes of the working set in every step. Our method relies on a strong regularity condition. As far as it is applicable the new approach is superior to conventional SQPmethods, as demonstrated by extensive numerical tests.
A New Technique For Inconsistent QP Problems In The SQP Method
 University at Darmstadt, Department of Mathematics, preprint 1561, Darmstadt
, 1993
"... Successful treatment of inconsistent QP problems is of major importance in the SQP method, since such occur quite often even for well behaved nonlinear programming problems. This paper presents a new technique for regularizing inconsistent QP problems, which compromises in its properties between the ..."
Abstract

Cited by 7 (2 self)
 Add to MetaCart
Successful treatment of inconsistent QP problems is of major importance in the SQP method, since such occur quite often even for well behaved nonlinear programming problems. This paper presents a new technique for regularizing inconsistent QP problems, which compromises in its properties between the simple technique of Pantoja and Mayne [34] and the highly successful, but expensive one of Tone [44]. Global convergence of a corresponding algorithm is shown under reasonable weak conditions. Numerical results are reported which show that this technique, combined with a special method for the case of regular subproblems, is quite competitive to highly appreciated established ones. Key words: sequential quadratic programming, SQP method, nonlinear programming AMS(MOS) subject classification: primary 90C30, secondary 65K05 1 NOTATION Superscripts on a vector denote elements of sequences. All vectors are column vectors. For a vectorvalued function g rg(x) denotes the transposed Jacobian eval...
Discrete Optimization Methods and their Role in the Integration of Planning and Scheduling
 AICHE SYMPOSIUM SERIES
, 2002
"... The need for improvement in process operations, logistics and supply chain management has created a great demand for the development of optimization models for planning and scheduling. In this paper we first review the major classes of planning and scheduling models that arise in process operations, ..."
Abstract

Cited by 5 (2 self)
 Add to MetaCart
The need for improvement in process operations, logistics and supply chain management has created a great demand for the development of optimization models for planning and scheduling. In this paper we first review the major classes of planning and scheduling models that arise in process operations, and establish the underlying mathematical structure of these problems. As will be shown, the nature of these models is greatly affected by the time representation (discrete or continuous), and is often dominated by discrete decisions. We then briefly review the major recent developments in mixedinteger linear and nonlinear programming, disjunctive programming and constraint programming, as well as general decomposition techniques for solving these problems. We present a general formulation for integrating planning and scheduling to illustrate the models and methods discussed in this paper.
A Robust Algorithm for Optimization With General Equality and Inequality Constraints
"... An algorithm for general nonlinearly constrained optimization is presented, which solves an unconstrained piecewise quadratic subproblem and a quadratic programming subproblem at each iterate. The algorithm is robust since it can circumvent the difficulties associated with the possible inconsistency ..."
Abstract

Cited by 5 (4 self)
 Add to MetaCart
An algorithm for general nonlinearly constrained optimization is presented, which solves an unconstrained piecewise quadratic subproblem and a quadratic programming subproblem at each iterate. The algorithm is robust since it can circumvent the difficulties associated with the possible inconsistency of QP subproblem of the original SQP method. Moreover, the algorithm can converge to a point which satisfies a certain firstorder necessary optimality condition even when the original problem is itself infeasible, which is a feature of Burke and Han's methods(1989). Unlike Burke and Han's methods(1989), however, we do not introduce additional bound constraints. The algorithm solves the same subproblems as HanPowell SQP algorithm at feasible points of the original problem. Under certain assumptions, it is shown that the algorithm coincide with the HanPowell method when the iterates are sufficiently close to the solution. Some global convergence results are proved and local superlinear co...
Analyse und Restrukturierung eines Verfahrens zur direkten Lösung von Optimalsteuerungsproblemen (The Theory of MUSCOD in a Nutshell)
, 1995
"... MUSCOD (MU ltiple Shooting COde for Direct Optimal Control) is the implementation of an algorithm for the direct solution of optimal control problems. The method is based on multiple shooting combined with a sequential quadratic programming (SQP) technique; its original version was developed in the ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
MUSCOD (MU ltiple Shooting COde for Direct Optimal Control) is the implementation of an algorithm for the direct solution of optimal control problems. The method is based on multiple shooting combined with a sequential quadratic programming (SQP) technique; its original version was developed in the early 1980s by Plitt under the supervision of Bock [Plitt81, Bock84]. The following report is intended to describe the basic aspects of the underlying theory in a concise but readable form. Such a description is not yet available: the paper by Bock and Plitt [Bock84] gives a good overview of the method, but it leaves out too many important details to be a complete reference, while the diploma thesis by Plitt [Plitt81], on the other hand, presents a fairly complete description, but is rather difficult to read. Throughout the present document, emphasis is given to a clear presentation of the concepts upon which MUSCOD is based. An effort has been made to properly reflect the structure of the a...
Advances in Mathematical Programming for Automated Design Integration
 KOREAN J. CHEM. ENG
, 1999
"... This paper presents a review of advances that have taken place in the mathematical programming approach to process design and synthesis. A review is first presented on the algorithms that are available for solving MINLP problems, and its most recent variant, Generalized Disjunctive Programming model ..."
Abstract

Cited by 4 (3 self)
 Add to MetaCart
This paper presents a review of advances that have taken place in the mathematical programming approach to process design and synthesis. A review is first presented on the algorithms that are available for solving MINLP problems, and its most recent variant, Generalized Disjunctive Programming models. The formulation of superstructures, models and solution strategies is also discussed for the effective solution of the corresponding optimization problems. The rest of the paper is devoted to reviewing recent mathematical programming models for the synthesis of reactor networks, distillation sequences, heat exchanger networks, mass exchanger networks, utility plants, and total flowsheets. As will be seen from this review, the progress that has been achieved in this area over the last decade is very significant.
Relaxing Convergence Conditions To Improve The Convergence Rate
, 1999
"... Standard global convergence proofs are examined to determine why some algorithms perform better than other algorithms. We show that relaxing the conditions required to prove global convergence can improve an algorithm's performance. Further analysis indicates that minimizing an estimate of the dista ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
Standard global convergence proofs are examined to determine why some algorithms perform better than other algorithms. We show that relaxing the conditions required to prove global convergence can improve an algorithm's performance. Further analysis indicates that minimizing an estimate of the distance to the minimum relaxes the convergence conditions in such a way as to improve an algorithm's convergence rate. A new linesearch algorithm based on these ideas is presented that does not force a reduction in the objective function at each iteration, yet it allows the objective function to increase during an iteration only if this will result in faster convergence. Unlike the nonmonotone algorithms in the literature, these new functions dynamically adjust to account for changes between the influence of curvature and descent. The result is an optimal algorithm in the sense that an estimate of the distance to the minimum is minimized at each iteration. The algorithm is shown to be well defi...
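For contrast with the dynamic scheme the abstract describes, here is a sketch of the classic nonmonotone acceptance rule from the literature (Grippo-Lampariello-Lucidi style): a step is accepted if it achieves sufficient decrease relative to the maximum of the last few function values, rather than the current one, so the objective may increase on individual iterations.

```python
import numpy as np

# Nonmonotone Armijo backtracking: accept x + t*d once
#   f(x + t*d) <= max(last M values of f) + c * t * grad(x)^T d.
def nonmonotone_step(f, grad, x, d, history, M=5, c=1e-4, beta=0.5):
    fmax = max(history[-M:])          # reference value: recent maximum
    g_dot_d = grad(x) @ d             # directional derivative (descent: < 0)
    t = 1.0
    while f(x + t * d) > fmax + c * t * g_dot_d:
        t *= beta                     # backtrack
    return x + t * d

# Demo on a simple quadratic with steepest-descent directions.
f = lambda x: float(x @ x)
grad = lambda x: 2 * x
x = np.array([3.0, -4.0])
history = [f(x)]
for _ in range(10):
    x = nonmonotone_step(f, grad, x, -grad(x), history)
    history.append(f(x))
print(f(x))  # → 0.0
```

The paper's contribution goes beyond this fixed-window rule by adapting the acceptance condition to an estimate of the distance to the minimum; the sketch above only shows the baseline idea of not forcing monotone descent.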
Exact Penalty Methods
 In I. Ciocco (Ed.), Algorithms for Continuous Optimization
, 1994
"... . Exact penalty methods for the solution of constrained optimization problems are based on the construction of a function whose unconstrained minimizing points are also solution of the constrained problem. In the first part of this paper we recall some definitions concerning exactness properties of ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
. Exact penalty methods for the solution of constrained optimization problems are based on the construction of a function whose unconstrained minimizing points are also solution of the constrained problem. In the first part of this paper we recall some definitions concerning exactness properties of penalty functions, of barrier functions, of augmented Lagrangian functions, and discuss under which assumptions on the constrained problem these properties can be ensured. In the second part of the paper we consider algorithmic aspects of exact penalty methods; in particular we show that, by making use of continuously differentiable functions that possess exactness properties, it is possible to define implementable algorithms that are globally convergent with superlinear convergence rate towards KKT points of the constrained problem. 1 Introduction "It would be a major theoretic breakthrough in nonlinear programming if a simple continuously differentiable function could be exhibited with th...