Results 1-6 of 6
Inexactness issues in the Lagrange-Newton-Krylov-Schur method for PDE-constrained optimization
 Large-Scale PDE-Constrained Optimization, number 30 in Lecture
Abstract

Cited by 6 (0 self)
In this article we present an outline of the Lagrange-Newton-Krylov-Schur (LNKS) method and we discuss how we can improve its work efficiency by carrying out certain computations inexactly, without compromising convergence. LNKS has been designed for PDE-constrained optimization problems. It solves the Karush-Kuhn-Tucker optimality conditions by a Newton-Krylov algorithm. Its key component is a preconditioner based on quasi-Newton reduced space Sequential Quadratic Programming (QN-RSQP) variants. LNKS combines the fast convergence properties of a Newton method with the capability of preconditioned Krylov methods to solve very large linear systems. Nevertheless, even with good preconditioners, the solution of an optimization problem has a cost several times higher than the cost of solving the underlying PDE problem. To accelerate LNKS, its computational components are carried out inexactly: premature termination of iterative algorithms, inexact evaluation of gradients and Jacobians, and approximate line searches. Naturally, several issues arise with respect to the trade-offs between speed and robustness.
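The first kind of inexactness listed above, premature termination of an inner iterative solver, can be illustrated on a toy problem. The sketch below is ours, not the LNKS code: the Newton system is solved by conjugate gradients truncated at a fixed relative residual eta (LNKS chooses the forcing term adaptively and applies Krylov methods to the full KKT system), and the test function F is an arbitrary smooth map with an SPD Jacobian.

```python
import numpy as np

def cg(A, b, rel_tol, max_iter=200):
    """Plain conjugate gradients, stopped once the residual falls below
    rel_tol * ||b||; this premature stop is the 'inexactness'."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= rel_tol * b_norm:
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

def inexact_newton(F, J, x0, eta=0.1, tol=1e-8, max_iter=50):
    """Newton's method in which J(x) s = -F(x) is solved only to a fixed
    relative residual eta; convergence is retained, each step is cheaper."""
    x = x0.astype(float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        x = x + cg(J(x), -r, rel_tol=eta)
    return x

# Toy problem (ours, not from the paper): F(x) = x^3 + x, SPD Jacobian.
F = lambda x: x**3 + x
J = lambda x: np.diag(3.0 * x**2 + 1.0)
x = inexact_newton(F, J, np.linspace(0.5, 2.0, 5))
```

Even with each inner solve accurate only to 10%, the outer iteration still drives the residual to the requested tolerance; the trade-off the abstract mentions is between a looser eta (cheaper steps) and more outer iterations.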
Parallel Newton-Krylov Methods for PDE-Constrained Optimization
 In Proceedings of Supercomputing '99, 1999
Abstract

Cited by 4 (0 self)
Large scale optimization of systems governed by partial differential equations (PDEs) is a frontier problem in scientific computation. The state of the art for solving such problems is reduced-space quasi-Newton sequential quadratic programming (SQP) methods. These take full advantage of existing PDE solver technology and parallelize well. However, their algorithmic scalability is questionable; for certain problem classes they can be very slow to converge. In this paper we propose a full-space Newton-Krylov SQP method that uses the reduced-space quasi-Newton method as a preconditioner. The new method is fully parallelizable; exploits the structure of, and available parallel algorithms for, the PDE forward problem; and is quadratically convergent close to a local minimum. We restrict our attention to boundary value problems and we solve a model optimal flow control problem, with both Stokes and Navier-Stokes equations as constraints. Algorithmic comparisons, scalability results, and para...
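The reduced-space versus full-space distinction can be made concrete on a tiny equality-constrained quadratic program. In the dense numpy sketch below (our toy, not the paper's Stokes problem), Ay stands in for the huge state (PDE) Jacobian and Au for the control Jacobian; the full-space approach solves one KKT saddle-point system, while the reduced-space approach eliminates the state and solves a small dense system in the controls only. Both yield the same minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nu = 6, 2                        # toy "state" and "control" dimensions
H = np.eye(ny + nu)                  # Hessian of a toy quadratic objective
g = rng.standard_normal(ny + nu)
Ay = rng.standard_normal((ny, ny)) + ny * np.eye(ny)  # invertible "PDE" Jacobian
Au = rng.standard_normal((ny, nu))
b = rng.standard_normal(ny)

# Problem: min 0.5 x'Hx - g'x  subject to  Ay y + Au u = b,  x = (y, u).

# Full-space approach: one big KKT (saddle-point) system in (y, u, lambda).
A = np.hstack([Ay, Au])
KKT = np.block([[H, A.T], [A, np.zeros((ny, ny))]])
x_full = np.linalg.solve(KKT, np.concatenate([g, b]))[:ny + nu]

# Reduced-space approach: eliminate the state via y = Ay^{-1}(b - Au u),
# leaving only the nu-by-nu reduced Hessian system in the controls.
Z = np.vstack([-np.linalg.solve(Ay, Au), np.eye(nu)])  # null-space basis of A
x_p = np.concatenate([np.linalg.solve(Ay, b), np.zeros(nu)])  # feasible point
Hr = Z.T @ H @ Z                                       # reduced Hessian
u = np.linalg.solve(Hr, Z.T @ (g - H @ x_p))
x_red = x_p + Z @ u
```

In the PDE setting the reduced approach needs (approximate) solves with Ay and its transpose rather than dense factorizations, which is exactly why quasi-Newton approximations of Hr are attractive there.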
Adaptive Algorithms for Optimal Control of Time-Dependent Partial Differential-Algebraic Equation Systems
Abstract

Cited by 3 (0 self)
This paper describes an adaptive algorithm for optimal control of time-dependent partial differential-algebraic equation (PDAE) systems. A direct method based on a modified multiple-shooting-type technique and sequential quadratic programming (SQP) is used for solving the optimal control problem, while an adaptive mesh refinement (AMR) algorithm is employed to dynamically adapt the spatial integration mesh. Issues of coupling the AMR solver to the optimization algorithm are addressed. For time-dependent PDAEs which can benefit from the use of an adaptive mesh, the resulting method is shown to be highly efficient.
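The idea of indicator-driven mesh adaptation can be sketched in one dimension. Everything below is a crude stand-in of ours, not the paper's AMR solver: the "error indicator" is simply the solution jump over each interval, and the tanh profile mimics a steep spatial layer; real AMR uses proper error estimators on the PDAE discretization.

```python
import numpy as np

def refine_once(xs, u, tol):
    """One AMR pass on a 1-D grid: bisect every interval whose solution
    jump (a crude error indicator) exceeds tol."""
    pts = [xs[0]]
    for a, b in zip(xs[:-1], xs[1:]):
        if abs(u(b) - u(a)) > tol:
            pts.append(0.5 * (a + b))   # flag and bisect this interval
        pts.append(b)
    return np.array(pts)

u = lambda x: np.tanh(20.0 * (x - 0.5))  # steep layer at x = 0.5 (toy profile)
xs = np.linspace(0.0, 1.0, 11)
for _ in range(8):                       # repeat passes until no interval is flagged
    xs = refine_once(xs, u, tol=0.05)
# The mesh ends up fine near the layer and stays coarse elsewhere.
```

Coupling such a solver to SQP raises exactly the issue the abstract mentions: the optimization variables and adjoint quantities must be transferred consistently between meshes whenever the grid changes.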
A Remark on Second Order Methods in Control of Fluid Flow
, 2000
Abstract

Cited by 1 (0 self)
$\min\; J(y,u) := \frac{1}{2}\int_{Q_o} |y - z|^2 \,dx\,dt + \frac{\alpha}{2}\int_{Q_c} |u|^2 \,dx\,dt$ (1)

Here $Q_c := \Omega_c \times (0, T)$ and $Q_o := \Omega_o \times (0, T)$, with $\Omega_c$ and $\Omega_o$ subsets of $\Omega = (0, 1)^2$ denoting control and observation volumes, respectively. The first term in the cost functional values the control goal, which here is to track the state z, and the second term measures the control cost, where $\alpha > 0$ denotes a weighting factor. In this form, solving (1) appears at first to be a standard task. However, the formidable size of (1) and the goal of analyzing second order methods necessitate an independent analysis. Among the few contributions focusing on second order methods for optimal control of fluids are those by Ghattas et al. [2] and Heinkenschloss [3]. These works are r...
and O. Ghattas
Abstract
In this paper we propose a preconditioner for the KKT system based on a reduced-space quasi-Newton algorithm. Battermann and Heinkenschloss [2] have suggested a preconditioner that is also motivated by reduced methods; the present one can be thought of as a generalization of their method. As in reduced quasi-Newton algorithms, the new preconditioner requires just two linearized flow solves per iteration, but permits the fast convergence associated with full Newton methods. Furthermore, the two flow solves can be approximate, for example using any appropriate flow preconditioner. Finally, the resulting full-space SQP method parallelizes and scales as well as the flow solver itself.

Our method is inspired by domain-decomposed Schur complement algorithms. In these techniques, reduction onto the interface space requires exact subdomain solves, so one often prefers to iterate within the full space while using a preconditioner based on approximate subdomain solution [8]. Here, decomposition is performed into states and controls, as opposed to subdomain and interface spaces. Below we describe reduced- and full-space SQP methods and the proposed reduced-space-based KKT preconditioner. We also give some performance results on a Cray T3E for a model Stokes flow problem. Our implementation is based on the PETSc library for PDE solution [1], and makes use of PETSc domain-decomposition preconditioners for the approximate flow solves.

2. Reduced SQP methods
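The block elimination underlying a reduced-space KKT preconditioner can be verified on a dense toy problem. The sketch below is our illustration, not the paper's code: with exact "flow" solves (here dense solves with Ay and its transpose) and the exact reduced Hessian, the elimination reproduces the exact KKT solve; a preconditioner of this family replaces the flow solves with approximations and the reduced Hessian with a quasi-Newton matrix B.

```python
import numpy as np

def kkt_precond_apply(r_y, r_u, r_c, W, Ay, Au, B):
    """Block elimination for the KKT system
        [W_yy W_yu Ay^T] [y]     [r_y]
        [W_uy W_uu Au^T] [u]  =  [r_u]
        [Ay   Au   0   ] [lam]   [r_c]
    using solves with Ay / Ay^T and a reduced-Hessian (approximation) B.
    With exact solves and B = Z^T W Z this is the exact KKT inverse."""
    ny = Ay.shape[0]
    Wyy, Wyu = W[:ny, :ny], W[:ny, ny:]
    Wuy = W[ny:, :ny]
    y0 = np.linalg.solve(Ay, r_c)                       # "state" solve
    rhs_u = r_u - Wuy @ y0 - Au.T @ np.linalg.solve(Ay.T, r_y - Wyy @ y0)
    u = np.linalg.solve(B, rhs_u)                       # small reduced system
    y = y0 - np.linalg.solve(Ay, Au @ u)                # second "state" solve
    lam = np.linalg.solve(Ay.T, r_y - Wyy @ y - Wyu @ u)  # "adjoint" solve
    return y, u, lam

# Toy check: with the exact reduced Hessian, elimination == direct KKT solve.
rng = np.random.default_rng(1)
ny, nu = 5, 2
W = np.eye(ny + nu)                                     # SPD toy Hessian
Ay = rng.standard_normal((ny, ny)) + ny * np.eye(ny)    # invertible toy Jacobian
Au = rng.standard_normal((ny, nu))
Z = np.vstack([-np.linalg.solve(Ay, Au), np.eye(nu)])
B_exact = Z.T @ W @ Z
A = np.hstack([Ay, Au])
K = np.block([[W, A.T], [A, np.zeros((ny, ny))]])
r = rng.standard_normal(ny + nu + ny)
y, u, lam = kkt_precond_apply(r[:ny], r[ny:ny + nu], r[ny + nu:], W, Ay, Au, B_exact)
direct = np.linalg.solve(K, r)
```

Replacing the dense solves with an approximate flow preconditioner and B with a quasi-Newton update turns this exact inverse into the cheap, inexact application the abstract describes.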
The SQP Method for Tracking-Type Control of the Instationary Navier-Stokes Equations
, 2000
Abstract
The SQP method is investigated for tracking-type optimal control of the instationary Navier-Stokes equations. It is argued that the a priori formidable SQP step can be decomposed into linear primal and linear adjoint systems, which is amenable to existing CFD software. We report a numerical test which demonstrates the feasibility of the approach. In addition, the functional analytic setting of the convergence analysis is presented.

Key words. Optimal control, SQP method, Navier-Stokes equations

AMS subject classifications. 34H05, 49J20, 49K20, 65K10, 76D55

1. The optimal control problem. We consider the optimal control problem

$$\min_{(y,u)\in W\times U} \; J(y,u) := \frac{1}{2}\int_{Q_o} |y - z|^2 \,dx\,dt + \frac{\alpha}{2}\int_{Q_c} |u|^2 \,dx\,dt$$

subject to

$$\frac{\partial y}{\partial t} + (y\cdot\nabla)y - \Delta y + \nabla p = Bu \quad \text{in } Q = (0,T)\times\Omega,$$
$$\operatorname{div}\, y = 0 \quad \text{in } Q,$$
$$y(t,\cdot) = 0 \quad \text{on } \Sigma = (0,T)\times\partial\Omega,$$
$$y(0,\cdot) = y_0 \quad \text{in } \Omega, \qquad (1)$$

where $Q_c := \Omega_c \times$ ...
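The primal/adjoint decomposition mentioned in the abstract can be sketched abstractly. Writing the flow constraint in (1) as $e(y,u)=0$, the notation below ($e$, $e_y$, $e_u$ and the Lagrangian $L$) is ours, not the paper's; it only records the standard structure an SQP step exploits:

```latex
% Lagrangian and first-order optimality system for  min J(y,u)  s.t.  e(y,u)=0
L(y,u,\lambda) = J(y,u) + \langle \lambda,\, e(y,u)\rangle ,
\qquad
\nabla L = 0:\;
\begin{cases}
J_y(y,u) + e_y(y,u)^{*}\lambda = 0 & \text{(adjoint equation)}\\[2pt]
J_u(y,u) + e_u(y,u)^{*}\lambda = 0 & \text{(optimality condition)}\\[2pt]
e(y,u) = 0 & \text{(primal/state equation)}
\end{cases}
```

One SQP step is a Newton step on this system; the dominant work is solves with the linearized state operator $e_y$ and its adjoint $e_y^{*}$, which is why the step reduces to linear primal and linear adjoint systems of the kind existing flow solvers already handle.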