Results 1–10 of 25
Parallel Lagrange–Newton–Krylov–Schur methods for PDE-constrained optimization. Part I: The Krylov–Schur solver
SIAM J. Sci. Comput., 2000
Abstract

Cited by 106 (16 self)
Abstract. Large-scale optimization of systems governed by partial differential equations (PDEs) is a frontier problem in scientific computation. The state of the art for such problems is reduced quasi-Newton sequential quadratic programming (SQP) methods. These methods take full advantage of existing PDE solver technology and parallelize well. However, their algorithmic scalability is questionable; for certain problem classes they can be very slow to converge. In this two-part article we propose a new method for steady-state PDE-constrained optimization, based on the idea of full space SQP with reduced space quasi-Newton SQP preconditioning. The basic components of the method are: Newton solution of the first-order optimality conditions that characterize stationarity of the Lagrangian function; Krylov solution of the Karush–Kuhn–Tucker (KKT) linear systems arising at each Newton iteration using a symmetric quasi-minimum residual method; preconditioning of the KKT system using an approximate state/decision variable decomposition that replaces the forward PDE Jacobians by their own preconditioners, and the decision space Schur complement (the reduced Hessian) by a BFGS approximation or by a two-step stationary method. Accordingly, we term the new method Lagrange–Newton–Krylov–Schur (LNKS). It is fully parallelizable, exploits the structure of available parallel algorithms for the PDE forward problem, and is locally quadratically convergent. In the first part of the paper we investigate the effectiveness of the KKT linear system solver. We test the method on two optimal control problems in which the flow is described by the steady-state Stokes equations.
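The symmetric indefinite KKT systems at the heart of this abstract can be illustrated with a tiny sketch. scipy's MINRES is used below as a stand-in for the paper's symmetric QMR variant, and the small random SPD Hessian block and dense constraint Jacobian are illustrative assumptions, not the paper's Stokes operators:

```python
# Sketch: a saddle-point (KKT) system [[H, A^T], [A, 0]] solved with a
# symmetric Krylov method. MINRES stands in for symmetric QMR; the toy
# random blocks are illustrative only, not the paper's PDE operators.
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)
n, m = 8, 3                        # primal variables, constraints
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)        # SPD Hessian block (toy)
A = rng.standard_normal((m, n))    # constraint Jacobian, full row rank

K = np.block([[H, A.T], [A, np.zeros((m, m))]])  # symmetric, indefinite
b = rng.standard_normal(n + m)

x, info = minres(K, b)             # info == 0 on successful convergence
```

In LNKS the Krylov solver is applied matrix-free and preconditioned by an approximate reduced-space factorization; the unpreconditioned dense solve above only shows the structure of the linear algebra involved.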
A Multigrid Method For Distributed Parameter Estimation Problems
Trans. Numer. Anal., 2001
Abstract

Cited by 43 (13 self)
This paper considers problems of distributed parameter estimation from data measurements on solutions of partial differential equations (PDEs). A nonlinear least squares functional is minimized to approximately recover the sought parameter function (i.e., the model). This functional consists of a data fitting term, involving the solution of a finite volume or finite element discretization of the forward differential equation, and a Tikhonov-type regularization term, involving the discretization of a mix of model derivatives. We develop a multigrid method for the resulting constrained optimization problem. The method directly addresses the discretized PDE system which defines a critical point of the Lagrangian. The discretization is cell-based. This system is strongly coupled when the regularization parameter is small. Moreover, the compactness of the discretization scheme does not necessarily follow from compact discretizations of the forward model and of the regularization term. We therefore employ a Marquardt-type modification on coarser grids. Alternatively, fewer grids are used and a preconditioned Krylov-space method is utilized on the coarsest grid. A collective point relaxation method (weighted Jacobi or a Gauss–Seidel variant) is used for smoothing. We demonstrate the efficiency of our method on a classical model problem from hydrology.
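The pointwise smoother named at the end of this abstract can be sketched in its simplest form. The 1D Poisson matrix below is a generic illustrative operator, not the paper's coupled cell-based KKT system:

```python
# Sketch: weighted (damped) Jacobi relaxation, the kind of collective
# point smoother mentioned in the abstract. The 1D Poisson matrix is a
# generic stand-in, not the paper's discretization.
import numpy as np

def weighted_jacobi(A, b, x, sweeps, omega=2.0 / 3.0):
    """x <- x + omega * D^{-1} (b - A x), repeated `sweeps` times."""
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / d
    return x

n = 32
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Poisson stencil
b = np.ones(n)
x = weighted_jacobi(A, b, np.zeros(n), sweeps=3000)
```

Inside a multigrid cycle only a few sweeps are used, enough to damp high-frequency error before coarsening; the many sweeps here are only so the standalone example converges on its own.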
Preconditioned All-At-Once Methods for Large, Sparse Parameter Estimation Problems
, 2000
Abstract

Cited by 29 (4 self)
The problem of recovering a parameter function based on measurements of solutions of a system of partial differential equations in several space variables leads to a number of computational challenges. Upon discretization of a regularized formulation a large, sparse constrained optimization problem is obtained. Typically in the literature, the constraints are eliminated and the resulting unconstrained formulation is solved by some variant of Newton's method, usually the Gauss–Newton method. A preconditioned conjugate gradient algorithm is applied at each iteration for the resulting reduced Hessian system. In this paper we apply instead a preconditioned Krylov method directly to the KKT system arising from a Newton-type method for the constrained formulation (an "all-at-once" approach). A variant of symmetric QMR is employed, and an effective preconditioner is obtained by solving the reduced Hessian system approximately. Since the reduced Hessian system presents significa ...
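The contrast this abstract draws, reduced-space elimination versus the all-at-once KKT solve, can be shown on a toy equality-constrained quadratic program. The random data and the dense null-space elimination are illustrative assumptions, not the paper's large sparse setting:

```python
# Sketch: two routes to the same equality-constrained QP minimizer
#   min 0.5 x^T H x - g^T x  s.t.  A x = c
# (1) reduced: eliminate constraints, solve the small reduced Hessian;
# (2) all-at-once: solve the full KKT system in one shot.
# Toy dense data; real problems use sparse, matrix-free operators.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
n, m = 6, 2
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)                  # SPD Hessian (toy)
A = rng.standard_normal((m, n))              # full row rank
g = rng.standard_normal(n)
c = rng.standard_normal(m)

# (1) Reduced approach: particular solution plus null-space correction.
x_p = np.linalg.lstsq(A, c, rcond=None)[0]   # A x_p = c
Z = null_space(A)                            # A Z = 0
Hr = Z.T @ H @ Z                             # reduced Hessian
x_red = x_p + Z @ np.linalg.solve(Hr, Z.T @ (g - H @ x_p))

# (2) All-at-once approach: one KKT solve for primal and dual together.
K = np.block([[H, A.T], [A, np.zeros((m, m))]])
x_kkt = np.linalg.solve(K, np.concatenate([g, c]))[:n]
```

Both routes produce the same minimizer; the paper's point is that for large sparse problems a preconditioned Krylov method on the full KKT system can be cheaper than repeatedly solving reduced Hessian systems.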
Inexact SQP methods for equality constrained optimization
SIAM J. Optim.
Abstract

Cited by 14 (6 self)
Abstract. We present an algorithm for large-scale equality constrained optimization. The method is based on a characterization of inexact sequential quadratic programming (SQP) steps that can ensure global convergence. Inexact SQP methods are needed for large-scale applications for which the iteration matrix cannot be explicitly formed or factored and the arising linear systems must be solved using iterative linear algebra techniques. We address how to determine when a given inexact step makes sufficient progress toward a solution of the nonlinear program, as measured by an exact penalty function. The method is globalized by a line search. An analysis of the global convergence properties of the algorithm and numerical results are presented.
Key words: large-scale optimization, constrained optimization, sequential quadratic programming, inexact linear system solvers, Krylov subspace methods
AMS subject classifications: 49M37, 65K05, 90C06, 90C30, 90C55
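The "sufficient progress as measured by an exact penalty function" test can be sketched as follows. The ℓ1 penalty model and the threshold constant below are generic textbook choices, not necessarily the paper's exact criterion:

```python
# Sketch: accept an inexact SQP step d only if it gives sufficient
# reduction in a local model of the exact penalty f + pi * ||c||_1.
# The model and the sigma threshold are generic illustrative choices.
import numpy as np

def model_reduction(g, W, A, c, d, pi):
    """Predicted decrease of the penalty model at step d."""
    quad = g @ d + 0.5 * d @ (W @ d)            # model of objective change
    infeas_drop = np.linalg.norm(c, 1) - np.linalg.norm(c + A @ d, 1)
    return -quad + pi * infeas_drop

def accept_inexact_step(g, W, A, c, d, pi, sigma=0.1):
    """Sufficient-progress test relative to current infeasibility."""
    return model_reduction(g, W, A, c, d, pi) >= sigma * pi * np.linalg.norm(c, 1)

# Toy data: W = I, one linear constraint; d = -g is the exact SQP step here.
g = np.array([1.0, 0.0])
W = np.eye(2)
A = np.array([[1.0, 1.0]])
c = np.array([1.0])
print(accept_inexact_step(g, W, A, c, np.array([-1.0, 0.0]), pi=1.0))
```

A step that only partially solves the KKT system can still pass this test, which is what lets the iterative linear solver terminate early without sacrificing global convergence.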
AN EFFICIENT NUMERICAL METHOD FOR THE SOLUTION OF THE L2 OPTIMAL MASS TRANSFER PROBLEM
Abstract

Cited by 10 (0 self)
Abstract. In this paper we present a new computationally efficient numerical scheme for the minimizing flow approach for the computation of the optimal L2 mass transport mapping. In contrast to the integration of a time dependent partial differential equation proposed in [S. Angenent, S. Haker, and A. Tannenbaum, SIAM J. Math. Anal., 35 (2003), pp. 61–97], we employ in the present work a direct variational method. The efficacy of the approach is demonstrated on both real and synthetic data.
An Inexact Newton Method for Nonconvex Equality Constrained Optimization
Abstract

Cited by 8 (2 self)
We present a matrix-free line search algorithm for large-scale equality constrained optimization that allows for inexact step computations. For strictly convex problems, the method reduces to the inexact sequential quadratic programming approach proposed by Byrd, Curtis, and Nocedal [2]. For nonconvex problems, the methodology developed in this paper allows for the presence of negative curvature without requiring information about the inertia of the primal-dual iteration matrix. Negative curvature may arise from second-order information of the problem functions, but in fact exact second derivatives are not required in the approach. The complete algorithm is characterized by its emphasis on sufficient reductions in a model of an exact penalty function. We analyze the global behavior of the algorithm and present numerical results on a collection of test problems.
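One standard matrix-free way to cope with negative curvature without inertia information is to monitor curvature inside a conjugate gradient iteration. This Steihaug-style sketch is a generic illustration in the spirit of the abstract, not the cited paper's algorithm:

```python
# Sketch: CG on H d = -g that stops when a direction of nonpositive
# curvature (p^T H p <= 0) is detected. A generic matrix-free safeguard,
# not the cited paper's specific mechanism.
import numpy as np

def cg_with_curvature_test(H, g, tol=1e-8, maxit=100):
    """Return (step d, negative-curvature direction or None, flag)."""
    d = np.zeros_like(g)
    r = -g.copy()                 # residual of H d = -g at d = 0
    p = r.copy()
    for _ in range(maxit):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0:             # nonpositive curvature encountered
            return d, p, True
        alpha = (r @ r) / curv
        d = d + alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol:
            return d, None, False
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return d, None, False
```

On a positive definite Hessian this reduces to plain CG; on an indefinite one it returns the first direction of nonpositive curvature found, which a line-search method can then exploit for descent.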
A TRUNCATED SQP METHOD BASED ON INEXACT INTERIOR-POINT SOLUTIONS OF SUBPROBLEMS
Abstract

Cited by 6 (5 self)
Abstract. We consider sequential quadratic programming (SQP) methods applied to optimization problems with nonlinear equality constraints and simple bounds. In particular, we propose and analyze a truncated SQP algorithm in which subproblems are solved approximately by an infeasible predictor-corrector interior-point method, followed by setting to zero some variables and some multipliers so that complementarity conditions for approximate solutions are enforced. Verifiable truncation conditions based on the residual of optimality conditions of subproblems are developed to ensure both global and fast local convergence. Global convergence is established under assumptions that are standard for line-search SQP with exact solution of subproblems. The local superlinear convergence rate is shown under the weakest assumptions that guarantee this property for pure SQP with exact solution of subproblems, namely, the strict Mangasarian–Fromovitz constraint qualification and second-order sufficiency. Local convergence results for our truncated method are presented as a special case of the local convergence for a more general perturbed SQP framework, which is of independent interest and is applicable even to some algorithms whose subproblems are not quadratic programs. For example, the framework can also be used to derive sharp local convergence results for linearly constrained Lagrangian methods. Preliminary numerical results confirm that it can indeed be beneficial to solve subproblems approximately, especially on early iterations.
Key words: sequential quadratic programming, inexact sequential quadratic programming, truncated sequential quadratic programming, interior-point method, superlinear convergence
Inexactness issues in the Lagrange–Newton–Krylov–Schur method for PDE-constrained optimization
Large-Scale PDE-Constrained Optimization, number 30 in Lecture
Abstract

Cited by 6 (0 self)
Abstract. In this article we present an outline of the Lagrange–Newton–Krylov–Schur (LNKS) method and we discuss how we can improve its work efficiency by carrying out certain computations inexactly, without compromising convergence. LNKS has been designed for PDE-constrained optimization problems. It solves the Karush–Kuhn–Tucker optimality conditions by a Newton–Krylov algorithm. Its key component is a preconditioner based on quasi-Newton reduced space Sequential Quadratic Programming (QN-RSQP) variants. LNKS combines the fast convergence properties of a Newton method with the capability of preconditioned Krylov methods to solve very large linear systems. Nevertheless, even with good preconditioners, the solution of an optimization problem has a cost which is several times higher than the cost of the solution of the underlying PDE problem. To accelerate LNKS, its computational components are carried out inexactly: premature termination of iterative algorithms, inexact evaluation of gradients and Jacobians, and approximate line searches. Naturally, several issues arise with respect to the tradeoffs between speed and robustness.
Trust Region SQP Methods With Inexact Linear System Solves For Large-Scale Optimization
, 2006
Nonlinear programming without a penalty function
 Mathematical Programming
, 2002