Results 11 - 20 of 24
Second Order Methods For Optimal Control Of Time-Dependent Fluid Flow
, 1999
Abstract

Cited by 8 (3 self)
Second order methods for open loop optimal control problems governed by the two-dimensional instationary Navier-Stokes equations are investigated. Optimality systems based on a Lagrangian formulation and adjoint equations are derived. The Newton and quasi-Newton methods as well as various variants of SQP methods are developed for applications to optimal flow control, and their complexity in terms of system solves is discussed. Local convergence and rate of convergence are proved. A numerical example illustrates the feasibility of solving optimal control problems for two-dimensional instationary Navier-Stokes equations by second order numerical methods in a standard workstation environment. Previously such problems were solved by gradient type methods.
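The reduced-Newton idea behind such adjoint-based methods can be illustrated on a toy linear-quadratic model problem (a stand-in for, not a reproduction of, the Navier-Stokes setting): the gradient of the reduced objective is assembled from a state solve and an adjoint solve, and since the problem is quadratic, a single Newton step on the reduced system reaches the minimizer, whereas a gradient method would need many iterations.

```python
import numpy as np

# Hypothetical linear-quadratic model problem (illustrative data, not the
# paper's flow-control setting):
#   min_u 0.5*||y - y_d||^2 + 0.5*alpha*||u||^2   s.t.   A y = B u
rng = np.random.default_rng(0)
n, m, alpha = 5, 3, 1e-2
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # state operator
B = rng.standard_normal((n, m))                     # control-to-state map
y_d = rng.standard_normal(n)                        # desired state

def reduced_gradient(u):
    y = np.linalg.solve(A, B @ u)        # state solve
    p = np.linalg.solve(A.T, y - y_d)    # adjoint solve
    return alpha * u + B.T @ p           # gradient of the reduced objective

# Reduced Hessian is constant here: H = alpha*I + S^T S with S = A^{-1} B.
S = np.linalg.solve(A, B)
H = alpha * np.eye(m) + S.T @ S

u = np.zeros(m)
u_newton = u - np.linalg.solve(H, reduced_gradient(u))  # one Newton step
print(np.linalg.norm(reduced_gradient(u_newton)))       # ~0: quadratic problem solved in one step
```

For the nonlinear Navier-Stokes constraint the reduced Hessian is no longer constant, which is exactly where the Newton, quasi-Newton, and SQP variants discussed in the abstract differ in cost per iteration.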
On Interior-Point Newton Algorithms For Discretized Optimal Control Problems With State Constraints
 OPTIM. METHODS SOFTW
, 1998
Abstract

Cited by 7 (2 self)
In this paper we consider a class of nonlinear programming problems that arise from the discretization of optimal control problems with bounds on both the state and the control variables. For this class of problems, we analyze constraint qualifications and optimality conditions in detail. We derive an affine-scaling and two primal-dual interior-point Newton algorithms by applying, in an interior-point way, Newton's method to equivalent forms of the first-order optimality conditions. Under appropriate assumptions, the interior-point Newton algorithms are shown to be locally well-defined with a q-quadratic rate of local convergence. By using the structure of the problem, the linear algebra of these algorithms can be reduced to the null space of the Jacobian of the equality constraints. The similarities between the three algorithms are pointed out, and their corresponding versions for the general nonlinear programming problem are discussed.
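The "interior-point way" of applying Newton's method can be sketched on a bound-constrained quadratic (a minimal illustration, not the paper's algorithm): the first-order conditions for min f(x) s.t. x >= 0 are written in the complementarity form X g(x) = 0, Newton's method is applied to that system, and a fraction-to-the-boundary rule keeps every iterate strictly interior.

```python
import numpy as np

# Illustrative data: min 0.5*x'Qx - b'x  s.t.  x >= 0.
# KKT point is x* = (0.25, 0): the second bound is active.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, -2.0])

def grad(x):
    return Q @ x - b

x = np.ones(2)  # strictly feasible start
for _ in range(30):
    g = grad(x)
    F = x * g                           # componentwise X g(x)
    J = np.diag(g) + np.diag(x) @ Q     # Jacobian of x -> X g(x)
    dx = np.linalg.solve(J, -F)
    # fraction-to-the-boundary damping keeps x > 0
    neg = dx < 0
    tau = min(1.0, 0.995 * np.min(-x[neg] / dx[neg])) if neg.any() else 1.0
    x = x + tau * dx

print(x)  # x[1] is driven to its bound, x[0] -> 0.25
```

Near a nondegenerate KKT point the Jacobian of X g(x) is nonsingular, which is what yields the q-quadratic local rate analyzed in the paper.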
The Penalty Interior Point Method fails to converge for mathematical programs with equilibrium constraints
 University of Dundee
, 2002
Abstract

Cited by 6 (1 self)
Equilibrium equations in the form of complementarity conditions often appear as constraints in optimization problems. Problems of this type are commonly referred to as mathematical programs with complementarity constraints (MPCCs). A popular method for solving MPCCs is the penalty interior-point algorithm (PIPA). This paper presents a small example for which PIPA converges to a nonstationary point, providing a counterexample to the established theory. The reasons for this adverse behavior are discussed.
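To fix ideas on what an MPCC looks like, the sketch below solves a tiny one by a generic Scholtes-type relaxation scheme (this is not PIPA and not the paper's counterexample; the problem data is invented): the complementarity constraint 0 <= x, 0 <= y, x*y = 0 is relaxed to x*y <= t with t driven toward zero, each relaxed problem being an ordinary NLP.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical MPCC:  min (x-1)^2 + (y-1)^2
#                     s.t. x >= 0, y >= 0, x*y = 0  (complementarity)
# Relax x*y = 0 to x*y <= t and solve a sequence of NLPs as t -> 0.
def solve_relaxed(t, z0):
    cons = [{"type": "ineq", "fun": lambda z: t - z[0] * z[1]}]
    bnds = [(0, None), (0, None)]
    res = minimize(lambda z: (z[0] - 1) ** 2 + (z[1] - 1) ** 2,
                   z0, bounds=bnds, constraints=cons)
    return res.x

z = np.array([0.5, 0.2])                 # deliberately asymmetric start
for t in [0.5, 1e-1, 1e-2, 1e-4, 1e-8]:
    z = solve_relaxed(t, z)              # warm-start each relaxed problem

print(z)   # complementarity nearly satisfied: z[0]*z[1] ~ 0
```

The difficulty the paper probes is structural: at a point where both branches of the complementarity pair vanish, standard constraint qualifications such as MFCQ fail, so convergence theory for ordinary NLP methods does not apply directly.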
Second-order negative-curvature methods for box-constrained and general constrained optimization
, 2009
Abstract

Cited by 6 (0 self)
A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples show that the new method converges to second-order stationary points in situations in which first-order methods fail.
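The situation in which first-order methods fail can be shown on a two-variable toy function (a minimal sketch, not the paper's PHR augmented-Lagrangian method): gradient descent started on a symmetry line converges to a saddle, i.e. a first-order but not second-order stationary point, while one step along the Hessian's negative-curvature eigenvector escapes it.

```python
import numpy as np

# f(x, y) = x^2 + (y^2 - 1)^2 has a saddle at (0, 0) and minimizers at (0, +-1).
def f(z):
    x, y = z
    return x**2 + (y**2 - 1)**2

def grad(z):
    x, y = z
    return np.array([2 * x, 4 * y * (y**2 - 1)])

def hess(z):
    x, y = z
    return np.array([[2.0, 0.0], [0.0, 12 * y**2 - 4]])

z = np.array([1.0, 0.0])
for _ in range(200):              # gradient descent stalls on the line y = 0
    z = z - 0.1 * grad(z)
print(z, f(z))                    # ~ (0, 0), f = 1: first-order stationary only

lam, V = np.linalg.eigh(hess(z))
if lam[0] < 0:                    # negative curvature detected
    d = V[:, 0]                   # f is symmetric in y, so either sign of d works
    z = z + 0.5 * d               # step along the negative-curvature direction
    for _ in range(200):          # gradient descent now reaches a minimizer
        z = z - 0.1 * grad(z)
print(z, f(z))                    # ~ (0, +-1), f ~ 0: second-order stationary
```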
Local Convergence of the Affine-Scaling Interior-Point Algorithm for Nonlinear Programming
 COMPUT. OPTIM. AND APPL
, 1999
Abstract

Cited by 5 (2 self)
This paper addresses the local convergence properties of the affine-scaling interior-point algorithm for nonlinear programming. The analysis of local convergence is developed in terms of parameters that control the interior-point scheme and the size of the residual of the linear system that provides the step direction. The analysis follows the classical theory for quasi-Newton methods and addresses q-linear, q-superlinear, and q-quadratic rates of convergence.
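The q-rates named here have simple operational meanings: with errors e_k = |x_k - x*|, q-linear means e_{k+1}/e_k tends to a constant below 1, q-superlinear means that ratio tends to 0, and q-quadratic means e_{k+1}/e_k^2 stays bounded. A standalone illustration (unrelated to the paper's algorithm) is Newton's method on a scalar equation, whose measured ratios make the q-quadratic rate visible:

```python
import numpy as np

# Newton's method for g(x) = x^2 - 2; root is sqrt(2).
root = np.sqrt(2.0)
x = 2.0
errs = []
for _ in range(5):
    x = x - (x**2 - 2) / (2 * x)        # Newton step
    errs.append(abs(x - root))

# q-quadratic: e_{k+1} / e_k^2 stays bounded (here near g''/(2 g') = 1/(2*sqrt(2)))
ratios = [errs[k + 1] / errs[k]**2 for k in range(3)]
print(ratios)   # roughly constant around 0.35
```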
Vector reduction/transformation operators
 ACM Transactions on Mathematical Software
, 2004
Abstract

Cited by 5 (0 self)
Development of flexible linear algebra interfaces is an increasingly critical issue. Efficient and expressive interfaces are well established for some linear algebra abstractions, but not for vectors. Vectors differ from other abstractions in the diversity of necessary operations, sometimes requiring dozens for a given algorithm (e.g., interior-point methods for optimization). We discuss a new approach based on operator objects that are transported to the underlying data by the linear algebra library implementation, allowing developers of abstract numerical algorithms to easily extend the functionality regardless of computer architecture, application, or data locality/organization. Numerical experiments demonstrate efficient implementation.
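The operator-object idea can be sketched as follows (class and method names here are hypothetical, not the paper's actual API): the vector implementation exposes a single generic apply hook, and each operation is a small object pairing an elementwise transformation with a reduction, so algorithm writers add new operations without ever touching the vector class or knowing how its data is laid out.

```python
import numpy as np

class ReductTransOp:
    """Base class for reduction/transformation operator objects (hypothetical API)."""
    def element_op(self, xs):      # transform/reduce one chunk of elements
        raise NotImplementedError
    def reduce(self, a, b):        # combine partial reduction results
        raise NotImplementedError

class ChunkedVector:
    """A vector stored in chunks -- a stand-in for distributed or out-of-core data."""
    def __init__(self, chunks):
        self.chunks = [np.asarray(c, float) for c in chunks]
    def apply(self, op, *others):
        # The vector transports the operator to each chunk of underlying data,
        # then folds the partial results with the operator's own reduction.
        partials = [op.element_op([c, *[o.chunks[i] for o in others]])
                    for i, c in enumerate(self.chunks)]
        out = partials[0]
        for p in partials[1:]:
            out = op.reduce(out, p)
        return out

class DotOp(ReductTransOp):
    """One of the many operations an interior-point method would need."""
    def element_op(self, xs):
        return float(np.dot(xs[0], xs[1]))
    def reduce(self, a, b):
        return a + b

x = ChunkedVector([[1.0, 2.0], [3.0]])
y = ChunkedVector([[4.0, 5.0], [6.0]])
print(x.apply(DotOp(), y))   # 1*4 + 2*5 + 3*6 = 32.0
```

The design choice is the inversion of control: the library owns data movement and parallelism, while users own the per-element semantics.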
Interior-Point Gradient Methods with Diagonal Scalings for Simple-Bound Constrained Optimization
, 2004
Abstract

Cited by 4 (2 self)
In this paper, we study diagonally scaled gradient methods for simple-bound constrained optimization in a framework almost identical to that for unconstrained optimization, except that iterates are kept within the interior of the feasible region. We establish a satisfactory global convergence theory for such interior-point gradient methods applied to Lipschitz continuously differentiable functions without any further assumption. Moreover, ...
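A minimal sketch of one such iteration, under the assumption that the scaling is D(x) = diag(x) for the bound x >= 0 (the problem data and step-size choices below are illustrative, not the paper's): the search direction is the diagonally scaled negative gradient, and a fraction-to-the-boundary damping keeps every iterate strictly interior, exactly as in the unconstrained gradient framework the abstract describes.

```python
import numpy as np

# Illustrative quadratic: min 0.5*x'Qx - b'x  s.t.  x >= 0.
# Its unconstrained minimizer (0.5, -1) violates the bound, so x[1] is active.
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])

def grad(x):
    return Q @ x - b

x = np.array([1.0, 1.0])           # strictly interior start
for _ in range(500):
    d = -x * grad(x)               # scaled direction: -D(x) grad f, D(x) = diag(x)
    neg = d < 0
    step = 1.0
    if neg.any():                  # fraction-to-the-boundary damping
        step = min(1.0, 0.995 * np.min(-x[neg] / d[neg]))
    x = x + 0.25 * step * d        # fixed step length times damping

print(x)   # approaches the KKT point (0.5, 0) while staying interior
```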
Interior-Point l_2-Penalty Methods for Nonlinear Programming with Strong Global Convergence Properties
 Math. Programming
, 2004
Abstract

Cited by 4 (0 self)
We propose two line search primal-dual interior-point methods that have a generic barrier-SQP outer structure and approximately solve a sequence of equality constrained barrier subproblems. To enforce convergence for each subproblem, these methods use an l_2 exact penalty function, eliminating the need to drive the corresponding penalty parameter to infinity when finite multipliers exist. Instead of directly decreasing an equality constraint infeasibility measure, these methods attain feasibility by forcing this measure to zero whenever the steps generated by the methods tend to zero. Our analysis shows that under standard assumptions, our methods have strong global convergence properties. Specifically, we show that if the penalty parameter remains bounded, any limit point of the iterate sequence is either a KKT point of the barrier subproblem, or a Fritz-John (FJ) point of the original problem that fails to satisfy the Mangasarian-Fromovitz constraint qualification (MFCQ); if the penalty parameter tends to infinity, there is a limit point that is either an infeasible FJ point of the inequality constrained feasibility problem (an infeasible stationary point of the infeasibility measure if slack variables are added) or an FJ point of the original problem at which the MFCQ fails to hold. Numerical results are given that illustrate these outcomes.
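The exactness property that makes a finite penalty parameter sufficient can be seen on a one-dimensional example (a generic illustration of l_2-type exact penalties, not the paper's method): for min f(x) s.t. c(x) = 0, the non-squared penalty f(x) + rho*|c(x)| is minimized exactly at the constrained solution once rho exceeds the multiplier magnitude, whereas the squared penalty is biased for every finite rho.

```python
import numpy as np

# min f(x) = x^2  s.t.  c(x) = x - 1 = 0;  solution x* = 1, multiplier 2.
xs = np.linspace(-0.5, 2.0, 250001)   # fine grid, dependency-free minimization

def argmin(fvals):
    return xs[np.argmin(fvals)]

rho = 3.0                                        # finite, above the threshold 2
x_exact = argmin(xs**2 + rho * np.abs(xs - 1))   # non-squared (exact) penalty
x_quad  = argmin(xs**2 + rho * (xs - 1)**2)      # squared penalty

print(x_exact)   # ~1.0: exact at finite rho
print(x_quad)    # ~0.75 = rho/(1+rho): biased for every finite rho
```

This is why the abstract can avoid driving the penalty parameter to infinity whenever finite multipliers exist.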
Local Analysis of a New Multipliers Method
 European Journal of Operational Research (special volume on Continuous Optimization)
Abstract

Cited by 1 (0 self)
In this paper we introduce a penalty function and a corresponding multipliers method for the solution of a class of nonlinear programming problems where the equality constraints have a particular structure. The class models optimal control and engineering design problems with bounds on the state and control variables and has wide applicability. The multipliers method updates multipliers corresponding to inequality constraints (maintaining their nonnegativity) instead of dealing with multipliers associated with equality constraints. The basic local convergence properties of the method are proved and a dual framework is introduced. We also analyze the properties of the penalized problem related to the penalty function.

Keywords. Nonlinear programming, optimal control, state constraints, penalty function, multipliers method, augmented Lagrangian.

AMS subject classifications. 49M37, 90C06, 90C30
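The standard way a multipliers method keeps inequality multipliers nonnegative is by projection in the update, lambda <- max(0, lambda + rho*g(x)). The sketch below shows this generic PHR-style update on a one-constraint toy problem (illustrative only; the subproblem solver and data are not the paper's):

```python
import numpy as np

# min (x-2)^2  s.t.  g(x) = x - 1 <= 0;  solution x* = 1, KKT multiplier 2.
def g(x):
    return x - 1.0

def solve_subproblem(lam, rho):
    # Minimize the augmented Lagrangian
    #   (x-2)^2 + (max(0, lam + rho*g(x))^2 - lam^2) / (2*rho)
    # over x, by a fine grid search to keep the sketch dependency-free.
    xs = np.linspace(0.0, 3.0, 300001)
    aug = (xs - 2)**2 + (np.maximum(0.0, lam + rho * g(xs))**2 - lam**2) / (2 * rho)
    return xs[np.argmin(aug)]

lam, rho = 0.0, 10.0
for _ in range(20):
    x = solve_subproblem(lam, rho)
    lam = max(0.0, lam + rho * g(x))   # projected update keeps lambda >= 0

print(x, lam)   # x ~ 1 (active constraint), lam ~ 2 (the KKT multiplier)
```

Note the contrast the abstract draws: this update acts directly on the inequality multipliers rather than on multipliers of reformulated equality constraints.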