On a modified subgradient algorithm for dual problems via sharp augmented Lagrangian
 Journal of Global Optimization
, 2006
Cited by 5 (2 self)

Abstract:
We study convergence properties of a modified subgradient algorithm, applied to the dual problem defined by the sharp augmented Lagrangian. The primal problem we consider is nonconvex and nondifferentiable, with equality constraints. We obtain primal and dual convergence results, as well as a condition for the existence of a dual solution. Using a practical selection of the stepsize parameters, we demonstrate the algorithm and its advantages on test problems, including an integer programming problem and an optimal control problem. Key words: Nonconvex programming; nonsmooth optimization; augmented Lagrangian; sharp Lagrangian; subgradient optimization.
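The dual scheme described in the abstract above can be sketched numerically. The toy problem, the stepsize choice, the penalty growth factor, and the grid-search inner solver below are all illustrative assumptions, not the paper's setup; only the sharp-Lagrangian dual structure and the multiplier/penalty updates follow the description.

```python
import numpy as np

# Toy instance (hypothetical): minimize a nonconvex f subject to h(x) = 0.
f = lambda x: np.cos(3 * x) + 0.1 * x**2
h = lambda x: x - 1.0              # single equality constraint

xs = np.linspace(-4.0, 4.0, 4001)  # grid for the inner minimization (a stand-in
                                   # for a real global subproblem solver)

def sharp_dual(u, c):
    """Dual value H(u, c) = min_x f(x) + c*|h(x)| - u*h(x) (sharp Lagrangian)."""
    vals = f(xs) + c * np.abs(h(xs)) - u * h(xs)
    k = int(np.argmin(vals))
    return vals[k], xs[k]

u, c = 0.0, 0.0
for _ in range(50):
    H, xk = sharp_dual(u, c)
    g = h(xk)                      # constraint value = subgradient component in u
    if abs(g) < 1e-8:              # feasible inner minimizer: stop
        break
    s = 0.5                        # stepsize (illustrative; the paper studies this choice)
    u = u - s * g                  # multiplier update
    c = c + 1.5 * s * abs(g)       # penalty grows strictly faster, as in the method
print(xk, f(xk))
```

On this instance the iteration drives the inner minimizer onto the feasible point x = 1 within a couple of dual updates; the penalty term c|h(x)| is what closes the duality gap for nonconvex problems.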
Euler Discretization and Inexact Restoration for Optimal Control
, 2006
Cited by 3 (1 self)

Abstract:
A computational technique for unconstrained optimal control problems is presented. First an Euler discretization is carried out to obtain a finite-dimensional approximation of the continuous-time (infinite-dimensional) problem. Then an inexact restoration (IR) method due to Birgin and Martínez is applied to the discretized problem to find an approximate solution. Convergence of the technique to a solution of the continuous-time problem is facilitated by the convergence of the IR method and the convergence of the discrete (approximate) solution as finer subdivisions are taken. It is shown that a special case of the IR method is equivalent to the projected Newton method for equality-constrained quadratic optimization problems. The technique is numerically demonstrated by means of a scalar system and the van der Pol system, and comprehensive comparisons are made with the Newton and projected Newton methods.
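The first step of the technique, Euler discretization of a continuous-time problem into a finite-dimensional one, can be illustrated on a small instance. The problem data are an assumption for illustration, and plain finite-difference gradient descent stands in for the paper's inexact restoration method, which is not reproduced here.

```python
import numpy as np

# Illustrative sketch only: Euler-discretize the scalar problem
#   min ∫_0^1 (x^2 + u^2) dt,   x' = u,   x(0) = 1,
# then solve the resulting finite-dimensional problem. The optimizer below
# (finite-difference gradient descent) is a stand-in, NOT the IR method.

N = 50
dt = 1.0 / N

def cost(u):
    x, J = 1.0, 0.0
    for k in range(N):
        J += (x**2 + u[k]**2) * dt   # left-endpoint quadrature of the running cost
        x += dt * u[k]               # forward Euler step for x' = u
    return J

u = np.zeros(N)
for _ in range(500):
    g = np.array([(cost(u + 1e-6 * np.eye(N)[k]) - cost(u)) / 1e-6
                  for k in range(N)])
    u -= 0.5 * g                     # fixed stepsize, adequate for this instance

# The continuous problem has optimal value tanh(1) ≈ 0.7616 (from the Riccati
# equation p' = p^2 - 1, p(1) = 0); the discrete optimum is O(dt)-close to it.
```

As finer subdivisions are taken (larger N), the discrete optimal value approaches tanh(1), which is the convergence mechanism the abstract appeals to.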
Smooth regularization of bang-bang optimal control problems
 IEEE Trans. Automat. Control
Cited by 2 (2 self)

Abstract:
Consider the minimal time control problem for a single-input control-affine system ẋ = X(x) + u_1 Y_1(x) in R^n, where the scalar control u_1(·) satisfies the constraint |u_1(·)| ≤ 1. When applying a shooting method for solving this kind of optimal control problem, one may encounter numerical problems due to the fact that the shooting function is not smooth whenever the control is bang-bang. In this article we propose the following smoothing procedure. For ε > 0 small, we consider the minimal time problem for the control system ẋ = X(x) + u_1^ε Y_1(x) + ε Σ_{i=2}^m u_i^ε Y_i(x), where the scalar controls u_i^ε(·), i = 1, ..., m, with m ≥ 2, satisfy the constraint Σ_{i=1}^m (u_i^ε(t))² ≤ 1. We prove, under appropriate assumptions, a strong convergence result of the solution of the regularized problem to the solution of the initial problem.
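Why the regularization helps a shooting method can be seen in one dimension. The sketch below is an illustration, not the paper's construction: it takes m = 2 and normalizes the extra switching function to one, so the exact regularized control law is simplified.

```python
import numpy as np

# Illustration only: with one added control and the strengthened constraint
# u_1^2 + u_2^2 <= 1, Hamiltonian maximization replaces the discontinuous
# bang-bang law u1 = sign(p) (p = switching function) by a smooth function
# of p, removing the kink that makes the shooting function nonsmooth.
# (Simplifying assumption: the second switching function is taken equal to 1.)

def u_bang(p):
    return np.sign(p)                     # discontinuous at p = 0

def u_reg(p, eps):
    return p / np.sqrt(p**2 + eps**2)     # smooth for eps > 0; -> sign(p) as eps -> 0

p = np.linspace(-1.0, 1.0, 9)
print(u_bang(p))
print(u_reg(p, 0.1))
```

The regularized law is strictly increasing and infinitely differentiable in p, so the shooting function inherits smoothness, at the price of an O(ε) perturbation of the minimal time.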
A Simplification of the Agrachev–Gamkrelidze Second-Order Variation for Bang-Bang Controls
, 2009
Cited by 1 (1 self)

Abstract:
We consider an expression for the second-order variation (SOV) of bang-bang controls derived by Agrachev and Gamkrelidze. The SOV plays an important role in both necessary and sufficient second-order optimality conditions for bang-bang controls. These conditions are stronger than the one provided by the first-order Pontryagin maximum principle (PMP). For a bang-bang control with k switching points, the SOV contains k(k + 1)/2 Lie-algebraic terms. We derive a simplification of the SOV by relating k of these terms to the derivative of the switching function, defined in the PMP, evaluated at the switching points. We prove that this simplification can be used to reduce the computational burden associated with applying the SOV to analyze optimal controls. We demonstrate this by using the simplified expression for the SOV to show that the chattering control in Fuller's problem satisfies a second-order sufficient condition for optimality.
Sufficient conditions and sensitivity analysis for optimal bang-bang control problems with state constraints
 Proceedings of the 23rd IFIP Conference on System Modeling and Optimization
, 2007
Cited by 1 (0 self)

Abstract:
Bang-bang control problems subject to a state inequality constraint are considered. It is shown that the control problem induces an optimization problem, where the optimization vector assembles the switching and junction times for bang-bang and boundary arcs. Second-order sufficient conditions (SSC) for the state-constrained control problem are given which require that SSC for the induced optimization problem are satisfied and that a generalized strict bang-bang property holds at switching and junction times. This type of SSC ensures solution differentiability of optimal solutions under parameter perturbations and makes it possible to compute parametric sensitivity derivatives. A numerical algorithm is presented that simultaneously determines a solution candidate, performs the second-order test, and computes parametric sensitivity derivatives. We illustrate the algorithm with two state-constrained optimal control problems in biomedicine.
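The induced-optimization idea described above can be shown on a contrived one-switch example. The dynamics, cost, and horizon below are assumptions for illustration (the paper treats state-constrained problems with a full algorithm); the point is only that the control problem collapses to a problem in the switching time, and the second-order test becomes a scalar curvature check.

```python
import numpy as np

# Contrived one-switch instance (assumed, not from the paper):
# x' = u with u = +1 on [0, t1] and u = -1 on [t1, T], x(0) = 0,
# tracking cost J(t1) = ∫_0^T (x(t) - 1)^2 dt. The induced finite-dimensional
# problem is min_{t1} J(t1); the second-order test checks J''(t1*) > 0.

T = 2.0
ts = np.linspace(0.0, T, 2001)
dt = ts[1] - ts[0]

def J(t1):
    x = np.where(ts < t1, ts, 2.0 * t1 - ts)   # trajectory for this switching time
    return float(np.sum((x - 1.0)**2) * dt)    # quadrature of the tracking cost

t1_grid = np.linspace(0.5, 1.9, 1401)
t1_star = t1_grid[int(np.argmin([J(t) for t in t1_grid]))]

h = 1e-2                                       # central second difference for J''
d2 = (J(t1_star + h) - 2.0 * J(t1_star) + J(t1_star - h)) / h**2
# Analytically t1* = 4/3 and J''(t1*) = 4 > 0: the second-order test passes.
```

In the paper's setting the scalar t1 is replaced by the full vector of switching and junction times, and J'' by the Hessian of the induced problem, but the structure of the test is the same.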
SECOND-ORDER NECESSARY/SUFFICIENT CONDITIONS FOR OPTIMAL CONTROL PROBLEMS IN THE ABSENCE OF LINEAR STRUCTURE
Abstract:
Second-order necessary conditions for optimal control problems are considered, where "second-order" is understood in the sense that Pontryagin's maximum principle is viewed as a first-order necessary optimality condition. A sufficient condition for a local minimizer is also given.
Leapfrog for Optimal Control
, 2008
Abstract:
The leapfrog algorithm, so called because of its geometric nature, for solving a class of optimal control problems is proposed. Initially a feasible trajectory is given and subdivided into smaller pieces. In each subdivision, with the assumption that local optimal controls can easily be calculated, a piecewise-optimal trajectory is obtained. Then the junctions of these smaller pieces of optimal control trajectories are updated through a scheme of midpoint maps. Under some broad assumptions the sequence of trajectories is shown to converge to a trajectory that satisfies the Maximum Principle. The main advantages of the leapfrog algorithm are that (i) it does not need an initial guess for the costates, and (ii) the piecewise-optimal trajectory generated in each iteration is feasible. These are illustrated through a numerical implementation of leapfrog on a problem involving the van der Pol system.
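A planar toy analogue of the junction-update scheme makes the geometry visible. The setting below is an illustrative assumption, not the paper's problem class: in the plane the "local optimal control" between two points is the straight segment, so the midpoint map reduces to averaging neighbouring junctions.

```python
import numpy as np

# Toy analogue of leapfrog (illustration only): the path is subdivided at
# junction points; each local two-point problem is solved exactly (here, a
# straight segment), and each interior junction is moved to the midpoint of
# the local solution joining its neighbours. Iterating flattens an initially
# crooked feasible path toward the globally shortest one -- every intermediate
# path remains feasible, and no costate guess is ever needed.

pts = np.array([[0, 0], [1, 2], [2, -1], [3, 3], [4, 0]], float)  # initial path

for _ in range(200):
    for j in range(1, len(pts) - 1):
        pts[j] = 0.5 * (pts[j - 1] + pts[j + 1])   # midpoint-map junction update

print(pts)   # interior junctions approach the straight line from (0,0) to (4,0)
```

The two advantages named in the abstract show up even here: the iteration starts from any feasible path, and each sweep produces another feasible path, in contrast to shooting methods that iterate on an infeasible costate guess.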
Direct Methods with Maximal . . .
Abstract:
Many practical optimal control problems include discrete decisions. These may be either time-independent parameters or time-dependent control functions, such as gears or valves, that can only take discrete values at any given time. While great progress has been achieved in the solution of optimization problems involving integer variables, in particular mixed-integer linear programs, as well as in continuous optimal control problems, the combination of the two is still an open field of research. We consider the question of lower bounds that can be obtained by a relaxation of the integer requirements. For general nonlinear mixed-integer programs such lower bounds typically suffer from a huge integer gap. We convexify (with respect to binary controls) and relax the original problem and prove that the optimal solution of this continuous control problem yields the best lower bound for the nonlinear integer problem. Building on this theoretical result we present a novel algorithm to solve mixed-integer optimal control problems, with a focus on discrete-valued control functions. Our algorithm is based on the direct multiple shooting method, an adaptive refinement of the underlying control discretization grid, and tailored heuristic integer methods. Its applicability is shown by a challenging application, the energy-optimal control of a subway train with discrete gears and velocity limits.
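One well-known way to recover a binary control from the relaxed solution in this line of work is sum-up rounding; whether it is the "tailored heuristic" meant above is an assumption, and the sketch below is an illustration, not the paper's algorithm.

```python
import numpy as np

# Sketch of sum-up rounding (hedged: one known heuristic for this problem
# class, not necessarily the paper's). A relaxed binary control alpha in
# [0, 1] on a uniform grid is rounded to {0, 1} cell by cell so that the
# accumulated control integral never drifts from the relaxed one by more
# than one grid cell.

def sum_up_rounding(alpha, dt):
    w = np.zeros_like(alpha)
    acc_a = acc_w = 0.0
    for k in range(len(alpha)):
        acc_a += alpha[k] * dt
        w[k] = 1.0 if acc_a - acc_w >= 0.5 * dt else 0.0   # round the backlog
        acc_w += w[k] * dt
    return w

alpha = np.full(10, 0.3)            # relaxed control: 30% duty everywhere
w = sum_up_rounding(alpha, dt=0.1)
print(w)                            # binary control whose integral tracks alpha's
```

The bounded drift of the integral is what makes the relaxed optimum a tight lower bound in practice: refining the control grid drives the rounded trajectory toward the relaxed one.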
Control and Cybernetics
"... with bang-bang components in problems with semilinear state equation by ..."
Control and Cybernetics
"... Second order optimality conditions for bang–bang control problems 1 by ..."