Results 1-10 of 47
Efficient Interior Point Methods for Multistage Problems Arising in Receding Horizon Control, in IEEE Conference on Decision and Control (CDC), Maui, 2012
Cited by 26 (4 self)
Abstract. Receding horizon control requires the solution of an optimization problem at every sampling instant. We present efficient interior point methods tailored to convex multistage problems, a problem class in which most relevant MPC problems with linear dynamics can be cast, and specify important algorithmic details required for a high-speed implementation with superior numerical stability. In particular, the presented approach allows for quadratic constraints, which is not supported by existing fast MPC solvers. A categorization of widely used MPC problem formulations into classes of different complexity is given, and we show how the computational burden of certain quadratic or linear constraints can be decreased by a low-rank matrix forward substitution scheme. Implementation details are provided that are crucial to obtain high-speed solvers. We present extensive numerical studies for the proposed methods and compare our solver to three well-known solver packages, outperforming the fastest of these by a factor of 2-5 in speed and 3-70 in code size. Moreover, our solver is shown to be very efficient for large problem sizes and for quadratically constrained QPs, extending the set of systems amenable to advanced MPC formulations on low-cost embedded hardware.
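The tailored multistage solvers described above exploit far more structure than a generic method, but the basic interior-point mechanics they build on can be sketched with a plain log-barrier iteration for an inequality-constrained QP. The function name and the toy problem below are illustrative, not taken from the paper:

```python
import numpy as np

def barrier_qp(Q, c, G, h, x0, t0=1.0, mu=10.0, n_outer=8, n_newton=30):
    """Minimize 0.5 x'Qx + c'x subject to Gx <= h with a log-barrier method.

    x0 must be strictly feasible (G x0 < h). For each barrier weight t,
    damped Newton minimizes  t*(0.5 x'Qx + c'x) - sum(log(h - Gx)).
    """
    x, t = x0.astype(float), t0
    for _ in range(n_outer):
        for _ in range(n_newton):
            s = h - G @ x                              # slacks, kept positive
            grad = t * (Q @ x + c) + G.T @ (1.0 / s)
            hess = t * Q + G.T @ np.diag(1.0 / s**2) @ G
            dx = np.linalg.solve(hess, -grad)
            alpha = 1.0                                # backtrack to stay feasible
            while np.any(h - G @ (x + alpha * dx) <= 0):
                alpha *= 0.5
            x = x + alpha * dx
            if np.linalg.norm(grad) < 1e-8 * max(t, 1.0):
                break
        t *= mu                                        # tighten the barrier
    return x
```

On the toy problem min 0.5||x||^2 - x1 - x2 subject to x <= (0.5, 0.5), the iterates approach the active bound (0.5, 0.5) as t grows.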
Nonlinear programming strategies for state estimation and model predictive control, in Nonlinear Model Predictive Control, 2009
Cited by 14 (8 self)
Abstract. Sensitivity-based strategies for online moving horizon estimation (MHE) and nonlinear model predictive control (NMPC) are presented from both a stability and a computational perspective. These strategies make use of full-space interior-point nonlinear programming (NLP) algorithms and NLP sensitivity concepts. In particular, NLP sensitivity allows us to partition the solution of the optimization problems into background and negligible online computations, thus avoiding the problem of computational delay even with large dynamic models. We demonstrate these developments through a distributed polymerization reactor model containing around 10,000 differential and algebraic equations (DAEs).
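The NLP-sensitivity idea, doing the expensive linear algebra in the background and applying only a cheap correction online, can be sketched for the simplest case of an equality-constrained QP. Here the KKT matrix is rebuilt and solved directly for brevity; in an online scheme it would be factored once in the background. All names are illustrative:

```python
import numpy as np

def sensitivity_update(Q, A, x, lam, dgrad_dp, dp):
    """Fast online correction of an equality-constrained QP solution.

    For  min 0.5 x'Qx + p'x  s.t.  Ax = b, a parameter change dp shifts the
    KKT right-hand side only, so the corrected primal-dual point follows
    from one solve with the (unchanged) KKT matrix.
    """
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-dgrad_dp @ dp, np.zeros(m)])
    step = np.linalg.solve(K, rhs)
    return x + step[:n], lam + step[n:]
```

For a QP (linear dependence on the parameter) the correction is exact; for a general NLP it is a first-order sensitivity estimate.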
Iterative Linear Algebra for Constrained Optimization, 2005
Cited by 13 (4 self)
Each step of an interior point method for nonlinear optimization requires the solution of a symmetric indefinite linear system known as a KKT system, or more generally, a saddle point problem. As the problem size increases, direct methods become prohibitively expensive to use for solving these problems; this leads to iterative solvers being the only viable alternative. In this thesis we consider iterative methods for solving saddle point systems and show that a projected preconditioned conjugate gradient method can be applied to these indefinite systems. Such a method requires the use of a specific class of preconditioners, (extended) constraint preconditioners, which exactly replicate some parts of the saddle point system that we wish to solve. The standard method for using constraint preconditioners, at least in the optimization community, has been to choose the constraint
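The projected preconditioned conjugate gradient idea can be sketched in its simplest form, using an orthogonal projection onto the null space of the constraints in place of a general constraint preconditioner. The function name and the small test problem are illustrative:

```python
import numpy as np

def projected_cg(H, g, A, x0, tol=1e-12, maxit=100):
    """Minimize 0.5 x'Hx + g'x subject to Ax = Ax0 (x0 feasible).

    Conjugate gradients run entirely in the null space of A: every
    residual is projected with P = I - A'(AA')^{-1}A, the simplest member
    of the constraint-preconditioner family.
    """
    AAt = A @ A.T
    def proj(v):                       # orthogonal projection onto null(A)
        return v - A.T @ np.linalg.solve(AAt, A @ v)
    x = x0.astype(float).copy()
    r = H @ x + g                      # gradient of the objective
    z = proj(r)
    p = -z
    for _ in range(maxit):
        rz = r @ z
        if rz < tol:                   # projected residual vanished: optimal
            break
        Hp = H @ p
        alpha = rz / (p @ Hp)
        x = x + alpha * p
        r = r + alpha * Hp
        z = proj(r)
        p = -z + ((r @ z) / rz) * p
    return x
```

Because the iteration stays in a subspace of dimension n - m, it converges in at most that many steps in exact arithmetic.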
Exploiting Sparsity in Direct Collocation Pseudospectral Methods for Solving Optimal Control Problems, March-April 2012
Cited by 10 (9 self)
In a direct collocation pseudospectral method, a continuous-time optimal control problem is transcribed to a finite-dimensional nonlinear programming problem. Solving this nonlinear programming problem as efficiently as possible requires that sparsity at both the first- and second-derivative levels be exploited. In this paper, a computationally efficient method is developed for computing the first and second derivatives of the nonlinear programming problem functions arising from a pseudospectral discretization of a continuous-time optimal control problem. Specifically, expressions are derived for the objective function gradient, constraint Jacobian, and Lagrangian Hessian arising from the previously developed Radau pseudospectral method. It is shown that the computation of these derivative functions can be reduced to computing the first and second derivatives of the functions in the continuous-time optimal control problem. As a result, the method significantly reduces the amount of computation required to obtain the first and second derivatives needed by a nonlinear programming problem solver. The approach is demonstrated on an example in which significant computational benefits are obtained compared against direct differentiation of the nonlinear programming problem functions, improving the computational efficiency of solving nonlinear programming problems arising from pseudospectral discretizations of continuous-time optimal control problems.
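The key structural observation, that the constraint Jacobian of a collocation transcription splits into a fixed differentiation-matrix part plus a block-diagonal dynamics part, can be sketched as follows. A generic defect form c(X) = D X - f(X) is assumed here; the paper's Radau-specific expressions differ in detail:

```python
import numpy as np

def collocation_jacobian(D, dfdx_blocks):
    """Jacobian of the stacked collocation defect c(X) = D @ X - f(X).

    X holds the state at each node row-wise. The differentiation matrix D
    contributes a fixed Kronecker-structured part, while the dynamics f
    contribute only a block diagonal of per-node Jacobians df/dx -- so the
    NLP derivatives reduce to derivatives of the continuous-time functions.
    """
    n = dfdx_blocks[0].shape[0]
    J = np.kron(D, np.eye(n))                 # discretization part, constant
    for k, Jk in enumerate(dfdx_blocks):      # dynamics part, block diagonal
        J[k*n:(k+1)*n, k*n:(k+1)*n] -= Jk
    return J
```

Only the small per-node blocks df/dx change between solver iterations, which is exactly the sparsity a nonlinear programming solver can exploit.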
Finding a point in the relative interior of a polyhedron, 2007
Cited by 9 (3 self)
A new initialization or 'Phase I' strategy for feasible interior point methods for linear programming is proposed that computes a point on the primal-dual central path associated with the linear program. Provided there exist primal-dual strictly feasible points, an all-pervasive assumption in interior point method theory that implies the existence of the central path, our initial method (Algorithm 1) is globally Q-linearly and asymptotically Q-quadratically convergent, with a provable worst-case iteration complexity bound. When this assumption is not met, the numerical behaviour of Algorithm 1 is highly disappointing, even when the problem is primal-dual feasible. This is due to the presence of implicit equalities, inequality constraints that hold as equalities at all feasible points. Controlled perturbations of the inequality constraints of the primal-dual problems are introduced, geometrically equivalent to enlarging the primal-dual feasible region and then systematically contracting it back to its initial shape, in order for the perturbed problems to satisfy the assumption. Thus Algorithm 1 can successfully be employed to solve each of the perturbed problems. We show that, when there exist primal-dual strictly feasible points of the original problems, the resulting method, Algorithm 2, finds such a point in a finite number of changes to the perturbation parameters. When implicit equalities are present, but the original problem and its dual are feasible, Algorithm 2 asymptotically detects all the primal-dual implicit equalities and generates a point in the relative interior of the primal-dual feasible set. Algorithm 2 can also asymptotically detect primal-dual infeasibility.
Successful numerical experience with Algorithm 2 on linear programs from NETLIB and CUTEr, both with and without significant preprocessing of the problems, indicates that Algorithm 2 may be used as an algorithmic preprocessor for removing implicit equalities, with theoretical guarantees of convergence.
Finite-element preconditioning of GNI spectral methods, SIAM J. Sci. Comput.
Cited by 9 (2 self)
Several old and new finite-element preconditioners for nodal-based spectral discretizations of −∆u = f in the domain Ω = (−1, 1)^d (d = 2 or 3), with Dirichlet or Neumann boundary conditions, are considered and compared in terms of both condition number and computational efficiency. The computational domain covers the case of classical single-domain spectral approximations (see [5]), as well as that of more general spectral-element methods in which the preconditioners are expressed in terms of local (upon every element) algebraic solvers. The primal spectral approximation is based on the Galerkin approach with Numerical Integration (GNI) at the Legendre-Gauss-Lobatto (LGL) nodes in the domain. The preconditioning matrices rely on either P1 or Q1 or Q1,NI (i.e., with Numerical Integration) finite elements on meshes whose vertices coincide with the LGL nodes used for the spectral approximation. The analysis highlights certain preconditioners that yield the solution at an overall cost proportional to N^(d+1), where N denotes the polynomial degree in each direction.
Block structured quadratic programming for the direct multiple shooting method for optimal control, Optimization Methods and Software, 2011
Cited by 6 (4 self)
Abstract. In this contribution we address the efficient solution of optimal control problems of dynamic processes with many controls. Such problems arise, e.g., from the outer convexification of integer control decisions. We treat this optimal control problem class using the direct multiple shooting method to discretize the optimal control problem. The resulting nonlinear problems are solved using sequential quadratic programming methods. We review the classical condensing algorithm that preprocesses the large but sparse quadratic programs to obtain small but dense ones. We show that this approach leaves room for improvement when applied in conjunction with outer convexification. To this end, we present a new complementary condensing algorithm for quadratic programs with many controls. This algorithm is based on a hybrid null-space range-space approach to exploit the block sparse structure of the quadratic programs that is due to direct multiple shooting. An assessment of the theoretical run time complexity reveals significant advantages of the proposed algorithm. We give a detailed account of the required number of floating point operations, depending on the process dimensions. Finally, we demonstrate the merit of the new complementary condensing approach by comparing the behavior of both methods on a vehicle control problem in which the integer gear decision is convexified.
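Classical condensing, which the paper reviews before proposing its complementary variant, eliminates the states via the dynamics to leave a dense QP in the controls alone. The prediction-matrix construction behind it can be sketched as follows (function and variable names are illustrative):

```python
import numpy as np

def condense(A, B, N):
    """Prediction matrices for x_{k+1} = A x_k + B u_k, k = 0..N-1.

    Stacking x_1..x_N gives  x = Phi @ x0 + Gamma @ u,  which turns the
    large sparse multi-stage QP into a small dense QP in the controls u.
    """
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    return Phi, Gamma
```

Note that Gamma is dense and lower block triangular; with many controls its column dimension grows quickly, which is precisely the regime where the paper's complementary condensing is argued to pay off.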
An efficient overloaded method for computing derivatives of mathematical functions in MATLAB, ACM Transactions on Mathematical Software
Cited by 5 (2 self)
An object-oriented method is presented that computes, without truncation error, derivatives of functions defined by MATLAB computer codes. The method implements forward-mode automatic differentiation via operator overloading in a manner that produces a new MATLAB code which computes the derivatives of the outputs of the original function with respect to the variables of differentiation. Because the derivative code has the same input as the original function code, the method can be applied recursively to generate derivatives of any desired order. In addition, the approach has the feature that the derivatives are generated simply by evaluating the function on an instance of the class, making the method straightforward to use while enabling differentiation of highly complex functions. A detailed description of the method is presented, and the approach is illustrated and shown to be efficient on four examples.
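The operator-overloading approach is language-agnostic. A minimal forward-mode analogue in Python, using a dual-number class rather than MATLAB's classdef mechanism, looks like this (the class and helper are illustrative, not the paper's implementation):

```python
class Dual:
    """Minimal forward-mode AD scalar: val is f(x), dot is f'(x)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    @staticmethod
    def _lift(o):
        return o if isinstance(o, Dual) else Dual(o)

    def __add__(self, o):
        o = Dual._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __sub__(self, o):
        o = Dual._lift(o)
        return Dual(self.val - o.val, self.dot - o.dot)

    def __rsub__(self, o):
        return Dual._lift(o).__sub__(self)

    def __mul__(self, o):            # product rule propagates the derivative
        o = Dual._lift(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f on a seeded Dual and read off df/dx -- no truncation error."""
    return f(Dual(x, 1.0)).dot
```

Because the derivative rides along with every arithmetic operation, any function built from the overloaded operators is differentiated exactly, with no finite-difference step size to tune.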
A computational study of the use of an optimization-based method for simulating large multibody systems, Optimization Methods and Software, 2008