Results 1–10 of 48
Tutorial paper: Parallel architectures for model predictive control
In: Proc. of the European Control Conference, 2009
Cited by 6 (2 self)
Abstract — This tutorial paper surveys recent developments in parallel computer architecture, focusing on the field-programmable gate array and the graphics processor. We aim to illustrate the potential of these architectures for the type of high-speed numerical computation required in online optimization for model predictive control. While significant performance advantages can be gained by migrating existing control algorithms to these processor architectures, realising their full potential requires further research at the boundary of control theory, digital electronics, and computer architecture. We survey some of the open questions in this area.
Code generation for receding horizon control
In: Proc. of the IEEE International Symposium on Computer-Aided Control System Design, 2010
Cited by 5 (3 self)
Abstract — Receding horizon control (RHC), also known as model predictive control (MPC), is a general-purpose control scheme that involves repeatedly solving a constrained optimization problem, using predictions of future costs, disturbances, and constraints over a moving time horizon to choose the control action. RHC handles constraints, such as limits on control variables, in a direct and natural way, and generates sophisticated feedforward actions. The main disadvantage of RHC is that an optimization problem has to be solved at each step, which leads many control engineers to think that it can only be used for systems with slow sampling (say, less than one Hz). Several techniques have recently been developed to get around this problem. In one approach, called explicit MPC, the optimization problem is solved analytically and explicitly, so evaluating the control policy requires only a lookup table search. Another approach, which is our focus here, is to exploit the structure in the optimization problem to solve it efficiently. This approach has previously been applied in several specific cases, using custom, hand-written code. However, this requires significant development time, and specialist knowledge of optimization and numerical algorithms. Recent developments in convex optimization code generation have made the task much easier and quicker. With code generation, the RHC policy is specified in a high-level language, then automatically transformed into source code for a custom solver. The custom solver is typically orders of magnitude faster than a generic solver, solving in milliseconds or microseconds on standard processors, making it possible to use RHC policies at kilohertz rates. In this paper we demonstrate code generation with two simple control examples. They show a range of problems that may be handled by RHC. In every case, we show a speedup of several hundred times over generic parser-solvers.
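The RHC pattern the abstract describes — solve a constrained optimization at each step, apply only the first input, repeat — can be sketched in a few lines. This toy is not the paper's generated solver: it assumes a scalar system x⁺ = a·x + b·u with a box input constraint, for which the one-step QP has a closed-form (clipped) solution, so the "solve" is just a projection. All parameters (a, b, rho, u_max) are invented for illustration.

```python
# Minimal receding-horizon control loop (illustrative sketch, not a
# generated custom solver).  Assumed scalar plant x+ = a*x + b*u with
# input limit |u| <= u_max; the per-step QP
#   minimize (a*x + b*u)**2 + rho*u**2  subject to  |u| <= u_max
# has a closed-form solution here, obtained by clipping the
# unconstrained minimizer onto the constraint set.
a, b, rho, u_max = 1.2, 1.0, 0.1, 2.0   # assumed problem data

def rhc_step(x):
    # Unconstrained minimizer of (a*x + b*u)**2 + rho*u**2 over u.
    u = -a * b * x / (b * b + rho)
    # Project onto |u| <= u_max (the "QP solve" for this toy case).
    return max(-u_max, min(u_max, u))

x, traj = 5.0, []
for _ in range(20):
    u = rhc_step(x)       # solve the (trivial) QP at this step
    x = a * x + b * u     # apply only the first input, then re-solve
    traj.append(x)
```

Despite the unstable plant (a > 1), the clipped policy drives the state to the origin; a real RHC problem replaces the closed-form step with a structured QP solver.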
Fast evaluation of quadratic control-Lyapunov policy [Online]. Available: http://stanford.edu/~boyd/papers/fast_clf.html, 2009
Cited by 5 (5 self)
Abstract — The evaluation of a control-Lyapunov policy, with quadratic Lyapunov function, requires the solution of a quadratic program (QP) at each time step. For small problems this QP can be solved explicitly; for larger problems an online optimization method can be used. For this reason the control-Lyapunov policy is considered a computationally intensive control law, as opposed to an "analytical" control law such as conventional linear state feedback, linear quadratic Gaussian control, or H∞ control, and thus too complex or slow to be used in high-speed control applications. In this note we show that by precomputing certain quantities, the control-Lyapunov policy can be evaluated extremely efficiently. We show that when the number of inputs is on the order of the square root of the state dimension, the cost of evaluating a control-Lyapunov policy is on the same order as the cost of evaluating a simple linear state feedback policy, and less (in order)
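The precomputation idea can be illustrated in the simplest case. With a quadratic Lyapunov function V(x) = xᵀPx and a single box-constrained input, the per-step QP reduces to clipping a linear feedback, so once the gain row is precomputed the online cost matches plain linear state feedback. The matrices below are invented for the sketch and are not from the paper.

```python
# Sketch of fast control-Lyapunov policy evaluation (assumed data,
# single-input case).  Per-step QP:
#   minimize V(A x + B u) + r*u**2  subject to  |u| <= 1,
# with V(x) = x' P x.  Precomputing g = B'PA and h = B'PB + r reduces
# each evaluation to one dot product and a clip.
A = [[1.0, 0.1], [0.0, 1.0]]    # assumed dynamics
B = [0.0, 0.1]
P = [[2.0, 0.5], [0.5, 1.0]]    # assumed quadratic Lyapunov matrix
r = 0.1

# Offline precomputation.
Bp = [sum(B[i] * P[i][j] for i in range(2)) for j in range(2)]   # B'P
g = [sum(Bp[i] * A[i][j] for i in range(2)) for j in range(2)]   # B'PA
h = sum(Bp[i] * B[i] for i in range(2)) + r                      # B'PB + r

def clf_policy(x):
    # Unconstrained QP minimizer, then projection onto |u| <= 1.
    u = -(g[0] * x[0] + g[1] * x[1]) / h
    return max(-1.0, min(1.0, u))
```

With multiple inputs the projection is no longer a simple clip, which is where the note's more careful precomputation comes in.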
Performance bounds and suboptimal policies for linear stochastic control via LMIs
Cited by 4 (4 self)
In a recent paper, the authors showed how to compute performance bounds for infinite-horizon stochastic control problems with linear system dynamics and arbitrary constraints, objective, and noise distribution. In this paper, we extend these results to the finite-horizon case, with asymmetric costs and constraint sets. In addition, we derive our bounds using a new method, where we relax the Bellman equation to an inequality. The method is based on bounding the objective with a general quadratic function, and using linear matrix inequalities (LMIs) and semidefinite programming (SDP) to optimize the bound. The resulting LMIs are more complicated than in the previous paper (which only used quadratic forms) but this extension allows us to obtain good bounds for problems with substantial asymmetry, such as supply chain problems. The method also yields very good suboptimal control policies, using control-Lyapunov
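The core trick — relax the Bellman equation V = TV to the inequality V ≤ TV, then optimize the bound — can be seen in a scalar toy where the LMI collapses to a one-dimensional inequality. All numbers below are assumed; the papers handle the general case with SDP solvers.

```python
# Scalar toy of the Bellman-inequality bounding method (assumed data).
# Any quadratic V(x) = p*x**2 satisfying V <= T V is a lower bound on
# the LQR value function, so we maximize p subject to that inequality
# by a simple scan -- a stand-in for the LMI/SDP machinery.
a, b, q, r = 1.1, 1.0, 1.0, 0.1   # assumed scalar LQR data

def bellman_rhs(p):
    # Coefficient of x**2 in (T V)(x) = min_u [q x^2 + r u^2 + p (a x + b u)^2].
    return q + p * a * a - (p * a * b) ** 2 / (r + p * b * b)

best = 0.0
for i in range(50001):
    p = i * 1e-3
    if p <= bellman_rhs(p):        # Bellman inequality holds: valid lower bound
        best = max(best, p)
```

For this data the optimized bound recovers the Riccati solution (≈ 1.111), confirming that relaxing the equation to an inequality loses nothing in the exactly-solvable case.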
Dataflow-based Implementation of Model Predictive Control, 2009
Cited by 3 (3 self)
Model Predictive Control (MPC) has been used in a wide range of application areas including chemical engineering, food processing, automotive engineering, aerospace, and metallurgy. MPC is often computation intensive, which limits the class of systems to which it can be applied and the performance criteria it can use. This paper describes a general framework called reactive, control-integrated dataflow modeling for analyzing and improving the algorithms used for MPC and their hardware implementations. The utility of the framework is demonstrated by applying it to the Newton-KKT algorithm. The results show significant reductions in computation time for test cases.
Fast Linear Model Predictive Control via Custom Integrated Circuit Architecture
Cited by 3 (0 self)
This paper addresses the implementation of linear model predictive control (MPC) at millisecond-range, or faster, sampling rates. This is achieved by designing a custom integrated circuit architecture that is specifically targeted to the MPC problem. As opposed to the more usual approach using a generic serial architecture processor, the design here is implemented using a field programmable gate array (FPGA) and employs parallelism, pipelining, and specialized numerical formats. The performance of this approach is profiled via the control of a 14th-order resonant structure with a 12-sample prediction horizon at a 200 µs sampling rate. The results indicate that no more than 30 µs are required to compute the control action. A feasibility study indicates that the design can also be implemented in 130 nm CMOS technology, with a core area of 2.5 mm². These results illustrate the feasibility of MPC for reasonably complex systems, using relatively cheap, small, and low-power computing hardware.
Operation and Configuration of a Storage Portfolio via Convex Optimization
In: Proceedings of the IFAC World Congress, 2011
Cited by 3 (1 self)
Abstract: We consider a portfolio of storage devices which is used to modify a commodity flow so as to minimize an average cost function. The individual storage devices have different parameters that characterize attributes such as capacity, maximum charging rates, and losses in charging and storage. We address two problems related to such a system. The first is the problem of operating a portfolio of storage devices in real-time, i.e., making real-time decisions as to how to charge or discharge each of the storage devices in response to the fluctuating commodity flow and cost function. The second is the problem of configuring the portfolio of storage devices, i.e., choosing a single portfolio from a set of candidate portfolios. Here we are given the cost of each candidate portfolio as a function of its parameters, and seek to minimize a combination of initial configuration cost and average operating cost. In this paper, we show how both problems can be approximately solved using convex optimization.
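The operation problem has the flavor of the following toy: pick a charge profile for one idealized device to flatten a commodity flow, under a charge-rate limit. Everything here is assumed (the demand data, a quadratic peak-shaving cost, a soft energy-balance penalty), and projected gradient descent stands in for the convex solvers the paper uses.

```python
# Toy single-device storage operation (assumed data and cost).  Choose a
# charge profile c to flatten the flow d + c, with rate limit
# |c_t| <= c_max and a soft penalty keeping total charge near zero.
# Solved by projected gradient descent on the convex objective
#   sum_t (d_t + c_t)**2 + lam * (sum_t c_t)**2.
d = [1.0, 4.0, 2.0, 5.0, 1.0, 3.0]   # assumed commodity demand profile
c_max, lam, step = 1.5, 1.0, 0.05
c = [0.0] * len(d)

def grad(c):
    s = sum(c)
    return [2 * (d[t] + c[t]) + 2 * lam * s for t in range(len(d))]

for _ in range(2000):
    g = grad(c)
    # Gradient step followed by projection onto the box |c_t| <= c_max.
    c = [min(c_max, max(-c_max, c[t] - step * g[t])) for t in range(len(d))]

flat = [d[t] + c[t] for t in range(len(d))]
```

The resulting flow has a much smaller spread than the raw demand; the paper's portfolio version couples many heterogeneous devices through shared flow and cost terms.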
Nonlinear Q-Design for Convex Stochastic Control, 2009
Cited by 3 (0 self)
In this note we describe a version of the Q-design method that can be used to design nonlinear dynamic controllers for a discrete-time linear time-varying plant, with convex cost and constraint functions and arbitrary disturbance distribution. Choosing a basis for the nonlinear Q-parameter yields a convex stochastic optimization problem, which can be solved by standard methods such as sampling. In principle (for a large enough basis, and enough sampling) this method can solve the controller design problem to any degree of accuracy; in any case it can be used to find a suboptimal controller, using convex optimization methods. We illustrate the method with a numerical example, comparing a nonlinear controller found using our method with the optimal linear controller, the certainty-equivalent model predictive controller, and a lower bound on achievable performance obtained by ignoring the causality constraint.
Towards a fixed point QP solver for predictive control
In: Proc. IEEE Conf. on Decision and Control (submitted), 2012
Cited by 2 (2 self)
Abstract — There is a need for high-speed, low-cost, and low-energy solutions for convex quadratic programming to enable model predictive control (MPC) to be implemented in a wider set of applications than is currently possible. For most quadratic programming (QP) solvers the computational bottleneck is the solution of systems of linear equations, which we propose to solve using a fixed-point implementation of an iterative linear solver to allow for fast and efficient computation in parallel hardware. However, fixed-point arithmetic presents additional challenges, such as having to bound peak values of variables and constrain their dynamic ranges; for this class of algorithms, these tasks cannot be handled by current automated tools. We employ a preconditioner in a novel manner to establish tight analytical bounds on all the variables of the Lanczos process, the heart of modern iterative linear solving algorithms. The proposed approach is evaluated through the implementation of a mixed-precision interior-point controller for a Boeing 747 aircraft. The numerical results show that there does not have to be a loss of control quality by moving from floating-point to fixed-point.
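The central constraint of fixed-point arithmetic — every variable lives in a known, bounded range and is stored as a scaled integer — can be demonstrated with a tiny Q-format sketch. The format, bound, and data are assumed for illustration; the paper's contribution is deriving such bounds analytically for the Lanczos process via preconditioning, which this toy only mimics with an assertion.

```python
# Minimal fixed-point (Q-format) arithmetic sketch.  Values are stored
# as integers scaled by 2**FRAC; before conversion, each value must lie
# inside a known bound -- the role played by the paper's analytically
# derived bounds on the Lanczos variables.
FRAC = 16                      # fractional bits (assumed format)
SCALE = 1 << FRAC
BOUND = 4.0                    # assumed analytical bound on all values

def to_fixed(x):
    assert abs(x) <= BOUND     # dynamic range must be certified up front
    return int(round(x * SCALE))

def fixed_mul(a, b):
    return (a * b) >> FRAC     # multiply, then rescale back to Q format

def fixed_dot(xs, ys):
    acc = 0
    for a, b in zip(xs, ys):
        acc += fixed_mul(to_fixed(a), to_fixed(b))
    return acc / SCALE         # back to float for inspection

approx = fixed_dot([0.5, -1.25, 2.0], [1.5, 0.5, -0.75])
exact = 0.5 * 1.5 + (-1.25) * 0.5 + 2.0 * (-0.75)
```

On hardware the shift and integer multiply map directly onto cheap FPGA resources, which is why bounding the ranges (rather than carrying floating-point exponents) pays off.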
Min-Max Approximate Dynamic Programming
Cited by 2 (2 self)
Abstract — In this paper we describe an approximate dynamic programming policy for a discrete-time dynamical system perturbed by noise. The approximate value function is the pointwise supremum of a family of lower bounds on the value function of the stochastic control problem; evaluating the control policy involves the solution of a min-max or saddle-point problem. For a quadratically constrained linear quadratic control problem, evaluating the policy amounts to solving a semidefinite program at each time step. By evaluating the policy, we obtain a lower bound on the value function, which can be used to evaluate performance: when the lower bound and the achieved performance of the policy are close, we can conclude that the policy is nearly optimal. We describe several numerical examples where this is indeed the case.
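The structure of the policy — minimize, over the input, a stage cost plus the pointwise maximum of a family of quadratic lower bounds — is easy to see in a scalar toy. The bound coefficients and plant data below are assumed, and a grid search over the input stands in for the per-step SDP solve.

```python
# Toy min-max ADP policy (assumed data; a grid search replaces the
# per-step SDP).  The approximate value function is the pointwise max
# of quadratic lower bounds V_i(x) = p_i*x**2 + q_i.
bounds = [(0.5, 0.0), (1.0, -0.5), (2.0, -3.0)]   # assumed (p_i, q_i)

def v_hat(x):
    # Pointwise supremum of the family of lower bounds.
    return max(p * x * x + q for p, q in bounds)

def policy(x, a=1.1, b=1.0, r=0.1):
    # Saddle-point structure: minimize over u the stage cost plus the
    # max over the bound family (evaluated inside v_hat).
    us = [i * 0.01 - 5.0 for i in range(1001)]
    return min(us, key=lambda u: r * u * u + v_hat(a * x + b * u))
```

Because v_hat is itself a lower bound on the true value function, comparing it with the cost the policy actually achieves gives the optimality certificate the abstract describes.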