Results 1 – 10 of 10
Adaptive dynamic programming
IEEE Trans. Syst. Man Cybern., 2002
Cited by 44 (2 self)
Abstract:
Unlike the many soft computing applications where it suffices to achieve a "good approximation most of the time," a control system must be stable all of the time. As such, if one desires to learn a control law in real time, a fusion of soft computing techniques to learn the appropriate control law with hard computing techniques to maintain the stability constraint and guarantee convergence is required. The objective of the present paper is to describe an adaptive dynamic programming algorithm (ADPA) which fuses soft computing techniques to learn the optimal cost (or return) functional for a stabilizable nonlinear system with unknown dynamics and hard computing techniques to verify the stability and convergence of the algorithm. Specifically, the algorithm is initialized with a (stabilizing) cost functional, and the system is run with the corresponding control law (defined by the Hamilton–Jacobi–Bellman equation), with the resultant state trajectories used to update the cost functional in a soft computing mode. Hard computing techniques are then used to show that this process is globally convergent, with stepwise stability, to the optimal cost functional/control law pair for an (unknown) input-affine system with an input-quadratic performance measure (modulo the appropriate technical conditions). Three specific implementations of the ADPA are developed: 1) the linear case, 2) the nonlinear case using a locally quadratic approximation to the cost functional, and 3) the nonlinear case using a radial basis function approximation of the cost functional; each is illustrated by an application to flight control. Index Terms—Adaptive control, adaptive critic, dynamic programming, nonlinear control, optimal control.
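The linear special case mentioned in the abstract can be sketched concretely as policy iteration on the cost functional: evaluate the cost of the current stabilizing control law, then improve the law from that cost. The snippet below is a minimal known-model scalar illustration of that evaluate/improve loop (Kleinman-style), not the paper's model-free ADPA; the system and weights `a`, `b`, `q`, `r` are hypothetical placeholders.

```python
import math

# Scalar linear system x' = a*x + b*u with cost = integral of q*x^2 + r*u^2.
# Policy iteration: starting from a stabilizing gain k (control u = -k*x),
# alternately evaluate the cost V(x) = p*x^2 of the current law and improve k.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

k = 2.0                       # initial stabilizing gain: a - b*k = -1 < 0
for _ in range(20):
    # Policy evaluation (scalar Lyapunov equation): 2*(a - b*k)*p + q + r*k**2 = 0
    p = (q + r * k**2) / (2.0 * (b * k - a))
    # Policy improvement: minimize the Hamiltonian over u
    k = b * p / r

# The fixed point solves the scalar Riccati equation 2*a*p - (b*p)**2/r + q = 0;
# for these numbers its stabilizing solution is p = 1 + sqrt(2).
p_star = 1.0 + math.sqrt(2.0)
print(p, p_star)
```

At the fixed point the evaluated cost solves the algebraic Riccati equation, so the iterates converge to the optimal cost/control-law pair, mirroring the convergence claim above.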
Two Numerical Methods for Optimizing Matrix Stability
Linear Algebra Appl., 2001
Cited by 33 (8 self)
Abstract:
Consider the affine matrix family A(x) = A_0 + Σ_{k=1}^m x_k A_k, mapping a design vector x ∈ R^m into the space of n × n real matrices.
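The stability-optimization problem over such a family is to choose x minimizing the spectral abscissa (largest real part of an eigenvalue) of A(x). A toy sketch for a hypothetical 2 × 2 family, using the quadratic formula for the eigenvalues and a crude grid search in place of the paper's numerical methods:

```python
import cmath

# Hypothetical affine family A(x) = A0 + x1*A1 + x2*A2 of 2x2 matrices.
A0 = [[0.0, 1.0], [0.0, 0.0]]
A1 = [[0.0, 0.0], [1.0, 0.0]]
A2 = [[0.0, 0.0], [0.0, 1.0]]

def abscissa(x1, x2):
    # Spectral abscissa of A(x): max real part of the eigenvalues.
    a = [[A0[i][j] + x1 * A1[i][j] + x2 * A2[i][j] for j in range(2)]
         for i in range(2)]
    tr = a[0][0] + a[1][1]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    disc = cmath.sqrt(tr * tr - 4.0 * det)   # eigenvalues via quadratic formula
    return max(((tr + s * disc) / 2.0).real for s in (1.0, -1.0))

# Crude grid search over [-4, 0]^2 (real methods use gradient/bundle
# techniques; this is only a sketch of the objective being minimized).
best = min((abscissa(x1 / 4.0, x2 / 4.0), x1 / 4.0, x2 / 4.0)
           for x1 in range(-16, 1) for x2 in range(-16, 1))
print(best)   # minimum occurs at a double eigenvalue, a typical feature
```

As is typical for spectral abscissa minimization, the optimum on this grid lands where the two eigenvalues coalesce into a double root, which is what makes the objective nonsmooth and motivates specialized methods.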
Computing Hopf Bifurcations I
1993
Cited by 14 (2 self)
Abstract:
This paper addresses the problems of detecting Hopf bifurcations in systems of ordinary differential equations and following curves of Hopf points in two-parameter families of vector fields. The established approach to this problem relies upon augmenting the equilibrium condition so that a Hopf bifurcation occurs at an isolated, regular point of the extended system. We propose two new methods of this type, based on classical algebraic results regarding the roots of polynomial equations and properties of Kronecker products for matrices. In addition to their utility as augmented systems for use with standard Newton-type continuation methods, they are also particularly well adapted for solution by computer algebra techniques for vector fields of small or moderate dimension.
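In the planar case the detection idea reduces to a scalar test function: a Hopf bifurcation of an equilibrium occurs where the Jacobian there has zero trace and positive determinant. The sketch below locates such a point by bisection for a hypothetical normal-form system; the methods in the paper instead use Newton-type continuation on an augmented system, but the test-function idea is the same.

```python
# Hypothetical planar normal form with equilibrium at the origin:
#   x' = mu*x - y - x*(x^2 + y^2),   y' = x + mu*y - y*(x^2 + y^2).
# Its Jacobian at the origin is [[mu, -1], [1, mu]]:
# trace = 2*mu, determinant = mu^2 + 1 > 0, so a Hopf point sits at trace = 0.
def trace_at_origin(mu):
    return 2.0 * mu

# Bisection on the trace test function over a bracket where it changes sign.
lo, hi = -1.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if trace_at_origin(lo) * trace_at_origin(mid) <= 0.0:
        hi = mid     # sign change in [lo, mid]
    else:
        lo = mid
mu_star = 0.5 * (lo + hi)
print(mu_star)   # Hopf point at mu = 0
```

The determinant condition rules out the other trace-zero degeneracy (a pair of real eigenvalues of opposite sign), which is why both conditions enter the augmented systems.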
Robust Stabilization of Uncertain Systems Based on Energy Dissipation Concepts
1996
Cited by 1 (0 self)
Abstract:
Robust stability conditions obtained through generalization of the notion of energy dissipation in physical systems are discussed in this report. Linear time-invariant (LTI) systems which dissipate energy corresponding to quadratic power functions are characterized in the time domain and the frequency domain, in terms of linear matrix inequalities (LMIs) and algebraic Riccati equations (AREs). A novel characterization of strictly dissipative LTI systems is introduced in this report. Sufficient conditions in terms of dissipativity and strict dissipativity are presented for (1) stability of the feedback interconnection of dissipative LTI systems, (2) stability of dissipative LTI systems with memoryless feedback nonlinearities, and (3) quadratic stability of uncertain linear systems. It is demonstrated that the framework of dissipative LTI systems investigated in this report unifies and extends small-gain, passivity, and sector conditions for stability. Techniques for selecting power funct...
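As a concrete instance of the small-gain condition this framework subsumes, consider two stable first-order lags in positive feedback. For this hypothetical example (all numbers below are made up for illustration) the small-gain test can be compared directly against the exact closed-loop pole locations:

```python
import cmath

# Two stable first-order systems G_i(s) = k_i / (s + a_i), a_i > 0, connected
# in positive feedback.  Small-gain condition: the loop is stable if
#   ||G1||_inf * ||G2||_inf = (k1/a1) * (k2/a2) < 1,
# since the H-infinity norm of k/(s + a) is k/a (attained at omega = 0).
a1, k1 = 2.0, 1.0
a2, k2 = 3.0, 4.0

small_gain_ok = (k1 / a1) * (k2 / a2) < 1.0

# Exact check: the positive-feedback characteristic polynomial is
#   (s + a1)(s + a2) - k1*k2 = s^2 + (a1 + a2)*s + (a1*a2 - k1*k2).
b, c = a1 + a2, a1 * a2 - k1 * k2
roots = [(-b + s * cmath.sqrt(b * b - 4.0 * c)) / 2.0 for s in (1.0, -1.0)]
exactly_stable = all(rt.real < 0.0 for rt in roots)

print(small_gain_ok, exactly_stable)
```

For this particular loop the small-gain test is not conservative: the polynomial above is stable exactly when a1*a2 > k1*k2, which is the same inequality. In general the small-gain condition is only sufficient, which is what the dissipativity framework generalizes.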
1 Introduction to Optimal and Robust Control
Abstract:
There have been at least five distinct stages in the development of the subject of model-based control systems. Early work by Nyquist [1], Bode [2], and Nichols [3] was ...
An elementary proof of Barnett's Theorem about the greatest common divisor of several univariate polynomials
Abstract:
This article provides a new proof of Barnett's Theorem, which gives the degree of the greatest common divisor of several univariate polynomials with coefficients in a field by means of the rank of an explicitly specified matrix. The new proof is elementary and self-contained (no use of the Jordan form or invariant factors), and it is based on some easy-to-state properties of subresultants. Moreover, this proof allows Barnett's results to be generalized to the case when the considered polynomials have their coefficients in an integral domain. 1 Introduction Let F be a field and {A(x), B_1(x), ..., B_t(x)} a family of polynomials in F[x] with A(x) monic and n = deg(A(x)) > deg(B_j(x)) for every j ∈ {1, ..., t}. Barnett's Theorem (see [1] or [2]) assures that the degree of the greatest common divisor of A(x), B_1(x), ..., B_t(x) satisfies deg(gcd(A(x), B_1(x), ..., B_t(x))) = n − rank[B_1(Δ_A), B_2(Δ_A), ..., B_t(Δ_A)], where Δ_A is the companion matr...
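Barnett's formula is easy to check numerically: build the companion matrix of A, evaluate each B_j at it, and take the rank of the stacked blocks. A self-contained sketch with exact rational arithmetic; the example polynomials are hypothetical, chosen so that gcd(A, B_1, B_2) = x − 1.

```python
from fractions import Fraction

# A = (x-1)^2 (x-2) = x^3 - 4x^2 + 5x - 2,  B1 = (x-1)(x-3),  B2 = x(x-1).
A  = [-2, 5, -4]                 # a0, a1, a2 of monic x^3 + a2 x^2 + a1 x + a0
Bs = [[3, -4, 1], [0, -1, 1]]    # b0, b1, b2 of each B_j

n = len(A)
D = [[Fraction(0)] * n for _ in range(n)]   # companion matrix of A
for i in range(1, n):
    D[i][i - 1] = Fraction(1)
for i in range(n):
    D[i][n - 1] = Fraction(-A[i])

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def poly_of_matrix(b, M):
    # Horner evaluation of b0*I + b1*M + ... + bm*M^m.
    R = [[Fraction(0)] * n for _ in range(n)]
    for coeff in reversed(b):
        R = mat_mul(R, M)
        for i in range(n):
            R[i][i] += Fraction(coeff)
    return R

# Stack the blocks B_j(D) side by side into an n x (t*n) matrix.
blocks = [poly_of_matrix(b, D) for b in Bs]
stacked = [sum((B[i] for B in blocks), []) for i in range(n)]

def rank(M):
    # Exact rank by Gauss-Jordan elimination over the rationals.
    M, r = [row[:] for row in M], 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [M[i][j] - f * M[r][j] for j in range(len(M[0]))]
        r += 1
    return r

deg_gcd = n - rank(stacked)
print(deg_gcd)   # 1: the gcd (here x - 1) has degree 1
```

Because the arithmetic is exact, the rank, and hence the gcd degree, is computed without the numerical-rank ambiguity a floating-point version would face.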
Numerical Solution of Matrix Interpolation Problems