Results 1–10 of 16
Krylov Projection Methods For Model Reduction
, 1997
Abstract

Cited by 119 (3 self)
This dissertation focuses on efficiently forming reduced-order models for large, linear dynamic systems. Projections onto unions of Krylov subspaces lead to a class of reduced-order models known as rational interpolants. The cornerstone of this dissertation is a collection of theory relating Krylov projection to rational interpolation. Based on this theoretical framework, three algorithms for model reduction are proposed. The first algorithm, dual rational Arnoldi, is a numerically reliable approach involving orthogonal projection matrices. The second, rational Lanczos, is an efficient generalization of existing Lanczos-based methods. The third, rational power Krylov, avoids orthogonalization and is suited for parallel or approximate computations. The performance of the three algorithms is compared via a combination of theory and examples. Independent of the precise algorithm, a host of supporting tools are also developed to form a complete model-reduction package. Techniques for choosing the matching frequencies, estimating the modeling error, ensuring the model's stability, treating multiple-input multiple-output systems, implementing parallelism, and avoiding a need for exact factors of large matrix pencils are all examined to various degrees.
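The moment-matching idea behind these Krylov projection methods can be sketched in a few lines. The snippet below is an illustrative one-sided Arnoldi projection (not the dissertation's dual rational Arnoldi; the function name and test matrices are hypothetical): a basis V of the Krylov subspace K_q(A^{-1}, A^{-1}b) yields a reduced model matching the first q moments of h(s) = c(sI − A)^{−1}b at s = 0.

```python
import numpy as np

def arnoldi_reduce(A, b, c, q):
    """One-sided Arnoldi sketch of Krylov-projection model reduction.

    V spans K_q(A^{-1}, A^{-1} b); the projected triple
    (V^T A V, V^T b, c V) matches the first q moments of
    h(s) = c (sI - A)^{-1} b at s = 0.
    """
    n = len(b)
    Ainv = np.linalg.inv(A)   # a sparse factorization would be used in practice
    V = np.zeros((n, q))
    w = Ainv @ b
    V[:, 0] = w / np.linalg.norm(w)
    for j in range(1, q):
        w = Ainv @ V[:, j - 1]
        w = w - V[:, :j] @ (V[:, :j].T @ w)  # Gram-Schmidt vs. earlier columns
        V[:, j] = w / np.linalg.norm(w)
    return V.T @ A @ V, V.T @ b, c @ V

rng = np.random.default_rng(0)
n, q = 8, 3
A = rng.standard_normal((n, n)) - 6.0 * np.eye(n)  # shifted: safely invertible
b = rng.standard_normal(n)
c = rng.standard_normal(n)
Ar, br, cr = arnoldi_reduce(A, b, c, q)
# moments at s = 0 are (up to sign) c A^{-k} b; the first q agree
for k in range(1, q + 1):
    m = c @ np.linalg.matrix_power(np.linalg.inv(A), k) @ b
    mr = cr @ np.linalg.matrix_power(np.linalg.inv(Ar), k) @ br
    assert abs(m - mr) <= 1e-8 * max(1.0, abs(m))
```

The dissertation's rational variants interpolate at several frequency points rather than only s = 0, but the projection mechanism is the same.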
A note on the stochastic realization problem
 Hemisphere Publishing Corporation
, 1976
Abstract

Cited by 98 (23 self)
Abstract. Given a mean square continuous stochastic vector process y with stationary increments and a rational spectral density Φ such that Φ(∞) is finite and nonsingular, consider the problem of finding all minimal (wide sense) Markov representations (stochastic realizations) of y. All such realizations are characterized and classified with respect to deterministic as well as probabilistic properties. It is shown that only certain realizations (internal stochastic realizations) can be determined from the given output process y. All others (external stochastic realizations) require that the probability space be extended with an exogenous random component. A complete characterization of the sets of internal and external stochastic realizations is provided. It is shown that the state process of any internal stochastic realization can be expressed in terms of two steady-state Kalman-Bucy filters, one evolving forward in time over the infinite past and one backward over the infinite future. An algorithm is presented which generates families of external realizations defined on the same probability space and totally ordered with respect to state covariances.
Model reduction of state space systems via an Implicitly Restarted Lanczos method
 Numer. Algorithms
, 1996
Abstract

Cited by 56 (8 self)
The nonsymmetric Lanczos method has recently received significant attention as a model reduction technique for large-scale systems. Unfortunately, the Lanczos method may produce an unstable partial realization for a given stable system. To remedy this situation, inexpensive implicit restarts are developed which can be employed to stabilize the Lanczos-generated model.
Canonical Correlation Analysis, Approximate Covariance Extension, and Identification of Stationary Time Series
 Automatica
, 1996
Abstract

Cited by 37 (17 self)
In this paper we analyze a class of state-space identification algorithms for time series, based on canonical correlation analysis, in the light of recent results on stochastic systems theory. In principle, these so-called "subspace methods" can be described as covariance estimation followed by stochastic realization. The methods offer the major advantage of converting the nonlinear parameter estimation phase in traditional ARMA model identification into the solution of a Riccati equation, but introduce at the same time some nontrivial mathematical problems related to positivity. The reason for this is that an essential part of the problem is equivalent to the well-known rational covariance extension problem. Therefore the usual deterministic arguments based on factorization of a Hankel matrix are not valid for generic data, something that is habitually overlooked in the literature. We demonstrate that several popular identification procedures based on the same principle may fail to produce a positive extension, unless some rather stringent assumptions are made which, in general, are not explicitly reported. In this paper the statistical problem of stochastic modeling from estimated covariances is phrased in the geometric language of stochastic realization theory. We review the basic ideas of stochastic realization theory in the context of identification, and discuss the concepts of stochastic balancing and of stochastic model reduction by principal subsystem truncation. The model reduction method of Desai and Pal, based on truncated balanced stochastic realizations, is partially justified, showing that the reduced system structure has a positive covariance sequence but is in general not balanced. As a byproduct of this analysis we obtain a t...
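The "covariance estimation followed by stochastic realization" step can be illustrated by a Ho-Kalman-style factorization of the covariance Hankel matrix. The sketch below (hypothetical helper name, exact covariances of a made-up model rather than estimates) recovers a realization; it is exactly this deterministic factorization that, applied to estimated covariances, carries no positivity guarantee:

```python
import numpy as np

def ho_kalman(cov, order):
    """Ho-Kalman-style realization from a (scalar) covariance sequence.

    cov[k] plays the role of c_{k+1} = C A^k G.  The Hankel matrix is
    factored by SVD into observability/reachability factors, and A is
    recovered from the shift structure.
    """
    k = len(cov) // 2
    H = np.array([[cov[i + j] for j in range(k)] for i in range(k)])
    U, s, Vt = np.linalg.svd(H)
    sq = np.sqrt(s[:order])
    Obs = U[:, :order] * sq                    # observability factor
    Rea = sq[:, None] * Vt[:order]             # reachability factor
    Ahat = np.linalg.pinv(Obs[:-1]) @ Obs[1:]  # shift invariance
    return Ahat, Obs[0], Rea[:, 0]

# exact covariances of a hypothetical 2nd-order model: c_{k+1} = C A^k G
A = np.array([[0.8, 0.2], [0.0, 0.5]])
C = np.array([1.0, 0.0])
G = np.array([1.0, 1.0])
cov = [C @ np.linalg.matrix_power(A, k) @ G for k in range(8)]
Ahat, Chat, Ghat = ho_kalman(cov, order=2)
# the recovered triple reproduces the whole covariance sequence
for k in range(8):
    assert abs(Chat @ np.linalg.matrix_power(Ahat, k) @ Ghat - cov[k]) < 1e-8
```

With sampled-data covariance estimates in place of the exact sequence, the factorization still runs, but nothing forces the resulting model to correspond to a genuine (positive) spectral density, which is the point the abstract makes.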
A complete parameterization of all positive rational extensions of a covariance sequence
 IEEE Trans. Automat. Control
, 1995
Abstract

Cited by 24 (20 self)
Abstract. In this paper we formalize the observation that filtering and interpolation induce complementary, or "dual", decompositions of the space of positive real rational functions of degree less than or equal to n. From this basic result about the geometry of the space of positive real functions, we are able to deduce two complementary sets of conclusions about positive rational extensions of a given partial covariance sequence. On the one hand, by viewing a certain fast filtering algorithm as a nonlinear dynamical system defined on this space, we are able to develop estimates on the asymptotic behavior of the Schur parameters of positive rational extensions. On the other hand, we are also able to provide a characterization of all positive rational extensions of a given partial covariance sequence. Indeed, motivated by its application to signal processing, speech processing and stochastic realization theory, this characterization is in terms of a complete parameterization using familiar objects from systems theory and proves a conjecture made by Georgiou. However, our basic result also enables us to analyze the robustness of this parameterization with respect to variations in the problem data. The methodology employed is a combination of complex analysis, geometry, linear systems and nonlinear dynamics.
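The Schur parameters mentioned here can be computed from a partial covariance sequence by the classical Levinson-Durbin recursion; a positive rational extension exists precisely when all of them are less than one in modulus. A minimal sketch (illustrative only, with a hypothetical function name, and not the paper's fast filtering algorithm):

```python
import numpy as np

def schur_parameters(cov):
    """Levinson-Durbin sketch: Schur (reflection) parameters
    gamma_1..gamma_n of a partial covariance sequence c_0, ..., c_n.
    A positive extension exists iff every |gamma_k| < 1."""
    c0, c = float(cov[0]), np.asarray(cov[1:], dtype=float)
    a = np.array([])   # current one-step predictor coefficients
    err = c0           # prediction-error variance
    gammas = []
    for k, ck in enumerate(c):
        # innovation of c_{k+1} given the predictor fitted to c_1..c_k
        delta = ck - (a @ c[:k][::-1] if k else 0.0)
        gamma = delta / err
        gammas.append(gamma)
        a = np.concatenate([a - gamma * a[::-1], [gamma]])
        err = err * (1.0 - gamma ** 2)   # shrinks while |gamma| < 1
    return gammas

# covariances of a stable AR(1) process x_t = 0.5 x_{t-1} + w_t: c_k = c_0 0.5^k
g = schur_parameters([0.5 ** k for k in range(5)])
assert all(abs(gk) < 1.0 for gk in g)   # a positive extension exists
```

The paper's contribution is the converse direction: parameterizing all positive rational extensions compatible with a given finite run of such parameters, not merely testing positivity.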
Padé Approximation Of LargeScale Dynamic Systems With Lanczos Methods
, 1994
Abstract

Cited by 18 (1 self)
The utility of Lanczos methods for the approximation of large-scale dynamical systems is considered. In particular, it is shown that the Lanczos method is a technique for yielding Padé approximants which has several advantages over more traditional explicit moment matching approaches. An extension of the Lanczos algorithm is developed for computing multipoint Padé approximations of descriptor systems. Keywords: Dynamic system, Padé approximation, Lanczos algorithm, model reduction. 1. Introduction. This paper explores the use of Lanczos techniques for the reduced-order modeling and simulation of large-scale, SISO dynamical systems. One can define such a system through the set of state space equations E ẋ(t) = Ax(t) + bu(t), y(t) = cx(t) + du(t). (1) The scalar functions u(t) and y(t) are the system's input and output while x(t) is the state vector of dimension n. For simplicity, the direct-coupling term, d, will be assumed to be zero. The system matrix, A ∈ R^{n×n} ...
Geometric methods for state space identification
 In Identification, Adaptation, Learning  The Science of Learning Models from Data, NATO ASI Series F
, 1996
Abstract

Cited by 8 (3 self)
The scope of identification theory is to construct algorithms for automatic model building from observed data. In these lectures we shall only discuss the case where the data are collected in one unrepeatable experiment and no preparation of the experiment is possible (i.e. we cannot choose the experimental
On some modifications of the Lanczos algorithm and the relation with Padé approximations
, 1995
Abstract

Cited by 5 (0 self)
In this paper we try to show the relations between the Lanczos algorithm and Padé approximations as used e.g. in identification and model reduction of dynamical systems. 1. Introduction. For simplicity we assume here that all systems are SISO, although some results do extend to the MIMO case. Let an nth-order dynamical system be described by ẋ = Ax + bu (1.1), y = cx + du (1.2), where A is a square matrix, b is a column vector, c is a row vector, and d is a scalar. It is well-known that the transfer function of this system, h(s) = c(sI − A)^{−1}b + d, has a Taylor expansion around s = ∞ that looks like h(s) = d + cb s^{−1} + cAb s^{−2} + cA²b s^{−3} + cA³b s^{−4} + … The coefficients m_{−i} of the powers s^{−i} thus satisfy m_0 = d and m_{−i} = cA^{i−1}b for i ≥ 1. For i ≥ 1 these are also called moments or Markov parameters of the system {A, b, c}. It follows already from the work of Hankel that the first 2n moments...
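The Lanczos-Padé connection described above can be checked numerically. The sketch below takes the simplest case (symmetric A with c = bᵀ, hypothetical function name): q Lanczos steps produce a tridiagonal T_q whose reduced model matches the first 2q moments, which is the Padé property. The general case in the paper uses the two-sided nonsymmetric Lanczos process.

```python
import numpy as np

def lanczos_pade(A, b, q):
    """Symmetric Lanczos sketch of the Pade connection.

    For symmetric A and c = b^T, the reduced model
    (T_q, ||b|| e_1, ||b|| e_1^T) matches the first 2q moments
    m_i = b^T A^i b of the original system."""
    n = len(b)
    beta0 = np.linalg.norm(b)
    V = np.zeros((n, q))
    alphas, betas = [], []
    V[:, 0] = b / beta0
    w = A @ V[:, 0]
    for j in range(q):
        alpha = V[:, j] @ w
        alphas.append(alpha)
        w = w - alpha * V[:, j]
        if j:
            w = w - betas[-1] * V[:, j - 1]
        w = w - V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # full reorthogonalization
        if j < q - 1:
            beta = np.linalg.norm(w)
            betas.append(beta)
            V[:, j + 1] = w / beta
            w = A @ V[:, j + 1]
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    return T, beta0

rng = np.random.default_rng(1)
n, q = 10, 3
M = rng.standard_normal((n, n))
A = M + M.T                         # symmetric test matrix
b = rng.standard_normal(n)
T, beta0 = lanczos_pade(A, b, q)
for i in range(2 * q):              # Pade property: 2q matched moments
    m = b @ np.linalg.matrix_power(A, i) @ b
    mr = beta0 ** 2 * np.linalg.matrix_power(T, i)[0, 0]
    assert abs(m - mr) <= 1e-8 * max(1.0, abs(m))
```

Matching 2q moments with a degree-q model is exactly what explicit moment matching achieves via a Hankel solve, but the Lanczos recurrence gets there without ever forming the notoriously ill-conditioned moment matrix.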
The Lanczos algorithm and Padé approximations
 Short Course, Benelux Meeting on Systems and Control
, 1995
Abstract

Cited by 4 (0 self)
Introduction. In these two lectures we try to show the relations between the Lanczos algorithm and Padé approximations as used e.g. in identification and model reduction of dynamical systems. These notes are based on material in the papers [10, 17, 11, 12], for which a lot of credit ought to be given to the respective coauthors. For simplicity we assume here that all systems are SISO, although some results do extend to the MIMO case. Let an nth-order dynamical system be described by ẋ = Ax + bu (1), y = cx + du (2), where A is a square matrix, b is a column vector, c is a row vector, and d is a scalar. It is well-known that the transfer function of this system is h(s) = c(sI − A)^{−1}b + d ...
Geometric Methods in Stochastic Realization and System Identification
 In CWI Quarterly special Issue on System Theory
, 1996
Abstract

Cited by 3 (2 self)
In this paper we discuss some recent advances in modeling and identification of stationary processes. We point out that identification of linear state-space models for stationary signals can be seen as stochastic realization of wide-sense stationary processes in an appropriate background Hilbert space. The geometric theory of stochastic realization developed in the last two decades plays an important role in this interpretation. Identification of models with exogenous inputs in the absence of feedback can also be formulated as a stochastic realization problem. We discuss procedures for constructing minimal state-space models in the presence of inputs, based on a generalization of stochastic realization theory for time series, and we discuss geometric procedures for identifying (generically) minimal state-space models with inputs. This approach leads to numerical linear algebraic algorithms which have been named "subspace methods" in the literature. It has important advantages over the traditional parametric optimization approach, since it attacks the dynamic model building problem directly by system-theoretic methods and leads to procedures which are more transparent and more structured than those traditionally used and found in the literature.