Results 1–10 of 148
The Quadratic Eigenvalue Problem
, 2001
Abstract

Cited by 156 (17 self)
We survey the quadratic eigenvalue problem, treating its many applications, its mathematical properties, and a variety of numerical solution techniques. Emphasis is given to exploiting both the structure of the matrices in the problem (dense, sparse, real, complex, Hermitian, skew-Hermitian) and the spectral properties of the problem. We classify numerical methods and catalogue available software. Key words: quadratic eigenvalue problem, eigenvalue, eigenvector, matrix, matrix polynomial, second-order differential equation, vibration, Millennium footbridge, overdamped system, gyroscopic system, linearization, backward error, pseudospectrum, condition number, Krylov methods, Arnoldi method, Lanczos method, Jacobi–Davidson method. AMS subject classifications: 65F30. Contents: 1 Introduction; 2 Applications of QEPs; 2.1 Second-order differential equations; 2.2 Vibration analysis of structural systems ...
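The linearization technique central to this survey — rewriting the QEP (λ²M + λC + K)x = 0 as a generalized eigenvalue problem — can be sketched with a first companion form. The matrices M, C, K below are made-up illustrative data, not from any application in the survey:

```python
import numpy as np
from scipy.linalg import eig

# Illustrative QEP (lambda^2 M + lambda C + K) x = 0; M, C, K are made up here
n = 3
rng = np.random.default_rng(0)
M = np.diag([1.0, 2.0, 3.0])
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

# First companion linearization: A z = lambda B z with z = [x; lambda*x]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K, -C]])
B = np.block([[np.eye(n), np.zeros((n, n))],
              [np.zeros((n, n)), M]])

vals, vecs = eig(A, B)   # the 2n eigenvalues of the quadratic problem

# Verify one computed eigenpair against the original quadratic: P(lambda) x ~ 0
lam, x = vals[0], vecs[:n, 0]
residual = np.linalg.norm((lam**2 * M + lam * C + K) @ x)
print(residual)
```

Different companion forms of the same QEP can behave very differently numerically, which is part of why the survey emphasizes structure.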
Krylov Projection Methods For Model Reduction
, 1997
Abstract

Cited by 124 (3 self)
This dissertation focuses on efficiently forming reduced-order models for large, linear dynamic systems. Projections onto unions of Krylov subspaces lead to a class of reduced-order models known as rational interpolants. The cornerstone of this dissertation is a collection of theory relating Krylov projection to rational interpolation. Based on this theoretical framework, three algorithms for model reduction are proposed. The first algorithm, dual rational Arnoldi, is a numerically reliable approach involving orthogonal projection matrices. The second, rational Lanczos, is an efficient generalization of existing Lanczos-based methods. The third, rational power Krylov, avoids orthogonalization and is suited for parallel or approximate computations. The performance of the three algorithms is compared via a combination of theory and examples. Independent of the precise algorithm, a host of supporting tools are also developed to form a complete model-reduction package. Techniques for choosing the matching frequencies, estimating the modeling error, ensuring the model's stability, treating multiple-input multiple-output systems, implementing parallelism, and avoiding a need for exact factors of large matrix pencils are all examined to various degrees.
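As an illustrative sketch (not one of the dissertation's three algorithms), the basic idea of Krylov-projection model reduction can be demonstrated with a one-sided projection: a reduced model built from an order-m Krylov subspace matches the first m Markov parameters of the full system. All matrices here are random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(4)
N, m = 20, 5
A = rng.standard_normal((N, N))
b = rng.standard_normal(N)
c = rng.standard_normal(N)

# Orthonormal basis V of the Krylov subspace span{b, Ab, ..., A^{m-1} b}
Kry = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(m)])
V, _ = np.linalg.qr(Kry)

# One-sided Krylov projection: reduced-order model of dimension m
Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c

# The reduced model reproduces the first m Markov parameters c^T A^k b
full = [c @ np.linalg.matrix_power(A, k) @ b for k in range(m)]
red = [cr @ np.linalg.matrix_power(Ar, k) @ br for k in range(m)]
print(np.allclose(full, red))  # True
```

Forming explicit powers of A is numerically poor for large m; practical codes build the basis with (rational) Arnoldi or Lanczos recurrences instead.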
Numerical solution of the stable, nonnegative definite Lyapunov equation
 IMA J. Numer. Anal
, 1982
Abstract

Cited by 88 (2 self)
We discuss the numerical solution of the Lyapunov equation
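A minimal numerical sketch of the problem class — the stable continuous-time Lyapunov equation A X + X Aᵀ = −Q with Q positive semidefinite — using SciPy's general-purpose solver rather than this paper's method (which computes a Cholesky factor of the solution directly):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stable A (eigenvalues in the open left half-plane) and Q >= 0
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
Q = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# Solve A X + X A^T = -Q; for stable A and Q >= 0 the solution X is >= 0
X = solve_continuous_lyapunov(A, -Q)

print(np.allclose(A @ X + X @ A.T, -Q))   # True
print(np.all(np.linalg.eigvalsh(X) >= -1e-12))  # True: X is positive semidefinite
```

Sign conventions for the right-hand side vary between references; SciPy solves A X + X Aᴴ = Q, so −Q is passed here.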
The periodic Schur decomposition. Algorithms and applications
 In Proc. SPIE Conference
, 1992
Abstract

Cited by 82 (11 self)
In this paper we derive a unitary eigendecomposition for a sequence of matrices which we call the periodic Schur decomposition. We prove its existence and discuss its application to the solution of periodic difference equations arising in control. We show how the classical QR algorithm can be extended to provide a stable algorithm for computing this generalized decomposition. We apply the decomposition also to cyclic matrices and two-point boundary value problems. Key words: numerical algorithms, linear algebra, periodic systems, K-cyclic matrices, two-point boundary value problems. 1 Introduction. In the study of time-varying control systems in (generalized) state-space form

  E_k z_{k+1} = F_k z_k + G_k u_k
  y_k = H_k z_k + J_k u_k      (1)

the periodic coefficients case has always been considered the simplest extension of the time-invariant case. Here the coefficients satisfy, for some K > 0, the periodicity conditions E_k = E_{k+K}, F_k = F_{k+K}, G_k ...
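The cyclic matrices mentioned in the abstract can be illustrated numerically: for period K = 2, the squared eigenvalues of the block-cyclic embedding of a matrix sequence reproduce the eigenvalues of the monodromy product. A small NumPy check (this demonstrates the embedding, not the paper's extended QR algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A1 = rng.standard_normal((n, n))
A2 = rng.standard_normal((n, n))

# Block-cyclic embedding of the period-2 sequence (a K-cyclic matrix, K = 2)
C = np.block([[np.zeros((n, n)), A2],
              [A1, np.zeros((n, n))]])

mu = np.linalg.eigvals(C)          # eigenvalues of the cyclic matrix
lam = np.linalg.eigvals(A2 @ A1)   # eigenvalues of the monodromy product

# C^2 = diag(A2 A1, A1 A2), so each mu^2 matches an eigenvalue of the product
close = all(np.min(np.abs(mu**2 - l)) < 1e-8 for l in lam)
print(close)  # True
```

The periodic Schur decomposition computes this spectral information without ever forming the (possibly ill-conditioned) product explicitly.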
A Numerically Stable, Structure Preserving Method for Computing the Eigenvalues of Real Hamiltonian or Symplectic Pencils
 Numer. Math
, 1996
Abstract

Cited by 68 (31 self)
A new method is presented for the numerical computation of the generalized eigenvalues of real Hamiltonian or symplectic pencils and matrices. The method is strongly backward stable, i.e., it is numerically backward stable and preserves the structure (i.e., Hamiltonian or symplectic). In the case of a Hamiltonian matrix the method is closely related to the square-reduced method of Van Loan, but in contrast to that method, which may suffer from a loss of accuracy of order √ε, where ε is the machine precision, the new method computes the eigenvalues to full possible accuracy. Keywords: eigenvalue problem, Hamiltonian pencil (matrix), symplectic pencil (matrix), skew-Hamiltonian matrix. AMS subject classification: 65F15. 1 Introduction. The eigenproblem for Hamiltonian and symplectic matrices has received a lot of attention in the last 25 years, since the landmark papers of Laub [13] and Paige/Van Loan [20]. The reason for this is the importance of this problem in many applications in c...
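The structure such methods preserve implies the well-known symmetry of Hamiltonian spectra under λ → −λ, which a small NumPy experiment can verify (this illustrates the structure only, not the paper's algorithm; the block matrices are random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n)); G = G + G.T   # symmetric
Q = rng.standard_normal((n, n)); Q = Q + Q.T   # symmetric

# H = [[A, G], [Q, -A^T]] with symmetric G, Q is Hamiltonian: (J H)^T = J H
H = np.block([[A, G],
              [Q, -A.T]])

lam = np.linalg.eigvals(H)
# The spectrum is symmetric under lambda -> -lambda: -lam is also an eigenvalue
sym = all(np.min(np.abs(lam + l)) < 1e-8 for l in lam)
print(sym)  # True
```

A general-purpose QR solver only enjoys this pairing up to rounding error; structure-preserving methods like the one in this paper enforce it exactly.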
Backward Error and Condition of Polynomial Eigenvalue Problems
 Linear Algebra Appl
, 1999
Abstract

Cited by 51 (12 self)
We develop normwise backward errors and condition numbers for the polynomial eigenvalue problem. The standard way of dealing with this problem is to reformulate it as a generalized eigenvalue problem (GEP). For the special case of the quadratic eigenvalue problem (QEP), we show that solving the QEP by applying the QZ algorithm to a corresponding GEP can be backward unstable. The QEP can be reformulated as a GEP in many ways. We investigate the sensitivity of a given eigenvalue to perturbations in each of the GEP formulations and identify which formulations are to be preferred for large and small eigenvalues, respectively. Key words: polynomial eigenvalue problem, quadratic eigenvalue problem, generalized eigenvalue problem, backward error, condition number. 1 Introduction. We are concerned with backward error analysis and conditioning for the nonlinear eigenvalue problem P(λ)x = 0 (1.1), where P(λ) is a matrix whose elements are polynomials in a scalar λ. We write P in the form P(λ) = ...
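The kind of measurement studied here can be sketched concretely: solve a QEP through a companion GEP with a QZ-based solver, then evaluate the normwise QEP backward error of each computed eigenpair, η = ‖P(λ)x‖ / ((|λ|²‖A₂‖ + |λ|‖A₁‖ + ‖A₀‖)‖x‖). The coefficient names A0, A1, A2 and the random data are illustrative:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(3)
n = 3
A2 = rng.standard_normal((n, n))   # coefficient of lambda^2
A1 = rng.standard_normal((n, n))   # coefficient of lambda
A0 = rng.standard_normal((n, n))   # constant coefficient

# QZ route: linearize (lambda^2 A2 + lambda A1 + A0) x = 0 to a GEP and solve
L = np.block([[np.zeros((n, n)), np.eye(n)], [-A0, -A1]])
M = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), A2]])
vals, vecs = eig(L, M)

def backward_error(lam, x):
    """Normwise QEP backward error of the approximate eigenpair (lam, x)."""
    num = np.linalg.norm((lam**2 * A2 + lam * A1 + A0) @ x)
    den = (abs(lam)**2 * np.linalg.norm(A2, 2)
           + abs(lam) * np.linalg.norm(A1, 2)
           + np.linalg.norm(A0, 2)) * np.linalg.norm(x)
    return num / den

etas = [backward_error(vals[i], vecs[:n, i]) for i in range(2 * n)]
print(max(etas))
```

For well-scaled coefficients like these, η stays near machine precision; the paper's point is that for badly scaled problems the QZ route can leave η far larger than the GEP's own backward error.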
Schur Parameter Pencils For The Solution Of The Unitary Eigenproblem
, 1991
Abstract

Cited by 40 (5 self)
Let U − λV be an n × n pencil with unitary matrices U and V. An algorithm is presented which reduces U and V simultaneously to unitary block diagonal matrices G_o = Q^H U P and G_e = Q^H V P with block size at most two. It is an O(n^3) process using Householder eliminations and it is backward stable. In the special case V = I the block diagonal matrices G_o, G_e^H can be normalized such that their entries are just the Schur parameters of the Hessenberg condensed form of U. We call G_o − λG_e a Schur parameter pencil. It can also be derived from U, V by a Lanczos-like process. For the solution of the eigenvalue problem for G_o − λG_e a QR-type algorithm can be developed based on this unitary reduction of a pencil U − λV to a Schur parameter pencil. The condensed form is preserved throughout the process. Each iteration step needs only O(n) operations. This method of solving the unitary eigenvalue problem seems to be the closest possible analogy to the QR meth...
A stable numerical method for inverting shape from moments
 SIAM J. Sci. Comput
, 1999
Abstract

Cited by 32 (8 self)
We derive a stable technique, based upon matrix pencils, for the reconstruction of (or approximation by) polygonal shapes from moments. We point out that this problem can be considered the dual of 2-D numerical quadrature over polygonal domains. An analysis of the sensitivity of the problem is presented along with some numerical examples illustrating the relevant points. Finally, an application to the problem of gravimetry is explored where the shape of a gravitationally anomalous region is to be recovered from measurements of its exterior gravitational field. Key words: gravimetry
Structured backward error and condition of generalized eigenvalue problems
 SIAM J. Matrix Anal. Appl
, 1998
Abstract

Cited by 30 (4 self)
Backward errors and condition numbers are defined and evaluated for eigenvalues and eigenvectors of generalized eigenvalue problems. Both normwise and componentwise measures are used. Unstructured problems are considered first, and then the basic definitions are extended so that linear structure in the coefficient matrices (for example, Hermitian, Toeplitz, Hamiltonian, or band structure) is preserved by the perturbations.
QRlike Algorithms for Eigenvalue Problems
 SIAM J. Matrix Anal. Appl
, 2000
Abstract

Cited by 26 (11 self)
In the year 2000 the dominant method for solving matrix eigenvalue problems is still the QR algorithm. This paper discusses the family of GR algorithms, with emphasis on the QR algorithm. Included are historical remarks, an outline of what GR algorithms are and why they work, and descriptions of the latest, highly parallelizable, versions of the QR algorithm. Now that we know how to parallelize it, the QR algorithm seems likely to retain its dominance for many years to come. 1. Introduction. Since the early 1960s the standard algorithms for calculating the eigenvalues and (optionally) eigenvectors of "small" matrices have been the QR algorithm [29] and its variants. This is still the case in the year 2000 and is likely to remain so for many years to come. For us a small matrix is one that can be stored in the conventional way in a computer's main memory and whose complete eigenstructure can be calculated in a matter of minutes without exploiting whatever sparsity the matrix m...
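The plain unshifted QR iteration at the root of the GR family can be sketched in a few lines (practical implementations add Hessenberg reduction, shifts, and deflation, which this illustrative version omits):

```python
import numpy as np

def qr_iterate(A, steps=200):
    """Unshifted QR iteration: factor A_k = Q_k R_k, then set A_{k+1} = R_k Q_k.
    Each step is a similarity transform, so the eigenvalues are preserved."""
    A = np.array(A, dtype=float)
    for _ in range(steps):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return A

# Symmetric test matrix with distinct eigenvalues: iterates tend to diagonal form
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
T = qr_iterate(A)

print(np.diag(T))  # approximate eigenvalues on the diagonal
print(np.allclose(np.sort(np.diag(T)), np.sort(np.linalg.eigvalsh(A))))  # True
```

Replacing the QR factorization with another "GR" factorization (e.g., LR) gives the broader family the paper surveys, trading stability guarantees for other advantages.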