Results 1-10 of 106
The Quadratic Eigenvalue Problem
, 2001
Abstract

Cited by 174 (18 self)
We survey the quadratic eigenvalue problem, treating its many applications, its mathematical properties, and a variety of numerical solution techniques. Emphasis is given to exploiting both the structure of the matrices in the problem (dense, sparse, real, complex, Hermitian, skew-Hermitian) and the spectral properties of the problem. We classify numerical methods and catalogue available software.
Key words: quadratic eigenvalue problem, eigenvalue, eigenvector, matrix, matrix polynomial, second-order differential equation, vibration, Millennium footbridge, overdamped system, gyroscopic system, linearization, backward error, pseudospectrum, condition number, Krylov methods, Arnoldi method, Lanczos method, Jacobi-Davidson method
AMS subject classifications: 65F30
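The linearization named among the key words is the standard route from a quadratic to a generalized eigenproblem: (λ²M + λC + K)x = 0 becomes Az = λBz of twice the dimension. A minimal sketch under assumed toy matrices (the random C, K and identity M here are hypothetical, chosen only for illustration):

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical small QEP: (lambda^2 M + lambda C + K) x = 0
rng = np.random.default_rng(0)
n = 4
M = np.eye(n)                     # identity mass matrix keeps all eigenvalues finite
C = rng.standard_normal((n, n))   # damping matrix
K = rng.standard_normal((n, n))   # stiffness matrix

# First companion linearization: A z = lambda B z with z = [x; lambda x]
Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[Z, I], [-K, -C]])
B = np.block([[I, Z], [Z, M]])

lam, V = eig(A, B)
x = V[:n, 0]                      # top block of z recovers the QEP eigenvector
res = np.linalg.norm((lam[0]**2 * M + lam[0] * C + K) @ x) / np.linalg.norm(x)
print(res)
```

The small relative residual confirms that each eigenpair of the 2n-dimensional pencil yields an eigenpair of the original quadratic problem.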
ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems by Implicitly Restarted Arnoldi Methods.
, 1997
Abstract

Cited by 160 (17 self)
This document is intended to provide a cursory overview of the Implicitly Restarted Arnoldi/Lanczos Method that this software is based upon. The goal is to provide some understanding of the underlying algorithm, its expected behavior, additional references, and the capabilities as well as the limitations of the software.
A Shifted Block Lanczos Algorithm For Solving Sparse Symmetric Generalized Eigenproblems
, 1994
Abstract

Cited by 90 (7 self)
An "industrial strength" algorithm for solving sparse symmetric generalized eigenproblems is described. The algorithm has its foundations in known techniques in solving sparse symmetric eigenproblems, notably the spectral transformation of Ericsson and Ruhe and the block Lanczos algorithm. However, the combination of these two techniques is not trivial; there are many pitfalls awaiting the unwary implementor. The focus of this paper is on identifying those pitfalls and avoiding them, leading to a "bombproof" algorithm that can live as a black box eigensolver inside a large applications code. The code that results comprises a robust shift selection strategy and a block Lanczos algorithm that is a novel combination of new techniques and extensions of old techniques.
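The Ericsson-Ruhe spectral transformation at the heart of this algorithm maps K x = λ M x to (K - σM)⁻¹ M x = θ x with θ = 1/(λ - σ), so Lanczos converges fastest to the eigenvalues nearest the shift σ. A minimal sketch using SciPy's ARPACK wrapper, where passing sigma triggers exactly this shift-invert mode (the 1-D stiffness/mass pair below is a hypothetical example, not from the paper):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from scipy.linalg import eigh

# Hypothetical sparse symmetric pencil: 1-D finite-element stiffness K and mass M
n = 200
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
M = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format='csc') / 6.0

sigma = 0.5  # shift chosen near the eigenvalues of interest
# sigma enables shift-invert: Lanczos runs on (K - sigma*M)^{-1} M
vals, vecs = eigsh(K, k=4, M=M, sigma=sigma, which='LM')

# Cross-check against a dense solve of the same pencil
dense = eigh(K.toarray(), M.toarray(), eigvals_only=True)
nearest = np.sort(dense[np.argsort(np.abs(dense - sigma))[:4]])
print(np.allclose(np.sort(vals), nearest))
```

The four computed eigenvalues agree with the four dense eigenvalues closest to σ, illustrating why the transformation is attractive: one sparse factorization of K - σM buys rapid convergence to an interior cluster.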
Diffusion Wavelets
, 2004
Abstract

Cited by 79 (12 self)
We present a multiresolution construction for efficiently computing, compressing and applying large powers of operators that have high powers with low numerical rank. This allows the fast computation of functions of the operator, notably the associated Green's function, in compressed form, and their fast application. Classes of operators satisfying these conditions include diffusion-like operators, in any dimension, on manifolds, graphs, and in non-homogeneous media. In this case our construction can be viewed as a far-reaching generalization of Fast Multipole Methods, achieved through a different point of view, and of the nonstandard wavelet representation of Calderón-Zygmund and pseudodifferential operators, achieved through a different multiresolution analysis adapted to the operator. We show how the dyadic powers of an operator can be used to induce a multiresolution analysis, as in classical Littlewood-Paley and wavelet theory, and we show how to construct, with fast and stable algorithms, scaling function and wavelet bases associated to this multiresolution analysis, together with the corresponding downsampling operators, and use them to compress the corresponding powers of the operator. This allows multiscale signal processing to be extended to general spaces (such as manifolds and graphs) in a very natural way, with corresponding fast algorithms.
Structure-Preserving Methods for Computing Eigenpairs of Large Sparse Skew-Hamiltonian/Hamiltonian Pencils
 SIAM J. Sci. Comput
, 2000
Abstract

Cited by 68 (18 self)
We study large, sparse generalized eigenvalue problems for matrix pencils where one of the matrices is Hamiltonian and the other skew-Hamiltonian. Problems of this form arise in the numerical simulation of elastic deformation of anisotropic materials, in structural mechanics, and in the linear-quadratic control problem for partial differential equations. We develop a structure-preserving skew-Hamiltonian, isotropic, implicitly restarted shift-and-invert Arnoldi algorithm (SHIRA). Several numerical examples demonstrate the superiority of SHIRA over a competing unstructured method.
Model reduction of state-space systems via an Implicitly Restarted Lanczos method
 Numer. Algorithms
, 1996
Abstract

Cited by 60 (8 self)
The nonsymmetric Lanczos method has recently received significant attention as a model reduction technique for large-scale systems. Unfortunately, the Lanczos method may produce an unstable partial realization for a given stable system. To remedy this situation, inexpensive implicit restarts are developed which can be employed to stabilize the Lanczos-generated model.
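The Krylov projection idea behind such model reduction can be sketched as follows. This is a hypothetical toy using a one-sided Arnoldi projection rather than the paper's nonsymmetric Lanczos, and with randomly chosen system matrices: projecting (A, b, c) onto a Krylov subspace built with A⁻¹ matches moments of the transfer function H(s) = c(sI - A)⁻¹b at s = 0.

```python
import numpy as np

# Hypothetical stable state-space system x' = Ax + bu, y = cx
rng = np.random.default_rng(3)
n, m = 200, 12
A = -np.diag(np.linspace(1.0, 50.0, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Arnoldi on A^{-1} starting from A^{-1} b: matches m moments of H at s = 0
Ainv = np.linalg.inv(A)
V = np.zeros((n, m))
v = Ainv @ b
V[:, 0] = v / np.linalg.norm(v)
for j in range(1, m):
    w = Ainv @ V[:, j - 1]
    for i in range(j):                    # modified Gram-Schmidt orthogonalization
        w -= (V[:, i] @ w) * V[:, i]
    V[:, j] = w / np.linalg.norm(w)

# Reduced m-state realization by one-sided projection
Am, bm, cm = V.T @ A @ V, V.T @ b, V.T @ c

def H(s, A_, b_, c_):
    """Transfer function c (sI - A)^{-1} b."""
    return c_ @ np.linalg.solve(s * np.eye(A_.shape[0]) - A_, b_)

err = abs(H(0.0, A, b, c) - H(0.0, Am, bm, cm)) / abs(H(0.0, A, b, c))
print(err)
```

Because A⁻¹b lies in the subspace, the first moment is matched exactly in exact arithmetic, so the reduced model reproduces H(0) to machine precision. The paper's concern is the step this sketch omits: the projected Am can be unstable even when A is stable, which is what the implicit restarts are designed to repair.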
An Implicitly Restarted Lanczos Method for Large Symmetric ...
 ETNA
, 1994
Abstract

Cited by 56 (13 self)
The Lanczos process is a well-known technique for computing a few, say k, eigenvalues and associated eigenvectors of a large symmetric n × n matrix. However, loss of orthogonality of the computed Krylov subspace basis can reduce the accuracy of the computed approximate eigenvalues. In the implicitly restarted Lanczos method studied in the present paper, this problem is addressed by fixing the number of steps in the Lanczos process at a prescribed value, k + p, where p typically is not much larger, and may be smaller, than k. Orthogonality of the k + p basis vectors of the Krylov subspace is secured by reorthogonalizing these vectors when necessary. The implicitly restarted Lanczos method exploits the fact that the residual vector obtained by the Lanczos process is a function of the initial Lanczos vector. The method updates the initial Lanczos vector through an iterative scheme whose purpose is to determine an initial vector such that the associated residual vector is tiny. ...
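In ARPACK-style implementations, the fixed subspace dimension k + p described above corresponds to the ncv parameter. A minimal sketch with SciPy's wrapper (the diagonal test matrix is hypothetical, chosen so the spectrum is known):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n, k, p = 500, 5, 7
A = sp.diags(np.arange(1.0, n + 1))   # known spectrum: 1, 2, ..., n

# ncv = k + p fixes the Lanczos basis size maintained between implicit restarts
vals, _ = eigsh(A, k=k, ncv=k + p, which='LA')
print(np.sort(vals))
```

With only a 12-vector basis, the method still resolves the k = 5 largest eigenvalues (496 through 500); the restarts compensate for the deliberately small subspace, which is the storage-saving point of the approach.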
Rational Krylov: A Practical Algorithm for Large Sparse Nonsymmetric Matrix Pencils
 SIAM J. Sci. Comput
, 1998
Abstract

Cited by 50 (0 self)
The Rational Krylov algorithm computes eigenvalues and eigenvectors of a regular, not necessarily symmetric, matrix pencil. It is a generalization of the shift-and-invert Arnoldi algorithm in which several factorizations with different shifts are used in one run. It computes an orthogonal basis and a small Hessenberg pencil; the eigensolution of the Hessenberg pencil approximates the solution of the original pencil. Different types of Ritz values and harmonic Ritz values are described and compared. Periodic purging of uninteresting directions reduces the size of the basis and makes it possible to obtain many linearly independent eigenvectors and principal vectors for pencils with multiple eigenvalues. Relations to iterative methods are established. Results are reported for two large test examples: one is a symmetric pencil coming from a finite-element approximation of a membrane, the other a nonsymmetric matrix modeling an idealized aircraft stability problem.
On restarting the Arnoldi method for large nonsymmetric eigenvalue problems
 Mathematics of Computation
, 1996
Abstract

Cited by 43 (9 self)
The Arnoldi method computes eigenvalues of large nonsymmetric matrices. Restarting is generally needed to reduce storage requirements and orthogonalization costs. However, restarting slows down convergence and makes the choice of the new starting vector difficult when several eigenvalues are desired. We analyze several approaches to restarting and show why Sorensen's implicit QR approach is generally far superior to the others: it combines Ritz vectors in precisely the right way to form an effective new starting vector. A new method for restarting Arnoldi is also presented; it is mathematically equivalent to the Sorensen approach but has additional uses.
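The simplest of the restarting approaches analyzed here, explicit restarting with the wanted Ritz vector, can be sketched as follows (a hypothetical toy for a single dominant eigenvalue; Sorensen's implicit QR restart, which the paper argues is superior, is what ARPACK implements):

```python
import numpy as np

def arnoldi(A, v, m):
    """m steps of Arnoldi: A V_m = V_{m+1} H, with H (m+1) x m Hessenberg."""
    n = A.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # orthogonalize against the basis
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Hypothetical nonsymmetric matrix with a well-separated rightmost eigenvalue
rng = np.random.default_rng(2)
n, m = 300, 20
A = np.diag(np.linspace(1.0, 100.0, n)) + 0.001 * rng.standard_normal((n, n))

v = rng.standard_normal(n)
for _ in range(15):                       # explicit restart cycles
    V, H = arnoldi(A, v, m)
    theta, Y = np.linalg.eig(H[:m, :m])
    top = np.argmax(theta.real)           # rightmost Ritz value
    v = (V[:, :m] @ Y[:, top]).real       # restart with the wanted Ritz vector

ritz = theta.real[top]
exact = np.max(np.linalg.eigvals(A).real)
print(abs(ritz - exact))
```

For several wanted eigenvalues, this single-vector restart runs into exactly the difficulty the abstract describes: there is no obvious way to combine the Ritz vectors, which is the gap the implicit QR approach closes.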
Locking and Restarting Quadratic Eigenvalue Solvers
, 1999
Abstract

Cited by 30 (4 self)
This paper studies the solution of quadratic eigenvalue problems by the quadratic residual iteration method. The focus is on applications arising from finite-element simulations in acoustics. One approach is the shift-invert Arnoldi method applied to the linearized problem. When more than one eigenvalue is wanted, it is advisable to use locking or deflation of converged eigenvectors (or Schur vectors). In order to avoid unlimited growth of the subspace dimension, one can restart the method by purging unwanted eigenvectors (or Schur vectors). Both locking and restarting use the partial Schur form. The disadvantage of this approach is that the dimension of the linearized problem is twice that of the quadratic problem. The quadratic residual iteration and Jacobi-Davidson methods solve the quadratic problem directly; unfortunately, the Schur form is then not defined, nor are locking and restarting. This paper shows a link between methods for solving quadratic eigenvalue problems and the linearized problem. It aims to combine the benefits of the quadratic and the linearized approaches by employing a locking and restarting scheme, based on the Schur form of the linearized problem, in quadratic residual iteration and Jacobi-Davidson. Numerical experiments illustrate quadratic residual iteration and Jacobi-Davidson for computing the linear Schur form, and a comparison is made with the shift-invert Arnoldi method.