Results 1–10 of 73
A Jacobi–Davidson Iteration Method for Linear Eigenvalue Problems
 SIAM J. Matrix Anal. Appl., 2000
Abstract

Cited by 63 (6 self)
In this paper we propose a new method for the iterative computation of a few of the extremal eigenvalues of a symmetric matrix and their associated eigenvectors. The method is based on an old and almost unknown method of Jacobi. Jacobi's approach, combined with Davidson's method, leads to a new method that has improved convergence properties and that may be used for general matrices. We also propose a variant of the new method that may be useful for the computation of nonextremal eigenvalues as well. Key words. eigenvalues and eigenvectors, Davidson's method, Jacobi iterations, harmonic Ritz values AMS subject classifications. 65F15, 65N25 PII. S0036144599363084 1. Introduction. Suppose we want to compute one or more eigenvalues and their corresponding eigenvectors of the n × n matrix A. Several iterative methods are available: Jacobi's diagonalization method [9], [23], the power method [9], the method of Lanczos [13], [23], Arnoldi's method [1], [26], and Davidson's method [4], ...
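The core of the method described here is a Davidson-style search subspace expanded by Jacobi's orthogonal correction. The following is a minimal, hedged Python sketch of that idea for the largest eigenvalue of a small symmetric matrix; the test matrix, iteration limit, and tolerances are our own illustrative choices, and the correction equation is solved exactly with a dense least-squares solve rather than iteratively as one would in practice.

```python
import numpy as np

# Minimal Jacobi-Davidson sketch for the largest eigenvalue of a small
# symmetric matrix (illustrative only; sizes and tolerances are arbitrary).
rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                         # make A symmetric

u = rng.standard_normal(n)
u /= np.linalg.norm(u)
V = u[:, None]                            # Davidson-style search subspace

for _ in range(n - 1):
    # Rayleigh-Ritz extraction on the current subspace
    w, S = np.linalg.eigh(V.T @ A @ V)
    theta, s = w[-1], S[:, -1]            # largest Ritz pair
    u = V @ s
    r = A @ u - theta * u                 # eigenvalue residual
    if np.linalg.norm(r) < 1e-12:
        break
    # Jacobi-Davidson correction equation, solved exactly here:
    #   (I - u u^T)(A - theta I)(I - u u^T) t = -r,   t orthogonal to u
    P = np.eye(n) - np.outer(u, u)
    t = np.linalg.lstsq(P @ (A - theta * np.eye(n)) @ P, -r, rcond=None)[0]
    t -= V @ (V.T @ t)                    # orthogonalize against V (twice,
    t -= V @ (V.T @ t)                    # for numerical safety)
    nt = np.linalg.norm(t)
    if nt < 1e-12:                        # subspace saturated
        break
    V = np.hstack([V, (t / nt)[:, None]])

# theta now approximates the largest eigenvalue of A
```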
Recent computational developments in Krylov subspace methods for linear systems
 Numer. Linear Algebra Appl., 2007
Abstract

Cited by 50 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
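All of the restarted variants surveyed here build on plain restarted GMRES. As a point of reference (our own minimal example, not any specific method from the survey), SciPy exposes the restart length directly:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Plain restarted GMRES, the baseline that augmented/deflated/flexible
# variants improve on.  Model problem: 1-D Laplacian (tridiagonal SPD).
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

x, info = gmres(A, b, restart=30, maxiter=500)   # restart cycle length 30
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
# info == 0 signals convergence to the solver's default tolerance
```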
Theory of inexact Krylov subspace methods and applications to scientific computing
, 2002
Abstract

Cited by 48 (6 self)
Abstract. We provide a general framework for the understanding of inexact Krylov subspace methods for the solution of symmetric and nonsymmetric linear systems of equations, as well as for certain eigenvalue calculations. This framework allows us to explain the empirical results reported in a series of CERFACS technical reports by Bouras, Frayssé, and Giraud in 2000. Furthermore, assuming exact arithmetic, our analysis can be used to produce computable criteria to bound the inexactness of the matrix–vector multiplication in such a way as to maintain the convergence of the Krylov subspace method. The theory developed is applied to several problems including the solution of Schur complement systems, linear systems which depend on a parameter, and eigenvalue problems. Numerical experiments for some of these scientific applications are reported.
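A hedged sketch of the setting this framework covers: the Krylov method sees only an approximate matrix–vector product. In the toy example below (matrix, sizes, and eps are our own illustration choices) the product is perturbed by noise of fixed relative size eps; the paper's criteria instead allow the perturbation to grow as the residual falls.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Inexact Krylov toy example: GMRES sees only a perturbed matrix-vector
# product of relative size eps.
rng = np.random.default_rng(1)
n = 100
A = np.diag(np.linspace(1.0, 10.0, n))            # well-conditioned SPD
b = rng.standard_normal(n)
eps = 1e-8                                        # relative matvec error

def inexact_matvec(v):
    noise = rng.standard_normal(n)
    noise *= eps * np.linalg.norm(v) / np.linalg.norm(noise)
    return A @ v + noise                          # exact product + O(eps)

x, info = gmres(LinearOperator((n, n), matvec=inexact_matvec), b,
                restart=50, maxiter=100)
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
# the attainable true residual is limited by eps, but convergence down to
# the solver's default tolerance (well above eps here) is unaffected
```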
Rational Krylov, A Practical Algorithm For Large Sparse Nonsymmetric Matrix Pencils
 SIAM J. Sci. Comput., 1998
Abstract

Cited by 46 (0 self)
The Rational Krylov algorithm computes eigenvalues and eigenvectors of a regular, not necessarily symmetric, matrix pencil. It is a generalization of the shifted and inverted Arnoldi algorithm in which several factorizations with different shifts are used in one run. It computes an orthogonal basis and a small Hessenberg pencil; the eigensolution of the Hessenberg pencil approximates the solution of the original pencil. Different types of Ritz values and harmonic Ritz values are described and compared. Periodic purging of uninteresting directions reduces the size of the basis and makes it possible to get many linearly independent eigenvectors and principal vectors for pencils with multiple eigenvalues. Relations to iterative methods are established. Results are reported for two large test examples: one is a symmetric pencil coming from a finite element approximation of a membrane, the other a nonsymmetric matrix modeling an idealized aircraft stability problem.
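The shifted and inverted Arnoldi algorithm that Rational Krylov generalizes (one fixed shift rather than several per run) is available in SciPy through the `sigma` argument of `eigs`, which factors (A − σI) once and runs Arnoldi on its inverse. A toy diagonal matrix keeps the expected answer obvious:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

# Shift-and-invert Arnoldi with a single fixed shift sigma; the Rational
# Krylov algorithm uses several such factorizations in one run.
n = 500
A = diags(np.arange(1.0, n + 1), 0).tocsc()       # eigenvalues 1, 2, ..., n
vals = eigs(A, k=3, sigma=100.3, return_eigenvectors=False)
# the three eigenvalues nearest the shift 100.3 are 99, 100, 101
```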
GMRES with deflated restarting
 SIAM J. Sci. Comput
Abstract

Cited by 41 (8 self)
Abstract. A modification is given of the GMRES iterative method for nonsymmetric systems of linear equations. The new method deflates eigenvalues using Wu and Simon’s thick restarting approach. It has the efficiency of implicit restarting, but is simpler and does not have the same numerical concerns. The deflation of small eigenvalues can greatly improve the convergence of restarted GMRES. Also, it is demonstrated that using harmonic Ritz vectors is important, because then the whole subspace is a Krylov subspace that contains certain important smaller subspaces.
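GMRES-DR itself is involved; as a hedged stand-in showing *why* deflating small eigenvalues helps, the sketch below removes three tiny eigenvalues with an explicit spectral preconditioner (our own construction using exactly known eigenpairs, not Morgan's algorithm, which deflates inside the restart) and counts inner iterations with and without it.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

# Why deflation helps: three tiny eigenvalues slow restarted GMRES badly.
n = 400
d = np.concatenate([[1e-3, 2e-3, 5e-3], np.linspace(1.0, 2.0, n - 3)])
A = diags(d, 0).tocsr()
b = np.ones(n)

def inner_iterations(M=None):
    resids = []
    gmres(A, b, restart=20, maxiter=300, M=M,
          callback=resids.append, callback_type='pr_norm')
    return len(resids)

k = 3
U = np.eye(n)[:, :k]                  # eigenvectors of the small eigenvalues
lam = d[:k]
def apply_M(v):                       # spectral deflation: maps them to 1
    return v + U @ ((1.0 / lam - 1.0) * (U.T @ v))
M = LinearOperator((n, n), matvec=apply_M)

plain, deflated = inner_iterations(), inner_iterations(M)
# deflation cuts the inner-iteration count sharply
```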
Implicitly restarted GMRES and Arnoldi methods for nonsymmetric systems of equations
 SIAM J. Matrix Anal. Appl
Abstract

Cited by 33 (7 self)
Abstract. The generalized minimum residual method (GMRES) is well known for solving large nonsymmetric systems of linear equations. It generally uses restarting, which slows the convergence. However, some information can be retained at the time of the restart and used in the next cycle. We present algorithms that use implicit restarting in order to retain this information. Approximate eigenvectors determined from the previous subspace are included in the new subspace. This deflates the smallest eigenvalues and thus improves the convergence. The subspace that contains the approximate eigenvectors is itself a Krylov subspace, but not with the usual starting vector. The implicitly restarted FOM algorithm includes standard Ritz vectors in the subspace. The eigenvalue portion of its calculations is equivalent to Sorensen’s IRA algorithm. The implicitly restarted GMRES algorithm uses harmonic Ritz vectors. This algorithm also gives a new approach to computing interior eigenvalues. Key words. GMRES, implicit restarting, iterative methods, nonsymmetric systems, harmonic
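Sorensen's IRA, which the abstract identifies with the eigenvalue portion of the implicitly restarted FOM algorithm, is what ARPACK, and hence SciPy's `eigs`, implements: Arnoldi with implicit restarts that keep the wanted approximate eigenvectors in the subspace. A toy nonsymmetric triangular matrix makes the expected spectrum obvious:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

# SciPy's eigs calls ARPACK, i.e. the implicitly restarted Arnoldi method.
n = 300
# upper-bidiagonal (triangular), so its eigenvalues are exactly 1..n
A = diags([np.arange(1.0, n + 1), np.ones(n - 1)], [0, 1]).tocsr()
vals = eigs(A, k=4, which='LM', ncv=20, return_eigenvectors=False)
# 'LM' requests the 4 eigenvalues of largest magnitude: 297..300
```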
Analysis of Acceleration Strategies for Restarted Minimal Residual Methods
, 2000
Abstract

Cited by 32 (6 self)
We provide an overview of existing strategies which compensate for the deterioration of convergence of minimum residual (MR) Krylov subspace methods due to restarting. We evaluate the popular practice of using nearly invariant subspaces to either augment Krylov subspaces or to construct preconditioners which invert on these subspaces. In the case where these spaces are exactly invariant, the augmentation approach is shown to be superior. We further show how a strategy recently introduced by de Sturler for truncating the approximation space of an MR method can be interpreted as a controlled loosening of the condition for global MR approximation based on the canonical angles between subspaces. For the special case of Krylov subspace methods, we give a concise derivation of the role of Ritz and harmonic Ritz values and vectors in the polynomial description of Krylov spaces as well as of the use of the implicitly updated Arnoldi method for manipulating Krylov spaces.
A restarted Krylov subspace method for the evaluation of matrix functions
 SIAM J. Numer. Anal
Abstract

Cited by 31 (4 self)
Abstract. We show how the Arnoldi algorithm for approximating a function of a matrix times a vector can be restarted in a manner analogous to restarted Krylov subspace methods for solving linear systems of equations. The resulting restarted algorithm reduces to other known algorithms for the reciprocal and the exponential functions. We further show that the restarted algorithm inherits the superlinear convergence property of its unrestarted counterpart for entire functions and present the results of numerical experiments.
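The unrestarted Arnoldi approximation that this paper restarts is f(A)b ≈ ‖b‖ V_m f(H_m) e_1, where A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T is the Arnoldi decomposition. A hedged sketch for f = exp on a random toy matrix (the restarting itself, the paper's contribution, is not shown):

```python
import numpy as np
from scipy.linalg import expm

# Unrestarted Arnoldi approximation of exp(A) @ b.
rng = np.random.default_rng(2)
n, m = 120, 30
A = rng.standard_normal((n, n)) / np.sqrt(n)     # toy matrix, norm ~ 2
b = rng.standard_normal(n)

# Arnoldi with modified Gram-Schmidt: A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T
V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
beta = np.linalg.norm(b)
V[:, 0] = b / beta
for j in range(m):
    w = A @ V[:, j]
    for i in range(j + 1):
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]

approx = beta * (V[:, :m] @ expm(H[:m, :m])[:, 0])   # ||b|| V_m exp(H_m) e_1
exact = expm(A) @ b                                   # dense reference
err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```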
Efficient expansion of subspaces in the Jacobi–Davidson method for standard and generalized eigenproblems
, 1998
Abstract

Cited by 26 (6 self)
We discuss approaches for efficient handling of the correction equation in the Jacobi–Davidson method. The correction equation is effective in a subspace orthogonal to the current eigenvector approximation. The operator in the correction equation is a dense matrix, but it is composed of three factors that allow for a sparse representation. If the given matrix eigenproblem is sparse, then one often aims to construct a preconditioner for that matrix. We discuss how to restrict this preconditioner effectively to the subspace orthogonal to the current eigenvector. The correction equation itself is formulated in terms of approximations for an eigenpair. To avoid misconvergence, these approximations must be selected correctly, and this aspect is discussed as well.
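The restriction mentioned above admits a cheap closed form: given any nonsingular preconditioner K (for A − θI), the projected system (I − uuᵀ)K(I − uuᵀ)t = r with t ⟂ u, r ⟂ u is solved by two solves with K and one scalar correction. A hedged sketch with arbitrary stand-ins for K, u, and r:

```python
import numpy as np

# Solving (I - u u^T) K (I - u u^T) t = r with t, r orthogonal to u,
# using only solves with K itself.
rng = np.random.default_rng(3)
n = 80
K = np.diag(np.linspace(0.5, 4.5, n))             # stand-in preconditioner
u = rng.standard_normal(n); u /= np.linalg.norm(u)
r = rng.standard_normal(n); r -= u * (u @ r)      # right-hand side, r ⟂ u

Kinv_r = np.linalg.solve(K, r)
Kinv_u = np.linalg.solve(K, u)
alpha = (u @ Kinv_r) / (u @ Kinv_u)               # chosen so u^T t = 0
t = Kinv_r - alpha * Kinv_u                       # solves the projected system
```

Since uᵀt = 0, one checks (I − uuᵀ)K t = (I − uuᵀ)(r − αu) = r, so t indeed solves the projected system.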
A Jacobi–Davidson type SVD method
 SIAM J. Sci. Comput., 2001
Abstract

Cited by 24 (7 self)
Abstract. We discuss a new method for the iterative computation of a portion of the singular values and vectors of a large sparse matrix. Similar to the Jacobi–Davidson method for the eigenvalue problem, we compute in each step a correction by (approximately) solving a correction equation. We give a few variants of this Jacobi–Davidson SVD (JDSVD) method with their theoretical properties. It is shown that the JDSVD can be seen as an accelerated (inexact) Newton scheme. We experimentally compare the method with some other iterative SVD methods. Key words. Jacobi–Davidson, singular value decomposition (SVD), singular values, singular vectors, norm, augmented matrix, correction equation, (inexact) accelerated Newton, improving singular values AMS subject classifications. 65F15 (65F35) PII. S1064827500372973
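The "augmented matrix" in the key words refers to the standard identity that lets SVD methods like JDSVD be phrased as eigensolvers: the eigenvalues of [[0, A], [Aᵀ, 0]] are ± the singular values of A (plus zeros when A is rectangular). A small numerical check of that identity (our own toy sizes):

```python
import numpy as np

# Eigenvalues of the augmented matrix [[0, A], [A^T, 0]] versus the
# singular values of A.
rng = np.random.default_rng(4)
m, n = 6, 4
A = rng.standard_normal((m, n))
aug = np.block([[np.zeros((m, m)), A],
                [A.T, np.zeros((n, n))]])
eigvals = np.linalg.eigvalsh(aug)                 # aug is symmetric
sigmas = np.linalg.svd(A, compute_uv=False)       # n singular values
# spectrum of aug: {+sigma_i} ∪ {-sigma_i} ∪ {0 repeated m - n times}
```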