Results 1–10 of 24
Recycling Krylov Subspaces for Sequences of Linear Systems
 SIAM J. Sci. Comput., 2004
Cited by 44 (3 self)
Many problems in engineering and physics require the solution of a large sequence of linear systems. We can reduce the cost of solving subsequent systems in the sequence by recycling information from previous systems. We consider two different approaches. For several model problems, we demonstrate that we can reduce the iteration count required to solve a linear system by a factor of two. We consider both Hermitian and non-Hermitian problems, and present numerical experiments to illustrate the effects of subspace recycling.
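The core idea of recycling, reusing spectral information from one solve to accelerate the next, can be sketched with a simplified deflation experiment. This is only an illustration of the general principle, not the specific recycling algorithms studied in the paper; the matrix, spectrum, and helper `cg` below are invented for the demo.

```python
import numpy as np

def cg(A, b, x0, tol=1e-8, maxit=1000):
    """Plain conjugate gradient; returns the solution and the iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        if np.sqrt(rs) < tol * np.linalg.norm(b):
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

rng = np.random.default_rng(0)
n = 200
# SPD test matrix: a well-conditioned cluster plus two tiny outlier eigenvalues.
eigs = np.concatenate(([1e-4, 1e-3], np.linspace(1.0, 10.0, n - 2)))
A = np.diag(eigs)
b = rng.standard_normal(n)

# Plain solve from a zero initial guess.
x_plain, it_plain = cg(A, b, np.zeros(n))

# "Recycled" solve: deflate the two outlier eigenvectors W by starting from
# the Galerkin solution on span(W); for exact eigenvectors the residual then
# stays orthogonal to W, so CG only sees the well-conditioned cluster.
W = np.eye(n)[:, :2]                      # eigenvectors of the two outliers
x0 = W @ np.linalg.solve(W.T @ A @ W, W.T @ b)
x_defl, it_defl = cg(A, b, x0)

print(it_plain, it_defl)   # the deflated solve needs far fewer iterations
```

In a sequence of systems, approximations to such extreme eigenvectors are harvested cheaply from the Krylov data of earlier solves, which is where the cost reduction comes from.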
On the occurrence of superlinear convergence of exact and inexact Krylov subspace methods
 SIAM Rev., 2005
Cited by 20 (7 self)
We present a general analytical model which describes the superlinear convergence of Krylov subspace methods. We take an invariant subspace approach, so that our results apply also to inexact methods, and to non-diagonalizable matrices. Thus, we provide a unified treatment of the superlinear convergence of GMRES, Conjugate Gradients, block versions of these, and inexact subspace methods. Numerical experiments illustrate the bounds obtained.
Convergence of polynomial restart Krylov methods for eigenvalue computations
 SIAM Rev.
Cited by 16 (3 self)
Abstract. Krylov subspace methods have led to reliable and effective tools for resolving large-scale, non-Hermitian eigenvalue problems. Since practical considerations often limit the dimension of the approximating Krylov subspace, modern algorithms attempt to identify and condense significant components from the current subspace, encode them into a polynomial filter, and then restart the Krylov process with a suitably refined starting vector. In effect, polynomial filters dynamically steer low-dimensional Krylov spaces toward a desired invariant subspace through their action on the starting vector. The spectral complexity of nonnormal matrices makes convergence of these methods difficult to analyze, and these effects are further complicated by the polynomial filter process. The principal object of study in this paper is the angle an approximating Krylov subspace forms with a desired invariant subspace. Convergence analysis is posed in a geometric framework that is robust to eigenvalue ill-conditioning, yet remains relatively uncluttered. The bounds described here suggest that the sensitivity of desired eigenvalues exerts little influence on convergence, provided the associated invariant subspace is well-conditioned; ill-conditioning of unwanted eigenvalues plays an essential role. This framework also gives insight into the design of effective polynomial filters. Numerical examples illustrate the subtleties that arise when restarting non-Hermitian iterations.
Key words: Krylov subspaces, Arnoldi algorithm, Lanczos algorithm, eigenvalue computations, containment gap, pseudospectra
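The paper's central quantity, the angle between a Krylov subspace and a target invariant subspace, is easy to observe numerically. The sketch below is illustrative only (no polynomial restarting; the matrix, spectrum, and the helper `krylov_angle` are invented for the demo): it builds an orthonormal Krylov basis and watches the angle to a dominant eigenvector shrink as the subspace grows.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
# Diagonalizable test matrix with one well-separated eigenvalue at 5.0.
eigs = np.concatenate(([5.0], rng.uniform(0.0, 1.0, n - 1)))
X = rng.standard_normal((n, n))
A = X @ np.diag(eigs) @ np.linalg.inv(X)
v = X[:, 0] / np.linalg.norm(X[:, 0])      # the desired eigenvector

def krylov_angle(A, b, k, v):
    """sin of the angle between the Krylov space K_k(A, b) and span{v}."""
    Q = np.zeros((len(b), k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        w = A @ Q[:, j - 1]
        w -= Q[:, :j] @ (Q[:, :j].T @ w)   # Arnoldi-style orthogonalization
        Q[:, j] = w / np.linalg.norm(w)
    return np.linalg.norm(v - Q @ (Q.T @ v))

b = rng.standard_normal(n)
angles = [krylov_angle(A, b, k, v) for k in (2, 5, 10)]
print(angles)   # the gap to the invariant subspace shrinks as k grows
```

Because the Krylov spaces are nested, this distance can never increase with k; the paper's bounds quantify how fast it decreases, and how restarting reshapes the starting vector b.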
Convergence analysis of Krylov subspace iterations with methods from potential theory
 SIAM Review
Cited by 9 (2 self)
Abstract. Krylov subspace iterations are among the best-known and most widely used numerical methods for solving linear systems of equations and for computing eigenvalues of large matrices. These methods are polynomial methods whose convergence behavior is related to the behavior of polynomials on the spectrum of the matrix. This leads to an extremal problem in polynomial approximation theory: how small can a monic polynomial of a given degree be on the spectrum? This survey gives an introduction to a recently developed technique to analyze this extremal problem in the case of symmetric matrices. It is based on global information on the spectrum in the sense that the eigenvalues are assumed to be distributed according to a certain measure. Then, depending on the number of iterations, the Lanczos method for the calculation of eigenvalues finds those eigenvalues that lie in a certain region, which is characterized by means of a constrained equilibrium problem from potential theory. The same constrained equilibrium problem also describes the superlinear convergence of conjugate gradients and other iterative methods for solving linear systems.
Key words: Krylov subspace iterations, Ritz values, eigenvalue distribution, equilibrium measure, constrained equilibrium, potential theory
AMS subject classifications: 15A18, 31A05, 31A15, 65F15
The Arnoldi Eigenvalue Iteration with Exact Shifts Can Fail
 SIAM J. Matrix Anal. Appl., Vol. 31, No. 1, pp. 1–10, 2009
Cited by 9 (3 self)
The restarted Arnoldi algorithm, implemented in the ARPACK software library and MATLAB’s eigs command, is among the most common means of computing select eigenvalues and eigenvectors of a large, sparse matrix. To assist convergence, a starting vector is repeatedly refined via the application of automatically constructed polynomial filters whose roots are known as “exact shifts.” Though Sorensen proved the success of this procedure under mild hypotheses for Hermitian matrices, a convergence proof for the non-Hermitian case has remained elusive. The present note describes a class of examples for which the algorithm fails in the strongest possible sense; that is, the polynomial filter used to restart the iteration deflates the eigenspace one is attempting to compute.
The many proofs of an identity on the norm of oblique projections
 Numer. Algorithms
Cited by 9 (1 self)
Given an oblique projector P on a Hilbert space, i.e., an operator satisfying P^2 = P which is neither null nor the identity, it holds that ‖P‖ = ‖I − P‖. This useful equality, while not widely known, has been proven repeatedly in the literature. Many published proofs are reviewed, and simpler ones are presented.
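The identity is easy to check numerically. The sketch below is an illustrative construction (the dimension, rank, and factor names are arbitrary): it builds a random oblique projector onto range(U) along the orthogonal complement of range(V) and compares the two spectral norms.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 6, 3
# Random oblique projector onto range(U) along the orthogonal complement
# of range(V): P @ P == P, but P != P.T in general.
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
P = U @ np.linalg.solve(V.T @ U, V.T)

assert np.allclose(P @ P, P)                 # idempotent, so a projector
norm_P  = np.linalg.norm(P, 2)               # spectral norm of P
norm_IP = np.linalg.norm(np.eye(n) - P, 2)   # spectral norm of I - P
print(norm_P, norm_IP)                       # the two norms coincide
```

Note that both norms exceed 1 whenever the projector is not orthogonal, which is why ‖P‖ appears in stability bounds for oblique projection methods.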
The effect of aggressive early deflation on the convergence of the QR algorithm
 SIAM J. Matrix Anal. Appl.
Cited by 6 (3 self)
Aggressive early deflation has proven to significantly enhance the convergence of the QR algorithm for computing the eigenvalues of a nonsymmetric matrix. One purpose of this paper is to point out that this deflation strategy is equivalent to extracting converged Ritz vectors from certain Krylov subspaces. As a special case, the single-shift QR algorithm enhanced with aggressive early deflation corresponds to a Krylov subspace method whose starting vector undergoes a Rayleigh-quotient iteration. It is shown how these observations can be used to derive improved convergence bounds for the QR algorithm.
Convergence of the isometric Arnoldi process
 2003
Cited by 5 (2 self)
It is well known that the performance of eigenvalue algorithms such as the Lanczos and the Arnoldi method depends on the distribution of eigenvalues. Under fairly general assumptions we characterize the region of good convergence for the Isometric Arnoldi Process. We also determine bounds for the rate of convergence and we prove sharpness of these bounds. The distribution of isometric Ritz values is obtained as the minimizer of an extremal problem. We use techniques from logarithmic potential theory in proving these results.
Fixed-Polynomial Approximate Spectral Transformations for Preconditioning the Eigenvalue Problem
 2003