Results 1–10 of 45
Recent computational developments in Krylov subspace methods for linear systems
Numer. Linear Algebra Appl., 2007
Cited by 86 (12 self)
Abstract: Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties, such as special forms of symmetry and those depending on one or more parameters.
Analysis of Acceleration Strategies for Restarted Minimal Residual Methods
2000
Cited by 44 (6 self)
Abstract: We provide an overview of existing strategies which compensate for the deterioration of convergence of minimum residual (MR) Krylov subspace methods due to restarting. We evaluate the popular practice of using nearly invariant subspaces to either augment Krylov subspaces or to construct preconditioners which invert on these subspaces. In the case where these spaces are exactly invariant, the augmentation approach is shown to be superior. We further show how a strategy recently introduced by de Sturler for truncating the approximation space of an MR method can be interpreted as a controlled loosening of the condition for global MR approximation based on the canonical angles between subspaces. For the special case of Krylov subspace methods, we give a concise derivation of the role of Ritz and harmonic Ritz values and vectors in the polynomial description of Krylov spaces, as well as of the use of the implicitly updated Arnoldi method for manipulating Krylov spaces.
Solving Large Scale Semidefinite Programs via an Iterative Solver on the Augmented Systems
2002
Cited by 32 (10 self)
Abstract: The search directions in an interior-point method for large-scale semidefinite programming (SDP) can be computed by applying a Krylov iterative method to either the Schur complement equation (SCE) or the augmented equation. Both methods suffer from slow convergence as interior-point iterates approach optimality. Numerical experiments have shown that the diagonally preconditioned conjugate residual method on the SCE typically takes a huge number of steps to converge, and it is difficult to incorporate cheap and effective preconditioners into the SCE. This paper proposes to apply the preconditioned symmetric quasi-minimal residual (PSQMR) method to a reduced augmented equation that is derived from the augmented equation by utilizing the eigenvalue structure of the interior-point iterates. Numerical experiments on SDP problems arising from maximum clique and selected SDPLIB problems show that moderately accurate solutions can be obtained with a modest number of PSQMR steps using the proposed preconditioned reduced augmented equation. An SDP problem with 127600 constraints is solved in about 9.5 hours to an accuracy of 10^-6 in relative duality gap.
Expressions And Bounds For The GMRES Residual
BIT, 1999
Cited by 30 (0 self)
Abstract: Expressions and bounds are derived for the residual norm in GMRES. It is shown that the minimal residual norm is large as long as the Krylov basis is well-conditioned. For scaled Jordan blocks the minimal residual norm is expressed in terms of eigenvalues and departure from normality. For normal matrices the minimal residual norm is expressed in terms of products of relative eigenvalue differences. Key words: linear system, Krylov methods, GMRES, MINRES, Vandermonde matrix, eigenvalues, departure from normality. AMS subject classification: 15A03, 15A06, 15A09, 15A12, 15A18, 15A60, 65F10, 65F15, 65F20, 65F35. 1. Introduction. The generalised minimal residual method (GMRES) [31, 36] (and MINRES for Hermitian matrices [30]) is an iterative method for solving systems of linear equations Ax = b. The approximate solution in iteration i minimises the two-norm of the residual b − Az over the Krylov space span{b, Ab, ..., A^{i−1} b}. The goal of this paper is to express this minimal residual norm...
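To make the minimisation in that introduction concrete, here is a naive sketch (not the paper's algorithm, and not how production GMRES works, which builds an orthonormal Arnoldi basis): the i-th minimal residual norm over span{b, Ab, ..., A^{i−1} b} computed by a dense least-squares solve. The matrix and right-hand side are toy values chosen for illustration.

```python
import numpy as np

def minimal_residual_norm(A, b, i):
    """Minimise ||b - A z||_2 over z in span{b, Ab, ..., A^{i-1} b}.

    Illustrative only: forming A^k b explicitly is numerically poor for
    large i; real GMRES uses the Arnoldi process instead.
    """
    # explicit Krylov basis as columns
    K = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(i)])
    # z = K y; pick y by least squares on ||b - (A K) y||_2
    y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
    return np.linalg.norm(b - A @ (K @ y))

A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])   # toy normal matrix
b = np.ones(5)
norms = [minimal_residual_norm(A, b, i) for i in range(1, 6)]
```

Because the Krylov spaces are nested, the residual norms are non-increasing, and for an n-by-n matrix the residual vanishes by iteration n (up to round-off).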
Convergence of Restarted Krylov Subspaces to Invariant Subspaces
SIAM J. Matrix Anal. Appl., 2001
Cited by 28 (4 self)
Abstract: The performance of Krylov subspace eigenvalue algorithms for large matrices can be measured by the angle between a desired invariant subspace and the Krylov subspace. We develop general bounds for this convergence that include the effects of polynomial restarting and impose no restrictions concerning the diagonalizability of the matrix or its degree of nonnormality. Associated with a desired set of eigenvalues is a maximum "reachable invariant subspace" that can be developed from the given starting vector. Convergence for this distinguished subspace is bounded in terms involving a polynomial approximation problem. Elementary results from potential theory lead to convergence rate estimates and suggest restarting strategies based on optimal approximation points (e.g., Leja or Chebyshev points); exact shifts are evaluated within this framework. Computational examples illustrate the utility of these results. Origins of superlinear effects are also described.
Which eigenvalues are found by the Lanczos method
SIAM J. Matrix Anal. Appl.
Cited by 25 (5 self)
Abstract: When discussing the convergence properties of the Lanczos iteration method for the real symmetric eigenvalue problem, Trefethen and Bau noted that the Lanczos method tends to find eigenvalues in regions that have too little charge when compared to an equilibrium distribution. In this paper a quantitative version of this rule of thumb is presented. We describe, in an asymptotic sense, the region containing those eigenvalues that are well approximated by the Ritz values. The region depends on the distribution of eigenvalues and on the ratio between the size of the matrix and the number of iterations, and it is characterized by an extremal problem in potential theory which was first considered by Rakhmanov. We give examples showing the connection with the equilibrium distribution. Key words: Ritz values, equilibrium distribution, potential theory
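As a small illustration of the Ritz values discussed above (a sketch under assumed toy data, not code from the paper): a bare-bones Lanczos iteration on a symmetric matrix whose spectrum fills [0, 1] plus one outlier at 2. The outlying eigenvalue is well approximated after a modest number of steps, while the interior of the spectrum is resolved far more slowly.

```python
import numpy as np

def lanczos_ritz(A, v0, m):
    """m steps of plain Lanczos for symmetric A; returns sorted Ritz values
    (eigenvalues of the m-by-m tridiagonal matrix T).

    No reorthogonalisation, so this is illustrative rather than robust.
    """
    q_prev = np.zeros_like(v0)
    q = v0 / np.linalg.norm(v0)
    beta = 0.0
    alphas, betas = [], []
    for _ in range(m):
        w = A @ q
        alpha = q @ w
        w -= alpha * q + beta * q_prev      # three-term recurrence
        alphas.append(alpha)
        beta = np.linalg.norm(w)
        betas.append(beta)
        q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.sort(np.linalg.eigvalsh(T))

# spectrum: 99 eigenvalues filling [0, 1], plus one outlier at 2
eigs = np.concatenate([np.linspace(0.0, 1.0, 99), [2.0]])
A = np.diag(eigs)
rng = np.random.default_rng(1)
ritz = lanczos_ritz(A, rng.standard_normal(100), 20)
```

The largest Ritz value after 20 steps matches the outlier essentially to machine precision; the 19 remaining Ritz values only sample the dense part of the spectrum.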
SUPERLINEAR CONVERGENCE OF CONJUGATE GRADIENTS
2001
Cited by 24 (6 self)
Abstract: We give a theoretical explanation for superlinear convergence behavior observed while solving large symmetric systems of equations using the conjugate gradient method or other Krylov subspace methods. We present a new bound on the relative error after n iterations. This bound is valid in an asymptotic sense when the size N of the system grows together with the number of iterations. The bound depends on the asymptotic eigenvalue distribution and on the ratio n/N. Under appropriate conditions we show that the bound is asymptotically sharp. Our findings are related to some recent results concerning asymptotics of discrete orthogonal polynomials. An important tool in our investigations is a constrained energy problem in logarithmic potential theory. The new asymptotic bounds for the rate of convergence are illustrated by discussing Toeplitz systems as well as a model problem stemming from the discretization of the Poisson equation.
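As a companion sketch (an assumed setup, not the paper's analysis or bounds): textbook conjugate gradients applied to the 1D Poisson model problem mentioned in the abstract, recording the relative residual history in which the convergence behaviour can be observed.

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=None):
    """Textbook conjugate gradient for symmetric positive definite A.

    Returns the final iterate and the history of relative residual
    norms, so the convergence behaviour can be inspected.
    """
    n = len(b)
    maxiter = maxiter if maxiter is not None else 5 * n
    x = np.zeros(n)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    hist = [1.0]
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        hist.append(np.sqrt(rs_new) / bnorm)
        if hist[-1] < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, hist

# 1D Poisson model problem: tridiag(-1, 2, -1)
N = 100
A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
b = np.ones(N)
x, hist = cg(A, b)
```

Plotting `hist` on a log scale for this model problem shows the rate improving as the iteration proceeds, which is the superlinear effect the paper explains via the asymptotic eigenvalue distribution.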
Solving Some Large Scale Semidefinite Programs Via the Conjugate Residual Method
2000
Cited by 23 (10 self)
Abstract: Most current implementations of interior-point methods for semidefinite programming use a direct method to solve the Schur complement equation (SCE) M y = h in computing the search direction. When the number of constraints is large, the problem of having insufficient memory to store M can be avoided if an iterative method is used instead. Numerical experiments have shown that the conjugate residual (CR) method typically takes a huge number of steps to generate a high-accuracy solution. On the other hand, it is difficult to incorporate traditional preconditioners into the SCE, except for block diagonal preconditioners. We decompose the SCE into a 2 × 2 block system by decomposing y (similarly for h) into two orthogonal components, with one lying in a certain subspace that is determined from the structure of M. Numerical experiments on semidefinite programming problems arising from the Lovász function of graphs and MAXCUT problems show that high-accuracy solutions can be obtained with moderate n...
Convergence of polynomial restart Krylov methods for eigenvalue computations
SIAM Rev.
Cited by 21 (2 self)
Abstract: Krylov subspace methods have led to reliable and effective tools for resolving large-scale, non-Hermitian eigenvalue problems. Since practical considerations often limit the dimension of the approximating Krylov subspace, modern algorithms attempt to identify and condense significant components from the current subspace, encode them into a polynomial filter, and then restart the Krylov process with a suitably refined starting vector. In effect, polynomial filters dynamically steer low-dimensional Krylov spaces toward a desired invariant subspace through their action on the starting vector. The spectral complexity of nonnormal matrices makes convergence of these methods difficult to analyze, and these effects are further complicated by the polynomial filter process. The principal object of study in this paper is the angle an approximating Krylov subspace forms with a desired invariant subspace. Convergence analysis is posed in a geometric framework that is robust to eigenvalue ill-conditioning, yet remains relatively uncluttered. The bounds described here suggest that the sensitivity of desired eigenvalues exerts little influence on convergence, provided the associated invariant subspace is well-conditioned; ill-conditioning of unwanted eigenvalues plays an essential role. This framework also gives insight into the design of effective polynomial filters. Numerical examples illustrate the subtleties that arise when restarting non-Hermitian iterations. Key words: Krylov subspaces, Arnoldi algorithm, Lanczos algorithm, eigenvalue computations, containment gap, pseudospectra
How Descriptive Are GMRES Convergence Bounds?
Oxford University Computing Laboratory, 1999
Cited by 17 (1 self)
Abstract: Eigenvalues with the eigenvector condition number, the field of values, and pseudospectra have all been suggested as the basis for convergence bounds for minimum residual Krylov subspace methods applied to nonnormal coefficient matrices. This paper analyzes and compares these bounds, illustrating with six examples the success and failure of each one. Refined bounds based on eigenvalues and the field of values are suggested to handle low-dimensional nonnormality. It is observed that pseudospectral bounds can capture multiple convergence stages. Unfortunately, computation of pseudospectra can be rather expensive. This motivates an adaptive technique for estimating GMRES convergence based on approximate pseudospectra taken from the Arnoldi process that is the basis for GMRES. Key words: Krylov subspace methods, GMRES convergence, nonnormal matrices, pseudospectra, field of values. AMS subject classifications: 15A06, 65F10, 15A18, 15A60, 31A15. 1. Introduction. Popular algorithms for...