Results 1–10 of 58
Recent computational developments in Krylov subspace methods for linear systems
Numer. Linear Algebra Appl., 2007
Cited by 85 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
Error estimation and evaluation of matrix functions via the Faber transform
SIAM J. Numer. Anal.
Cited by 39 (13 self)
Abstract. The need to evaluate expressions of the form f(A) or f(A)b, where f is a nonlinear function, A is a large sparse n × n matrix, and b is an n-vector, arises in many applications. This paper describes how the Faber transform applied to the field of values of A can be used to determine improved error bounds for popular polynomial approximation methods based on the Arnoldi process. Applications of the Faber transform to rational approximation methods and, in particular, to the rational Arnoldi process are also discussed.
A new investigation of the extended Krylov subspace method for matrix function evaluations
Numer. Linear Algebra Appl., 2010
Cited by 29 (4 self)
Abstract. For large square matrices A and functions f, the numerical approximation of the action of f(A) on a vector v has received considerable attention in the last two decades. In this paper we investigate the extended Krylov subspace method, a technique that was recently proposed to approximate f(A)v for symmetric A. We provide a new theoretical analysis of the method, which improves the original result for symmetric A and gives a new estimate for nonsymmetric A. Numerical experiments confirm that the new error estimates correctly capture the linear asymptotic convergence rate of the approximation. By using recent algorithmic improvements, we also show that the method is computationally competitive with respect to other enhancement techniques.
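A minimal dense-matrix sketch of the extended Krylov idea described in the abstract, assuming a symmetric A; the function name is illustrative, and a real implementation would use sparse factorizations and incremental orthogonalization rather than an explicit inverse and a one-shot QR:

```python
import numpy as np

def extended_krylov_fAb(A, b, m, f):
    """Approximate f(A) b (A symmetric) from the extended Krylov subspace
    spanned by {b, A b, A^{-1} b, ..., A^m b, A^{-m} b}."""
    Ainv = np.linalg.inv(A)          # sketch only; use sparse solves in practice
    cols, v, w = [b / np.linalg.norm(b)], b, b
    for _ in range(m):
        v = A @ v                    # positive powers of A
        w = Ainv @ w                 # negative powers of A
        cols.append(v / np.linalg.norm(v))
        cols.append(w / np.linalg.norm(w))
    V, _ = np.linalg.qr(np.column_stack(cols))   # orthonormal basis
    T = V.T @ (A @ V)                # symmetric projection of A onto the subspace
    lam, S = np.linalg.eigh(T)
    c = S.T @ (V.T @ b)
    return V @ (S @ (f(lam) * c))    # V f(T) V^T b
```

The negative powers are what distinguish this from ordinary Krylov approximation; they pay off for functions such as the square root or inverse square root whose singularity lies near the origin.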
DECAY BOUNDS AND O(n) ALGORITHMS FOR APPROXIMATING FUNCTIONS OF SPARSE MATRICES
, 2007
Cited by 21 (2 self)
We establish decay bounds for the entries of f(A), where A is a sparse (in particular, banded) n × n diagonalizable matrix and f is smooth on a subset of the complex plane containing the spectrum of A. Combined with techniques from approximation theory, the bounds are used to compute sparse (or banded) approximations to f(A), resulting in algorithms that under appropriate conditions have linear complexity in the matrix dimension. Applications to various types of problems are discussed and illustrated by numerical examples.
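The decay phenomenon behind these bounds is easy to observe numerically; a small sketch (not the paper's algorithm) using f(t) = exp(−t) and a tridiagonal A:

```python
import numpy as np
from scipy.linalg import expm

n = 100
# Banded (tridiagonal) symmetric matrix: the 1D discrete Laplacian
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
F = expm(-A)                       # f(A) with f(t) = exp(-t)
# |F[i, j]| decays rapidly with the distance |i - j| from the diagonal,
# so F can be truncated to a banded matrix with small error
decay = [abs(F[0, j]) for j in (0, 5, 20, 50)]
```

Truncating all entries below a tolerance yields the sparse approximation to f(A) that the linear-complexity algorithms exploit.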
A RESTARTED LANCZOS APPROXIMATION TO FUNCTIONS OF A SYMMETRIC MATRIX
Cited by 16 (5 self)
In this paper, we investigate a method for restarting the Lanczos method for approximating the matrix-vector product f(A)b, where A ∈ R^{n×n} is a symmetric matrix. For analytic f we derive a novel restart function that identifies the error in the Lanczos approximation. The restart procedure is then generated by a restart formula using a sequence of these restart functions. We present an error bound for the proposed restart scheme. We also present an error bound for the restarted Lanczos approximation of f(A)b for symmetric positive definite A when f is in a particular class of completely monotone functions. We illustrate, for some important matrix function applications, the usefulness of these bounds for terminating the restart process once the desired accuracy in the matrix function approximation has been achieved.
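For context, a sketch of the plain (unrestarted) Lanczos approximation that such restart schemes build on, assuming symmetric A and no breakdown; the function name is illustrative:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos_fAb(A, b, m, f):
    """m-step Lanczos approximation to f(A) b for symmetric A:
    f(A) b ~= ||b|| V_m f(T_m) e_1, with T_m tridiagonal."""
    n = b.size
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    w = A @ V[:, 0]
    alpha[0] = V[:, 0] @ w
    w = w - alpha[0] * V[:, 0]
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)        # assumes no breakdown
        V[:, j] = w / beta[j - 1]
        w = A @ V[:, j] - beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        w = w - V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # full reorthogonalization
    theta, S = eigh_tridiagonal(alpha, beta)   # T_m = S diag(theta) S^T
    return beta0 * (V @ (S @ (f(theta) * S[0, :])))  # ||b|| V_m f(T_m) e_1
```

Restarting addresses the growing storage cost of V: the basis is discarded after a fixed number of steps and the iteration is continued on a representation of the error, which is where the restart functions of the paper enter.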
COMPUTING f(A)b VIA LEAST SQUARES POLYNOMIAL APPROXIMATIONS
, 2009
Cited by 15 (7 self)
Given a certain function f, various methods have been proposed in the past for addressing the important problem of computing the matrix-vector product f(A)b without explicitly computing the matrix f(A). Such methods were typically used to compute a specific function f, a common case being that of the exponential. This paper discusses a procedure based on least squares polynomials that can, in principle, be applied to any (continuous) function f. The idea is to start by approximating the function by a spline of a desired accuracy. Then, a particular definition of the function inner product is invoked that facilitates the computation of the least squares polynomial to this spline function. Since the function is approximated by a polynomial, the matrix A is referenced only through a matrix-vector multiplication. In addition, the choice of the inner product makes it possible to avoid numerical integration. As an important application, we consider the case when f(t) = √t and A is a sparse, symmetric positive definite matrix, which arises in sampling from a Gaussian process distribution. The covariance matrix of the distribution is defined using a covariance function with compact support, evaluated at a very large number of sites on a regular or irregular grid. We derive error bounds and show extensive numerical results to illustrate the effectiveness of the proposed technique.
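The polynomial idea can be sketched with an ordinary least-squares Chebyshev fit in place of the paper's spline-based inner product (a substitution, not the paper's method); the spectral interval is computed exactly here for simplicity, where in practice cheap eigenvalue bounds would be used:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def poly_fAb(A, b, f, deg=30, nsamp=500):
    """Approximate f(A) b via a least-squares Chebyshev fit of f on the
    spectral interval of symmetric A; A enters only through matvecs."""
    lam = np.linalg.eigvalsh(A)          # sketch; cheap bounds suffice in practice
    lo, hi = lam[0], lam[-1]
    t = np.linspace(lo, hi, nsamp)
    s = (2 * t - (lo + hi)) / (hi - lo)  # map [lo, hi] -> [-1, 1]
    c = C.chebfit(s, f(t), deg)          # least-squares Chebyshev coefficients
    Xv = lambda v: (2 * (A @ v) - (lo + hi) * v) / (hi - lo)  # scaled matvec
    w_prev, w = b, Xv(b)                 # T_0(X) b and T_1(X) b
    y = c[0] * w_prev + c[1] * w
    for k in range(2, deg + 1):
        w_prev, w = w, 2 * Xv(w) - w_prev   # three-term Chebyshev recurrence
        y = y + c[k] * w
    return y
```

Note that, as in the paper, each degree costs one multiplication by A and a few vector operations, so the method is attractive whenever matvecs with A are cheap.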
RESIDUAL, RESTARTING AND RICHARDSON ITERATION FOR THE MATRIX EXPONENTIAL
Cited by 12 (2 self)
Abstract. A well-known problem in computing some matrix functions iteratively is the lack of a clear, commonly accepted residual notion. An important matrix function for which this is the case is the matrix exponential. Suppose the matrix exponential of a given matrix times a given vector has to be computed. We develop the approach of Druskin, Greenbaum and Knizhnerman (1998) and interpret the sought-after vector as the value of a vector function satisfying the linear system of ordinary differential equations (ODEs) whose coefficients form the given matrix. The residual is then defined with respect to the initial value problem for this ODE system. The residual introduced in this way can be seen as a backward error. We show how the residual can be computed efficiently within several iterative methods for the matrix exponential. This resolves the question of reliable stopping criteria for these methods. Further, we show that the residual concept can be used to construct new residual-based iterative methods. In particular, a variant of the Richardson method for the new residual appears to provide an efficient way to restart Krylov subspace methods for evaluating the matrix exponential.
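The residual notion is concrete even in a toy setting. For the truncated Taylor approximation y_m(t) = Σ_{j≤m} (tA)^j b / j! of the ODE solution y' = A y, y(0) = b, one has y_m'(t) = A y_{m-1}(t), so the ODE residual r_m(t) = A y_m(t) − y_m'(t) collapses to A times the last Taylor term and is exactly computable. A sketch of this observation (illustrative only; not the Krylov/Richardson scheme of the paper):

```python
import numpy as np

def taylor_exp_with_residual(A, b, t, m):
    """Truncated Taylor approximation y_m(t) to exp(tA) b, together with
    the exactly computable ODE residual r_m(t) = A y_m(t) - y_m'(t)."""
    term, y = b.copy(), b.copy()
    for j in range(1, m + 1):
        term = t * (A @ term) / j       # term = (tA)^j b / j!
        y = y + term
    # y_m'(t) = A y_{m-1}(t), hence r_m = A (y_m - y_{m-1}) = A * (last term)
    return y, A @ term
```

Monitoring the norm of the returned residual gives a stopping criterion without ever forming the exact solution, which is the point of the residual concept.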
DEFLATED RESTARTING FOR MATRIX FUNCTIONS
Cited by 11 (0 self)
Abstract. We investigate an acceleration technique for restarted Krylov subspace methods for computing the action of a function of a large sparse matrix on a vector. Its effect is to ultimately deflate a specific invariant subspace of the matrix which most impedes the convergence of the restarted approximation process. An approximation to the subspace to be deflated is successively refined in the course of the underlying restarted Arnoldi process by extracting Ritz vectors and using those closest to the spectral region of interest as exact shifts. The approximation is constructed with the help of a generalization of Krylov decompositions to linearly dependent vectors. A description of the restarted process as a successive interpolation scheme at Ritz values is given in which the exact shifts are replaced with improved approximations of eigenvalues in each restart cycle. Numerical experiments demonstrate the efficacy of the approach.
A GENERALIZATION OF THE STEEPEST DESCENT METHOD FOR MATRIX FUNCTIONS
, 2008
Cited by 11 (3 self)
We consider the special case of the restarted Arnoldi method for approximating the product of a function of a Hermitian matrix with a vector which results when the restart length is set to one. When applied to the solution of a linear system of equations, this approach coincides with the method of steepest descent. We show that the method is equivalent to an interpolation process in which the node sequence has at most two points of accumulation. This knowledge is used to quantify the asymptotic convergence rate.
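For f(z) = 1/z the approximation solves A x = b, and, as the abstract notes, the restart-length-one scheme then coincides with classical steepest descent; a minimal sketch for symmetric positive definite A (function name and defaults are illustrative):

```python
import numpy as np

def steepest_descent(A, b, tol=1e-10, maxit=10000):
    """Solve A x = b for symmetric positive definite A; this is the
    restart-length-one restarted Arnoldi approximation of A^{-1} b."""
    x = np.zeros_like(b)
    r = b - A @ x
    for _ in range(maxit):
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)   # exact line search along the residual
        x = x + alpha * r
        r = r - alpha * Ar
    return x
```

Each step is a one-dimensional Krylov approximation restarted from the current error, which is exactly the interpolation-at-two-accumulation-points structure the paper analyzes.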