Results 1–10 of 24
Numerical solution of saddle point problems
 ACTA NUMERICA
, 2005
"... Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has b ..."
Abstract

Cited by 320 (25 self)
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for solving systems of this type. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
Recent computational developments in Krylov subspace methods for linear systems
 NUMER. LINEAR ALGEBRA APPL
, 2007
"... Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are metho ..."
Abstract

Cited by 86 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
Any Nonincreasing Convergence Curve is Possible for GMRES
 SIAM J. Matrix Anal. Appl
, 1996
"... Given a nonincreasing positive sequence, f(0) f(1) : : : f(n \Gamma 1) ? 0, it is shown that there exists an n by n matrix A and a vector r 0 with kr 0 k = f(0) such that f(k) = kr k k, k = 1; : : : ; n \Gamma 1, where r k is the residual at step k of the GMRES algorithm applied to the l ..."
Abstract

Cited by 67 (0 self)
Given a nonincreasing positive sequence f(0) ≥ f(1) ≥ … ≥ f(n−1) > 0, it is shown that there exists an n×n matrix A and a vector r_0 with ‖r_0‖ = f(0) such that f(k) = ‖r_k‖, k = 1, …, n−1, where r_k is the residual at step k of the GMRES algorithm applied to the linear system Ax = b, with initial residual r_0 = b − Ax_0. Moreover, the matrix A can be chosen to have any desired eigenvalues. 1 Introduction The GMRES algorithm [2] is a popular iterative technique for solving large sparse nonsymmetric (non-Hermitian) linear systems. Let A be an n×n nonsingular matrix and b an n-dimensional vector (both may be complex). To solve a linear system Ax = b, given an initial guess x_0 for the solution, the algorithm constructs successive approximations x_k, k = 1, 2, …, from the affine spaces x_0 + span{r_0, Ar_0, …, A^(k−1) r_0}, (1) Courant Institute of Mathematical Sciences, 251 Mercer St., New York, NY 10012...
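The residual norms ‖r_k‖ over the affine spaces (1) can be reproduced for small dense matrices by solving a least-squares problem over an explicitly formed Krylov basis; the sketch below does exactly that. It is an illustration only, not the paper's construction and not a practical GMRES (which uses the Arnoldi process for numerical stability); the helper name `gmres_residuals` is ours.

```python
import numpy as np

def gmres_residuals(A, b, x0=None):
    """Residual norms ||r_k|| of unrestarted GMRES, computed by dense
    least squares over the explicit Krylov basis {r0, A r0, ..., A^(k-1) r0}.
    Matches GMRES in exact arithmetic; illustration only."""
    n = len(b)
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    norms = [np.linalg.norm(r0)]
    K = r0.reshape(-1, 1)                    # Krylov basis, one column per power of A
    for k in range(1, n + 1):
        # x_k = x0 + K c minimizes ||b - A x_k|| over the affine space (1)
        c, *_ = np.linalg.lstsq(A @ K, r0, rcond=None)
        norms.append(np.linalg.norm(r0 - (A @ K) @ c))
        K = np.hstack([K, A @ K[:, -1:]])    # append the next power A^k r0
    return norms
```

By construction the returned sequence is nonincreasing, in line with the paper's premise that any nonincreasing positive sequence can be realized by a suitable A and r_0.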
GMRES/CR and Arnoldi/Lanczos as Matrix Approximation Problems
 SIAM J. SCI. COMPUT
"... The GMRES and Arnoldi algorithms, which reduce to the CR and Lanczos algorithms in the symmetric case, both minimize kp(A)bk over polynomials p of degree n. The difference is that p is normalized at z = 0 for GMRES and at z = 1 for Arnoldi. Analogous "ideal GMRES " and "ideal Arnoldi ..."
Abstract

Cited by 59 (7 self)
 Add to MetaCart
(Show Context)
The GMRES and Arnoldi algorithms, which reduce to the CR and Lanczos algorithms in the symmetric case, both minimize ‖p(A)b‖ over polynomials p of degree n. The difference is that p is normalized at z = 0 for GMRES and at z = ∞ for Arnoldi. Analogous "ideal GMRES" and "ideal Arnoldi" problems are obtained if one removes b from the discussion and minimizes ‖p(A)‖ instead. Investigation of these true and ideal approximation problems gives insight into how fast GMRES converges and how the Arnoldi iteration locates eigenvalues.
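The "true" approximation problem for GMRES can be checked numerically: since p(0) = 1 means p(A)b = b − Σ_{j=1}^k c_j A^j b, minimizing ‖p(A)b‖ over such polynomials is a least-squares problem in the coefficients c_j. The sketch below is our illustration, not code from the paper; `min_poly_residual` is a hypothetical helper name.

```python
import numpy as np

def min_poly_residual(A, b, k):
    """min ||p(A) b|| over polynomials p of degree <= k with p(0) = 1.
    Writing p(A) b = b - sum_{j=1}^k c_j A^j b turns this into a
    least-squares problem in the coefficients c_j."""
    v = b.copy()
    cols = []
    for _ in range(k):
        v = A @ v                       # builds A b, A^2 b, ..., A^k b
        cols.append(v)
    M = np.column_stack(cols)
    c, *_ = np.linalg.lstsq(M, b, rcond=None)
    return np.linalg.norm(b - M @ c)
```

In exact arithmetic this value equals the GMRES residual norm at step k (with x_0 = 0); the "ideal" variant minimizes ‖p(A)‖ over the same normalized polynomials, with b removed from the problem.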
Analysis of Acceleration Strategies for Restarted Minimal Residual Methods
, 2000
"... We provide an overview of existing strategies which compensate for the deterioration of convergence of minimum residual (MR) Krylov subspace methods due to restarting. We evaluate the popular practice of using nearly invariant subspaces to either augment Krylov subspaces or to construct precondit ..."
Abstract

Cited by 44 (6 self)
We provide an overview of existing strategies which compensate for the deterioration of convergence of minimum residual (MR) Krylov subspace methods due to restarting. We evaluate the popular practice of using nearly invariant subspaces to either augment Krylov subspaces or to construct preconditioners which invert on these subspaces. In the case where these spaces are exactly invariant, the augmentation approach is shown to be superior. We further show how a strategy recently introduced by de Sturler for truncating the approximation space of an MR method can be interpreted as a controlled loosening of the condition for global MR approximation based on the canonical angles between subspaces. For the special case of Krylov subspace methods, we give a concise derivation of the role of Ritz and harmonic Ritz values and vectors in the polynomial description of Krylov spaces as well as of the use of the implicitly updated Arnoldi method for manipulating Krylov spaces.
Geometric Aspects in the Theory of Krylov Subspace Methods
 Acta Numerica
, 1999
"... The recent development of Krylov subspace methods for the solution of operator equations has shown that two basic construction principles, the orthogonal residual (OR) and minimal residual (MR) approaches, underlie the most commonly used algorithms. It is shown that these can both be formulated ..."
Abstract

Cited by 43 (2 self)
The recent development of Krylov subspace methods for the solution of operator equations has shown that two basic construction principles, the orthogonal residual (OR) and minimal residual (MR) approaches, underlie the most commonly used algorithms. It is shown that these can both be formulated as techniques for solving an approximation problem on a sequence of nested subspaces of a Hilbert space, a problem not necessarily related to an operator equation. Most of the familiar Krylov subspace algorithms result when these subspaces form a Krylov sequence. The well-known relations among the iterates and residuals of OR/MR pairs are shown to hold also in this rather general setting. We further show that a common error analysis for these methods involving the canonical angles between subspaces allows many of the recently developed error bounds to be derived in a simple manner. An application of this analysis to compact perturbations of the identity shows that OR/MR pairs of Krylov subspace methods converge q-superlinearly when applied to such operator equations.
Minimal Residual Method Stronger Than Polynomial Preconditioning
, 1994
"... . This paper compares the convergence behavior of two popular iterative methods for solving systems of linear equations: the sstep restarted minimal residual method (commonly implemented by algorithms such as GMRES(s)), and (s \Gamma 1)degree polynomial preconditioning. It is known that for normal ..."
Abstract

Cited by 24 (1 self)
This paper compares the convergence behavior of two popular iterative methods for solving systems of linear equations: the s-step restarted minimal residual method (commonly implemented by algorithms such as GMRES(s)), and (s−1)-degree polynomial preconditioning. It is known that for normal matrices, and in particular for symmetric positive definite matrices, the convergence bounds for the two methods are the same. In this paper we demonstrate that for matrices unitarily equivalent to an upper triangular Toeplitz matrix, a similar result holds, namely, either both methods converge or both fail to converge. However, we show this result cannot be generalized to all matrices. Specifically, we develop a method, based on convexity properties of the generalized field of values of powers of the iteration matrix, to obtain examples of real matrices for which GMRES(s) converges for every initial vector, but every (s−1)-degree polynomial preconditioning stagnates or diverges for...
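The restarted method GMRES(s) discussed above can be sketched compactly: run s minimal-residual steps over the Krylov space of the current residual, update the iterate, and restart. The version below uses dense least squares over an explicit Krylov basis rather than the Arnoldi process, so it illustrates the restart mechanism only; `gmres_restarted` is our name, not the paper's.

```python
import numpy as np

def gmres_restarted(A, b, s, cycles=50, tol=1e-10):
    """GMRES(s) sketch: each cycle minimizes the residual over the
    s-dimensional Krylov space of the current residual, then restarts
    from the updated iterate, discarding the subspace."""
    x = np.zeros_like(b)
    for _ in range(cycles):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        # Krylov basis r, A r, ..., A^(s-1) r for this cycle
        K = np.column_stack([np.linalg.matrix_power(A, j) @ r for j in range(s)])
        c, *_ = np.linalg.lstsq(A @ K, r, rcond=None)
        x = x + K @ c                   # restart: keep only the iterate
    return x
```

For symmetric positive definite A this converges for any s ≥ 1; the paper's point is that for some nonnormal matrices GMRES(s) converges while every (s−1)-degree polynomial preconditioner fails.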
How Descriptive Are GMRES Convergence Bounds?
 Oxford University Computing Laboratory
, 1999
"... . Eigenvalues with the eigenvector condition number, the field of values, and pseudospectra have all been suggested as the basis for convergence bounds for minimum residual Krylov subspace methods applied to nonnormal coefficient matrices. This paper analyzes and compares these bounds, illustrating ..."
Abstract

Cited by 17 (1 self)
Eigenvalues with the eigenvector condition number, the field of values, and pseudospectra have all been suggested as the basis for convergence bounds for minimum residual Krylov subspace methods applied to nonnormal coefficient matrices. This paper analyzes and compares these bounds, illustrating with six examples the success and failure of each one. Refined bounds based on eigenvalues and the field of values are suggested to handle low-dimensional nonnormality. It is observed that pseudospectral bounds can capture multiple convergence stages. Unfortunately, computation of pseudospectra can be rather expensive. This motivates an adaptive technique for estimating GMRES convergence based on approximate pseudospectra taken from the Arnoldi process that is the basis for GMRES. Key words. Krylov subspace methods, GMRES convergence, nonnormal matrices, pseudospectra, field of values AMS subject classifications. 15A06, 65F10, 15A18, 15A60, 31A15 1. Introduction. Popular algorithms for...
Iterative Solution of Linear Systems in the 20th Century
 JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS
, 2000
"... This paper sketches the main research developments in the area of iterative methods for solving linear systems during the 20th century. Although iterative methods for solving linear systems find their origin in the early nineteenth century (work by Gauss),the field has seen an explosion of activity ..."
Abstract

Cited by 14 (0 self)
This paper sketches the main research developments in the area of iterative methods for solving linear systems during the 20th century. Although iterative methods for solving linear systems find their origin in the early nineteenth century (work by Gauss), the field has seen an explosion of activity spurred by demand due to extraordinary technological advances in engineering and sciences. The past five decades have been particularly rich in new developments, ending with the availability of a large toolbox of specialized algorithms for solving the very large problems which arise in scientific and industrial computational models. As in any other scientific area, research in iterative methods has been a journey characterized by a chain of contributions building on each other. It is the aim of this paper not only to sketch the most significant of these contributions during the past century, but also to relate them to one another.
Computable convergence bounds for GMRES
, 2000
"... The purpose of this paper is to derive new computable convergence bounds for GMRES. The new bounds depend on the initial guess and are thus conceptually different from standard âworstcaseâ bounds. Most importantly, approximations to the new bounds can be computed from information generated dur ..."
Abstract

Cited by 11 (2 self)
The purpose of this paper is to derive new computable convergence bounds for GMRES. The new bounds depend on the initial guess and are thus conceptually different from standard "worst-case" bounds. Most importantly, approximations to the new bounds can be computed from information generated during the run of a certain GMRES implementation. The approximations allow predictions of how the algorithm will perform. Heuristics for such predictions are given. Numerical experiments illustrate the behavior of the new bounds as well as the use of the heuristics.