Results 1–10 of 22
Numerical solution of saddle point problems
 Acta Numerica
, 2005
"... Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has b ..."
Abstract

Cited by 180 (30 self)
 Add to MetaCart
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for solving systems of this type. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
Any Nonincreasing Convergence Curve is Possible for GMRES
 SIAM J. Matrix Anal. Appl.
, 1996
"... Given a nonincreasing positive sequence, f(0) f(1) : : : f(n \Gamma 1) ? 0, it is shown that there exists an n by n matrix A and a vector r 0 with kr 0 k = f(0) such that f(k) = kr k k, k = 1; : : : ; n \Gamma 1, where r k is the residual at step k of the GMRES algorithm applied to the l ..."
Abstract

Cited by 49 (0 self)
 Add to MetaCart
Given a nonincreasing positive sequence f(0) ≥ f(1) ≥ … ≥ f(n−1) > 0, it is shown that there exists an n × n matrix A and a vector r_0 with ‖r_0‖ = f(0) such that f(k) = ‖r_k‖, k = 1, …, n−1, where r_k is the residual at step k of the GMRES algorithm applied to the linear system Ax = b, with initial residual r_0 = b − Ax_0. Moreover, the matrix A can be chosen to have any desired eigenvalues. The GMRES algorithm [2] is a popular iterative technique for solving large sparse nonsymmetric (non-Hermitian) linear systems. Let A be an n × n nonsingular matrix and b an n-dimensional vector (both may be complex). To solve a linear system Ax = b, given an initial guess x_0 for the solution, the algorithm constructs successive approximations x_k, k = 1, 2, …, from the affine spaces x_0 + span{r_0, Ar_0, …, A^{k−1} r_0} (1) ...
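The monotone residual curve discussed above is easy to observe numerically. The following is a minimal Arnoldi-based GMRES sketch (an illustrative helper, not the paper's construction) that records ‖r_k‖ for x_0 = 0; the curve is nonincreasing because each step minimizes the residual over a growing Krylov space.

```python
import numpy as np

def gmres_residual_norms(A, b, m):
    """Arnoldi-based GMRES with x_0 = 0: returns [||r_0||, ..., ||r_m||].

    At step k the residual norm equals min_y || beta*e_1 - H_{k+1,k} y ||,
    where H is the Arnoldi Hessenberg matrix and beta = ||b||.
    Illustrative sketch only: dense A, no reorthogonalization.
    """
    n = b.size
    beta = np.linalg.norm(b)
    Q = np.zeros((n, m + 1))
    Q[:, 0] = b / beta
    H = np.zeros((m + 1, m))
    norms = [beta]
    for k in range(m):
        v = A @ Q[:, k]
        for j in range(k + 1):                  # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        Hk = H[:k + 2, :k + 1]
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(Hk, e1, rcond=None)
        norms.append(np.linalg.norm(e1 - Hk @ y))
        if H[k + 1, k] < 1e-14:                 # happy breakdown: exact solve
            break
        Q[:, k + 1] = v / H[k + 1, k]
    return norms

# Small test matrix with distinct eigenvalues 1..5 (hypothetical example data)
A = np.diag(np.arange(1.0, 6.0)) + np.triu(0.1 * np.ones((5, 5)), 1)
b = np.ones(5)
norms = gmres_residual_norms(A, b, 5)
```

Since A is 5 × 5 and nonsingular, full GMRES reaches a (numerically) zero residual within 5 steps, consistent with the theory.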
Recent computational developments in Krylov subspace methods for linear systems
 Numer. Linear Algebra Appl.
, 2007
"... Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are metho ..."
Abstract

Cited by 48 (12 self)
 Add to MetaCart
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
GMRES/CR and Arnoldi/Lanczos as Matrix Approximation Problems
 SIAM J. Sci. Comput.
"... The GMRES and Arnoldi algorithms, which reduce to the CR and Lanczos algorithms in the symmetric case, both minimize kp(A)bk over polynomials p of degree n. The difference is that p is normalized at z = 0 for GMRES and at z = 1 for Arnoldi. Analogous "ideal GMRES " and "ideal Arnoldi" problems are ..."
Abstract

Cited by 47 (7 self)
 Add to MetaCart
The GMRES and Arnoldi algorithms, which reduce to the CR and Lanczos algorithms in the symmetric case, both minimize ‖p(A)b‖ over polynomials p of degree n. The difference is that p is normalized at z = 0 for GMRES and at z = ∞ for Arnoldi. Analogous "ideal GMRES" and "ideal Arnoldi" problems are obtained if one removes b from the discussion and minimizes ‖p(A)‖ instead. Investigation of these true and ideal approximation problems gives insight into how fast GMRES converges and how the Arnoldi iteration locates eigenvalues.
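The normalization p(0) = 1 means the GMRES residual at step k is the minimum of ‖p(A)b‖ over such polynomials; writing p(z) = 1 − z·q(z) turns this into an ordinary least-squares problem over the vectors Ab, A²b, …, A^k b. The sketch below (a hypothetical helper using a naive power basis, fine only for tiny examples) makes that equivalence concrete.

```python
import numpy as np

def gmres_poly_norm(A, b, k):
    """min ||p(A) b|| over polynomials of degree <= k with p(0) = 1.

    With p(z) = 1 - z*q(z) this becomes the least-squares problem
    min_c || b - [A b, A^2 b, ..., A^k b] c ||.  The power basis is
    ill-conditioned, so this is illustrative only, not a real solver.
    """
    K = np.column_stack([np.linalg.matrix_power(A, j + 1) @ b
                         for j in range(k)])
    c, *_ = np.linalg.lstsq(K, b, rcond=None)
    return np.linalg.norm(b - K @ c)

# Hypothetical example: eigenvalues 1, 2, 3, so a degree-3 polynomial
# with p(0) = 1 can vanish on the whole spectrum.
A = np.diag([1.0, 2.0, 3.0])
b = np.ones(3)
vals = [gmres_poly_norm(A, b, k) for k in range(1, 4)]
```

The minima are nonincreasing in the degree k, and for this 3 × 3 example the degree-3 value is zero up to rounding, since p(z) = (1 − z)(1 − z/2)(1 − z/3) annihilates b.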
Analysis of Acceleration Strategies for Restarted Minimal Residual Methods
, 2000
"... We provide an overview of existing strategies which compensate for the deterioration of convergence of minimum residual (MR) Krylov subspace methods due to restarting. We evaluate the popular practice of using nearly invariant subspaces to either augment Krylov subspaces or to construct precondit ..."
Abstract

Cited by 31 (6 self)
 Add to MetaCart
We provide an overview of existing strategies which compensate for the deterioration of convergence of minimum residual (MR) Krylov subspace methods due to restarting. We evaluate the popular practice of using nearly invariant subspaces to either augment Krylov subspaces or to construct preconditioners which invert on these subspaces. In the case where these spaces are exactly invariant, the augmentation approach is shown to be superior. We further show how a strategy recently introduced by de Sturler for truncating the approximation space of an MR method can be interpreted as a controlled loosening of the condition for global MR approximation based on the canonical angles between subspaces. For the special case of Krylov subspace methods, we give a concise derivation of the role of Ritz and harmonic Ritz values and vectors in the polynomial description of Krylov spaces as well as of the use of the implicitly updated Arnoldi method for manipulating Krylov spaces.
Geometric Aspects in the Theory of Krylov Subspace Methods
 Acta Numerica
, 1999
"... The recent development of Krylov subspace methods for the solution of operator equations has shown that two basic construction principles, the orthogonal residual (OR) and minimal residual (MR) approaches, underlie the most commonly used algorithms. It is shown that these can both be formulated ..."
Abstract

Cited by 29 (2 self)
 Add to MetaCart
The recent development of Krylov subspace methods for the solution of operator equations has shown that two basic construction principles, the orthogonal residual (OR) and minimal residual (MR) approaches, underlie the most commonly used algorithms. It is shown that these can both be formulated as techniques for solving an approximation problem on a sequence of nested subspaces of a Hilbert space, a problem not necessarily related to an operator equation. Most of the familiar Krylov subspace algorithms result when these subspaces form a Krylov sequence. The well-known relations among the iterates and residuals of OR/MR pairs are shown to hold also in this rather general setting. We further show that a common error analysis for these methods involving the canonical angles between subspaces allows many of the recently developed error bounds to be derived in a simple manner. An application of this analysis to compact perturbations of the identity shows that OR/MR pairs of Krylov subspace methods converge q-superlinearly when applied to such operator equations.
Minimal Residual Method Stronger Than Polynomial Preconditioning
, 1994
"... . This paper compares the convergence behavior of two popular iterative methods for solving systems of linear equations: the sstep restarted minimal residual method (commonly implemented by algorithms such as GMRES(s)), and (s \Gamma 1)degree polynomial preconditioning. It is known that for normal ..."
Abstract

Cited by 22 (1 self)
 Add to MetaCart
This paper compares the convergence behavior of two popular iterative methods for solving systems of linear equations: the s-step restarted minimal residual method (commonly implemented by algorithms such as GMRES(s)), and (s−1)-degree polynomial preconditioning. It is known that for normal matrices, and in particular for symmetric positive definite matrices, the convergence bounds for the two methods are the same. In this paper we demonstrate that for matrices unitarily equivalent to an upper triangular Toeplitz matrix, a similar result holds, namely, either both methods converge or both fail to converge. However, we show this result cannot be generalized to all matrices. Specifically, we develop a method, based on convexity properties of the generalized field of values of powers of the iteration matrix, to obtain examples of real matrices for which GMRES(s) converges for every initial vector, but every (s−1)-degree polynomial preconditioning stagnates or diverges for ...
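GMRES(s) discards its Krylov space and restarts from the current residual every s steps. A compact restarted minimal-residual sketch (power basis for brevity, not the Arnoldi implementation, and not the paper's counterexample construction) shows the cycle structure:

```python
import numpy as np

def restarted_mr(A, b, s, cycles):
    """GMRES(s)-style restarted minimal-residual sketch.

    Each cycle minimizes ||r - A K c|| over the Krylov basis
    K = [r, A r, ..., A^{s-1} r] of the current residual r, then
    restarts.  Power basis is used only for readability; a real
    GMRES(s) builds an orthonormal Arnoldi basis instead.
    """
    x = np.zeros_like(b)
    history = [np.linalg.norm(b)]
    for _ in range(cycles):
        r = b - A @ x
        K = np.column_stack([np.linalg.matrix_power(A, j) @ r
                             for j in range(s)])
        c, *_ = np.linalg.lstsq(A @ K, r, rcond=None)
        x = x + K @ c
        history.append(np.linalg.norm(b - A @ x))
    return x, history

# Hypothetical test matrix: positive-definite symmetric part, so
# GMRES(s) converges for every s >= 1.
A = np.diag([2.0, 3.0, 4.0, 5.0]) + np.triu(0.2 * np.ones((4, 4)), 1)
b = np.ones(4)
x, history = restarted_mr(A, b, 2, 15)
```

Each cycle cannot increase the residual norm (c = 0 is always feasible), so the history is nonincreasing; whether it actually converges depends on A, which is exactly the question the paper studies.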
Iterative Solution of Linear Systems in the 20th Century
 Journal of Computational and Applied Mathematics
, 2000
"... This paper sketches the main research developments in the area of iterative methods for solving linear systems during the 20th century. Although iterative methods for solving linear systems find their origin in the early nineteenth century (work by Gauss),the field has seen an explosion of activity ..."
Abstract

Cited by 9 (0 self)
 Add to MetaCart
This paper sketches the main research developments in the area of iterative methods for solving linear systems during the 20th century. Although iterative methods for solving linear systems find their origin in the early nineteenth century (work by Gauss), the field has seen an explosion of activity spurred by demand due to extraordinary technological advances in engineering and sciences. The past five decades have been particularly rich in new developments, ending with the availability of a large toolbox of specialized algorithms for solving the very large problems which arise in scientific and industrial computational models. As in any other scientific area, research in iterative methods has been a journey characterized by a chain of contributions building on each other. It is the aim of this paper not only to sketch the most significant of these contributions during the past century, but also to relate them to one another.
Computable convergence bounds for GMRES
 SIAM Journal on Matrix Analysis and Applications
, 1998
"... The main purpose of this paper is the derivation of computable bounds on the residual norms of (full) GMRES. The new bounds depend on the initial guess and thus are conceptually different from standard 'worstcase' bounds. The analysis is valid for nonsingular linear systems and for any singular lin ..."
Abstract

Cited by 8 (2 self)
 Add to MetaCart
The main purpose of this paper is the derivation of computable bounds on the residual norms of (full) GMRES. The new bounds depend on the initial guess and thus are conceptually different from standard 'worst-case' bounds. The analysis is valid for nonsingular linear systems and for any singular linear system, provided a certain condition on the initial residual is satisfied. It is shown that approximations to all factors in the new bounds can be obtained early in the GMRES run. The approximations serve to predict the convergence behavior of GMRES in later phases of the iteration. Numerical examples demonstrate that the new bounds are capable of describing the actual convergence behavior of GMRES for the given linear system and initial guess.

Key words: linear systems, convergence analysis, GMRES method, Krylov subspace methods, iterative methods. AMS Subject Classifications: 65F10, 65F15, 65F50, 65N12, 65N15.
Construction and Analysis of Polynomial Iterative Methods for Non-Hermitian Systems of Linear Equations
, 1998
"... apier nach 1 ISO 9706 Contents 1 Introduction 7 1.1 What is a PIM? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.2 Different types of PIMs . . . . . . . . . . . . . . . . . . . . . . . 8 1.3 Organization and summary of our results . . . . . . . . . . . . . 9 2 Background 13 2.1 Krylo ..."
Abstract

Cited by 8 (6 self)
 Add to MetaCart
Contents:
1 Introduction
  1.1 What is a PIM?
  1.2 Different types of PIMs
  1.3 Organization and summary of our results
2 Background
  2.1 Krylov spaces and the Arnoldi process
  2.2 Exterior mapping functions and Faber polynomials
  2.3 Inclusion sets and asymptotic analysis
3 Inclusion sets generated by the conformal 'bratwurst' maps
  3.1 Derivation of the maps
  3.2 Definition and properties of the 'bratwurst' shape sets
  3.3 Numerical examples
4 The hybrid ABF method for non-Hermitian linear systems
  4.1 Faber polynomials for the inclusion sets ...