Results 1–10 of 358
Krylov Projection Methods For Model Reduction
, 1997
Abstract

Cited by 213 (3 self)
This dissertation focuses on efficiently forming reduced-order models for large, linear dynamic systems. Projections onto unions of Krylov subspaces lead to a class of reduced-order models known as rational interpolants. The cornerstone of this dissertation is a collection of theory relating Krylov projection to rational interpolation. Based on this theoretical framework, three algorithms for model reduction are proposed. The first algorithm, dual rational Arnoldi, is a numerically reliable approach involving orthogonal projection matrices. The second, rational Lanczos, is an efficient generalization of existing Lanczos-based methods. The third, rational power Krylov, avoids orthogonalization and is suited for parallel or approximate computations. The performance of the three algorithms is compared via a combination of theory and examples. Independent of the precise algorithm, a host of supporting tools are also developed to form a complete model-reduction package. Techniques for choosing the matching frequencies, estimating the modeling error, ensuring the model's stability, treating multiple-input multiple-output systems, implementing parallelism, and avoiding a need for exact factors of large matrix pencils are all examined to various degrees.
Jacobian-free Newton-Krylov methods: a survey of approaches and applications
 J. Comput. Phys
Abstract

Cited by 204 (6 self)
Jacobian-free Newton-Krylov (JFNK) methods are synergistic combinations of Newton-type methods for superlinearly convergent solution of nonlinear equations and Krylov subspace methods for solving the Newton correction equations. The link between the two methods is the Jacobian-vector product, which may be probed approximately without forming and storing the elements of the true Jacobian, through a variety of means. Various approximations to the Jacobian matrix may still be required for preconditioning the resulting Krylov iteration. As with Krylov methods for linear problems, successful application of the JFNK method to any given problem is dependent on adequate preconditioning. JFNK has potential for application throughout problems governed by nonlinear partial differential equations and integro-differential equations. In this survey article we place JFNK in context with other nonlinear solution algorithms for both boundary value problems (BVPs) and initial value problems (IVPs). We provide an overview of the mechanics of JFNK and attempt to illustrate the wide variety of preconditioning options available. It is emphasized that JFNK can be wrapped (as an accelerator) around another nonlinear fixed-point method (interpreted as a preconditioning process, potentially with significant code reuse). The aim of this article is not to trace fully the evolution of JFNK, nor to provide proofs of accuracy or optimal convergence for all of the constituent methods, but rather to present the reader with a perspective on how JFNK may be applicable to problems of physical interest and to provide sources of further practical information. A review paper solicited by the Editor-in-Chief of the Journal of Computational Physics.
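A minimal sketch of the matrix-free idea the survey describes, with a made-up toy residual F and SciPy's GMRES as the Krylov solver (none of this is the survey's own code, and no preconditioner is used here):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):
    # Made-up nonlinear residual for illustration: F(u) = u^3 + 2u - 1
    return u**3 + 2.0 * u - 1.0

def jfnk_solve(F, u0, tol=1e-10, max_newton=20):
    """Newton iteration whose correction equation is solved by matrix-free GMRES."""
    u = np.array(u0, dtype=float)
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(u))
        # The Jacobian-vector product is probed by finite differences,
        # J(u) v ~ (F(u + eps*v) - F(u)) / eps, so J is never formed or stored
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v: (F(u + eps * v) - r) / eps)
        du, info = gmres(J, -r, atol=1e-12)
        u = u + du
    return u

u = jfnk_solve(F, np.zeros(4))
```

The inner GMRES only ever calls the finite-difference matvec, which is the defining feature of JFNK; in practice an approximate Jacobian would still be supplied to GMRES as a preconditioner, as the abstract notes.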
Preconditioning techniques for large linear systems: A survey
 J. COMPUT. PHYS
, 2002
Abstract

Cited by 192 (5 self)
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
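As one concrete instance of the incomplete-factorization techniques the survey covers, the sketch below preconditions GMRES with an incomplete LU factorization. The 2D Poisson test matrix and the drop parameters are illustrative choices, and SciPy's spilu stands in for a production ILU code:

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spilu, LinearOperator, gmres

# 2D Poisson matrix as a stand-in for a general sparse system (illustrative choice)
n = 30
T = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n))
E = diags([-1.0, -1.0], [-1, 1], shape=(n, n))
A = (kron(identity(n), T) + kron(E, identity(n))).tocsc()
b = np.ones(n * n)

# Incomplete LU factors of A applied as the preconditioner, M^{-1} ~ A^{-1}
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(A.shape, matvec=ilu.solve)

x, info = gmres(A, b, M=M, atol=1e-12, restart=30)
```

Tightening drop_tol trades preconditioner quality against memory, which is exactly the kind of trade-off the survey discusses.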
Deflated and augmented Krylov subspace techniques
 Numer. Linear Algebra Appl
, 1996
Abstract

Cited by 73 (11 self)
We present a general framework for a number of techniques based on projection methods on 'augmented Krylov subspaces'. These methods include the deflated GMRES algorithm, an inner-outer FGMRES iteration algorithm, and the class of block Krylov methods. Augmented Krylov subspace methods often show a significant improvement in convergence rate when compared with their standard counterparts using the subspaces of the same dimension. The methods can all be implemented with a variant of the FGMRES algorithm.

KEY WORDS: deflated GMRES; inner-iteration GMRES; block GMRES; augmented Krylov subspace; flexible GMRES

1 Introduction

There are three techniques which are sometimes used to enhance the robustness of Krylov subspace methods. The first is to exploit block versions of Krylov subspace methods. These block methods are known to be generally more reliable than their scalar equivalents, mainly because they tend to better accommodate clustering of eigenvalues around zero. The second techni...
Theory of inexact Krylov subspace methods and applications to scientific computing
, 2003
Abstract

Cited by 72 (7 self)
We provide a general framework for the understanding of inexact Krylov subspace methods for the solution of symmetric and nonsymmetric linear systems of equations, as well as for certain eigenvalue calculations. This framework allows us to explain the empirical results reported in a series of CERFACS technical reports by Bouras, Frayssé, and Giraud in 2000. Furthermore, assuming exact arithmetic, our analysis can be used to produce computable criteria to bound the inexactness of the matrix-vector multiplication in such a way as to maintain the convergence of the Krylov subspace method. The theory developed is applied to several problems including the solution of Schur complement systems, linear systems which depend on a parameter, and eigenvalue problems. Numerical experiments for some of these scientific applications are reported.
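A toy experiment in the spirit of those empirical results (the matrix, the perturbation model, and the bound delta are made-up choices, not the paper's computable criteria): every matrix-vector product is deliberately computed only approximately, yet GMRES still converges:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 100
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
delta = 1e-8   # allowed relative error per product (made-up bound)

def inexact_matvec(v):
    # Exact product plus a controlled random perturbation, mimicking a
    # matrix-vector multiplication that is only computed approximately
    e = rng.standard_normal(n)
    w = A @ v
    return w + delta * np.linalg.norm(w) * e / np.linalg.norm(e)

A_inexact = LinearOperator((n, n), matvec=inexact_matvec)
x, info = gmres(A_inexact, b, atol=1e-12, restart=n)
```

With the perturbation kept small the computed x still solves the exact system to well below the solver tolerance; the paper's point is that the admissible perturbation can even be allowed to grow as the residual shrinks.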
Restarted GMRES preconditioned by deflation
 Journal of Computational and Applied Mathematics
, 1995
Abstract

Cited by 70 (7 self)
This paper presents a new preconditioning technique for the restarted GMRES algorithm. It is based on an invariant subspace approximation which is updated at each cycle. Numerical examples show that this deflation technique gives a more robust scheme than the restarted algorithm, at a low cost in operations and memory.

Keywords: GMRES, preconditioning, invariant subspace, deflation. Subject Classification: 65F10, 65F15
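A hedged sketch of the deflation idea on a symmetric model problem: the paper updates an invariant-subspace approximation at each restart cycle, whereas here the subspace is computed once with eigsh purely for illustration, and all problem sizes are made-up choices:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh, LinearOperator, gmres

n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Approximate invariant subspace: eigenvectors for the k smallest eigenvalues
k = 10
lam, U = eigsh(A, k=k, sigma=0)

def deflate(r):
    # M^{-1} r: solve exactly on span(U) and pass the complement through,
    # so the k smallest eigenvalues of the preconditioned matrix move to 1
    c = U.T @ r
    return U @ (c / lam) + (r - U @ c)

M = LinearOperator((n, n), matvec=deflate)
x, info = gmres(A, b, M=M, atol=1e-12, restart=30)
```

Removing the smallest eigenvalues from the effective spectrum is what makes the restarted iteration robust: the restart cycles no longer stall on the slowly converging low-frequency modes.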
Flexible conjugate gradients
 SIAM J. Sci. Comput
, 2000
Abstract

Cited by 64 (8 self)
We analyze the conjugate gradient (CG) method with preconditioning slightly variable from one iteration to the next. To maintain the optimal convergence properties, we consider a variant proposed by Axelsson that performs an explicit orthogonalization of the search direction vectors. For this method, which we refer to as flexible CG, we develop a theoretical analysis showing that the convergence rate is essentially independent of the variations in the preconditioner as long as the latter are kept sufficiently small. We further discuss the real convergence rate on the basis of some heuristic arguments supported by numerical experiments. Depending on the eigenvalue distribution corresponding to the fixed reference preconditioner, several situations have to be distinguished. In some cases, the convergence is as fast with truncated versions of the algorithm or even with the standard CG method, whereas quite large variations are allowed without too much penalty. In other cases, the flexible variant effectively outperforms the standard method, while the need for truncation limits the size of the variations that can be reasonably allowed.
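A minimal flexible-CG sketch under stated assumptions: a dense made-up SPD test matrix, two slightly different Jacobi-like preconditioners alternated between iterations, and full (untruncated) explicit A-orthogonalization of the search directions. This illustrates the idea, not the paper's implementation:

```python
import numpy as np

def flexible_cg(A, b, preconditioners, tol=1e-10, max_iter=200):
    """CG allowing the preconditioner to change every iteration; the search
    directions are explicitly A-orthogonalized against all earlier ones."""
    x = np.zeros_like(b)
    r = b.copy()
    P, AP, PAP = [], [], []              # directions p, products A*p, and p^T A p
    for i in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        z = preconditioners[i % len(preconditioners)](r)
        p = z.copy()
        for pj, Apj, pApj in zip(P, AP, PAP):
            p -= (z @ Apj) / pApj * pj   # explicit A-orthogonalization
        Ap = A @ p
        pAp = p @ Ap
        alpha = (r @ p) / pAp
        x += alpha * p
        r -= alpha * Ap
        P.append(p); AP.append(Ap); PAP.append(pAp)
    return x

# SPD test matrix with condition number 100 and two slightly different
# Jacobi-like preconditioners (all made-up illustrative choices)
rng = np.random.default_rng(0)
n = 60
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(np.linspace(1.0, 100.0, n)) @ Q.T
b = rng.standard_normal(n)
d = np.diag(A).copy()
precs = [lambda r: r / d, lambda r: r / (1.01 * d)]
x = flexible_cg(A, b, precs)
```

With a fixed preconditioner the explicit orthogonalization reduces to standard CG's three-term recurrence; it is only the variation between iterations that makes storing and orthogonalizing against all previous directions (or a truncated window of them) necessary.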
On a class of preconditioners for solving the Helmholtz equation
, 2004
Abstract

Cited by 62 (11 self)
In 1983, a preconditioner was proposed [J. Comput. Phys. 49 (1983) 443] based on the Laplace operator for solving the discrete Helmholtz equation efficiently with CGNR. The preconditioner is especially effective for low-wavenumber cases where the linear system is slightly indefinite. Laird [Preconditioned iterative solution of the 2D Helmholtz equation, First Year's Report, St. Hugh's College, Oxford, 2001] proposed a preconditioner where an extra term is added to the Laplace operator. This term is similar to the zeroth-order term in the Helmholtz equation but with reversed sign. In this paper, both approaches are further generalized to a new class of preconditioners, the so-called "shifted Laplace" preconditioners of the form Δφ − αk²φ with α ∈ ℂ. Numerical experiments for various wavenumbers indicate the effectiveness of the preconditioner. The preconditioner is evaluated in combination with ...
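To see why a complex shift helps, one can inspect the spectrum of the preconditioned matrix M⁻¹A, which the shift maps onto a circle bounded away from the origin. This is an illustrative 1D sketch, not the paper's experiments; the grid size n, wavenumber k, and shift alpha are made-up choices:

```python
import numpy as np
from scipy.linalg import solve, eigvals

n = 150
h = 1.0 / (n + 1)
L = (np.diag(2.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1)) / h**2       # 1D negative Laplacian -d^2/dx^2
k = 10.0                                          # wavenumber (illustrative)
A = L - k**2 * np.eye(n)                          # discrete Helmholtz, mildly indefinite
alpha = 1.0 - 0.5j                                # complex shift (made-up choice)
M = L - alpha * k**2 * np.eye(n)                  # shifted-Laplace preconditioner

# Eigenvalues of M^{-1} A are (lam_j - k^2) / (lam_j - alpha*k^2) for the
# Laplacian eigenvalues lam_j; the complex shift keeps them away from zero
ev = eigvals(solve(M, A.astype(complex)))
radius_min, radius_max = abs(ev).min(), abs(ev).max()
```

For these parameters the eigenvalue moduli come out roughly in [0.2, 1), a well-clustered spectrum for a Krylov solver, even though A itself is indefinite; with a real shift (alpha real) some eigenvalues can approach the origin.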
GMRES with deflated restarting
 SIAM J. Sci. Comput
Abstract

Cited by 60 (8 self)
A modification is given of the GMRES iterative method for nonsymmetric systems of linear equations. The new method deflates eigenvalues using Wu and Simon's thick restarting approach. It has the efficiency of implicit restarting, but is simpler and does not have the same numerical concerns. The deflation of small eigenvalues can greatly improve the convergence of restarted GMRES. Also, it is demonstrated that using harmonic Ritz vectors is important, because then the whole subspace is a Krylov subspace that contains certain important smaller subspaces.