Results 1–10 of 33
An efficient preconditioned CG method for the solution of a class of layered problems with extreme contrasts in the coefficients
J. Comput. Phys., 1999
"... Knowledge of fluid pressure is important to predict the presence of oil and gas in reservoirs. A mathematical model for the prediction of fluid pressures is given by a timedependent diffusion equation. Application of the finite element method leads to a system of linear equations. A complication is ..."
Abstract

Cited by 39 (17 self)
 Add to MetaCart
Knowledge of fluid pressure is important to predict the presence of oil and gas in reservoirs. A mathematical model for the prediction of fluid pressures is given by a time-dependent diffusion equation. Application of the finite element method leads to a system of linear equations. A complication is that the underground consists of layers with very large differences in permeability. This implies that the symmetric and positive definite coefficient matrix has a very large condition number. Bad convergence behavior of the CG method has been observed; moreover, a classical termination criterion is not valid in this problem. After diagonal scaling of the matrix the number of extreme eigenvalues is reduced, and it is proved to be equal to the number of layers with a high permeability. For the IC preconditioner the same behavior is observed. To annihilate the effect of the extreme eigenvalues a deflated CG method is used. The convergence rate improves considerably and the termination criterion becomes reliable again. Finally, a cheap approximation of the eigenvectors is proposed. © 1999 Academic Press. Key Words: porous media; preconditioned conjugate gradients; deflation; Poisson equation; discontinuous coefficients across layers; eigenvectors; finite element method.
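The role of diagonal scaling described above can be illustrated with a minimal pure-Python sketch. The 1-D "layered" matrix, the permeability values, and all helper names below are illustrative assumptions, not the paper's actual setup, and the deflation step itself is omitted; the sketch only shows plain CG applied to the Jacobi-scaled system.

```python
def matvec(A, x):
    """Dense matrix-vector product."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cg(A, b, tol=1e-11, maxit=500):
    """Plain conjugate gradients for a symmetric positive definite A."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual of the zero initial guess
    p = r[:]
    rs = dot(r, r)
    it = 0
    while rs ** 0.5 > tol and it < maxit:
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
        it += 1
    return x, it

# Illustrative 1-D diffusion matrix from a permeability field with a
# 1e6 contrast between two layers (Dirichlet boundaries).
k = [1.0] * 4 + [1e6] * 4          # layer permeabilities
n = len(k) - 1
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = k[i] + k[i + 1]
    if i + 1 < n:
        A[i][i + 1] = A[i + 1][i] = -k[i + 1]
b = [1.0] * n

# Jacobi (diagonal) scaling: solve D^{-1/2} A D^{-1/2} y = D^{-1/2} b,
# then recover x = D^{-1/2} y.
d = [A[i][i] ** -0.5 for i in range(n)]
As = [[d[i] * A[i][j] * d[j] for j in range(n)] for i in range(n)]
bs = [d[i] * b[i] for i in range(n)]
y, its = cg(As, bs)
x = [d[i] * y[i] for i in range(n)]
res = max(abs(ri - bi) for ri, bi in zip(matvec(A, x), b))
```

Per the abstract, scaling alone leaves one extreme eigenvalue per high-permeability layer; the deflation step (not shown) is what removes those.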
The Idea Behind Krylov Methods
American Mathematical Monthly, 1997
"... We explain why Krylov methods make sense, and why it is natural to represent a solution to a linear system as a member of a Krylov space. In particular we show that the solution to a nonsingular linear system Ax = b lies in a Krylov space whose dimension is the degree of the minimal polynomial of A. ..."
Abstract

Cited by 26 (2 self)
 Add to MetaCart
We explain why Krylov methods make sense, and why it is natural to represent a solution to a linear system as a member of a Krylov space. In particular we show that the solution to a nonsingular linear system Ax = b lies in a Krylov space whose dimension is the degree of the minimal polynomial of A. Therefore, if the minimal polynomial of A has low degree then the space in which a Krylov method searches for the solution is small. In this case a Krylov method has the opportunity to converge fast. When the matrix is singular, however, Krylov methods can fail. Even if the linear system does have a solution, it may not lie in a Krylov space. In this case we describe the class of right-hand sides for which a solution lies in a Krylov space. As it happens, there is only a single solution that lies in a Krylov space, and it can be obtained from the Drazin inverse.
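The claim that the solution lies in a Krylov space whose dimension equals the degree of the minimal polynomial can be checked by hand on a toy matrix (chosen here purely for illustration, not taken from the paper):

```python
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

# A has eigenvalues {1, 3}, so its minimal polynomial is
# (t - 1)(t - 3) = t^2 - 4t + 3, of degree 2.
A = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 3.0]]
b = [1.0, 2.0, 3.0]

# From A^2 - 4A + 3I = 0 we get A^{-1} = (4I - A)/3, so the solution
# x = A^{-1} b = (4b - Ab)/3 lies in K_2 = span{b, Ab}.
Ab = matvec(A, b)
x = [(4.0 * bi - abi) / 3.0 for bi, abi in zip(b, Ab)]
residual = max(abs(ri - bi) for ri, bi in zip(matvec(A, x), b))
```

Since the minimal polynomial has degree 2, two Krylov vectors suffice even though A is 3 × 3.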
Solving Some Large Scale Semidefinite Programs Via the Conjugate Residual Method
2000
"... Most current implementations of interiorpoint methods for semidefinite programming use a direct method to solve the Schur complement equation (SCE) M y = h in computing the search direction. When the number of constraints is large, the problem of having insufficient memory to store M can be avoided ..."
Abstract

Cited by 22 (11 self)
 Add to MetaCart
Most current implementations of interior-point methods for semidefinite programming use a direct method to solve the Schur complement equation (SCE) My = h in computing the search direction. When the number of constraints is large, the problem of having insufficient memory to store M can be avoided if an iterative method is used instead. Numerical experiments have shown that the conjugate residual (CR) method typically takes a huge number of steps to generate a high-accuracy solution. On the other hand, it is difficult to incorporate traditional preconditioners into the SCE, except for block diagonal preconditioners. We decompose the SCE into a 2 × 2 block system by decomposing y (similarly for h) into two orthogonal components, with one lying in a certain subspace that is determined from the structure of M. Numerical experiments on semidefinite programming problems arising from the Lovász function of graphs and MAXCUT problems show that high-accuracy solutions can be obtained with moderate n...
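For reference, the conjugate residual iteration mentioned above can be sketched in a few lines of pure Python. This is the textbook CR method for a symmetric system, with an illustrative test matrix; none of the paper's block decomposition or preconditioning is included.

```python
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cr(A, b, tol=1e-10, maxit=200):
    """Conjugate residual method for symmetric A: minimizes ||b - Ax||
    over the Krylov space at every step (unlike CG, which minimizes
    the A-norm of the error)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]
    p = r[:]
    Ar = matvec(A, r)
    Ap = Ar[:]
    rAr = dot(r, Ar)
    for _ in range(maxit):
        alpha = rAr / dot(Ap, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        Ar = matvec(A, r)
        rAr_new = dot(r, Ar)
        beta = rAr_new / rAr
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        Ap = [ari + beta * api for ari, api in zip(Ar, Ap)]
        rAr = rAr_new
    return x

# Small symmetric positive definite example (illustrative only).
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = cr(A, b)
res = max(abs(ri - bi) for ri, bi in zip(matvec(A, x), b))
```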
Expressions And Bounds For The GMRES Residual
BIT, 1999
"... . Expressions and bounds are derived for the residual norm in GMRES. It is shown that the minimal residual norm is large as long as the Krylov basis is wellconditioned.For scaled Jordan blocks the minimal residual norm is expressed in terms of eigenvalues and departure from normality.For normal mat ..."
Abstract

Cited by 20 (0 self)
 Add to MetaCart
Expressions and bounds are derived for the residual norm in GMRES. It is shown that the minimal residual norm is large as long as the Krylov basis is well-conditioned. For scaled Jordan blocks the minimal residual norm is expressed in terms of eigenvalues and departure from normality. For normal matrices the minimal residual norm is expressed in terms of products of relative eigenvalue differences. Key words: linear system, Krylov methods, GMRES, MINRES, Vandermonde matrix, eigenvalues, departure from normality. AMS subject classification: 15A03, 15A06, 15A09, 15A12, 15A18, 15A60, 65F10, 65F15, 65F20, 65F35. 1. Introduction. The generalised minimal residual method (GMRES) [31, 36] (and MINRES for Hermitian matrices [30]) is an iterative method for solving systems of linear equations Ax = b. The approximate solution in iteration i minimises the two-norm of the residual b − Az over the Krylov space span{b, Ab, ..., A^{i−1}b}. The goal of this paper is to express this minimal residual norm...
An immersed interface method for viscous incompressible flows involving rigid and flexible boundaries
J. Comp. Phys., 2006
"... We present an immersed interface method for the incompressible NavierStokes equations capable of handling rigid immersed boundaries. The immersed boundary is represented by a set of Lagrangian control points. In order to guarantee that the noslip condition on the boundary is satisfied, singular fo ..."
Abstract

Cited by 19 (3 self)
 Add to MetaCart
We present an immersed interface method for the incompressible Navier–Stokes equations capable of handling rigid immersed boundaries. The immersed boundary is represented by a set of Lagrangian control points. In order to guarantee that the no-slip condition on the boundary is satisfied, singular forces are applied on the fluid. The forces are related to the jumps in pressure and the jumps in the derivatives of both pressure and velocity, and are interpolated using cubic splines. The strength of the singular forces is determined by solving a small system of equations iteratively at each time step. The Navier–Stokes equations are discretized on a staggered Cartesian grid by a second-order accurate projection method for pressure and velocity. Keywords: Immersed interface method, Navier–Stokes equations, Cartesian grid method, finite difference, fast Poisson solvers, irregular domains.
Breakdown-free GMRES for singular systems
SIAM J. Matrix Anal. Appl.
"... Abstract. GMRES is a popular iterative method for the solution of large linear systems of equations with a square nonsingular matrix. When the matrix is singular, GMRES may break down before an acceptable approximate solution has been determined. This paper discusses properties of GMRES solutions at ..."
Abstract

Cited by 13 (6 self)
 Add to MetaCart
GMRES is a popular iterative method for the solution of large linear systems of equations with a square nonsingular matrix. When the matrix is singular, GMRES may break down before an acceptable approximate solution has been determined. This paper discusses properties of GMRES solutions at breakdown and presents a modification of GMRES to overcome the breakdown.
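The breakdown phenomenon can be seen on a 2 × 2 singular example (constructed here for illustration; it is not from the paper). The Krylov space stops growing after one step, yet the system is consistent and has an exact solution outside that space:

```python
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

# Singular A with b in its range, so Ax = b is consistent ...
A = [[0.0, 1.0],
     [0.0, 0.0]]
b = [1.0, 0.0]

# ... but Ab = 0, so K_m(A, b) = span{b} for every m >= 1.
Ab = matvec(A, b)

# Any Krylov iterate x = c*b has residual b - c*Ab = b, so GMRES
# stagnates at ||r|| = 1 and breaks down, even though x = [0, 1]
# solves the system exactly.
x_true = [0.0, 1.0]
residual_true = max(abs(ri - bi) for ri, bi in zip(matvec(A, x_true), b))
```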
Accurate Solution of Weighted Least Squares by Iterative Methods
Tech. Rep. ANL/MCS-P644-0297, Mathematics and Computer Science Division, Argonne National Laboratory, 1997
"... . We consider the weighted leastsquares (WLS) problem with a very illconditioned weight matrix. Weighted leastsquares problems arise in many applications including linear programming, electrical networks, boundary value problems, and structures. Because of roundoff errors, standard iterative meth ..."
Abstract

Cited by 10 (2 self)
 Add to MetaCart
We consider the weighted least-squares (WLS) problem with a very ill-conditioned weight matrix. Weighted least-squares problems arise in many applications including linear programming, electrical networks, boundary value problems, and structures. Because of roundoff errors, standard iterative methods for solving a WLS problem with ill-conditioned weights may not give the correct answer. Indeed, the difference between the true and computed solution (forward error) may be large. We propose an iterative algorithm, called MINRES-L, for solving WLS problems. The MINRES-L method is the application of MINRES, a Krylov-space method due to Paige and Saunders, to a certain layered linear system. Using a simplified model of the effects of roundoff error, we prove that MINRES-L gives answers with small forward error. We present computational experiments for some applications.
MODIFIED GRAM–SCHMIDT (MGS), LEAST SQUARES, AND BACKWARD STABILITY OF MGS-GMRES
2006
"... The generalized minimum residual method (GMRES) [Y. Saad and M. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869] for solving linear systems Ax = b is implemented as a sequence of least squares problems involving Krylov subspaces of increasing dimensions. The most usual implementation ..."
Abstract

Cited by 9 (1 self)
 Add to MetaCart
The generalized minimum residual method (GMRES) [Y. Saad and M. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869] for solving linear systems Ax = b is implemented as a sequence of least squares problems involving Krylov subspaces of increasing dimensions. The most usual implementation is modified Gram–Schmidt GMRES (MGS-GMRES). Here we show that MGS-GMRES is backward stable. The result depends on a more general result on the backward stability of a variant of the MGS algorithm applied to solving a linear least squares problem, and uses other new results on MGS and its loss of orthogonality, together with an important but neglected condition number, and a relation between residual norms and certain singular values.
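Modified Gram–Schmidt itself is short enough to state. The sketch below (illustrative helper names, not from the paper) orthonormalizes a set of vectors one at a time, subtracting each projection from the already-updated vector rather than from the original one, which is the reordering that gives MGS its better numerical behavior than classical Gram–Schmidt:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def mgs(V):
    """Modified Gram-Schmidt: orthonormalize the vectors in V.
    Each projection coefficient is computed against the *current*
    partially-orthogonalized vector w, not the raw input v."""
    Q = []
    for v in V:
        w = list(v)
        for q in Q:                  # sequential projections
            c = dot(q, w)            # uses the updated w (the MGS twist)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        nrm = dot(w, w) ** 0.5
        Q.append([wi / nrm for wi in w])
    return Q

# Illustrative linearly independent input vectors.
V = [[1.0, 1.0, 0.0],
     [1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
Q = mgs(V)
# Orthonormality check: max |q_i . q_j - delta_ij|
err = max(abs(dot(Q[i], Q[j]) - (1.0 if i == j else 0.0))
          for i in range(3) for j in range(3))
```

In GMRES this loop is applied incrementally to each new Krylov vector, which is where the loss-of-orthogonality analysis in the paper enters.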
Computable convergence bounds for GMRES
SIAM Journal on Matrix Analysis and Applications, 1998
"... The main purpose of this paper is the derivation of computable bounds on the residual norms of (full) GMRES. The new bounds depend on the initial guess and thus are conceptually different from standard 'worstcase' bounds. The analysis is valid for nonsingular linear systems and for any singular lin ..."
Abstract

Cited by 8 (2 self)
 Add to MetaCart
The main purpose of this paper is the derivation of computable bounds on the residual norms of (full) GMRES. The new bounds depend on the initial guess and thus are conceptually different from standard 'worst-case' bounds. The analysis is valid for nonsingular linear systems and for any singular linear system, provided a certain condition on the initial residual is satisfied. It is shown that approximations to all factors in the new bounds can be obtained early in the GMRES run. The approximations serve to predict the convergence behavior of GMRES in later phases of the iteration. Numerical examples demonstrate that the new bounds are capable of describing the actual convergence behavior of GMRES for the given linear system and initial guess. Key words: linear systems, convergence analysis, GMRES method, Krylov subspace methods, iterative methods. AMS Subject Classifications: 65F10, 65F15, 65F50, 65N12, 65N15. 1 Introduction. The GMRES algorithm by Saad and Schultz [22] is one of the mos...
Construction and Analysis of Polynomial Iterative Methods for Non-Hermitian Systems of Linear Equations
1998
"... apier nach 1 ISO 9706 Contents 1 Introduction 7 1.1 What is a PIM? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.2 Different types of PIMs . . . . . . . . . . . . . . . . . . . . . . . 8 1.3 Organization and summary of our results . . . . . . . . . . . . . 9 2 Background 13 2.1 Krylo ..."
Abstract

Cited by 8 (6 self)
 Add to MetaCart
Contents: 1 Introduction (1.1 What is a PIM?; 1.2 Different types of PIMs; 1.3 Organization and summary of our results); 2 Background (2.1 Krylov spaces and the Arnoldi process; 2.2 Exterior mapping functions and Faber polynomials; 2.3 Inclusion sets and asymptotic analysis); 3 Inclusion sets generated by the conformal 'bratwurst' maps (3.1 Derivation of the maps; 3.2 Definition and properties of the 'bratwurst' shape sets; 3.3 Numerical examples); 4 The hybrid ABF method for non-Hermitian linear systems (4.1 Faber polynomials for the inclusion sets ...)