Results 1 - 5 of 5
Accurate Solution of Weighted Least Squares by Iterative Methods
 Tech. Rep. ANL/MCS-P644-0297, Mathematics and Computer Science Division, Argonne National Laboratory
, 1997
Abstract

Cited by 10 (2 self)
We consider the weighted least-squares (WLS) problem with a very ill-conditioned weight matrix. Weighted least-squares problems arise in many applications, including linear programming, electrical networks, boundary value problems, and structures. Because of roundoff errors, standard iterative methods for solving a WLS problem with ill-conditioned weights may not give the correct answer; indeed, the difference between the true and computed solution (the forward error) may be large. We propose an iterative algorithm, called MINRES-L, for solving WLS problems. The MINRES-L method is the application of MINRES, a Krylov-space method due to Paige and Saunders, to a certain layered linear system. Using a simplified model of the effects of roundoff error, we prove that MINRES-L gives answers with small forward error. We present computational experiments for some applications. This work has been supported in part by an NSF Presidential Young Investigator grant, with matching funds received fr...
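The WLS problem this abstract refers to can be illustrated with a small sketch (hypothetical names, pure Python): minimize ||D^(1/2)(Ax - b)||_2 for a diagonal weight matrix D, here solved naively via the normal equations for a two-column A. The paper's point is precisely that naive approaches of this kind can lose accuracy when D is very ill-conditioned.

```python
def wls_normal_equations(a, b, d):
    """Solve min || D^(1/2) (A x - b) ||_2 for a two-column matrix A
    (list of rows) and diagonal weights d, via Cramer's rule on the
    normal equations A^T D A x = A^T D b.  Illustrative only: forming
    the normal equations squares the conditioning of the problem."""
    g = [[0.0, 0.0], [0.0, 0.0]]   # G = A^T D A
    rhs = [0.0, 0.0]               # A^T D b
    for i in range(len(a)):
        for p in range(2):
            rhs[p] += a[i][p] * d[i] * b[i]
            for q in range(2):
                g[p][q] += a[i][p] * d[i] * a[i][q]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    x0 = (rhs[0] * g[1][1] - g[0][1] * rhs[1]) / det
    x1 = (g[0][0] * rhs[1] - rhs[0] * g[1][0]) / det
    return [x0, x1]

# Toy usage: fit intercept/slope through (0,1), (1,2), (2,3) with unit weights.
a = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = wls_normal_equations(a, b, [1.0, 1.0, 1.0])
```

Making the weights wildly different in magnitude (e.g. d = [1e12, 1.0, 1e-12]) is the regime in which the abstract warns that standard methods may return a solution with large forward error.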
Computable convergence bounds for GMRES
 SIAM Journal on Matrix Analysis and Applications
, 1998
Abstract

Cited by 8 (2 self)
The main purpose of this paper is the derivation of computable bounds on the residual norms of (full) GMRES. The new bounds depend on the initial guess and thus are conceptually different from standard 'worst-case' bounds. The analysis is valid for nonsingular linear systems and for any singular linear system, provided a certain condition on the initial residual is satisfied. It is shown that approximations to all factors in the new bounds can be obtained early in the GMRES run. The approximations serve to predict the convergence behavior of GMRES in later phases of the iteration. Numerical examples demonstrate that the new bounds are capable of describing the actual convergence behavior of GMRES for the given linear system and initial guess.

Key words. linear systems, convergence analysis, GMRES method, Krylov subspace methods, iterative methods

AMS Subject Classifications. 65F10, 65F15, 65F50, 65N12, 65N15

1 Introduction
The GMRES algorithm by Saad and Schultz [22] is one of the mos...
On the influence of the orthogonalization scheme on the parallel performance of GMRES
, 1998
Abstract

Cited by 7 (5 self)
In Krylov-based iterative methods, the computation of an orthonormal basis of the Krylov space is a key issue in the algorithms, because the many scalar products are often a bottleneck in parallel distributed environments. Using GMRES, we present a comparison of four variants of the Gram-Schmidt process on distributed-memory machines. Our experiments are carried out on an application in astrophysics and on a convection-diffusion example. We show that the iterative classical Gram-Schmidt method outperforms its three competitors in speed and in parallel scalability while keeping robust numerical properties.

1 Introduction
Krylov-based iterative methods for solving linear systems are attractive because they can be rather easily integrated in a parallel distributed environment. This is mainly because they are free from matrix manipulations apart from matrix-vector products, which can often be parallelized. The difficulty is then to find an efficient preconditioner which is good at reducing the nu...
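As a sketch of the distinction at issue (hypothetical helper names, pure Python; the paper itself compares four variants on distributed-memory machines): classical Gram-Schmidt computes all projection coefficients from the unmodified input vector, which lets the scalar products be grouped into one communication step, while modified Gram-Schmidt updates the vector between projections, which is more robust in floating point but serializes the inner products.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

def classical_gram_schmidt(vectors):
    """All projection coefficients are taken against the ORIGINAL vector v,
    so the inner products per step can be batched (one synchronization)."""
    basis = []
    for v in vectors:
        coeffs = [dot(q, v) for q in basis]        # batched scalar products
        w = list(v)
        for c, q in zip(coeffs, basis):
            w = [wi - c * qi for wi, qi in zip(w, q)]
        n = norm(w)
        basis.append([wi / n for wi in w])
    return basis

def modified_gram_schmidt(vectors):
    """Each projection uses the PARTIALLY orthogonalized vector w,
    improving numerical robustness at the cost of sequential dot products."""
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            c = dot(q, w)                          # note: dot(q, w), not dot(q, v)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        n = norm(w)
        basis.append([wi / n for wi in w])
    return basis

vectors = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
qm = modified_gram_schmidt(vectors)
qc = classical_gram_schmidt(vectors)
```

The "iterative classical" variant the abstract favors reapplies classical Gram-Schmidt a second time when needed, recovering robustness while keeping the batched communication pattern.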
On the Role of Orthogonality in the GMRES Method
, 1996
Abstract

Cited by 2 (0 self)
In this paper we deal with some computational aspects of the generalized minimal residual method (GMRES) for solving systems of linear algebraic equations. The key question of the paper is the importance of the orthogonality of the computed vectors and its influence on the rate of convergence, numerical stability, and accuracy of different implementations of the method. The practical impact on efficiency in the parallel computer environment is also considered.

1 Introduction
Scientific and engineering research is becoming increasingly dependent upon the development and implementation of efficient parallel algorithms on modern high-performance computers. Numerical linear algebra is an important part of such research, and numerical linear algebra algorithms represent the most widely used computational tools in science and engineering. Matrix computations, including the solution of systems of linear equations, least squares problems, and algebraic eigenvalue problems, govern the performance of many app...
Implementation Aspects
Abstract
The inner products, vector updates, and matrix-vector product are easily parallelized and vectorized. The more successful preconditionings, i.e., those based upon incomplete LU decomposition, are not easily parallelizable. For that reason one is often satisfied with the use of only diagonal scaling as a preconditioner on highly parallel computers, such as the CM-2 [24]. On distributed-memory computers we need large-grained parallelism in order to reduce synchronization overhead. This can be achieved by combining the work required for a successive number of iteration steps. The idea is to first construct, in parallel, a straightforward Krylov basis for the search subspace in which an update for the current solution will be determined. Once this basis has been computed, the vectors are orthogonalized, as is done in Krylov subspace methods. The construction as well as the orthogonalization can be done with large-grained parallelism, and has a sufficient degree of parallelism in it. This approach has be...
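The two-phase idea described above might be sketched as follows (an illustrative toy in pure Python, not the authors' implementation): the basis-building phase involves only matrix-vector products, with no synchronizing inner products, and all the inner products are deferred to a block orthogonalization afterwards.

```python
def matvec(a, v):
    """Dense matrix-vector product for a small toy matrix (list of rows)."""
    return [sum(a[i][j] * v[j] for j in range(len(v))) for i in range(len(a))]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def build_krylov_basis(a, r0, s):
    """Phase 1: the straightforward (un-orthogonalized) Krylov basis
    {r0, A r0, ..., A^(s-1) r0}.  Only matvecs; no inner products."""
    basis = [list(r0)]
    for _ in range(s - 1):
        basis.append(matvec(a, basis[-1]))
    return basis

def orthogonalize(block):
    """Phase 2: orthonormalize the precomputed block (modified
    Gram-Schmidt here), grouping all the synchronizing inner products."""
    q = []
    for v in block:
        w = list(v)
        for u in q:
            c = dot(u, w)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        n = dot(w, w) ** 0.5
        q.append([wi / n for wi in w])
    return q

# Toy usage on a 2x2 system.
a = [[2.0, 1.0], [1.0, 3.0]]
basis = build_krylov_basis(a, [1.0, 0.0], 2)
q = orthogonalize(basis)
```

One caveat worth noting: the powers A^k r0 tend to align quickly with the dominant eigenvector, so in practice such s-step schemes keep s small or use a better-conditioned polynomial basis.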