Results 1-10 of 70
Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods
, 1994
Cited by 169 (5 self)

Abstract:
This document is the electronic version of the 2nd edition of the Templates book, which is available for purchase from the Society for Industrial and Applied Mathematics (SIAM).
Computing the Singular Value Decomposition with High Relative Accuracy
 Linear Algebra Appl
, 1997
Cited by 55 (12 self)

Abstract:
We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the absolute accuracy provided by conventional backward stable algorithms, which in general only guarantee correct digits in the singular values with large enough magnitudes. It is of interest to compute the tiniest singular values with several correct digits, because in some cases, such as finite element problems and quantum mechanics, it is the smallest singular values that have physical meaning, and should be determined accurately by the data. Many recent papers have identified special classes of matrices where high relative accuracy is possible, since it is not possible in general. The perturbation theory and algorithms for these matrix classes have been quite different, motivating us to seek a co...
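To make the absolute-versus-relative distinction concrete, here is a small NumPy illustration (the matrix and the 1e-20 singular value are invented for the demo, not taken from the paper): a conventional backward stable SVD only limits the absolute error to roughly eps * sigma_max, so a singular value far below that threshold is returned with no correct digits.

```python
import numpy as np

# Build A = U @ diag(1, 1e-20) @ V^T with known singular values.
# (Invented demo data; the rounding committed while forming A already
# perturbs the tiny singular value by roughly eps * sigma_max ~ 2e-16.)
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((2, 2)))
V, _ = np.linalg.qr(rng.standard_normal((2, 2)))
A = (U * [1.0, 1e-20]) @ V.T

s = np.linalg.svd(A, compute_uv=False)
# The large singular value comes out accurate to near machine precision...
print(abs(s[0] - 1.0))
# ...but the computed sigma_2 is dominated by the O(eps) absolute error,
# so it carries no correct digits relative to the true value 1e-20.
print(s[1])
```

This is exactly the behavior the special matrix classes and algorithms surveyed in the paper are designed to avoid.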
Accelerated projection methods for computing pseudoinverse solutions of systems of linear equations
 BIT
, 1979
Cited by 41 (0 self)

Abstract:
Iterative methods are developed for computing the Moore-Penrose pseudoinverse solution of a linear system Ax = b, where A is an m x n sparse matrix. The methods do not require the explicit formation of A^T A or AA^T and therefore are advantageous to use when these matrices are much less sparse than A itself. The methods are based on solving the two related systems (i) x = A^T y, AA^T y = b, and (ii) A^T A x = A^T b. First it is shown how the SOR and SSOR methods for these two systems can be implemented efficiently. Further, the acceleration of the SSOR method by Chebyshev semi-iteration and the conjugate gradient method is discussed. In particular it is shown that the SSOR-CG method for (i) and (ii) can be implemented in such a way that each step requires only two sweeps through successive rows and columns of A respectively. In the general rank-deficient and inconsistent case it is shown how the pseudoinverse solution can be computed by a two-step procedure. Some possible applications are mentioned and numerical results are given for some problems from picture reconstruction.

1. Introduction. Let A be a given m x n sparse matrix, b a given m-vector and x = A^+ b the Moore-Penrose pseudoinverse solution of the linear system of equations (1.1) Ax = b. We denote the range and nullspace of a matrix A by R(A) and N(A) respectively. Convenient characterizations of the pseudoinverse solution are given in the following two lemmas.

LEMMA 1.1. x = A^+ b is the unique solution of the problem: minimize ||x||_2 over x ∈ {x : ||b - Ax||_2 = minimum}.

LEMMA 1.2. x = A^+ b is the unique vector which satisfies x ∈ R(A^T) and (b - Ax) ⊥ R(A), or equivalently x ⊥ N(A) and (b - Ax) ∈ N(A^T).

These lemmas are easily proved by using the singular value decomposition of A and the resulting expression for A^+ (see Stewart [32], pp. 317-326).
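The two lemmas are easy to check numerically. The sketch below uses NumPy's SVD-based pinv and lstsq on invented data, rather than the paper's iterative SOR/SSOR schemes; it only verifies the characterizations of x = A^+ b for a rank-deficient, inconsistent system.

```python
import numpy as np

rng = np.random.default_rng(1)
# Rank-deficient, inconsistent system: m=8, n=5, rank 3 (invented data).
B = rng.standard_normal((8, 3))
C = rng.standard_normal((3, 5))
A = B @ C                       # rank(A) = 3
b = rng.standard_normal(8)      # generically not in R(A)

x = np.linalg.pinv(A) @ b       # x = A^+ b

# Lemma 1.1: x is the minimum-norm least-squares solution; for a
# rank-deficient A, np.linalg.lstsq returns exactly that solution.
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(x, x_ls)

# Lemma 1.2: the residual is orthogonal to R(A), i.e. A^T (b - Ax) = 0,
# and x lies in R(A^T): projecting x onto R(A^T) leaves it unchanged.
assert np.allclose(A.T @ (b - A @ x), 0, atol=1e-10)
P = np.linalg.pinv(A) @ A       # orthogonal projector onto R(A^T)
assert np.allclose(P @ x, x)
```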
Numerically Stable Generation of Correlation Matrices and Their Factors
 BIT
, 2000
Cited by 21 (3 self)

Abstract:
Correlation matrices (symmetric positive semidefinite matrices with unit diagonal) are important in statistics and in numerical linear algebra. For simulation and testing it is desirable to be able to generate random correlation matrices with specified eigenvalues (which must be nonnegative and sum to the dimension of the matrix). A popular algorithm of Bendel and Mickey takes a matrix having the specified eigenvalues and uses a finite sequence of Givens rotations to introduce ones on the diagonal. We give improved formulae for computing the rotations and prove that the resulting algorithm is numerically stable. We show by example that the formulae originally proposed, which are used in certain existing Fortran implementations, can lead to serious instability. We also show how to modify the algorithm to generate a rectangular matrix with columns of unit 2-norm. Such a matrix represents a correlation matrix in factored form, which can be preferable to representing the matrix itself, ...
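A rough NumPy sketch of the Bendel-Mickey idea as described above (my paraphrase with one numerically reasonable choice of rotation root, not the authors' improved formulae; the helper name rand_corr and the eigenvalues are invented for the demo):

```python
import numpy as np

def rand_corr(eigs, rng):
    """Random correlation matrix with the given eigenvalues (must be
    nonnegative and sum to the dimension). Sketch of Bendel-Mickey:
    orthogonal similarity, then Givens rotations to force unit diagonal."""
    eigs = np.asarray(eigs, dtype=float)
    n = len(eigs)
    assert abs(eigs.sum() - n) < 1e-10
    # Random orthogonal Q via QR of a Gaussian matrix.
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    Q = Q * np.sign(np.diag(R))
    A = (Q * eigs) @ Q.T                     # A = Q diag(eigs) Q^T
    for _ in range(n - 1):
        # Since trace(A) = n, a diagonal entry < 1 and one > 1 coexist.
        lo = np.where(np.diag(A) < 1 - 1e-12)[0]
        hi = np.where(np.diag(A) > 1 + 1e-12)[0]
        if len(lo) == 0 or len(hi) == 0:
            break
        i, j = lo[0], hi[0]
        aii, aij, ajj = A[i, i], A[i, j], A[j, j]
        # Choose t = tan(theta) so the rotated (i,i) entry equals 1:
        # (ajj - 1) t^2 + 2 aij t + (aii - 1) = 0; pick the root that
        # avoids cancellation (discriminant is >= |aij| here).
        disc = np.sqrt(aij * aij - (aii - 1.0) * (ajj - 1.0))
        t = -(aij + np.copysign(disc, aij)) / (ajj - 1.0)
        c = 1.0 / np.sqrt(1.0 + t * t)
        s = c * t
        # Apply the Givens rotation symmetrically: A <- G^T A G.
        rows = A[[i, j], :].copy()
        A[i, :] = c * rows[0] + s * rows[1]
        A[j, :] = -s * rows[0] + c * rows[1]
        cols = A[:, [i, j]].copy()
        A[:, i] = c * cols[:, 0] + s * cols[:, 1]
        A[:, j] = -s * cols[:, 0] + c * cols[:, 1]
        A[i, i] = 1.0                        # exact by construction
    np.fill_diagonal(A, 1.0)                 # clamp rounding residue
    return A

lam = np.array([2.0, 0.7, 0.3])              # invented spectrum, sums to 3
C = rand_corr(lam, np.random.default_rng(0))
```

For production use, scipy.stats.random_correlation appears to implement the stable algorithm from this paper and would be the safer choice.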
Collinearity and Least Squares Regression
 Statistical Science
, 1987
Cited by 17 (2 self)

Abstract:
In this paper we introduce certain numbers, called collinearity indices, which are useful in detecting near collinearities in regression problems. The coefficients enter adversely into formulas concerning significance testing and the effects of errors in the regression variables. Thus they provide simple regression diagnostics, suitable for incorporation in regression packages.

Keywords and phrases: collinearity, ill-conditioning, linear regression, errors in the variables, regression diagnostics.

1. Introduction
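As I read the abstract, a collinearity index weighs the norm of a column of A against the norm of the corresponding row of the pseudoinverse A^+; a large value flags a column lying close to the span of the others. A small sketch on invented data (the paper's exact definition may differ in scaling):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
c1 = rng.standard_normal(n)
c3 = rng.standard_normal(n)
c2 = c1 + 1e-6 * rng.standard_normal(n)   # nearly collinear with c1
A = np.column_stack([c1, c2, c3])

# kappa_j = ||a_j||_2 * ||row_j(A^+)||_2. The row norm of A^+ is the
# reciprocal of the distance from a_j to the span of the other columns,
# so near-collinear columns get huge indices.
Ap = np.linalg.pinv(A)
kappa = np.linalg.norm(A, axis=0) * np.linalg.norm(Ap, axis=1)
print(kappa)   # first two indices huge, third modest
```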
Perturbation Analyses for the QR Factorization
 SIAM J. Matrix Anal. Appl
, 1997
Cited by 16 (11 self)

Abstract:
This paper gives perturbation analyses for Q_1 and R in the QR factorization A = Q_1 R, Q_1^T Q_1 = I, for a given real m x n matrix A of rank n. The analyses more accurately reflect the sensitivity of the problem than previous normwise results. The condition numbers here are altered by any column pivoting used in AP = Q_1 R, and the condition numbers for R are bounded for a fixed n when the standard column pivoting strategy is used. This strategy tends to improve the condition of Q_1, so the computed Q_1 and R will probably both have greatest accuracy when we use the standard column pivoting strategy. First-order normwise perturbation analyses are given for both Q_1 and R. It is seen that the analysis for R may be approached in two ways: a detailed "matrix-vector equation" analysis which provides tight bounds and resulting true condition numbers, which unfortunately are costly to compute and not very intuitive, and a perhaps simpler "matrix equation" analysis which provides results that are usually weaker but easier to interpret, and which allow efficient computation of a satisfactory estimate for the true condition number.

Key words: QR factorization, perturbation analysis, condition estimation, matrix equations, pivoting.

AMS subject classifications: 15A23, 65F35
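A quick experiment in this spirit (the matrix, the perturbation size, and the sign-normalization helper qr_pos are invented for the demo): perturb A, recompute the QR factorization, and compare the normwise change in R against kappa(A) times the relative perturbation, which first-order theory bounds up to a modest constant.

```python
import numpy as np

def qr_pos(A):
    # QR with R forced to have a positive diagonal, so the factorization
    # is unique and comparable across perturbations (assumes full rank).
    Q, R = np.linalg.qr(A)
    d = np.sign(np.diag(R))
    return Q * d, d[:, None] * R

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 5))           # full column rank, invented
E = 1e-8 * rng.standard_normal((20, 5))    # small perturbation

_, R = qr_pos(A)
_, R2 = qr_pos(A + E)

rel_dR = np.linalg.norm(R2 - R) / np.linalg.norm(R)
rel_dA = np.linalg.norm(E) / np.linalg.norm(A)
kappa = np.linalg.cond(A)
# rel_dR stays within a modest multiple of kappa * rel_dA.
print(rel_dR, kappa * rel_dA)
```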
Differences in the effects of rounding errors in Krylov solvers for symmetric indefinite linear systems
, 1999
Cited by 15 (0 self)

Abstract:
The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving the reduced system in one way or another. This leads to well-known methods: MINRES, GMRES, and SYMMLQ. We will discuss in what way and to what extent these approaches differ in their sensitivity to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods, and we will not consider the errors in the Lanczos process itself. We will show that the method of solution may lead, under certain circumstances, to large additional errors that are not corrected by continuing the iteration process. Our findings are supported and illustrated by numerical examples.

1. Introduction

We will consider iterative methods for the construction of approximate solutions, starting with...
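For reference, a bare 3-term Lanczos recurrence in NumPy (invented test matrix, no reorthogonalization); the paper's analysis concerns how the solvers built on top of such a basis propagate rounding errors, not the basis generation itself:

```python
import numpy as np

def lanczos(A, v0, k):
    """k steps of the symmetric Lanczos process: returns an (approximately)
    orthonormal Krylov basis V and the tridiagonal coefficients alpha
    (diagonal) and beta (off-diagonal)."""
    n = len(v0)
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros(n)
    b_prev = 0.0
    V[:, 0] = v
    for j in range(k):
        w = A @ v - b_prev * v_prev          # 3-term recurrence
        alpha[j] = v @ w
        w -= alpha[j] * v
        if j < k - 1:
            beta[j] = np.linalg.norm(w)      # zero here means breakdown
            v_prev, v, b_prev = v, w / beta[j], beta[j]
            V[:, j + 1] = v
    return V, alpha, beta

rng = np.random.default_rng(4)
M = rng.standard_normal((30, 30))
A = M + M.T                                  # symmetric, indefinite
V, alpha, beta = lanczos(A, rng.standard_normal(30), 6)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
# After this few steps V^T V ~ I and V^T A V ~ T hold to near machine
# precision; in finite precision, orthogonality degrades as Ritz values
# converge, which is where the rounding-error story of the paper begins.
```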
A Survey of Componentwise Perturbation Theory in Numerical Linear Algebra
 in Mathematics of Computation 1943-1993: A Half Century of Computational Mathematics
, 1994
Cited by 12 (0 self)

Abstract:
Perturbation bounds in numerical linear algebra are traditionally derived and expressed using norms. Norm bounds cannot reflect the scaling or sparsity of a problem and its perturbation, and so can be unduly weak. If the problem data and its perturbation are measured componentwise, much smaller and more revealing bounds can be obtained. A survey is given of componentwise perturbation theory in numerical linear algebra, covering linear systems, the matrix inverse, matrix factorizations, the least squares problem, and the eigenvalue and singular value problems. Most of the results described have been published in the last five years.

"Our hero is the intrepid, yet sensitive matrix A. Our villain is E, who keeps perturbing A. When A is perturbed he puts on a crumpled hat: Ã = A + E." -- G. W. Stewart and J.-G. Sun, Matrix Perturbation Theory (1990)

1. Introduction

Matrix analysis would not have developed into the vast subject it is today without the concept of representing a matrix by ...
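One standard componentwise quantity is Skeel's condition number cond(A) = || |A^{-1}| |A| ||_inf. Unlike the normwise kappa(A), it is invariant under row scaling, since |(DA)^{-1}| |DA| = |A^{-1}| |A| for any diagonal D. A quick NumPy check makes this visible (matrix and scaling invented for the demo):

```python
import numpy as np

def skeel_cond(A):
    # Skeel's componentwise condition number: || |A^{-1}| |A| ||_inf
    Ainv = np.linalg.inv(A)
    return np.linalg.norm(np.abs(Ainv) @ np.abs(A), np.inf)

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
D = np.diag([1.0, 1e12])          # violent row scaling

# The normwise condition number explodes under row scaling...
print(np.linalg.cond(A), np.linalg.cond(D @ A))
# ...while Skeel's condition number is unchanged, reflecting that the
# scaled system is no harder to solve accurately row-by-row.
print(skeel_cond(A), skeel_cond(D @ A))
```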