Results 1–10 of 36
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
 ACM Trans. Math. Software
, 1982
"... An iterative method is given for solving Ax ~ffi b and minU Ax b 112, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerica ..."
Abstract

Cited by 337 (18 self)
An iterative method is given for solving Ax = b and min ||Ax - b||_2, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerical tests are described comparing LSQR with several other conjugate-gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned. Categories and Subject Descriptors: G.1.2 [Numerical Analysis]: Approximation: least squares approximation; G.1.3 [Numerical Analysis]: Numerical Linear Algebra: linear systems (direct and ...
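The iteration this abstract describes can be sketched in a few lines of NumPy. This is an illustrative reimplementation of the core LSQR recurrences (one Golub-Kahan bidiagonalization step plus a plane rotation per iteration), not the published subroutine: the stopping criteria and the standard-error and condition-number estimates the paper derives are omitted, and only a crude residual-based stop is used.

```python
import numpy as np

def lsqr_sketch(A, b, iters=40, tol=1e-10):
    """Minimal LSQR: solve min ||Ax - b||_2 via Golub-Kahan bidiagonalization.
    Illustrative sketch only -- the published LSQR adds careful stopping rules
    and error/condition estimates."""
    m, n = A.shape
    x = np.zeros(n)
    bnorm = np.linalg.norm(b)
    u = b / bnorm                        # u_1
    v = A.T @ u
    alpha = np.linalg.norm(v)
    v /= alpha                           # v_1
    w = v.copy()
    phibar, rhobar = bnorm, alpha
    for _ in range(iters):
        # one Golub-Kahan bidiagonalization step
        u = A @ v - alpha * u
        beta = np.linalg.norm(u)
        u /= beta
        v = A.T @ u - beta * v
        alpha = np.linalg.norm(v)
        v /= alpha
        # plane rotation eliminating the new subdiagonal element
        rho = np.hypot(rhobar, beta)
        c, s = rhobar / rho, beta / rho
        theta = s * alpha
        rhobar = -c * alpha
        phi = c * phibar
        phibar = s * phibar              # phibar estimates the residual norm
        # update solution and search direction
        x += (phi / rho) * w
        w = v - (theta / rho) * w
        if phibar < tol * bnorm:
            break
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))
b = A @ rng.standard_normal(20)          # consistent system: minimum residual is 0
x = lsqr_sketch(A, b)
print(np.linalg.norm(A @ x - b))
```

For a well-conditioned dense test matrix like this one, the residual drops to near machine precision; the paper's point is that the same recurrences remain the reliable choice when A is ill-conditioned.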
SVDPACKC (Version 1.0) User's Guide
, 1993
"... SVDPACKC comprises four numerical (iterative) methods for computing the singular value decomposition (SVD) of large sparse matrices using ANSI C. This software package implements Lanczos and subspace iterationbased methods for determining several of the largest singular triplets (singular values an ..."
Abstract

Cited by 63 (4 self)
SVDPACKC comprises four numerical (iterative) methods for computing the singular value decomposition (SVD) of large sparse matrices using ANSI C. This software package implements Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right singular vectors) for large sparse matrices. The package has been ported to a variety of machines ranging from supercomputers to workstations: CRAY Y-MP, IBM RS/6000-550, DEC 5000/100, HP 9000/750, SPARCstation 2, and Macintosh IIfx. This document (i) explains each algorithm in some detail, (ii) explains the input parameters for each program, (iii) explains how to compile/execute each program, and (iv) illustrates the performance of each method when we compute lower-rank approximations to sparse term-document matrices from information retrieval applications. A user-friendly software interface to the package for UNIX-based systems and the Macintosh IIfx is als...
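As a small, dense illustration of the kind of computation this package performs, the following sketch uses block subspace iteration to recover a few of the largest singular triplets. It is a stand-in, not SVDPACKC's algorithm: the package's Lanczos and subspace routines are tuned for large sparse matrices, while this toy runs on a small dense matrix with a synthetic, well-separated spectrum chosen for the demonstration.

```python
import numpy as np

def top_singular_triplets(A, k, iters=200):
    """Block subspace iteration for the k largest singular triplets.
    A dense stand-in for the iteration-based routines described above."""
    rng = np.random.default_rng(1)
    V, _ = np.linalg.qr(rng.standard_normal((A.shape[1], k)))
    for _ in range(iters):
        U, _ = np.linalg.qr(A @ V)        # left subspace from A V
        V, _ = np.linalg.qr(A.T @ U)      # right subspace from A^T U
    B = U.T @ A @ V                       # small k-by-k projected matrix
    Ub, s, Vbt = np.linalg.svd(B)
    return U @ Ub, s, Vbt @ V.T           # left vectors, values, right vectors

# synthetic test matrix with known, well-separated singular values
rng = np.random.default_rng(4)
s_known = np.array([10.0, 7.0, 5.0, 1.0, 0.8, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05])
U0, _ = np.linalg.qr(rng.standard_normal((40, 12)))
V0, _ = np.linalg.qr(rng.standard_normal((12, 12)))
A = U0 @ np.diag(s_known) @ V0.T
U3, s3, V3t = top_singular_triplets(A, 3)
print(s3)
```

With the large spectral gap between the third and fourth singular values, a few hundred iterations converge essentially to machine precision; for clustered singular values (common in term-document matrices), the Lanczos-based routines in the package are the more practical choice.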
Estimates in Quadratic Formulas
, 1994
"... Let A be a real symmetric positive definite matrix. We consider three particular questions, namely estimates for the error in linear systems Ax = b, minimizing quadratic functional min x (x T Ax \Gamma 2b T x) subject to the constraint k x k= ff, ff !k A \Gamma1 b k, and estimates for the e ..."
Abstract

Cited by 26 (7 self)
Let A be a real symmetric positive definite matrix. We consider three particular questions, namely estimates for the error in linear systems Ax = b, minimizing the quadratic functional min_x (x^T A x - 2 b^T x) subject to the constraint ||x|| = α, α < ||A^{-1} b||, and estimates for the entries of the matrix inverse A^{-1}. All of these questions can be formulated as the problem of finding an estimate, or an upper and lower bound, on u^T F(A) u, where F(A) = A^{-1} or F(A) = A^{-2}, and u is a real vector. This problem can be considered in terms of estimates in Gauss-type quadrature formulas, which can be computed effectively by exploiting the underlying Lanczos process. Using this approach, we first recall the exact-arithmetic solution of the questions formulated above and then analyze the effect of rounding errors in the quadrature calculations. It is proved that the basic relation between the accuracy of Gauss quadrature for f(λ) = λ^{-1} and the rate of ...
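The connection the abstract describes, namely estimating u^T F(A) u through Gauss quadrature computed from the Lanczos process, can be sketched for F(A) = A^{-1}: k Lanczos steps started from u yield a tridiagonal T_k, and the Gauss-rule value is ||u||^2 (T_k^{-1})_{11}. This is a minimal sketch under illustrative assumptions (a synthetic SPD test matrix); the paired upper/lower bounds and the rounding-error analysis that are the paper's subject are omitted.

```python
import numpy as np

def lanczos_uT_Ainv_u(A, u, k):
    """Estimate u^T A^{-1} u for SPD A via the Gauss-quadrature/Lanczos
    connection: k Lanczos steps from u give T_k, and the Gauss rule
    evaluates to ||u||^2 * (T_k^{-1})_{11}.  Sketch only: no bound pairs."""
    alphas, betas = [], []
    norm_u = np.linalg.norm(u)
    q = u / norm_u
    q_prev = np.zeros_like(q)
    beta = 0.0
    for _ in range(k):
        w = A @ q - beta * q_prev
        alpha = q @ w
        alphas.append(alpha)
        w -= alpha * q
        beta = np.linalg.norm(w)
        if beta < 1e-12:
            break                      # invariant subspace found: rule is exact
        betas.append(beta)
        q_prev, q = q, w / beta
    j = len(alphas)
    T = (np.diag(alphas)
         + np.diag(betas[: j - 1], 1)
         + np.diag(betas[: j - 1], -1))
    e1 = np.zeros(j)
    e1[0] = 1.0
    return norm_u ** 2 * np.linalg.solve(T, e1)[0]

rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.standard_normal((30, 30)))
A = Q @ np.diag(np.linspace(1.0, 10.0, 30)) @ Q.T   # SPD test matrix
u = rng.standard_normal(30)
est = lanczos_uT_Ainv_u(A, u, k=30)
print(est, u @ np.linalg.solve(A, u))
```

For this well-conditioned example the Gauss estimate converges geometrically well before k reaches the matrix dimension; the paper's contribution is precisely what happens to such estimates once rounding errors in the Lanczos recurrence are taken into account.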
Low Rank Matrix Approximation Using The Lanczos Bidiagonalization Process With Applications
 SIAM J. Sci. Comput
, 2000
"... Low rank approximation of large and/or sparse matrices is important in many applications. We show that good low rank matrix approximations can be directly obtained from the Lanczos bidiagonalization process without computing singular value decomposition. We also demonstrate that a socalled oneside ..."
Abstract

Cited by 23 (1 self)
Low-rank approximation of large and/or sparse matrices is important in many applications. We show that good low-rank matrix approximations can be obtained directly from the Lanczos bidiagonalization process without computing a singular value decomposition. We also demonstrate that a so-called one-sided reorthogonalization process can be used to maintain an adequate level of orthogonality among the Lanczos vectors and produce accurate low-rank approximations. This technique reduces the computational cost of the Lanczos bidiagonalization process. We illustrate the efficiency and applicability of our algorithm using numerical examples from several application areas.
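A minimal sketch of the idea: take U_k, B_k, V_k directly from k steps of Golub-Kahan bidiagonalization and use U_k B_k V_k^T as the rank-k approximation, reorthogonalizing only the right Lanczos vectors (the one-sided scheme). The paper's actual algorithm may differ in detail; the test matrix with geometrically decaying singular values is an assumption chosen to make the approximation quality visible.

```python
import numpy as np

def lanczos_bidiag_lowrank(A, k, rng):
    """Rank-k approximation A ~= U_k B_k V_k^T taken directly from k steps
    of Golub-Kahan bidiagonalization, with reorthogonalization applied only
    to the right Lanczos vectors (one-sided scheme).  Illustrative sketch."""
    m, n = A.shape
    U = np.zeros((m, k))
    V = np.zeros((n, k))
    alphas = np.zeros(k)
    betas = np.zeros(k - 1)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    u = A @ v
    alpha = np.linalg.norm(u)
    u /= alpha
    U[:, 0], V[:, 0], alphas[0] = u, v, alpha
    for j in range(1, k):
        v = A.T @ u - alpha * v
        v -= V[:, :j] @ (V[:, :j].T @ v)   # one-sided reorthogonalization
        beta = np.linalg.norm(v)
        v /= beta
        u = A @ v - beta * u               # U is NOT reorthogonalized
        alpha = np.linalg.norm(u)
        u /= alpha
        U[:, j], V[:, j] = u, v
        alphas[j], betas[j - 1] = alpha, beta
    B = np.diag(alphas) + np.diag(betas, 1)  # upper bidiagonal: A V_k = U_k B_k
    return U, B, V

rng = np.random.default_rng(6)
s_all = 2.0 ** -np.arange(12)                # geometrically decaying spectrum
U0, _ = np.linalg.qr(rng.standard_normal((50, 12)))
V0, _ = np.linalg.qr(rng.standard_normal((12, 12)))
A = U0 @ np.diag(s_all) @ V0.T
k = 6
U, B, V = lanczos_bidiag_lowrank(A, k, rng)
err = np.linalg.norm(A - U @ B @ V.T)        # Frobenius error of the sketch
best = np.linalg.norm(s_all[k:])             # Eckart-Young optimum for rank k
print(err, best)
```

Note that the identity A V_k = U_k B_k holds by construction even though the left vectors are never reorthogonalized, which is the observation that makes the one-sided scheme cheap.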
Error analysis of the Lanczos algorithm for the nonsymmetric eigenvalue problem
 In R
, 1994
"... Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at ..."
Abstract

Cited by 19 (3 self)
 Add to MetaCart
Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at
An Implicit Shift Bidiagonalization Algorithm For Ill-Posed Systems
 BIT
, 1994
"... . Iterative methods based on Lanczos bidiagonalization with full reorthogonalization (LBDR) are considered for solving large scale discrete illposed linear least squares problems of the form min x kAx \Gamma bk 2 . Methods for regularization in the Krylov subspaces are discussed which use generali ..."
Abstract

Cited by 18 (0 self)
Iterative methods based on Lanczos bidiagonalization with full reorthogonalization (LBDR) are considered for solving large-scale discrete ill-posed linear least squares problems of the form min_x ||Ax - b||_2. Methods for regularization in the Krylov subspaces are discussed which use generalized cross-validation (GCV) for determining the regularization parameter. These methods have the advantage that no a priori information about the noise level is required. To improve convergence of the Lanczos process we apply a variant of the implicitly restarted Lanczos algorithm of Sorensen using zero shifts. Although this restarted method simply corresponds to using LBDR with a starting vector (AA^T)^p b, it is shown that carrying out the process implicitly is essential for numerical stability. An LBDR algorithm is presented which incorporates implicit restarts to ensure that the global minimum of the GCV curve corresponds to a minimum on the curve for the truncated SVD solution. Nume...
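The GCV-based choice of regularization level can be illustrated on a problem small enough that a dense SVD stands in for the Lanczos bidiagonalization; the truncation-selection idea is the same. The smoothing-kernel matrix, the noise level, and the particular GCV formula for the truncated-SVD solution below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch: pick a truncation level for a discrete ill-posed problem by
# minimizing the GCV function G(k) = ||residual_k||^2 / (n - k)^2.
rng = np.random.default_rng(7)
n = 32
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(-0.1 * (i - j) ** 2)                   # smooth kernel: severely ill-conditioned
x_true = np.sin(np.linspace(0.0, np.pi, n))
b = A @ x_true + 1e-4 * rng.standard_normal(n)    # noisy right-hand side

U, s, Vt = np.linalg.svd(A)
c = U.T @ b                                       # coefficients in the left singular basis
gcv = [np.sum(c[k:] ** 2) / (n - k) ** 2 for k in range(1, n)]
k_best = 1 + int(np.argmin(gcv))                  # truncation level chosen by GCV
x_tsvd = Vt[:k_best].T @ (c[:k_best] / s[:k_best])  # truncated-SVD solution
x_naive = np.linalg.solve(A, b)                     # unregularized: noise blows up
print(k_best)
print(np.linalg.norm(x_tsvd - x_true), np.linalg.norm(x_naive - x_true))
```

No knowledge of the noise level enters the choice of k_best, which is the advantage the abstract highlights; the paper's contribution is making the same selection reliable when the SVD is only available implicitly through (restarted) bidiagonalization.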
Differences in the effects of rounding errors in Krylov solvers for symmetric indefinite linear systems
, 1999
"... The 3term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving the reduced system in one way or another. Thi ..."
Abstract

Cited by 15 (0 self)
The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving the reduced system in one way or another. This leads to well-known methods: MINRES, GMRES, and SYMMLQ. We will discuss in what way and to what extent these approaches differ in their sensitivity to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods, and we will not consider the errors in the Lanczos process itself. We will show that the method of solution may lead, under certain circumstances, to large additional errors that are not corrected by continuing the iteration process. Our findings are supported and illustrated by numerical examples.
1 Introduction
We will consider iterative methods for the construction of approximate solutions, starting with ...
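The shared starting point of these methods, a Lanczos basis plus a reduced tridiagonal system, can be sketched as follows. The Galerkin route below (solve T_k y = ||b|| e_1, set x_k = Q_k y) is just one way of "solving the reduced system in one way or another"; MINRES and SYMMLQ process the same T_k differently, which is where the rounding-error sensitivities the paper studies arise. Full reorthogonalization is added here purely to keep the demonstration basis clean, which matches the paper's assumption that the Lanczos basis itself is given.

```python
import numpy as np

def lanczos_galerkin_solve(A, b, k):
    """For symmetric A: build the k-step Lanczos basis Q_k and tridiagonal
    T_k, solve the reduced system T_k y = ||b|| e_1, and set x_k = Q_k y.
    One of several 'reduced system' solution routes; sketch only."""
    n = len(b)
    Q = np.zeros((n, k))
    alphas = np.zeros(k)
    betas = np.zeros(k - 1)
    beta0 = np.linalg.norm(b)
    q = b / beta0
    q_prev = np.zeros(n)
    beta = 0.0
    for j in range(k):
        Q[:, j] = q
        w = A @ q - beta * q_prev
        alphas[j] = q @ w
        w -= alphas[j] * q
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)  # reorthogonalize (demo hygiene)
        beta = np.linalg.norm(w)
        if j < k - 1:                 # last off-diagonal is not needed for T_k
            betas[j] = beta
            q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    e1 = np.zeros(k)
    e1[0] = 1.0
    y = np.linalg.solve(T, beta0 * e1)
    return Q @ y

# symmetric indefinite test problem (eigenvalues of both signs)
rng = np.random.default_rng(8)
Q0, _ = np.linalg.qr(rng.standard_normal((16, 16)))
eigs = np.concatenate([np.arange(1.0, 9.0), -np.arange(1.0, 9.0)])
A = Q0 @ np.diag(eigs) @ Q0.T
b = rng.standard_normal(16)
x = lanczos_galerkin_solve(A, b, k=16)
print(np.linalg.norm(A @ x - b))
```

The point of the paper is that even with an identical Q_k and T_k, the different back-substitution strategies of MINRES, GMRES, and SYMMLQ can amplify rounding errors by very different amounts.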
Large Scale Sparse Singular Value Computations
 International Journal of Supercomputer Applications
, 1992
"... . In this paper, we present four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture. We particularly emphasize Lanczos and subspace iterationbased methods for determining several of the largest singular triplets (singular ..."
Abstract

Cited by 14 (0 self)
In this paper, we present four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture. We particularly emphasize Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right singular vectors) for sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography. The target architectures for our implementations of such methods are the CRAY-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications in which approximate pseudo-inverses of large sparse Jacobian matrices are needed. It is hoped that this research will advance the dev...
Analysis of the finite precision Bi-Conjugate Gradient algorithm for nonsymmetric linear systems
 Math. Comp
, 1995
"... Abstract. In this paper we analyze the biconjugate gradient algorithm in finite precision arithmetic, and suggest reasons for its often observed robustness. By using a tridiagonal structure, which is preserved by the finite precision biconjugate gradient iteration, we are able to bound its residua ..."
Abstract

Cited by 9 (4 self)
In this paper we analyze the bi-conjugate gradient algorithm in finite precision arithmetic, and suggest reasons for its often-observed robustness. By using a tridiagonal structure, which is preserved by the finite precision bi-conjugate gradient iteration, we are able to bound its residual norm by a minimum polynomial of a perturbed matrix (i.e., the residual norm of exact GMRES applied to a perturbed matrix) multiplied by an amplification factor. This shows that the occurrence of near-breakdowns or loss of biorthogonality does not necessarily deter convergence of the residuals, provided that the amplification factor remains bounded. Numerical examples are given to gain insights into these bounds.
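The bare BiCG iteration the paper analyzes can be sketched as follows, with the shadow residual initialized as rt_0 = r_0 and no look-ahead or breakdown handling; the finite-precision effects studied above concern exactly this unguarded recurrence. The well-conditioned nonsymmetric test matrix is an illustrative assumption.

```python
import numpy as np

def bicg(A, b, iters, tol=1e-10):
    """Textbook bi-conjugate gradient for nonsymmetric A.  Shadow vectors
    start from rt_0 = r_0; no look-ahead or near-breakdown safeguards."""
    n = len(b)
    x = np.zeros(n)
    r = b.copy()
    rt = r.copy()                 # shadow residual (driven by A^T)
    p = r.copy()
    pt = rt.copy()
    rho = rt @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rho / (pt @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rt -= alpha * (A.T @ pt)
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_new = rt @ r          # vanishing rho_new is the classic breakdown
        beta = rho_new / rho
        p = r + beta * p
        pt = rt + beta * pt
        rho = rho_new
    return x

rng = np.random.default_rng(9)
n = 30
A = 5.0 * np.eye(n) + 0.4 * rng.standard_normal((n, n))  # well-conditioned, nonsymmetric
b = rng.standard_normal(n)
x = bicg(A, b, iters=n)
print(np.linalg.norm(A @ x - b))
```

On a benign problem like this the iteration converges smoothly; the paper's analysis explains why convergence often survives even when rho_new nearly vanishes or biorthogonality between r and rt is lost.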