Results 1 – 6 of 6
Methods For Large Scale Total Least Squares Problems
, 1999
Abstract

Cited by 13 (0 self)
For solving the total least squares (TLS) problem min_{E,f} ||(E, f)||_F subject to (A+E)x = b+f, where A is large and sparse or structured, Björck suggested a method based on Rayleigh quotient iteration. This method reduces the problem to the solution of a sequence of symmetric, positive definite linear systems of the form (A^T A − σ̄² I)z = g, where σ̄ is an approximation to the smallest singular value of (A, b). A preconditioned conjugate gradient method, using a sparse, possibly incomplete, Cholesky factor of A^T A, can be used for solving these systems. In this paper the method is further developed. The choice of initial approximation and termination criteria are discussed. Numerical results confirm that the method achieves rapid convergence and good accuracy for problems which are not too ill-conditioned.
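As a hedged illustration of the iteration this abstract describes (a dense-matrix sketch, not the sparse PCG implementation the paper develops), one can repeatedly solve (A^T A − σ̄² I)x = A^T b and update σ̄² as the Rayleigh quotient ||Ax − b||² / (1 + ||x||²); the fixed point is the TLS solution. Function names, iteration count, and tolerances below are illustrative.

```python
import numpy as np

def tls_rayleigh_iteration(A, b, iters=30):
    """Sketch of a Rayleigh-quotient-style TLS iteration (dense version).

    Repeatedly solves (A^T A - sigma^2 I) x = A^T b, updating sigma^2
    with the Rayleigh quotient ||Ax - b||^2 / (1 + ||x||^2), which
    approaches the smallest singular value of (A, b) for problems
    that are not too ill-conditioned.
    """
    AtA, Atb = A.T @ A, A.T @ b
    n = A.shape[1]
    sigma2 = 0.0                        # first pass: ordinary least squares
    x = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(AtA - sigma2 * np.eye(n), Atb)
        r = A @ x - b
        sigma2 = (r @ r) / (1.0 + x @ x)  # Rayleigh quotient update
    return x, np.sqrt(sigma2)

def tls_svd(A, b):
    """Reference TLS solution via the SVD of the compound matrix (A, b)."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                          # right singular vector for sigma_{n+1}
    return -v[:n] / v[n]
```

On a small, well-conditioned problem the iteration agrees with the SVD reference to working accuracy; Björck's method replaces the dense solve with preconditioned conjugate gradients using an (incomplete) Cholesky factor of A^T A.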
Multifrontal Computation with the Orthogonal Factors of Sparse Matrices
 SIAM Journal on Matrix Analysis and Applications
, 1994
Abstract

Cited by 9 (0 self)
This paper studies the solution of the linear least squares problem for a large and sparse m by n matrix A with m ≥ n by QR factorization of A and transformation of the right-hand side vector b to Q^T b. A multifrontal-based method for computing Q^T b using Householder factorization is presented. A theoretical operation count for the K by K unbordered grid model problem and problems defined on graphs with √n-separators shows that the proposed method requires O(N_R) storage and multiplications to compute Q^T b, where N_R = O(n log n) is the number of nonzeros of the upper triangular factor R of A. In order to introduce BLAS-2 operations, Schreiber and Van Loan's Storage-Efficient WY Representation [SIAM J. Sci. Stat. Computing, 10 (1989), pp. 55–57] is applied for the orthogonal factor Q_i of each frontal matrix F_i. If this technique is used, the bound on storage increases to O(n (log n)²). Some numerical results for the grid model problems as well as Harwell-Boeing problems...
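The kernel whose cost the abstract counts, applying Q^T to b from stored Householder reflections without ever forming Q, can be sketched densely; the multifrontal organization and sparsity exploitation are the paper's actual contribution, and all names below are illustrative.

```python
import numpy as np

def householder_qr(A):
    """Dense Householder QR. Returns R plus the Householder vectors and
    scalars from which Q^T (or Q) can be applied without forming Q."""
    R = A.astype(float).copy()
    m, n = R.shape
    vs, betas = [], []
    for k in range(n):
        x = R[k:, k].copy()
        alpha = -np.copysign(np.linalg.norm(x), x[0] if x[0] != 0 else 1.0)
        v = x
        v[0] -= alpha                   # v = x - alpha * e1
        vtv = v @ v
        beta = 0.0 if vtv == 0 else 2.0 / vtv
        # Apply H_k = I - beta * v v^T to the trailing submatrix
        R[k:, k:] -= np.outer(beta * v, v @ R[k:, k:])
        vs.append(v.copy())
        betas.append(beta)
    return R, vs, betas

def apply_qt(vs, betas, b):
    """Compute Q^T b by applying the stored reflections in factorization order."""
    y = b.astype(float).copy()
    for k, (v, beta) in enumerate(zip(vs, betas)):
        y[k:] -= beta * v * (v @ y[k:])
    return y
```

With R and Q^T b in hand, the least squares solution follows from the triangular solve R[:n, :n] x = (Q^T b)[:n]; the multifrontal method performs the same computation frontal matrix by frontal matrix.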
Improved error bounds for underdetermined system solvers
 SIAM J. Matrix Anal. Appl
, 1993
Abstract

Cited by 7 (1 self)
The minimal 2-norm solution to an underdetermined system Ax = b of full rank can be computed using a QR factorization of A^T in two different ways. One requires storage and reuse of the orthogonal matrix Q, while the method of seminormal equations does not. Existing error analyses show that both methods produce computed solutions whose normwise relative error is bounded to first order by c κ_2(A) u, where c is a constant depending on the dimensions of A, κ_2(A) = ||A^+||_2 ||A||_2 is the 2-norm condition number, and u is the unit roundoff. We show that these error bounds can be strengthened by replacing κ_2(A) by the potentially much smaller quantity cond_2(A) = || |A^+| |A| ||_2, which is invariant under row scaling of A. We also show that cond_2(A) reflects the sensitivity of the minimum norm solution x to row-wise relative perturbations in the data A and b. For square linear systems Ax = b, row equilibration is shown to endow...
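A quick numerical check of the two quantities the abstract compares (illustrative only, for a small full-row-rank A): cond_2(A) = || |A^+| |A| ||_2 is unchanged by row scaling, while κ_2(A) can grow without bound.

```python
import numpy as np

def kappa2(A):
    """Classical 2-norm condition number: kappa_2(A) = ||A^+||_2 * ||A||_2."""
    return np.linalg.norm(np.linalg.pinv(A), 2) * np.linalg.norm(A, 2)

def cond2(A):
    """Row-scaling-invariant quantity: cond_2(A) = || |A^+| |A| ||_2."""
    return np.linalg.norm(np.abs(np.linalg.pinv(A)) @ np.abs(A), 2)

# Full-row-rank underdetermined A, and a badly scaled row scaling D
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
D = np.diag([1e6, 1.0])
```

Here cond2(D @ A) equals cond2(A) to rounding error, because (DA)^+ = A^+ D^{-1} for full-row-rank A and the diagonal factors cancel entrywise, whereas kappa2(D @ A) is larger than kappa2(A) by several orders of magnitude.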
Dealing with Dense Rows in the Solution of Sparse Linear Least Squares Problems
, 1995
Abstract

Cited by 5 (0 self)
Sparse linear least squares problems containing a few relatively dense rows occur frequently in practice. Straightforward solution of these problems can cause catastrophic fill and deliver extremely poor performance. This paper studies a scheme for solving such problems efficiently by handling dense rows and sparse rows separately. How a sparse matrix is partitioned into dense rows and sparse rows determines the efficiency of the overall solution process. A new algorithm is proposed to find a partition of a sparse matrix which leads to satisfactory or even optimal performance. Extensive numerical experiments are performed to demonstrate the effectiveness of the proposed scheme. A MATLAB implementation is included. This work was supported in part by the Cornell Theory Center, which receives funding from members of its Corporate Research Institute, the National Science Foundation (NSF), the Advanced Research Projects Agency (ARPA), the National Institutes of Health (NIH), New York S...
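The partitioning step itself can be sketched with a simple nonzero-count threshold; the paper's algorithm chooses the split far more carefully, so the threshold rule and names here are only a hypothetical stand-in.

```python
import numpy as np
from scipy.sparse import random as sprandom, csr_matrix

def split_dense_rows(A, threshold):
    """Partition the rows of a sparse matrix into a 'sparse' block and a
    'dense' block by nonzero count -- a naive stand-in for the paper's
    partitioning algorithm."""
    A = csr_matrix(A)
    nnz_per_row = np.diff(A.indptr)       # nonzeros in each row (CSR)
    dense_mask = nnz_per_row > threshold
    return A[~dense_mask], A[dense_mask]

# Mostly sparse matrix with three artificially dense rows
A = sprandom(100, 20, density=0.05, format="csr", random_state=42).tolil()
A[0, :] = 1.0
A[1, :] = 1.0
A[2, :] = 1.0
A = A.tocsr()
A_sparse, A_dense = split_dense_rows(A, threshold=10)
```

The two blocks can then be treated separately, e.g. factoring the sparse block without the fill the dense rows would otherwise cause, and folding the dense rows in afterwards.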
Stability of Fast Algorithms for Structured Linear Systems
, 1997
Abstract

Cited by 4 (2 self)
We survey the numerical stability of some fast algorithms for solving systems of linear equations and linear least squares problems with a low displacement-rank structure. For example, the matrices involved may be Toeplitz or Hankel. We consider algorithms which incorporate pivoting without destroying the structure, and describe some recent results on the stability of these algorithms. We also compare these results with the corresponding stability results for the well-known algorithms of Schur/Bareiss and Levinson, and for algorithms based on the seminormal equations. Key words. Bareiss algorithm, Levinson algorithm, Schur algorithm, Toeplitz matrices, displacement rank, generalized Schur algorithm, numerical stability. AMS subject classifications. 65F05, 65G05, 47B35, 65F30. 1. Motivation. The standard direct method for solving dense n × n systems of linear equations is Gaussian elimination with partial pivoting. The usual implementation requires of order n^3 arithmetic op...
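For concreteness, here is the structured-system setting in miniature: SciPy's Levinson-recursion-based solver exploits the Toeplitz structure in O(n^2) work, versus O(n^3) for generic Gaussian elimination. This only illustrates the problem class; it is not one of the pivoted algorithms the survey analyzes. The particular matrix is illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

# A diagonally dominant (hence well-conditioned) Toeplitz system
c = np.array([4.0, 1.0, 0.5, 0.25, 0.1])    # first column
r = np.array([4.0, 0.8, 0.4, 0.2, 0.05])    # first row
b = np.arange(1.0, 6.0)

x_fast = solve_toeplitz((c, r), b)           # Levinson recursion, O(n^2)
x_dense = np.linalg.solve(toeplitz(c, r), b)  # dense elimination, O(n^3)
```

The Levinson recursion requires all leading principal submatrices to be nonsingular (guaranteed here by diagonal dominance), which is precisely the kind of restriction the stability results surveyed in the paper address.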
Parallel Multifrontal Solution Of Sparse Linear Least Squares Problems On Distributed-Memory Multiprocessors
 Advanced Computing Research Institute, Center for Theory and Simulation in Science and Engineering, Cornell
, 1994
Abstract

Cited by 3 (0 self)
We describe the issues involved in the design and implementation of efficient parallel algorithms for solving sparse linear least squares problems on distributed-memory multiprocessors. We consider both the QR factorization method due to Golub and the method of corrected seminormal equations due to Björck. The major tasks involved are sparse QR factorization, sparse triangular solution and sparse matrix-vector multiplication. The sparse QR factorization is accomplished by a parallel multifrontal scheme recently introduced. New parallel algorithms for solving the related sparse triangular systems and for performing sparse matrix-vector multiplications are proposed. The arithmetic and communication complexities of our algorithms on regular grid problems are presented. Experimental results on an Intel iPSC/860 machine are described. Key words. parallel algorithms, sparse matrix, orthogonal factorization, multifrontal method, least squares problems, triangular solution, distributed-me...
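One of the two methods considered, Björck's corrected seminormal equations, can be sketched densely: solve R^T R x = A^T b with R from a QR factorization of A (Q is never stored), then apply one step of iterative refinement. The sparse, parallel multifrontal machinery is the paper's subject; this is only the underlying numerical scheme, with illustrative names.

```python
import numpy as np
from scipy.linalg import solve_triangular

def csne(A, b):
    """Corrected seminormal equations: solve R^T R x = A^T b using only the
    triangular factor R of A's QR factorization, then apply one
    refinement (correction) step."""
    R = np.linalg.qr(A, mode="r")            # only R is kept; Q is discarded

    def seminormal_solve(rhs):
        # R^T y = rhs, then R x = y  (two triangular solves)
        y = solve_triangular(R, rhs, trans="T", lower=False)
        return solve_triangular(R, y, lower=False)

    x = seminormal_solve(A.T @ b)
    dx = seminormal_solve(A.T @ (b - A @ x))  # one correction step
    return x + dx
```

In the parallel setting these triangular solves and the sparse matrix-vector products A @ x and A.T @ r are exactly the "major tasks" the abstract enumerates.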