Results 1 – 9 of 9
Computing Rank-Revealing QR Factorizations of Dense Matrices
 Argonne Preprint ANL/MCS-P559-0196, Argonne National Laboratory
, 1996
"... this paper, and we give only a brief synopsis here. For details, the reader is referred to the code. Test matrices 1 through 5 were designed to exercise column pivoting. Matrix 6 was designed to test the behavior of the condition estimation in the presence of clusters for the smallest singular value ..."
Abstract

Cited by 39 (2 self)
this paper, and we give only a brief synopsis here. For details, the reader is referred to the code. Test matrices 1 through 5 were designed to exercise column pivoting. Matrix 6 was designed to test the behavior of the condition estimation in the presence of clusters for the smallest singular value. For the other cases, we employed the LAPACK matrix generator xLATMS, which generates random symmetric matrices by multiplying a diagonal matrix with prescribed singular values by random orthogonal matrices from the left and right. For the break1 distribution, all singular values are 1.0 except for one. In the arithmetic and geometric distributions, they decay from 1.0 to a specified smallest singular value in an arithmetic and geometric fashion, respectively. In the "reversed" distributions, the order of the diagonal entries was reversed. For test cases 7 through 12, we used xLATMS to generate a matrix of order ...
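The construction described above (a diagonal matrix of prescribed singular values multiplied by random orthogonal factors from the left and right) can be mimicked in a few lines of NumPy. The sketch below is only an illustration of that idea, not the LAPACK xLATMS routine itself; the function name and the sample value 1e-6 for the smallest singular value are invented for the example:

```python
import numpy as np

def random_matrix_with_singular_values(sigma, seed=0):
    """Return A = U @ diag(sigma) @ V.T with random orthogonal U and V."""
    rng = np.random.default_rng(seed)
    n = len(sigma)
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return U @ np.diag(sigma) @ V.T

n = 6
arith = np.linspace(1.0, 1e-6, n)             # "arithmetic": linear decay to sigma_min
geom = np.geomspace(1.0, 1e-6, n)             # "geometric": constant-ratio decay
break1 = np.array([1.0] * (n - 1) + [1e-6])   # "break1": all 1.0 except one

A = random_matrix_with_singular_values(geom)
print(np.linalg.svd(A, compute_uv=False))     # recovers geom, in descending order
```

Reversing the order of the diagonal entries before the multiplication yields the "reversed" distributions mentioned above; the singular values of the product are the same, but the behavior of a pivoting algorithm applied to it can differ.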
A BLAS-3 version of the QR factorization with column pivoting
 SIAM J. SCI. COMPUT
, 1995
"... The QR factorization with column pivoting (QRP), originally suggested by Golub and Businger in 1965, is a popular approach to computing rankrevealing factorizations. Using BLAS Level 1, it was implemented in LINPACK, and, using BLAS Level 2, in LAPACK. While the BLAS Level2version delivers, in gen ..."
Abstract

Cited by 20 (3 self)
The QR factorization with column pivoting (QRP), originally suggested by Golub and Businger in 1965, is a popular approach to computing rank-revealing factorizations. Using BLAS Level 1, it was implemented in LINPACK, and, using BLAS Level 2, in LAPACK. While the BLAS Level-2 version delivers, in general, superior performance, it may result in worse performance for large matrix sizes due to cache effects. We introduce a modification of the QRP algorithm which allows the use of BLAS Level 3 kernels while maintaining the numerical behavior of the LINPACK and LAPACK implementations. Experimental comparisons of this approach with the LINPACK and LAPACK implementations on IBM RS/6000, SGI R8000, and DEC Alpha platforms show considerable performance improvements.
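The rank-revealing effect of column pivoting can be seen with SciPy's pivoted QR; this is only a small illustration of the QRP idea in general, not the BLAS-3 variant introduced in the paper. The matrix sizes and tolerance are chosen arbitrarily for the example:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
# Build a 6x6 matrix of exact rank 4 as a product of thin random factors.
A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 6))

Q, R, piv = qr(A, pivoting=True)     # A[:, piv] == Q @ R
diag = np.abs(np.diag(R))
# Column pivoting keeps |R[k, k]| non-increasing, so a sharp drop on the
# diagonal of R exposes the numerical rank.
rank = int(np.sum(diag > diag[0] * 1e-10))
print(diag)    # four O(1) entries, then a drop to roundoff level
print(rank)    # 4
```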
On Iterative Algorithms for Linear Least Squares Problems With Bound Constraints
, 1995
"... Three new iterative methods for the solution of the linear least squares problem with bound constraints are presented and their performance analyzed. The first is a modification of a method proposed by Lotstedt, while the two others are characterized by a technique allowing for fast active set chang ..."
Abstract

Cited by 17 (2 self)
Three new iterative methods for the solution of the linear least squares problem with bound constraints are presented and their performance analyzed. The first is a modification of a method proposed by Lötstedt, while the other two are characterized by a technique allowing for fast active-set changes, resulting in noticeable improvements in the speed at which constraints active at the solution are identified. The numerical efficiency of these algorithms is studied experimentally, with particular emphasis on the dependence on the choice of starting point and on the use of preconditioning for ill-conditioned problems.
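For readers unfamiliar with the problem class, a tiny bound-constrained instance can be set up with SciPy's `lsq_linear`. This solver is not one of the three methods of the paper; the example merely shows how bounds that are active at the solution change it relative to the unconstrained least squares fit (the data and the bound 0.45 are invented):

```python
import numpy as np
from scipy.optimize import lsq_linear

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Unconstrained least squares solution for comparison.
unconstrained = np.linalg.lstsq(A, b, rcond=None)[0]

# minimize ||A x - b||_2  subject to  0 <= x_i <= 0.45
res = lsq_linear(A, b, bounds=(0.0, 0.45))

print(unconstrained)   # [0.667, 0.5]: violates the upper bound
print(res.x)           # feasible; some bounds are active at the solution
```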
Mathematical Models for Transportation Demand Analysis
, 1996
"... this paper, we will concentrate on the overspeci#cation arising from the ASCs and will not consider other possible errors sources. ..."
Abstract

Cited by 7 (1 self)
this paper, we will concentrate on the overspecification arising from the ASCs and will not consider other possible error sources.
Conjugate Gradient Bundle Adjustment
"... Abstract. Bundle adjustment for multiview reconstruction is traditionally done using the LevenbergMarquardt algorithm with a direct linear solver, which is computationally very expensive. An alternative to this approach is to apply the conjugate gradients algorithm in the inner loop. This is appe ..."
Abstract
Abstract. Bundle adjustment for multi-view reconstruction is traditionally done using the Levenberg-Marquardt algorithm with a direct linear solver, which is computationally very expensive. An alternative to this approach is to apply the conjugate gradient algorithm in the inner loop. This is appealing since the main computational step of the CG algorithm involves only a simple matrix-vector multiplication with the Jacobian. In this work we improve on the latest published approaches to bundle adjustment with conjugate gradients by making full use of the least squares nature of the problem. We employ an easy-to-compute QR factorization based block preconditioner and show how a certain property of the preconditioned system allows us to reduce the work per iteration to roughly half of the standard CG algorithm.
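The observation that each CG step needs only matrix-vector products with the Jacobian is the heart of CGLS, a standard conjugate-gradient method for least squares. The plain-NumPy sketch below (not the paper's preconditioned variant) shows that the normal-equations matrix J.T @ J is never formed explicitly:

```python
import numpy as np

def cgls(J, b, iters=50, tol=1e-12):
    """Minimize ||J x - b||_2 by CGLS, starting from x = 0.

    Each iteration uses one product with J and one with J.T.
    """
    x = np.zeros(J.shape[1])
    r = b.copy()               # residual b - J x
    s = J.T @ r                # negative gradient of 0.5 * ||J x - b||^2
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        if np.sqrt(gamma) < tol:
            break              # gradient small: converged
        q = J @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = J.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(2)
J = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x = cgls(J, b)
print(np.allclose(x, np.linalg.lstsq(J, b, rcond=None)[0]))  # True
```

In bundle adjustment J is large and sparse, so these two products are cheap, which is exactly the appeal the abstract points out; a preconditioner (such as the QR-based block preconditioner of the paper) then controls the iteration count.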
Abstract
Solving large dense linear least squares problems on parallel distributed computers. Application to the Earth's gravity field computation. Thesis defended on 21 March 2006 in Toulouse before a jury composed of:
Block-Partitioned Algorithms for Solving the Linear Least Squares Problem
"... The linear least squares problem arises in many areas of sciences and engineerings. When the coe cient matrix has full rank, the solution can be obtained in a fast way by using QR factorization with BLAS3. In contrast, when the matrix is rankdeficient, or the rank is unknown, other slower methods ..."
Abstract
The linear least squares problem arises in many areas of science and engineering. When the coefficient matrix has full rank, the solution can be obtained quickly by using the QR factorization with BLAS-3. In contrast, when the matrix is rank-deficient, or the rank is unknown, other, slower methods must be applied: the SVD or complete orthogonal decompositions. The SVD gives a more reliable determination of the rank but is computationally more expensive. On the other hand, the complete orthogonal decomposition is faster and in practice works well. We present several new implementations for solving the linear least squares problem by means of the complete orthogonal decomposition that are faster than the algorithms currently included in LAPACK. Experimental comparison of our methods with the LAPACK implementations on a wide range of platforms (such as IBM RS/6000-370, SUN HyperSPARC, SGI R8000, DEC Alpha/AXP, HP 9000/715, etc.) shows considerable performance improvements. Some of the new code has already been included in the latest release of LAPACK (3.0). In addition, for full-rank matrices the performance of the new methods is very close to that of the fast method based on the QR factorization with BLAS-3, thus providing a valuable general tool for full-rank matrices and rank-deficient matrices, as well as matrices with unknown rank.
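A complete orthogonal decomposition can be assembled from two QR factorizations; the sketch below builds one from SciPy primitives to compute the minimum-norm solution for a rank-deficient matrix. The function name, tolerance, and test data are ours, and this is an illustration of the decomposition, not the LAPACK code discussed in the abstract:

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def cod_lstsq(A, b, tol=1e-10):
    """Minimum-norm least squares solution of A x ~ b via a complete
    orthogonal decomposition: pivoted QR reveals the rank r, then a
    second QR compresses [R11 R12] into a square triangular factor."""
    n = A.shape[1]
    Q, R, piv = qr(A, pivoting=True)          # A[:, piv] = Q @ R
    diag = np.abs(np.diag(R))
    r = int(np.sum(diag > diag[0] * tol))     # numerical rank estimate
    R1 = R[:r, :]                             # full-row-rank part [R11 R12]
    Z, T = qr(R1.T, mode='economic')          # R1 = T.T @ Z.T, T.T lower triangular
    y = solve_triangular(T.T, Q[:, :r].T @ b, lower=True)
    x = np.zeros(n)
    x[piv] = Z @ y                            # undo the column permutation
    return x, r

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 5))   # rank 3
b = rng.standard_normal(8)
x_cod, r = cod_lstsq(A, b)
print(r)                                      # 3
```

Because `Z` has orthonormal columns spanning the row space of `R1`, `Z @ y` has no null-space component, so the result agrees with the SVD-based minimum-norm solution while avoiding the more expensive SVD, which is the trade-off the abstract describes.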
A block algorithm for computing . . .
 NUMERICAL ALGORITHMS 2(3-4):371-392, 1992.
, 1992
"... We present a block algorithm for computing rankrevealing QR factorizations (RRQR factorizations) of rankdecient matrices. The algorithm is a block generalization of the RRQRalgorithm of Foster and Chan. While the unblocked algorithm reveals the rank by peeling off small singular values one by one ..."
Abstract
We present a block algorithm for computing rank-revealing QR factorizations (RRQR factorizations) of rank-deficient matrices. The algorithm is a block generalization of the RRQR algorithm of Foster and Chan. While the unblocked algorithm reveals the rank by peeling off small singular values one by one, our algorithm identifies groups of small singular values. In our block algorithm, we use incremental condition estimation to compute approximations to the null vectors of the matrix. By applying another (in essence also rank-revealing) orthogonal factorization to the null-space matrix thus created, we can then generate triangular blocks with small norm in the lower right part of R. This scheme is applied in an iterative fashion until the rank has been revealed in the (updated) QR factorization. We show that the algorithm produces the correct solution, under very weak assumptions on the orthogonal factorization used for the null-space matrix. We then discuss issues concerning an efficient implementation of the algorithm and present some numerical experiments. Our experiments show that the block algorithm is reliable and successfully captures several small singular values ...
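The null-space-driven step described in the abstract can be illustrated on a small triangular factor. The toy code below uses exact trailing singular vectors in place of incremental condition estimation, so it shows only the mechanism (orthogonal factorization of a null-space matrix producing a small trailing block in R), not the paper's blocked algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 6, 2                                  # matrix order and nullity
# Triangular factor R of a 6x6 matrix with exact rank n - k = 4.
A = rng.standard_normal((n, n - k)) @ rng.standard_normal((n - k, n))
R = np.linalg.qr(A)[1]

# Approximate null vectors: here the trailing right singular vectors
# (the paper uses incremental condition estimation instead).
N = np.linalg.svd(R)[2][n - k:].T            # n x k, with R @ N ~ 0

# Orthogonal factorization of the null-space matrix: N = Z[:, :k] @ C,
# so the first k columns of R @ Z have small norm.
Z = np.linalg.qr(N, mode='complete')[0]      # n x n orthogonal

# Move the small columns last and re-triangularize; the trailing k x k
# block of the new R then has small norm, revealing the rank.
M = (R @ Z)[:, list(range(k, n)) + list(range(k))]
R_new = np.linalg.qr(M)[1]
print(np.linalg.norm(R_new[n - k:, n - k:]))   # ~ machine epsilon
```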