Results 1 - 5 of 5
Multifrontal QR factorization in a multiprocessor environment
, 1994
"... We describe the design and implementation of a parallel QR decomposition algorithm for a large sparse matrix A. The algorithm is based on the multifrontal approach and makes use of Householder transformations. The tasks are distributed among processors according to an assembly tree which is built ..."
Abstract

Cited by 29 (9 self)
We describe the design and implementation of a parallel QR decomposition algorithm for a large sparse matrix A. The algorithm is based on the multifrontal approach and makes use of Householder transformations. The tasks are distributed among processors according to an assembly tree which is built from the symbolic factorization of the matrix A^T A. Uniprocessor issues are first addressed. We then discuss the multiprocessor implementation of the method. Parallelization of both the factorization phase and the solve phase is considered. We use relaxation of the sparsity structure of both the original matrix and the frontal matrices to improve the performance. We show that, in this case, the use of Level 3 BLAS can lead to very significant performance improvements. The eight-processor Alliant FX/80 is used to illustrate our discussion.
1 ENSEEIHT-IRIT (Toulouse, France), amestoy@enseeiht.fr. 2 CERFACS (Toulouse, France), also Rutherford Appleton Lab. (England), duff@cerfac...
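The dense Householder kernel applied within each frontal matrix can be sketched in a few lines. The following is a minimal dense sketch in Python/NumPy, not the paper's sparse multifrontal implementation; the function name and the explicit accumulation of Q are illustrative only:

```python
import numpy as np

def householder_qr(F):
    """Householder QR of a small dense frontal matrix F (m >= n).

    Applies one reflector per column; Q is accumulated explicitly
    here only for illustration (real codes keep it implicit).
    """
    m, n = F.shape
    R = F.astype(float).copy()
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])  # avoid cancellation
        v /= np.linalg.norm(v)
        # Apply H_k = I - 2 v v^T to the trailing submatrix of R
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        # Accumulate Q <- Q H_k
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, np.triu(R)
```

In a multifrontal code this kernel runs on each frontal matrix as the assembly tree is traversed, which is what makes dense BLAS kernels applicable to a sparse problem.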
Sparse Multifrontal Rank Revealing QR Factorization
 SIAM J. Matrix Anal. Appl.
, 1995
"... We describe an algorithm to compute a rank revealing sparse QR factorization. We augment a basic sparse multifrontal QR factorization with an incremental condition estimator to provide an estimate of the least singular value and vector for each successive column of R. We remove a column from R as ..."
Abstract

Cited by 20 (0 self)
We describe an algorithm to compute a rank revealing sparse QR factorization. We augment a basic sparse multifrontal QR factorization with an incremental condition estimator to provide an estimate of the least singular value and vector for each successive column of R. We remove a column from R as soon as the condition estimate exceeds a tolerance, using the approximate singular vector to select a suitable column. Removing columns, or pivoting, requires a dynamic data structure and necessarily degrades sparsity. But most of the additional work fits naturally into the multifrontal factorization's use of efficient dense vector kernels, minimizing overall cost. Further, pivoting as soon as possible reduces the cost of pivot selection and data access. We present a theoretical analysis that shows that our use of approximate singular vectors does not degrade the quality of our rank-revealing factorization; we achieve an exponential bound like methods that use exact singular vectors. We prov...
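The column-accepting/rejecting loop the abstract describes can be illustrated with a toy sketch. For brevity this uses an exact SVD where the paper uses an incremental condition estimator, and the function name and tolerance are illustrative only:

```python
import numpy as np

def rank_revealing_columns(A, tol=1e-8):
    """Greedy column-selection sketch: consider columns of A one at a
    time, rejecting any column whose inclusion drives the smallest
    singular value of the accepted set below tol times the largest.

    An exact SVD stands in here for the paper's cheap incremental
    condition estimator; the structure of the loop is the point.
    """
    kept = []
    for j in range(A.shape[1]):
        trial = kept + [j]
        s = np.linalg.svd(A[:, trial], compute_uv=False)
        if s[-1] > tol * s[0]:   # condition estimate still acceptable
            kept = trial
    return kept
```

The paper's contribution is doing this test incrementally and inside the multifrontal factorization, so that each decision costs far less than a fresh SVD.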
Multifrontal Computation with the Orthogonal Factors of Sparse Matrices
 SIAM Journal on Matrix Analysis and Applications
, 1994
"... . This paper studies the solution of the linear least squares problem for a large and sparse m by n matrix A with m n by QR factorization of A and transformation of the righthand side vector b to Q T b. A multifrontalbased method for computing Q T b using Householder factorization is presented ..."
Abstract

Cited by 10 (0 self)
This paper studies the solution of the linear least squares problem for a large and sparse m by n matrix A with m ≥ n by QR factorization of A and transformation of the right-hand side vector b to Q^T b. A multifrontal-based method for computing Q^T b using Householder factorization is presented. A theoretical operation count for the K by K unbordered grid model problem and problems defined on graphs with √n-separators shows that the proposed method requires O(N_R) storage and multiplications to compute Q^T b, where N_R = O(n log n) is the number of nonzeros of the upper triangular factor R of A. In order to introduce BLAS-2 operations, Schreiber and Van Loan's Storage-Efficient WY Representation [SIAM J. Sci. Stat. Comput., 10 (1989), pp. 55-57] is applied for the orthogonal factor Q_i of each frontal matrix F_i. If this technique is used, the bound on storage increases to O(n (log n)^2). Some numerical results for the grid model problems as well as Harwell-Boeing problems...
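Computing Q^T b from stored Householder vectors, rather than from an explicitly formed Q, can be sketched as follows. This is a minimal dense analogue in NumPy; the function names are illustrative and the multifrontal structure is omitted:

```python
import numpy as np

def qr_vectors(A):
    """Householder QR that keeps the unit reflector vectors (full
    length, zero-padded) as columns of V, so Q^T can be applied
    later without ever forming Q."""
    m, n = A.shape
    R = A.astype(float).copy()
    V = np.zeros((m, n))
    for k in range(n):
        x = R[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        v /= np.linalg.norm(v)
        V[k:, k] = v
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
    return V, np.triu(R)

def apply_qt(V, b):
    """Compute Q^T b by applying the stored reflectors to b in
    factorization order: O(mn) work, no m-by-m Q needed."""
    y = b.astype(float).copy()
    for k in range(V.shape[1]):
        v = V[:, k]
        y -= 2.0 * v * (v @ y)
    return y
```

The least squares solution then follows from the triangular solve R x = (Q^T b)[:n], which is exactly the use the paper makes of Q^T b.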
Computing sparse orthogonal factors in MATLAB
, 1998
"... In this report a new version of the multifrontal sparse QR factorization routine sqr, originally by Matstoms, for general sparse matrices is described and evaluated. In the previous version the orthogonal factor Q is discarded due to storage considerations. The new version provides Q and uses the mu ..."
Abstract

Cited by 2 (0 self)
In this report a new version of the multifrontal sparse QR factorization routine sqr, originally by Matstoms, for general sparse matrices is described and evaluated. In the previous version the orthogonal factor Q is discarded due to storage considerations. The new version provides Q and uses the multifrontal structure to store this orthogonal factor in a compact way. A new data class with overloaded operators is implemented in MATLAB to provide an easy usage of the compact orthogonal factors. This implicit way of storing the orthogonal factor also results in faster computation and application of Q and Q^T. Examples are given where the new version is up to four times faster than the built-in function qr in MATLAB when computing only R, and up to 1000 times faster when computing both Q and R. The sqr package is available at URL: http://www.mai.liu.se/~milun/sls/.
Key words: QR factorization, sparse problems, multifrontal method, orthogonal factorization.
1 Introduction
Let A ∈ ℝ...
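The idea of an implicitly stored Q with overloaded operators can be mimicked in Python. The report's sqr is a MATLAB package with its own class; the layout below is purely illustrative, storing Q as unit Householder vectors and applying it on demand via the @ operator:

```python
import numpy as np

class ImplicitQ:
    """Illustrative analogue of a compactly stored orthogonal factor:
    Q is kept as unit Householder vectors (columns of V) and is never
    formed explicitly; Q @ b and Q.T @ b apply the reflectors."""

    def __init__(self, V):
        self.V = V  # one unit reflector vector per column

    @property
    def T(self):
        return _ImplicitQT(self.V)

    def __matmul__(self, b):
        # Q @ b: apply reflectors in reverse factorization order
        y = np.array(b, dtype=float)
        for k in range(self.V.shape[1] - 1, -1, -1):
            v = self.V[:, k]
            y -= 2.0 * v * (v @ y)
        return y

class _ImplicitQT:
    def __init__(self, V):
        self.V = V

    def __matmul__(self, b):
        # Q.T @ b: apply reflectors in factorization order
        y = np.array(b, dtype=float)
        for k in range(self.V.shape[1]):
            v = self.V[:, k]
            y -= 2.0 * v * (v @ y)
        return y
```

Because applying n reflectors costs O(mn) rather than the O(m^2) of a dense matrix-vector product with an explicit Q, this storage scheme is also faster to use, which matches the report's observation.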
SPARSE LINEAR ALGEBRA in and around the APO-ENSEEIHT-IRIT group
"... We describe the work done in sparse linear algebra, in the "Algorithmique Parall`ele et Optimization" group of the ENSEEIHTIRIT laboratory. The research activities, described in this paper, result from collaborations with CERFACS, RAL and University of FLorida. These include work on com ..."
Abstract
We describe the work done in sparse linear algebra in the "Algorithmique Parallèle et Optimisation" group of the ENSEEIHT-IRIT laboratory. The research activities described in this paper result from collaborations with CERFACS, RAL, and the University of Florida. These include work on computational kernels for linear algebra, the solution of sparse systems by both direct and iterative methods, and the study of element-by-element preconditioners. The objective of this paper is to describe the principal research themes explored in these areas. We also comment on likely future developments.
1 Introduction
We consider the solution of
Ax = b, (1)
where A is a large sparse matrix. If the matrix A is structured then it may be written as
A = Σ_{i=1}^{p} A_i. (2)
Sparse structured linear systems arise in many applications. The elementary matrices A_i are usually full or nearly full matrices. Both classes of matrices (structured and unstructured) are being considered in our research studies...
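The structured form A = Σ A_i, with each small elementary matrix A_i contributing to a few global rows and columns, can be illustrated with a standard element-assembly sketch (dense for clarity; the function name and element format are illustrative):

```python
import numpy as np

def assemble(elements, n):
    """Assemble A = sum_i A_i from dense elementary matrices.

    Each element is a pair (idx, Ai): the global indices it touches
    and its small dense matrix. Overlapping entries accumulate,
    as in finite-element assembly; a dense A stands in for the
    sparse storage a real code would use.
    """
    A = np.zeros((n, n))
    for idx, Ai in elements:
        A[np.ix_(idx, idx)] += Ai
    return A
```

Element-by-element preconditioners, mentioned above, exploit exactly this representation: they work with the A_i directly instead of the assembled A.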