A supernodal approach to sparse partial pivoting
SIAM Journal on Matrix Analysis and Applications, 1999
Abstract

Cited by 209 (24 self)
We investigate several ways to improve the performance of sparse LU factorization with partial pivoting, as used to solve unsymmetric linear systems. To perform most of the numerical computation in dense matrix kernels, we introduce the notion of unsymmetric supernodes. To better exploit the memory hierarchy, we introduce unsymmetric supernode-panel updates and two-dimensional data partitioning. To speed up symbolic factorization, we use Gilbert and Peierls's depth-first search with Eisenstat and Liu's symmetric structural reductions. We have implemented a sparse LU code using all these ideas. We present experiments demonstrating that it is significantly faster than earlier partial pivoting codes. We also compare performance with UMFPACK, which uses a multifrontal approach; our code is usually faster.
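The supernodal partial-pivoting code described in this abstract became the SuperLU library, which SciPy exposes through `scipy.sparse.linalg.splu`. A minimal usage sketch, with an illustrative matrix that is not from the paper:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Small unsymmetric sparse system (illustrative data only).
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [2.0, 5.0, 1.0],
                         [0.0, 3.0, 6.0]]))
b = np.array([1.0, 2.0, 3.0])

lu = splu(A)      # sparse LU with partial pivoting via SuperLU
x = lu.solve(b)
print(np.allclose(A @ x, b))   # True
```

The factor object can be reused to solve against many right-hand sides, which is where a direct method pays off over an iterative one.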
A sparse approximate inverse preconditioner for nonsymmetric linear systems
SIAM J. Sci. Comput., 1998
Abstract

Cited by 171 (22 self)
This paper is concerned with a new approach to preconditioning for large, sparse linear systems. A procedure for computing an incomplete factorization of the inverse of a nonsymmetric matrix is developed, and the resulting factorized sparse approximate inverse is used as an explicit preconditioner for conjugate gradient–type methods. Some theoretical properties of the preconditioner are discussed, and numerical experiments on test matrices from the Harwell–Boeing collection and from Tim Davis’s collection are presented. Our results indicate that the new preconditioner is cheaper to construct than other approximate inverse preconditioners. Furthermore, the new technique ensures convergence rates of the preconditioned iteration which are comparable with those obtained with standard implicit preconditioners.
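The paper computes a *factorized* approximate inverse by a biconjugation procedure; a simpler, related idea is the Frobenius-norm (SPAI-style) formulation, where each column of M ≈ A⁻¹ is a small least-squares problem over a prescribed sparsity pattern. A minimal sketch of that related variant, with dense arithmetic and illustrative data:

```python
import numpy as np

def spai_column(A, j, pattern):
    """One column of a sparse approximate inverse with a prescribed pattern.

    Minimizes ||A[:, pattern] @ m - e_j||_2 (SPAI-style, not the paper's
    factorized biconjugation algorithm). Dense A for clarity.
    """
    n = A.shape[0]
    e_j = np.zeros(n)
    e_j[j] = 1.0
    m, *_ = np.linalg.lstsq(A[:, pattern], e_j, rcond=None)
    col = np.zeros(n)
    col[pattern] = m
    return col

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
m0 = spai_column(A, 0, [0, 1])   # allow nonzeros only in rows 0 and 1
# A @ m0 approximates the first unit vector
```

Because each column is computed independently, this formulation parallelizes trivially, which is the usual motivation for explicit approximate-inverse preconditioners.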
Sparse matrices in Matlab: Design and implementation
1991
Abstract

Cited by 147 (21 self)
We have extended the matrix computation language and environment Matlab to include sparse matrix storage and operations. The only change to the outward appearance of the Matlab language is a pair of commands to create full or sparse matrices. Nearly all the operations of Matlab now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.
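The "space proportional to the number of nonzeros" design is the compressed-sparse-column format; SciPy's `csc_matrix` is an analogue of the representation the authors describe for MATLAB. A small illustration:

```python
import numpy as np
from scipy.sparse import csc_matrix

# A 1000x1000 matrix with only 3 nonzeros: storage is O(nnz + #columns),
# not O(n^2), mirroring the design described in the abstract.
rows = np.array([0, 499, 999])
cols = np.array([0, 500, 999])
vals = np.array([1.0, 2.0, 3.0])
A = csc_matrix((vals, (rows, cols)), shape=(1000, 1000))

print(A.nnz)           # 3
print(A.data.nbytes)   # 24 bytes of values, vs 8 MB for a dense float64 array
```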
Preconditioning techniques for large linear systems: A survey
J. Comput. Phys., 2002
Abstract

Cited by 118 (5 self)
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
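Incomplete factorization, one of the survey's central topics, is available off the shelf in SciPy. A hedged sketch of an ILU-preconditioned GMRES solve, with an illustrative tridiagonal test matrix that is not from the survey:

```python
import numpy as np
from scipy.sparse import csc_matrix, diags
from scipy.sparse.linalg import spilu, gmres, LinearOperator

# Nonsymmetric, diagonally dominant tridiagonal test matrix (illustrative).
n = 100
A = csc_matrix(diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n)))
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4)                  # incomplete LU factors
M = LinearOperator((n, n), matvec=ilu.solve)   # apply M ~= A^{-1}

x, info = gmres(A, b, M=M)                     # info == 0 on convergence
```

Wrapping the ILU solve in a `LinearOperator` is the standard way to hand an implicit preconditioner to SciPy's Krylov solvers.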
Conditions For Unique Graph Realizations
SIAM J. Comput., 1992
Abstract

Cited by 115 (1 self)
The graph realization problem is that of computing the relative locations of a set of vertices placed in Euclidean space, relying only upon some set of intervertex distance measurements. This paper is concerned with the closely related problem of determining whether or not a graph has a unique realization. Both these problems are NP-hard, but the proofs rely upon special combinations of edge lengths. If we assume the vertex locations are unrelated then the uniqueness question can be approached from a purely graph-theoretic angle that ignores edge lengths. This paper identifies three necessary graph-theoretic conditions for a graph to have a unique realization in any dimension. Efficient sequential and NC algorithms are presented for each condition, although these algorithms have very different flavors in different dimensions. 1. Introduction. Consider a graph G = (V, E) consisting of a set of n vertices and m edges, along with a real number associated with each edge. Now try to assi...
SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems
ACM Trans. Mathematical Software, 2003
Abstract

Cited by 105 (19 self)
We present the main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with a focus on scalability issues, and demonstrate the software’s parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication patterns, which lets us exploit techniques used in parallel sparse Cholesky algorithms to better parallelize both LU decomposition and triangular solution on large-scale distributed machines.
A Priori Sparsity Patterns For Parallel Sparse Approximate Inverse Preconditioners
1998
Abstract

Cited by 56 (6 self)
Parallel algorithms for computing sparse approximations to the inverse of a sparse matrix either use a prescribed sparsity pattern for the approximate inverse, or attempt to generate a good pattern as part of the algorithm. This paper demonstrates that for PDE problems, the patterns of powers of sparsified matrices (PSMs) can be used a priori as effective approximate inverse patterns, and that the additional effort of adaptive sparsity pattern calculations may not be required. PSM patterns are related to various other approximate inverse sparsity patterns through matrix graph theory and heuristics about the PDE's Green's function. A parallel implementation shows that PSM-patterned approximate inverses are significantly faster to construct than approximate inverses constructed adaptively, while often giving preconditioners of comparable quality. Key words. preconditioned iterative methods, sparse approximate inverses, graph theory, parallel computing. AMS subject classifications. 65F10, ...
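The "powers of sparsified matrices" idea reduces to simple structural arithmetic: drop small entries, then take the Boolean power of what remains. A minimal sketch under that reading of the abstract, with dense arithmetic for clarity and an illustrative matrix:

```python
import numpy as np
from scipy.sparse import csr_matrix

def psm_pattern(A, drop_tol, k):
    """A priori pattern: structure of the k-th power of a sparsified A.

    Entries smaller than drop_tol times the largest magnitude are dropped
    before taking the Boolean k-th power. A real implementation would work
    on the sparse structure directly rather than densifying.
    """
    mag = np.abs(A.toarray())
    S = (mag >= drop_tol * mag.max()).astype(np.int64)   # sparsified structure
    P = np.linalg.matrix_power(S, k)
    return csr_matrix(P > 0)

A = csr_matrix(np.array([[4.0, 0.1, 0.0],
                         [0.1, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))
pat = psm_pattern(A, drop_tol=0.2, k=2)   # the weak 0.1 couplings are dropped
```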
Modifying a Sparse Cholesky Factorization
1997
Abstract

Cited by 45 (15 self)
Given a sparse symmetric positive definite matrix AA^T and an associated sparse Cholesky factorization LL^T, we develop sparse techniques for obtaining the new factorization associated with either adding a column to A or deleting a column from A. Our techniques are based on an analysis and manipulation of the underlying graph structure and on ideas of Gill, Golub, Murray, and Saunders for modifying a dense Cholesky factorization. Our algorithm involves a new sparse matrix concept, the multiplicity of an entry in L. The multiplicity is essentially a measure of the number of times an entry is modified during symbolic factorization. We show that our methods extend to the general case where an arbitrary sparse symmetric positive definite matrix is modified. Our methods are optimal in the sense that they take time proportional to the number of nonzero entries in L that change. This work was supported by National Science Foundation grants DMS-9404431 and DMS-9504974.
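Adding a column c to A changes AA^T by the rank-one term cc^T, so the dense kernel here is the classic rank-one Cholesky update of Gill, Golub, Murray, and Saunders. A dense sketch of that kernel (the paper's contribution is doing it sparsely, touching only the entries of L that change):

```python
import numpy as np

def cholupdate(L, x):
    """Rank-one update: return L' with L' @ L'.T == L @ L.T + outer(x, x).

    Classic Givens-style dense update; dense analogue of the sparse
    column-addition problem described in the abstract.
    """
    L = L.copy()
    x = x.astype(float).copy()
    n = L.shape[0]
    for k in range(n):
        r = np.hypot(L[k, k], x[k])            # new diagonal entry
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k + 1:, k] = (L[k + 1:, k] + s * x[k + 1:]) / c
            x[k + 1:] = c * x[k + 1:] - s * L[k + 1:, k]
    return L

L0 = np.linalg.cholesky(np.array([[4.0, 2.0], [2.0, 3.0]]))
x = np.array([1.0, 1.0])
L1 = cholupdate(L0, x)
# L1 @ L1.T equals L0 @ L0.T + np.outer(x, x)
```

Downdating (deleting a column) is the analogous computation with hyperbolic rather than circular rotations, and can fail if the downdated matrix is not positive definite.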
Elimination Structures For Unsymmetric Sparse LU Factors
SIAM J. Matrix Analysis and Applications, 1993
Abstract

Cited by 39 (2 self)
The elimination tree is central to the study of Cholesky factorization of sparse symmetric positive definite matrices. In this paper, we generalize the elimination tree to a structure appropriate for the sparse LU factorization of unsymmetric matrices. We define a pair of directed acyclic graphs called elimination dags, and use them to characterize the zero-nonzero structures of the lower and upper triangular factors. We apply these elimination structures in a new algorithm to compute fill for sparse LU factorization. Our experimental results indicate that the new algorithm is usually faster than earlier methods. Key words. sparse matrix algorithms, Gaussian elimination, LU factorization, elimination tree, elimination dag. AMS(MOS) subject classifications. 05C20, 05C75, 65F05, 65F50. 1. Introduction. The elimination tree [10, 14] is central to the study of symmetric factorization of sparse positive definite matrices. Liu [11] surveys the use of this tree structure in many aspects o...
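For the symmetric case that the elimination dags generalize, the elimination tree itself is cheap to compute; a sketch of Liu's path-compression algorithm (the unsymmetric dag construction in the paper is more involved):

```python
import numpy as np
from scipy.sparse import csc_matrix

def elimination_tree(A):
    """Parent array of the elimination tree of a structurally symmetric A.

    Liu's algorithm with path compression; parent[j] == -1 marks a root.
    """
    n = A.shape[0]
    parent = np.full(n, -1)
    ancestor = np.full(n, -1)
    for j in range(n):
        for i in A.indices[A.indptr[j]:A.indptr[j + 1]]:
            while i < j:                   # climb from row i toward column j
                nxt = ancestor[i]
                ancestor[i] = j            # path compression
                if nxt == -1:
                    parent[i] = j
                    break
                i = nxt
    return parent

# Arrow matrix: columns 0..2 couple only to column 3.
A = csc_matrix(np.array([[1, 0, 0, 1],
                         [0, 1, 0, 1],
                         [0, 0, 1, 1],
                         [1, 1, 1, 1]], dtype=float))
print(elimination_tree(A))   # vertices 0, 1, 2 are children of root 3
```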
Sparse Gaussian Elimination on High Performance Computers
1996
Abstract

Cited by 36 (6 self)
This dissertation presents new techniques for solving large sparse unsymmetric linear systems on high performance computers, using Gaussian elimination with partial pivoting. The efficiencies of the new algorithms are demonstrated for matrices from various fields and for a variety of high performance machines. In the first part we discuss optimizations of a sequential algorithm to exploit the memory hierarchies that exist in most RISC-based superscalar computers. We begin with the left-looking supernode-column algorithm by Eisenstat, Gilbert and Liu, which includes Eisenstat and Liu's symmetric structural reduction for fast symbolic factorization. Our key contribution is to develop both numeric and symbolic schemes to perform supernode-panel updates to achieve better data reuse in cache and floating-point register...