Results 1–10 of 20
Sparse matrices in Matlab: Design and implementation
, 1991
Abstract

Cited by 131 (20 self)
We have extended the matrix computation language and environment Matlab to include sparse matrix storage and operations. The only change to the outward appearance of the Matlab language is a pair of commands to create full or sparse matrices. Nearly all the operations of Matlab now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.
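As a rough illustration of this storage design (a sketch in Python with SciPy, not the paper's Matlab implementation), storage grows with the number of nonzeros rather than the matrix dimensions, and operations keep results sparse without user intervention:

```python
import numpy as np
from scipy import sparse

# Build a 1000-by-1000 matrix with only a tridiagonal band of nonzeros.
n = 1000
diagonals = [np.ones(n - 1), 2 * np.ones(n), np.ones(n - 1)]
A = sparse.diags(diagonals, offsets=[-1, 0, 1], format="csc")

# Storage is proportional to the number of nonzeros (~3n values plus
# index arrays), not to n*n as a dense array would require.
print(A.nnz)              # 2998 nonzero entries
print(A.data.nbytes)      # bytes spent on the nonzero values only

# Operations produce sparse results automatically, mirroring the
# "sparse in, sparse out" behavior described in the abstract.
B = A @ A
print(sparse.issparse(B))  # True
```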
An Unsymmetric-Pattern Multifrontal Method for Sparse LU Factorization
 SIAM J. MATRIX ANAL. APPL
, 1994
Abstract

Cited by 118 (29 self)
Sparse matrix factorization algorithms for general problems are typically characterized by irregular memory access patterns that limit their performance on parallel-vector supercomputers. For symmetric problems, methods such as the multifrontal method avoid indirect addressing in the innermost loops by using dense matrix kernels. However, no efficient LU factorization algorithm based primarily on dense matrix kernels exists for matrices whose pattern is very unsymmetric. We address this deficiency and present a new unsymmetric-pattern multifrontal method based on dense matrix kernels. As in the classical multifrontal method, advantage is taken of repetitive structure in the matrix by factorizing more than one pivot in each frontal matrix, thus enabling the use of Level 2 and Level 3 BLAS. The performance is compared with the classical multifrontal method and other unsymmetric solvers on a CRAY Y-MP.
Efficient MATLAB computations with sparse and factored tensors
 SIAM JOURNAL ON SCIENTIFIC COMPUTING
, 2007
Abstract

Cited by 45 (13 self)
In this paper, the term tensor refers simply to a multidimensional or $N$-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: A Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
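The coordinate scheme can be sketched as follows; this is a minimal Python illustration of COO storage for a 3-way tensor, not the Tensor Toolbox's actual implementation, and the mode-0 contraction is a simplified example:

```python
import numpy as np

# Coordinate (COO) storage for a sparse 3-way tensor: each nonzero is
# an index tuple plus its value; zeros are never stored.
subs = np.array([[0, 0, 0],     # indices of the nonzero entries
                 [1, 2, 3],
                 [4, 1, 2]])
vals = np.array([1.5, -2.0, 3.0])
shape = (5, 4, 6)

# Elementwise scaling touches only the stored nonzeros.
scaled = 2.0 * vals

# Mode-0 tensor-times-vector: contract the first index with a vector.
# The result stays sparse in the two remaining modes.
v = np.arange(5, dtype=float)
out = {}
for (i, j, k), x in zip(subs, vals):
    out[(j, k)] = out.get((j, k), 0.0) + v[i] * x

print(out)   # {(0, 0): 0.0, (2, 3): -2.0, (1, 2): 12.0}
```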
Predicting Structure In Sparse Matrix Computations
 SIAM J. Matrix Anal. Appl
, 1994
Abstract

Cited by 40 (4 self)
Many sparse matrix algorithms (for example, solving a sparse system of linear equations) begin by predicting the nonzero structure of the output of a matrix computation from the nonzero structure of its input. This paper is a catalog of ways to predict nonzero structure. It contains known results for problems including various matrix factorizations, and new results for problems including some eigenvector computations.

Key words. sparse matrix algorithms, graph theory, matrix factorization, systems of linear equations, eigenvectors

AMS(MOS) subject classifications. 15A18, 15A23, 65F50, 68R10

1. Introduction. A sparse matrix algorithm is an algorithm that performs a matrix computation in such a way as to take advantage of the zero/nonzero structure of the matrices involved. Usually this means not explicitly storing or manipulating some or all of the zero elements; sometimes sparsity can also be exploited to work on different parts of a matrix problem in parallel. Large sparse matr...
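A minimal sketch of what structure prediction means, using the boolean-pattern bound on a matrix product (an illustrative example only, not one of the paper's results):

```python
import numpy as np

# Structure prediction for C = A @ B: the nonzero pattern of the
# product is bounded by the product of the input patterns treated as
# boolean matrices, so symbolic analysis can allocate storage for C
# before any floating-point arithmetic is done.
A_pattern = np.array([[1, 0, 0],
                      [1, 1, 0],
                      [0, 0, 1]], dtype=bool)
B_pattern = np.array([[1, 1, 0],
                      [0, 1, 0],
                      [0, 0, 1]], dtype=bool)

# C[i,k] may be nonzero iff some j has A[i,j] != 0 and B[j,k] != 0.
C_pattern = (A_pattern.astype(int) @ B_pattern.astype(int)) > 0
print(C_pattern.astype(int))
```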
On the implementation of an algorithm for large-scale equality constrained optimization
 SIAM Journal on Optimization
, 1998
Abstract

Cited by 38 (11 self)
This paper describes a software implementation of Byrd and Omojokun’s trust region algorithm for solving nonlinear equality constrained optimization problems. The code is designed for the efficient solution of large problems and provides the user with a variety of linear algebra techniques for solving the subproblems occurring in the algorithm. Second derivative information can be used, but when it is not available, limited memory quasi-Newton approximations are made. The performance of the code is studied using a set of difficult test problems from the CUTE collection.
Predicting Structure In Nonsymmetric Sparse Matrix Factorizations
 GRAPH THEORY AND SPARSE MATRIX COMPUTATION
, 1992
Abstract

Cited by 30 (8 self)
Many computations on sparse matrices have a phase that predicts the nonzero structure of the output, followed by a phase that actually performs the numerical computation. We study structure prediction for computations that involve nonsymmetric row and column permutations and nonsymmetric or nonsquare matrices. Our tools are bipartite graphs, matchings, and alternating paths. Our main new result concerns LU factorization with partial pivoting. We show that if a square matrix A has the strong Hall property (i.e., is fully indecomposable) then an upper bound due to George and Ng on the nonzero structure of L + U is as tight as possible. To show this, we prove a crucial result about alternating paths in strong Hall graphs. The alternating-paths theorem seems to be of independent interest: it can also be used to prove related results about structure prediction for QR factorization that are due to Coleman, Edenbrandt, Gilbert, Hare, Johnson, Olesky, Pothen, and van den Driessche.
MA48: a Fortran code for direct solution of sparse unsymmetric linear systems of equations
, 1993
Abstract

Cited by 26 (5 self)
We describe the design of a new code that supersedes the Harwell Subroutine Library (HSL) code MA28 for the direct solution of sparse unsymmetric linear systems of equations. The principal differences lie in a new factorization entry that includes row permutations for stability without an overhead of greater complexity than that of the factorization itself, switching to full processing including the use of all three levels of BLAS, better treatment of rectangular or rank-deficient matrices, partial refactorization, and integrated facilities for iterative refinement and error estimation.
The design of MA48, a code for the direct solution of sparse unsymmetric linear systems of equations
, 1995
Abstract

Cited by 25 (6 self)
We describe the design of a new code for the direct solution of sparse unsymmetric linear systems of equations. The new code utilizes a novel restructuring of the symbolic and numerical phases, which increases speed and saves storage without sacrifice of numerical stability. Other features include switching to full matrix processing in all phases of the computation enabling the use of all three levels of BLAS, treatment of rectangular or rank-deficient matrices, partial factorization, and integrated facilities for iterative refinement and error estimation.
Users' Guide for the Unsymmetric-Pattern MultiFrontal Package (UMFPACK)
, 1993
Abstract

Cited by 22 (8 self)
Introduction. The Unsymmetric-Pattern MultiFrontal Package (UMFPACK) is a set of subroutines designed to solve linear systems of the form Ax = b, where A is an n-by-n general unsymmetric sparse matrix, and x and b are n-by-1 vectors. It uses LU factorization, and performs pivoting for numerical purposes and to maintain sparsity. UMFPACK is based on the unsymmetric-pattern multifrontal method [3]. The method relies on dense matrix kernels [4] to factorize frontal matrices, which are dense submatrices of the sparse matrix being factorized. In contrast to the classical multifrontal method [2, 8], frontal matrices are rectangular instead of square, and the assembly tree is replaced with a directed acyclic graph. As in the classical multifrontal method, advantage is taken of repetitive structure in the matrix
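The Ax = b workflow described here (factorize once with pivoting, then solve) can be sketched with SciPy's sparse LU; note that SciPy's `splu` wraps SuperLU rather than UMFPACK itself, so this is an analogous illustration, not a UMFPACK call (UMFPACK bindings exist separately, e.g. via scikit-umfpack):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

# Solve Ax = b by sparse LU factorization: pivoting is applied both
# for numerical stability and to limit fill-in, as described above.
n = 5
A = sparse.diags([np.full(n - 1, -1.0), np.full(n, 4.0), np.full(n - 1, -1.0)],
                 offsets=[-1, 0, 1], format="csc")  # splu requires CSC
b = np.ones(n)

lu = splu(A)        # symbolic + numeric factorization
x = lu.solve(b)     # forward/back substitution against L and U

print(np.allclose(A @ x, b))   # True
```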
On Automatic Data Structure Selection and Code Generation for Sparse Computations
 Lecture Notes in Computer Science
, 1993
Abstract

Cited by 11 (5 self)
Traditionally restructuring compilers were only able to apply program transformations in order to exploit certain characteristics of the target architecture. Adaptation of data structures was limited to, e.g., linearization or transposing of arrays. However, as more complex data structures are required to exploit characteristics of the data operated on, current compiler support appears to be inappropriate. In this paper we present the implementation issues of a restructuring compiler that automatically converts programs operating on dense matrices into sparse code, i.e. after a suitable data structure has been selected for every dense matrix that is in fact sparse, the original code is adapted to operate on these data structures. This simplifies the task of the programmer and, in general, enables the compiler to apply more optimizations.

Index Terms: Restructuring Compilers, Sparse Computations, Sparse Matrices.

1. Introduction. Development and maintenance of sparse codes is a complex tas...
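A toy illustration of this dense-to-sparse conversion (hand-written Python standing in for compiler-generated code; the function names and the CSR choice are illustrative assumptions, not the paper's compiler output):

```python
import numpy as np

# Dense source code the programmer writes: y = A @ x as explicit loops.
def dense_matvec(A, x):
    n, m = A.shape
    y = np.zeros(n)
    for i in range(n):
        for j in range(m):
            y[i] += A[i, j] * x[j]
    return y

# Sparse code a restructuring compiler might generate once A is known
# to be sparse and a CSR data structure has been selected: the inner
# loop runs only over stored nonzeros.
def csr_matvec(data, indices, indptr, x, n):
    y = np.zeros(n)
    for i in range(n):
        for p in range(indptr[i], indptr[i + 1]):
            y[i] += data[p] * x[indices[p]]
    return y

A = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
x = np.array([1.0, 2.0, 3.0])

# CSR arrays for A: nonzero values, their column indices, row pointers.
data = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr = np.array([0, 2, 3, 5])

print(dense_matvec(A, x))                        # [ 7.  6. 17.]
print(csr_matvec(data, indices, indptr, x, 3))   # same result
```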