## Predicting Structure In Sparse Matrix Computations (1994)

Venue: SIAM J. Matrix Anal. Appl.

Citations: 42 (5 self)

### BibTeX

```bibtex
@ARTICLE{Gilbert94predictingstructure,
  author  = {John R. Gilbert},
  title   = {Predicting Structure In Sparse Matrix Computations},
  journal = {SIAM J. Matrix Anal. Appl.},
  year    = {1994},
  volume  = {15},
  pages   = {62--79}
}
```

### Abstract

Many sparse matrix algorithms---for example, solving a sparse system of linear equations---begin by predicting the nonzero structure of the output of a matrix computation from the nonzero structure of its input. This paper is a catalog of ways to predict nonzero structure. It contains known results for problems including various matrix factorizations, and new results for problems including some eigenvector computations.

**Key words.** sparse matrix algorithms, graph theory, matrix factorization, systems of linear equations, eigenvectors

**AMS(MOS) subject classifications.** 15A18, 15A23, 65F50, 68R10

**1. Introduction.** A sparse matrix algorithm is an algorithm that performs a matrix computation in such a way as to take advantage of the zero/nonzero structure of the matrices involved. Usually this means not explicitly storing or manipulating some or all of the zero elements; sometimes sparsity can also be exploited to work on different parts of a matrix problem in parallel. Large sparse matr...
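The prediction the abstract describes rests on a "no numerical cancellation" convention: an output entry is treated as nonzero whenever some choice of input values would make it nonzero. A minimal sketch of the idea (mine, not from the paper; all names are illustrative) for the simplest case, a matrix-vector product:

```python
# Sketch: structural prediction for a matrix-vector product y = A @ b.
# Structures are sets: struct(A) = set of (i, j) with A[i, j] != 0,
# struct(b) = set of indices i with b[i] != 0. Barring numerical
# cancellation, y[i] is nonzero exactly when row i of A has a nonzero
# in some column j where b[j] is nonzero.

def predict_matvec_structure(struct_A, struct_b):
    """Predicted nonzero indices of y = A @ b, ignoring cancellation."""
    return {i for (i, j) in struct_A if j in struct_b}

# Tridiagonal 3x3 structure, b nonzero only in position 0:
struct_A = {(0, 0), (0, 1), (1, 0), (1, 1), (1, 2), (2, 1), (2, 2)}
struct_b = {0}
print(sorted(predict_matvec_structure(struct_A, struct_b)))  # [0, 1]
```

The same set-valued viewpoint underlies the paper's harder predictions (factorizations, triangular solves), where path structure in G(A) replaces this single-step rule.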

### Citations

1259 |
Graph Theory
- Harary
- 1969
Citation Context: ...s paper is based on an earlier technical report [19]. 2. Definitions. We assume that the reader is familiar with such basic graph theoretic terms as directed graph, undirected graph, and path. Harary [26] is a good general reference. 2.1. Directed graphs and matrix structures. Let A be an n by n matrix. The structure of A is its directed graph struct(A) = G(A), whose vertices are the integers 1, ...

550 |
Matching Theory
- Lovász, Plummer
- 1986
Citation Context: ...diagonal is irreducible if and only if H(A) has the strong Hall property; an arbitrary square matrix A is fully indecomposable if and only if H(A) has the strong Hall property. See Lovász and Plummer [30] for background on bipartite matching. Our terminology is from Coleman, Edenbrandt, and Gilbert [4]. Incidentally, although permuting the rows and the columns of A independently can change the direc...
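The Hall property mentioned here can be checked with standard bipartite matching machinery. A hedged sketch (the function name is mine, and the input format is an assumption): by Hall's theorem, H(A) has the Hall property exactly when a maximum matching covers every column, which a simple augmenting-path search decides.

```python
# Sketch: test whether the bipartite structure H(A) of an m-by-n matrix
# (m >= n) has the Hall property, i.e. every set of k columns has
# nonzeros in at least k distinct rows. Equivalent to a maximum
# bipartite matching that covers all n columns.

def has_hall_property(cols):
    """cols[j] = iterable of row indices where column j is nonzero."""
    match_row = {}  # row index -> column currently matched to it

    def augment(j, seen):
        for i in cols[j]:
            if i in seen:
                continue
            seen.add(i)
            # Row i is free, or its current column can be re-matched.
            if i not in match_row or augment(match_row[i], seen):
                match_row[i] = j
                return True
        return False

    return all(augment(j, set()) for j in range(len(cols)))

# Two columns whose only nonzeros share row 0 cannot both be matched:
print(has_hall_property([[0, 1], [1, 2]]))  # True
print(has_hall_property([[0], [0]]))        # False
```

The strong Hall property (every proper k-subset of columns touching at least k+1 rows) needs a finer test than this sketch, but the same matching machinery is the starting point.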

534 |
Computer Solution of Large Sparse Positive Definite Systems
- George, Liu
- 1981
Citation Context: ...dynamics, geophysical reservoir analysis, and many other areas. It is common for problems to be so large that they could not be solved at all without sparse techniques. Many sparse matrix algorithms [6, 15, 16, 17, 21] have a phase that predicts the nonzero structure of the solution from the nonzero structure of the problem, followed by a phase that does the numerical computation in a static data structure. This sa...

309 |
Algorithmic aspects of vertex elimination on graphs
- Rose, Tarjan, et al.
- 1976
Citation Context: ...r triangular with positive diagonal. Then A has a nonzero diagonal because it is positive definite, and the directed graph of A corresponds to an undirected graph because A is symmetric. Theorem 4.3 ([37]). Let a symmetric structure G(A) be given, with nonzero diagonal elements. (i) No matter what values A has, if A has a Cholesky factorization A = LL^T then G(L) ⊆ G+(A). (ii) There exist symmetric ...
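The filled graph G+(A) in Theorem 4.3 can be computed by simulating elimination on the undirected graph of A. A minimal quadratic-time sketch (illustrative only; this is not the paper's algorithm, and production codes use elimination trees for efficiency):

```python
# Sketch: compute the filled graph G+(A) of a symmetric structure by
# symbolic elimination: when vertex v is eliminated, its remaining
# higher-numbered neighbors become pairwise adjacent (the fill edges).
# Vertices 0..n-1 are eliminated in natural order.

def filled_graph(n, edges):
    """edges: set of frozensets {i, j}; returns the edge set of G+(A)."""
    adj = {v: set() for v in range(n)}
    for e in edges:
        i, j = sorted(e)
        adj[i].add(j)
        adj[j].add(i)
    fill = set(edges)
    for v in range(n):
        higher = [w for w in adj[v] if w > v]
        for a in higher:
            for b in higher:
                if a < b and b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    fill.add(frozenset((a, b)))
    return fill

# A 4-cycle 0-1-2-3-0 eliminated in this order fills in the chord {1, 3}.
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (0, 3)]}
print(sorted(tuple(sorted(e)) for e in filled_graph(4, edges)))
# [(0, 1), (0, 3), (1, 2), (1, 3), (2, 3)]
```

Per part (ii) of the theorem, every edge this produces is realized by some choice of numerical values, so the prediction is tight for symmetric positive definite A.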

106 |
A graph-theoretic study of the numerical solution of sparse positive definite systems of linear equations
- ROSE
- 1972
Citation Context: ...ed graph depends on the numbering of the vertices of A, whereas the transitive closure and the closure of a vertex are preserved under renumbering (that is, under graph isomorphism). Remark 2.1. Rose [35] introduced the notation G*(A) for the filled graph of A, but that notation is also widely used for transitive closure. Since we want to refer to both transitive closures and filled graphs, we use G+...

61 |
Sparse partial pivoting in time proportional to arithmetic operations
- Gilbert, Peierls
- 1988
Citation Context: ...ositive definite matrix are those of the Yale Sparse Matrix Package [12] and Sparspak [15], which predict the structure of the triangular factor by a version of Theorem 4.3 below. Gilbert and Peierls [24] have used prediction of the structure of the solution of a triangular system of equations, a special case of Theorem 5.1 below, to develop the first algorithm that performs sparse LU factorization wi...

53 |
The use of linear graphs in Gauss elimination
- Parter
- 1961
Citation Context: ...ts. One reason for this is that the structural effect of a matrix computation often depends on path structure, which is easier to describe in terms of graphs than in terms of matrices. Seymour Parter [33] was among the first to use graph theory as a tool to ... [author footnote: Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, California 94304. This work was done while the author was visiting the Universi...]

52 |
On algorithms for obtaining a maximum transversal
- Duff
- 1981
Citation Context: ...if P and Q are row and column permutation matrices, then H(PAQ^T) is isomorphic to H(A). Several structure prediction problems use matchings and alternating paths in the bipartite graph of a matrix [4, 6, 7, 20, 23, 21, 32]. This paper does not consider such problems in detail, but we include enough definitions here to state some of these results in later sections. Let A be an m by n matrix with m ≥ n. We say that A has t...

50 |
Solution of sparse linear least squares problems using Givens rotations
- George, Heath
- 1980
Citation Context: ...rge and Heath observed that, since this R is the same as the Cholesky factor of A^T A, the structure of R can be predicted by forming G∩(A) and doing structural Cholesky factorization. Theorem 4.7 ([14]). Let the structure H(A) be given for a rectangular matrix A with at least as many rows as columns. Whatever values A has, if A has full column rank then its orthogonal factorization A = QR satisfies...

40 |
Symbolic factorization for sparse gaussian elimination with partial pivoting
- George, Ng
- 1987
Citation Context: ...dynamics, geophysical reservoir analysis, and many other areas. It is common for problems to be so large that they could not be solved at all without sparse techniques. Many sparse matrix algorithms [6, 15, 16, 17, 21] have a phase that predicts the nonzero structure of the solution from the nonzero structure of the problem, followed by a phase that does the numerical computation in a static data structure. This sa...

39 | Elimination structures for unsymmetric sparse LU factors
- Gilbert, Liu
- 1993
Citation Context: ...y the same as that to compute G+(A), so a faster algorithm to compute G+(A) would give a faster algorithm to compute transitive closures than the best currently known. Eisenstat, Gilbert, and Liu [11, 22] give algorithms to compute G+(A) that are more efficient in practice than transitive closure, by using various transitively reduced graphs. Remark 4.2. A nonsingular square matrix may have an LU ...

33 |
An implementation of Gaussian elimination with partial pivoting for sparse systems
- George, Ng
- 1985
Citation Context: ...dynamics, geophysical reservoir analysis, and many other areas. It is common for problems to be so large that they could not be solved at all without sparse techniques. Many sparse matrix algorithms [6, 15, 16, 17, 21] have a phase that predicts the nonzero structure of the solution from the nonzero structure of the problem, followed by a phase that does the numerical computation in a static data structure. This sa...

27 |
Computational models and task scheduling for parallel sparse Cholesky factorization
- Liu
- 1986
Citation Context: ...with the same nonzero structure must be solved, and the structural phase can be done just once. The structural phase may also be used to schedule the numerical phase efficiently on a parallel machine [20, 29]. Structure prediction can be used to save time as well as space in sparse Gaussian elimination. The asymptotically fastest algorithms to compute the Cholesky factorization of a symmetric positive def...

26 |
Predicting fill for sparse orthogonal factorization
- Coleman, Edenbrandt, et al.
- 1986
Citation Context: ...if P and Q are row and column permutation matrices, then H(PAQ^T) is isomorphic to H(A). Several structure prediction problems use matchings and alternating paths in the bipartite graph of a matrix [4, 6, 7, 20, 23, 21, 32]. This paper does not consider such problems in detail, but we include enough definitions here to state some of these results in later sections. Let A be an m by n matrix with m ≥ n. We say that A has t...

26 |
Some design features of a sparse matrix code
- Duff, Reid
- 1979
Citation Context: ...geous to begin by partitioning the matrix into strong components, and then to factor only the irreducible blocks of the partition. This approach is taken, for example, in Duff and Reid's MA28 code [9]. Curiously enough, the most important applications of the results in this section are at the opposite extreme, for triangular systems. Structure prediction for sparse triangular systems is used in ef...

23 |
Algorithms and data structures for sparse symmetric Gaussian elimination
- Eisenstat, Sherman, et al.
- 1981
Citation Context: ...s space in sparse Gaussian elimination. The asymptotically fastest algorithms to compute the Cholesky factorization of a symmetric positive definite matrix are those of the Yale Sparse Matrix Package [12] and Sparspak [15], which predict the structure of the triangular factor by a version of Theorem 4.3 below. Gilbert and Peierls [24] have used prediction of the structure of the solution of a triangul...

19 |
Computing a sparse basis for the null space
- Gilbert, Heath
- 1986

16 |
Exploiting Structural Symmetry in Unsymmetric Sparse Symbolic Factorization
- Eisenstat, Liu
- 1992
Citation Context: ...y the same as that to compute G+(A), so a faster algorithm to compute G+(A) would give a faster algorithm to compute transitive closures than the best currently known. Eisenstat, Gilbert, and Liu [11, 22] give algorithms to compute G+(A) that are more efficient in practice than transitive closure, by using various transitively reduced graphs. Remark 4.2. A nonsingular square matrix may have an LU ...

14 |
Simple linear-time algorithms to test chordality of graphs, test acyclicity of hypergraphs, and selectively reduce acyclic hypergraphs
- Tarjan, Yannakakis
- 1984
Citation Context: ...+(A) is a chordal graph. (ii) Conversely, if G(A) is a chordal graph, then there is a permutation matrix P such that G+(PAP^T) = G(PAP^T). Rose, Tarjan, and Lueker [37] and Tarjan and Yannakakis [39] gave linear-time algorithms to determine whether G(A) is chordal and, if so, to reorder its vertices so that G+(PAP^T) = G(PAP^T). Such a reordering is called a "perfect elimination order." 4.3...
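The Tarjan-Yannakakis test can be conveyed by a simplified sketch: maximum cardinality search (MCS) produces a candidate ordering, and a perfect-elimination check certifies chordality. This O(n²) version is mine (names illustrative); the cited algorithms achieve linear time.

```python
# Sketch: maximum cardinality search. For a chordal graph, visiting
# vertices by "most already-visited neighbors first" and reversing the
# visit order yields a perfect elimination order; checking that order
# then certifies chordality.

def mcs_order(adj):
    """adj: dict vertex -> set of neighbors. Returns an elimination order."""
    unvisited = set(adj)
    weight = {v: 0 for v in adj}
    visited = []
    while unvisited:
        v = max(unvisited, key=lambda u: weight[u])
        visited.append(v)
        unvisited.remove(v)
        for w in adj[v] & unvisited:
            weight[w] += 1
    return visited[::-1]  # eliminate in reverse visit order

def is_perfect_elimination(adj, order):
    """True iff each vertex's later neighbors form a clique anchor."""
    pos = {v: k for k, v in enumerate(order)}
    for v in order:
        later = {w for w in adj[v] if pos[w] > pos[v]}
        if later:
            u = min(later, key=lambda w: pos[w])
            if not (later - {u}) <= adj[u]:
                return False
    return True

# A triangle with a pendant vertex is chordal; a 4-cycle is not.
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cyc = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_perfect_elimination(tri, mcs_order(tri)))  # True
print(is_perfect_elimination(cyc, mcs_order(cyc)))  # False
```

In the paper's terms, running symbolic elimination in a perfect elimination order gives G+(PAP^T) = G(PAP^T), i.e. no fill at all.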

13 |
The null space problem II: Algorithms
- Coleman, Pothen
- 1987

12 |
The null space problem I: complexity
- Coleman, Pothen
- 1986
Citation Context: ...heory plays a central role in two other structural problems that we have not described here: finding the sparsest basis for the range space (McCormick [32]) and for the null space (Coleman and Pothen [5, 6], Gilbert and Heath [21]) of a rectangular matrix. It turns out that the structural range space problem can be solved in polynomial time, but the null space problem is NP-complete. Several application...

12 |
On the complexity of sparse QR and LU factorization on finite-element matrices
- George, Ng
- 1988
Citation Context: ...lic column rank. Pothen [34] then proved that Johnson et al.'s prediction was in fact tight in the "all-at-once" sense, thus finishing off the problem for both Q and R in the Hall case. George and Ng [18] studied another representation of the structure of the orthogonal factor Q in the case that A is square and has nonzero diagonal elements. They showed that a suitable representation of Q has a struct...

12 |
A unified approach to path problems
- Tarjan
- 1981
Citation Context: ...ve not appeared before in this form, but the upper bounds in Theorem 5.1 and Corollary 5.4 are straightforward consequences of Tarjan's work on elimination methods for solving path problems in graphs [40], and are closely related to work of Fiedler [13]. The proofs here are somewhat different. The extremes of path structure in directed graphs are a strongly connected graph (which corresponds to an irr...

11 |
An efficient parallel sparse partial pivoting algorithm
- Gilbert
- 1988
Citation Context: ...with the same nonzero structure must be solved, and the structural phase can be done just once. The structural phase may also be used to schedule the numerical phase efficiently on a parallel machine [20, 29]. Structure prediction can be used to save time as well as space in sparse Gaussian elimination. The asymptotically fastest algorithms to compute the Cholesky factorization of a symmetric positive def...

11 |
A combinatorial approach to some sparse matrix problems
- MCCORMICK
- 1983
Citation Context: ...if P and Q are row and column permutation matrices, then H(PAQ^T) is isomorphic to H(A). Several structure prediction problems use matchings and alternating paths in the bipartite graph of a matrix [4, 6, 7, 20, 23, 21, 32]. This paper does not consider such problems in detail, but we include enough definitions here to state some of these results in later sections. Let A be an m by n matrix with m ≥ n. We say that A has t...

10 |
Sparsity analysis of the QR factorization
- Hare, Johnson, et al.
- 1991
Citation Context: ...If H(A) has the strong Hall property, then there exist values for the nonzeros of A such that G(R) = G∩+(A), where R is the orthogonal factor of A as above. Johnson, Olesky, and van den Driessche [27] gave a more complicated prediction of the structure of both R and the orthogonal factor Q. They showed that their prediction was tight in the "one-at-a-time" sense for all Hall structures, that is, f...

7 |
Predicting the structure of sparse orthogonal factors
- Pothen
- 1993
Citation Context: ...oth R and the orthogonal factor Q. They showed that their prediction was tight in the "one-at-a-time" sense for all Hall structures, that is, for all structures with full symbolic column rank. Pothen [34] then proved that Johnson et al.'s prediction was in fact tight in the "all-at-once" sense, thus finishing off the problem for both Q and R in the Hall case. George and Ng [18] studied another represe...

5 |
Some results on sparse matrices
- Brayton, Gustavson, et al.
- 1970
Citation Context: ...2.2. Predicting structure in a computation. To say more precisely what we mean by the structural effect of a computation, we make some remarks based on those of Brayton, Gustavson, and Willoughby [3] and Edenbrandt [10]. Let f be a function from one or more matrices or vectors to a matrix or vector. The structure of A may not determine the structure of f(A); for example, in general the sum of two...

4 |
Algorithmic aspects of vertex elimination
- Rose, Tarjan
- 1975
Citation Context: ...L + U − I represents the entire factorization. (Not all nonsingular matrices have LU factorizations without pivoting. In a later subsection we consider factorization with partial pivoting.) Theorem 4.1 ([36]). Let a structure G(A) be given, with nonzero diagonal elements. (i) If values are chosen for which A has an LU factorization as above, then G(L + U − I) ⊆ G+(A). (ii) There exist values for t...

3 |
Some remarks on inverses of sparse matrices
- Duff, Erisman, et al.
- 1985
Citation Context: ...en if the right-hand side entries are all zeros and ones. Corollary 5.4 implies that if A is irreducible with nonzero diagonal, then A⁻¹ is full unless numerical coincidence occurs. Duff et al. [8] gave another proof of this special case. The case where A is allowed to have zeros on the diagonal is a straightforward extension. First, for A⁻¹ to exist, H(A) must be Hall. That implies that ...

2 |
Inversion of bigraphs and connection with the Gauss elimination
- FIEDLER
- 1976
Citation Context: ...artially supported by the National Science Foundation under grant DCR-8451385. Copyright © 1991 by Xerox Corporation. All rights reserved. ...investigate sparse matrix computation; Miroslav Fiedler [13] also pioneered many of these ideas. This paper is a catalog of the effects on nonzero structure of several common matrix computations. It includes arithmetic, linear systems, various factorizations, ...

1 |
Optimal parallel solution of triangular sparse systems
- Alvarado, Schreiber
- 1992
Citation Context: ...extreme, for triangular systems. Structure prediction for sparse triangular systems is used in efficient algorithms for LU factorization with partial pivoting [24] and in parallel triangular solution [1]. Throughout this section A is an n by n matrix with nonzero diagonal, and the graph in question is the directed graph G(A). Theorem 5.1. Let the structures of A and b be given. (i) Whatever the value...
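The upper bound in Theorem 5.1 is the reachability computation used in sparse triangular solves (the Gilbert-Peierls style mentioned above). A hedged sketch for a lower triangular system Lx = b, with illustrative names and input format:

```python
# Sketch: predict the nonzero structure of x in the triangular solve
# L x = b, L lower triangular with nonzero diagonal. A nonzero b[j], or
# a computed nonzero x[j], makes x[i] structurally nonzero whenever
# L[i][j] != 0 with i > j; so struct(x) is everything reachable from
# struct(b) by depth-first search along those edges, barring cancellation.

def solve_structure(n, struct_L, struct_b):
    """struct_L: set of (i, j), i > j, of strictly-lower nonzeros of L."""
    below = {j: [] for j in range(n)}
    for i, j in struct_L:
        below[j].append(i)
    reach, stack = set(struct_b), list(struct_b)
    while stack:
        j = stack.pop()
        for i in below[j]:
            if i not in reach:
                reach.add(i)
                stack.append(i)
    return reach

# Bidiagonal L (nonzeros just below the diagonal): one nonzero in b
# cascades down the rest of x.
print(sorted(solve_structure(5, {(i + 1, i) for i in range(4)}, {2})))
# [2, 3, 4]
```

Gilbert and Peierls exploit the fact that this traversal costs time proportional to the number of nonzeros it touches, which is what makes LU factorization in time proportional to arithmetic operations possible.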

1 |
Inherited matrix entries: Principal submatrices of the inverse
- Barrett, Johnson, et al.
- 1987
Citation Context: ...paper. A related question is, given a matrix and a matrix function, which entries of the matrix are unchanged in value by application of the function. Barrett, Johnson, Olesky, and van den Driessche [2, 28] have given such characterizations for functions including LU factorization and Schur complement. A few open problems in structure prediction, some of which have already been mentioned, are as follows...

1 |
Combinatorial Problems in Matrix Computation
- Edenbrandt
- 1985
Citation Context: ...structure in a computation. To say more precisely what we mean by the structural effect of a computation, we make some remarks based on those of Brayton, Gustavson, and Willoughby [3] and Edenbrandt [10]. Let f be a function from one or more matrices or vectors to a matrix or vector. The structure of A may not determine the structure of f(A); for example, in general the sum of two full vectors is ful...

1 |
Notes on the prediction of fill in some sparse matrix factorizations
- Gilbert, Ng
- 1992

1 |
Inherited matrix entries: LU factorization
- Johnson, Olesky, et al.
Citation Context: ...paper. A related question is, given a matrix and a matrix function, which entries of the matrix are unchanged in value by application of the function. Barrett, Johnson, Olesky, and van den Driessche [2, 28] have given such characterizations for functions including LU factorization and Schur complement. A few open problems in structure prediction, some of which have already been mentioned, are as follows...

1 |
Predicting structure in eigenvector computation
- Mascarenhas
- 1992
Citation Context: ...he transitive closure. However, the eigenspace of 2 is also one-dimensional and consists of multiples of (0, 0, 1, 1, 1)^T, which is not a subset of any column of the transitive closure. Mascarenhas [31] has recently extended Theorem 6.1 to the case where A has multiple eigenvalues, provided that no two diagonal blocks of the block upper triangular form of A share an eigenvalue. He has also proved a ...

1 |
Sparse matrix techniques in geothermal reservoir modelling
- Sigurðsson
- 1991
Citation Context: ...definite linear systems with the same coefficient matrix. All the systems have very sparse right-hand sides, and in addition only a few of the unknown values are required for each system. Sigurðsson [38] has used structure prediction with a simpler version of Theorem 5.1 to speed up the Sparspak triangular solver for this problem. We have been concerned exclusively with predicting nonzero structure i...