Results 1–10 of 38
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
ACM Trans. Math. Software, 1982
Abstract

Cited by 336 (18 self)
An iterative method is given for solving Ax = b and min ||Ax - b||_2, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerical tests are described comparing LSQR with several other conjugate-gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned. Categories and Subject Descriptors: G.1.2 [Numerical Analysis]: Approximation -- least squares approximation; G.1.3 [Numerical Analysis]: Numerical Linear Algebra -- linear systems (direct and
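SciPy's `scipy.sparse.linalg.lsqr` is an implementation of the Paige–Saunders method this abstract describes; a minimal sketch of solving a small sparse least-squares problem with it (the matrix and right-hand side below are illustrative, not from the paper):

```python
# Sketch: solve min ||Ax - b||_2 for a sparse A with SciPy's lsqr,
# which is based on the Golub-Kahan bidiagonalization procedure.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# A small sparse overdetermined system (4 equations, 3 unknowns).
A = csr_matrix(np.array([[1.0, 0.0, 2.0],
                         [0.0, 3.0, 0.0],
                         [4.0, 0.0, 0.0],
                         [0.0, 0.0, 5.0]]))
b = np.array([3.0, 6.0, 4.0, 5.0])

result = lsqr(A, b, atol=1e-12, btol=1e-12)
x = result[0]        # least-squares solution
istop = result[1]    # which stopping criterion terminated the iteration
```

The returned tuple also carries estimates of the residual norm, ||A||, and cond(A), mirroring the reliability diagnostics derived in the paper.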
Conditions For Unique Graph Realizations
SIAM J. Comput., 1992
Abstract

Cited by 113 (1 self)
The graph realization problem is that of computing the relative locations of a set of vertices placed in Euclidean space, relying only upon some set of inter-vertex distance measurements. This paper is concerned with the closely related problem of determining whether or not a graph has a unique realization. Both these problems are NP-hard, but the proofs rely upon special combinations of edge lengths. If we assume the vertex locations are unrelated then the uniqueness question can be approached from a purely graph-theoretic angle that ignores edge lengths. This paper identifies three necessary graph-theoretic conditions for a graph to have a unique realization in any dimension. Efficient sequential and NC algorithms are presented for each condition, although these algorithms have very different flavors in different dimensions. 1. Introduction. Consider a graph G = (V, E) consisting of a set of n vertices and m edges, along with a real number associated with each edge. Now try to assi...
Predicting Structure In Sparse Matrix Computations
SIAM J. Matrix Anal. Appl., 1994
Abstract

Cited by 40 (4 self)
Many sparse matrix algorithms -- for example, solving a sparse system of linear equations -- begin by predicting the nonzero structure of the output of a matrix computation from the nonzero structure of its input. This paper is a catalog of ways to predict nonzero structure. It contains known results for problems including various matrix factorizations, and new results for problems including some eigenvector computations. Key words. sparse matrix algorithms, graph theory, matrix factorization, systems of linear equations, eigenvectors AMS(MOS) subject classifications. 15A18, 15A23, 65F50, 68R10 1. Introduction. A sparse matrix algorithm is an algorithm that performs a matrix computation in such a way as to take advantage of the zero/nonzero structure of the matrices involved. Usually this means not explicitly storing or manipulating some or all of the zero elements; sometimes sparsity can also be exploited to work on different parts of a matrix problem in parallel. Large sparse matr...
Predicting Structure In Nonsymmetric Sparse Matrix Factorizations
GRAPH THEORY AND SPARSE MATRIX COMPUTATION, 1992
Abstract

Cited by 30 (8 self)
Many computations on sparse matrices have a phase that predicts the nonzero structure of the output, followed by a phase that actually performs the numerical computation. We study structure prediction for computations that involve nonsymmetric row and column permutations and nonsymmetric or nonsquare matrices. Our tools are bipartite graphs, matchings, and alternating paths. Our main new result concerns LU factorization with partial pivoting. We show that if a square matrix A has the strong Hall property (i.e., is fully indecomposable) then an upper bound due to George and Ng on the nonzero structure of L + U is as tight as possible. To show this, we prove a crucial result about alternating paths in strong Hall graphs. The alternating-paths theorem seems to be of independent interest: it can also be used to prove related results about structure prediction for QR factorization that are due to Coleman, Edenbrandt, Gilbert, Hare, Johnson, Olesky, Pothen, and van den Driessche.
An Efficient Algorithm to Compute Row and Column Counts for Sparse Cholesky Factorization
SIAM J. Matrix Anal. Appl., 1994
Abstract

Cited by 26 (6 self)
Let an undirected graph G be given, along with a specified depth-first spanning tree T. We give almost-linear-time algorithms to solve the following two problems: First, for every vertex v, compute the number of descendants w of v for which some descendant of w is adjacent (in G) to v. Second, for every vertex v, compute the number of ancestors of v that are adjacent (in G) to at least one descendant of v. These problems arise in Cholesky and QR factorizations of sparse matrices. Our algorithms can be used to determine the number of nonzero entries in each row and column of the triangular factor of a matrix from the zero/nonzero structure of the matrix. Such a prediction makes storage allocation for sparse matrix factorizations more efficient. Our algorithms run in time linear in the size of the input times a slowly-growing inverse of Ackermann's function. The best previously known algorithms for these problems ran in time linear in the sum of the nonzero counts, which is...
Sparse Multifrontal Rank Revealing QR Factorization
SIAM J. Matrix Anal. Appl., 1995
Abstract

Cited by 20 (0 self)
We describe an algorithm to compute a rank revealing sparse QR factorization. We augment a basic sparse multifrontal QR factorization with an incremental condition estimator to provide an estimate of the least singular value and vector for each successive column of R. We remove a column from R as soon as the condition estimate exceeds a tolerance, using the approximate singular vector to select a suitable column. Removing columns, or pivoting, requires a dynamic data structure and necessarily degrades sparsity. But most of the additional work fits naturally into the multifrontal factorization's use of efficient dense vector kernels, minimizing overall cost. Further, pivoting as soon as possible reduces the cost of pivot selection and data access. We present a theoretical analysis that shows that our use of approximate singular vectors does not degrade the quality of our rank-revealing factorization; we achieve an exponential bound like methods that use exact singular vectors. We prov...
Incomplete Factorization Preconditioning For Linear Least Squares Problems
1994
Abstract

Cited by 17 (4 self)
this paper is the modified version of Gram-Schmidt orthogonalization with a rejection test applied right after the formation of the off-diagonal elements of the factor R. For a given rejection parameter 0 ≤ τ ≤ 1, the rejection test is: if |r_ij| < τ ||a
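A minimal sketch of the idea described above: modified Gram-Schmidt with a drop (rejection) test on the off-diagonal entries of R. The abstract's exact test is cut off in the snippet, so the parameter name `tau` and the comparison against the column norm of A are assumptions:

```python
# Sketch of incomplete QR via modified Gram-Schmidt with a rejection
# test (parameter name and norm in the test are assumptions; the
# source abstract is truncated).
import numpy as np

def incomplete_mgs(A, tau=0.1):
    """Incomplete QR factorization via modified Gram-Schmidt.

    An off-diagonal entry r[i, j] is rejected (treated as zero, and
    column j is not updated with it) when |r[i, j]| < tau * ||a_j||.
    """
    m, n = A.shape
    Q = A.astype(float).copy()
    R = np.zeros((n, n))
    col_norms = np.linalg.norm(A, axis=0)
    for j in range(n):
        for i in range(j):
            r = Q[:, i] @ Q[:, j]
            if abs(r) >= tau * col_norms[j]:  # rejection test
                R[i, j] = r
                Q[:, j] -= r * Q[:, i]
        R[j, j] = np.linalg.norm(Q[:, j])
        Q[:, j] /= R[j, j]
    return Q, R
```

With `tau = 0` the sketch reduces to exact modified Gram-Schmidt (Q R = A); a larger `tau` yields a sparser R suitable as an incomplete-factorization preconditioner.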
Finding Good Column Orderings for Sparse QR Factorization
In Second SIAM Conference on Sparse Matrices, 1996
Abstract

Cited by 17 (0 self)
For sparse QR factorization, finding a good column ordering of the matrix to be factorized is essential. Both the amount of fill in the resulting factors and the number of floating-point operations required by the factorization are highly dependent on this ordering. A suitable column ordering of the matrix A is usually obtained by minimum degree analysis on A^T A. The objective of this analysis is to produce low fill in the resulting triangular factor R. We observe that the efficiency of sparse QR factorization is also dependent on other criteria, like the size and the structure of intermediate fill, and the size and the structure of the frontal matrices for the multifrontal method, in addition to the amount of fill in R. An important part of this information is lost when A^T A is formed. However, the structural information from A is important to consider in order to find good column orderings. We show how a suitable equivalent reordering of an initial fill-reducing ordering can...
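The standard approach the abstract refers to, deriving a column ordering for QR of A from the sparsity pattern of A^T A, can be sketched with SciPy. SciPy ships reverse Cuthill-McKee rather than minimum degree, so RCM stands in here for the fill-reducing analysis; the matrix is a random illustrative example:

```python
# Sketch: order the columns of a sparse A for QR by analyzing the
# pattern of A^T A (the column intersection graph of A). Reverse
# Cuthill-McKee is used as a stand-in for minimum degree, which
# SciPy does not provide.
import numpy as np
from scipy import sparse
from scipy.sparse.csgraph import reverse_cuthill_mckee

A = sparse.random(20, 8, density=0.3, format="csc", random_state=0)
AtA = (A.T @ A).tocsr()            # symmetric pattern of A^T A
perm = reverse_cuthill_mckee(AtA, symmetric_mode=True)
A_perm = A[:, perm]                # reorder columns of A before QR
```

As the abstract points out, any analysis done purely on A^T A discards structural information about A itself (e.g. intermediate fill and frontal matrix shapes), which is the gap the paper addresses.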
Sparse Numerical Linear Algebra: Direct Methods and Preconditioning
1996
Abstract

Cited by 17 (2 self)
Most of the current techniques for the direct solution of linear equations are based on supernodal or multifrontal approaches. An important feature of these methods is that arithmetic is performed on dense submatrices and Level 2 and Level 3 BLAS (matrix-vector and matrix-matrix kernels) can be used. Both sparse LU and QR factorizations can be implemented within this framework. Partitioning and ordering techniques have seen major activity in recent years. We discuss bisection and multisection techniques, extensions to orderings to block triangular form, and recent improvements and modifications to standard orderings such as minimum degree. We also study advances in the solution of indefinite systems and sparse least-squares problems. The desire to exploit parallelism has been responsible for many of the developments in direct methods for sparse matrices over the last ten years. We examine this aspect in some detail, illustrating how current techniques have been developed or ...
INTERIOR POINT METHODS FOR COMBINATORIAL OPTIMIZATION
1995
Abstract

Cited by 14 (9 self)
Research on using interior point algorithms to solve combinatorial optimization and integer programming problems is surveyed. This paper discusses branch and cut methods for integer programming problems, a potential reduction method based on transforming an integer programming problem to an equivalent nonconvex quadratic programming problem, interior point methods for solving network flow problems, and methods for solving multicommodity flow problems, including an interior point column generation algorithm.