Results 1 - 7 of 7
Predicting Structure In Sparse Matrix Computations
SIAM J. Matrix Anal. Appl., 1994
Abstract

Cited by 40 (4 self)
Many sparse matrix algorithms, for example solving a sparse system of linear equations, begin by predicting the nonzero structure of the output of a matrix computation from the nonzero structure of its input. This paper is a catalog of ways to predict nonzero structure. It contains known results for problems including various matrix factorizations, and new results for problems including some eigenvector computations.
Key words. sparse matrix algorithms, graph theory, matrix factorization, systems of linear equations, eigenvectors
AMS(MOS) subject classifications. 15A18, 15A23, 65F50, 68R10
1. Introduction. A sparse matrix algorithm is an algorithm that performs a matrix computation in such a way as to take advantage of the zero/nonzero structure of the matrices involved. Usually this means not explicitly storing or manipulating some or all of the zero elements; sometimes sparsity can also be exploited to work on different parts of a matrix problem in parallel. Large sparse matr...
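The core idea of structure prediction, predicting the nonzero pattern of an output from the patterns of the inputs alone, can be sketched for the simple case of a matrix product. This is an illustrative example, not one of the paper's catalog entries; patterns are represented here as dicts of row-index to column-index sets, a hypothetical convention chosen for brevity.

```python
# Symbolic prediction of the nonzero structure of C = A @ B from the
# structures of A and B alone, ignoring possible numerical cancellation.
# A pattern maps each row index to the set of its nonzero column indices.

def predict_product_structure(A_pat, B_pat):
    """Return the predicted nonzero pattern of C = A @ B."""
    C_pat = {}
    for i, cols in A_pat.items():
        out = set()
        for k in cols:                  # C[i, j] can be nonzero only if
            out |= B_pat.get(k, set())  # A[i, k] != 0 and B[k, j] != 0 for some k
        if out:
            C_pat[i] = out
    return C_pat

A_pat = {0: {0, 2}, 1: {1}}
B_pat = {0: {1}, 1: {0}, 2: {2}}
print(predict_product_structure(A_pat, B_pat))  # {0: {1, 2}, 1: {0}}
```

Such predictions are conservative: an entry predicted nonzero may still vanish numerically, which is why structure prediction is usually stated "barring cancellation".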
On the solution of equality constrained quadratic programming problems arising . . .
, 1998
Computing a Search Direction for Large-Scale Linearly-Constrained Nonlinear Optimization Calculations
, 1993
Abstract

Cited by 12 (8 self)
We consider the computation of Newton-like search directions that are appropriate when solving large-scale linearly-constrained nonlinear optimization problems. We investigate the use of both direct and iterative methods and consider efficient ways of modifying the Newton equations in order to ensure global convergence of the underlying optimization methods.
1. Parallel Algorithms Team, CERFACS, 42 Ave. G. Coriolis, 31057 Toulouse Cedex, France
2. IAN-CNR, c/o Dipartimento di Matematica, via Abbiategrasso 209, 27100 Pavia, Italy
3. Department of Mathematics, University of California, 405 Hilgard Avenue, Los Angeles, CA 90024-1555, USA
4. Central Computing Department, Rutherford Appleton Laboratory, Chilton, Oxfordshire, OX11 0QX, England
5. Current reports available by anonymous ftp from the directory "pub/reports" on camelot.cc.rl.ac.uk (internet 130.246.8.61)
Keywords: Large-scale problems, unconstrained optimization, linearly constrained optimization, direct methods, iterative...
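One standard way to "modify the Newton equations" for global convergence, sketched below under the assumption that a diagonal shift is used (the paper's actual modifications may differ), is to add a multiple of the identity to the Hessian until it is positive definite, which guarantees a descent direction:

```python
import numpy as np

def modified_newton_direction(H, g, beta=1e-3):
    """Solve (H + tau*I) p = -g, increasing tau until the shifted
    Hessian admits a Cholesky factorization (hence is positive
    definite). Illustrative Hessian-modification strategy only."""
    n = H.shape[0]
    dmin = np.min(np.diag(H))
    tau = 0.0 if dmin > 0 else beta - dmin
    while True:
        try:
            L = np.linalg.cholesky(H + tau * np.eye(n))
            break
        except np.linalg.LinAlgError:
            tau = max(2 * tau, beta)    # grow the shift and retry
    y = np.linalg.solve(L, -g)          # forward substitution
    return np.linalg.solve(L.T, y)      # back substitution

H = np.array([[1.0, 0.0], [0.0, -2.0]])  # indefinite Hessian
g = np.array([1.0, 1.0])
p = modified_newton_direction(H, g)
assert g @ p < 0                         # p is a descent direction
```

When H is already positive definite the shift is zero and p is the pure Newton direction.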
Separators and Structure Prediction in Sparse Orthogonal Factorization
, 1993
Abstract

Cited by 4 (0 self)
In the factorization A = QR of a matrix A, the orthogonal matrix Q can be represented either explicitly (as a matrix) or implicitly (as a matrix H of Householder vectors). We derive both upper and lower bounds on the number of nonzeros in H and the number of nonzeros in Q, in the case where the graph of A^T A has "good" separators and A need not be square. We also derive an upper bound on the number of nonzeros in the null-basis part of Q in the case where A is the edge-vertex incidence matrix of a planar graph. The significance of these results is that they both illuminate and amplify a folk theorem of sparse QR factorization, which holds that the matrix H of Householder vectors represents the orthogonal factor of A much more compactly than Q itself. To facilitate discussion of this and related issues, we review several related results which have appeared previously.
Keywords: Sparse matrix algorithms, QR factorization, separators, column intersection graph, strong Hall...
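The "folk theorem" can be seen on a toy example: a textbook Householder QR (a minimal sketch, not the paper's sparse algorithm) applied to a banded matrix produces a sparse matrix H of Householder vectors, while the explicit Q they generate fills in.

```python
import numpy as np

def householder_qr(A, tol=1e-12):
    """Plain Householder QR; returns H (columns are Householder
    vectors, the implicit form of Q) and the triangular factor R."""
    m, n = A.shape
    R = A.astype(float).copy()
    H = np.zeros((m, n))
    for k in range(n):
        x = R[k:, k].copy()
        alpha = -np.copysign(np.linalg.norm(x), x[0])
        v = x
        v[0] -= alpha                     # v = x - alpha*e1, no cancellation
        nv = np.linalg.norm(v)
        if nv > tol:
            v /= nv
            R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
            H[k:, k] = v
    return H, R

def explicit_Q(H):
    """Accumulate Q = P_1 P_2 ... P_n from the stored vectors."""
    m = H.shape[0]
    Q = np.eye(m)
    for k in reversed(range(H.shape[1])):
        v = H[:, k]
        Q -= 2.0 * np.outer(v, v @ Q)
    return Q

# A lower-bidiagonal (hence sparse) rectangular matrix.
m, n = 12, 8
A = np.eye(m)[:, :n] + np.diag(np.ones(m - 1), -1)[:, :n]
H, R = householder_qr(A)
nnz = lambda M: int(np.sum(np.abs(M) > 1e-12))
print(nnz(H), nnz(explicit_Q(H)))   # H is much sparser than Q
```

Each Householder vector here touches only two rows, so H has about 2n nonzeros, while accumulating the explicit Q mixes those rows together and fills in a large triangular block.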
Decomposition of Complex Reaction Networks into Reactons
, 803
Abstract

Cited by 1 (1 self)
The analysis of complex reaction networks is of great importance in several chemical and biochemical fields (interstellar chemistry, prebiotic chemistry, reaction mechanisms, etc.). In this article, we propose to simultaneously refine and extend to general chemical reaction systems the formalism initially introduced for the description of metabolic networks. The classical approach through the computation of the right null space leads to the decomposition of the network into complex "cycles" of reactions involving all metabolites. We show how, starting from the left null space computation, the flux analysis can be decoupled into linear fluxes and single loops, allowing a more refined qualitative analysis as a function of the antagonisms and connections among these local fluxes. This analysis is made possible by the decomposition of the molecules into elementary subunits, called "reactons", and the consequent decomposition of the whole network into simple first-order unary partial reactions corresponding to simple transfers of reactons from one molecule to another. This article explains and justifies the algorithmic steps leading to the total decomposition of the reaction network into its constitutive elementary subparts. The dynamical analysis of complex reaction networks such as the ones characterizing interstellar chemistry (1), complex reaction mechanisms (2) or metabolic functions (3) should be facilitated by algorithmic means to decouple these networks. It is conceivable that the presence of interesting dynamical phenomena like bifurcation or symmetry breaking
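The left null space computation the abstract starts from can be sketched for a stoichiometric matrix: its left null vectors are the conserved linear combinations of molecules. The SVD-based routine and the toy network below are illustrative assumptions, not the paper's reacton decomposition itself.

```python
import numpy as np

def left_null_space(S, tol=1e-10):
    """Basis for the left null space of a stoichiometric matrix S
    (rows = molecules, columns = reactions): vectors y with y @ S = 0,
    i.e. conserved quantities of the network."""
    U, s, Vt = np.linalg.svd(S)
    rank = int(np.sum(s > tol))
    return U[:, rank:].T           # each row y satisfies y @ S ~ 0

# Toy network with molecules A, B, C and reactions
#   r1: A + B -> C        r2: C -> A + B
S = np.array([[-1.0,  1.0],    # A
              [-1.0,  1.0],    # B
              [ 1.0, -1.0]])   # C
Y = left_null_space(S)
print(Y.shape[0], "conservation relations")  # 2 for this network
```

Here rank(S) = 1, so two independent conservation relations survive (e.g. the totals A + C and B + C stay constant), which is the kind of structural information the left null space exposes.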
A New Class of Preconditioners for Large-Scale Linear Systems from Interior Point Methods for Linear Programming
, 1997
Abstract
A New Class of Preconditioners for Large-Scale Linear Systems from Interior Point Methods for Linear Programming, by Aurelio Ribeiro Leite de Oliveira. A new class of preconditioners for the iterative solution of the linear systems arising from interior point methods is proposed. For many of these methods, the linear systems come from applying Newton's method to the perturbed Karush-Kuhn-Tucker optimality conditions for the linear programming problem. This leads to a symmetric indefinite linear system called the augmented system. This system can be reduced to the Schur complement system, which is positive definite. After the reduction, the solution of the linear system is usually computed via the Cholesky factorization. This factorization can be dense for some classes of problems. Therefore, the solution of these systems by iterative methods must be considered. Since these systems are very ill-conditioned near a solution of the linear programming problem, it is crucial to develop efficie...
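The augmented-system-to-Schur-complement reduction described above can be sketched numerically. The toy data and diagonal scaling D below are assumptions for illustration; in an interior point method D comes from the current iterate, and the point is that the reduced system is symmetric positive definite and Cholesky-factorizable even though the augmented system is indefinite.

```python
import numpy as np

# Reduce the augmented system
#   [ -D   A^T ] [dx]   [r1]
#   [  A    0  ] [dy] = [r2],   D diagonal, positive,
# to the SPD Schur complement (A D^{-1} A^T) dy = r2 + A D^{-1} r1.

rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))
d = rng.uniform(0.5, 2.0, n)          # diagonal of D
r1 = rng.standard_normal(n)
r2 = rng.standard_normal(m)

S = (A / d) @ A.T                     # Schur complement A D^{-1} A^T
L = np.linalg.cholesky(S)             # possible: S is positive definite
dy = np.linalg.solve(L.T, np.linalg.solve(L, r2 + (A / d) @ r1))
dx = (A.T @ dy - r1) / d              # recover dx from the first block row

# Check against the full indefinite augmented system.
aug = np.block([[-np.diag(d), A.T], [A, np.zeros((m, m))]])
assert np.allclose(aug @ np.concatenate([dx, dy]),
                   np.concatenate([r1, r2]))
```

As the abstract notes, A D^{-1} A^T can be much denser than A (a single dense column of A makes it fully dense), which is what motivates iterative methods with good preconditioners instead of the Cholesky factorization.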
Algorithm xxx: Reliable Calculation of Numerical Rank, Null Space Bases, Pseudoinverse Solutions, and Basic Solutions using SuiteSparseQR
Abstract
The SPQR_RANK package contains routines that calculate the numerical rank of large, sparse, numerically rank-deficient matrices. The routines can also calculate orthonormal bases for numerical null spaces, approximate pseudoinverse solutions to least squares problems involving rank-deficient matrices, and basic solutions to these problems. The algorithms are based on SPQR from SuiteSparseQR (ACM Transactions on Mathematical Software 38, Article 8, 2011). SPQR is a high-performance routine for forming QR factorizations of large, sparse matrices. It returns an estimate for the numerical rank that is usually, but not always, correct. The new routines improve the accuracy of the numerical rank calculated by SPQR and reliably determine the numerical rank in the sense that, based on extensive testing with matrices from applications, the numerical rank is almost always accurately determined when our methods report that the numerical rank should be correct. Reliable determination of numerical rank is critical to the other calculations in the package. The routines work well for matrices with either small or large null space dimensions.
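The notion of numerical rank that SPQR_RANK computes can be illustrated with a dense SVD: count the singular values above a tolerance. This is only a small-scale sketch of the concept; SPQR_RANK itself works from a sparse QR factorization precisely because a dense SVD is infeasible for large sparse matrices.

```python
import numpy as np

def numerical_rank(A, tol=None):
    """Numerical rank of A: number of singular values above tol.
    With tol=None, use a conventional default scaled by the largest
    singular value (an assumption here, not SPQR_RANK's rule)."""
    s = np.linalg.svd(A, compute_uv=False)
    if tol is None:
        tol = max(A.shape) * np.finfo(float).eps * s[0]
    return int(np.sum(s > tol))

# An exactly rank-2 matrix plus tiny noise is numerically rank 2,
# even though its exact rank is 4.
B = np.outer([1, 2, 3, 4], [1, 0, 1, 0]) + np.outer([1, 1, 1, 1], [0, 1, 0, 2])
A = B.astype(float) + 1e-12 * np.random.default_rng(1).standard_normal((4, 4))
print(numerical_rank(A, tol=1e-8))  # 2
```

The subtlety the abstract points to is exactly the choice of tolerance and the confidence in the answer: near-tolerance singular values make the rank decision ambiguous, which is why SPQR_RANK reports whether its computed rank should be trusted.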