Results 1–10 of 25,582
Finding the Distance to Instability of a Large Sparse Matrix
The distance to instability of a matrix A is a robust measure for the stability of the corresponding dynamical system ẋ = Ax, known to be far more reliable than checking the eigenvalues of A. In this paper, a new algorithm for computing such a distance is sketched. Built on existing approaches, its computationally most expensive part involves a usually modest number of shift-and-invert Arnoldi iterations. This makes it possible to address large sparse matrices, such as those arising from discretized partial differential equations.
Cited by 1 (1 self)
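The quantity this abstract refers to is min over ω of σ_min(A − iωI) for a stable A. A brute-force dense sketch of that definition (not the paper's shift-and-invert Arnoldi algorithm; the frequency grid and the 2×2 test matrix are illustrative assumptions) looks like:

```python
import numpy as np

def distance_to_instability(A, omegas):
    # Brute-force estimate of min over omega of sigma_min(A - i*omega*I),
    # meaningful when A is stable (all eigenvalues in the open left half-plane).
    n = A.shape[0]
    return min(np.linalg.svd(A - 1j * w * np.eye(n), compute_uv=False)[-1]
               for w in omegas)

# Stable test matrix with eigenvalues -1 and -2; the minimum is attained
# at omega = 0, where the smallest singular value is 1.
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
d = distance_to_instability(A, np.linspace(-5.0, 5.0, 201))
```

The dense SVD per grid point is what makes this approach infeasible at scale, which is exactly the cost the paper's Arnoldi-based method avoids.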
LQSchur Projection on Large Sparse Matrix Equations
A new paradigm for the solution of nonsymmetric large sparse systems of linear equations is proposed. The paradigm is based on an LQ factorization of the matrix of coefficients, i.e. factoring the matrix of coefficients into the product of a lower triangular matrix and an orthogonal matrix. We show how ...
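The factorization the abstract describes, A = LQ with L lower triangular and Q orthogonal, can be obtained from a QR factorization of Aᵀ. A minimal dense sketch of solving Ax = b that way (the paper's sparse Schur-projection machinery is not reproduced here):

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def lq_solve(A, b):
    # A^T = Qt @ R  implies  A = R^T @ Qt^T, i.e. A = L Q with
    # L = R^T lower triangular and Q = Qt^T orthogonal.
    Qt, R = qr(A.T)
    y = solve_triangular(R.T, b, lower=True)  # solve L y = b
    return Qt @ y                             # x = Q^T y

A = np.array([[4.0, 1.0], [2.0, 3.0]])
b = np.array([1.0, 2.0])
x = lq_solve(A, b)
```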
Large Sparse Matrix Problems in Scientific and Industrial Applications
Sponsored by, and in collaboration with, the Société de Mathématiques Appliquées et Industrielles / Groupe pour l'Avancement des Méthodes Numériques de l'Ingénieur (SMAI/GAMNI)
Parallel Multilevel Sparse Approximate Inverse Preconditioners in Large Sparse Matrix Computations
 In Proceedings of Supercomputing 2003: Igniting Innovation, November 15–21, 2003
We investigate the use of multistep successive preconditioning strategies (MSP) to construct a class of parallel multilevel sparse approximate inverse (SAI) preconditioners. We do not use independent set ordering, but a diagonal-dominance-based matrix permutation to build a multilevel ...
Cited by 3 (0 self)
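SciPy ships no sparse-approximate-inverse routine, so the sketch below uses an incomplete-LU factorization as a stand-in to show the same usage pattern the abstract assumes: wrap an approximation of A⁻¹ as a LinearOperator and hand it to a Krylov solver. The tridiagonal test matrix is an illustrative assumption.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, LinearOperator, gmres

n = 100
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

ilu = spilu(A)                                # approximate LU factors of A
M = LinearOperator((n, n), matvec=ilu.solve)  # applies an approximate A^{-1}
x, info = gmres(A, b, M=M)                    # preconditioned Krylov solve
```

An SAI preconditioner would replace `M` by an explicitly stored sparse matrix, whose application is a matrix-vector product and therefore trivially parallel, which is the property the paper exploits.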
2007 International Conference on Preconditioning Techniques for Large Sparse Matrix Problems in Scientific and Industrial Applications
, 2007
The 2007 International Conference on Preconditioning Techniques for Large Sparse Matrix Problems in Scientific and Industrial Applications, Preconditioning 2007, is the fifth in a series of conferences that focus on preconditioning techniques in sparse matrix computation.
A SuperProgramming Technique for Large Sparse Matrix Multiplication on PC Clusters
 IEICE Trans. Info. Systems E87-D
, 2004
The multiplication of large sparse matrices is a basic operation for many scientific and engineering applications. There exist some high-performance library routines for this operation, often optimized for the target architecture. The PC cluster computing paradigm has recently emerged as a viable alternative for high-performance, low-cost computing. In this paper, we apply our super-programming approach [24] to study the load balance and runtime management overhead of implementing parallel large matrix multiplication on PC clusters. For a parallel environment, it is essential ...
Cited by 4 (3 self)
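The single-node version of the basic operation the abstract starts from, multiplying two sparse matrices held in CSR format, can be sketched as follows (the paper's block distribution across a PC cluster is beyond this sketch; sizes and density are illustrative assumptions):

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
A = sp.random(500, 500, density=0.01, format='csr', random_state=rng)
B = sp.random(500, 500, density=0.01, format='csr', random_state=rng)
C = A @ B   # sparse-sparse product; the result stays sparse
```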
A Survey of Methods for Computing Large Sparse Matrix Exponentials Arising in Markov Chains
 Computational Statistics and Data Analysis 29
, 1996
Krylov subspace techniques have been shown to yield robust methods for the numerical computation of large sparse matrix exponentials, and especially the transient solutions of Markov chains. The attractiveness of these methods results from the fact that they allow us to compute the action of a matrix ...
Cited by 8 (0 self)
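Computing the action exp(tA)v without forming exp(tA) is exactly what SciPy's `expm_multiply` provides. A minimal sketch on the Markov-chain use case the abstract mentions (the 3-state generator Q is an illustrative assumption; its rows sum to zero, so probability mass is conserved):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

# Generator of a 3-state continuous-time Markov chain (rows sum to zero).
Q = sp.csr_matrix(np.array([[-2.0, 1.0, 1.0],
                            [ 1.0, -1.0, 0.0],
                            [ 0.5,  0.5, -1.0]]))
p0 = np.array([1.0, 0.0, 0.0])   # initial distribution
pt = expm_multiply(Q.T, p0)      # transient distribution p(1) = exp(Q^T) p0
```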
The University of Florida sparse matrix collection
 NA DIGEST
, 1997
The University of Florida Sparse Matrix Collection is a large, widely available, and actively growing set of sparse matrices that arise in real applications. Its matrices cover a wide spectrum of problem domains, both those arising from problems with underlying 2D or 3D geometry (structural engineering ...
Cited by 536 (17 self)
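The collection distributes its matrices in Matrix Market (`.mtx`) format, which SciPy reads directly. A sketch of the round trip, using an in-memory buffer as a stand-in for a downloaded file (the 2×2 matrix is an illustrative assumption):

```python
import io
import numpy as np
import scipy.sparse as sp
from scipy.io import mmread, mmwrite

A = sp.csr_matrix(np.array([[2.0, 0.0], [1.0, 3.0]]))
buf = io.BytesIO()
mmwrite(buf, A)        # write Matrix Market data, as the collection ships it
buf.seek(0)
B = mmread(buf).tocsr()  # mmread returns COO; convert for computation
```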
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
 ACM Trans. Math. Software
, 1982
An iterative method is given for solving Ax = b and min ||Ax − b||_2, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical ...
Cited by 653 (21 self)
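This algorithm is available in SciPy as `scipy.sparse.linalg.lsqr`, named after the paper. A minimal sketch on an overdetermined sparse least-squares problem (the 3×2 matrix is an illustrative assumption):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

A = sp.csr_matrix(np.array([[1.0, 0.0],
                            [1.0, 1.0],
                            [0.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])
x = lsqr(A, b)[0]   # first return value is the least-squares solution
```

At the solution the residual is orthogonal to the column space of A, i.e. Aᵀ(Ax − b) = 0, which is a convenient correctness check.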
Nonnegative matrix factorization with sparseness constraints
 Journal of Machine Learning Research
, 2004
Nonnegative matrix factorization (NMF) is a recently developed technique for finding parts-based, linear representations of nonnegative data. Although it has successfully been applied in several applications, it does not always result in parts-based representations. In this paper, we show ...
Cited by 498 (0 self)
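A sketch of the plain multiplicative-update NMF (Lee and Seung) that this paper builds on; Hoyer's contribution adds an explicit sparseness projection after each update, which is omitted here. Matrix sizes, rank, and iteration count are illustrative assumptions.

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    # Factor nonnegative V (n x m) as W @ H with W (n x r), H (r x m) >= 0,
    # using multiplicative updates for the Frobenius-norm objective.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update H, stays nonnegative
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update W, stays nonnegative
    return W, H

V = np.random.default_rng(1).random((20, 10))
W, H = nmf(V, r=5)
```

The updates only multiply by nonnegative ratios, which is why nonnegativity of W and H is preserved without any projection; sparseness, however, is not controlled, which is the gap the paper addresses.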