Results 1–10 of 1,826,475
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
ACM Trans. Math. Software, 1982
"... An iterative method is given for solving Ax ≈ b and min ‖Ax − b‖₂, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical ..."
Abstract

Cited by 649 (21 self)
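The LSQR algorithm described in this entry is available in SciPy as `scipy.sparse.linalg.lsqr`. A minimal sketch of solving a sparse least-squares problem min ‖Ax − b‖₂ with it (the matrix and right-hand side here are made up for illustration):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Overdetermined sparse system: 4 equations, 2 unknowns.
A = csr_matrix(np.array([[1.0, 0.0],
                         [1.0, 1.0],
                         [0.0, 2.0],
                         [1.0, 3.0]]))
b = np.array([1.0, 2.0, 2.0, 4.0])

# lsqr iteratively minimizes ||Ax - b||_2 without ever forming A^T A,
# using the Golub-Kahan bidiagonalization mentioned in the abstract.
result = lsqr(A, b)
x = result[0]       # solution estimate
istop = result[1]   # termination reason (1 or 2 indicate convergence)
print(x, istop)
```

Because LSQR only touches A through matrix-vector products, the same call works unchanged for very large sparse matrices or for any `LinearOperator`.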
Solving Sparse Linear Equations Over Finite Fields
"... A “coordinate recurrence” method for solving sparse systems of linear equations over finite fields is described. The algorithms discussed all require O(n1(ω + n1) log^k n1) field operations, where n1 is the maximum dimension of the coefficient matrix and ω is approximately the number of fi ..."
Abstract
Parallel Preconditioning for Sparse Linear Equations
"... A popular class of preconditioners is known as incomplete factorizations. They can be thought of as approximating the exact LU factorization of a given matrix A (e.g. computed via Gaussian elimination) by disallowing certain fill-ins. As opposed to other PDE-based preconditioners such as multigrid and ..."
Abstract

Cited by 1 (0 self)
and domain decomposition, this class of preconditioners is primarily algebraic in nature and can in principle be applied to any sparse matrix. In this paper we will discuss some new viewpoints for the construction of effective preconditioners. In particular, we will discuss parallelization aspects
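The incomplete-factorization idea can be tried with SciPy's `spilu`, which computes an ILU with restricted fill-in, used here as a preconditioner for GMRES. This is a generic sketch on an illustrative test matrix, not the construction from the paper:

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import spilu, gmres, LinearOperator

# Small sparse test problem: diagonally dominant tridiagonal matrix.
n = 50
A = csc_matrix(diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n)))
b = np.ones(n)

# Incomplete LU: approximate factors, with small entries dropped
# (drop_tol) and total fill bounded (fill_factor).
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(A.shape, ilu.solve)  # applies M ~ A^{-1} to a vector

x, info = gmres(A, b, M=M)  # info == 0 means GMRES converged
```

The preconditioner is purely algebraic: nothing about the construction uses the PDE or mesh behind A, which is exactly the property the abstract contrasts with multigrid and domain decomposition.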
DISTURBED SPARSE LINEAR EQUATIONS OVER THE 0–1 FINITE FIELD
"... In this paper, disturbed sparse linear equations over the 0–1 finite field are considered. Due to the special structure of the problem, the standard alternating coordinate method can be implemented in such a way as to yield a fast and efficient algorithm. Our alternating coordinate algorithm makes use ..."
Abstract
A Parallel Hierarchical Algorithm For Module Placement Based On Sparse Linear Equations
In Proceedings of the 1996 International Conference on Circuits and Systems, 1996
"... We present a fast and effective module placement algorithm which is based on the PROUD algorithm. The PROUD algorithm uses a hierarchical decomposition technique and the solution of sparse linear systems of equations based on a resistive network analogy. It has been shown that the PROUD algorithm is ..."
Abstract

Cited by 3 (0 self)
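In its simplest form, the resistive-network analogy behind PROUD reduces module placement to a sparse linear solve: each net acts as a resistor, fixed pads supply boundary values, and the equilibrium positions of the movable modules satisfy a Laplacian-like system. A toy one-dimensional sketch (the netlist and positions are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

# Modules 0..3 are movable; two fixed pads sit at x = 0.0 and x = 10.0.
# Nets (unit resistors) form a chain: pad0 - m0 - m1 - m2 - m3 - pad1.
n = 4
L = lil_matrix((n, n))
rhs = np.zeros(n)

edges = [(-1, 0), (0, 1), (1, 2), (2, 3), (3, -2)]  # -1/-2 mark the pads
pad_pos = {-1: 0.0, -2: 10.0}
for u, v in edges:
    for a, bnode in ((u, v), (v, u)):
        if a >= 0:                      # only movable nodes get equations
            L[a, a] += 1.0
            if bnode >= 0:
                L[a, bnode] -= 1.0      # coupling to another movable module
            else:
                rhs[a] += pad_pos[bnode]  # fixed pad term moves to the RHS

x = spsolve(L.tocsr(), rhs)  # equilibrium (minimum squared wirelength)
print(x)
```

For this chain the modules spread evenly between the pads, at x = 2, 4, 6, 8; real placers solve the same kind of system with two coordinates and millions of modules, which is why a fast sparse solver dominates the runtime.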
A PARALLEL HIERARCHICAL ALGORITHM FOR MODULE PLACEMENT BASED ON SPARSE LINEAR EQUATIONS
"... We present a fast and effective module placement algorithm which is based on the PROUD algorithm. The PROUD algorithm uses a hierarchical decomposition technique and the solution of sparse linear systems of equations based on a resistive network analogy. It has been shown that the PROUD algorithm ..."
Abstract
Partitioned Inverses For The Solution Of Large Sparse Linear Equations On The Intel Paragon
"... This paper investigates the performance of the partitioned incomplete inverses and computation-free partitioned inverses on the Intel Paragon and reports the results. ..."
Abstract
Scalable Hierarchical Parallel Algorithm for the Solution of Super Large-Scale Sparse Linear Equations
"... The parallel linear equations solver capable of effectively using 1000+ processors becomes the bottleneck of large-scale implicit engineering simulations. In this paper, we present a new hierarchical parallel master-slave structural iterative algorithm for the solution of super large-scale sparse li ..."
Abstract
PVM Implementation of Sparse Approximate Inverse Preconditioners for Solving Large Sparse Linear Equations
, 1996
"... to convergent iterates if the spectral radius ρ(M⁻¹K) < 1. For many important iterative methods the convergence depends heavily on the position of the eigenvalues of A. Therefore, the original system Ax = b is often replaced by an equivalent system MAx = Mb or the system AMz = b, x = Mz. ..."
Abstract

Cited by 4 (0 self)
to convergent iterates if the spectral radius ρ(M⁻¹K) < 1. For many important iterative methods the convergence depends heavily on the position of the eigenvalues of A. Therefore, the original system Ax = b is often replaced by an equivalent system MAx = Mb or the system AMz = b, x = Mz. Here, the matrix M is called a preconditioner and has to satisfy a few conditions: AM (or MA) should have a 'clustered' spectrum; M should be fast to compute in parallel; M × vector should be fast to compute in parallel. Often used preconditioners are (i) Block-Jacobi preconditioner: M = inv(blockdiag(A)), is ea
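The conditions listed above can be illustrated with the simplest member of the family the abstract mentions: a point-Jacobi preconditioner M = inv(diag(A)), applied inside SciPy's conjugate-gradient solver. This is a generic sketch on a made-up SPD test matrix, not the paper's PVM implementation:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# SPD test matrix: diagonally dominant tridiagonal system.
n = 100
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioner: M applies the inverted diagonal of A.
# M * vector is elementwise, so it is trivially cheap and parallel,
# satisfying the "fast to compute in parallel" conditions above.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator(A.shape, lambda v: inv_diag * v)

x, info = cg(A, b, M=M)  # info == 0 means CG converged
```

Sparse approximate inverses, the topic of this paper, generalize this idea: M is an explicitly stored sparse approximation of A⁻¹, so applying it is still just one sparse matrix-vector product per iteration.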