Results 1–10 of 477,762
Parallel Sparse Matrix-Vector Multiplication
, 1997
"... In this paper we describe an algorithm for unstructured sparse matrix-vector multiplication on distributed memory parallel computers. We focus on both local and global computational efficiency, i.e. single-processor computational performance and interprocessor communication efficiency. Numerical ex ..."
Abstract

Cited by 2 (0 self)
Sparse matrix-vector multiplication on FPGAs
 In Proceedings of the ACM International Symposium on Field Programmable Gate Arrays
, 2005
"... Sparse matrix-vector multiplication (SpMXV) is a key computational kernel widely used in scientific applications and signal processing applications. However, the performance of SpMXV on most modern processors is poor due to the irregular sparsity structure in the matrices. Application-specific proce ..."
Abstract

Cited by 57 (7 self)
“Implementing Sparse Matrix-Vector Multiplication on
"... Minimize memory traffic; maximize coalesced memory access ..."
On Improving the Performance of Sparse Matrix-Vector Multiplication
 In Proceedings of the International Conference on High-Performance Computing
, 1997
"... We analyze single-node performance of sparse matrix-vector multiplication by investigating issues of data locality and fine-grained parallelism. We examine the data-locality characteristics of the compressed-sparse-row representation and consider improvements in locality through matrix permutation. ..."
Abstract

Cited by 28 (0 self)
One of the core operations of iterative sparse solvers is sparse matrix-vector multiplication. In order to achieve high performance, a parallel implementation of sparse matrix-vector multiplication must maintain scalability. This scalability comes from a balanced mapping of the matrix and vectors among
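The entry above discusses the compressed sparse row (CSR) representation and its data-locality behavior. The following is a minimal sketch of a CSR matrix-vector product; the layout (values / column indices / row pointers) is standard CSR, but the function and variable names are illustrative, not taken from any cited paper.

```python
def csr_spmv(values, col_idx, row_ptr, x):
    """Multiply a CSR-stored sparse matrix by a dense vector x."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        # Nonzeros of row i occupy values[row_ptr[i]:row_ptr[i+1]];
        # the gather on x[col_idx[k]] is the irregular access that
        # drives the locality concerns discussed above.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# 3x3 example:  [[4, 0, 1],
#                [0, 2, 0],
#                [3, 0, 5]]
values  = [4.0, 1.0, 2.0, 3.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(csr_spmv(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```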
Implementing sparse matrix-vector multiplication on throughput-oriented processors
 In SC ’09: Proceedings of the 2009 ACM/IEEE conference on Supercomputing
, 2009
"... Sparse matrix-vector multiplication (SpMV) is of singular importance in sparse linear algebra. In contrast to the uniform regularity of dense linear algebra, sparse operations encounter a broad spectrum of matrices ranging from the regular to the highly irregular. Harnessing the tremendous potential ..."
Abstract

Cited by 137 (6 self)
A library for parallel sparse matrix-vector multiplies
, 2005
"... We provide parallel matrix-vector multiply routines for 1D and 2D partitioned sparse square and rectangular matrices. We clearly give pseudocodes that perform necessary initializations for parallel execution. We show how to maximize overlapping between communication and computation through the pro ..."
Abstract

Cited by 7 (6 self)
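The 1D partitioning mentioned above can be sketched as follows: each processor owns a contiguous block of rows and computes its slice of y. This is a toy serial illustration of the decomposition, not the cited library's API; real implementations exchange the needed x entries via message passing and overlap that communication with local computation.

```python
def partition_rows(n_rows, n_procs):
    """Split row indices into n_procs nearly equal contiguous blocks."""
    base, extra = divmod(n_rows, n_procs)
    blocks, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)
        blocks.append(range(start, start + size))
        start += size
    return blocks

def parallel_spmv(A, x, n_procs):
    """Dense-stored toy SpMV; each row block is an independent task."""
    y = [0.0] * len(A)
    for rows in partition_rows(len(A), n_procs):
        for i in rows:  # work assigned to one processor
            y[i] = sum(a * b for a, b in zip(A[i], x) if a != 0.0)
    return y

A = [[4.0, 0.0, 1.0], [0.0, 2.0, 0.0], [3.0, 0.0, 5.0]]
print(parallel_spmv(A, [1.0, 1.0, 1.0], 2))  # [5.0, 2.0, 8.0]
```

A balanced mapping of rows (and of the vector entries each block must receive) is exactly what determines the scalability discussed in the entries above.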
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
 ACM Trans. Math. Software
, 1982
"... An iterative method is given for solving Ax = b and min ‖Ax − b‖₂, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerica ..."
Abstract

Cited by 649 (21 self)
Improving Memory-System Performance of Sparse Matrix-Vector Multiplication
 IBM Journal of Research and Development
, 1997
"... Sparse Matrix-Vector Multiplication is an important kernel that often runs inefficiently on superscalar RISC processors. This paper describes techniques that increase instruction-level parallelism and improve performance. The techniques include reordering to reduce cache misses originally due to Das ..."
Abstract

Cited by 93 (0 self)
Efficient sparse matrix-vector multiplication on CUDA
, 2008
"... The massive parallelism of graphics processing units (GPUs) offers tremendous performance in many high-performance computing applications. While dense linear algebra readily maps to such platforms, harnessing this potential for sparse matrix computations presents additional challenges. Given its rol ..."
Abstract

Cited by 109 (2 self)
Given its role in iterative methods for solving sparse linear systems and eigenvalue problems, sparse matrix-vector multiplication (SpMV) is of singular importance in sparse linear algebra. In this paper we discuss data structures and algorithms for SpMV that are efficiently implemented on the CUDA platform
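GPU SpMV work of this kind often pads rows into the ELLPACK (ELL) layout so that one thread per row executes in lockstep. The following host-side Python sketch illustrates the layout generically; it is not a specific implementation from the cited paper, and the names are illustrative.

```python
def to_ell(A, pad_col=0, pad_val=0.0):
    """Convert a dense-stored matrix to padded ELL (values, columns)."""
    nnz_rows = [[(j, v) for j, v in enumerate(row) if v != 0.0] for row in A]
    width = max(len(r) for r in nnz_rows)  # max nonzeros in any row
    vals = [[pad_val] * width for _ in A]
    cols = [[pad_col] * width for _ in A]
    for i, row in enumerate(nnz_rows):
        for k, (j, v) in enumerate(row):
            vals[i][k], cols[i][k] = v, j
    return vals, cols

def ell_spmv(vals, cols, x):
    """One 'thread' per row; padded zero entries contribute nothing."""
    return [sum(v * x[j] for v, j in zip(vr, cr)) for vr, cr in zip(vals, cols)]

A = [[4.0, 0.0, 1.0], [0.0, 2.0, 0.0], [3.0, 0.0, 5.0]]
vals, cols = to_ell(A)
print(ell_spmv(vals, cols, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```

The padding makes every row the same length, which regularizes memory access for highly uniform matrices but wastes space when row lengths vary widely, which is one reason hybrid formats are used for irregular inputs.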
An Efficient Sparse Matrix-Vector Multiplication on Distributed Memory Parallel Computers
, 2006
"... The matrix-vector product is one of the most important computational components of Krylov methods. This kernel is an irregular problem, which has led to the development of several compressed storage formats. We design a data structure for a distributed matrix to compute the matrix-vector product effic ..."
Abstract
We compute the matrix-vector product efficiently on distributed memory parallel computers using MPI. We conduct numerical experiments on several different sparse matrices and show the parallel performance of our sparse matrix-vector product routines. Key words: Sparse matrices, matrix-vector product, sparse storage formats, distributed computing