Results 1–10 of 446
Sparse matrix-vector multiplication on FPGAs
 In Proceedings of the ACM International Symposium on Field-Programmable Gate Arrays, 2005
"... Sparse matrix-vector multiplication (SpMXV) is a key computational kernel widely used in scientific applications and signal processing applications. However, the performance of SpMXV on most modern processors is poor due to the irregular sparsity structure in the matrices. Application-specific proce ..."
Cited by 60 (7 self)
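A minimal sketch of the kernel these papers optimize may help fix ideas. The following compressed sparse row (CSR) multiply is illustrative only (the function and array names are ours, not taken from any cited implementation); the indirect load `x[col_idx[k]]` is the irregular memory access the abstracts describe:

```python
def spmv_csr(row_ptr, col_idx, vals, x):
    """y = A @ x for a sparse matrix A stored in CSR form.

    row_ptr[i]:row_ptr[i+1] delimits row i's entries in col_idx/vals.
    The gather x[col_idx[k]] is the irregular, cache-unfriendly access
    pattern that the cited papers work to mitigate.
    """
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += vals[k] * x[col_idx[k]]
        y[i] = acc
    return y

# A = [[4, 0, 1],
#      [0, 2, 0],
#      [3, 0, 5]]
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
vals    = [4.0, 1.0, 2.0, 3.0, 5.0]
print(spmv_csr(row_ptr, col_idx, vals, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```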
Efficient sparse matrix-vector multiplication on CUDA
 , 2008
"... The massive parallelism of graphics processing units (GPUs) offers tremendous performance in many high-performance computing applications. While dense linear algebra readily maps to such platforms, harnessing this potential for sparse matrix computations presents additional challenges. Given its role in iterative methods for solving sparse linear systems and eigenvalue problems, sparse matrix-vector multiplication (SpMV) is of singular importance in sparse linear algebra. In this paper we discuss data structures and algorithms for SpMV that are efficiently implemented on the CUDA platform ..."
Cited by 113 (2 self)
Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms
 In Proc. SC2007: High Performance Computing, Networking, and Storage Conference, 2007
"... We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore-specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) – one of the most heavily used kernels in scientific computing – across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD ..."
Cited by 153 (20 self)
Parallel Sparse Matrix-Vector Multiplication
 , 1997
"... In this paper we describe an algorithm for unstructured sparse matrix-vector multiplication on distributed-memory parallel computers. We focus on both local and global computational efficiency, i.e. single-processor computational performance and interprocessor communication efficiency. Numerical ex ..."
Cited by 2 (0 self)
Improving Performance of Sparse Matrix-Vector Multiplication
 , 1999
"... Sparse matrix-vector multiplication (SpMxV) is one of the most important computational kernels in scientific computing. It often suffers from poor cache utilization and extra load operations because of memory indirections used to exploit sparsity. We propose alternative data structures, along with r ..."
Cited by 65 (3 self)
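The snippet above does not spell out which alternative data structures the paper proposes. As one generic illustration of trading indirection for regularity (a standard ELLPACK-style layout, not necessarily the authors' structure), every row is padded to a fixed number of entries so the inner loop has a constant trip count:

```python
def spmv_ell(ncols_per_row, col_idx, vals, x):
    """y = A @ x in an ELLPACK-style layout.

    Every row is padded to ncols_per_row entries (padding uses value 0.0
    and column 0), giving a regular, fixed-length inner loop at the cost
    of some wasted storage for uneven rows.
    """
    n = len(vals) // ncols_per_row
    y = [0.0] * n
    for i in range(n):
        base = i * ncols_per_row
        for k in range(ncols_per_row):
            y[i] += vals[base + k] * x[col_idx[base + k]]
    return y

# Same example matrix [[4, 0, 1], [0, 2, 0], [3, 0, 5]],
# padded to 2 entries per row (row 1 carries one zero pad):
col_idx = [0, 2,  1, 0,  0, 2]
vals    = [4.0, 1.0,  2.0, 0.0,  3.0, 5.0]
print(spmv_ell(2, col_idx, vals, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```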
Sparse Matrix-Vector Multiplication on FPGAs
 , 2007
"... Floating-point Sparse Matrix-Vector Multiplication (SpMXV) is a key computational kernel in scientific and engineering applications. The poor data locality of sparse matrices significantly reduces the performance of SpMXV on general-purpose processors, which rely heavily on the cache hierarchy to ac ..."
Improving Memory-System Performance of Sparse Matrix-Vector Multiplication
 IBM Journal of Research and Development, 1997
"... Sparse Matrix-Vector Multiplication is an important kernel that often runs inefficiently on superscalar RISC processors. This paper describes techniques that increase instruction-level parallelism and improve performance. The techniques include reordering to reduce cache misses originally due to Das ..."
Cited by 93 (0 self)
"... superscalar RISC processors as well and have improved performance on a Sun UltraSPARC I workstation, for example. ... Sparse matrix-vector multiplication is an important computational kernel in many iterative linear solvers (see [5], for example). Unfortunately, on many computers this kernel runs ..."
Vector ISA Extension for Sparse Matrix-Vector Multiplication
"... In this paper we introduce a vector ISA extension to facilitate sparse matrix manipulation on vector processors (VPs). First we introduce a new Block Based Compressed Storage (BBCS) format for sparse matrix representation and a Blockwise Sparse Matrix-Vector Multiplication approach. Additionally, we propose two vector instructions, Multiple Inner Product and Accumulate (MIPA) and LoaD Section (LDS), specially tuned to increase the VP performance when executing sparse matrix-vector multiplications. ... In many areas of scientific computing the manipulation of sparse matrices ..."
Direct and Transposed Sparse Matrix-Vector
 In Proceedings of the 2002 Euromicro Conference on Massively-Parallel Computing Systems, MPCS-2002, 2002
"... In this paper we investigate the execution of Ab and A^T b, where A is a sparse matrix and b a dense vector, using the Blocked Based Compression Storage (BBCS) scheme and an Augmented Vector Architecture (AVA). In particular, we demonstrate that by using the BBCS format, we can represent both the direct and the transposed matrix for the purposes of matrix-vector multiplication with no additional costs in storage, access time and computation performance. To achieve this, we propose a new instruction and a hardware modification for the AVA. Subsequently we evaluate the performance of the transposed ..."
Cited by 1 (1 self)
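The BBCS machinery itself is hardware-oriented and not reproduced here, but the general point (one representation serving both the direct and the transposed product) can be sketched in software with plain CSR: where y = Ab gathers from the input vector, y = A^T b scatters into the result, reusing the very same arrays. The function name and example data below are illustrative, not from the paper:

```python
def spmv_csr_transposed(row_ptr, col_idx, vals, x, ncols):
    """y = A.T @ x using the same CSR arrays that encode A.

    Where the direct product gathers from x per row, the transposed
    product scatters vals[k] * x[i] into y -- no second, transposed
    copy of the matrix is stored.
    """
    y = [0.0] * ncols
    for i in range(len(row_ptr) - 1):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[col_idx[k]] += vals[k] * x[i]
    return y

# A = [[4, 0, 1], [0, 2, 0], [3, 0, 5]] in CSR form:
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
vals    = [4.0, 1.0, 2.0, 3.0, 5.0]
# Columns of A sum to [7, 2, 6], so A.T @ [1, 1, 1]:
print(spmv_csr_transposed(row_ptr, col_idx, vals, [1.0, 1.0, 1.0], 3))
# [7.0, 2.0, 6.0]
```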