A Two-Dimensional Data Distribution Method for Parallel Sparse Matrix-Vector Multiplication
 SIAM REVIEW
Abstract

Cited by 68 (9 self)
A new method is presented for distributing data in sparse matrix-vector multiplication. The method is two-dimensional, tries to minimise the true communication volume, and also tries to spread the computation and communication work evenly over the processors. The method starts with a recursive bipartitioning of the sparse matrix, each time splitting a rectangular matrix into two parts with a nearly equal number of nonzeros. The communication volume caused by the split is minimised. After the matrix partitioning, the input and output vectors are partitioned with the objective of minimising the maximum communication volume per processor. Experimental results of our implementation, Mondriaan, for a set of sparse test matrices show a reduction in communication compared to one-dimensional methods, and in general a good balance in the communication work.
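The recursive bipartitioning idea can be sketched as follows. This is a much-simplified illustration, not the Mondriaan algorithm itself: Mondriaan chooses each split to minimise communication volume via hypergraph heuristics, whereas this sketch only shows the "nearly equal number of nonzeros" splitting, alternating between row-wise and column-wise cuts.

```python
def bipartition(nonzeros, depth, max_depth, label=0):
    """Recursively split a list of (row, col) nonzeros into 2**max_depth
    parts of nearly equal size. Returns {(row, col): part_label}.

    Simplified sketch: a median split on alternating axes, without the
    communication-volume minimisation of the real method.
    """
    if depth == max_depth or len(nonzeros) <= 1:
        return {nz: label for nz in nonzeros}
    axis = depth % 2                      # alternate: split rows, then columns
    nonzeros = sorted(nonzeros, key=lambda nz: nz[axis])
    mid = len(nonzeros) // 2              # nearly equal nonzero counts per part
    left = bipartition(nonzeros[:mid], depth + 1, max_depth, 2 * label)
    right = bipartition(nonzeros[mid:], depth + 1, max_depth, 2 * label + 1)
    return {**left, **right}

# Example: a small 4x4 sparse pattern distributed over 2**2 = 4 parts.
nnz = [(0, 0), (0, 3), (1, 1), (2, 0), (2, 2), (3, 1), (3, 3)]
parts = bipartition(nnz, 0, 2)
```

After the recursion, each nonzero carries a part label in 0..3; part sizes differ by at most one nonzero, which is the load-balance property the abstract refers to.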
Encapsulating Multiple Communication-Cost Metrics in Partitioning Sparse Rectangular Matrices for Parallel Matrix-Vector Multiplies
Abstract

Cited by 35 (22 self)
This paper addresses the problem of one-dimensional partitioning of structurally unsymmetric square and rectangular sparse matrices for parallel matrix-vector and matrix-transpose-vector multiplies. The objective is to minimize the communication cost while maintaining the balance on computational loads of processors. Most of the existing partitioning models consider only the total message volume, hoping that minimizing this communication-cost metric is likely to reduce other metrics. However, the total message latency (startup time) may be more important than the total message volume. Furthermore, the maximum message volume and latency handled by a single processor are also important metrics. We propose a two-phase approach that encapsulates all four of these communication-cost metrics. The objective in the first phase is to minimize the total message volume while maintaining the computational-load balance. The objective in the second phase is to encapsulate the remaining three communication-cost metrics. We propose communication-hypergraph and partitioning models for the second phase. We then present several methods for partitioning communication hypergraphs. Experiments on a wide range of test matrices show that the proposed approach yields very effective partitioning results. A parallel implementation on a PC cluster verifies that the theoretical improvements shown by partitioning results hold in practice.
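The four communication-cost metrics the abstract distinguishes can be made concrete with a small sketch. The send table and `comm_metrics` helper below are hypothetical names for illustration; the paper models these metrics with communication hypergraphs, which this sketch does not attempt, but it shows exactly what each metric measures given a fixed partition's communication pattern.

```python
def comm_metrics(sends):
    """Given a send table {(src_proc, dst_proc): words}, return the four
    communication-cost metrics: (total volume, total messages,
    max volume per processor, max messages per processor).

    Total messages stands in for total latency, since each message pays
    one startup cost.
    """
    total_volume = sum(sends.values())
    total_messages = len(sends)
    per_proc_volume = {}
    per_proc_msgs = {}
    for (src, _dst), words in sends.items():
        per_proc_volume[src] = per_proc_volume.get(src, 0) + words
        per_proc_msgs[src] = per_proc_msgs.get(src, 0) + 1
    return (total_volume, total_messages,
            max(per_proc_volume.values()), max(per_proc_msgs.values()))

# Hypothetical communication pattern among three processors.
sends = {(0, 1): 10, (0, 2): 5, (1, 2): 8, (2, 0): 3}
metrics = comm_metrics(sends)   # (26, 4, 15, 2)
```

A one-phase model that minimizes only the first metric (total volume, 26 here) can leave processor 0 sending 15 words in 2 messages; the paper's second phase targets those per-processor maxima and the message counts as well.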