Results 1 – 4 of 4
On shared-memory parallelization of a sparse matrix scaling algorithm
Abstract

Cited by 1 (1 self)
Abstract—We discuss efficient shared-memory parallelization of sparse matrix computations whose main traits resemble those of the sparse matrix-vector multiply operation. Such computations are difficult to parallelize because of their relatively small computational granularity, characterized by a small number of operations per data access. Our main application is a sparse matrix scaling algorithm which is even more memory bound than the sparse matrix-vector multiplication operation. We take the application and parallelize it using standard OpenMP programming principles. Apart from the common race-condition-avoiding constructs, we do not reorganize the algorithm. Rather, we identify the associated performance metrics and describe models to optimize them. Using these models, we implement parallel matrix scaling algorithms for two well-known sparse matrix storage formats. Experimental results show that simple parallelization attempts which leave data/work partitioning to the runtime scheduler can suffer from the overhead of avoiding race conditions, especially as the number of threads increases. The proposed algorithms perform better by optimizing the identified performance metrics and reducing this overhead. Keywords—Shared-memory parallelization, sparse matrices, hypergraphs, matrix scaling
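The abstract does not spell out the computation, but a minimal sketch (with illustrative function and variable names, not the paper's code) of one row/column-maximum pass over a CSR matrix shows the access pattern being parallelized: a single compare per nonzero (the low arithmetic intensity that makes the computation memory bound), and shared writes to the column array across rows, which is exactly the race condition an OpenMP parallelization of the outer loop must guard against.

```python
def scaling_pass(n, row_ptr, col_idx, val):
    """One pass computing the max-magnitude entry of every row and column
    of an n x n sparse matrix stored in CSR format (illustrative sketch)."""
    row_max = [0.0] * n
    col_max = [0.0] * n
    for i in range(n):  # in the paper's setting, this is the OpenMP-parallel loop
        for k in range(row_ptr[i], row_ptr[i + 1]):
            a = abs(val[k])
            if a > row_max[i]:
                row_max[i] = a          # private to row i: no conflict
            j = col_idx[k]
            if a > col_max[j]:
                col_max[j] = a          # shared across rows: a race when rows
                                        # run on different threads
    return row_max, col_max
```

For example, the matrix [[2, 0], [1, 4]] in CSR form is `row_ptr=[0, 1, 3]`, `col_idx=[0, 0, 1]`, `val=[2, 1, 4]`, giving row maxima `[2, 4]` and column maxima `[2, 4]`.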
GRENOBLE – RHÔNE-ALPES
Abstract
Abstract: We present an iterative algorithm which asymptotically scales the ∞-norm of each row and each column of a matrix to one. This scaling algorithm preserves the symmetry of the original matrix and shows fast linear convergence with an asymptotic rate of 1/2. We discuss extensions of the algorithm to the one-norm, and by inference to other norms. For the 1-norm case, we show again that convergence is linear, with the rate dependent on the spectrum of the scaled matrix. We demonstrate experimentally that the scaling algorithm improves the conditioning of the matrix and that it helps direct solvers by reducing the need for pivoting. In particular, for symmetric matrices the theoretical and experimental results highlight the potential of the proposed algorithm over existing alternatives. Keywords: Sparse matrices, matrix scaling, equilibration
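One possible sketch of such an iterative ∞-norm scaling scheme, assuming the common variant in which each row and column is repeatedly divided by the square root of its current max-magnitude entry (function and variable names are illustrative, not the paper's):

```python
import math

def inf_norm_scale(A, iters=20):
    """Iteratively scale a dense matrix A (list of lists) so every row and
    column ∞-norm tends to one. Returns the scaled matrix and the
    accumulated row/column scaling factors d1, d2, i.e. D1*A_orig*D2.
    If A is symmetric, d1 == d2 at every step, so symmetry is preserved."""
    n, m = len(A), len(A[0])
    d1, d2 = [1.0] * n, [1.0] * m
    for _ in range(iters):
        # sqrt of current row/column maxima (1.0 guards an all-zero row/col)
        r = [math.sqrt(max(abs(x) for x in A[i]) or 1.0) for i in range(n)]
        c = [math.sqrt(max(abs(A[i][j]) for i in range(n)) or 1.0)
             for j in range(m)]
        for i in range(n):
            for j in range(m):
                A[i][j] /= r[i] * c[j]
        for i in range(n):
            d1[i] /= r[i]
        for j in range(m):
            d2[j] /= c[j]
    return A, d1, d2
```

With the abstract's asymptotic rate of 1/2, a handful of iterations already brings every row and column ∞-norm very close to one; for a diagonal matrix a single iteration suffices.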