
DBCSR: A Library for Dense Matrix Multiplications on Distributed GPU-Accelerated Systems

Ilia Sivkov, Alfio Lazzaro, Jürg Hutter
Department of Chemistry, University of Zurich, Zurich, Switzerland
arXiv:1910.04796 [cs.DC] (10 Oct 2019)

@misc{sivkov2019dbcsr,
   title={DBCSR: A Library for Dense Matrix Multiplications on Distributed GPU-Accelerated Systems},
   author={Ilia Sivkov and Alfio Lazzaro and Juerg Hutter},
   year={2019},
   eprint={1910.04796},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

Most, if not all, modern scientific simulation packages rely on matrix algebra operations. Among linear algebra operations, one of the most important kernels is the multiplication of matrices, both dense and sparse. Such a kernel is used in electronic structure calculations, machine learning, data mining, graph processing, and digital signal processing. Several optimized libraries exist that achieve high performance on distributed systems; only a few of them target distributed GPU-accelerated systems, and in most cases these libraries are provided and optimized by system vendors for their specific computer systems. In this paper, we present the DBCSR library (Distributed Block Compressed Sparse Row) for distributed dense matrix-matrix multiplication. Although the library is specifically designed for block-sparse matrix-matrix multiplication, we optimized it for the dense case on GPU-accelerated systems. We show that DBCSR outperforms a vendor-optimized GPU version of the ScaLAPACK library by up to 2.5x (1.4x on average) for the multiplication of matrices of different sizes and shapes.
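
To illustrate the storage scheme the library's name refers to, the following is a minimal single-node Python sketch of block-compressed-sparse-row (BCSR) style storage and block matrix multiplication. The class and function names are hypothetical and this is not DBCSR's API; it ignores the MPI distribution, GPU batching, and communication-optimal algorithms that the paper describes.

# Minimal sketch of block-sparse storage and block matrix multiplication.
# Hypothetical illustration only; NOT the DBCSR API. Ignores MPI distribution
# and GPU acceleration described in the paper.
import numpy as np

class BCSRMatrix:
    """A matrix stored as dense blocks keyed by (block_row, block_col); only
    non-zero blocks are kept."""
    def __init__(self, n_block_rows, n_block_cols, block_size):
        self.n_block_rows = n_block_rows
        self.n_block_cols = n_block_cols
        self.block_size = block_size
        self.blocks = {}

    def set_block(self, brow, bcol, data):
        self.blocks[(brow, bcol)] = np.asarray(data, dtype=float)

def bcsr_multiply(a, b):
    """C = A * B for BCSRMatrix objects with matching block sizes."""
    c = BCSRMatrix(a.n_block_rows, b.n_block_cols, a.block_size)
    for (i, k), a_blk in a.blocks.items():
        for j in range(b.n_block_cols):
            b_blk = b.blocks.get((k, j))
            if b_blk is None:
                continue          # skip zero blocks: where sparsity pays off
            prod = a_blk @ b_blk  # small dense GEMM per block pair
            acc = c.blocks.get((i, j))
            c.blocks[(i, j)] = prod if acc is None else acc + prod
    return c

if __name__ == "__main__":
    bs = 2
    a = BCSRMatrix(2, 2, bs)
    b = BCSRMatrix(2, 2, bs)
    # Dense case: every block is present, as in the paper's benchmarks.
    for i in range(2):
        for j in range(2):
            a.set_block(i, j, np.random.rand(bs, bs))
            b.set_block(i, j, np.random.rand(bs, bs))
    c = bcsr_multiply(a, b)
    print(sorted(c.blocks.keys()))

In the dense setting benchmarked in the paper, every block is populated, so the gain comes from performing many small dense block multiplications that map well onto GPUs rather than from skipping zero blocks.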