
Automatically Tuning Sparse Matrix-Vector Multiplication for GPU Architectures

Alexander Monakov, Anton Lokhmotov, Arutyun Avetisyan
Institute for Systems Programming of RAS, 25 Solzhenitsyna street, Moscow, 109004, Russian Federation
In Proceedings of the 5th International Conference on High Performance Embedded Architectures and Compilers (HiPEAC 2010), Vol. 5952 (2010), pp. 111-125

@inproceedings{monakov2010automatically,
  title     = {Automatically tuning sparse matrix-vector multiplication for GPU architectures},
  author    = {Monakov, A. and Lokhmotov, A. and Avetisyan, A.},
  booktitle = {High Performance Embedded Architectures and Compilers (HiPEAC 2010)},
  volume    = {5952},
  pages     = {111--125},
  year      = {2010},
  publisher = {Springer}
}


Graphics processors are increasingly used in scientific applications due to their high computational power, which comes from hardware with multiple levels of parallelism and a deep memory hierarchy. Sparse matrix computations frequently arise in scientific applications, for example, when solving PDEs on unstructured grids. However, traditional sparse matrix algorithms are difficult to parallelize efficiently on GPUs because of their irregular memory access patterns. In this paper we present a new storage format for sparse matrices that better exploits locality, has a low memory footprint, and enables automatic specialization for various matrices and future devices via parameter tuning. Experimental evaluation demonstrates significant speedups compared to previously published results.
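
To illustrate the kind of GPU-friendly storage the abstract refers to, below is a minimal sketch of sparse matrix-vector multiplication over a sliced-ELLPACK-style layout: rows are grouped into slices of a tunable height, each slice is zero-padded to its own longest row, and entries are stored column-major within a slice so that adjacent threads make coalesced loads. This is an illustrative sketch only, not the authors' implementation; the kernel and array names (spmv_sliced_ell, slice_ptr, etc.) and the fixed slice height are assumptions.

// Sliced-ELLPACK-style SpMV sketch (illustrative, not the paper's code).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void spmv_sliced_ell(int n_rows, int slice_height,
                                const int *slice_ptr, const int *cols,
                                const float *vals, const float *x, float *y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n_rows) return;
    int slice = row / slice_height;
    int lane  = row % slice_height;
    float sum = 0.0f;
    // Entry j of row `lane` in this slice sits at slice_ptr[slice] + j*slice_height + lane,
    // so consecutive rows (threads) read consecutive addresses (coalesced).
    for (int i = slice_ptr[slice] + lane; i < slice_ptr[slice + 1]; i += slice_height)
        sum += vals[i] * x[cols[i]];   // padded entries use column 0 and value 0
    y[row] = sum;
}

int main()
{
    // 4x4 example matrix, slice_height = 2 (a tunable parameter in practice).
    const int n = 4, slice_height = 2;
    int   h_slice_ptr[] = {0, 4, 10};
    int   h_cols[] = {0, 1, 2, 0,  0, 2, 1, 0, 3, 0};
    float h_vals[] = {1, 3, 2, 0,  4, 7, 5, 0, 6, 0};
    float h_x[] = {1, 1, 1, 1}, h_y[4];

    int *d_sp, *d_cols; float *d_vals, *d_x, *d_y;
    cudaMalloc(&d_sp,   sizeof(h_slice_ptr)); cudaMalloc(&d_cols, sizeof(h_cols));
    cudaMalloc(&d_vals, sizeof(h_vals));      cudaMalloc(&d_x, sizeof(h_x));
    cudaMalloc(&d_y,    sizeof(h_y));
    cudaMemcpy(d_sp,   h_slice_ptr, sizeof(h_slice_ptr), cudaMemcpyHostToDevice);
    cudaMemcpy(d_cols, h_cols, sizeof(h_cols), cudaMemcpyHostToDevice);
    cudaMemcpy(d_vals, h_vals, sizeof(h_vals), cudaMemcpyHostToDevice);
    cudaMemcpy(d_x,    h_x,    sizeof(h_x),    cudaMemcpyHostToDevice);

    spmv_sliced_ell<<<(n + 127) / 128, 128>>>(n, slice_height, d_sp, d_cols, d_vals, d_x, d_y);
    cudaMemcpy(h_y, d_y, sizeof(h_y), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("y[%d] = %g\n", i, h_y[i]);  // expect 3 3 15 7
    return 0;
}

The slice height is one of the tunable parameters the paper's auto-tuning targets: small slices waste little padding on matrices with uneven row lengths, while larger slices can improve load balance and memory throughput on more regular matrices.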
