
Auto-tuning Dense Matrix Multiplication for GPGPU with Cache

Xiang Cui, Yifeng Chen, Changyou Zhang, Hong Mei
Key Laboratory of High Confidence Software Technologies, Peking University, Beijing, China
IEEE 16th International Conference on Parallel and Distributed Systems (ICPADS), 2010

@inproceedings{cui2010auto,
  title={Auto-tuning Dense Matrix Multiplication for GPGPU with Cache},
  author={Cui, X. and Chen, Y. and Zhang, C. and Mei, H.},
  booktitle={2010 IEEE 16th International Conference on Parallel and Distributed Systems},
  pages={237--242},
  year={2010},
  organization={IEEE}
}

In this paper we discuss our experience in improving the performance of GEMM (both single and double precision) on the Fermi architecture using CUDA, and how new Fermi features such as cache affect performance. We find that the addition of cache to the GPU on one hand helps the processors exploit data locality that arises at runtime, but on the other hand makes the dependence of performance on algorithmic parameters less predictable. Auto-tuning then becomes a useful technique to address this issue. Our auto-tuned SGEMM and DGEMM reach 563 GFlops and 253 GFlops respectively on a Tesla C2050. The design and implementation use only CUDA and C and do not benefit from tuning at the level of binary code.
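The abstract's central point, that cache makes performance depend on algorithmic parameters in hard-to-predict ways, is what motivates an empirical search over kernel variants. Below is a minimal sketch of such a search in CUDA/C: it times a simple shared-memory-tiled SGEMM kernel at a few candidate tile sizes and reports the fastest. The kernel, the one-dimensional parameter space (tile size only), and all names are illustrative assumptions; the paper's actual tuned kernels and search space are considerably more elaborate.

// Hypothetical auto-tuning harness (illustrative; not the paper's code).
// Assumes n is divisible by every candidate tile size; error checks omitted.
#include <cstdio>
#include <cuda_runtime.h>

template <int TILE>
__global__ void sgemm_tiled(int n, const float *A, const float *B, float *C) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];
    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;
    for (int t = 0; t < n; t += TILE) {
        // Stage one TILE x TILE block of A and B into shared memory.
        As[threadIdx.y][threadIdx.x] = A[row * n + t + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t + threadIdx.y) * n + col];
        __syncthreads();
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * n + col] = acc;
}

template <int TILE>
float time_variant(int n, const float *A, const float *B, float *C) {
    dim3 block(TILE, TILE), grid(n / TILE, n / TILE);
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    sgemm_tiled<TILE><<<grid, block>>>(n, A, B, C);   // warm-up launch
    cudaEventRecord(start);
    sgemm_tiled<TILE><<<grid, block>>>(n, A, B, C);   // timed launch
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main() {
    const int n = 1024;
    float *A, *B, *C;
    cudaMalloc(&A, n * n * sizeof(float));
    cudaMalloc(&B, n * n * sizeof(float));
    cudaMalloc(&C, n * n * sizeof(float));
    // Empirically evaluate each candidate and report; a real tuner would
    // also sweep register blocking, loop unrolling, and memory layout.
    printf("tile  8: %.3f ms\n", time_variant<8>(n, A, B, C));
    printf("tile 16: %.3f ms\n", time_variant<16>(n, A, B, C));
    printf("tile 32: %.3f ms\n", time_variant<32>(n, A, B, C));
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}

Compiled with nvcc, a harness of this shape picks whichever variant the hardware (cache behavior included) actually favors, rather than relying on an analytical performance model.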