Understanding the efficiency of GPU algorithms for matrix-matrix multiplication

K. Fatahalian, J. Sugerman, P. Hanrahan
Stanford University
HWWS ’04: Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware (2004), pp. 133-137

@conference{fatahalian2004understanding,
   title={Understanding the efficiency of GPU algorithms for matrix-matrix multiplication},
   author={Fatahalian, K. and Sugerman, J. and Hanrahan, P.},
   booktitle={Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware},
   pages={133--137},
   year={2004},
   organization={ACM}
}

Utilizing graphics hardware for general-purpose numerical computations has become a topic of considerable interest. The implementation of streaming algorithms, typified by highly parallel computations with little reuse of input data, has been widely explored on GPUs. We relax the streaming model's constraint on input reuse and perform an in-depth analysis of dense matrix-matrix multiplication, which reuses each element of input matrices O(n) times. Its regular data access pattern and highly parallel computational requirements suggest matrix-matrix multiplication as an obvious candidate for efficient evaluation on GPUs but, surprisingly, we find even near-optimal GPU implementations are pronouncedly less efficient than current cache-aware CPU approaches. We find the key cause of this inefficiency is that the GPU can fetch less data and yet execute more arithmetic operations per clock than the CPU when both are operating out of their closest caches. The lack of high-bandwidth access to cached data will impair the performance of GPU implementations of any computation featuring significant input reuse.
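
For context, the O(n) input reuse the abstract refers to is visible directly in the naive triple-loop formulation of C = A * B: each element of A and B is read n times over the course of the computation. The C sketch below is illustrative only; it is not the shader-based GPU implementation studied in the paper, and the row-major layout and function name are assumptions for demonstration.

#include <stddef.h>

/* Naive dense matrix-matrix multiply, C = A * B, all n x n, row-major.
 * Each element A[i*n+k] is read once per value of j, and each element
 * B[k*n+j] once per value of i, so every input element is reused n times.
 * Cache-aware CPU implementations exploit this reuse by blocking the loops
 * so that sub-matrices stay resident in cache; the paper's finding is that
 * 2004-era GPUs lacked comparably high-bandwidth access to cached data. */
void matmul_naive(size_t n, const float *A, const float *B, float *C)
{
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < n; ++j) {
            float acc = 0.0f;
            for (size_t k = 0; k < n; ++k)
                acc += A[i * n + k] * B[k * n + j];  /* n-fold reuse of inputs */
            C[i * n + j] = acc;
        }
}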