
Optimizing Strassen matrix multiply on GPUs

Ayaz ul Hasan Khan, Mayez Al-Mouhamed, Allam Fatayer
Department of Computer Engineering, College of Computer Science & Engineering, KFUPM, Dhahran, 31261, Saudi Arabia
16th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2015

@inproceedings{al2015optimizing,
   title={Optimizing {Strassen} matrix multiply on {GPUs}},
   author={Khan, Ayaz ul Hasan and Al-Mouhamed, Mayez and Fatayer, Allam},
   booktitle={2015 IEEE/ACIS 16th International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)},
   pages={1--6},
   year={2015},
   organization={IEEE}
}


Many-core systems are primarily designed for applications with large data parallelism. Strassen Matrix Multiply (MM) can be formulated as a depth-first (DFS) traversal of a recursion tree in which all cores work in parallel on each of the NxN sub-matrices; this reduces storage at the expense of heavy data motion to gather and aggregate the results. We propose implementations of the Strassen and Winograd algorithms (S-MM and W-MM) based on three optimizations: a set of basic linear algebra functions to reduce overhead, calls into an efficient library (CUBLAS 5.5), and parameter tuning of a parametric kernel to improve resource occupancy. On GPUs, W-MM and S-MM with one recursion level outperform the CUBLAS 5.5 library by up to a factor of two for large matrices with N>=2048 and N>=3072, respectively. Compared to the NVIDIA SDK library, S-MM and W-MM achieve speedups of 20x to 80x for the same matrices. The proposed approach can be used to enhance the performance of the CUBLAS and MKL libraries.
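
The following is a minimal sketch (not the authors' code) of how one recursion level of Strassen's algorithm can be expressed on top of cuBLAS, assuming square N x N single-precision matrices stored in column-major order with even N. It shows only the first of the seven Strassen products, M1 = (A11 + A22)(B11 + B22), formed with cublasSgeam for the block additions and cublasSgemm for the half-size product; the remaining six products and the quadrant combinations follow the same pattern. The function and variable names are illustrative, not taken from the paper.

// Sketch: one Strassen product (M1) at recursion depth one, built on cuBLAS.
// Assumes column-major N x N matrices with even N; dA, dB, dC live on the device.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>

static void strassen_m1(cublasHandle_t h, int N,
                        const float *dA, const float *dB, float *dC)
{
    const int n2 = N / 2;                 // half dimension
    const float one = 1.0f, zero = 0.0f;

    // Quadrant pointers into the full matrices (leading dimension stays N).
    const float *A11 = dA, *A22 = dA + (size_t)n2 * N + n2;
    const float *B11 = dB, *B22 = dB + (size_t)n2 * N + n2;
    float       *C11 = dC, *C22 = dC + (size_t)n2 * N + n2;

    // Scratch buffers for the half-size operands and the product M1.
    float *T1, *T2, *M1;
    cudaMalloc(&T1, sizeof(float) * n2 * n2);
    cudaMalloc(&T2, sizeof(float) * n2 * n2);
    cudaMalloc(&M1, sizeof(float) * n2 * n2);

    // T1 = A11 + A22, T2 = B11 + B22 (block additions via cublasSgeam).
    cublasSgeam(h, CUBLAS_OP_N, CUBLAS_OP_N, n2, n2,
                &one, A11, N, &one, A22, N, T1, n2);
    cublasSgeam(h, CUBLAS_OP_N, CUBLAS_OP_N, n2, n2,
                &one, B11, N, &one, B22, N, T2, n2);

    // M1 = T1 * T2 (one half-size GEMM instead of a full-size one).
    cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n2, n2, n2,
                &one, T1, n2, T2, n2, &zero, M1, n2);

    // M1 contributes to C11 (= M1 + M4 - M5 + M7) and C22 (= M1 - M2 + M3 + M6);
    // only the M1 term is accumulated here, in place.
    cublasSgeam(h, CUBLAS_OP_N, CUBLAS_OP_N, n2, n2,
                &one, M1, n2, &one, C11, N, C11, N);
    cublasSgeam(h, CUBLAS_OP_N, CUBLAS_OP_N, n2, n2,
                &one, M1, n2, &one, C22, N, C22, N);

    cudaFree(T1); cudaFree(T2); cudaFree(M1);
}

int main()
{
    const int N = 2048;                  // size regime where the paper reports gains
    float *dA, *dB, *dC;
    cudaMalloc(&dA, sizeof(float) * N * N);
    cudaMalloc(&dB, sizeof(float) * N * N);
    cudaMalloc(&dC, sizeof(float) * N * N);
    cudaMemset(dA, 0, sizeof(float) * N * N);
    cudaMemset(dB, 0, sizeof(float) * N * N);
    cudaMemset(dC, 0, sizeof(float) * N * N);

    cublasHandle_t handle;
    cublasCreate(&handle);
    strassen_m1(handle, N, dA, dB, dC);
    cudaDeviceSynchronize();
    printf("computed M1 contribution for N = %d\n", N);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}

Because the sub-matrices are addressed in place through pointer offsets and the original leading dimension, only the three half-size scratch buffers per product are allocated, which matches the storage-versus-data-motion trade-off discussed in the abstract.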
