
Fast convolution kernels on Pascal GPU with high memory efficiency

Qiong Chang, Masaki Onishi, Tsutomu Maruyama
University of Tsukuba, Tsukuba, Japan, 305-8577
arXiv:2212.00404 [cs.DC], (1 Dec 2022)

@misc{https://doi.org/10.48550/arxiv.2212.00404,
   doi={10.48550/ARXIV.2212.00404},
   url={https://arxiv.org/abs/2212.00404},
   author={Chang, Qiong and Onishi, Masaki and Maruyama, Tsutomu},
   keywords={Distributed, Parallel, and Cluster Computing (cs.DC), FOS: Computer and information sciences},
   title={Fast convolution kernels on Pascal GPU with high memory efficiency},
   publisher={arXiv},
   year={2022},
   copyright={Creative Commons Attribution 4.0 International}
}


Convolution is widely used in many fields, especially in convolutional neural networks (CNNs). Because of the rapid growth of training data in CNNs, GPUs have been used for acceleration, and memory-efficient algorithms have attracted attention because of their high performance. In this paper, we propose two convolution kernels, one for single-channel convolution and one for multi-channel convolution. Both methods achieve high performance by efficiently hiding the access latency of the global memory and by achieving a high ratio of floating-point fused multiply-add (FMA) operations per datum fetched from the global memory. Compared with the latest cuDNN library, developed by NVIDIA to accelerate deep-learning computation, our kernels deliver an average performance improvement of 2.6x for single-channel convolution and 1.4x for multi-channel convolution.
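The abstract's key metric, FMA operations per datum fetched from global memory, can be illustrated with a small arithmetic-intensity calculation. The sketch below assumes a generic tiled kernel in which a thread block loads an input tile plus its halo once and computes a full output tile from it; the tile and filter sizes are illustrative assumptions, not parameters taken from the paper:

```python
# Arithmetic-intensity sketch for a tiled single-channel 2D convolution.
# All sizes are illustrative assumptions, not the paper's actual parameters.

def fma_per_fetch(tile_h, tile_w, k):
    """FMAs per element fetched from global memory when a (tile_h x tile_w)
    output tile is computed from one cooperative load of its
    (tile_h + k - 1) x (tile_w + k - 1) input region (tile plus halo)."""
    fmas = tile_h * tile_w * k * k                  # one FMA per output per filter tap
    fetched = (tile_h + k - 1) * (tile_w + k - 1)   # input tile including halo
    return fmas / fetched

def fma_per_fetch_naive(k):
    """A naive kernel refetches the full k x k neighborhood for every output,
    so it performs exactly one FMA per fetched element."""
    return (k * k) / (k * k)

if __name__ == "__main__":
    # Larger tiles amortize the halo, raising FMAs per fetched datum.
    print(fma_per_fetch(32, 32, 3))
    print(fma_per_fetch_naive(3))
```

This is why memory-efficient kernels favor large output tiles held in shared memory or registers: the halo overhead shrinks relative to the tile, so each global-memory fetch feeds many more FMA operations.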

* * *


HGPU group © 2010-2023 hgpu.org

All rights belong to the respective authors
