Improving the Performance of Fully Connected Neural Networks by Out-of-Place Matrix Transpose

Shaohuai Shi, Pengfei Xu, Xiaowen Chu
Department of Computer Science, Hong Kong Baptist University
arXiv:1702.03192 [cs.DC] (10 Feb 2017)

@article{shi2017improving,
   title={Improving the Performance of Fully Connected Neural Networks by Out-of-Place Matrix Transpose},
   author={Shi, Shaohuai and Xu, Pengfei and Chu, Xiaowen},
   year={2017},
   month={feb},
   eprint={1702.03192},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}


Fully connected networks are widely used in deep learning, and their computational efficiency benefits greatly from the matrix multiplication routines of cuBLAS on GPUs. However, we found that cuBLAS has some drawbacks when computing the product of matrix $\textbf{A}$ and the transpose of matrix $\textbf{B}$ (i.e., the NT operation). To reduce the impact of the NT operation in cuBLAS, we exploit an out-of-place transpose of matrix $\textbf{B}$ to avoid the NT operation entirely, and we apply our method to Caffe, a popular deep learning framework. Our contribution is two-fold. First, we propose a naive method (TNN) and a model-based method (MTNN) to increase the performance of computing $\textbf{A}\times \textbf{B}^T$, achieving about 4.7 times speedup in our tested cases on a GTX 1080 card. Second, we integrate the MTNN method into Caffe to improve the efficiency of training fully connected networks, achieving about 70% speedup over the original Caffe on our configured fully connected networks on a GTX 1080 card.
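
The trick described in the abstract is an implementation one: rather than asking cuBLAS for the NT GEMM $\textbf{C} = \textbf{A}\times \textbf{B}^T$ directly, one first materializes $\textbf{B}^T$ in a scratch buffer and then issues a plain NN GEMM. The sketch below illustrates the idea in CUDA C with column-major storage; it is not the authors' TNN/MTNN code, and the function name and the caller-provided workspace are our own assumptions. The out-of-place transpose itself can be done with the standard cuBLAS routine cublasSgeam.

    #include <cublas_v2.h>

    // Illustrative sketch only (not the paper's implementation).
    // Computes C = A * B^T in column-major storage, with
    // A: M x K, B: N x K, C: M x N. Instead of the NT GEMM
    //   cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_T, ...),
    // we first materialize Bt = B^T in a caller-provided K x N
    // workspace, then run a plain NN GEMM on the copy.
    void gemm_nt_via_transpose(cublasHandle_t handle, int M, int N, int K,
                               const float* A, const float* B,
                               float* Bt,   // workspace, K x N
                               float* C)
    {
        const float one = 1.0f, zero = 0.0f;

        // Out-of-place transpose via cublasSgeam: Bt = 1 * B^T + 0 * Bt.
        // With beta == 0 the second input matrix is not read.
        cublasSgeam(handle, CUBLAS_OP_T, CUBLAS_OP_N, K, N,
                    &one, B, N, &zero, Bt, K, Bt, K);

        // Plain NN GEMM on the transposed copy: C = A * Bt.
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, M, N, K,
                    &one, A, M, Bt, K, &zero, C, M);
    }

Note that this costs an extra K x N buffer and one extra kernel launch, so it only pays off when the NT GEMM is sufficiently slower than the NN GEMM plus the transpose.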
