
μ-cuDNN: Accelerating Deep Learning Frameworks with Micro-Batching

Yosuke Oyama, Tal Ben-Nun, Torsten Hoefler, Satoshi Matsuoka
Tokyo Institute of Technology, Tokyo, Japan; ETH Zurich, Zurich, Switzerland
arXiv:1804.04806 [cs.LG], 13 Apr 2018

@article{oyama2018cudnn,
   title={{$\mu$}-cuDNN: Accelerating Deep Learning Frameworks with Micro-Batching},
   author={Oyama, Yosuke and Ben-Nun, Tal and Hoefler, Torsten and Matsuoka, Satoshi},
   year={2018},
   month={apr},
   eprint={1804.04806},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}



NVIDIA cuDNN is a low-level library that provides GPU kernels frequently used in deep learning. Specifically, cuDNN implements several equivalent convolution algorithms, whose performance and memory footprint may vary considerably depending on the layer dimensions. When an algorithm is selected automatically by cuDNN, the decision is made on a per-layer basis, so it often falls back to slower algorithms that fit the workspace size constraints. We present μ-cuDNN, a transparent wrapper library for cuDNN, which divides a layer's mini-batch computation into several micro-batches. Based on Dynamic Programming and Integer Linear Programming, μ-cuDNN enables faster algorithms by decreasing the workspace requirements. At the same time, μ-cuDNN keeps the computational semantics unchanged, so that it safely decouples statistical efficiency from hardware efficiency. We demonstrate the effectiveness of μ-cuDNN over two frameworks, Caffe and TensorFlow, achieving speedups of 1.63x for AlexNet and 1.21x for ResNet-18 on the P100-SXM2 GPU. These results indicate that using micro-batches can seamlessly increase the performance of deep learning, while maintaining the same memory footprint.
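To make the micro-batching idea concrete, the sketch below illustrates the dynamic-programming part described in the abstract for a single convolutional layer: split a mini-batch of size B into micro-batches so that each micro-batch can use a faster convolution algorithm within a fixed workspace budget. This is not μ-cuDNN's actual implementation; the `benchmark` callback and the toy cost model are hypothetical stand-ins for timing cuDNN's convolution algorithms at each candidate micro-batch size, and the paper's Integer Linear Programming formulation is not shown.

```python
# A minimal sketch (assumed, not mu-cuDNN's code) of the DP over micro-batch sizes:
#   T[n] = min over b in 1..n of ( T[n - b] + fastest_time_fitting_workspace(b) )
from functools import lru_cache

def fastest_fitting_time(micro_batch, workspace_limit, benchmark):
    """Best runtime among algorithms whose workspace fits the limit.

    `benchmark(micro_batch)` is assumed to return (runtime_seconds, workspace_bytes)
    pairs, one per convolution algorithm, e.g. obtained by profiling cuDNN.
    """
    candidates = [t for t, ws in benchmark(micro_batch) if ws <= workspace_limit]
    return min(candidates) if candidates else float("inf")

def best_partition(batch_size, workspace_limit, benchmark):
    """Return (total_time, micro_batch_sizes) minimizing the summed runtime."""

    @lru_cache(maxsize=None)
    def solve(n):
        if n == 0:
            return 0.0, ()
        best = (float("inf"), ())
        for b in range(1, n + 1):
            t = fastest_fitting_time(b, workspace_limit, benchmark)
            rest_t, rest_sizes = solve(n - b)
            if t + rest_t < best[0]:
                best = (t + rest_t, (b,) + rest_sizes)
        return best

    return solve(batch_size)

# Toy cost model (purely illustrative): a "fast" algorithm needs workspace
# proportional to the micro-batch size, a "slow" one needs none.
def toy_benchmark(b):
    return [(0.5 * b, 0), (0.2 * b + 0.3, 64 * b)]  # (runtime, workspace)

if __name__ == "__main__":
    total_time, sizes = best_partition(256, workspace_limit=8192, benchmark=toy_benchmark)
    print(f"micro-batch sizes: {sizes}, estimated time: {total_time:.1f}")
```

With the toy cost model, the DP splits the mini-batch of 256 into two micro-batches of 128, the largest size whose fast algorithm still fits the 8192-byte workspace budget, which is cheaper than running the whole batch with the slower, workspace-free algorithm.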
