
ChainerMN: Scalable Distributed Deep Learning Framework

Takuya Akiba, Keisuke Fukuda, Shuji Suzuki
Preferred Networks, Inc.
arXiv:1710.11351 [cs.DC], (31 Oct 2017)

@article{akiba2017chainermn,
   title={ChainerMN: Scalable Distributed Deep Learning Framework},
   author={Akiba, Takuya and Fukuda, Keisuke and Suzuki, Shuji},
   year={2017},
   month={oct},
   eprint={1710.11351},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

One of the keys to deep learning's breakthroughs in various fields has been the use of large amounts of computing power, centered on GPUs. Harnessing even greater computing power through distributed processing is essential not only to make deep learning models larger and training faster, but also to tackle unsolved challenges. We present the design, implementation, and evaluation of ChainerMN, the distributed deep learning framework we have developed. We demonstrate that ChainerMN can scale training of the ResNet-50 model on the ImageNet dataset up to 128 GPUs with a parallel efficiency of 90%.
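The 90% figure above refers to parallel efficiency: the fraction of ideal linear speedup actually achieved when spreading work across devices. A minimal sketch of the arithmetic, using hypothetical timings chosen only to be consistent with the reported numbers (they are not from the paper):

```python
def parallel_efficiency(t_single, t_parallel, n_devices):
    """Fraction of ideal linear speedup achieved: (t_single / t_parallel) / n_devices."""
    speedup = t_single / t_parallel
    return speedup / n_devices

# Hypothetical illustration: if one GPU took 115.2 hours and 128 GPUs
# took 1.0 hour, the speedup is 115.2x out of an ideal 128x.
eff = parallel_efficiency(t_single=115.2, t_parallel=1.0, n_devices=128)
print(round(eff, 2))  # 0.9
```

In other words, 90% efficiency on 128 GPUs corresponds to an effective speedup of roughly 115x over a single GPU.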


HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
