
Performance and Power Evaluation of AI Accelerators for Training Deep Learning Models

Yuxin Wang, Qiang Wang, Shaohuai Shi, Xin He, Zhenheng Tang, Kaiyong Zhao, Xiaowen Chu
Department of Computer Science, Hong Kong Baptist University
arXiv:1909.06842 [cs.DC], 15 Sep 2019

@misc{wang2019performance,
   title={Performance and Power Evaluation of AI Accelerators for Training Deep Learning Models},
   author={Yuxin Wang and Qiang Wang and Shaohuai Shi and Xin He and Zhenheng Tang and Kaiyong Zhao and Xiaowen Chu},
   year={2019},
   eprint={1909.06842},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}


Deep neural networks (DNNs) have become widely used in many AI applications. Yet, training a DNN requires a huge amount of computation, and it takes a long time and considerable energy to obtain a satisfactory model. Nowadays, many-core AI accelerators (e.g., GPUs and TPUs) play a key role in training DNNs. However, many-core processors from different vendors differ substantially in both performance and power consumption. To investigate the differences among several popular off-the-shelf processors (i.e., Intel CPUs, Nvidia GPUs, AMD GPUs, and Google TPUs) in training DNNs, we carry out a detailed performance and power evaluation on these processors by training multiple types of benchmark DNNs, including convolutional neural networks (CNNs), recurrent neural networks (LSTMs), Deep Speech, and transformers. Our evaluation results offer value in two directions. For end-users, they provide a guide for selecting a proper accelerator for training DNN models. For vendors, the advantages and disadvantages revealed by our evaluation could be useful for future architecture design and software library optimization.
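As an illustration of the kind of measurement the paper performs, the sketch below times a fixed number of training iterations of a small CNN while sampling GPU board power in a background thread. This is not the authors' benchmark code; it assumes PyTorch, torchvision, an Nvidia GPU, and the pynvml bindings, and reports throughput (samples/s) and average power (W). Measuring AMD GPUs or TPUs would require different tooling (e.g., rocm-smi or Cloud TPU profiling).

# Minimal sketch (assumed setup, not the paper's actual benchmark suite):
# measure training throughput and average GPU power for one model.
import time
import threading

import torch
import torch.nn as nn
import pynvml


def sample_power(handle, readings, stop_event, interval=0.1):
    # Poll instantaneous board power (milliwatts) until asked to stop.
    while not stop_event.is_set():
        readings.append(pynvml.nvmlDeviceGetPowerUsage(handle))
        time.sleep(interval)


def benchmark(model, batch_size=64, iters=100):
    device = torch.device("cuda")
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    # Synthetic ImageNet-shaped batch so the example is self-contained.
    x = torch.randn(batch_size, 3, 224, 224, device=device)
    y = torch.randint(0, 1000, (batch_size,), device=device)

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    readings, stop = [], threading.Event()
    sampler = threading.Thread(target=sample_power, args=(handle, readings, stop))

    # Warm-up iterations exclude one-time costs (cuDNN autotuning, allocation).
    for _ in range(10):
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    torch.cuda.synchronize()

    sampler.start()
    start = time.time()
    for _ in range(iters):
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    torch.cuda.synchronize()
    elapsed = time.time() - start
    stop.set()
    sampler.join()
    pynvml.nvmlShutdown()

    throughput = iters * batch_size / elapsed
    avg_power_w = sum(readings) / len(readings) / 1000.0
    print(f"throughput: {throughput:.1f} samples/s, avg power: {avg_power_w:.1f} W")


if __name__ == "__main__":
    import torchvision
    benchmark(torchvision.models.resnet50(num_classes=1000))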

