Benchmarking TPU, GPU, and CPU Platforms for Deep Learning

Yu (Emma) Wang, Gu-Yeon Wei, David Brooks
John A. Paulson School of Engineering and Applied Sciences, Harvard University
arXiv:1907.10701 [cs.LG] (24 Jul 2019)

@misc{wang2019benchmarking,
   title={Benchmarking TPU, GPU, and CPU Platforms for Deep Learning},
   author={Yu Emma Wang and Gu-Yeon Wei and David Brooks},
   year={2019},
   eprint={1907.10701},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}


Training deep learning models is compute-intensive and there is an industry-wide trend towards hardware specialization to improve performance. To systematically benchmark deep learning platforms, we introduce ParaDnn, a parameterized benchmark suite for deep learning that generates end-to-end models for fully connected (FC), convolutional (CNN), and recurrent (RNN) neural networks. Along with six real-world models, we benchmark Google’s Cloud TPU v2/v3, NVIDIA’s V100 GPU, and an Intel Skylake CPU platform. We take a deep dive into TPU architecture, reveal its bottlenecks, and highlight valuable lessons learned for future specialized system design. We also provide a thorough comparison of the platforms and find that each has unique strengths for some types of models. Finally, we quantify the rapid performance improvements that specialized software stacks provide for the TPU and GPU platforms.
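The core idea of ParaDnn is to generate end-to-end models by sweeping a parameter space rather than fixing a handful of workloads. As a rough illustration only (the function name, parameter names, and ranges below are hypothetical, not the suite's actual interface), a fully connected sweep might look like:

```python
from itertools import product

def fc_model_space(layer_counts, nodes_per_layer, batch_sizes):
    """Hypothetical sketch of a ParaDnn-style parameterized sweep:
    yield one fully connected (FC) model configuration per point
    in the grid of (layer count, layer width, batch size)."""
    for n_layers, n_nodes, batch in product(layer_counts,
                                            nodes_per_layer,
                                            batch_sizes):
        yield {
            "type": "FC",
            "hidden_layers": [n_nodes] * n_layers,  # uniform-width stack
            "batch_size": batch,
        }

# Illustrative ranges; the paper sweeps far larger spaces.
specs = list(fc_model_space(layer_counts=[4, 8],
                            nodes_per_layer=[256, 1024],
                            batch_sizes=[64, 512]))
```

Each generated configuration would then be built and trained end-to-end on each platform, so performance can be reported as a function of model parameters rather than for isolated benchmarks.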

* * *

HGPU group © 2010-2019 hgpu.org

All rights belong to the respective authors