Benchmarking TPU, GPU, and CPU Platforms for Deep Learning
John A. Paulson School of Engineering and Applied Sciences, Harvard University
arXiv:1907.10701 [cs.LG], 24 Jul 2019
@misc{yu2019benchmarking,
title={Benchmarking TPU, GPU, and CPU Platforms for Deep Learning},
author={Yu Wang and Gu-Yeon Wei and David Brooks},
year={2019},
eprint={1907.10701},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
Training deep learning models is compute-intensive and there is an industry-wide trend towards hardware specialization to improve performance. To systematically benchmark deep learning platforms, we introduce ParaDnn, a parameterized benchmark suite for deep learning that generates end-to-end models for fully connected (FC), convolutional (CNN), and recurrent (RNN) neural networks. Along with six real-world models, we benchmark Google’s Cloud TPU v2/v3, NVIDIA’s V100 GPU, and an Intel Skylake CPU platform. We take a deep dive into TPU architecture, reveal its bottlenecks, and highlight valuable lessons learned for future specialized system design. We also provide a thorough comparison of the platforms and find that each has unique strengths for some types of models. Finally, we quantify the rapid performance improvements that specialized software stacks provide for the TPU and GPU platforms.
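To make the idea of a parameterized benchmark suite concrete, here is a minimal sketch of how ParaDnn-style model generation might look: sweeping a few hyperparameters to produce one fully connected model specification per combination. The function name, parameter names, and ranges are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of parameterized FC model generation in the
# spirit of ParaDnn; names and ranges are illustrative, not the
# paper's actual interface.
from itertools import product

def fc_configs(layers=(4, 8), nodes=(256, 1024), batch=(128, 512)):
    """Yield one fully connected model spec per hyperparameter combination."""
    for n_layers, n_nodes, n_batch in product(layers, nodes, batch):
        yield {
            "layers": n_layers,
            "nodes_per_layer": n_nodes,
            "batch_size": n_batch,
            # rough parameter count for an FC net with uniform hidden layers
            "approx_params": n_layers * n_nodes * n_nodes,
        }

configs = list(fc_configs())
print(len(configs))  # 2 * 2 * 2 = 8 model variants
```

Sweeping each axis independently like this is what lets a parameterized suite expose performance trends (e.g. how throughput scales with model size) that a fixed set of real-world models cannot.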
July 28, 2019 by hgpu