
Deep Learning Models on CPUs: A Methodology for Efficient Training

Quchen Fu, Ramesh Chukka, Keith Achorn, Thomas Atta-fosu, Deepak R. Canchi, Zhongwei Teng, Jules White, Douglas C. Schmidt
Dept. of Computer Science, Vanderbilt University, Nashville, TN, USA
arXiv:2206.10034 [cs.LG], 20 Jun 2022

@misc{fu2022deeplearning,
   doi = {10.48550/ARXIV.2206.10034},
   url = {https://arxiv.org/abs/2206.10034},
   author = {Fu, Quchen and Chukka, Ramesh and Achorn, Keith and Atta-fosu, Thomas and Canchi, Deepak R. and Teng, Zhongwei and White, Jules and Schmidt, Douglas C.},
   keywords = {Machine Learning (cs.LG), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
   title = {Deep Learning Models on CPUs: A Methodology for Efficient Training},
   publisher = {arXiv},
   year = {2022},
   copyright = {arXiv.org perpetual, non-exclusive license}
}


GPUs have been favored for training deep learning models due to their highly parallelized architectures, so most studies of training optimization focus on GPUs. There is often a trade-off, however, between cost and efficiency when choosing hardware for training. In particular, CPU servers can be beneficial if training on CPUs were more efficient, since they incur fewer hardware upgrade costs and make better use of existing infrastructure. This paper makes several contributions to research on training deep learning models using CPUs. First, it presents a method for optimizing the training of deep learning models on Intel CPUs and a toolkit called ProfileDNN, which we developed to improve performance profiling. Second, it describes a generic training optimization method that guides our workflow and explores several case studies in which we identified performance issues and then optimized the Intel Extension for PyTorch, yielding an overall 2x training performance increase for the RetinaNet-ResNext50 model. Third, it shows how we leveraged the visualization capabilities of ProfileDNN to pinpoint bottlenecks and create a custom focal loss kernel that is two times faster than the official reference PyTorch implementation.
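For reference, the focal loss targeted by the custom kernel is the standard RetinaNet objective, FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t). Below is a minimal PyTorch sketch of the sigmoid focal loss in its reference (unfused) form; the function name and default hyperparameters are illustrative and not taken from the paper:

    import torch
    import torch.nn.functional as F

    def sigmoid_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        # Reference (unfused) sigmoid focal loss; targets are float 0/1 labels.
        # Numerically stable per-element binary cross-entropy on raw logits.
        ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p = torch.sigmoid(logits)
        # p_t: the probability the model assigns to the true class of each element.
        p_t = p * targets + (1 - p) * (1 - targets)
        # (1 - p_t)^gamma down-weights easy examples; alpha_t balances classes.
        alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
        return (alpha_t * (1.0 - p_t) ** gamma * ce).sum()

Each line above materializes an intermediate tensor, so redundant memory traffic is the kind of overhead a single fused kernel can eliminate on CPUs. The Intel Extension for PyTorch mentioned in the abstract is typically enabled in a training script with the standard call shown below (generic IPEX usage, not the paper's specific configuration):

    import torch
    import intel_extension_for_pytorch as ipex

    model = torch.nn.Linear(64, 8).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    # ipex.optimize applies CPU-side optimizations (e.g., optimized weight layouts)
    # and returns the model/optimizer pair to use for training.
    model, optimizer = ipex.optimize(model, optimizer=optimizer)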
