Performance Evaluation of Deep Learning Tools in Docker Containers
Department of Computer Science, Hong Kong Baptist University
arXiv:1711.03386 [cs.DC], 9 Nov 2017
@article{xu2017performance,
  title={Performance Evaluation of Deep Learning Tools in Docker Containers},
  author={Xu, Pengfei and Shi, Shaohuai and Chu, Xiaowen},
  journal={arXiv preprint arXiv:1711.03386},
  year={2017},
  month={nov},
  archivePrefix={arXiv},
  eprint={1711.03386},
  primaryClass={cs.DC}
}
With the success of deep learning techniques in a broad range of application domains, many deep learning software frameworks have been developed and are updated frequently to adapt to new hardware features and software libraries, which poses a significant challenge for end users and system administrators. To address this problem, container techniques are widely used to simplify the deployment and management of deep learning software. However, it remains unknown whether container techniques impose any performance penalty on deep learning applications. The purpose of this work is to systematically evaluate the impact of Docker containers on the performance of deep learning applications. We first benchmark the performance of system components (I/O, CPU, and GPU) in a Docker container and on the host system, and compare the results to identify any differences. According to our results, computationally intensive jobs, whether running on CPU or GPU, incur only small overhead, indicating that Docker containers can be applied to deep learning programs. We then evaluate the performance of several popular deep learning tools deployed in a Docker container and on the host system. It turns out that the Docker container does not cause noticeable performance degradation when running these deep learning tools. Encapsulating a deep learning tool in a container is therefore a feasible solution.
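The comparison methodology described above can be sketched as a pair of timed runs, one on the host and one inside a GPU-enabled container. The commands below are an illustrative sketch, not the paper's actual scripts: the benchmark script name and CUDA image tag are assumptions, and the container run uses nvidia-docker, the GPU passthrough tool current in 2017.

```shell
# Hypothetical sketch of a host-vs-container comparison.
# benchmark_train.py and the model/batch settings are illustrative,
# not taken from the paper.

# On the host: time one training run directly.
time python benchmark_train.py --model resnet50 --batch-size 64

# In a container: nvidia-docker exposes the host GPUs to the container;
# the bind mount shares the same benchmark code with the container so
# both runs execute identical work.
time nvidia-docker run --rm -v "$PWD":/workspace -w /workspace \
    nvidia/cuda:8.0-cudnn5-devel \
    python benchmark_train.py --model resnet50 --batch-size 64
```

Comparing the two wall-clock times gives a rough estimate of the container overhead for a compute-bound workload; the same pattern applies to I/O and CPU micro-benchmarks by swapping in the corresponding benchmark command.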
November 12, 2017 by hgpu