TensorFlow Doing HPC
KTH Royal Institute of Technology, Stockholm, Sweden
arXiv:1903.04364 [cs.DC] (11 Mar 2019)
@misc{chien2019tensorflow,
  title={TensorFlow Doing HPC},
  author={Steven W. D. Chien and Stefano Markidis and Vyacheslav Olshevsky and Yaroslav Bulatov and Erwin Laure and Jeffrey S. Vetter},
  year={2019},
  eprint={1903.04364},
  archivePrefix={arXiv},
  primaryClass={cs.DC}
}
TensorFlow is a popular open-source programming framework that supports the execution of distributed applications on heterogeneous hardware. Although TensorFlow was initially designed for developing Machine Learning (ML) applications, it aims to support a much broader range of applications outside the ML domain, possibly including HPC applications. However, very few experiments have evaluated TensorFlow's performance when running HPC workloads on supercomputers. This work addresses that gap by implementing four traditional HPC benchmark applications in TensorFlow: STREAM, matrix-matrix multiply, a Conjugate Gradient (CG) solver, and a Fast Fourier Transform (FFT). We analyze their performance on two supercomputers with accelerators and evaluate TensorFlow's potential for developing HPC applications. Our tests show that TensorFlow can take full advantage of the high-performance networks and accelerators of supercomputers. Running our TensorFlow STREAM benchmark, we obtain over 50% of the theoretical communication bandwidth on our testing platform. We observe performance improvements of approximately 2x, 1.7x, and 1.8x when increasing the number of GPUs from two to four in the matrix-matrix multiply, CG, and FFT applications, respectively. These results demonstrate that TensorFlow has high potential to emerge also as an HPC programming framework for heterogeneous supercomputers.
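To give a flavor of how a traditional HPC kernel maps onto TensorFlow operations, below is a minimal sketch of a Conjugate Gradient solver written in present-day TensorFlow 2 idiom. This is an illustration only, not the paper's benchmark code (the paper predates TensorFlow 2, and its implementations may differ); the matrix size, tolerance, and test system here are illustrative choices.

import tensorflow as tf

# Sketch of a Conjugate Gradient solver expressed in TensorFlow ops.
# TensorFlow places these ops on a GPU when one is available and
# falls back to CPU otherwise.
def conjugate_gradient(A, b, max_iters=1000, tol=1e-6):
    x = tf.zeros_like(b)
    r = b - tf.linalg.matvec(A, x)   # initial residual
    p = r
    rs_old = tf.reduce_sum(r * r)
    for _ in range(max_iters):
        Ap = tf.linalg.matvec(A, p)
        alpha = rs_old / tf.reduce_sum(p * Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = tf.reduce_sum(r * r)
        if tf.sqrt(rs_new) < tol:    # converged
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Illustrative test: build a symmetric positive-definite system and solve it.
n = 1024
m = tf.random.uniform((n, n), dtype=tf.float32)
A = tf.matmul(m, m, transpose_b=True) + n * tf.eye(n)  # SPD by construction
b = tf.random.uniform((n,), dtype=tf.float32)
x = conjugate_gradient(A, b)
print(tf.linalg.norm(tf.linalg.matvec(A, x) - b).numpy())  # residual norm

The other benchmarks map onto TensorFlow primitives in a similarly direct way, e.g. tf.matmul for the matrix-matrix multiply and tf.signal.fft for the FFT.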
March 17, 2019 by hgpu