FusionStitching: Deep Fusion and Code Generation for Tensorflow Computations on GPUs
Alibaba Inc.
arXiv:1811.05213 [cs.DC] (13 Nov 2018)
@article{long2018fusionstitching,
  title={FusionStitching: Deep Fusion and Code Generation for Tensorflow Computations on GPUs},
  author={Long, Guoping and Yang, Jun and Zhu, Kai and Lin, Wei},
  journal={arXiv preprint arXiv:1811.05213},
  year={2018},
  month={nov},
  eprint={1811.05213},
  archivePrefix={arXiv},
  primaryClass={cs.DC}
}
In recent years, there has been a surge of machine learning applications in industry. Many of them are built on popular AI frameworks such as Tensorflow, Torch, Caffe, and MxNet, and are powered by accelerator platforms such as GPUs. One important challenge of running Tensorflow computations on GPUs is the fine granularity problem: the FLOPs of individual ops are far too small to fully exploit the computing power of the underlying accelerators. The XLA framework provides a solid foundation for exploring this problem further. In this paper, we propose FusionStitching, a novel, comprehensive op fusion and code generation system that stitches computations into large GPU kernels. Experimental results on four public models and two of our large in-house applications show a further 55% (geometric mean) reduction in GPU kernel launches compared to the XLA fusion baseline. This improves the end-to-end (E2E) performance of both of our latency-critical in-house applications by up to 20%.
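The fine granularity problem the abstract describes can be illustrated with a minimal sketch (plain Python, not the paper's implementation): running each elementwise op separately corresponds to one kernel launch per op, while fusing the ops into a single composed pass issues only one launch over the data.

```python
# Minimal illustration of elementwise op fusion (a sketch, not FusionStitching
# itself): unfused execution performs one pass ("kernel launch") per op,
# while the fused version composes the ops into a single pass.

launches = 0

def launch(op, xs):
    """Simulate one kernel launch applying `op` elementwise over `xs`."""
    global launches
    launches += 1
    return [op(x) for x in xs]

def unfused(xs):
    ys = launch(lambda x: x * 2.0, xs)        # op 1: scale
    ys = launch(lambda x: x + 1.0, ys)        # op 2: bias
    return launch(lambda x: max(x, 0.0), ys)  # op 3: ReLU

def fused(xs):
    # One launch evaluating the composed ops per element.
    return launch(lambda x: max(x * 2.0 + 1.0, 0.0), xs)

data = [-1.0, 0.5, 2.0]

launches = 0
out_unfused = unfused(data)
unfused_launches = launches

launches = 0
out_fused = fused(data)
fused_launches = launches

print(out_unfused, unfused_launches)  # 3 launches for 3 ops
print(out_fused, fused_launches)      # 1 launch for the fused kernel
```

The outputs are identical, but the fused version cuts the launch count from three to one; reducing exactly this kind of per-op launch overhead is what the reported 55% reduction in kernel launches targets.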
November 18, 2018 by hgpu