DISC: A Dynamic Shape Compiler for Machine Learning Workloads
Alibaba Group
arXiv:2103.05288 [cs.DC], (9 Mar 2021)
@misc{zhu2021disc,
  title={DISC: A Dynamic Shape Compiler for Machine Learning Workloads},
  author={Kai Zhu and Wenyi Zhao and Zhen Zheng and Tianyou Guo and Pengzhan Zhao and Junjie Bai and Jun Yang and Xiaoyong Liu and Lansong Diao and Wei Lin},
  year={2021},
  eprint={2103.05288},
  archivePrefix={arXiv},
  primaryClass={cs.DC}
}
Many recent machine learning models exhibit dynamic shape characteristics. However, existing AI compiler optimization systems struggle with dynamic shape models in several respects: compilation overhead, memory usage, optimization pipeline complexity, and deployment complexity. This paper presents DISC, a compiler system that natively supports optimization of dynamic shape workloads. DISC enriches a set of IRs to form a fully dynamic shape representation. It generates the runtime flow at compile time to support dynamic-shape-based logic, which avoids interpretation overhead at runtime and enlarges the opportunity for host-device co-optimization. It addresses the kernel fusion problem for dynamic shapes with shape propagation and constraint collection methods. This is the first work to demonstrate how to build an end-to-end dynamic shape compiler on the MLIR infrastructure. Experiments show that DISC achieves up to a 3.3x speedup over TensorFlow/PyTorch, and up to 1.8x over Nimble.
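To see why shape-specialized compilation incurs the overhead the abstract describes, consider a minimal sketch (not DISC's actual implementation, and with a hypothetical `compile_kernel` stand-in): a JIT cache keyed on concrete input shapes recompiles for every new shape it encounters, while a dynamic-shape approach compiles one rank-generic kernel and resolves shapes at runtime.

```python
# Minimal sketch (assumed, not DISC's implementation): contrast a
# shape-specialized JIT cache with a rank-keyed dynamic-shape cache.

def compile_kernel(key):
    """Stand-in for an expensive compilation step."""
    return lambda x: [v * 2 for v in x]  # trivial "kernel" body

class StaticShapeCompiler:
    """Specializes and caches one kernel per concrete input shape."""
    def __init__(self):
        self.cache = {}

    def run(self, x):
        key = len(x)  # concrete shape is baked into the kernel
        if key not in self.cache:
            self.cache[key] = compile_kernel(key)
        return self.cache[key](x)

class DynamicShapeCompiler:
    """Compiles one kernel per rank; shapes are runtime values."""
    def __init__(self):
        self.cache = {}

    def run(self, x):
        key = "rank1"  # one kernel covers all 1-D lengths
        if key not in self.cache:
            self.cache[key] = compile_kernel(key)
        return self.cache[key](x)

static, dynamic = StaticShapeCompiler(), DynamicShapeCompiler()
for seq_len in (8, 16, 32, 8, 64):  # e.g. varying sequence lengths
    batch = list(range(seq_len))
    static.run(batch)
    dynamic.run(batch)

print(len(static.cache))   # 4: one compilation per distinct shape
print(len(dynamic.cache))  # 1: a single shape-generic compilation
```

In a real model serving variable batch sizes or sequence lengths, the static-shape cache grows with every new shape seen, which is the compilation-overhead and memory-usage problem DISC targets.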
March 14, 2021 by hgpu