
Enabling Data Movement and Computation Pipelining in Deep Learning Compiler

Guyue Huang, Yang Bai, Liu Liu, Yuke Wang, Bei Yu, Yufei Ding, Yuan Xie
University of California, Santa Barbara
arXiv:2210.16691 [cs.DC], 29 Oct 2022

@misc{huang2022alcop,
   doi       = {10.48550/ARXIV.2210.16691},
   url       = {https://arxiv.org/abs/2210.16691},
   author    = {Huang, Guyue and Bai, Yang and Liu, Liu and Wang, Yuke and Yu, Bei and Ding, Yufei and Xie, Yuan},
   keywords  = {Distributed, Parallel, and Cluster Computing (cs.DC), FOS: Computer and information sciences},
   title     = {Enabling Data Movement and Computation Pipelining in Deep Learning Compiler},
   publisher = {arXiv},
   year      = {2022},
   copyright = {Creative Commons Attribution 4.0 International}
}

Pipelining between data loading and computation is a critical tensor program optimization for GPUs. Multi-stage pipelining across the multi-level buffer hierarchy of the GPU is particularly indispensable on the latest NVIDIA Ampere GPUs to reduce resource idleness and guarantee kernel performance. Currently, access to this pipelining optimization comes through expert-written libraries such as cuBLAS rather than through a tensor program transformation, which makes it inextensible to new operators and un-composable with prior tensor compiler optimizations. We present ALCOP, an automatic pipelining framework based on the TVM infrastructure that overcomes three critical obstacles in generating code for pipelining: detection of pipelining-applicable buffers, program transformation for multi-level multi-stage pipelining, and efficient schedule parameter search by incorporating static analysis. Experiments show that ALCOP can generate programs with a 1.23x speedup on average (up to 1.73x) over vanilla TVM. On end-to-end models, ALCOP can improve upon TVM by up to 1.18x, and upon XLA by up to 1.64x. Besides, our performance model significantly improves the efficiency of the schedule tuning process and can find schedules with 99% of the performance given by exhaustive search while requiring 40x fewer trials.
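To make the idea concrete, below is a minimal CUDA sketch (not ALCOP's generated code): while the current tile is consumed from shared memory, the load of the next tile is issued into a second buffer, so data movement overlaps with computation. ALCOP generalizes this to multi-level (global, shared, register) and multi-stage pipelines and, on Ampere, to asynchronous copies; here a plain two-stage double buffer with a toy reduction stands in for real GEMM-style compute. The kernel name, tile size, and workload are illustrative assumptions.

// A minimal sketch, not ALCOP's actual output: two-stage (double-buffered)
// pipelining of global->shared loads against computation. ALCOP generates
// deeper, multi-level pipelines; kernel name, TILE, and the toy workload
// are assumptions made for illustration.
#include <cstdio>
#include <cuda_runtime.h>

#define TILE 256

__global__ void pipelined_sum(const float* __restrict__ in, float* out, int n_tiles) {
    __shared__ float buf[2][TILE];            // double buffer in shared memory
    int tid = threadIdx.x;
    float acc = 0.0f;

    buf[0][tid] = in[tid];                    // prologue: stage tile 0
    __syncthreads();

    for (int t = 0; t < n_tiles; ++t) {
        int cur = t & 1, nxt = cur ^ 1;
        if (t + 1 < n_tiles)                  // issue the load of tile t+1 ...
            buf[nxt][tid] = in[(t + 1) * TILE + tid];
        acc += buf[cur][tid] * buf[cur][(tid + 1) % TILE];  // ... while computing on tile t
        __syncthreads();                      // tile t+1 is now visible to all threads
    }
    atomicAdd(out, acc);
}

int main() {
    const int n_tiles = 64, n = n_tiles * TILE;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;
    *out = 0.0f;
    pipelined_sum<<<1, TILE>>>(in, out, n_tiles);
    cudaDeviceSynchronize();
    printf("result = %.0f (expected %d)\n", *out, n);
    cudaFree(in);
    cudaFree(out);
    return 0;
}

In a real GEMM pipeline the per-iteration compute is a tile of matrix multiply-accumulate, and the number of in-flight buffers (pipeline stages) becomes a tunable knob, presumably among the schedule parameters that ALCOP's static-analysis-guided search explores.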
