
ACRoBat: Optimizing Auto-batching of Dynamic Deep Learning at Compile Time

Pratik Fegade, Tianqi Chen, Phillip B. Gibbons, Todd C. Mowry
Computer Science Department, Carnegie Mellon University, USA
arXiv:2305.10611 [cs.LG], 17 May 2023

@misc{fegade2023acrobat,
   title={ACRoBat: Optimizing Auto-batching of Dynamic Deep Learning at Compile Time},
   author={Pratik Fegade and Tianqi Chen and Phillip B. Gibbons and Todd C. Mowry},
   year={2023},
   eprint={2305.10611},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}


Dynamic control flow is widely used to design expressive and efficient deep learning computations for applications such as text parsing, machine translation, and early exit from deep models. However, the resulting control flow divergence makes batching, an important performance optimization, difficult to perform manually. In this paper, we present ACRoBat, a framework that enables efficient automatic batching for dynamic deep learning computations by performing hybrid static+dynamic compiler optimizations and end-to-end tensor code generation. ACRoBat performs up to 8.5X better than DyNet, a state-of-the-art framework for automatic batching, on an Nvidia GeForce RTX 3070 GPU.
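To illustrate the problem the paper targets, the sketch below (a minimal, hypothetical NumPy example, not code from the paper) shows a recursive tree encoder whose control flow depends on the shape of each input tree. Because the sequence of operators diverges across samples, batching such a model by hand is awkward; auto-batching systems like DyNet, and ACRoBat at compile time, discover batchable operator instances across these divergent executions automatically. All names (`Node`, `encode`, `W_leaf`, `W_combine`) are illustrative assumptions.

```python
import numpy as np

# Hypothetical tree node: leaves carry embeddings, internal nodes
# combine their two children. Tree shapes differ per sample, so the
# sequence of matrix multiplications diverges across inputs.
class Node:
    def __init__(self, children=None, embedding=None):
        self.children = children or []
        self.embedding = embedding

HID = 64
rng = np.random.default_rng(0)
W_leaf = rng.standard_normal((HID, HID)) * 0.1
W_combine = rng.standard_normal((HID, 2 * HID)) * 0.1

def encode(node):
    # Control flow depends on the data (tree structure), not just tensor
    # shapes: two samples of the same nominal size may take different paths.
    if not node.children:
        return np.tanh(W_leaf @ node.embedding)
    left = encode(node.children[0])
    right = encode(node.children[1])
    return np.tanh(W_combine @ np.concatenate([left, right]))

leaf = lambda: Node(embedding=rng.standard_normal(HID))

# Two samples with different tree structures: a manual batching scheme
# would have to align these divergent recursions by hand.
sample_a = Node([leaf(), Node([leaf(), leaf()])])
sample_b = Node([Node([Node([leaf(), leaf()]), leaf()]), leaf()])
print(encode(sample_a).shape, encode(sample_b).shape)
```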