The OoO VLIW JIT Compiler for GPU Inference
UC Berkeley, MIT
arXiv:1901.10008 [cs.DC], 31 Jan 2019
@article{jain2019ooo,
  title={The OoO VLIW JIT Compiler for GPU Inference},
  author={Jain, Paras and Mo, Xiangxi and Jain, Ajay and Tumanov, Alexey and Gonzalez, Joseph E and Stoica, Ion},
  journal={arXiv preprint arXiv:1901.10008},
  year={2019}
}
Current trends in Machine Learning (ML) inference on hardware accelerated devices (e.g., GPUs, TPUs) point to alarmingly low utilization. As ML inference is increasingly time-bounded by tight latency SLOs, increasing data parallelism by batching requests is not an option. The need for better efficiency motivates GPU multiplexing. Furthermore, existing GPU programming abstractions force programmers to micro-manage GPU resources in an early-binding, context-free fashion. We propose a VLIW-inspired Out-of-Order (OoO) Just-in-Time (JIT) compiler that coalesces and reorders execution kernels at runtime for throughput-optimal device utilization while satisfying latency SLOs. We quantify the inefficiencies of space-only and time-only multiplexing alternatives and demonstrate an achievable 7.7x opportunity gap through spatial coalescing.
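To make the coalescing idea concrete, here is a minimal, hypothetical Python sketch (not the authors' implementation): an earliest-deadline queue that greedily packs independent kernels into one spatially multiplexed dispatch, letting a later kernel issue out of order when an earlier, wider one would overflow the device's occupancy budget. All names (OoOCoalescingScheduler, PendingKernel, the normalized occupancy model) are illustrative assumptions; the actual system reasons about CUDA kernels, streams, and measured SLOs.

```python
# Hypothetical sketch of VLIW-style kernel coalescing under latency SLOs.
# Not the paper's code: occupancy is modeled as a normalized fraction of the
# device, and kernels are opaque string handles.

import heapq
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class PendingKernel:
    deadline: float                           # absolute SLO deadline (seconds)
    kernel: str = field(compare=False)        # opaque handle to a GPU kernel
    occupancy: float = field(compare=False)   # fraction of the device it fills

class OoOCoalescingScheduler:
    """Greedy packer: pop the most urgent kernels first, then fill leftover
    device capacity with later, independent kernels (out-of-order issue)."""

    def __init__(self, capacity: float = 1.0):
        self.capacity = capacity              # normalized occupancy budget
        self.queue: list[PendingKernel] = []

    def submit(self, kernel: str, occupancy: float, slo_s: float) -> None:
        # Track each request's absolute deadline so urgency orders the heap.
        heapq.heappush(
            self.queue,
            PendingKernel(time.monotonic() + slo_s, kernel, occupancy))

    def next_bundle(self) -> list[str]:
        """Coalesce pending kernels into one spatially multiplexed dispatch."""
        bundle: list[str] = []
        skipped: list[PendingKernel] = []
        used = 0.0
        while self.queue and used < self.capacity:
            k = heapq.heappop(self.queue)
            if used + k.occupancy <= self.capacity:
                bundle.append(k.kernel)       # fits: issue now, even if a more
                used += k.occupancy           # urgent kernel was skipped
            else:
                skipped.append(k)             # too wide: retry in a later bundle
        for k in skipped:
            heapq.heappush(self.queue, k)
        return bundle
```

The deadline heap plus greedy packing is only a stand-in for the paper's runtime, but it captures why out-of-order issue helps: a wide kernel that cannot fit no longer blocks a narrower, later kernel from filling otherwise-idle device capacity.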
February 3, 2019 by hgpu