Luthier: Bridging Auto-Tuning and Vendor Libraries for Efficient Deep Learning Inference

Yongin Kwon, Joohyoung Cha, Sehyeon Oh, Misun Yu, Jeman Park, Jemin Lee
Electronics and Telecommunications Research Institute, Yuseong-gu, Republic of Korea
ACM Transactions on Embedded Computing Systems, 2025

@article{kwon2025luthier,
   title={Luthier: Bridging Auto-Tuning and Vendor Libraries for Efficient Deep Learning Inference},
   author={Kwon, Yongin and Cha, JooHyoung and Oh, Sehyeon and Yu, Misun and Park, Jeman and Lee, Jemin},
   journal={ACM Transactions on Embedded Computing Systems},
   year={2025},
   publisher={ACM New York, NY}
}

Recent deep learning compilers commonly adopt auto-tuning approaches that search for optimal kernel configurations for tensor programs from scratch, requiring tens of hours per operation and neglecting optimization factors that are crucial for parallel computing on asymmetric multicore processors. Meanwhile, hand-optimized inference libraries from hardware vendors provide high performance but lack the flexibility and automation needed for emerging models. To close this gap, we propose Luthier, which significantly narrows the search space by selecting the best kernel from existing inference libraries, and employs cost-model-based profiling to quickly determine the most efficient workload distribution for parallel execution. As a result, Luthier achieves up to 2.0x faster execution on convolution-based vision models and transformer-based language models (BERT, GPT) on both CPUs and GPUs, while reducing average tuning time by 95% compared to ArmNN, AutoTVM, Ansor, ONNXRuntime, and TFLite.
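
The two-stage idea in the abstract (pick a kernel from a small set of vendor-library candidates, then split the workload across asymmetric cores using a cost model) can be sketched in a few lines of Python. This is a minimal illustration under assumed inputs, not the paper's implementation: the kernel list, the big/LITTLE speed ratio, and the toy linear cost model (estimate_latency, VENDOR_KERNELS, tune) are all hypothetical names invented for this example.

from dataclasses import dataclass

@dataclass
class Kernel:
    name: str             # e.g. a vendor-library convolution variant (hypothetical)
    cost_per_flop: float  # profiled cost coefficient (toy stand-in for a cost model)

# Stage 1 search space: a handful of vendor-library kernels, instead of the
# huge from-scratch configuration space a generic auto-tuner would explore.
VENDOR_KERNELS = [
    Kernel("winograd_f2x3", 0.9),
    Kernel("im2col_gemm", 1.0),
    Kernel("direct_tiled", 1.2),
]

# Asymmetric cores: relative throughput of the big vs. LITTLE clusters (assumed).
BIG_SPEED, LITTLE_SPEED = 2.0, 1.0

def estimate_latency(kernel: Kernel, flops: float, big_share: float) -> float:
    """Cost-model estimate: the slower-finishing cluster dominates latency."""
    big_time = kernel.cost_per_flop * flops * big_share / BIG_SPEED
    little_time = kernel.cost_per_flop * flops * (1.0 - big_share) / LITTLE_SPEED
    return max(big_time, little_time)

def tune(flops: float):
    """Stage 2: pick the (kernel, workload split) the cost model predicts fastest."""
    candidates = [
        (estimate_latency(k, flops, share / 10), k.name, share / 10)
        for k in VENDOR_KERNELS
        for share in range(11)  # big-cluster share swept from 0.0 to 1.0
    ]
    return min(candidates)

if __name__ == "__main__":
    latency, kernel, big_share = tune(flops=1e9)
    print(f"best: {kernel}, big-core share {big_share:.1f}, est. latency {latency:.2e}")

Because the candidate set and the split grid are both tiny, this search runs in microseconds, which conveys why restricting tuning to profiled vendor kernels can cut tuning time so sharply relative to from-scratch auto-tuners.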