CMLCompiler: A Unified Compiler for Classical Machine Learning

Xu Wen, Wanling Gao, Anzheng Li, Lei Wang, Zihan Jiang, Jianfeng Zhan
Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
arXiv:2301.13441 [cs.LG], (1 Feb 2023)

@misc{https://doi.org/10.48550/arxiv.2301.13441,
   doi={10.48550/ARXIV.2301.13441},
   url={https://arxiv.org/abs/2301.13441},
   author={Wen, Xu and Gao, Wanling and Li, Anzheng and Wang, Lei and Jiang, Zihan and Zhan, Jianfeng},
   keywords={Machine Learning (cs.LG), FOS: Computer and information sciences},
   title={CMLCompiler: A Unified Compiler for Classical Machine Learning},
   publisher={arXiv},
   year={2023},
   copyright={arXiv.org perpetual, non-exclusive license}
}

Classical machine learning (CML) occupies nearly half of the machine learning pipelines in production applications. Unfortunately, it fails to fully utilize state-of-the-practice devices and performs poorly. Without a unified framework, hybrid deployments of deep learning (DL) and CML also suffer from severe performance and portability issues. This paper presents the design of a unified compiler, called CMLCompiler, for CML inference. We propose two unified abstractions: operator representations and extended computational graphs. The CMLCompiler framework performs conversion and graph optimization based on these two abstractions, then outputs an optimized computational graph to DL compilers or frameworks. We implement CMLCompiler on top of TVM. The evaluation shows CMLCompiler's portability and superior performance: it achieves up to 4.38x speedup on CPU, 3.31x speedup on GPU, and 5.09x speedup on IoT devices, compared to the state-of-the-art solutions — scikit-learn, Intel's scikit-learn extension, and Hummingbird. Our implementation of mixed CML and DL pipelines achieves up to 3.04x speedup compared with cross-framework implementations.
* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors