
CATBench: A Compiler Autotuning Benchmarking Suite for Black-box Optimization

Jacob O. Tørring, Carl Hvarfner, Luigi Nardi, Magnus Själander
NTNU
arXiv:2406.17811 [cs.LG], 24 Jun 2024

@misc{tørring2024catbenchcompilerautotuningbenchmarking,
   title={CATBench: A Compiler Autotuning Benchmarking Suite for Black-box Optimization},
   author={Jacob O. Tørring and Carl Hvarfner and Luigi Nardi and Magnus Själander},
   year={2024},
   eprint={2406.17811},
   archivePrefix={arXiv},
   primaryClass={cs.LG},
   url={https://arxiv.org/abs/2406.17811}
}

Bayesian optimization is a powerful method for automating the tuning of compilers. The complex landscape of autotuning presents a myriad of rarely considered structural challenges for black-box optimizers, and the lack of standardized benchmarks has limited the study of Bayesian optimization within the domain. To address this, we present CATBench, a comprehensive benchmarking suite that captures the complexities of compiler autotuning, ranging from discrete, conditional, and permutation parameter types to known and unknown binary constraints, as well as both multi-fidelity and multi-objective evaluations. The benchmarks in CATBench span a range of machine learning-oriented computations, from tensor algebra to image processing and clustering, and use state-of-the-art compilers, such as TACO and RISE/ELEVATE. CATBench offers a unified interface for evaluating Bayesian optimization algorithms, promoting reproducibility and innovation through an easy-to-use, fully containerized setup of both surrogate and real-world compiler optimization tasks. We validate CATBench on several state-of-the-art algorithms, revealing their strengths and weaknesses and demonstrating the suite's potential for advancing both Bayesian optimization and compiler autotuning research.
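To make the black-box evaluation loop described above concrete, here is a minimal sketch of how an optimizer interacts with an autotuning benchmark: sample a configuration from a mixed (discrete/permutation-like) parameter space, evaluate it as an opaque objective, and keep the best result. The parameter names, the `evaluate` placeholder, and the random-search baseline are illustrative assumptions for this sketch and do not reflect CATBench's actual API; a real benchmark query would compile and run a kernel and return measured objectives.

```python
import random

# Hypothetical mixed parameter space, loosely mirroring the kinds of
# parameters the abstract lists (discrete values, a loop-order choice
# standing in for a permutation parameter). Not CATBench's API.
SPACE = {
    "tile_size":  [8, 16, 32, 64],         # discrete
    "unroll":     [1, 2, 4, 8],            # discrete
    "loop_order": ["ijk", "ikj", "jik"],   # stand-in for a permutation parameter
}

def sample_config(space):
    """Draw one random configuration from the space."""
    return {name: random.choice(values) for name, values in space.items()}

def evaluate(config):
    """Placeholder objective standing in for a compile-and-run measurement.
    A real benchmark would build the kernel with `config` and time it."""
    penalty = abs(config["tile_size"] - 32) / 32   # pretend 32 is optimal
    penalty += abs(config["unroll"] - 4) / 4       # pretend 4 is optimal
    penalty += 0.0 if config["loop_order"] == "ikj" else 0.5
    return penalty  # lower is better (e.g., runtime in seconds)

def random_search(space, budget=50, seed=0):
    """Trivial black-box baseline: try `budget` random configs, keep the best."""
    random.seed(seed)
    best_cfg, best_val = None, float("inf")
    for _ in range(budget):
        cfg = sample_config(space)
        val = evaluate(cfg)
        if val < best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

if __name__ == "__main__":
    cfg, val = random_search(SPACE)
    print(f"best config: {cfg}  objective: {val:.3f}")
```

A Bayesian optimizer would replace `random_search` with a model-guided loop (fit a surrogate to the evaluated configurations, then pick the next configuration by maximizing an acquisition function), while the benchmark side of the interface stays the same.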