
BenchDirect: A Directed Language Model for Compiler Benchmarks

Foivos Tsimpourlas, Pavlos Petoumenos, Min Xu, Chris Cummins, Kim Hazelwood, Ajitha Rajan, Hugh Leather
Meta AI Research, University of Edinburgh
arXiv:2303.01557 [cs.LG], (2 Mar 2023)

@misc{https://doi.org/10.48550/arxiv.2303.01557,
   doi={10.48550/ARXIV.2303.01557},
   url={https://arxiv.org/abs/2303.01557},
   author={Tsimpourlas, Foivos and Petoumenos, Pavlos and Xu, Min and Cummins, Chris and Hazelwood, Kim and Rajan, Ajitha and Leather, Hugh},
   keywords={Machine Learning (cs.LG), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
   title={BenchDirect: A Directed Language Model for Compiler Benchmarks},
   publisher={arXiv},
   year={2023},
   copyright={Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}

The exponential increase in hardware-software complexity has made it impossible for compiler engineers to find the right optimization heuristics manually. Predictive models have been shown to find near-optimal heuristics with little human effort, but they are limited by a severe lack of diverse benchmarks to train on. Generative AI has been used by researchers to synthesize benchmarks into existing datasets. However, the synthetic programs are short, exceedingly simple and lacking in feature diversity. We develop BenchPress, the first ML compiler benchmark generator that can be directed within source code feature representations. BenchPress synthesizes executable functions by infilling code conditioned on the program’s left and right context. BenchPress uses active learning to introduce new benchmarks with unseen features into the dataset of Grewe et al.’s CPU vs. GPU heuristic, improving its acquired performance by 50%. BenchPress targets features that have been impossible for other synthesizers to reach. In 3 feature spaces, we outperform human-written code from GitHub, CLgen, CLSmith and the SRCIROR mutator in targeting the features of Rodinia benchmarks. BenchPress steers generation with beam search over a feature-agnostic language model. We improve this with BenchDirect, which utilizes a directed LM that infills programs by jointly observing source code context and the compiler features that are targeted. BenchDirect achieves up to 36% better accuracy in targeting the features of Rodinia benchmarks, is 1.8x more likely to give an exact match, and speeds up execution time by up to 72% compared to BenchPress. Both our models produce code that is difficult to distinguish from human-written code. We conduct a Turing test which shows our models’ synthetic benchmarks are labelled as ‘human-written’ as often as human-written code from GitHub.
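The abstract describes BenchPress steering a feature-agnostic language model with beam search: candidate infillings are scored by how close their compiler features land to a target feature vector, and only the closest candidates survive each round. The sketch below illustrates that idea only; `expand` (the LM proposing infilled candidates) and `extract` (the static compiler-feature extractor) are hypothetical placeholders, not the authors' implementation.

```python
import heapq
from typing import Callable, List, Sequence


def feature_distance(feats: Sequence[float], target: Sequence[float]) -> float:
    """Euclidean distance between a candidate's feature vector and the target."""
    return sum((a - b) ** 2 for a, b in zip(feats, target)) ** 0.5


def directed_beam_search(
    seed: str,
    target: Sequence[float],
    expand: Callable[[str], List[str]],    # hypothetical: LM proposes infilled candidates
    extract: Callable[[str], Sequence[float]],  # hypothetical: compiler-feature extractor
    beam_width: int = 4,
    steps: int = 3,
) -> str:
    """Keep only the beam_width candidates closest to the target features each step."""
    beam = [seed]
    for _ in range(steps):
        candidates = [c for s in beam for c in expand(s)]
        if not candidates:
            break
        # Greedily retain the candidates whose features are nearest the target.
        beam = heapq.nsmallest(
            beam_width,
            candidates,
            key=lambda c: feature_distance(extract(c), target),
        )
    return beam[0]
```

With toy stand-ins (programs as strings of `a`, the single "feature" being length, target length 5), `directed_beam_search("", [5], expand, extract, beam_width=2, steps=3)` converges on the string of length 5. BenchDirect's improvement, per the abstract, is to condition the LM on the target features directly rather than relying on this post-hoc filtering alone.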

* * *


HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
