
Autotuning CUDA: Applying NLP Techniques to LS-CAT

Lars Bjertnes, Jacob O. Tørring, Anne C. Elster
Norwegian University of Science and Technology (NTNU), Trondheim, Norway
Norsk IKT-konferanse for forskning og utdanning, 2021

@inproceedings{bjertnes2021autotuning,
   title={Autotuning CUDA: Applying NLP Techniques to LS-CAT},
   author={Bjertnes, Lars and T{\o}rring, Jacob O. and Elster, Anne C.},
   booktitle={Norsk IKT-konferanse for forskning og utdanning},
   number={1},
   pages={72--85},
   year={2021}
}


The abstract relation between hardware parameters and program performance makes setting program parameters a difficult task. Without autotuning, software can miss low-level optimizations, resulting in lower performance. Traditionally, time-consuming trial-and-error search methods have been the staple of autotuning. Applying natural language processing (NLP)-based machine learning (ML) methods to source code to perform autotuning-oriented tasks is a growing research topic. Earlier research has successfully performed a range of autotuning tasks across multiple source-code languages. However, most of the available source-code data is CPU-oriented, with very little GPU code. The LS-CAT (Large-Scale CUDA AutoTuning) dataset [BTE21] uses CUDA GPU-based kernels to generate a dataset for thread-coarsening. This paper implements several custom NLP-ML pipelines to evaluate ML-based thread-coarsening on the LS-CAT dataset, together with a custom scoring function that quantifies the performance impact of any choice. Several model configurations were able to beat both random choice (0.9400) and always selecting the largest thread block of 1024 (0.9437). The best model achieves a score of 0.9483, an average performance increase and speedup of 0.49 percent over the largest thread block. Implementing self-attention mechanisms proved to counteract overfitting, while a multi-label learning task outperformed the other approaches. Compared to previous datasets [Cum+17], LS-CAT's higher thread-coarsening precision enables a more accurate evaluation of model performance. The inst2vec embedding used in earlier works was unable to correctly parse the CUDA LLVM IR tokens, resulting in high data loss. Approaches to addressing this, along with other ideas for future work, are also discussed.
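
For intuition, the scores above read naturally as fractions of oracle performance, where 1.0 would mean always picking the fastest configuration. The paper defines its own scoring function; the following is a minimal sketch only, assuming the score is the mean ratio of the best-achievable runtime to the runtime of the chosen thread-block configuration. All names below are hypothetical, and NumPy is used for brevity.

import numpy as np

def selection_score(runtimes, choices):
    """Score thread-block choices against oracle-best runtimes.

    Hypothetical reconstruction, not the authors' exact definition.
    runtimes: (n_kernels, n_configs) measured runtime per kernel and
              candidate thread-block configuration.
    choices:  (n_kernels,) index of the configuration picked per kernel.
    Returns the mean of best_runtime / chosen_runtime, so an oracle
    selector scores 1.0 and any worse choice scores below 1.0.
    """
    best = runtimes.min(axis=1)
    chosen = runtimes[np.arange(len(choices)), choices]
    return float(np.mean(best / chosen))

# Two kernels, three candidate thread-block sizes:
runtimes = np.array([[1.2, 1.0, 1.5],
                     [2.0, 2.2, 1.9]])
print(selection_score(runtimes, np.array([1, 2])))  # oracle picks -> 1.0
print(selection_score(runtimes, np.array([0, 0])))  # suboptimal picks -> ~0.89

Under this assumed reading, the best model's 0.9483 means its choices recover about 94.8 percent of oracle performance on average, compared with 94.37 percent for always picking the largest thread block, which is consistent with the reported 0.49 percent speedup (0.9483 / 0.9437 ≈ 1.0049).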
