
Towards making the most of NLP-based device mapping optimization for OpenCL kernels

Petros Vavaroutsos, Ioannis Oroutzoglou, Dimosthenis Masouros, Dimitrios Soudris
School of Electrical and Computer Engineering, National Technical University of Athens, Greece
arXiv:2208.14124 [cs.LG], (30 Aug 2022)

@inproceedings{Vavaroutsos_2022,
   doi       = {10.1109/coins54846.2022.9855002},
   url       = {https://doi.org/10.1109/coins54846.2022.9855002},
   year      = {2022},
   month     = {aug},
   publisher = {IEEE},
   author    = {Petros Vavaroutsos and Ioannis Oroutzoglou and Dimosthenis Masouros and Dimitrios Soudris},
   title     = {Towards making the most of {NLP}-based device mapping optimization for {OpenCL} kernels},
   booktitle = {2022 {IEEE} International Conference on Omni-layer Intelligent Systems ({COINS})}
}



Nowadays, we are living in an era of extreme device heterogeneity. Besides the wide variety of conventional CPU architectures, accelerator devices such as GPUs and FPGAs have also come to the foreground, expanding the pool of available solutions for executing applications. However, choosing the appropriate device for each application's needs is an extremely challenging task due to the abstract relationship between hardware and software. Accurate automatic optimization algorithms are required to cope with the complexity and variety of current hardware and software; optimal execution has traditionally relied on time-consuming trial-and-error approaches. Machine learning (ML) and Natural Language Processing (NLP) have flourished over the last decade, with research focusing on deep architectures. In this context, applying natural language processing techniques to source code in order to conduct autotuning tasks is an emerging field of study. In this paper, we extend the work of Cummins et al., namely DeepTune, which tackles the problem of optimal device selection (CPU or GPU) for accelerated OpenCL kernels. We identify three major limitations of DeepTune and, based on these, we propose four different DNN models that provide enhanced contextual information of the source code. Experimental results show that our proposed methodology surpasses that of Cummins et al., providing up to 4% improvement in prediction accuracy.
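For readers unfamiliar with the task, the following is a minimal, hypothetical sketch of a DeepTune-style device-mapping classifier: tokenized OpenCL source is embedded, encoded by an LSTM, and mapped to a CPU/GPU decision. The vocabulary size, sequence length, layer widths, and label encoding are illustrative assumptions and do not reflect the models proposed in the paper.

    # Illustrative sketch only (PyTorch); not the authors' implementation.
    import torch
    import torch.nn as nn

    class DeviceMapper(nn.Module):
        def __init__(self, vocab_size=128, embed_dim=64, hidden_dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)   # token embeddings
            self.lstm = nn.LSTM(embed_dim, hidden_dim,
                                num_layers=2, batch_first=True)  # sequence encoder
            self.head = nn.Linear(hidden_dim, 2)                 # CPU vs. GPU logits

        def forward(self, tokens):
            # tokens: (batch, seq_len) integer-encoded OpenCL source tokens
            x = self.embed(tokens)
            _, (h, _) = self.lstm(x)
            return self.head(h[-1])                              # logits per kernel

    # Hypothetical usage: 4 kernels, each padded/truncated to 256 tokens.
    model = DeviceMapper()
    tokens = torch.randint(0, 128, (4, 256))
    pred = model(tokens).argmax(dim=1)   # 0 -> CPU, 1 -> GPU (assumed encoding)

In practice, such models are trained on labeled runtime measurements of kernels on both devices; the paper's contribution lies in richer source-code representations feeding this kind of classifier.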

