A Case Study: Exploiting Neural Machine Translation to Translate CUDA to OpenCL
College of Computer Science, Georgia Institute of Technology, Atlanta, GA, USA
arXiv:1905.07653 [cs.LG] (18 May 2019)
@misc{kim2019case,
  title={A Case Study: Exploiting Neural Machine Translation to Translate CUDA to OpenCL},
  author={Yonghae Kim and Hyesoon Kim},
  year={2019},
  eprint={1905.07653},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
The sequence-to-sequence (seq2seq) model for neural machine translation has significantly improved the accuracy of language translation. There have been new efforts to apply the seq2seq model to programming language translation and program comparison. In this work, we present the detailed steps of using a seq2seq model to translate CUDA programs to OpenCL programs, two languages with very similar programming styles. Our work covers (i) a training input set generation method, (ii) pre/post-processing, and (iii) a case study using the Polybench-gpu-1.0, NVIDIA SDK, and Rodinia benchmarks.
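The similarity the abstract relies on can be seen in a pair of equivalent vector-increment kernels, where the translation is largely a matter of keyword and index-query substitutions (`__global__` vs. `__kernel`, `threadIdx.x` vs. `get_global_id(0)`). Below is a minimal sketch of one plausible pre-processing step for such a model: splitting kernel source into whitespace-separated tokens. The tokenizer shown is an assumption for illustration; the paper's actual pre/post-processing pipeline is not reproduced here.

```python
import re

def tokenize(source):
    """Split source code into identifiers, numbers, and single symbols.

    Hypothetical tokenizer: seq2seq models typically consume a flat
    token sequence, but this exact scheme is an assumption, not the
    paper's method.
    """
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", source)

# Equivalent kernels in the two dialects, illustrating how close the
# programming styles are.
cuda_kernel = "__global__ void inc(float *a) { a[threadIdx.x] += 1.0f; }"
opencl_kernel = "__kernel void inc(__global float *a) { a[get_global_id(0)] += 1.0f; }"

src_tokens = tokenize(cuda_kernel)
tgt_tokens = tokenize(opencl_kernel)
```

Tokenized this way, the source and target sequences differ in only a handful of positions, which is one reason CUDA-to-OpenCL translation is an attractive target for a seq2seq model.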
May 23, 2019 by hgpu