A Case Study: Exploiting Neural Machine Translation to Translate CUDA to OpenCL

Yonghae Kim, Hyesoon Kim
College of Computer Science, Georgia Institute of Technology, Atlanta, GA, USA
arXiv:1905.07653 [cs.LG], (18 May 2019)

@misc{kim2019case,
   title={A Case Study: Exploiting Neural Machine Translation to Translate CUDA to OpenCL},
   author={Yonghae Kim and Hyesoon Kim},
   year={2019},
   eprint={1905.07653},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}


The sequence-to-sequence (seq2seq) model for neural machine translation has significantly improved the accuracy of language translation. There have been new efforts to apply this seq2seq model to programming-language translation and program comparison. In this work, we present the detailed steps of using a seq2seq model to translate CUDA programs to OpenCL programs, two languages with very similar programming styles. Our work covers (i) a training input set generation method, (ii) pre/post-processing, and (iii) a case study using the Polybench-gpu-1.0, NVIDIA SDK, and Rodinia benchmarks.
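To make the task concrete, here is an illustrative sketch (not the authors' code) of the kind of token-level CUDA→OpenCL correspondences that any translation pipeline, rule-based or learned, must capture. A seq2seq model learns such mappings from paired training examples rather than from a fixed dictionary; the dictionary below only covers a few standard, well-known equivalences.

```python
# Hypothetical sketch: a naive dictionary-based CUDA -> OpenCL rewriter.
# These keyword/builtin correspondences are standard; everything else
# (kernel launch syntax, pointer address spaces, host API) needs real
# structural translation, which is what the seq2seq model is trained for.
CUDA_TO_OPENCL = {
    "__global__": "__kernel",
    "__shared__": "__local",
    "__syncthreads()": "barrier(CLK_LOCAL_MEM_FENCE)",
    "threadIdx.x": "get_local_id(0)",
    "blockIdx.x": "get_group_id(0)",
    "blockDim.x": "get_local_size(0)",
}

def naive_translate(cuda_src: str) -> str:
    """Apply the substitutions above; a toy stand-in for the learned model."""
    out = cuda_src
    for cuda_tok, ocl_tok in CUDA_TO_OPENCL.items():
        out = out.replace(cuda_tok, ocl_tok)
    return out

kernel = "__global__ void add(float* a) { a[threadIdx.x] += 1.0f; }"
print(naive_translate(kernel))
```

Such literal substitution quickly breaks down on real code (e.g., multi-dimensional indexing, `__device__` helper functions, or launch configurations), which motivates learning the translation from benchmark-derived training pairs as the paper describes.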


HGPU group © 2010-2019 hgpu.org

All rights belong to the respective authors
