
Automated Tool to Generate Parallel CUDA code from a Serial C Code

Akhil Jindal, Nikhil Jindal, Divyashikha Sethia
Department of Computer Engineering, Delhi Technological University, Shahbad Daulatpur, Main Bawana Road, Delhi – 110042, India
International Journal of Computer Applications (0975 – 8887), Volume 50 – No.8, 2012

@article{jindal2012automated,
   author={Akhil Jindal and Nikhil Jindal and Divyashikha Sethia},
   title={Automated Tool to Generate Parallel CUDA Code from a Serial C Code},
   journal={International Journal of Computer Applications},
   year={2012},
   volume={50},
   number={8},
   pages={15-21},
   month={July},
   note={Published by Foundation of Computer Science, New York, USA}
}


With the introduction of GPGPUs, parallel programming has become simple and affordable. APIs such as NVIDIA’s CUDA have attracted many programmers to port their applications to GPGPUs. But writing CUDA code remains a challenging task. Moreover, the vast repositories of legacy serial C code still in wide industrial use are unable to take advantage of this extra computing power. Many attempts have thus been made to develop auto-parallelization techniques that convert serial C code to corresponding parallel CUDA code. Some parallelizers allow programmers to add "hints" to their serial programs, while another approach has been to build an interactive system between programmers and parallelizing tools/compilers. But none of these is a truly automatic technique, since the programmer remains fully involved in the process. In this paper, we present an automatic parallelization tool that completely relieves the programmer of any involvement in the parallelization process. Preliminary results on a basic set of common C codes show that the tool provides a significant speedup of ~10 times.
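The abstract describes loop-to-kernel auto-parallelization without showing code. As a minimal sketch (not the paper's actual output; all names are hypothetical), this is the canonical form of the transformation such tools perform, mapping each iteration of an independent serial C loop to one CUDA thread:

```cuda
// Serial C loop (the kind of input such a tool accepts):
//   for (int i = 0; i < n; i++) c[i] = a[i] + b[i];

// Generated CUDA kernel: one thread handles one loop iteration.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard: grid may exceed n
        c[i] = a[i] + b[i];
}

// Host-side launch, assuming d_a, d_b, d_c are device buffers of length n:
// round up the block count so every element is covered.
void launchVecAdd(const float *d_a, const float *d_b, float *d_c, int n) {
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
}
```

The tool would additionally have to insert the `cudaMalloc`/`cudaMemcpy` boilerplate that moves `a`, `b`, and `c` between host and device, which is where much of the manual CUDA-porting effort the abstract mentions normally goes.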

