
Extending OmpSs to support CUDA and OpenCL in C, C++ and Fortran Applications

Florentino Sainz, Sergi Mateo, Vicenç Beltran, José L. Bosque, Xavier Martorell, Eduard Ayguadé
Barcelona Supercomputing Center, Barcelona, Spain
Barcelona Supercomputing Center, Research report, 2014

@article{sainz2014extending,

   title={Extending OmpSs to support CUDA and OpenCL in C, C++ and Fortran Applications},

   author={Sainz, Florentino and Mateo, Sergi and Beltran, Vicen{\c{c}} and Bosque, Jose L and Martorell, Xavier and Ayguad{\'e}, Eduard},

   journal={Barcelona Supercomputing Center–Technical University of Catalonia, Computer Architecture Department, Tech. Rep},

   year={2014}

}


CUDA and OpenCL are the most widely used programming models to exploit hardware accelerators. Both programming models provide a C-based programming language to write accelerator kernels and a host API used to glue the host and kernel parts together. Although this model is a clear improvement over a low-level, ad-hoc programming model for each hardware accelerator, it is still too complex and cumbersome for general adoption. For large and complex applications that use several accelerators, the main problem becomes the explicit coordination and management of resources between the host and the hardware accelerators, which introduces a new family of issues (scheduling, data transfers, synchronization, …) that the programmer must take into account. In this paper, we propose a simple extension to OmpSs, a data-flow programming model, that dramatically simplifies the integration of accelerated code, in the form of CUDA or OpenCL kernels, into any C, C++ or Fortran application. Our proposal fully replaces the CUDA and OpenCL host APIs with a few pragmas, so we can leverage any kernel written in CUDA C or OpenCL C without any performance impact. Our compiler generates all the boilerplate code, while our runtime system takes care of kernel scheduling, data transfers between host and accelerators, and synchronization between the host and kernel parts. To evaluate our approach, we have ported several native CUDA and OpenCL applications to OmpSs by replacing all the CUDA or OpenCL API calls with a few pragmas. The OmpSs versions of these applications achieve competitive performance and scalability with significantly lower complexity than the original ones.
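
As a rough illustration of the pragma-based approach described in the abstract, the sketch below annotates an ordinary CUDA C kernel with OmpSs directives so that the compiler and runtime take over kernel launch, data movement and synchronization. The saxpy kernel, the file names and the specific clause parameters are illustrative assumptions based on publicly documented OmpSs syntax, not code taken from the paper.

/* kernels.cu : an ordinary CUDA C kernel, written and compiled as usual */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

/* kernels.h : OmpSs-annotated prototype included from the host code.
 * The ndrange clause gives the kernel geometry (1-D, n work items,
 * 128 per block); copy_deps tells the runtime to move the task's
 * in/inout data to and from the device automatically. Array-section
 * bounds follow the OmpSs start;length notation.                     */
#pragma omp target device(cuda) copy_deps ndrange(1, n, 128)
#pragma omp task in(x[0;n]) inout(y[0;n])
__global__ void saxpy(int n, float a, const float *x, float *y);

/* main.c : plain C host code. No cudaMalloc/cudaMemcpy/stream calls;
 * invoking the kernel spawns an asynchronous task that the OmpSs
 * runtime schedules on an available GPU.                             */
void run(int n, float a, float *x, float *y)
{
    saxpy(n, a, x, y);        /* looks like a normal function call   */
    #pragma omp taskwait      /* wait for the kernel task to finish  */
}

An equivalent OpenCL C kernel would be annotated the same way with device(opencl), which is the sense in which the paper's pragmas replace both host APIs.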