Migrating CUDA to oneAPI: A Smith-Waterman Case Study

Manuel Costanzo, Enzo Rucci, Carlos Garcia Sanchez, Marcelo Naiouf, Manuel Prieto-Matias
III-LIDI, Facultad de Informática, Universidad Nacional de La Plata – CIC, La Plata, Buenos Aires, Argentina
arXiv:2203.11100 [cs.DC], (21 Mar 2022)

@misc{https://doi.org/10.48550/arxiv.2203.11100,
   doi={10.48550/ARXIV.2203.11100},
   url={https://arxiv.org/abs/2203.11100},
   author={Costanzo, Manuel and Rucci, Enzo and Sanchez, Carlos Garcia and Naiouf, Marcelo and Prieto-Matias, Manuel},
   keywords={Distributed, Parallel, and Cluster Computing (cs.DC), Programming Languages (cs.PL), FOS: Computer and information sciences},
   title={Migrating CUDA to oneAPI: A Smith-Waterman Case Study},
   publisher={arXiv},
   year={2022},
   copyright={Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}

To face the programming challenges of heterogeneous computing, Intel recently introduced oneAPI, a new programming environment that allows code written in the Data Parallel C++ (DPC++) language to run on different devices such as CPUs, GPUs, and FPGAs. To address CUDA-based legacy codes, oneAPI provides a compatibility tool (dpct) that facilitates migration to DPC++. Given the large amount of existing CUDA-based software in the bioinformatics context, this paper presents our experiences porting SW#db, a well-known sequence alignment tool, to DPC++ using dpct. The experimental work demonstrated the usefulness of dpct for SW#db code migration, as well as the cross-GPU-vendor, cross-architecture portability of the migrated DPC++ code. In addition, the performance results showed that the migrated DPC++ code achieves efficiency rates comparable to its CUDA-native counterpart, and even better in some tests (approximately +5%).

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors