Under the Hood of SYCL – An Initial Performance Analysis With an Unstructured-mesh CFD Application

Istvan Z. Reguly, Andrew M.B. Owenson, Archie Powell, Stephen A. Jarvis, Gihan R. Mudalige
Faculty of Information Technology and Bionics, Pazmany Peter Catholic University, Budapest, Hungary
International Supercomputing Conference (ISC 2021), 2021

@inproceedings{reguly2021under,
   title     = {Under the Hood of SYCL -- An Initial Performance Analysis With an Unstructured-mesh CFD Application},
   author    = {Reguly, Istvan Z. and Owenson, Andrew M. B. and Powell, Archie and Jarvis, Stephen A. and Mudalige, Gihan R.},
   booktitle = {International Supercomputing Conference (ISC 2021)},
   year      = {2021}
}


As the computing hardware landscape becomes more diverse and hardware grows in complexity, a general-purpose parallel programming model capable of producing (performance-)portable code has become highly attractive. Intel's OneAPI suite, which is based on the SYCL standard, aims to fill this gap using a modern C++ API. In this paper, we use SYCL to parallelize MGCFD, an unstructured-mesh computational fluid dynamics (CFD) code, to explore the current performance of SYCL. The code is benchmarked on several modern processor systems from Intel (including CPUs and the latest Xe LP GPU), AMD, ARM and Nvidia, using a variety of current SYCL compilers, with a particular focus on OneAPI and how it maps to Intel's CPU and GPU architectures. We compare performance with the other parallelizations available in OP2, including SIMD, OpenMP, MPI and CUDA. The results are mixed: the performance of this class of applications, when parallelized with SYCL, depends strongly on the target architecture and compiler, but in many cases comes close to that of the currently prevalent parallel programming models. However, obtaining the best performance still requires different parallelization strategies or code paths to be written for different hardware.
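For readers unfamiliar with the pattern the abstract refers to, the following is a minimal, illustrative sketch (not taken from the paper, MGCFD, or OP2; names such as edge2node and edge_flux are hypothetical) of how an indirect, edge-based unstructured-mesh loop can be expressed as a SYCL 2020 parallel_for.

// Illustrative sketch only: one work-item per mesh edge, reading node data
// indirectly through an edge-to-node map and writing one value per edge.
#include <sycl/sycl.hpp>
#include <vector>

int main() {
  const size_t num_edges = 1024;
  const size_t num_nodes = 512;

  // Hypothetical mesh data: two node indices per edge, one value per node.
  std::vector<int>    edge2node(2 * num_edges, 0);
  std::vector<double> node_vals(num_nodes, 1.0);
  std::vector<double> edge_flux(num_edges, 0.0);

  sycl::queue q;  // default device selection (CPU or GPU, chosen by the runtime)
  {
    sycl::buffer<int>    map_buf(edge2node.data(), sycl::range<1>(edge2node.size()));
    sycl::buffer<double> val_buf(node_vals.data(), sycl::range<1>(num_nodes));
    sycl::buffer<double> flux_buf(edge_flux.data(), sycl::range<1>(num_edges));

    q.submit([&](sycl::handler& cgh) {
      sycl::accessor map(map_buf, cgh, sycl::read_only);
      sycl::accessor val(val_buf, cgh, sycl::read_only);
      sycl::accessor flux(flux_buf, cgh, sycl::write_only, sycl::no_init);

      // Indirect reads through the edge-to-node map; a direct write per edge.
      cgh.parallel_for(sycl::range<1>(num_edges), [=](sycl::id<1> idx) {
        const size_t e  = idx[0];
        const int    n0 = map[2 * e];
        const int    n1 = map[2 * e + 1];
        flux[e] = 0.5 * (val[n0] + val[n1]);
      });
    });
  }  // buffer destruction copies results back to the host vectors

  return 0;
}

Note that this sketch only reads node data indirectly; loops that increment shared node data indirectly would additionally need a race-avoidance strategy (e.g. colouring or atomics), which is where per-architecture code paths of the kind the abstract mentions typically arise.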

