
Exploring SYCL as a Portability Layer for High-Performance Computing on CPUs

Hari Abram, Nikela Papadopoulou, Miquel Pericas
Chalmers University of Technology, University of Gothenburg, Gothenburg, Sweden
ISC High Performance 2025 Research Paper Proceedings (40th International Conference), 2025

As multicore vector processors improve in computational and memory performance, running SIMT (Single Instruction Multiple Threads) programs on CPUs has become increasingly appealing, potentially eliminating the need for dedicated GPU hardware. SYCL is a royalty-free cross-platform C++ programming model for heterogeneous computing that implements the SIMT model and provides a path to run GPU programs on CPU hardware. Several SYCL implementations have been developed. To understand their performance on multicore vector CPUs, this paper systematically evaluates the two major SYCL implementations using a set of micro-benchmarks to assess key features such as memory bandwidth, kernel scheduling, synchronization, and vectorization. We test SYCL implementations across Intel Icelake, AMD Zen2, Fujitsu A64FX, and AWS Graviton3 systems, comparing Ahead-of-Time (AoT) and Just-In-Time (JIT) compilation flows, with OpenCL and OpenMP backends, and compare the results with native OpenMP implementations. Based on the results from these experiments, we generate a table of recommendations for SYCL developers. In general, our results show that AoT outperforms JIT in memory bandwidth, while JIT excels in scheduling and synchronization. SYCL on A64FX achieves near-peak memory bandwidth but underperforms in parallelization and reduction tasks. Our study also explores data management strategies, including USM and buffer & accessor (B&A), and backend configurations, revealing that these factors significantly impact performance. Our findings underscore the importance of selecting the appropriate toolchain to maximize SYCL’s potential for portable and high-performance execution across diverse hardware platforms.
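The abstract contrasts two SYCL data-management strategies, Unified Shared Memory (USM) and buffers & accessors (B&A). The snippet below is a minimal, hedged sketch of what these two styles look like in standard SYCL 2020 when targeting a CPU device; it is illustrative only and is not taken from the paper's benchmark suite, and the array size and kernel are arbitrary choices.

```cpp
#include <sycl/sycl.hpp>
#include <vector>

int main() {
  // Select the CPU device explicitly (the paper's focus is CPU execution).
  sycl::queue q{sycl::cpu_selector_v};
  const size_t n = 1 << 20;

  // USM style: shared allocations usable on host and device, explicit wait.
  float *a = sycl::malloc_shared<float>(n, q);
  float *b = sycl::malloc_shared<float>(n, q);
  for (size_t i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
  q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
    a[i] += b[i];
  }).wait();
  sycl::free(a, q);
  sycl::free(b, q);

  // Buffer & accessor (B&A) style: the runtime tracks dependencies and
  // data movement; results are written back when the buffers go out of scope.
  std::vector<float> x(n, 1.0f), y(n, 2.0f);
  {
    sycl::buffer<float, 1> xb{x.data(), sycl::range<1>{n}};
    sycl::buffer<float, 1> yb{y.data(), sycl::range<1>{n}};
    q.submit([&](sycl::handler &h) {
      sycl::accessor xa{xb, h, sycl::read_write};
      sycl::accessor ya{yb, h, sycl::read_only};
      h.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        xa[i] += ya[i];
      });
    });
  }
  return 0;
}
```

Compiled with a SYCL implementation such as DPC++ or AdaptiveCpp, the same source can be built AoT for a specific CPU target or JIT-compiled at runtime, which is the toolchain distinction the paper's recommendations revolve around.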