
SYCL in the edge: performance and energy evaluation for heterogeneous acceleration

Youssef Faqir-Rhazoui, Carlos García
Department of Computer Architecture and Automatics, Complutense University of Madrid, Madrid, Spain
The Journal of Supercomputing, 2024

@article{faqir2024sycl,

   title={SYCL in the edge: performance and energy evaluation for heterogeneous acceleration},

   author={Faqir-Rhazoui, Youssef and Garc{\'i}a, Carlos},

   journal={The Journal of Supercomputing},

   pages={1--21},

   year={2024},

   publisher={Springer}

}

Edge computing is essential to handle increasing data volumes and processing capacities. It provides real-time and secure data processing near data sources, like smart devices, alleviating cloud computing energy use and saving network bandwidth. Specialized accelerators, like GPUs and FPGAs, are vital for low-latency edge computing, but the need to customize code for different hardware and vendors poses significant compatibility issues. This paper evaluates the potential of SYCL to address the code portability issues encountered in edge computing. We employed the Polybench suite to compare various SYCL implementations, specifically DPC++ and AdaptiveCpp, with the native solution, CUDA. The disparity between SYCL implementations was negligible, at just 5%. Furthermore, we evaluated SYCL in the context of specific edge computing applications such as video processing using three different optical flow algorithms. The results revealed a slight performance gap of 3% when transitioning from CUDA to SYCL. Upon evaluating energy consumption, the observed difference varied depending on the application utilized. These gaps are the price one may need to pay for the ability to successfully run the same code on two distinct edge boards. These findings underscore SYCL's capacity to increase productivity in terms of development costs and facilitate IoT deployment without being locked into a particular platform or manufacturer.