Extending SYCL’s Programming Paradigm with Tensor-based SIMD Abstractions

Wilson Feng, Shucai Yao, Kai Ting Wang, Md Aamir Raihan, Laichun Feng, Chunrong Xu
Huawei Canada Research Centre, Markham, Canada
2022 ACM/SPEC on International Conference on Performance Engineering (ICPE’22), 2022


@inproceedings{feng2022extending,
   title={Extending SYCL's Programming Paradigm with Tensor-based SIMD Abstractions},
   author={Feng, Wilson and Yao, Shucai and Wang, Kai Ting and Raihan, Md Aamir and Feng, Laichun and Xu, Chunrong},
   booktitle={Proceedings of the 2022 ACM/SPEC on International Conference on Performance Engineering},
   year={2022}
}
Heterogeneous computing has emerged as an important method for supporting more than one kind of processor or accelerator in a program. There is generally a trade-off between source-code portability and device performance in heterogeneous programming, so new programming abstractions that help programmers reduce development effort while minimizing performance penalties are extremely valuable. The Khronos SYCL [2] standard defines an abstract single-program-multiple-data (SPMD) programming model for heterogeneous computing. This paper presents a language extension on top of the SYCL standard that gives programmers additional flexibility. We introduce a set of single-instruction-multiple-data (SIMD) abstractions based on multi-dimensional arrays (tensors) that work in conjunction with the existing SPMD programming paradigm. Our work is based on a C++ language extension and a new set of LLVM intermediate representation (IR) instructions for representing SIMD programs, together with custom optimization passes that perform instruction lowering, automatic address allocation, and synchronization insertion. We show how our work can be used alongside conventional SYCL SPMD programming for benchmarks such as general matrix multiplication (GEMM) and lower-upper (LU) inverse [5], and evaluate its hardware-utilization performance.

