
The Celerity High-level API: C++20 for Accelerator Clusters

Peter Thoman, Florian Tischler, Philip Salzmann, Thomas Fahringer
Distributed and Parallel Systems, University of Innsbruck, Technikerstraße 21a, 6020 Innsbruck, Tirol, Austria
International Journal of Parallel Programming, 2022

@article{thoman2022celerity,
  title={The Celerity High-level API: C++20 for Accelerator Clusters},
  author={Thoman, Peter and Tischler, Florian and Salzmann, Philip and Fahringer, Thomas},
  journal={International Journal of Parallel Programming},
  pages={1--19},
  year={2022},
  publisher={Springer}
}

Providing convenient APIs and notations for data parallelism that remain accessible to programmers while still delivering good performance has been a long-term goal of researchers as well as language and library designers. C++20 introduces ranges and views, together with a concise syntax for composing operations on them, but efficient implementations of these library features are restricted to CPUs. We present the Celerity High-level API, which makes similarly concise mechanisms applicable to GPUs and accelerators, and even to distributed-memory clusters of GPUs. Crucially, we achieve this very high level of abstraction without a significant negative impact on performance compared to a lower-level implementation, and without introducing any non-standard toolchain components or compilers, by implementing a C++ library infrastructure on top of the Celerity system. This is made possible by two central API design and implementation strategies, which form the core of our contribution: first, gathering as much information as possible at compile time and using metaprogramming techniques to automatically fuse several distinctly formulated processing steps into a single accelerator kernel invocation; and second, leveraging C++20 "Concepts" to avoid type erasure, allowing for highly efficient code generation. We evaluated our approach quantitatively against lower-level manual implementations of several benchmarks, demonstrating its low overhead. Additionally, we investigated the individual performance impact of our specific optimizations and design choices, illustrating the advantages afforded by a Concepts-based approach.
