
Abstractions for C++ code optimizations in parallel high-performance applications

Jiří Klepl, Adam Šmelko, Lukáš Rozsypal, Martin Kruliš
Department of Distributed and Dependable Systems, Charles University, Malostranské nám. 25, Prague, 118 00, Czech Republic
Parallel Computing, 121, 103096, 2024

@article{klepl2024abstractions,

   title={Abstractions for C++ code optimizations in parallel high-performance applications},

   author={Klepl, Ji{\v{r}}{\'i} and {\v{S}}melko, Adam and Rozsypal, Luk{\'a}{\v{s}} and Kruli{\v{s}}, Martin},

   journal={Parallel Computing},

   volume={121},

   pages={103096},

   year={2024},

   publisher={Elsevier}

}

In many computational problems, memory throughput is the performance bottleneck, especially in the domain of parallel computing. Software needs to be attuned to hardware features like cache architectures or concurrent memory banks to reach a decent level of performance efficiency. This can be achieved by selecting the right memory layouts for data structures or changing the order of data structure traversal. In this work, we present an abstraction for traversing a set of regular data structures (e.g., multidimensional arrays) that allows the design of traversal-agnostic algorithms. Such algorithms can easily optimize for memory performance and employ semi-automated parallelization or autotuning without altering their internal code. We also add an abstraction for autotuning that allows defining tuning parameters in one place and removes boilerplate code. The proposed solution was implemented as an extension of the Noarr library that simplifies a layout-agnostic design of regular data structures. It is implemented entirely using C++ template meta-programming without any nonstandard dependencies, so it is fully compatible with existing compilers, including CUDA NVCC or Intel DPC++. We evaluate the performance and expressiveness of our approach on the Polybench-C benchmarks.
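The layout-agnostic design described in the abstract can be illustrated with a minimal, self-contained C++ sketch. This is not the Noarr API; the Matrix, RowMajor, ColMajor, and matvec names below are hypothetical and serve only to show the underlying principle: the algorithm is written once against an abstract indexing interface, and the memory layout is chosen by a template parameter without modifying the algorithm's body.

#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical layout policies: each maps a 2D index (i, j) to a flat offset.
// Swapping the policy changes the memory layout, not the algorithm.
struct RowMajor {
    static std::size_t offset(std::size_t i, std::size_t j,
                              std::size_t /*rows*/, std::size_t cols) {
        return i * cols + j;
    }
};

struct ColMajor {
    static std::size_t offset(std::size_t i, std::size_t j,
                              std::size_t rows, std::size_t /*cols*/) {
        return j * rows + i;
    }
};

// A layout-agnostic matrix: callers never see the layout directly.
template <typename T, typename Layout>
class Matrix {
public:
    Matrix(std::size_t rows, std::size_t cols)
        : rows_(rows), cols_(cols), data_(rows * cols) {}

    T &at(std::size_t i, std::size_t j) {
        return data_[Layout::offset(i, j, rows_, cols_)];
    }
    const T &at(std::size_t i, std::size_t j) const {
        return data_[Layout::offset(i, j, rows_, cols_)];
    }

    std::size_t rows() const { return rows_; }
    std::size_t cols() const { return cols_; }

private:
    std::size_t rows_, cols_;
    std::vector<T> data_;
};

// A layout-agnostic algorithm: matrix-vector multiply written once,
// usable with any layout policy (and therefore tunable for cache behavior).
template <typename T, typename Layout>
std::vector<T> matvec(const Matrix<T, Layout> &m, const std::vector<T> &x) {
    std::vector<T> y(m.rows(), T{});
    for (std::size_t i = 0; i < m.rows(); ++i)
        for (std::size_t j = 0; j < m.cols(); ++j)
            y[i] += m.at(i, j) * x[j];
    return y;
}

int main() {
    // Switching to Matrix<float, ColMajor> requires no change to matvec.
    Matrix<float, RowMajor> a(2, 3);
    for (std::size_t i = 0; i < a.rows(); ++i)
        for (std::size_t j = 0; j < a.cols(); ++j)
            a.at(i, j) = static_cast<float>(i + j);

    std::vector<float> x{1.0f, 2.0f, 3.0f};
    for (float v : matvec(a, x))
        std::cout << v << ' ';
    std::cout << '\n';
}

In Noarr itself, the same separation is expressed through C++ template meta-programming and, per the abstract, extends beyond memory layouts to traversal order, semi-automated parallelization, and autotuning; the sketch above only illustrates the basic layout-agnostic idea.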