Alpaka – An Abstraction Library for Parallel Kernel Acceleration
Helmholtz-Zentrum Dresden – Rossendorf, Dresden, Germany
arXiv:1602.08477 [cs.DC], (26 Feb 2016)
@article{zenker2016alpaka,
  title={Alpaka -- An Abstraction Library for Parallel Kernel Acceleration},
  author={Zenker, Erik and Worpitz, Benjamin and Widera, Ren{\'e} and Huebl, Axel and Juckeland, Guido and Kn{\"u}pfer, Andreas and Nagel, Wolfgang E. and Bussmann, Michael},
  year={2016},
  month={feb},
  eprint={1602.08477},
  archivePrefix={arXiv},
  primaryClass={cs.DC}
}
Porting applications to new hardware or programming models is a tedious and error-prone process. Any help that eases these burdens saves developer time that can then be invested in advancing the application itself instead of preserving the status quo on a new platform.
The Alpaka library defines and implements an abstract hierarchical redundant parallelism model. The model exploits the parallelism and memory hierarchies available on a node at all levels present in current hardware. In doing so, it achieves platform and performance portability across various types of accelerators by ignoring the levels a specific accelerator does not support and utilizing only the ones it does. All hardware types (multi- and many-core CPUs, GPUs and other accelerators) are supported and can be programmed in the same way. The Alpaka C++ template interface allows for straightforward extension of the library to support other accelerators and for specialization of its internals for optimization.
Running Alpaka applications on a new (and supported) platform requires changing only a single line of source code instead of maintaining a multitude of #ifdefs.
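To make the single-line retargeting idea concrete, below is a minimal, hypothetical C++ sketch; the names used here (CpuSerial, GpuLike, AxpyKernel, exec) are illustrative placeholders and not the actual Alpaka interface. The kernel is written once as a functor templated on the accelerator type, and switching back-ends amounts to changing one type alias.

// Sketch of single-line retargeting; placeholder types, not the real Alpaka API.
#include <cstddef>
#include <iostream>
#include <utility>

// Stand-ins for two accelerator back-ends.
struct CpuSerial {};
struct GpuLike   {};

// The kernel is written once, templated on the accelerator type.
struct AxpyKernel
{
    template<typename TAcc>
    void operator()(TAcc const&, std::size_t i, float a,
                    float const* x, float* y) const
    {
        y[i] = a * x[i] + y[i];
    }
};

// Toy "executor"; a real library would specialize this per back-end.
template<typename TAcc, typename TKernel, typename... TArgs>
void exec(std::size_t n, TKernel const& kernel, TArgs&&... args)
{
    TAcc acc{};
    for(std::size_t i = 0; i < n; ++i)
        kernel(acc, i, std::forward<TArgs>(args)...);
}

int main()
{
    // Retargeting the application means changing only this alias
    // (e.g. to GpuLike) instead of scattering #ifdefs through the code.
    using Acc = CpuSerial;

    float x[4] = {1.f, 2.f, 3.f, 4.f};
    float y[4] = {0.f, 0.f, 0.f, 0.f};
    exec<Acc>(4, AxpyKernel{}, 2.f, x, y);

    std::cout << y[0] << " " << y[3] << "\n"; // prints: 2 8
    return 0;
}

In a real Alpaka application the per-back-end execution machinery is provided by the library itself; only the accelerator selection would differ between platforms.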
March 1, 2016 by hgpu