
OmpSs task offload

Florentino Sainz
Facultat d'Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya
Universitat Politècnica de Catalunya, 2015

@article{sainz2014ompss,
   title={OmpSs Task offload},
   author={Sainz Manteca, Florentino},
   year={2014},
   publisher={Universitat Polit{\`e}cnica de Catalunya}
}


Exascale performance requires a level of energy efficiency that is only achievable with specialized hardware. Hence, building a general-purpose HPC system with exascale performance will require different types of processors, memory technologies and interconnection networks. Heterogeneous hardware is already present in some top supercomputers, which are composed of different compute nodes that in turn contain different types of processors and memories. However, heterogeneous hardware is much harder to manage and exploit than homogeneous hardware, which further increases the complexity of the applications that run on HPC systems. Most HPC applications use MPI to implement a rigid Single Program Multiple Data (SPMD) execution model that no longer fits the heterogeneous nature of the underlying hardware. MPI does provide the powerful and flexible MPI_Comm_spawn API call, which was designed to exploit heterogeneous hardware dynamically, but at the expense of a higher complexity that has hindered its wider adoption. In this master thesis, we extend the OmpSs programming model to dynamically offload MPI kernels, replacing the low-level and error-prone MPI_Comm_spawn call with high-level, easy-to-use OmpSs pragmas. The evaluation shows that our proposal dramatically simplifies the dynamic offloading of MPI kernels while keeping the same performance and scalability as MPI_Comm_spawn.
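For context, the kind of hand-written boilerplate the thesis aims to hide behind OmpSs pragmas looks roughly like the C sketch below: a host process spawns an MPI kernel binary with MPI_Comm_spawn and then moves data over the resulting inter-communicator explicitly. This is only an illustrative sketch of the standard MPI_Comm_spawn pattern; the kernel path, process count, tags and buffer sizes are assumptions and are not taken from the thesis.

    #include <mpi.h>

    /* Illustrative host-side offload: spawn 4 worker processes running a
     * separate kernel executable and exchange data with them by hand.
     * "./offload_kernel" and all counts/tags are hypothetical. */
    int main(int argc, char *argv[]) {
        MPI_Comm intercomm;
        int errcodes[4];
        double input[1024] = {0.0};
        double result[1024];

        MPI_Init(&argc, &argv);

        /* Low-level dynamic offload: the caller manages the spawned
         * executable, the process count, the inter-communicator and the
         * explicit message passing itself. */
        MPI_Comm_spawn("./offload_kernel", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &intercomm, errcodes);

        /* Ship the work to rank 0 of the spawned group... */
        MPI_Send(input, 1024, MPI_DOUBLE, 0, 0, intercomm);

        /* ...and wait for the kernel's result to come back. */
        MPI_Recv(result, 1024, MPI_DOUBLE, 0, 0, intercomm,
                 MPI_STATUS_IGNORE);

        MPI_Comm_disconnect(&intercomm);
        MPI_Finalize();
        return 0;
    }

In the approach proposed by the thesis, this orchestration (spawning, data movement and synchronization) is expressed instead through OmpSs task pragmas, letting the runtime handle the spawn call and the transfers implied by the task's inputs and outputs.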
