
Runtime Support for Performance Portability on Heterogeneous Distributed Platforms

Polykarpos Thomadakis, Nikos Chrisochoides
Department of Computer Science, Old Dominion University, Norfolk, 23529, Virginia, USA
arXiv:2303.02543 [cs.DC], (8 Mar 2023)

@misc{thomadakis2023runtime,
   doi={10.48550/ARXIV.2303.02543},
   url={https://arxiv.org/abs/2303.02543},
   author={Thomadakis, Polykarpos and Chrisochoides, Nikos},
   keywords={Distributed, Parallel, and Cluster Computing (cs.DC), FOS: Computer and information sciences},
   title={Runtime Support for Performance Portability on Heterogeneous Distributed Platforms},
   publisher={arXiv},
   year={2023},
   copyright={arXiv.org perpetual, non-exclusive license}
}


Hardware heterogeneity is here to stay for high-performance computing. Large-scale systems are currently equipped with multiple GPU accelerators per compute node and are expected to incorporate more specialized hardware. This shift in the computing ecosystem offers many opportunities for performance improvement; however, it also increases the complexity of programming for such architectures. This work introduces a runtime framework that enables effortless programming for heterogeneous systems while utilizing hardware resources efficiently. The framework is integrated within a distributed and scalable runtime system to facilitate performance portability across heterogeneous nodes. Along with the design, this paper describes the implementation and the optimizations performed, achieving up to a 300% improvement on a single device and linear scalability on a node equipped with four GPUs. In a distributed memory environment, the framework offers portable abstractions that enable efficient inter-node communication among devices with varying capabilities. It delivers performance superior to MPI+CUDA by up to 20% for large messages while keeping the overhead for small messages within 10%. Furthermore, a performance evaluation on a distributed Jacobi proxy application demonstrates that our software imposes minimal overhead and achieves a performance improvement of up to 40%. This is accomplished through optimizations at the library level as well as by creating opportunities to leverage application-specific optimizations such as over-decomposition.
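The abstract mentions over-decomposition, i.e. splitting the domain into more tasks than there are devices so that a runtime can balance load and overlap communication with computation. The minimal sketch below illustrates the decomposition side of that idea on a 1-D Jacobi sweep; it is not the paper's API. The function name, the serial chunk loop, and the chunk count are illustrative assumptions; a runtime such as the one described would dispatch the per-chunk tasks to different devices instead of processing them in order.

```python
import numpy as np

def jacobi_overdecomposed(u, n_steps, n_chunks=8):
    """1-D Jacobi relaxation, over-decomposed into n_chunks tasks per sweep.

    Illustrative only: each (lo, hi) chunk is an independent task reading
    the previous iterate, so a heterogeneous runtime could schedule the
    chunks on different GPUs and overlap halo exchange with compute.
    Here they are simply processed serially to show the decomposition.
    """
    u = u.copy()
    n = len(u)
    # Chunk boundaries over the interior points 1 .. n-2.
    bounds = np.linspace(1, n - 1, n_chunks + 1, dtype=int)
    for _ in range(n_steps):
        nxt = u.copy()
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            # Jacobi update for this chunk: average of left/right neighbors.
            nxt[lo:hi] = 0.5 * (u[lo - 1:hi - 1] + u[lo + 1:hi + 1])
        u = nxt
    return u
```

Because every chunk reads only the previous iterate, the result is bitwise identical regardless of the chunk count, which is what makes the tasks safe to schedule independently.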

* * *


HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
